Mabl: Sneak Peek at New AI Test Tool with Dan Belcher
Everywhere you look today, folks are talking about artificial intelligence. Will AI be the next big thing in test automation? In this episode we get a sneak peek at a brand-new, AI-based test tool, mabl, which was founded by a group of ex-Google employees. Today we'll be test talking with mabl co-founder Dan Belcher as he shares his views on AI/machine-learning-based automation and previews some of the AI-based features in the soon-to-be-released tool.
About Dan Belcher
Dan is a product person focused on building tools that make software companies more productive. Currently, he is a co-founder at mabl, a company that is making software testing simple using machine learning. Dan was formerly the product lead for Stackdriver within Google Cloud. He joined Google via its acquisition of Stackdriver, a company he also co-founded in 2012. Prior to Stackdriver, Dan served in various roles at Sonian (acquired by Barracuda), VMware, and Microsoft.
Quotes & Insights from this Test Talk
- From an automation perspective, I think it's really going to be about enabling people who have generally been testing that things work to start looking at quality with a different definition: ease of use, performance, the things that affect user satisfaction.
- We've really been on a journey to make DevOps easier for the ten years that preceded Google. We had started another company called Stackdriver, which was about giving operations people the right visibility into their systems and infrastructure. Then Google acquired Stackdriver, and while we were at Google we saw that QA people were feeling a lot of pain as the pace of development picked up out in the market. And of course, you can't be at Google without getting excited about machine learning, so the opportunity to use machine learning to help solve that problem for QA was too great to pass up.
- One thing we've learned since starting the company is that we're really not out to build an ML solution. It's really about making QA easier. Part of that is using machine learning, part of that is using heuristics, and part of that is using other types of analysis, and we bring them together to provide the right solution. An example on the machine-learning side: a lot of our customers have very simple problems, like wanting to understand when the behavior of their application changes between tests. As a user it's really easy to see when something changes visually, or to look at a chart of load times for a page and spot a difference in load time, but we think machines are a lot better at that. So we use machine-learning algorithms, as an example, to identify visual differences in the behavior of an application and also to identify anomalies in timing and other factors.
- When we started the company, before we wrote a line of code, we surveyed over 100 companies to understand their pain around testing, and the ability to create and maintain test automation, and to find test automation engineers, was clearly the number one pain point. I guess you'd expect that, because as teams move to continuous integration and continuous delivery, merging changes into their product on an hourly, daily, or weekly basis, development is moving at such a pace that QA really risks becoming the bottleneck without a better approach.
- Selenium is the most broadly used framework, certainly among the users we're working with, and the challenge with Selenium for them is that it's pretty rigidly tied to the specific elements on the front end that you're trying to test and the attributes of those elements. So flakiness can often arise when you make what seems like a pretty innocent change to a UI; it could be as simple as changing an ID for an element or putting it in a different div on the front end. And unfortunately, many automated tests end up unable to find those elements. One of the things we did at the very beginning at mabl is develop a much smarter way of referring to front-end elements in our test automation, so that those types of changes don't actually break your tests. mabl understands the difference between what might be a very small change to an element and a new feature or a broken front end.
- The one thing I'd offer is that as we spend time with hundreds of teams building cutting-edge products, what we see is that on the teams performing at a really high level, the QA side of the house is adopting a more product- and user-focused approach. They think about the user journey and how to ensure the right amount of coverage for the journeys their users are actually exercising, so they go and find the right data from tools like Google Analytics, Mixpanel, and logs to say: these are the journeys that matter most to our users, so let's make sure we have really strong coverage across those, regardless of how quickly the product is changing. What that may mean is that some small changes don't get the same coverage, and that the QA team is more focused on testing the product as it's used by their users than on trying to cover the code the developers are checking in.
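As a rough illustration of the timing-anomaly detection Dan mentions (this is a minimal sketch, not mabl's actual algorithm; the function name, data, and threshold are hypothetical), a simple z-score check over page load times might look like:

```python
import statistics

def find_timing_anomalies(load_times_ms, threshold=2.5):
    """Flag load times that deviate from the mean by more than
    `threshold` sample standard deviations (a simple z-score check)."""
    mean = statistics.mean(load_times_ms)
    stdev = statistics.stdev(load_times_ms)
    if stdev == 0:
        return []  # all timings identical, nothing anomalous
    return [t for t in load_times_ms if abs(t - mean) / stdev > threshold]

# One slow outlier among otherwise stable page loads stands out clearly.
times = [120, 130, 125, 118, 122, 127, 950, 121, 124, 119]
print(find_timing_anomalies(times))  # → [950]
```

A real system would likely compare distributions across many test runs rather than within one run, but the idea is the same: let the machine spot the deviation a human would have to eyeball on a chart.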
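The locator brittleness Dan describes for Selenium can be sketched without a browser: instead of matching a single attribute (such as `id`) exactly, score candidate elements across several attributes and accept the best match above a threshold, so one renamed attribute doesn't break the test. This is an illustrative sketch of the general idea, not mabl's algorithm; the element dictionaries, weights, and threshold are all hypothetical.

```python
def best_match(target, candidates, threshold=0.5):
    """Score each candidate element against the recorded target across
    several attributes, so a single changed attribute (e.g. a new id)
    does not break the match the way an exact id lookup would."""
    weights = {"id": 0.4, "tag": 0.2, "text": 0.3, "class": 0.1}  # hypothetical

    def score(el):
        return sum(w for attr, w in weights.items()
                   if el.get(attr) and el.get(attr) == target.get(attr))

    best = max(candidates, key=score)
    return best if score(best) >= threshold else None

# The "Checkout" button's id changed, but tag, text, and class still match
# (score 0.6), so the element is still found despite the renamed id.
recorded = {"id": "btn-7", "tag": "button", "text": "Checkout", "class": "primary"}
current = [
    {"id": "nav-1", "tag": "a", "text": "Home", "class": "nav"},
    {"id": "btn-42", "tag": "button", "text": "Checkout", "class": "primary"},
]
print(best_match(recorded, current))
```

A plain Selenium `find_element(By.ID, "btn-7")` would fail outright after the rename; the scoring approach degrades gracefully instead, while a genuinely missing element (nothing scoring above the threshold) still returns `None` and can fail the test.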
Connect with Dan Belcher
May I Ask You For a Favor?
Thanks again for listening to the show. If it has helped you in any way, shape or form, please share it using the social media buttons you see on the page.
Additionally, reviews for the podcast on iTunes are extremely helpful and greatly appreciated! They do matter in the rankings of the show and I read each and every one of them.
Test Talks is sponsored by the fantastic folks at Sauce Labs. Try it for free today!