Podcast

202: AI for User Flow Verification with Jason Arbon

By Test Guild

The most significant challenge for an app team in today’s Agile, Continuous Integration (CI), and DevOps-driven world is just making sure the app still works like it did yesterday. There isn’t enough time to manually test each build, let alone craft test automation. Even when teams do try to test, it adds days of delay and thousands of dollars of cost, and yet the team is still not happy with the results. In this episode, Jason Arbon explains how Artificial Intelligence (AI) and Machine Learning can solve this problem, and many others, at scale. 

About Jason Arbon



Jason Arbon is the CEO of test.ai which is redefining how enterprises develop, test, and ship mobile apps with zero code and zero setup required. He was formerly the director of engineering and product at Applause.com/uTest.com, where he led product strategy to deliver crowdsourced testing via more than 250,000 community members and created the app store data analytics service. Jason previously held engineering leadership roles at Google and Microsoft and coauthored How Google Tests Software and App Quality: Secrets for Agile App Teams.

Quotes & Insights from this Test Talk

  • AI Summit Guild is going to be a one-day conference that brings together both vendors and real-world, hands-on practitioners who are already experimenting with AI in testing. They'll give you the straight dope on what's going down with AI–the pros and cons, what it is and what it isn't, and what we see going forward with automation–to get you up to speed so that you're not disrupted. I'm also working in conjunction with the Artificial Intelligence for Software Automation Association to bring you this first-of-its-kind summit, which means our speakers will be discussing some of the newest AI testing tools out there.
  • I realized a few years ago that AI is actually the one interesting transformative technology for testing, specifically because the very process of how we train machines to drive cars, or to recognize whether an image is a cat or a dog, is fundamentally a testing problem. You apply some inputs to a system–which is the neural network, in this case–you see what the output is, and you check it against your truth set. If the machine got it wrong, you say, "Dumb robot, try again." If it gets it right, you say, "Good robot." So it remembers the good things and forgets the bad things. And that's a trained AI. So fundamentally, I realized that the actual process of training AI is really a testing job.
  • It's a beautiful mapping onto the testing problem. If we build on AI, we leverage all the other research that goes on in the world–at Google, the DeepMind projects, at Berkeley and Stanford. Every one of these postdocs and engineers is basically building testing infrastructure at scale. And all we need to do is apply that to our particular testing or product problem, because basically they're teaching these things to be great testers and to do it automatically.
  • Most people in testing, and still most engineers, aren't actually aware of where the technology is. AI is just a broad term, and there are two high-level branches. One is artificial general intelligence (AGI), which is like Ultron passing the Turing test–a robot that will do your dishes and have conversations with you. The other branch is just machine learning. Machine learning is based on statistical inference, but the idea is you just train it. When I talk about AI and machine learning in testing, I'm really talking about the machine learning part.
  • Machine learning is really just a bunch of algorithms. You can use TensorFlow off the shelf, you can use Python. These are just function calls: you pass arrays of data to them, you say "train," and it either tells you, "Hey, I was able to figure it out," or it says, "I wasn't able to figure out how to emulate that input-output function." That's really what machine learning is.
  • So really, think of the test job of the future as what we now look at as a test lead or manager: someone who synthesizes all that data and reports, "This is the state of quality for the business."
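To make the train-check-correct loop Jason describes concrete, here is a minimal sketch in plain Python (no TensorFlow; the library calls he mentions wrap the same idea). It trains a single perceptron to emulate a tiny input-output "truth set"–the logical AND function–by applying inputs, checking the output against the expected result, and nudging the weights when the "robot" gets it wrong. All names here are illustrative, not from the episode.

```python
# A minimal "training is testing" loop: apply inputs, check outputs
# against a truth set, and correct the model when it is wrong.

# Truth set: inputs and expected outputs for logical AND.
truth_set = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

weights = [0.0, 0.0]  # one weight per input
bias = 0.0
learning_rate = 0.1

def predict(x):
    """Step-function perceptron: fires 1 if the weighted sum is positive."""
    total = weights[0] * x[0] + weights[1] * x[1] + bias
    return 1 if total > 0 else 0

# Training loop: each pass is a test run over the truth set.
for epoch in range(20):
    errors = 0
    for inputs, expected in truth_set:
        actual = predict(inputs)
        error = expected - actual
        if error != 0:
            # "Dumb robot, try again": adjust toward the expected output.
            weights[0] += learning_rate * error * inputs[0]
            weights[1] += learning_rate * error * inputs[1]
            bias += learning_rate * error
            errors += 1
    if errors == 0:
        # "Good robot": every test in the truth set now passes.
        break

print([predict(x) for x, _ in truth_set])  # should match the expected outputs
```

The point of the sketch is the shape of the loop, not the perceptron itself: the training procedure is literally running tests (inputs against expected outputs) and reacting to failures, which is why ML research doubles as testing infrastructure at scale.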

Connect with Jason Arbon

May I Ask You For a Favor?

Thanks again for listening to the show. If it has helped you in any way, shape or form, please share it using the social media buttons you see on the page.

Additionally, reviews for the podcast on iTunes are extremely helpful and greatly appreciated! They do matter in the rankings of the show and I read each and every one of them.


Test Talks is sponsored by the fantastic folks at Sauce Labs. Try it for free today!

{"email":"Email address invalid","url":"Website address invalid","required":"Required field missing"}
