Automation Delivery Channels with Maciek Konkolowicz
Today we’ll be test talking with Maciek Konkolowicz about automated delivery channels. In this episode you’ll discover what an automated delivery channel actually is, as well as what bottlenecks to avoid, why refactoring is not always the right choice, how to get a quicker feedback loop, how to set up consistent environments and much, much more. Listen up and discover how to take your automated test execution to the next level.
About Maciek Konkolowicz
Maciek has been a quality champion his entire professional life. After graduating from university he found himself working in IT process auditing, which introduced him to the need for quality processes. After a brief stint in the process area, he decided to become more technically focused and found his QA passion. As many of his QA compatriots have done, he started with manual testing, but quickly became interested in automation. For the past seven years, he's been focused on learning, implementing, showing and spreading the automation butter to whoever he can corner, be it Dev, QA, BA, or even Project Managers. He's a passionate technologist who loves to externalize his thoughts to gain the perspectives of others. He tries to document his lessons learned at http://macie-j.me and his bad jokes on Twitter. He has spoken at local meetups and conferences and loves to share his passion for the quality crusade.
Quotes & Insights from this Test Talk
- So it's kind of like DevOps, except from my perspective it's a little more specific than DevOps: it's DevOps for automated tests, and I like to think of it as a test delivery pipeline. I came up with this delivery channel concept from my own experience, because I noticed that a lot of my team members and a lot of my friends who write automation spend a lot of time up front designing tests and making them really solid. Then, after running a bunch of tests on their own machine, they ask the question: how do I scale this to an environment where I don't need to be the executor of the tests? So I started thinking about how to take that concept and provide an answer to that question, and that's where the DevOps experience brought me to this idea: if we can ship software on a continuous integration/continuous delivery type of pipeline, why can't we incorporate tests into that? Actually, in DevOps that's already a big focus, but traditionally the focus is on unit tests and really quick-running tests. In a DevOps pipeline the product is the actual application being pushed through the stages; in an automation delivery channel, the products are the tests being pushed through the channel, and the end result is the feedback you receive as a tester about how well your tests executed. So that's what an automation delivery channel is to me: it's like a test delivery pipeline.
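The idea above can be sketched in a few lines of Python: the test suite is the "product" pushed through a series of stages, and the output of the channel is feedback for the tester. This is only an illustrative model, not anything from the episode; the stage names and runner functions are hypothetical.

```python
# Minimal sketch of a test delivery pipeline: the "product" is the test
# suite, each stage runs it, and the end result is feedback for the tester.
# Stage names and runners are illustrative placeholders.

from dataclasses import dataclass, field


@dataclass
class StageResult:
    stage: str
    passed: int
    failed: int


@dataclass
class Pipeline:
    stages: list = field(default_factory=list)

    def add_stage(self, name, runner):
        # `runner` is any callable returning (passed, failed) counts.
        self.stages.append((name, runner))

    def run(self):
        feedback = []
        for name, runner in self.stages:
            passed, failed = runner()
            feedback.append(StageResult(name, passed, failed))
            if failed:  # stop the channel on failure, like a CI pipeline
                break
        return feedback


# Usage: fake runners standing in for real test-suite commands.
pipe = Pipeline()
pipe.add_stage("integration", lambda: (12, 0))
pipe.add_stage("ui-smoke", lambda: (3, 1))
results = pipe.run()
```

The key design point from the quote is that the channel's output is not a deployed application but the per-stage feedback itself.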
- Depending on how your team does things and how sensitive you are to the feedback loop, that will really drive how you design your test automation pipeline. I've seen situations where teams like to run only unit tests as part of their application delivery process, while some teams run unit tests plus a subset of integration tests, and some teams even run unit tests, integration tests and a small subset of UI tests before they ship. The frequency of those tests also differs. In most situations I see the unit tests run on a CI (continuous integration/delivery) type of strategy, whereas the integration tests and the automated UI tests run on their own frequency, where they provide feedback that is a little separated from the actual application development pipeline. So I think of it as two pipelines: one that provides immediate feedback from unit tests as applications are deploying, and one that runs tests and provides feedback on something like a nightly basis. So we're talking about the channel that delivers nightly automated runs, and by nightly I mean it could be once per build for some teams, or once a week for others; it depends on what your team requires. The whole point is that there's a very stable, uniform way to run your tests, whatever they may be. From my perspective, the best tests to put into this pipeline would be a subset of each layer of the testing pyramid: a smaller amount of UI tests, a bigger amount of integration tests, and you could probably even leave out the unit tests, or you could run them as well. On my team we leave the unit tests with the CI/CD pipeline that ships the actual product, and then we have a separate pipeline to run the tests. That gives us two ways of looking at quality: one from a unit test perspective, and one from a compiled, more user-facing perspective of integration and automated UI tests.
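Picking "a subset of each layer of the testing pyramid" for the nightly channel can be sketched as a simple per-layer budget: many integration tests, few UI tests, and unit tests excluded because they ship with the application's own CI/CD pipeline. The suite contents, tags and budget numbers below are invented for illustration.

```python
# Sketch of selecting a pyramid-shaped subset for the nightly channel.
# Test names, layer tags and budget sizes are hypothetical examples.

SUITE = [
    {"name": "login_ui", "layer": "ui"},
    {"name": "checkout_ui", "layer": "ui"},
    {"name": "api_orders", "layer": "integration"},
    {"name": "api_users", "layer": "integration"},
    {"name": "api_stock", "layer": "integration"},
    {"name": "parse_price", "layer": "unit"},
]

# Per-layer budget: a bigger amount of integration tests, a smaller UI
# subset; unit tests are absent because they run with the app pipeline.
BUDGET = {"integration": 3, "ui": 1}


def nightly_subset(suite, budget):
    """Take tests from each layer until that layer's budget is used up."""
    taken = {layer: 0 for layer in budget}
    picked = []
    for test in suite:
        layer = test["layer"]
        if layer in budget and taken[layer] < budget[layer]:
            picked.append(test["name"])
            taken[layer] += 1
    return picked
```

In practice a test runner's tagging mechanism would do this selection, but the shape of the decision is the same: budget per pyramid layer, heavier toward the bottom.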
- The main driver behind this type of delivery channel for me, for automation specifically, was stability. I noticed that a lot of my teammates were receiving inconsistent results from their tests: sometimes they'd pass, sometimes they wouldn't. Obviously we do all we can to stabilize our tests. We never do hard pauses, we never make assumptions, and we really try to think through what can go wrong in an automated test run, but we never seem to be able to catch 100 percent of the environmental differences from one environment to another. For example, if you were writing tests and I was writing tests, and I wanted to run your tests on my machine, sometimes your tests would not work because of the way my machine was set up. With the concept of a uniform, stable automation delivery channel, there is one environment that we're building for, and that environment is the same for you as it is for me as it is for our other team members. What that gives you is almost like an exit criteria. In agile we talk about concepts such as the definition of done, and on a team I was working with a couple of years ago, we defined the definition of done to encompass this idea of running tests in the delivery channel, specifically on the virtualized lab that is part of that channel. So when we designed our tests, we started by developing them locally and running them there, checking the tests in to our source control, running one or two tests in the delivery channel, and then finally putting those tests into the nightly run. Only when those tests passed did we say that we were done developing that test.
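The definition-of-done workflow in that quote is a strict sequence of gates. As a toy sketch (the step wording is paraphrased from the quote, not an official checklist), it can be modeled as an ordered list with a function that tells you the next gate for a test:

```python
# Sketch of the definition-of-done gates for a new automated test.
# Each step must pass before moving to the next; wording is paraphrased.

DOD_STEPS = [
    "develop and run the test locally",
    "check the test in to source control",
    "run it once or twice in the delivery channel",
    "put it into the nightly run and see it pass",
]


def advance(current, steps=DOD_STEPS):
    """Return the step after `current`, or None when the test is done."""
    i = steps.index(current)
    return steps[i + 1] if i + 1 < len(steps) else None
```

The point of encoding it at all is the exit criteria: a test is not "done" at local green, only after it has passed in the shared channel's nightly run.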
- It gave us a really cool benchmark to know when our tests are done, or when they're not done. It provided a lot of consistency, a really high percentage of passing tests and a really low percentage of false negatives. The channel really stabilized our test effort.
- I think generally, because we get so excited about writing our test scripts and discovering new things, we don't think about executing them right away in an independent environment. So the one piece of actionable advice is: avoid wasting your time turning your machine into a test executor, and build your delivery channel as early as you can.
- AutomationGuild: an online conference dedicated to automated testing
- PerfGuild: an online conference dedicated to performance testing
Connect with Maciek Konkolowicz
May I Ask You For a Favor?
Thanks again for listening to the show. If it has helped you in any way, shape or form, please share it using the social media buttons you see on the page.
Additionally, reviews for the podcast on iTunes are extremely helpful and greatly appreciated! They do matter in the rankings of the show and I read each and every one of them.
Test Talks is sponsored by the fantastic folks at Sauce Labs. Try it for free today!