Podcast

80: Eric Proegler: Performance Testing in New Contexts

By Test Guild

Performance Testing in New Contexts

Virtualization, cloud deployments, and cloud-based tools have challenged and changed performance testing practices. Today’s performance tester can summon tens of thousands of virtual users from the cloud in a few minutes, at a cost far lower than the expensive on-premise installations of yesteryear. These newer environments can also make it harder to understand our performance results and to puzzle out the essential message in each.

In this episode, Eric discusses strategies for engaging with these new contexts and performance testing them effectively. He also explains how to better interpret performance test results and create better reports.

Download MP3

About Eric


Eric Proegler has worked in testing for 15 years, and in performance and reliability testing for 13. He works for Mentora Group from his home in Mountain View, California.


Eric designs and conducts experiments that use synthetic load to help assess and reduce risk, validate architecture and engineering, and evaluate compliance with non-functional system requirements. He tests in a wide variety of contexts, using tools appropriate to the job at hand.


Eric is an organizer for WOPR, the Workshop on Performance and Reliability, and a Community Advisory Board member for STPCon. He’s presented and facilitated at Agile 2015, CAST, WOPR, PNSQC, STPCon, and STiFS.

Quotes & Insights from this Test Talk

  • My opinion about a performance test project is that there are a couple of phases to it: coming up with a plausible simulation is a big part of it, and then coding that is another part. When you actually interpret what you learn from your experiment and then turn that into something actionable, I think that's a really key part of the process. In performance testing work we talk about tool jockeys… about people who can do scripting but don't understand what it means… can't put it in the context of something that the business could make a decision on. That's taking an interesting engineering exercise and turning it into something that has an impact on what the business does.
  • Network bandwidth is a little more key, I think, when you're talking about a distributed user base. The example we always use in troubleshooting here is the person who's connecting via their phone over Panera wifi at lunchtime, where network conditions are not fantastic. So paying attention to what the load is on the client end, and then also coming into your data center, is key.
  • On the whole idea of reporting, people talk about graphs, but any report that I give to an executive is going to have 2, maybe 3 graphs that have 2 or 3 data points on them, and then have a paragraph. I think I got this from Mike Kelly, I don't know where he got it from: every graph deserves a paragraph. There's a paragraph under the graph that says what you're looking at and what it means. I don't use the graphs to tell a story so much as to support the narrative that the rest of my analysis has generated. You can't send somebody 50 graphs like the default tool report button will give you and expect them to understand what occurred. You have to explain it.
  • It's a human psychology thing, I'm sure, but if I send somebody 20 graphs, that's less useful to them than if I send 2, because when I send them 20 I don't tell them what they need to focus on.
  • One of the important things is not jumping to a conclusion. That's really easy to do when you've been performance testing for a while: to look at results and then spit out a root cause.
  • I think the best way to improve your performance testing efforts is to take a step back and find somebody to take a look at what you're doing. These projects turn into multi-week, multi-month, multi-year efforts where you go down a rabbit hole, trying to build a certain test that may not be relevant to what your stakeholders want and may not tell you what you think it's telling you. In testing it's really important to be able to focus and de-focus, but when I look at where I really wasted time, or the mistakes I really wish I could have back, it was spending 5% on planning and then a lot of time on executing, only to figure out that I didn't spend enough time on planning.

Resources

Check out Eric's excellent OreDev 2015 Presentations:

  • Interpreting and Reporting Performance Test Results

  • Performance Testing in New Contexts: Virtualization and Public Cloud

Connect with Eric

May I Ask You For a Favor?

Thanks again for listening to the show. If it has helped you in any way, shape or form, please share it using the social media buttons you see on the page.

Additionally, reviews for the podcast on iTunes are extremely helpful and greatly appreciated! They do matter in the rankings of the show and I read each and every one of them.


Test Talks is sponsored by the fantastic folks at Sauce Labs. Try it for free today!

