Terrible Testing

Projects can fail for many reasons. Some fail because they test the wrong things. Others fail because they test too much. In this episode Todd H. Gardner, an enterprise consultant and founder of TrackJS, shares testing atrocities (AKA terrible testing) from his many years of development experience, and what we can learn from them. You’ll come away questioning your own testing. So let’s forget about our long-held testing dogmas and start doing a better job of testing the right things in our software.


About Todd



Todd H. Gardner is the president and co-founder of TrackJS (trackjs.com) and an independent software consultant putting software and businesses on the web. With over a decade of experience in software development, Todd has built large enterprise systems and complex software products, and has launched companies.

Quotes & Insights from this Test Talk

  • The question of, “Are you testing the right thing?” I don't think is asked often enough, because it's really hard. It's probably one of the hardest problems in software development to answer, because the fundamental question inside of it is, “What is the risk of this thing that we're doing?” and that's a really contextual question about what your project is. There are all different kinds of risk that come in with software development, because this is an incredibly creative and complex activity that we're doing, but we're also working in an incredibly complex market, or niche, that you're operating in, so there are all different things that can go wrong at so many levels. You have to think really critically about that.
  • The Testing Pyramid is misleading. I think that it is a flawed model for two reasons. The first is that it misses this whole thing that we've been talking about: market risk. About whether or not the project itself is a good idea, and how do you test that? Which is really operating at a level above system tests. The second problem that I have with the model is that it's implying volume. It's saying you should have more unit tests than integration tests, because unit tests are cheap to write, and I think that that's missing the point. It's not how cheap or expensive it is to write and maintain. It's what kind of risks are being addressed by those sorts of tests.
  • You can never test everything. It doesn't make financial sense to test everything. There's always some parts of your system that you can't adequately test for, nor should you, because it's just not financially worth it. That's where monitoring comes in.
  • TrackJS fits firmly into that monitoring phase of testing. What TrackJS is, is a JavaScript error monitoring service. Say you're building a web application using any of the hot frameworks right now, Angular, Ember, React, or just native JavaScript, whatever. When you build that application, it's a little different than building a server-side application, because you're taking code and you're shipping it out to your client. Your client is a web browser, and your application is running in their web browser. There are tons of different web browsers that all have their different quirks, and the users themselves can inject all kinds of toolbars and extensions into them that can change how your app runs, so it's really no surprise that JavaScript applications tend to break a lot in ways that the developers don't always predict.
  • The overarching point of my Talk and my message is that it depends. I'm not saying, “Hey, Todd said you don't need to test! Go ahead and push it into production – it's fine!” That's totally not what I'm saying. What I'm saying is, that you need to think critically about your application and your tolerance for risk, and build a testing structure and a monitoring structure that accommodates what you're willing to do as cheaply as possible.
  • The best advice I can give around testing JavaScript applications is to separate your code into the parts that are orchestrating events from the parts that are actually doing real logic. When you mix the event callbacks and interactions with the DOM into the actual logic that is manipulating data structures and communicating with the server, you get into these nasty callback messes that are really hard to test, and it becomes really hard to write targeted tests around the parts that can actually break. Think about the architecture of your front-end code so that the parts that can break are easy to test, and don't worry about testing the parts that won't break.
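The separation Todd describes can be sketched in a few lines. This is a hypothetical example (the function names `filterRecentErrors` and `wireRefreshButton` are made up for illustration, not taken from TrackJS): the business logic is a pure function with no DOM access, and the event wiring is a thin orchestration layer around it.

```javascript
// Pure logic: no DOM, no callbacks. This is the part that can actually
// break, and it is trivially unit-testable in isolation.
function filterRecentErrors(errors, cutoffMs, now = Date.now()) {
  return errors
    .filter((e) => now - e.timestamp <= cutoffMs)
    .map((e) => `${e.message} (${e.browser})`);
}

// Orchestration: wires a browser event to the logic. It is kept so thin
// that there is little here that can break, and little worth unit testing.
function wireRefreshButton(button, listEl, loadErrors) {
  button.addEventListener("click", async () => {
    const errors = await loadErrors();
    listEl.textContent = filterRecentErrors(errors, 60 * 60 * 1000).join("\n");
  });
}
```

Because `filterRecentErrors` takes plain data in and returns plain data out, its tests need no browser, no DOM stubs, and no async machinery, while the fragile browser-specific wiring stays small enough to cover with a few integration tests or, per the episode's theme, with monitoring.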


Connect with Todd

May I Ask You For a Favor?

Thanks again for listening to the show. If it has helped you in any way, shape or form, please share it using the social media buttons you see on the page.

Additionally, reviews for the podcast on iTunes are extremely helpful and greatly appreciated! They do matter in the rankings of the show and I read each and every one of them.


Test Talks is sponsored by the fantastic folks at Sauce Labs. Try it for free today!


ernie - May 15, 2016

That tweet box (with the quotation “Any sufficiently advanced testing system is indistinguishable from monitoring” attributed to Gardner) is a bit odd, especially as Gardner specifically states during the podcast, “I’m just going to repeat it anyway and say that it wasn’t me, but it’s really clever.”

The quote intrigued me, so I did a bit of research this morning, and it looks like it goes back to a 2007 presentation at GTAC by Ed Keyes.
