Eight Burning Questions for Three Test Automation Leaders

Evan Edwards

Vice President of Engineering

Evan Edwards is the Vice President of Engineering for Tryon Solutions and oversees the development of Cycle, our behavior-driven test automation solution that allows all personnel to join the testing fold and helps implementation teams to deploy with confidence. He ensures that Cycle grows in response to industry needs while remaining stable, streamlined, and easy-to-use.

Andy Knight

Software Engineer

Andy Knight is a software engineer who specializes in building test automation systems from the ground up, which involves both software development for test code as well as the infrastructure to run it in continuous integration. He regularly speaks at testing conferences and writes a well-known test automation blog at automationpanda.com.

Paul Merrill

Principal Software Engineer in Test

Paul Merrill is Principal Software Engineer in Test and founder of Beaufort Fairmont Automated Testing Services. A frequent writer on testing and test automation in top-notch testing publications and an international speaker at testing conferences, Paul works with clients every day to accelerate testing, mitigate risks, and increase the value of testing processes with test automation. An entrepreneur, tester, and software engineer, Paul has a unique perspective on launching and maintaining quality products. Learn more about test automation at http://beaufortfairmont.com/webinars.

We asked three of today's top test automation leaders about problems, strategies, and solutions in testing. Read on to find out what they had to say!

What’s the most common pitfall that you see testing teams stumble into, and how can they help avoid it?

Evan Edwards:  Testing the wrong thing.  You have to talk to people and communicate and make sure you are trying to test what you actually need to test.

Andy Knight:  I'll answer this question specifically for testing teams - assuming that the organization has a dedicated team for doing manual and automated testing. The most common pitfall into which I see these types of testing teams stumble is the lack of attention given to the stack that supports test automation. Most test teams focus on test cases and high-level test code. They often don't think about the framework, the packages, and the infrastructure. To be honest, many people on testing teams also lack the expertise to truly scale a testing solution. Hacking onto the framework tends to be more popular than designing new features into it. The best ways to solve this problem are (a) recognize that automation is not "just test scripts" and requires full software development practices, (b) allocate time in planning for automation solution development and maintenance because it's not a "free time" activity, and (c) bring framework and infrastructure expertise to the team through training or recruitment.

Paul Merrill:  Trying to "test everything" instead of identifying the risks they need to mitigate. Use a typical testing methodology to assess risk and identify what to test. 

 

What is the best way for a testing team to show ROI for test automation?

Evan:  You are a part of a larger team and a larger effort most of the time.  Focus on meeting the overall objective, which in turn should support whatever business case you’ve drawn up.

Andy:  ROI is tough to truly measure. I even wrote an article about it on my blog. Here are ways I seek to measure ROI for automated tests:

  • Feature priority - does the test cover important things?
  • Test execution frequency - how often is the test actually run?
  • Coverage uniqueness - does the test not duplicate existing coverage?
  • Cost of ownership - how much time and money do we spend to keep this test running?
  • Bug discovery - how soon are bugs discovered, and by what severities?

Raw bug counts, however, are a terrible metric for ROI. Consider this question: Is a high bug count good or bad? Trick question – during a release, it indicates good test quality but poor product quality; after a release, it indicates all-around poor quality. What matters is that as few bugs as possible happen at all, and that most of those bugs are caught and fixed before a release. Plus, keep in mind that bugs happen by accident. Finally, focusing exclusively on bug count to determine test value ignores the positive side of testing – that passing tests give confidence that features work correctly.
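As a purely illustrative aside (not something from the interview), the factors Andy lists could be folded into a rough per-test value score. Every field name, weight, and threshold below is a hypothetical choice, just to show how the metrics might combine:

```python
from dataclasses import dataclass

@dataclass
class TestRecord:
    # Illustrative stand-ins for the ROI factors listed above.
    feature_priority: float   # 0..1: how important is the covered feature?
    runs_per_week: int        # execution frequency
    unique_coverage: float    # 0..1: share of coverage not duplicated elsewhere
    weekly_cost_hours: float  # time spent keeping the test running

def value_score(t: TestRecord) -> float:
    """Rough heuristic: benefit of priority, uniqueness, and frequency
    divided by the cost of ownership. Frequency is capped so a test
    cannot buy value by running constantly."""
    benefit = t.feature_priority * t.unique_coverage * min(t.runs_per_week, 50)
    return benefit / (1.0 + t.weekly_cost_hours)

smoke = TestRecord(feature_priority=0.9, runs_per_week=35,
                   unique_coverage=0.8, weekly_cost_hours=0.5)
flaky = TestRecord(feature_priority=0.3, runs_per_week=2,
                   unique_coverage=0.2, weekly_cost_hours=4.0)
# The cheap, frequent, unique test scores higher than the costly duplicate one.
print(value_score(smoke) > value_score(flaky))
```

The exact formula matters far less than the habit of weighing benefit against cost of ownership per test rather than counting bugs.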

 

This certainly can be project dependent, but generally speaking, what are the most important test management metrics?

Evan:  Some combination of “coverage” and “regressions” isn’t a bad place to start.  So, how much of the thing you want to validate are you actually testing, and how often does something that was working before stop working?

Andy:  Please see my articles on test quality metrics and process quality metrics. In short, make sure tests maximize ROI. Make sure they have high, unique coverage and run frequently to give feedback as fast as possible.

 

What is your favorite method to help translate a user story into an end-to-end test case?

Evan:  I’m old school.  Though it’s not modern and edgy I really enjoy walking out to the supervisor responsible for the job function and having an informal chat.

Andy:  No doubt: Example Mapping is the best way I've found to derive behaviors and tests from stories. Once rules, examples, and questions are mapped, writing Gherkin scenarios for the test cases themselves is a natural extension.
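For readers unfamiliar with the workflow Andy describes, here is a small sketch of how one rule and example from an Example Mapping session might become a Gherkin scenario. The store, rule, and steps are invented for illustration:

```gherkin
# Rule from the mapping session: free shipping applies to orders over $50
Feature: Checkout shipping cost

  Scenario: Order above the free-shipping threshold
    Given a cart containing items totaling $62.00
    When the customer proceeds to checkout
    Then the shipping cost is $0.00
```

Each example card from the mapping session typically becomes one scenario like this, and open question cards become follow-ups for the product owner.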

 

How should a testing team determine how often they execute automated tests?

Evan:  How long do your tests take to execute and how often are updated results helpful to your team members?

Andy:  Tests should run continuously! Test automation without continuous integration is dead. Tests that don't run don't provide value - they just provide debt. Teams should focus on setting up CI systems to run tests quickly and reliably. If test suites take a long time to run, teams should set up parallel test execution to reduce the start-to-end testing time. Teams could also run smaller test suites continuously and larger test suites periodically (like nightly).

Paul:  Depends on the methodology they use, but many teams are moving toward DevOps with Continuous Integration. In my "ideal state" for our clients, we'd love to see test automation running continuously - some every push, some every PR, some periodically (each as is appropriate).
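As a hypothetical illustration of the cadence Andy and Paul describe - some tests on every push, the rest nightly - here is a minimal CI configuration sketch. The use of GitHub Actions, pytest, and a `smoke` marker are all assumptions for the example, not anything the interviewees endorse:

```yaml
# Sketch: smoke tests on every push; full regression suite nightly.
name: tests
on:
  push:                     # fast feedback for every change
  schedule:
    - cron: "0 2 * * *"     # full run nightly at 02:00 UTC
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: pip install -r requirements.txt
      - name: Smoke suite (every push)
        if: github.event_name == 'push'
        run: pytest -m smoke
      - name: Full suite (nightly)
        if: github.event_name == 'schedule'
        run: pytest
```

Splitting the suites this way keeps per-commit feedback fast while the slower end-to-end coverage still runs on a predictable schedule.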

 

What is the most important consideration when a team starts their test automation journey?

Evan:  A clarified objective and really good communication.  You need to be in agreement on what your target is and at least the skeleton plan of how you intend to get there.  It’s a high-level statement and it’s true of nearly all projects of all kinds.

Andy:  A team's most important consideration should be their goal with testing and automation. What problems do they have? How can testing and automation help? What do they hope to accomplish? What can they reasonably achieve? Are they willing to make necessary changes to achieve their goals? Without a goal, the team will end up with either a lousy solution or no solution.

Paul:  "Why are we automating?" Teams that have a clear understanding of why they are moving toward automation tend to be more successful.

 

Does it ever make sense to automate acceptance tests?

Evan:  Sure.  We do it all of the time with Cycle as most of our customers are testing packaged software, but it’s naïve in most cases to assume that you can entirely skip the step where a human inspects your deliverable.

Andy:  Maybe. I've found testing words and phrases like "acceptance tests" to have overloaded meanings, so I always want to carefully define terms before using them. I define "acceptance tests" as black-box feature tests that should be run at the end of a development iteration (like a sprint in Agile Scrum) as part of the team "accepting" that the work for the ticket is "done". Acceptance tests should be automated whenever they provide ROI for testing and development and make sense to run repeatedly (continuously or periodically).

Paul:  Yes.

 

What’s your favorite testing slang term, e.g. 'Smug Report'?

Evan:  Can I have two? I like Dummy Data and I really enjoy hearing “that’s functioning as designed” from across the conference room.

Andy:  I like to "Gherkinize" my tests.

Thanks to Evan, Andy, and Paul for their input! 

This post was written by:

James Prior
Technical Pre-Sales Consultant

James has been working in software pre-sales and implementation since 2000, and has more recently settled into focusing on technical pre-sales. He takes care of our hands-on demonstrations, and eagerly awaits your request to see our Cycle test automation software in action. Drop him a line at: james.prior[at]tryonsolutions[dot]com.
