Sixteen Burning Questions for a Test Automation Leader

Javier Velasquez

Software Developer in Test

Javier Velasquez has nearly a decade of experience as a software professional. Early in his career he did software testing with QTP, and after a few years he moved into testing with Selenium and REST/SOAP RPC, building multiple integration and system testing frameworks along the way. After working in QA for a handful of years, he moved into a more collaborative role working directly with developers on Agile teams, and eventually transitioned into a role focused on back-end development and build automation. Recently he moved back into QA in an SDET role, which he jokingly describes as "coming out of QA retirement". When he's not writing code or talking about testing, you can find him gardening, jogging, and reading.
We've previously polled a few test automation leaders for their thoughts on the burning questions in the realm of test automation that keep all of us up at night.  This time we decided to focus on one test automation leader in particular, and we've poked and prodded our very patient interviewee Javier Velasquez for his invaluable thoughts and insight!


What are some common pitfalls that you see testing teams stumble into all too often?

Javier Velasquez: Some common pitfalls I have noticed are a lack of communication between developer and quality engineering stakeholders, and an artificial siloing of QA and development team members. This lack of communication often leads teams to expect QA to own all testing and quality processes instead of creating a culture that encourages collective ownership of quality by all stakeholders.


How do you define software quality?

Javier: Even though people think a component of my job is defining quality, I think quality is something that all stakeholders should agree on collectively. I believe there is a clear difference between quality, testing, and automated testing, and it is crucial for teams to acknowledge the differences among them. Testing may help promote quality deliveries, but testing is not a substitute for focusing on quality throughout the SDLC. Team expectations and the definition of done can outline behaviors and workflows that enable quality. Does the definition of done include tests? Do coworkers hold each other accountable in code reviews? Do team ceremonies like sprint planning involve in-depth discussion that irons out the finer details? How are unknowns handled?

Personally, when considering software quality, I would evaluate two components:

  1. Cognitive Overhead: How much intrinsic or germane cognitive overhead is involved in your application? How does that reflect the domain you are working in? How difficult are your releases, and why is that? What is the lifecycle of a new feature? How much risk is associated with minor changes? Does it take 3 teams and a ceremonial chicken to add a new button to a page? Does your system design abstract out complexities and make things simple for your users and developers?
  2. Stakeholder Confidence: How do team members genuinely feel about an upcoming release? Does the team understand the risks associated with their changes? How happy are your customers? 

These are the questions I would pose when defining quality, but the most vital thing to consider is that defining quality should be an intrinsically collaborative process.


What is more important, test speed or test quality?

Javier: I think focusing on the quality of tests will ultimately lead to more efficient and faster tests. Prioritizing test speed over quality can be problematic. I've even seen cases where test designs that emphasized speed skipped over parts of the test's original intent.

Focusing on speed may inadvertently harm effective test design. As the software you're testing grows, those earlier speed-focused design decisions may yield tests that do not scale well with your software.


What are some test case design techniques you use? 

Javier: I try to look at tests as specifications or contracts rather than "test cases." Each specification should abstract away complex setup, ensuring focus on isolated behavior. Each contract should be small and isolated, and each contract should be about the user or consumer and the thing being tested. This could be a workflow or something as small as an internal API. I try to think about how the customer is going to use the thing I am building or testing. As part of automated test design, I ensure that non-deterministic components are swapped out during my continuous integration processes, which promotes immediate, valuable feedback on how my software works based on codified contracts. Those non-deterministic interactions are then tested later in isolation.
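To make the idea concrete, here is a minimal sketch of a contract-style test with a non-deterministic component swapped out, using Python's standard-library `unittest` and `unittest.mock`. The `OrderPricer` class and its rate service are hypothetical examples, not something from the interview:

```python
import unittest
from unittest.mock import Mock

class OrderPricer:
    """Hypothetical unit under test: prices an order using an FX-rate service.
    The live rate service is non-deterministic, so tests replace it."""

    def __init__(self, rate_service):
        self.rate_service = rate_service

    def price_in_usd(self, amount, currency):
        if currency == "USD":
            return amount
        return round(amount * self.rate_service.rate_to_usd(currency), 2)

class OrderPricerContract(unittest.TestCase):
    """Each test states one small, isolated expectation (a 'contract')."""

    def test_converts_using_current_rate(self):
        rates = Mock()
        rates.rate_to_usd.return_value = 1.10  # deterministic stand-in
        pricer = OrderPricer(rates)
        self.assertEqual(pricer.price_in_usd(100, "EUR"), 110.0)
        rates.rate_to_usd.assert_called_once_with("EUR")

    def test_usd_amounts_pass_through_unchanged(self):
        # Contract: the rate service must not be consulted for USD.
        pricer = OrderPricer(Mock())
        self.assertEqual(pricer.price_in_usd(42, "USD"), 42)
```

Run with `python -m unittest`. Because the rate service is a deterministic stand-in, the suite can gate a CI build without flakiness; the real service integration would be verified separately.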


Do you involve cross-functional teams or assign a specific member of the testing team to design test cases?

Javier: In my experience, I've always found the most valuable interactions around test case design or specification design occur when all stakeholders are involved in discussing and committing to the expectations of how things should work. 

Your product team may have an idea of what they want, and the development staff may have different expectations given existing system constraints. Involving QA and all stakeholders upfront allows for discussions on potential challenges and how features or changes may affect other parts of the system. 

These discussions provide a forum for multiple perspectives to enable meaningful ideas to present themselves, which can potentially increase the value provided to the customer before development even begins.


Do you involve cross-functional teams or assign a specific member of the testing team to determine the level of test coverage?

Javier: I think that developers and quality engineers should proactively collaborate on coverage at all layers of the application. Developers should express confidence in the coverage they provide at the unit and integration layer, while quality engineers should assess coverage at the systems layer.


What are your favorite methods of showing test automation ROI?

Javier: I think measuring ROI is pretty hard to do, at least when it comes to testing. At a macro level, evaluating stakeholder confidence could be a factor in ROI. How do teams feel about the upcoming release? How can teams further increase stakeholder confidence?

Is there an area of an app known to break with almost all releases? Can we invest in providing automation in that part of the app to mitigate these regressions? 

As these tests provide value by preventing regressions, the ROI of testing begins to grow. Let's factor in the time spent putting out fires and the time lost for working on new features for your customer. How does this firefighting affect employee happiness, productivity, and even retention? These things are hard to measure, but they can also be pretty obvious.

The important thing here is taking the steps to incrementally improve things by having honest and transparent retrospectives and dialogues that invest in your product and your teams.


What are some of the more important features in a test automation solution?

Javier: I think this depends on what sort of test automation solution is being discussed. Is this a tool for system testing or a tool for API testing? Or even lower-level unit testing, since unit tests are technically automated tests as well? Generally speaking, I've found the tools that focus on API expressiveness, ease of use, flexibility, and documentation the most useful in my experience.


How do you review test run metrics?  Do you skim a report from the previous night’s runs, do you check aggregate metrics, etc…

Javier: When I focused on more system-level testing in the past, I would review runs by skimming reports. Recently, most of my focus involves tests that run as part of continuous integration systems. That way, test failures will arise as soon as there are regressions. I've discovered that when you focus on more deterministic types of tests, such as unit and integration tests, and have them gate the build—the collective team is able to more easily own various automated testing efforts.
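One hedged sketch of "gating the build" on deterministic tests: a CI step script where any failing suite stops the job before packaging or deploy. The `run_suite` helper and suite names below are illustrative placeholders, not a real runner:

```shell
#!/bin/sh
# Gate sketch: the job fails fast if any test suite fails.
set -e   # exit immediately when any command returns non-zero

run_suite() {
  # Stand-in for a real runner invocation, e.g. `python -m unittest`.
  name="$1"; status="$2"
  echo "running $name tests"
  return "$status"
}

run_suite unit 0          # deterministic, fast suites run first
run_suite integration 0   # hermetic integration tests next
echo "gate passed"        # only reached if every suite succeeded
```

If either suite returned non-zero, `set -e` would abort the script, the CI job would fail, and the regression would surface immediately rather than in a nightly report.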


Do you think there is genuine artificial intelligence in test automation now, or is it simply a marketing buzzword at this point?

Javier: I think it's a little bit of both. I've seen some platforms use certain types of AI, such as machine learning (ML). A few examples I have seen use machine learning for photo and image verification.


Where is the test automation space headed?  This is an intentionally open-ended question 😊

Javier: When it comes to the future of the automation space, I think there will be much more ownership of and collaboration on automated testing from developers, and an increased focus on the SDET role. I believe automated testing is becoming a component of engineering rather than a separate process.

With increased ownership from engineering, I think businesses will be able to realize the value of test automation more quickly. By that I mean automation incorporated into the build process as part of continuous integration systems, empowering the continuous delivery and deployment of software.

While this evolution of automation has already been happening for a few years, arguably decades, I think it's only going to continue to be refined over the next few years.


When interviewing a candidate for a test automation engineer position, what skills do you consider the most important?

Javier: If you had asked me this question a few years ago, I would have said an understanding of Java and object-oriented programming - at least for a position at a Java shop.

In recent times, I focus on potential teammates having skills like curiosity, intellectual honesty, desire for growth, emotional intelligence, and empathy. I've found individuals possessing these traits excel as test automation engineers and as engineers.

A lot of things about test automation can be taught. For example, it's easy to train folks on how to use a test automation framework, develop coding skills, and learn general test automation techniques.

But the traits above aren't always trainable, and they provide immeasurable value to the team. Great engineers possess curiosity, and curiosity drives exploration and experimentation. Intellectual honesty is something I view as another form of transparency. This sort of thinking encourages productive conversations around designs, decisions, potential improvements, and areas of risk; that's something you don't always get on teams where transparency isn't valued.

I think emotional intelligence (EQ) is also crucial. EQ provides test automation engineers with the ability to understand how other stakeholders approach problems. It assists with the development of stakeholder relationships and also provides greater self-awareness. Another aspect of what I look for in a test automation engineer is empathy: empathy for both customers and coworkers.


What’s the difference between a traditional QA tester and an SDET?

Javier: I think this depends on the level of experience and the organization. Arguably, some attributes of an SDET may apply to a QA tester as well. Right now QA is in a weird place. I look at job titles and roles in the QA space kind of like the DevOps and UI development worlds. In some organizations, DevOps engineers fall into the category of system administrators, while in others they fall much more under the definition of an SRE. In the UI realm, some UI engineers consider themselves more UX/CSS/what-the-user-sees focused professionals, while others focus on components, layout, and framework-level problems. Others may do a little bit of everything. The same is true for the different types of QA roles: QA Tester, Quality Engineer, Automation Engineer, or SDET.

I think the fundamental difference between a traditional QA tester and an SDET is that QA testers view testing as a user problem, while SDETs look at testing as an engineering problem.

In my experience, QA testers tend to focus on black-box testing methods, such as system, e2e, and UI testing. In some instances, I’ve seen more experienced QA testers do gray-box testing as well. While SDETs may incorporate black-box or gray-box testing strategies, they will also consider white-box testing when appropriate.

Although it is not required, SDETs often possess a background or experience in development, and they tend to take approaches that incorporate the various layers of the testing pyramid. See Martin Fowler's website and my GitHub page for more info on the testing pyramid.

SDETs may emphasize unit or integration testing over traditional UI testing. However, in other situations, they may suggest UI testing when they believe that UI automation will provide more value vs. something like traditional unit testing.  SDETs work more proactively with developers and focus on enabling them to own and actively contribute to automated testing efforts.

On the other hand, a QA tester predominantly focuses on user interactions, user-facing data models, and functional testing. SDETs may view software from the user's perspective, but they also consider the underlying software architecture and try to understand the application's unique boundaries (e.g., REST), internal data models, and how developers model the application domain. SDETs attempt to understand how data flows throughout the entire software system.

However, just to clarify, I think some of these points can also describe a seasoned QA tester as well.


How important is open source software to the testing community?

Javier: I think open source is critical to the testing community. 

The proliferation of open source has provided many profound innovations to the software and testing communities. I think the classic example of this is Selenium. Other examples would be the multitude of software platforms built with comprehensive testing solutions (such as Angular or Spring Boot).


What tool should one start with if they are interested in test automation?

Javier: I think the answer to this question depends on the person asking - on the role and ecosystem you are working in. For quality engineers, I believe Selenium is an excellent way to get started with automation. For individuals interested in delving into programming, I would highly recommend Ruby & Cucumber for both building software and automated tests (specifications). Both “The Cucumber Book” and “The RSpec Book” are fantastic reads.

For developers who already do some test automation but may want to do more, I would recommend looking into the testing techniques provided by the software system ecosystem you are working in. Most frameworks provide excellent documentation on helpful integration testing techniques. I would also research the documentation provided by your desired testing framework. You would be surprised how much functionality your testing framework provides and how much more productive and valuable it makes your tests. For you Java enthusiasts out there, I'd recommend checking out TestNG. It’s a great testing framework for both unit and integration testing. Most testing frameworks, like TestNG, provide mechanisms used for automated testing like data-driven testing, complex test and suite setup (and teardown), custom reporting, and tagging.
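As a standard-library analogue of the framework features named above (TestNG itself is a Java library), here is a hedged sketch of data-driven testing and suite-level setup using Python's `unittest`; the `slugify` function under test is a hypothetical example:

```python
import unittest

def slugify(title):
    """Hypothetical function under test: turn a title into a URL slug."""
    return "-".join(title.lower().split())

class SlugifyTests(unittest.TestCase):
    @classmethod
    def setUpClass(cls):
        # Suite-level setup, loosely analogous to TestNG's @BeforeClass.
        cls.cases = [
            ("Hello World", "hello-world"),
            ("  Spaced   Out  ", "spaced-out"),
            ("single", "single"),
        ]

    def test_data_driven(self):
        # Data-driven testing: one logical test run against many
        # input/expected pairs, loosely analogous to a TestNG @DataProvider.
        for title, expected in self.cases:
            with self.subTest(title=title):
                self.assertEqual(slugify(title), expected)
```

Run with `python -m unittest`. With `subTest`, each failing input is reported individually instead of the first failure hiding the rest, which is the core payoff of the data-driven mechanisms these frameworks provide.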


What’s your favorite testing slang term, e.g. 'Smug Report'?

Javier: My favorite slang term is the "Showstopper" defect. I remember using ALM/Quality Center back in the day and finding that phrase in one of the dropdowns when classifying test priority. I've always found that phrase amusing and applicable in some of the regressions that I've encountered in my career.

Thank you for your time Javier!

Interview conducted by:

James Prior
Technical Pre-Sales Consultant

James has been working in software pre-sales and implementation since 2000, and has more recently settled into focusing on technical pre-sales. He takes care of our hands-on demonstrations, and eagerly awaits your request to see our Cycle test automation software in action. Drop him a line at: james.prior[at]tryonsolutions[dot]com.
