Automated Testing Risks and Mitigation

Scott Bussey
June 30, 2021

Easy Come, Easy Go.

There’s been a proliferation of automated testing platforms that make test development easier than ever. It’s become relatively easy for an automation engineer or engineering team to quickly create a testing suite that contains and successfully executes a large number of tests. In such moments, the suite appears full of promise, productivity, and seemingly sound design.

Unfortunately, these positive and uplifting moments of success fade as time progresses. The addition of numerous new tests, frequent changes to the system’s codebase, fluctuating development environments, and updates to third-party libraries produce mounting technical debt and maintenance costs that would make anyone’s head spin.

It becomes clear that these issues arise from poor or absent planning, side-stepped best practices, and a lack of process and knowledge.

Achieving Balance

Understanding the balance between the test suite’s development, adequate planning of the suite, the application of QA theory, and domain knowledge is critically important. How and why you execute automated testing is as essential as (or more essential than) the suite’s development effort itself.

To be successful, couple these elements with an engineer or supporting QA team that has an appropriate level of business and product domain knowledge. A lack of expertise in automation, foundational QA, the business, or the product domain will result in recurring failures. An automation engineer and a collaborative team need to ensure all of these domains have adequate coverage to achieve an appreciable level of testing success.

Neglecting any of these elements will lead to severe, far-reaching consequences. The ramifications not only affect the automation engineers and the associated testing team but can spread cross-functionally to delivery, DevOps, UAT/production testing efforts, and the evaluation feedback loops found in fast-paced Agile working environments.

Taking A Step Back.

There is immense value in reviewing the automated testing strategy and plan at predefined intervals. In particular, reviewing the planning and strategy at the test-case or test level can have profound impacts on the automation suite as a whole.

Schedule sessions to perform this analysis and review the suite’s code base, including page and component models, processing flows, test data, fixtures, and the tests themselves.

Performing these reviews at specific intervals can ensure consistency and adherence to best practices by the QA automation engineer and automation team.

Because of time and budget constraints, it can be easy to lose the focus and resources needed to execute these critical reviews. Consider the alternative: mounting technical debt whose overflow at the wrong time can create a perfect storm, placing a highly functioning project and its team in jeopardy.

Applying Focus at The Test Level

The success points listed above guide us toward best practices, practical levels of knowledge, and operational effectiveness. In the author’s opinion, these elements deserve additional content and discussion to be covered at an appropriate depth.

I highly encourage readers to scour the internet for related content, success stories, failures, and lessons learned to further their working knowledge.

Given the constraints of this article, the remainder will focus on elements relating directly to the suite’s tests, specifically at the test level.

We’ll review these concepts to help you avoid the outcomes described above. Use them as a starting point for risk mitigation in automated test engineering and development.

A Test’s Single Responsibility

The goal of an automated test should mirror the objectives and principles of a well-written test case: a single responsibility conveys simplicity. A test case can contain hundreds or even thousands of atomic steps to achieve its end goal and be very complex in its functionality.

The objective of the test case and its assertion, however complex the test steps may be, should remain relatively simple.

An example from an automation suite is a test case titled "Log in using a valid loyalty account". The title alone sets the stage for a test that is easy to develop because it has a clear objective and a concise assertion for how it will pass or fail.

Just as important as a simple assertion, we should not create overly simplistic tests. A test that merely looks for a single element on a screen isn’t a very compelling test case, even as a build verification test.

Obtaining the correct balance between the overly complex and overly simple is key to a great test case.
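
To make this concrete, here is a minimal sketch using pytest. The LoyaltyPortal page object, its methods, and the account values are hypothetical stand-ins for whatever your platform provides; the point is the shape of the test, not the specifics.

```python
# A minimal sketch using pytest. The LoyaltyPortal page object and
# its methods are hypothetical; substitute your platform's equivalents.

class LoyaltyPortal:
    """Hypothetical page object wrapping the login flow."""

    def __init__(self):
        self._user = None

    def log_in(self, account_number: str, pin: str) -> None:
        # Many atomic steps may hide behind this call: navigation,
        # form entry, submission, waiting for the dashboard to load.
        self._user = account_number

    def logged_in_account(self) -> str:
        return self._user


def test_log_in_using_a_valid_loyalty_account():
    """One objective, one concise assertion on pass/fail."""
    portal = LoyaltyPortal()
    portal.log_in(account_number="1234567890", pin="4321")

    # However complex the steps above, the assertion stays simple.
    assert portal.logged_in_account() == "1234567890"
```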

The Test Data

It is imperative that you consider the test data for your system under test. Some questions to consider:

  • How will the test data be created?
  • How will you anticipate and handle test data conflicts?
  • How will the test data be cleaned up at the end of a test case execution, and more so after an entire test suite’s execution?
  • Will you manually create the test data?
  • Can you acquire a snapshot of the database for some sort of import at a later time?
  • Will the automation software tests generate the test data at the startup of the test run or test suite execution?

Also consider test data management: the inadvertent corruption or destruction of your test data by external systems or manual testing.
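
As a sketch of the creation and cleanup questions above, a pytest fixture can generate unique test data at the start of a test and guarantee cleanup afterward. The create_loyalty_account and delete_account helpers here are hypothetical stand-ins for your data layer.

```python
# A sketch of fixture-managed test data with pytest. The helper
# functions are hypothetical stand-ins for your data layer.
import uuid

import pytest


def create_loyalty_account(account_number: str) -> dict:
    # Hypothetical: insert a record via your API or database layer.
    return {"account_number": account_number, "points": 0}


def delete_account(account: dict) -> None:
    # Hypothetical: remove the record so reruns start clean.
    pass


@pytest.fixture
def loyalty_account():
    # A unique suffix avoids conflicts with parallel runs or
    # leftovers from manual testing.
    account = create_loyalty_account(f"auto-{uuid.uuid4().hex[:8]}")
    yield account
    # Teardown runs even if the test fails, covering per-test cleanup.
    delete_account(account)


def test_new_account_starts_with_zero_points(loyalty_account):
    assert loyalty_account["points"] == 0
```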

An Immutable Assertion

An assertion should be consistent in its ability to produce a pass or fail. Assertions should not be flexible, and they should employ tight tolerances in any calculation. An automated test’s assertion should perform identically across all past and future executions.
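
As an illustration of tight versus loose tolerances, pytest.approx makes the allowed deviation explicit. The cashback calculation and its rate are invented for this example.

```python
# A sketch contrasting tight and loose tolerances with pytest.approx.
# The cashback calculation and 2.5% rate are invented for illustration.
import pytest


def cashback(purchase_total: float, rate: float = 0.025) -> float:
    return round(purchase_total * rate, 2)


def test_cashback_is_exact():
    # Tight tolerance: identical inputs must produce an identical
    # pass/fail result on every execution.
    assert cashback(100.00) == pytest.approx(2.50, abs=0.001)
    # A loose tolerance such as abs=0.5 would let a real defect
    # (2.90 instead of 2.50) slip through as a pass.
```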

Striving For Reuse

Reuse is vital in many aspects of your design and development. This understanding may be second nature for a developer, but the concept can prove elusive for a newly aspiring engineer. Everything is fair game for reuse. Do not overlook a suite’s process orchestration, branching and high-level process flow, static test data, dynamic test data, reporting, and test metadata. Anything you find within your code should be considered and handled with reuse in mind.

Performing this valuable service early in design and development saves a tremendous amount of technical debt and refactoring later on.
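
One way this might look in practice: pull a shared flow into a single helper or page model and call it from every test that needs it. The names and flows below are hypothetical.

```python
# A sketch of reuse: one shared login flow, called by many tests.
# The flow and session values are hypothetical.


def log_in_as(account_number: str) -> dict:
    """Shared orchestration: every test reuses this one flow
    instead of re-scripting the login steps."""
    return {"account": account_number, "session": "token-abc"}


def test_search_requires_login():
    session = log_in_as("1234567890")
    assert session["session"]


def test_checkout_requires_login():
    # Reuse pays off here: when the login flow changes, only
    # log_in_as() needs maintenance, not every test.
    session = log_in_as("1234567890")
    assert session["account"] == "1234567890"
```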

Maintenance Pitfalls

While developing, employ a scheduled-break strategy: step back and perform a high-level review, looking for signs of rigidity, for the reuse opportunities previously discussed, and for an overall sound design strategy.

Take a moment to consider the future maintenance effort that will arise from the myriad changes to the system under test as defects are fixed and new feature requests are implemented in the codebase.

Will a particular component see significant changes requiring great maintenance effort? Are there other ways to accomplish the same task that can save time, money, and headaches? How can you best adapt your tests to be less rigid against the system in which they execute?

The ability to see ahead is worth its weight in gold, and it is an achievable goal. This foresight is where business and product domain knowledge become valuable assets on which to base your work.
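
As one example of reducing rigidity, assuming a Selenium-based suite: prefer stable, purpose-built locators over brittle positional ones. The data-testid attribute is an assumed convention your team would control; it is not prescribed by the source.

```python
# A sketch of locator rigidity, assuming a Selenium-based suite.
# The data-testid attribute is an assumed convention in your app.
from selenium.webdriver.common.by import By

# Brittle: tied to page structure; breaks when layout shifts.
BRITTLE_LOGIN_BUTTON = (By.XPATH, "/html/body/div[2]/div/form/div[3]/button")

# Less rigid: survives restyling and restructuring as long as the
# attribute your team controls stays in place.
STABLE_LOGIN_BUTTON = (By.CSS_SELECTOR, "[data-testid='login-button']")
```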

Naming Conventions and Identification

Most testing platforms allow you to name and describe your tests. Some also make allowances for additional metadata, such as an identification schema, to promote quick identification of your tests. You can even program these attributes at a very detailed level by extending the platform’s inherent capabilities.

Whatever the platform provides, take the opportunity to give your tests clear and concise names that reflect their true intent. Also, adopt a convention for quick, referenceable IDs so you can efficiently index and track your tests in the future. These IDs add efficiency to test identification during a test run, in test reporting, and in defect management activities.
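
As a sketch of one possible convention, pytest markers can carry a quick-reference ID alongside a descriptive name. The TC-prefixed scheme and the testcase marker are invented for illustration.

```python
# A sketch of attaching quick-reference IDs to tests with a custom
# pytest marker. The "TC-" scheme is an invented convention; register
# the marker in pytest.ini to avoid "unknown marker" warnings.
import pytest


@pytest.mark.testcase("TC-1042")
def test_log_in_using_a_valid_loyalty_account():
    """The name states the test's true intent; the ID indexes it for
    test runs, reporting, and defect management."""
    assert True  # placeholder for the real steps and assertion
```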

Tests Stand Alone

Well-designed tests should operate independently, without dependencies on other tests. A test that relies on a previously executed test in order to run correctly should raise a red flag.

These red-flag situations occur when tests are interdependent: when they require passed data, shared assertions, reliance on artifacts, or test data modified within the testing environment.

Over time, this results in test defects, false assertions, and inaccurate reporting. Constantly changing development environments and additional code releases then create a scenario where unforeseen technical debt evolves and thrives.
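
To make the red flag concrete, here is a sketch contrasting a coupled pair of tests with a standalone one. All names and data are hypothetical.

```python
# A sketch contrasting coupled and standalone tests. Names and data
# are hypothetical.

# Red flag: the second test depends on the first having run.
_shared_account = {}


def test_enroll_account():
    _shared_account["points"] = 100
    assert _shared_account["points"] == 100


def test_redeem_points():  # fails if run alone or reordered
    _shared_account["points"] -= 100
    assert _shared_account["points"] == 0


# Standalone: the test creates everything it needs itself.
def test_redeem_points_stands_alone():
    account = {"points": 100}  # own setup, no inherited state
    account["points"] -= 100
    assert account["points"] == 0
```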

Summary

The story of the Tortoise and the Hare still resonates in our professional lives. Long-term strategy and planning will always win over the short-term gains of a quick and easy approach.

Remember to target and minimize technical debt by continuously obtaining, promoting, and growing your knowledge base. Strive not to reinvent the wheel.

It’s paramount to ensure a blend of technical and domain knowledge across an automation engineer and the associated team. Take time to review your work and set tollgates.

Celebrate hard-earned success. Don’t berate yourself; instead, take the tough lessons learned from your mistakes.

Rinse, repeat and profit.

