E2E Testing Best Practices

Rakesh KB
6 min read · Jun 15, 2021

--

Introduction

End-to-End (E2E) testing examines real-world scenarios of an application from start to finish, touching as many functional areas and parts of the application’s technology stack as possible. Compared to unit tests, which are narrow in scope, E2E tests have a broad scope, and so are sometimes called “Broad Stack” or “Full Stack” tests. E2E tests focus on validating an application’s workflows from the perspective of the end user, which makes them highly valued by management and customers. E2E testing usually is, and should be, performed last in the testing process, following lower-level unit, integration, and system testing.

E2E tests can be complex to build, fragile, and challenging to maintain. As a result, a common approach is to plan a smaller number of E2E tests than unit and integration tests, as shown in the figure “Test Automation Pyramid”. Google often suggests a 70/20/10 split: 70% unit tests, 20% integration tests, and 10% end-to-end tests. The exact mix will differ for each team, but in general it should retain that pyramid shape. E2E testing is conducted in as realistic an environment as possible, including the use of back-end services and external interfaces such as the network, database, and third-party services. Because of this, E2E testing can surface real-world timing and communication issues that might be missed when units and integrations are tested in isolation.

Figure: Test Automation Pyramid

Best Practices for E2E Testing

Keep an End-User Perspective

E2E tests should be designed from the perspective of an end user, focusing on the features of the application rather than its implementation. It is good practice to use documents such as user stories, acceptance tests, and BDD scenarios to capture the end users’ perspective.
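To illustrate, a BDD-style scenario can be phrased in Given/When/Then terms even in plain test code. This is a minimal sketch: `TicketApp` is a hypothetical in-memory stand-in for the application under test, not a real API.

```python
# A BDD-style E2E scenario written from the end user's perspective.
# TicketApp is a hypothetical stand-in for the real application.

class TicketApp:
    def __init__(self):
        self.tickets = []

    def submit_ticket(self, summary):
        ticket_id = len(self.tickets) + 1
        self.tickets.append({"id": ticket_id, "summary": summary, "status": "Open"})
        return ticket_id

    def ticket_status(self, ticket_id):
        return self.tickets[ticket_id - 1]["status"]

def test_user_can_raise_a_general_request():
    # Given a user with access to the ticketing system
    app = TicketApp()
    # When they submit an IT General Request
    ticket_id = app.submit_ticket("Need a new laptop")
    # Then the ticket is created and visible as Open
    assert app.ticket_status(ticket_id) == "Open"

test_user_can_raise_a_general_request()
```

Note that the test body reads as user intent (submit a ticket, see it open) rather than as implementation detail.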

Limit Exception Testing

Focus E2E tests on high-use “happy path” (or “golden path”) cases that capture typical usage scenarios, for example creating an IT General Request ticket. Use lower-level unit and integration tests to check bad-path/sad-path exception cases, such as a user attempting to order more of an item than is currently in inventory, or returning an item past the allowable return date.

Leverage Risk-Based Testing

Risk-based testing is an approach to software testing that acknowledges that not all parts of an application are created equal; they differ along several criteria. For each part of the application, analyse factors such as code complexity, how critical that area is for the line of business, and how often it changes, among others. That way you can identify which parts of the application are simultaneously (a) more likely to have defects introduced into them, and (b) would cause the most harm if broken. You can then concentrate your testing efforts on those areas, at least in the beginning.

Apply Risk Analysis

Given the relative expense of performing E2E tests manually or automating them, concentrate on your application’s high-risk features. To determine a high-risk feature, consider both how likely a failure is to happen, and the potential impact that it would have on end users. A risk assessment matrix is a useful tool in identifying risk.
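A risk assessment matrix can be sketched in a few lines: score each feature by failure likelihood and end-user impact, then prioritise by the product of the two. The feature names and 1–3 scoring scale below are illustrative assumptions, not prescribed values.

```python
# A minimal risk-assessment matrix: score each feature by failure
# likelihood and end-user impact (each 1-3), then rank by the product.

def risk_score(likelihood, impact):
    return likelihood * impact

# (likelihood, impact) -- example values for illustration only
features = {
    "checkout":  (3, 3),  # changes often, failure blocks revenue
    "search":    (2, 2),
    "help page": (1, 1),
}

ranked = sorted(features, key=lambda f: risk_score(*features[f]), reverse=True)
print(ranked)  # highest-risk feature first
```

Features at the top of the ranking are the strongest candidates for E2E coverage; those at the bottom may be adequately covered by lower-level tests.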

Test in the Right Order

When a single unit test fails, it’s relatively easy to figure out where the defect occurred. As tests grow in complexity and touch more components of an application, the increase in potential points of failure makes them harder to debug when a failure occurs. Running the unit and integration tests first allows you to catch errors when they are relatively easy to resolve. Then, during E2E testing, complete your critical smoke tests first, followed by sanity checks and other high-risk test cases.
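The ordering above can be sketched as a tiny runner that executes cheap, critical suites first and aborts early if the smoke stage fails, so expensive scenarios never run against a broken build. The suite names and lambda placeholders are illustrative.

```python
# Run E2E suites in order of criticality; stop early if smoke fails,
# since deeper tests on a broken build only produce noise.

def run_suites(suites):
    results = []
    for name, test in suites:
        passed = test()
        results.append((name, passed))
        if name == "smoke" and not passed:
            break  # no point running deeper tests on a broken build
    return results

suites = [
    ("smoke",  lambda: True),   # critical smoke tests first
    ("sanity", lambda: True),   # then sanity checks
    ("high-risk scenarios", lambda: True),
]
print(run_suites(suites))
```

Most real test runners express this with tags or markers rather than a hand-rolled loop; the loop just makes the ordering policy explicit.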

Manage Your Test Environment

Make your environment setup process as efficient and consistent as possible. Document the requirements for the test environment and communicate them to system administrators and anyone else involved in the setup of the environment. Include in your documentation how you will handle updates to the operating system, browsers, and other components of the test environment to keep it as similar as possible to the production environment. One solution may be to use an image backup of the production environment for testing purposes.

Separate Test Logic from UI Element Definitions

To make your automated E2E tests more stable, separate the logic of your tests from the UI element definitions. Use an object repository or a page object pattern to avoid having your test logic interact directly with the user interface. This makes the tests less likely to fail due to changes in the structure of the UI.
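As a sketch of the Page Object pattern: element definitions live in one class, and tests call intent-level methods instead of touching selectors directly. The `StubDriver` below is a stand-in; in practice the driver would be a real browser-automation driver, and the selectors shown are hypothetical.

```python
# Page Object pattern: test logic talks to LoginPage, never to raw
# selectors. StubDriver simulates a browser driver for illustration.

class StubDriver:
    def __init__(self):
        self.fields = {}

    def fill(self, selector, value):
        self.fields[selector] = value

    def click(self, selector):
        self.fields["clicked"] = selector

class LoginPage:
    # UI element definitions live in one place...
    USERNAME = "[data-test='username']"
    PASSWORD = "[data-test='password']"
    SUBMIT   = "[data-test='login-submit']"

    def __init__(self, driver):
        self.driver = driver

    # ...and tests call intent-level methods instead of selectors.
    def login(self, user, password):
        self.driver.fill(self.USERNAME, user)
        self.driver.fill(self.PASSWORD, password)
        self.driver.click(self.SUBMIT)

driver = StubDriver()
LoginPage(driver).login("alice", "s3cret")
assert driver.fields["clicked"] == LoginPage.SUBMIT
```

If the login form’s markup changes, only `LoginPage` needs updating; every test that calls `login()` is untouched.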

Select Page Elements the Smart (a.k.a. Stable) Way

Asynchronous loading of elements is a common feature of modern web applications. This presents a challenge when automating E2E tests, since attempting to interact with an element that is not yet available causes errors in most testing tools. Asynchronous loading is not the only challenge, either: changes to element attributes, such as ID or name, can lead to fragile tests. There are several ways to locate a UI element, but not all of them are robust. CSS class selectors are a case in point: classes are liable to change, which means more tests failing because of it. ID selectors rarely change, but they sometimes can. The best option is to add a custom data-* attribute, for instance data-test=”unique_value”, to the target element, used only for test purposes. As long as the value is unique, the test will be able to find it, and most developers and UX people know not to change it.
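The contrast between class-based and data-test-based lookup can be shown with a small sketch. The `find()` helper and the element dictionaries here are stand-ins for a real driver’s selector engine; the attribute values are illustrative.

```python
# Locating elements by a dedicated data-test attribute instead of
# brittle CSS classes. find() is a stand-in for a driver's selector
# engine, operating on a toy DOM.

elements = [
    {"tag": "button", "class": "btn btn-primary-v2", "data-test": "submit-ticket"},
    {"tag": "a",      "class": "nav-link",           "data-test": "home"},
]

def find(attr, value):
    return next(e for e in elements if e.get(attr) == value)

# Styling can change ("btn-primary-v2" -> "btn-primary-v3") without
# breaking this lookup, because data-test exists only for tests.
button = find("data-test", "submit-ticket")
assert button["tag"] == "button"
```

A test keyed on `class == "btn btn-primary-v2"` would break the moment a designer renames the class; the data-test lookup survives any restyling.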

Handle Waits for UI Elements Appropriately

Don’t allow an E2E test to fail unnecessarily while waiting for a page to load or a UI element to appear on the screen. The wait time should be at least as long as the normal time it takes for the UI element to appear, but not much longer. Excessively long wait times may indicate a problem with the application, interfaces, or environment, and are annoying to end users. In addition, allowing long wait times in your automated tests can slow the overall execution of your E2E suite. In general, set a wait time that is just a little longer than the normal time it takes for a UI element to appear.
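This is the idea behind an explicit (polling) wait, as opposed to a fixed sleep: poll for the condition and give up only after a timeout. A minimal sketch, with a simulated element standing in for a real page:

```python
import time

# An explicit wait: poll a condition until it holds or a timeout
# (set just a little longer than the element's normal load time) expires.

def wait_for(condition, timeout=5.0, poll_interval=0.1):
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if condition():
            return True        # element appeared; proceed immediately
        time.sleep(poll_interval)
    return False               # timed out; let the test fail with context

# Simulated element that becomes "visible" after a short delay.
appear_at = time.monotonic() + 0.3
assert wait_for(lambda: time.monotonic() >= appear_at, timeout=2.0)
```

Unlike a fixed `sleep(2)`, the poll returns as soon as the element appears, so well-tuned explicit waits make suites both more stable and faster.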

Ensure You Have Proper Test Data

It is very important to ensure that the tests get high-quality data, in suitable quantities, exactly when they need it. Just copying data from production and calling it a day might sound like a good solution, but doing that has plenty of problems. For starters, production data might lack representation of edge-case scenarios that need to be tested. It also might lack data for recently added database tables. The most egregious risk is exposing sensitive data such as personally identifiable information (PII) or business-sensitive data. That’s why you need a solid test data management (TDM) process. You can go with the approach of automatically generating test data, which is generally recommended. If you do need to resort to production cloning, make sure to employ data masking capabilities to prevent sensitive data leaking to non-production environments.
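One common masking technique, sketched here under simplified assumptions, is deterministic tokenisation: hashing a sensitive field so the real value never reaches the test environment, while the same input always maps to the same token, preserving referential integrity across tables. The field names and domain are illustrative.

```python
import hashlib

# Mask PII before production data reaches a test environment.
# Deterministic hashing: the same email always yields the same token,
# so joins across tables still work, but the real value is gone.

def mask_email(email):
    digest = hashlib.sha256(email.encode()).hexdigest()[:10]
    return f"user_{digest}@example.test"

row = {"id": 42, "email": "jane.doe@corp.com", "plan": "gold"}
masked = {**row, "email": mask_email(row["email"])}

assert masked["email"] != row["email"]                  # PII removed
assert mask_email(row["email"]) == masked["email"]      # deterministic
```

A plain truncated hash like this is a sketch only; production-grade masking usually adds a secret salt or keyed HMAC so tokens cannot be reversed by hashing guessed inputs.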

Conclusion

It is very important to plan manual and exploratory testing as part of E2E testing, to address difficult-to-automate aspects such as usability and user experience. To ensure a complete and well-balanced set of tests, it is also worth including automated performance and load testing.
