
By Merim Bungur

How to Write Good Automation Tests

Is writing automation tests simple and straightforward?

There is an enormous difference between just writing automation tests and engineering stable, atomic automation tests.

Are you stuck fixing someone else's bad tests, or are you writing tests without giving them much thought? If so, it's not looking good. Ask around: being just a regular test writer is often a thankless job. The Internet is filled with confessions from automation testers confirming the sour relationship between them and their developer colleagues and managers. How did it get to that point?

The relationship between automation testers and the rest of the development team is shaped by how well their tests perform, and by how much the development cycle slows down when the pipeline is clogged with poorly written, unstable tests.

So what makes a test stable, or in other words, good?

A stable test passes when the component it is testing works, and it fails when that component's functionality is broken, ideally indicating what needs to be fixed.

An unstable test, on the other hand, sometimes passes and sometimes fails, which makes it unreliable and nothing more than an invitation to investigate what is going on.
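
To make the difference concrete, here is a minimal Selenium sketch in C# (the element id and timings are hypothetical). A hard-coded sleep produces exactly the sometimes-passes, sometimes-fails behavior described above, while an explicit wait keeps the same step stable:

```csharp
using System;
using System.Threading;
using OpenQA.Selenium;
using OpenQA.Selenium.Support.UI;

public class WaitExamples
{
    // Flaky: a fixed sleep passes when the page happens to respond
    // within two seconds and fails when it doesn't.
    public void FlakyClick(IWebDriver driver)
    {
        Thread.Sleep(2000);
        driver.FindElement(By.Id("result")).Click();
    }

    // Stable: an explicit wait polls until the element appears and
    // fails with a clear timeout message if it never does.
    public void StableClick(IWebDriver driver)
    {
        var wait = new WebDriverWait(driver, TimeSpan.FromSeconds(10));
        wait.Until(d => d.FindElement(By.Id("result"))).Click();
    }
}
```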

Because nobody likes not knowing what went wrong, your team will quickly lose confidence in you if the tests you wrote are flaky and unstable. If some tests consistently fail in the pipeline even after being "fixed", they will be greeted with an "Ah, that's the one that always fails." That sentence is an excuse to ignore the test and stop fixing it, and it marks the beginning of a bad relationship between developers and testers.

Throughout the next couple of articles, I will discuss the automation testing strategies we use here at Infinity Mesh. I'll tell you about the issues you will run into when writing and executing tests, give you solutions and alternatives, and discuss the best approach depending on the circumstances. But right now, in this article, I'll talk about the mindset you need when writing automation tests.

Design patterns are not commandments that must be obeyed at all costs

- You started working on a new project.
- Your new team looks confident, and you are writing code from scratch this time, not inheriting any bad leftovers.
- Maybe you'll even get to use that design pattern you always wanted to use.
- Fast forward a bit, and your code just doesn't look like the one from that best-practice tutorial you watched.
- You take a look at your code, and you can see bits of best practices here and there, but also some hacks and repeated code you are not very proud of.

This is OK. No, really, it's fine; no need to panic. This is the real world, where things are sometimes hardcoded and a couple of things are repeated in code.

Design patterns and best-practice automation strategies are just agreed-upon guidelines you strive to follow as best you can while fixing bugs, sitting in meetings, and trying to meet deadlines. They have their advantages and disadvantages, and recognizing when to apply them is knowledge obtained by making many mistakes. This means that when you are tackling a specific problem, you might or might not be able to follow best practices, depending on the business issue and the code constraints in your project.

The Page Object pattern is a fundamental concept in automation testing, used by practically everyone for over a decade. Is it good? Definitely. Does it follow the SOLID principles? No. It immediately fails the first one, the Single Responsibility Principle: a page object locates elements, performs actions, and reads page state, all in one class. And that shouldn't stop you from using it on most projects, unless your business requirements dictate something else.
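
For reference, here is a minimal page object sketch in C# with Selenium (the page, locators, and method names are hypothetical). Notice how one class already mixes those responsibilities, and is still a perfectly reasonable design:

```csharp
using OpenQA.Selenium;

// Tests never touch locators directly; they talk to this class instead.
public class LoginPage
{
    private readonly IWebDriver _driver;

    public LoginPage(IWebDriver driver) => _driver = driver;

    private IWebElement UsernameField => _driver.FindElement(By.Id("username"));
    private IWebElement PasswordField => _driver.FindElement(By.Id("password"));
    private IWebElement LoginButton => _driver.FindElement(By.Id("login"));

    // One named action per user intention keeps the tests readable.
    public void LogIn(string username, string password)
    {
        UsernameField.SendKeys(username);
        PasswordField.SendKeys(password);
        LoginButton.Click();
    }

    public string ErrorMessage => _driver.FindElement(By.CssSelector(".error")).Text;
}
```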

You will never really learn what bad tests are until you write them yourself

I can't tell you how much time it will take for you to develop the ability to recognize which parts of a test should be refactored, and which tests should be broken down into multiple tests, but I can tell you exactly what to do to get there:

1. Write a bunch of tests for any feature/page/control
2. Execute them locally and watch Selenium do its magic
4. Enjoy that warm feeling of tests passing
5. Notice that there is no number 3 on this list. This is how it's gonna feel when you get unexpected errors and those tests fail in production. Just because your tests run locally does not mean they will work in another environment (one common culprit is sketched below).
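
That culprit is environment details baked into the tests. A minimal sketch of one way around it, assuming a hypothetical BASE_URL environment variable set by the pipeline:

```csharp
using System;

// Pull environment-specific values from the environment instead of
// hardcoding them, so the same tests run locally and in the pipeline.
public static class TestConfig
{
    public static string BaseUrl =>
        Environment.GetEnvironmentVariable("BASE_URL") ?? "http://localhost:5000";
}

// In a test: driver.Navigate().GoToUrl(TestConfig.BaseUrl + "/login");
```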

This is why you, as an engineer, must set aside time every day to critically review the code you have written so far and reflect on what can be improved. It's important to write tests first, to clear daily tasks and make progress; just get it done, and don't overthink and optimize everything. When you finish writing some tests, look at your code, both old and new, inspect the test results, and think about what can be improved. This is the secret ingredient to writing good automation tests, because even the definition of a good test is, to some degree, relative to the project you are working on.

Did you read somewhere that 160 automated tests need to run in parallel on multiple servers in under 5 minutes? That is not an objective measure. It might hold for a simple website where every action gets a near-immediate response, but what about business processes that don't? Maybe after clicking a button you have to wait 5 minutes for some third-party service to finish executing before you get a response and test execution can resume.
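
In that case, the fix is not more parallelism but a wait that models the business process. A hedged sketch, assuming a hypothetical status element that appears once the third-party call completes:

```csharp
using System;
using OpenQA.Selenium;
using OpenQA.Selenium.Support.UI;

public static class SlowProcessSteps
{
    // Some business steps legitimately take minutes; give the wait a
    // generous timeout and a relaxed polling interval instead of trying
    // to force the test under an arbitrary time limit.
    public static void WaitForThirdPartyResult(IWebDriver driver)
    {
        var wait = new WebDriverWait(driver, TimeSpan.FromMinutes(6))
        {
            PollingInterval = TimeSpan.FromSeconds(10)
        };
        wait.Until(d => d.FindElement(By.Id("processing-complete")).Displayed);
    }
}
```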

There is no need to impose these constraints and conditions when you begin developing automation tests on a project. Work on the tests, integrate them into the pipeline, and then discuss execution time with your team: whether optimization is needed, and when the right time is to start optimizing.

Your first good tests will be born from your own very poorly written tests after some refactoring and cleanup.

Refactor as you go

I'm a strong proponent of opportunistic refactoring. Refactoring shouldn't be something you do when it's too late and the code is already in such a state that no one dares to touch some parts of it for fear of breaking something else. Refactoring should be an opportunistic activity:

Always leave the code behind in a better state than you found it.
If everyone on the team does this, they make small, regular contributions to codebase health every day.

During automation testing, you strive to keep your tests short and simple, which means the complex code lives inside the testing framework and only surface methods with clear names are exposed for tests to use. In agile development, you don't have the luxury of dedicating a big chunk of time to framework building. This is why you start by writing tests and then refactor parts of the code into the framework as needed. With this approach, you are always developing something you are actually using, and you build the framework slowly, in small iterations.
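
In practice, that means a test reads like a sentence while the messy Selenium work hides behind the framework's surface methods. A minimal NUnit sketch (all names are hypothetical):

```csharp
using NUnit.Framework;
using OpenQA.Selenium;
using OpenQA.Selenium.Chrome;

public class CheckoutTests
{
    private IWebDriver _driver;

    [SetUp]
    public void SetUp() => _driver = new ChromeDriver();

    [TearDown]
    public void TearDown() => _driver.Quit();

    // The test stays short; every line states an intention.
    [Test]
    public void Order_Confirmation_Is_Shown_After_Checkout()
    {
        var checkout = new CheckoutSteps(_driver);

        checkout.AddItemToCart("SKU-123");
        checkout.CompleteCheckout();

        Assert.That(checkout.ConfirmationIsVisible, Is.True);
    }
}

// The complex code lives here and grows iteration by iteration as you
// refactor repeated test code into the framework.
public class CheckoutSteps
{
    private readonly IWebDriver _driver;
    public CheckoutSteps(IWebDriver driver) => _driver = driver;

    public void AddItemToCart(string sku) { /* locators, waits, retries */ }
    public void CompleteCheckout() { /* multi-page flow hidden here */ }
    public bool ConfirmationIsVisible =>
        _driver.FindElements(By.Id("confirmation")).Count > 0;
}
```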

Format tests to identify what each section of the test does

For starters, just follow the "Arrange-Act-Assert" pattern, also known as the Given/When/Then pattern.

Make sure that each method has these functional sections:
1. Arrange all necessary preconditions and inputs
2. Act on the object or method under test
3. Assert that the expected result has occurred

If the Act and Assert sections of your test method are mixed up and intertwined, there is a good chance you should break that method down into multiple tests.
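
Here is what the three sections look like in a minimal NUnit sketch (the class and numbers are hypothetical, purely to make the sections visible):

```csharp
using NUnit.Framework;

// A tiny hypothetical unit under test.
public class DiscountCalculator
{
    // 10% off orders of 100 or more.
    public decimal Apply(decimal total) => total >= 100m ? total * 0.90m : total;
}

public class DiscountTests
{
    [Test]
    public void Order_Over_Threshold_Gets_Ten_Percent_Discount()
    {
        // Arrange: all necessary preconditions and inputs.
        var calculator = new DiscountCalculator();
        var orderTotal = 200m;

        // Act: the one action under test.
        var discounted = calculator.Apply(orderTotal);

        // Assert: the expected result has occurred.
        Assert.That(discounted, Is.EqualTo(180m));
    }
}
```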

I'll talk more about test frameworks, writing atomic tests and automation testing in C# in the following articles.

 

Written by Merim Bungur, Lead Software Engineer
