One thing I do not like is wasting my time, and an area prone to time-wasting is test automation. Test automation needs the same continual-improvement approach as anything else on a project.
Firstly, you want to be a “Test Engineer”, not an “Automated Tester”, or at least you should want to be. The Wikipedia definition is not bad at all: “A test engineer is a professional who determines how to create a process that would best test a particular product”. Test automation is a major activity within that role.
I have a few assumptions for this article (rant): that you are in a team that believes in a user story-driven approach, with engagement from everyone in debating and improving the way things are done. It’s what I have been used to for the last six years, but I am well aware that it is not always the base state. With an engaged Product Owner, a good UX team, and coders who love to code, you will create something good, even if it’s not entirely the original vision. A healthy software development project is fluid; it moves and changes every day, because software that delivers to people’s expectations evolves, and can’t be worked out from day one. We are in far less dictatorial times as regards delivering quality, and immersing the client as fully as possible in the team is essential.
Good UX, incidentally, can save you a lot of time when working out test steps, as UX drives not only the vision of the stories but also the scenarios: UI test scenarios, right there. UX elicits examples from the client in order to define the user stories and the design, and this is a useful process for the client too, as they can see the software before it is coded. There are many discussions around how many test resources a project needs, but I would argue that one very good tester is sufficient, because testing as a whole should always be a team activity. Test automation should be focused on acceptance testing against new builds, building tests from stories, and exploratory testing of new areas or bug fixes. Manual testing should never stop.
A few UI tests do not make good test automation, nor does talking about it at length. It’s a “doing thing”, and central to that activity is being an active member of the team. Before lumbering in, think about your mindset, of which you need several. The first, and perhaps most important, concerns the direction requirements come from. These give you the initial idea of what you are going to do, and the opportunity to offer your opinions. You don’t just want stories, you want scenarios, i.e. examples. This is something the whole team can be part of, and it provides hooks for the initial tests. It is worth flagging up non-functional requirements at this stage, as those tests require more thought and more criteria. So mindset number one is quality assurance rather than test automation: it’s about ensuring the path from what people want to what people get is as clear as it can be at the start of a project.
On a daily basis you will be working amongst developers, because developing tests purely from written specs, with no consultation, means firstly that you could end up heading in a different direction to development, and secondly that you could misunderstand the specs. Daily conversation should be the norm on a project. As developers work through features, they hit their own technical obstacles, or simply a lack of clarity in the specs. This is an area where a test engineer can alleviate problems by investigating and sharing that information with the developers. This process improves and expands the tests by default.
The priority tests, and the ones that prove most useful on a daily basis, are UI and API tests, which act as a quick check for any regression. Though they don’t have to run on the same branch, accessibility, security, performance, and cross-browser/mobile checks should also be integral to the pipeline. All of this can be managed from the DevOps side on the build server; commonly, only a few simple commands are needed to run the tests as part of the application-under-test build. You can have all of these in the build pipeline, but the cost is time and effort, so they need to be managed tightly, which is why you should take a ruthless approach to reviewing tests and test code. Reuse test code wherever possible: that mindset will keep the coding cleaner. Do more test coding yourself: packages are great for quick test features, but if they can be avoided, they should be, because they are overhead.
The easiest way to work out which tests to run at which points is to think about the points where test automation is beneficial. For example …
Two minutes is a sensible maximum, as there can be many builds happening in a day, and on more branches than just master. UI and API tests are commonly part of these builds and need to run efficiently:
- Running tests in parallel saves time, so ideally ensure your scenarios can run with no dependencies on other scenarios.
- Maintain an optimal set of tests that run with every build, and avoid repetition, i.e. avoid testing the same thing more than once.
- Keep the test framework low on overhead: don’t add features that deliver no value beyond show.
NightmareJS is a very minimal framework with basic features: it runs UI tests in a headless (i.e. without a graphical user interface) Electron browser, and doesn’t require the usually obligatory Selenium.
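A minimal NightmareJS-style smoke test might look like the following sketch; the URL and selectors are placeholders, not from a real project:

```javascript
const Nightmare = require('nightmare');

// show: false keeps Electron headless - no visible window on the
// build server.
Nightmare({ show: false })
  .goto('https://example.com/login')
  .type('#username', 'test-user')
  .type('#password', 'secret')
  .click('#submit')
  .wait('#dashboard')               // wait for a post-login element
  .end()
  .then(() => console.log('login smoke test passed'))
  .catch(err => {
    console.error('login smoke test failed:', err);
    process.exit(1);               // non-zero exit fails the build step
  });
```

The non-zero exit code on failure is what lets the build server treat the test run as a pass/fail pipeline stage.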
Load tests are useful for highlighting bottlenecks and inefficiencies, and it doesn’t need to be a heavy load test to highlight problems.
Accessibility and security are often overlooked areas. Accessibility has multiple benefits: along with making your app accessible, it encourages good front-end coding practice. Security testing is a wide area, but there are checks you can do to avoid the more common browser-side security holes.
This all assumes you are running tests against the build environment, but of course tests can be run against other environments, such as staging and even production. Now you will have to get your hands dirty with some DevOps, but for the modern test engineer that’s a given, and it is nothing to be hesitant about trying. I would recommend playing with TravisCI, which is geared towards open-source projects, and JenkinsCI is popular build server software.
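As a sketch, a minimal TravisCI configuration for a Node project might look like this; the script names are placeholders for whatever commands your own framework exposes:

```yaml
# Minimal TravisCI sketch: install dependencies and run the test
# suites on every push. Script names are illustrative.
language: node_js
node_js:
  - "8"
install:
  - npm ci        # install exactly what the lockfile specifies
script:
  - npm run test:api
  - npm run test:ui
```

This is the whole point of “only a few simple commands”: the build server just runs the same scripts any team member could run locally.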
Test framework TLC
It’s important to maintain the test framework itself, including keeping an eye on those great open-source packages you have got used to. Ensure you are managing versions, as an unexpected update could break the entire framework. Don’t look at packages as less work; they provide convenience, but the onus is on you to ensure they work within the context of your test framework.
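One way to manage versions with npm, for instance, is to pin exact versions and install from the lockfile; these are standard npm commands, though the package name is just an example:

```shell
# Pin an exact version so an unexpected update cannot silently break
# the framework; upgrades then happen deliberately, not by surprise.
npm install --save-exact nightmare   # writes an exact version, no ^ range

npm ci          # installs exactly what the lockfile says, every build
npm outdated    # review available updates on your own terms
```

Reviewing `npm outdated` as a routine task turns dependency updates into a planned activity rather than a surprise build failure.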
A test framework is a combination of tools and custom code designed to help automate acceptance testing more efficiently. I have used so many tools that I am losing count, but this learning is vital, as projects have different requirements. If the requirements follow a specification-by-example approach, then a framework supporting Gherkin would perhaps be a good starting point, but only if the team can see the value. Only add major layers, such as Cucumber, if they are beneficial to the project as a whole.
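For reference, a specification-by-example scenario in Gherkin reads like this; the feature and wording are illustrative, not from a real project:

```gherkin
Feature: Account login
  Scenario: Registered user logs in
    Given a registered user "alice"
    When she logs in with a valid password
    Then she sees her dashboard
```

The value is that the Product Owner and client can read, and dispute, this wording directly, which is exactly the test of whether a Gherkin layer is worth its overhead to your team.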
There is an art to test automation, and these are the main reasons most test frameworks crumble during projects:
- There is no framework, it’s a loose collection of test scripts
- Too many cooks … too many people trying to fix or improve elements of the framework, outside of good working practice
- Not enough work was done to make it easier for others to add tests
- Lazy use of multiple helper packages that cause performance overhead
- Not optimising which tests are run, and when
- Developers are not engaged in the test automation process, due to disinterest (it must show value to everyone)
- The whole team is not engaged in the test automation process; if the Product Owner doesn’t understand its value, that indicates a very bad perception issue
- Changes to core test tool code, which are never a solution to challenges