No matter how well your back-end is written and tested, the front-end is a kind of mediator that connects your back-end with the user. It is therefore an area where errors are not only created but are also clearly visible to the user. This means errors in the front-end immediately erode trust in your application. So, if you think that a testing phase is important for the back-end, then – in my opinion – the same strongly applies to the front-end part of your software.
What options do you have to improve the web app development process in your company? Let me quickly present the most commonly used types of tests. You might find different typologies, definitions and scopes for each test type I talk about in this section. There are also other types of tests, less directly connected with the front-end itself (for example, tests that cover system security, such as penetration tests). The most important thing is not to get overwhelmed by the wide variety of approaches to testing – try to get a general understanding of what you need and then follow the path that works best for your company.
Division by automation:
• Manual – Performed by hand by a developer, a tester, or both. These are relatively costly and slow, and they are prone to errors caused by routine. Testing an ever-increasing amount of software manually soon becomes impossible — unless you want to spend all your time on manual, repetitive work instead of delivering working software. But it is still worth having manual tests that concentrate on usability and exploratory testing (exploring the software system without test plans or test suites, which helps to uncover hard-to-find problems).
• Automated – This type of testing uses scripts and tools that prepare data and a state, then execute the steps required to verify the scenario in an automated way. These are often supplied with helpful logs and statistics, generated out of the box by testing tools. They can be integrated into CI/CD processes and help prevent the deployment of erroneous code. They are faster, more reliable, less time consuming and, at the end of the day, cheaper. Every run of an automated test executes with the same quality, in the same manner and in the same order. You can test as much as you need without consuming developer time.
Division by granularity level / application layer:
• Unit tests – They test the smallest chunks of functionality in isolation from other components and from the whole process. They are, relatively speaking, the easiest, quickest and cheapest to create, maintain and modify. But they do not guarantee that the application as a whole will work after deployment.
• Integration tests – They play a helper role to bridge the gap between unit and E2E tests. They test how parts of an application interact with one another.
• End-to-end (E2E) tests – They test the application’s whole processes. They can also check the UI layer and test for visual regressions (unwanted user-interface quirks that may emerge after the latest changes are deployed). Due to their complexity, they are slower, more expensive and harder to create, maintain and modify. But at the same time, they give you the most confidence when you need to decide whether your software is working and whether it will fulfil users’ needs.
There are different philosophies about what to test, how much of the code you should cover with tests, and what proportion of each test type you should have. One of the best-known examples that helps to grasp this idea is the testing pyramid created by Mike Cohn.
Not all companies decide to have a pyramid shape like this (where the proportions might be, for example, 70% unit tests, 20% integration tests, and 10% E2E tests). But whatever your approach to testing is, it should be strongly tied to your company’s needs and goals, and preceded by an analysis and planning phase. You can get more detailed information on this topic from these sources:
I have worked for a few different companies now, and each had a different attitude to testing front-end code. Let me share several examples with you:
‘Only manual tests, performed by the developer on his/her own’
I would say that this happened most often in mature companies with code written X years ago. These were companies with hundreds of thousands of users, mostly focused on the sales process.
Tracking issues was generally left to the individual. So, this is manual testing with no real procedure in place. It was really time consuming, as you can never be sure that one small change hasn’t affected another important process through the deepest, darkest connections of unknown lines of code. Those companies did have error-monitoring tools (like Sentry.io, Crashlytics or DataDog), which help, but they only reveal existing problems; they are not preventative measures. This strategy also means that the person (or helpline) handling customer issues caused by software errors will have less time for other important tasks.
The reason for not introducing testing as a recognized procedure was either the lack of a person who knew how to write tests, or – more often – a lack of time. It’s like running with an empty wheelbarrow that you don’t have time to fill up.
‘Manual tests + automated End-to-End tests’
In my opinion, this is a much better approach. It ensures that the customer will not encounter any obstacles on the road to completing his/her goals on the webpage. Unfortunately, it does not ensure that all functionalities produce the desired outcome (for example, correctly multiplying and summing up all the products in the basket), but at least the user experience is not harmed by something such as an error-terminated process.
‘Manual tests + unit test for reusable components only’
This approach is also much better than the first one, which had no real test suites. Here, the company decided to write comprehensive unit tests for the component library that was heavily used across its web apps. This ensured that the foundations of all the apps were working correctly. But it did not tell us anything about the user experience or the app’s ability to complete its intended processes.
I would conclude that each company I worked for had a testing phase: either automated combined with manual, or at least manual. It’s a very important process that ensures (to different extents, depending on its complexity) that whatever we deploy into the real world won’t negatively affect customer satisfaction, our reputation and finances, or team collaboration and spirit.
And what are your experiences?