Test smart: how to deal with biases around testing?
It is time to be more vocal about the profession’s pain points and shed light on the testing craft.
In the fast-moving IT world, it is vital not to lose your compass when drifting into the ocean of opportunities. The hype around automation and related tools pushes us to rethink the role of testers in digital product development. Biases around testing shape how testing professionals are perceived in tech companies, a perception sometimes reduced to “regression test maintainer” or “bug detector”. The same stereotypes likely affect testers’ self-perception and create the feeling of being undervalued and, as a result, underpaid.
However, in agile teams, the Tester’s job is not just applying various testing techniques but much more: analysing user stories, reviewing prototypes, scripting test cases, monitoring the system’s behaviour, researching user feedback, maintaining tools for testing and test management, and so on.
At the same time, the existing biases can oversimplify the tester’s role. I see testers actively sharing their concerns within the professional community; however, far fewer of these conversations circulate outside this bubble, and that needs to change. In my experience, a few pain points affect the modern Tester: a cult of automation, praising tools over testing strategy, and inappropriate professional labelling.
Cult of automation
“What is your experience with automation?” That was the hiring manager’s question. When it is the first question asked, it suggests the company is looking for a tester who will automate the pile of tests that no one has touched before. However, there are a few issues to be aware of before any team rushes into automation.

Firstly, there is an unrealistic belief that automating repetitive (regression) tests is a remedy for all quality pain points. Let’s imagine that we have automated the majority of our current repetitive tests. Cool! Let’s exhale deeply. Yet if we think we now have more time for meditation, that is far from reality. What about the tests that will be added as new features are introduced? What about the tests that will change as soon as some feature gets updated? What about the tests that cannot be automated at all? A team will need a person to orchestrate all of this, or even a few people if the scale of the product is impressive.
Secondly, companies sometimes don’t even realise how expensive automation efforts are. Writing the scripts in a tool such as Cypress or Playwright is one thing: there are many hours of coding work behind the scenes. Maintaining those automated tests and keeping them in line with every product update is another (meaning another bunch of hours to “repair” previously written test code). Hopefully, this will change as AI-driven tools mature. However, not every company has picked up AI tools for its testing efforts. Some prefer to keep rolling down the hill with the tools their teams have been using for years (hello, old buddy Cypress).
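To make that maintenance cost tangible, here is a minimal Playwright sketch in TypeScript. The app URL, field labels, and headings are invented for illustration; the point is that a script like this is quick to write once, while every renamed label, reworded button, or changed flow forces someone to come back and repair it.

```typescript
import { test, expect } from '@playwright/test';

// Hypothetical login check for an imaginary shop, written against today's UI.
test('registered user can log in', async ({ page }) => {
  await page.goto('https://example-shop.test/login');

  // Each locator below is coupled to the current labels and copy;
  // a UI tweak as small as renaming "Log in" breaks the test.
  await page.getByLabel('Email').fill('user@example.com');
  await page.getByLabel('Password').fill('s3cret!');
  await page.getByRole('button', { name: 'Log in' }).click();

  // The assertion depends on today's wording of the welcome heading.
  await expect(page.getByRole('heading', { name: 'Welcome back' })).toBeVisible();
});
```

Multiply that by a few hundred tests and a steady release cadence, and those “repair” hours become a permanent line item in the team’s budget.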
Thirdly, once we have a new batch of features that must be tested by tomorrow (or rather, yesterday), automation will hardly be our priority. Very often, automation isn’t as glamorous as it sounds. We’ll test new features with exploratory techniques in the first round because automating the tests is time-consuming. Even without burning deadlines, we still need to explore a feature before automating its tests in any specific tool. And that requires a good deal of time and effort from QA Engineers, on top of the automation itself.
It would be better if the mentality of hiring companies shifted towards a more holistic view of testing roles. A similar change is needed in the tech community: teams should acknowledge the constraints of automation and carefully analyse their current state and resources before starting any automation project. Because automating tests is always a project!
Praising tools over testing strategy
From recent job interviews, I’ve learnt that people like to talk about the tools and frameworks they use. Perhaps we tend to treat learning how to apply this or that tool as an achievement in itself. But what about the actual testing strategy? Is there a strategy the team follows when using these tools? Do we apply techniques randomly, or only on demand from the client (say, a request for a specific type of testing, such as security testing)?
Tools and frameworks are helpful when we know how to use them within a testing strategy. It’s another (sad) story if we don’t have a testing strategy for our product. We can adopt trendy tools as nice-to-haves, yet do we know where we are heading when we apply this or that tool? What are our team’s actual testing needs?
So, an overall testing strategy that includes a set of techniques and tools is an essential asset for every team; it is what testing should be based on. Just as a grammar book gives language learners a set of rules, a testing strategy lays out the system of testing methods and tools. But who should be responsible for it?
To make this more efficient, Wayne Roseberry suggests assigning a test owner: “The test owner describes the test approach in the test strategy. The team will execute on that approach as agreed by the team.” To me, it is a brilliant idea to have a person who initiates the discussion and keeps an eye on the team’s testing strategy.
And the best profile for this task is, of course, a Tester! However, if the team does not have one, the role should be delegated to a team member who is familiar with testing fundamentals and strategic planning. It is not an easy pair of shoes to wear, though.

Sometimes it is smart to stop and think about the current situation. We may write our testing strategy down or sketch it out. To check where we stand, it is handy to apply the Agile Testing Quadrants, popularised by Lisa Crispin and Janet Gregory (building on Brian Marick’s original matrix). The quadrants map testing activities along two axes: technology-facing versus business-facing, and supporting the team versus critiquing the product. By discussing them with the team, we’ll see whether we are moving in the right direction and fill the gaps in our own testing strategy.
Inappropriate labelling
As humans, we love to put labels on the people around us. Labels make it easier to explain the world, but they can also narrow our perspective on some subjects. I believe testing is undervalued today because of prejudices caused by wrong labelling.
Adding the adjective “manual” or “automated” to a job title can oversimplify the role. For instance, a manual QA could stereotypically be perceived as a low-skilled professional who performs manual checks at the end of the development process, a kind of factory worker. An automation specialist might be misread as a tester and developer two-in-one, and it is unclear whether this person only codes repetitive tests or does something more.
Either way, the “manual” and “automated” labels are shortcuts to a distorted perception of the software tester’s role, which is far wider than quality checks on finished digital products. So let’s get to the root of the problem.
Michael Bolton captures it brilliantly: “The false and unhelpful idea that testing can be automated prompts the division of testing into ‘manual testing’ and ‘automated testing’.” I agree. We cannot “automate manual testing away”, as some voices in the tech community suggest. Testing is a human-driven activity by default.
Rudolf Groetz makes a great comparison to illustrate the irrelevance of the debate around manual versus automated testing: “Let me just say it upfront: the whole debate about “Manual Testing vs Automated Tests” doesn’t make sense. Comparing these is like comparing a hammer to a screwdriver — they’re tools for different jobs.”
Jeff Nyman argues that the division between “manual” and “automated” testing is a consequence of the weak community reaction to corporate trends: “Testers have abdicated the discussion to others — mostly developers — on the ambit of testing.”
Indeed, we do not often hear about “manual” and “automated” developers. Or do we? 🙂
Overall, I agree it is the testers’ responsibility to use proper terminology. Dividing testing into manual (human-driven) and automated is pointless, however trendy it may be in the tech community. You might still run into a profile labelled “Automation Specialist” or “Manual Tester”. So maybe we, testers, need to start with ourselves and be the change we’d like to see in our craft.
To sum up, as testing professionals we could be more vocal, share our thoughts, and steer the discussion in the tech community in the right direction: towards a more holistic approach that does not split testing into “manual” and “automated”. Testers should not only advocate for product quality within their companies but also share knowledge about a profession that is, in some cases, underestimated and misunderstood. For teams, it is smart to apply automation where it is applicable and relevant, think creatively about the overall testing strategy, and be cautious with labels. And hug your Tester if you are lucky enough to have one on the team! Testers are a rare species.
Feel free to check my LinkedIn page if you would like to connect or are curious about my background. As a QA Engineer with over 7 years of commercial experience in the industry, I’m happy to talk with teams looking for guidance and help in enhancing product quality and testing. Right now, I’m looking for a new role as a QA Analyst, QA Engineer or QA Lead.
Illustrations: by me (Apple Pencil, iPad, and no AI 🙂)
Resources:
- Iryna Suprun, No Time To Test and No Time To Automate: https://iryna-suprun.medium.com/no-time-to-test-and-no-time-to-automate-306e0b4cedc6
- Michael Bolton, Alternatives to “Manual Testing”: Experiential, Interactive, Exploratory: https://developsense.com/blog/2021/08/alternatives-to-manual-testing-experiential-attended-exploratory
- Rudolf Groetz, Manual Testing vs Automated Testing? That’s Missing the Point: https://www.linkedin.com/pulse/manual-testing-vs-automated-thats-missing-point-rudolf-groetz-i7zyf/
- Jeff Nyman, A History of Automated Testing: https://testerstories.com/2023/01/a-history-of-automated-testing/