Lean coffee this month was hosted by Towers Watson. Our group today was a 50/50 mix of testers and developers, which biased the topics towards automation and made for an interesting change of discussion.
Why do we need to employ testers (when we have good test automation)? A very common question from non-testers.
The question was framed around at which point you add testers to a team/company when your developers have, or are planning to have, solid automated tests.
The discussion quickly broached the checking/testing distinction as a way to define the difference between automation and the testing a person can do. It was also suggested that a good tester adds much more to a project than just the hands-on testing activities. Testers usually have a whole-system view, attention to detail, and an inquisitive nature that lend themselves well to requirements analysis, design review, etc., and that help influence the way in which the product is architected and built.
Something we didn’t mention (time ran out): testers also bring many other facets of testing to the party, such as the parafunctional areas (performance, security, usability…).
Stop writing automated tests identical to manual tests….
This was a discussion about why we often just look to automate existing manual tests, and what other approaches there may be.
It was suggested that the essence of the manual test should be ascertained before deciding whether and what to automate. Consideration needs to be given to whether that test is worth automating at all. However, automating it could open the door to parameterising it, exercising more variables within the same test scope.
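To illustrate that last point, here is a minimal sketch of parameterising a test. The function under test and all the case values are hypothetical, not from the discussion; the point is that one automated check, once written, can sweep a table of input combinations that a manual script would exercise one at a time.

```python
def discount(total, is_member):
    """Hypothetical function under test: 10% off orders over 100,
    plus a flat 5 off for members."""
    d = total * 0.10 if total > 100 else 0.0
    if is_member:
        d += 5.0
    return round(d, 2)

# Each tuple is one parameterised case: (total, is_member, expected_discount).
# Adding a new variable combination is one line, not a whole new manual script.
cases = [
    (50.0, False, 0.0),
    (50.0, True, 5.0),
    (150.0, False, 15.0),
    (150.0, True, 20.0),
]

for total, member, expected in cases:
    assert discount(total, member) == expected, (total, member)
print(f"{len(cases)} parameterised cases passed")
```

A test runner such as pytest offers the same idea natively via `@pytest.mark.parametrize`, which reports each case separately.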
Building in testability and encouraging good development practice (e.g. SOLID) can be big influences on the general quality of a product, thereby reducing the amount of testing drudgery required.
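As a concrete sketch of "building in testability", the example below uses dependency inversion (the "D" in SOLID): the code under test depends on an abstraction rather than a concrete external service, so a check can inject a test double. All names here (`RateSource`, `InvoiceConverter`) are hypothetical, invented purely for illustration.

```python
from abc import ABC, abstractmethod

class RateSource(ABC):
    """Abstraction for an exchange-rate lookup."""
    @abstractmethod
    def rate(self, currency: str) -> float: ...

class FixedRateSource(RateSource):
    """Test double: returns canned rates without any network call."""
    def __init__(self, rates):
        self._rates = rates
    def rate(self, currency):
        return self._rates[currency]

class InvoiceConverter:
    """Depends on the RateSource abstraction, not on a concrete HTTP client,
    so it can be exercised in a fast, deterministic automated check."""
    def __init__(self, source: RateSource):
        self._source = source
    def to_gbp(self, amount: float, currency: str) -> float:
        return round(amount * self._source.rate(currency), 2)

# Because the dependency is injected, the check needs no live service:
converter = InvoiceConverter(FixedRateSource({"USD": 0.80}))
assert converter.to_gbp(125.0, "USD") == 100.0
print("checked without a live rate service")
```

Designs like this shrink the amount of slow, flaky end-to-end checking needed, which is exactly the drudgery-reducing effect mentioned above.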
How much effort to put into automation, as against simply manually retesting?
One tester had to run >1000 test scripts over the course of two months. When do you decide to automate? Weighing the cost of automating tests for a legacy system against the time spent running them manually opened the discussion, followed by the impact such activities have on the wellbeing of the tester involved.
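That cost-weighing can be sketched as simple break-even arithmetic. The numbers below are illustrative assumptions, not figures from the discussion: a per-run manual cost, a one-off automation cost, and a small per-run maintenance cost for the automated version.

```python
# Back-of-envelope break-even sketch for "automate vs keep running manually".
# All figures are assumed for illustration only.
manual_minutes_per_run = 15      # time to execute one script by hand
automation_cost_minutes = 240    # one-off cost to automate that script
maintenance_per_run = 1          # upkeep cost per automated run

def runs_to_break_even(manual, build, upkeep):
    """Smallest run count at which cumulative automation cost
    no longer exceeds cumulative manual cost."""
    runs = 0
    while build + upkeep * runs > manual * runs:
        runs += 1
    return runs

print(runs_to_break_even(manual_minutes_per_run,
                         automation_cost_minutes,
                         maintenance_per_run))   # -> 18
```

Under these assumed numbers the automation pays for itself after 18 runs; with >1000 script executions on the calendar, even a rough model like this makes the trade-off visible.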
It was suggested that different approaches could be tried to make the task more interesting and fruitful. Combining user stories with session-based testing, creating charters that define small, timeboxed test sessions, could make the testing more interesting and allow a more exploratory approach. Recording the actions taken during an exploration session could then provide traceability and reproducibility without having to define the scripts up front.
It was a very interesting session, especially with the increased developer presence. Test automation is often something that the test community shuns in favour of “Real Testing(™)” such as exploratory testing. However, it’s a very misunderstood paradigm, which often leads to confusion over the value of professional testers. The flip side to this is that automated testing/checking can be a very valuable ally to testers, but can also be a massive time/resource sinkhole if done improperly.