Automatic headlights are now commonplace on cars. Usually driven by a light sensor on the dash, they remove the need for the driver to operate the headlight switch manually, other than to set it to ‘Auto’.
I have them on my Honda Civic (great little car!), and whilst I was skeptical about them to start with, it is very much a fire-and-forget feature now. When it is dark or approaching darkness, they turn on; when it’s light, they are off.
Daytime running lights (DRLs) are another addition to most modern cars, and a legal requirement for all new ‘types’ of car since February 2011. These are typically implemented as bright LED units in or around the car headlights. One notable exception is Volvo, which (due to regulations in Scandinavian countries) meets the requirement by having the standard headlights on permanently on most models.
For the most part, I love the auto-lights on my car. I know that during the daytime, my LEDs on the front are giving me additional road presence, and I’ve come to trust that the headlights will activate in low-light conditions.
BUT… As I was travelling into work this morning, driving through the early morning fog with visibility at about 50 metres, it occurred to me that although my DRLs were blazing away, none of my rear lights were on, effectively making me invisible from behind. I switched my lights on.
This got me wondering for the rest of the journey whether a large proportion of drivers who fail to have their lights on are not irresponsible idiots, but have simply become complacent because they expect their car to switch their lights on for them. Perhaps they do not realise that in fog, while visibility is very low, the ambient light level is still fairly high, and will therefore not trigger the auto-lights.
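The visibility-versus-ambient-light distinction can be sketched in code. This is a hypothetical model, not how any real car works: the threshold value and function name are invented for illustration. The point is that the sensor reads only a light level, so fog, which ruins visibility without much dimming the daylight, never trips it.

```python
# Hypothetical sketch of an auto-headlight controller. The ONLY input
# is an ambient-light reading, so conditions like fog are invisible to it.

LUX_THRESHOLD = 1000  # made-up threshold: below this, switch the lights on


def auto_lights_on(ambient_lux: float) -> bool:
    """Decide the headlights purely from ambient light, as auto-lights do."""
    return ambient_lux < LUX_THRESHOLD


# Dusk: low ambient light, so the sensor triggers as expected.
assert auto_lights_on(ambient_lux=200) is True

# Morning fog: visibility ~50 metres, but daylight keeps the lux reading
# high, so the lights stay off -- exactly the situation described above.
assert auto_lights_on(ambient_lux=5000) is False
```

The sketch has no notion of visibility at all, which is precisely the gap the fog exposes.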
Has technology lulled us into a false sense of security that we don’t need to think about headlights anymore? Or has it merely shifted how that thinking happens and made our choices more complex?
You’re probably thinking “that’s interesting, but what the hell does it have to do with testing?”… ok, fair question, but bear with me!
Replace the term ‘automatic lights’ with ‘automated checks’, replace ‘car’ with the name of your product, and then apply the analogy.
Your product has automated checks running against it in a CI environment. The checks fail red when they detect a bug, and pass when they do not. The checks are simply an oracle of expected behaviours for a set of specific situations.
Consider the foggy situation: the situation the automated checks haven’t been coded to handle. What will they do in that case? Chances are they won’t do anything. You might get lucky and one of them will fail because of a side effect of the unhandled situation, but what if nothing happens and the checks all continue to pass? On the face of it, everything is green and the build looks good, but it’s only a matter of time before a user encounters the issue. Returning to the car analogy, it’s only a matter of time before a lorry comes and smacks you up the backside because your lights aren’t on.
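To make that concrete, here is a minimal sketch of a green build hiding a bug. The product code and check names are all hypothetical; the bug is that the rear lights follow the same ambient-light rule, so daytime fog leaves them off, and no check was ever written for that scenario.

```python
# Hypothetical product code: which lights are on for a given light reading?


def lights_state(ambient_lux: float) -> dict:
    """Return the state of each set of lights."""
    dark = ambient_lux < 1000  # made-up threshold
    return {
        "drl": True,            # daytime running lights: always on
        "headlights": dark,
        "rear_lights": dark,    # bug: daytime fog never lowers the lux reading
    }


# The automated checks -- they encode only the scenarios someone thought of.
def check_dark_conditions():
    state = lights_state(ambient_lux=200)
    assert state["headlights"] and state["rear_lights"]


def check_daylight():
    state = lights_state(ambient_lux=20000)
    assert state["drl"] and not state["headlights"]


# Both checks pass and the build is green, yet the "foggy daytime" case
# (high lux, low visibility, rear lights off) sails through untested.
check_dark_conditions()
check_daylight()
```

Nothing here is wrong by the checks’ own definition; the gap is in the scenarios nobody coded.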
Of course, the analogy had to start breaking down (excuse the pun) at some point, and I think this is it. The car lights have an override: in flow-chart terms, if the auto-lights haven’t switched on and they need to be on, you can manually override them and switch them on permanently. Job done.
Product development is not as cut and dried. There isn’t a simple override for the automated checks. But that’s not the point. The point I’m trying to make is how easy it is to become complacent and overly reliant on automation. It certainly has its place, but it needs to be a part of the testing process, not the whole process. There is a great set of articles by Michael Bolton and James Bach on defining Testing and Checking.
I’m a pretty technical guy, so I understand that the automatic lights on my car rely on a little light sensor, and that is the only factor used to determine when to switch the lights on or off. But consider the non-savvy driver, the one who just uses a car to get from A to B and doesn’t know their dipstick from their gearstick. They may not appreciate the conditions under which the automatic lights work; because they are ‘automatic lights’, they will simply expect them to work and will be bemused when they do not.
The same thing applies to automated checks. To those who are not test-savvy, “automated tests” are the same tests done by people, just automated (and I use the word ‘test’ deliberately); they are the answer to everything, and surely the product is good to release if the “tests” are all green. What they fail to understand is that those “tests” are merely checks, and whilst they are useful for checking that, under specific scenarios, specific areas of the software given specific inputs match the specified expected outputs, that is all they will ever do.
This article isn’t intended to be another test vs check debate, but it is one angle to consider.
The other day I learnt about a company that was looking to introduce test automation in order to speed up a release process currently carried out by many testers running scripts. That’s fine. However, I then went on to learn that these testers would be distributed to other roles or moved on. That’s not fine; in fact, that’s a perfect example of the misunderstanding of what testing is. It’s almost the same as having automatic lights on your car with no way to manually override them. Well, OK, it’s not really the same, but I had to tie in the analogy somehow!
Just because automated checks are present does not mean testers become redundant. In fact (and to hijack a quote from Obi-Wan Kenobi), “if you stop testers doing boring scripted testing, they will become more powerful than you could ever imagine”.
In conclusion, my experience with the lights on my car reminded me not to become too reliant on my automated checks. Sure, they are great under the right conditions, but know their limitations, and make sure others know of those limitations too. Despite them, automated checks can be a tester’s best friend by taking on the burden of repetitive regression tests, thereby allowing the tester to unleash their skills in more valuable testing activities. Of course, there is an argument about how valuable automated regression checks really are, but that’s an argument for another day.