Fundamental point: Tests must only exist to test the product/component. All other types of "test" are waste.
The following conversation is based on true events, but it is fictional. However, you will likely find it familiar.
Developer 1 has just contributed to a component and is preparing their work for release by talking with Developer 2, an experienced developer from the team originally responsible for the component. Developer 1 has just run their work through the component's unit test suite.
Developer 1: OK. We've just gone green on the Jenkins jobs, so let's release the component.
Developer 2: Hold on. We can't let that go anywhere without installing that package on a server and running a few bits on the UI.
Developer 1: Why? The tests have just gone green.
Developer 2: Ah, well, those unit tests we only wrote because our manager requires us to have code coverage above 90%.
Developer 1: So our "real" tests are done manually?
Developer 2: Ya, we only find real issues running on a real server. The unit tests always break when we change anything. They don't tell us very much at all.
Developer 1: OK. Where's the test spec? And where can I install the package?
Developer 2: We don't have test specs any more. QA only require us to have a green bar in Jenkins. We only have one real server left, due to cost cutting by stupid managers, so you'll have to wait a few days for Developer 3 to finish troubleshooting that bug.
Developer 1: So you're telling me I have unit tests I can't trust even when they're green, and I have to follow a test spec that doesn't exist before I can release our component? On top of that, I have to wait a few days just to get hold of an environment to run the tests on?
Developer 2: Yes. That's right.
Developer 1: So what's the point of our vast suite of unit tests?
Developer 2: We are one of the highest performing teams in the company. Our test coverage metrics are best in class.
Here we have a pretty typical conversation I've witnessed (or been part of) over the last 10 years, since we started adding automated tests to products. Typically the initial focus is on coverage, mainly because it's a very easy metric to track. Teams get rewarded for hitting coverage targets in a relatively short period of time. Successfully hitting coverage targets across many components and products then drives decisions at higher levels to reduce or divert computing resources elsewhere: the developer time invested in automated testing needs to be recouped in hard currency!
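To make the distinction concrete, here is a minimal, hypothetical sketch in Python. The apply_discount function and its tests are invented for illustration, not taken from any real component. It contrasts tests written to satisfy a coverage target with tests that pin observable behaviour: the coverage-driven tests go green while telling us almost nothing, and break as soon as the implementation is refactored, which is exactly the suite Developer 2 describes.

```python
# Hypothetical example: coverage-driven tests vs behaviour-driven tests.
# Run directly with: python example.py
import unittest
from unittest.mock import patch


def _gold_rate():
    """Internal helper: discount rate for gold customers."""
    return 0.9


def apply_discount(price, customer_type):
    """Public behaviour: gold customers get 10% off, everyone else pays full price."""
    if customer_type == "gold":
        return round(price * _gold_rate(), 2)
    return price


class CoverageDrivenTests(unittest.TestCase):
    """Written to push the coverage number up, not to pin behaviour."""

    def test_runs_without_error(self):
        # Executes every line but asserts nothing about the result:
        # a wrong discount would still go green.
        apply_discount(100.0, "gold")
        apply_discount(100.0, "standard")

    @patch("__main__._gold_rate", return_value=0.9)
    def test_helper_is_called(self, mock_rate):
        # Pins the implementation: inlining _gold_rate() breaks this test
        # even though customers still see exactly the same prices.
        apply_discount(100.0, "gold")
        mock_rate.assert_called_once()


class BehaviourDrivenTests(unittest.TestCase):
    """Asserts what the component promises, so it only fails when behaviour changes."""

    def test_gold_customers_get_ten_percent_off(self):
        self.assertEqual(apply_discount(100.0, "gold"), 90.0)

    def test_other_customers_pay_full_price(self):
        self.assertEqual(apply_discount(100.0, "standard"), 100.0)


if __name__ == "__main__":
    unittest.main()
```

Both suites report the same coverage figure, but only the behaviour-driven one would catch a broken discount calculation or survive a harmless refactor.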
In this scenario we are locked into this waste. We cannot reduce coverage by simply removing the test suite, because the bad test suite is the baseline against which more tests are added to the system. Product quality and time to market suffer, but this is an expected side effect of adding new features to a product with an ageing architecture anyway, so it's just accepted. Ultimately it justifies the rewrite that everyone wants, only for the same mistakes to be repeated on the rewrite.
How does your organisation promote discussion of this scenario? By how much could you reduce time to market if you had a better test suite? What's your answer for reducing the cost of change on a product/component?