Wednesday, 25 February 2015

"Internal" Bugs

Software is full of issues. Tests explore software and uncover some of these issues. Sometimes we won't write enough tests, whether through time constraints or because a reasonable amount of effort has already been spent. So some of these issues are found by people outside our team, i.e. our customers.

And it's perfectly reasonable for us to provide a bug tracking system for all external stakeholders, so they get feedback on their bug as the fix makes its way through to production.

But why do we record the issues that we find ourselves as bugs?

I've often heard that bugs are actually a good thing! It must mean the customer is using our software. Surely the customer is expecting a few bugs. Right?

Wrong. Bugs are bad. Always bad. Every time a customer finds a bug, our reputation as software developers takes a hit in that customer's eyes. Even if we label the software an Alpha or Beta release just to lower their expectations, they still get a little disappointed when things don't work. Even if we record the bug ourselves before the customer finds it, we still suffer the reputation loss. Bugs are expensive in terms of time and money for the customer, and in effect we are charging the customer extra for what they already expected to get.

Analogy: I employ a cabinet maker to make me a new chair. The apprentice and the master craftsman take the measurements, cut the pieces, carve the design and begin the assembly. At one point they figure out that piece X doesn't fit slot X. Do they ring me to tell me that there is an issue? Do they just fix it up? How about when the chair is delivered and I find that it wobbles? There is no difference for software.

So in software development, why do we make public our own sloppiness in what we deliver? Bugs are one indication of sloppiness.

In a scrum team, we must try to establish a culture of honest craftsmanship. If opening the code for a user story uncovers an issue, we need to fix that issue as a task on the story. We must ship a good quality product. So often this happens right at the end of the sprint, and so often we force ourselves to write the bug but ship the code anyway. When this happens you have to ask: "Why do we ship sh*t?"

As scrum masters, we need to be disciplined and honest with our team and our customers. We are done and ready to ship only when the work is complete and there are no outstanding issues. Otherwise we are not done.

To be true to software craftsmanship and our profession, we need to have zero tolerance for bugs.

Tuesday, 3 February 2015

Delivering too early...

This is a transcript of a conversation between three developers on a real-life project.

Background: Dev 1 releases a product to deployment without integration testing. As soon as it is deployed, the system is broken. The pressure is on to resolve and repair the deployment, so Dev 2 agrees to help Dev 1 troubleshoot with a view to sorting it out. Dev 3 is another senior team member, who takes part in the conversation.

Dev 1: OK, I've seen what the problem is. My merge wasn't completed like I thought it was, and the changes are missing from the master branch. So they're not in the RPM we delivered.
Dev 3: So let's back out the packages that include your changes.
Dev 1: I really want it to stay in.
Dev 3: Is that really a good idea? It's not tested. It's not really professional to keep it in there, is it?
Dev 1: Ya, well it's tested locally, this is valuable feedback for me, and the code is finished. You have to crack a few eggs to make an omelet and all that.
Dev 3: OK. Your call.
Dev 1: One other thing: Dev 4 actually made the update to this component. I'm not entirely sure what he did, but there are 4 unit tests failing now. He never updated the unit tests.
Dev 3: OK, so you are still leaving the component in the build? Are you going to remove the failing tests or fix them up?
Dev 1: Well, I don't know how to fix them up right now. But I'm definitely leaving the RPM in.
Dev 3: So what are you proposing? Commenting out the test cases?
Dev 2: No, we are going to comment out the assert statement at the end of the test case.
Dev 3: WTF? So the tests will just pretend to pass?
Dev 2: We want to keep the coverage value high.
Dev 3: <Insert swear word here> lads, that can't be the right thing to do.
Dev 1 & 2: Code coverage is watched by management; it can't go down!
Dev 3: I'm not sure I want to hear this. Not sure whether believing the lie or knowing the truth is better!! [laughter all around]

We see a lot of dysfunction above. Dev 1 definitely isn't persevering to take full ownership of the product they have assumed responsibility for: they don't know how to fix the tests correctly. Dev 2 has suggested probably the worst possible solution to the failing-tests problem, because he wants to make sure the team's code coverage statistic doesn't decrease while management is watching it, and he has gone about it in a very unprofessional way. Dev 3 might, on the face of it, look like the best guy here, but he didn't have the courage to make sure the right thing was done. Dev 4, who actually made the original change, has ignored valuable test feedback, either by not running the tests or by ignoring the failures.

Ultimately we have three developers who come to the wrong conclusion, despite having the right conversation. There is little excuse for not backing out the package and re-introducing it to the main delivery once all testing, by Dev 1 or Dev 4, is satisfactorily complete. The main track delivery is not the place to be getting initial feedback.

It's interesting that the management code coverage target drives probably the worst possible behavior with unit tests: false positives kept in place just to protect a coverage figure. When coverage is a target, you have to work hard on the culture of your developers.
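
To make the anti-pattern concrete, here is a small sketch of what "commenting out the assert at the end of the test case" looks like. The real component and its tests aren't shown in the conversation, so the InvoiceCalculator class, the 23% VAT figure and the JUnit 5 tests below are invented purely for illustration.

    import org.junit.jupiter.api.Test;
    import static org.junit.jupiter.api.Assertions.assertEquals;

    // Hypothetical component standing in for the one in the conversation.
    class InvoiceCalculator {
        int totalIncludingVat(int net) {
            return net + (net * 23 / 100);   // assume a 23% VAT rate for the example
        }
    }

    class InvoiceCalculatorTest {

        // The anti-pattern: the production code still runs, so the coverage
        // figure is untouched, but with the assert commented out this test
        // can never fail, whatever totalIncludingVat() returns.
        @Test
        void totalIncludesVat_falsePositive() {
            InvoiceCalculator calc = new InvoiceCalculator();
            int total = calc.totalIncludingVat(100);
            // assertEquals(123, total);   // commented out "to keep coverage high"
        }

        // The honest version: the assertion is exactly the feedback being thrown away.
        @Test
        void totalIncludesVat() {
            InvoiceCalculator calc = new InvoiceCalculator();
            assertEquals(123, calc.totalIncludingVat(100));
        }
    }

A line-coverage tool will report exactly the same coverage for both tests, because both execute the production code; only the second one can actually fail and catch a regression like the one Dev 4 introduced.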