Tuesday, 17 April 2018

Code review 7 - Code Reviewer's Checklist

So you've just agreed to be a code reviewer for another developer's work. What should you check is available before the review can go ahead?

  1. The problem specification
  2. Any supporting documentation or knowledge
  3. Test reports
  4. Clearly outlined key tests
  5. Static analysis reports
  6. Source code or artefact ready for review, committed to source control

The author should provide these clearly to the reviewer. Note they do not all apply to all situations and all artefacts, so a pragmatic approach needs to be taken. It is important not to waste the reviewers' time by making them trawl through large amounts of information. The author must make it easy for the reviewer to provide feedback quickly. This might mean supplying documents or web pages, or holding a short whiteboard session between the author and the reviewers to set the context for the review.

The Problem Specification

Why does this artefact/change exist? What problem is it supposed to solve? This is the primary question we should answer. Obviously we are also looking for things like code readability and code quality, but does it actually do what it's supposed to do?

Supporting knowledge or documentation

What's the context of the artefact that is up for review? The quickest way to transfer this to the reviewer may be a quick face-to-face session.

Test Reports

Does this change or artefact break existing functionality? Regression test reports should show that the changes do not impact legacy functionality.

Clear Tests

Take a look at the tests that accompany the artefact/change. Do they conform to industry standards like Given-When-Then? Do the tests reflect the language of the problem? Are they correctly coupled? Are they good tests? A sketch of what a Given-When-Then style test might look like follows below.
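To make this concrete, here is a minimal sketch of a Given-When-Then style unit test. It is purely illustrative: the Basket class and the behaviour it tests are hypothetical, not something from a real review, and JUnit 4 is assumed.

import static org.junit.Assert.assertEquals;

import java.util.ArrayList;
import java.util.List;
import org.junit.Test;

public class BasketTest {

    // Minimal hypothetical domain class, included only so the example is self-contained.
    static class Basket {
        private final List<String> items = new ArrayList<>();
        void add(String item) { items.add(item); }
        int itemCount() { return items.size(); }
    }

    @Test
    public void givenAnEmptyBasket_whenAnItemIsAdded_thenTheBasketContainsOneItem() {
        // Given: an empty basket
        Basket basket = new Basket();

        // When: a single item is added
        basket.add("book");

        // Then: the basket contains exactly one item
        assertEquals(1, basket.itemCount());
    }
}

The test name and the comments speak the language of the problem, which is exactly what a reviewer should be able to check at a glance.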

Static analysis

There are some fantastic tools out there like PMD, FindBugs and Sonar. Is your project using any of them? Has the artefact for review passed all static analysis checks? For any failures, are there good, objective reasons why we are violating the rule?
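Where a rule is deliberately violated, the objective reason should be recorded right beside the suppression so the reviewer can judge it. A hedged sketch, assuming PMD's standard @SuppressWarnings("PMD.RuleName") mechanism; the class and scenario are hypothetical.

public class LegacyBootstrapAdapter {

    // PMD's AvoidPrintStackTrace rule is deliberately violated here.
    // Objective reason, visible to the reviewer: this code runs before any
    // logging framework has been initialised, so printStackTrace is the only
    // output channel available.
    @SuppressWarnings("PMD.AvoidPrintStackTrace")
    public void start() {
        try {
            Class.forName("com.example.legacy.Bootstrap"); // hypothetical legacy entry point
        } catch (ClassNotFoundException e) {
            e.printStackTrace();
        }
    }
}

An undocumented suppression, by contrast, is exactly the kind of thing the reviewer should push back on.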

Committed into source control

There are few things more annoying than code passed around via email, Slack or instant chat. This is mainly because it rapidly becomes impossible to keep track of what that code currently does. One changed line can have a profound impact on the result. There's just no debate about it: use a source control system. Git combined with Google's Gerrit system is very good.

Monday, 18 December 2017

Observations of software development

These are some of my observations of various software development teams that I've been part of or seen working over the past 18 or so years.

  1. The opportunity cost of doing the current best practice well prevents us from moving quickly to a newer, better practice. Mastery vs. agility.
  2. Existing culture is the prevailing limiter of change in mindset.
  3. Using the construction industry model for software development, while useful for scaling, imposes a tax that makes it harder to get value from software in the short and medium term.
  4. We are teaching programming by getting people to write code, instead of getting them to read code. This is the inverse of how we teach other languages.
  5. Our industry lacks a great "masterpiece" of design and code. Design patterns are a start, but not comprehensive enough.
  6. When groups and organisations are transforming their ways of working and something fails, instead of correcting the failure they fall back on what worked in their old way of working, even if that prevents the transformation from completing.
  7. Engineers will change tools more quickly than they will change behaviour with the tool. In fact they will carry existing "mastery" into the new tool, unless the new tool explicitly prohibits it somehow.

Thursday, 14 December 2017

Team Trotters

Let's explore some of the good and bad qualities of a well-formed team by debriefing a video of a team in action.

The team in question have been working together for many years. Have a look at the video, a few times if you need to. Then take 5 minutes to think about what the team does well. Now take 5 minutes to think about what they do badly.

Video

While the resulting work of the team turns out to be a funny disaster, this team demonstrates some good and bad attributes of a well-formed and performing team.

Good team attributes

  1. Knowledgeable
  2. Obvious leader
  3. Had enough people to do the job
  4. Delegation
  5. Defined roles
  6. Stronger men in the "right" place
  7. Confidence
  8. Right tools for the job
  9. Succeeded in getting the job in the first place
  10. Displayed expertise
  11. Trusted each other
  12. "good" communication
  13. Motivated - money, prestige
  14. Courage to take on the job
  15. One, very clear goal

Bad team attributes

  1. All planning was in Del's head
  2. Del's assumptions
  3. Other team members' reliance on Del's thinking
  4. Del's leadership style - dismissive and belittling of the butler
  5. Hadn't actually done it before
  6. No real energy
  7. Hierarchical team
  8. Members didn't share the leader's motivations
  9. No continuous improvement
  10. Del always compensated for others' lack of competence
  11. Del didn't realise his own limitations
  12. Poor communication - all talk, very little listening
  13. Complacency - members are overly familiar with each other
  14. Poor result - they break the chandelier

Reflect

Can you recognise any of these attributes in your team? What are you doing well? What aspects of your team could you improve?

Wednesday, 22 November 2017

A format for Release Planning

Summary

Output: An estimated backlog. A likely release date for the plan.

Overview: Start by creating understanding and clarifying the big picture with everyone. Use large macro sizing to roughly size everything. Refine these sizes by estimating a representative selection in detail. Allocate stories to future sprints using historical velocities as a predictor of future performance.

In this article I will outline a format for release planning that I have successfully used on a number of occasions.

Release Planning Day Agenda

The agenda is what the people in the room see. Focus on the results of each part of the day: at the start, people want to know what they will achieve, not how they will achieve it.
  1. Introduction
  2. Vision
  3. Estimation - First pass
  4. Estimation - second pass
  5. Backlog prioritisation
  6. Release plan - first pass
  7. Review and agree
  8. Retrospective

Release Planning Plan

Release planning is a necessary part of scaled software development. The customer might take your software every sprint, but they want a clear indication about how much you will deliver over the next 6 to 12 months. You will probably find that sprint planning and backlog grooming alone won't scale to this length of time.

Output

  • A fully estimated and refined backlog
  • A likely release plan
  • Identified Risks, Issues, Assumptions and Dependencies (RAID), and what to do about each.
  • A better understanding of what is required by the team in the medium term

Preparation

Required people 

  1. Whole team including SM x 2. This ceremony will scale up to many teams, but be mindful of venue capacity.
  2. Product Owner
  3. Project Manager
  4. Facilitator x 2. I have found it useful to use pair facilitators.

Required inputs

  1. A Vision
    Prepared by the PO. What would excite our users? What would delight our customers?
  2. Averaged velocities and predictability metrics.
    Prepared by scrum masters.
  3. Drop Plan
    Sprint names and dates. Prepared by the Project Manager, Scrum of Scrums or Product Owner.
  4. Printed out backlog
    Broken down by the product owner as best they can. Each story on an A4 sheet. (Product Owner or Scrum Master)
  5. A1 flipchart paper
    Facilitator or scrum master should bring this
  6. Postits
    Facilitator or scrum master should bring these
  7. Markers
    Facilitator or scrum master should bring these
  8. Pens
    Facilitator or scrum master should bring these

Room layout

  1. A large round table per team
  2. Facilitator table
  3. Projector
  4. Lots of Wall space
  5. 1-2 flipchart stands per team

Day plan

For the facilitators: this is how you will achieve your objectives in a step-by-step way.
09:00-09:15  Arrive at venue and arrange it. Scones and fruit for warm-up. People mingle, chat and get into an open frame of mind.
09:15-09:45  Intro, set the scene. Two truths, one lie. Set the objective for the day. Talk about estimation at a high level and how we will do it. Talk about commitment. Set the ground rules.
09:45-10:00  Product owner presents the vision.
10:00-10:30  First pass on the backlog. Read each story. Clarify doubts, document RAID items.
10:30-10:45  Tea break, fresh air.
10:45-11:15  Read the backlog. Talk to the PO. Write any clarifications directly on the A4 pages. The output of this phase is a clarified backlog that everyone has read.
11:15-12:30  Size the backlog using relative estimation. Take each story and compare it to the ones on the table: smaller stories to the left, larger stories to the right; if it is approximately the same size, place it on an existing story. The output of this phase is a number of piles of stories, each pile approximately the same size.
12:30-13:15  Lunch. Fresh air.
13:15-13:30  Reset exercise.
13:30-13:45  Review the piles of stories and the RAID items.
13:45-15:00  Select a story from each pile. Story point these 5-8 stories using planning poker or whatever method the team is used to. Points are cascaded to similar sized stories by writing them on the A4 pages. We now have an estimated backlog.
15:00-15:15  Tea break. Fresh air.
15:15-16:00  Scrum master presents the team's sprint history: previous velocity and predictability. This will be used for release planning. Review the current Definition of Done. Product owners re-organise the piles of stories into priority order, highest priority stories at the top.
16:00-16:45  Using A1 sheets to represent sprints, allocate the prioritised stories according to the sprint capacities that history suggests (see the sketch after this schedule). Discuss RAID items. Plan prudently. Note any dependencies. Stories should be placed in the sprint where they will finish; if they must start in an earlier sprint, note that in writing on the story.
16:45-17:15  Stand back and review the plan. Talk to the project manager and PO. Ask: can we commit to this plan? Is this a likely plan or a committed plan?
17:15-17:30  Wrap up. Thank everyone. Make sure POs and SMs take the stories with them and note which sprint each item lands in. Tidy up the room.
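To make the capacity arithmetic of the last planning step concrete, here is a minimal sketch of allocating a prioritised, estimated backlog into sprints using an average historical velocity. The velocity, story names and points are hypothetical, purely for illustration; in the room this is done physically with A1 sheets and the A4 story pages.

import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class ReleasePlanSketch {

    public static void main(String[] args) {
        // Hypothetical prioritised backlog: story name -> story points (insertion order = priority).
        Map<String, Integer> backlog = new LinkedHashMap<>();
        backlog.put("Login with SSO", 8);
        backlog.put("Export report as PDF", 5);
        backlog.put("Audit trail", 13);
        backlog.put("Bulk user import", 8);
        backlog.put("Dashboard widgets", 5);

        int averageVelocity = 20; // points per sprint, taken from the team's history

        // Walk the backlog in priority order, filling each sprint up to the
        // historical velocity before opening the next one.
        List<List<String>> sprints = new ArrayList<>();
        List<String> current = new ArrayList<>();
        int used = 0;
        for (Map.Entry<String, Integer> story : backlog.entrySet()) {
            if (used + story.getValue() > averageVelocity && !current.isEmpty()) {
                sprints.add(current);
                current = new ArrayList<>();
                used = 0;
            }
            current.add(story.getKey());
            used += story.getValue();
        }
        if (!current.isEmpty()) {
            sprints.add(current);
        }

        for (int i = 0; i < sprints.size(); i++) {
            System.out.println("Sprint " + (i + 1) + ": " + sprints.get(i));
        }
    }
}

The sketch only shows the arithmetic; the conversation about RAID items, dependencies and prudence is what makes the resulting plan worth committing to.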

Friday, 10 November 2017

Testing definitions

There are lots of test terms bandied about. I find there is general inconsistency among software development practitioners in what each one means. Here are my definitions of the different types of test. I have tried to establish simple, clear definitions for each test term.

Manual test
One that is executed by a human against the target system.

Automated test
One that is run using some sort of scripting, with the scripting kicked off by a continuous integration system.

Unit test
A single test case where the target piece of functionality under test is running in a contained or mocked environment. Automation is implied.
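For illustration, a hedged sketch of what this definition looks like in practice: the collaborator is mocked so the code under test runs in a contained environment. The PriceService and ExchangeRateClient names are hypothetical, and JUnit 4 plus Mockito are assumed to be available.

import static org.junit.Assert.assertEquals;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

import org.junit.Test;

public class PriceServiceTest {

    // Hypothetical collaborator that would normally call a remote system.
    interface ExchangeRateClient {
        double rateFor(String currency);
    }

    // Hypothetical class under test, included so the example is self-contained.
    static class PriceService {
        private final ExchangeRateClient client;
        PriceService(ExchangeRateClient client) { this.client = client; }
        double priceIn(String currency, double euroAmount) {
            return euroAmount * client.rateFor(currency);
        }
    }

    @Test
    public void convertsUsingTheMockedExchangeRate() {
        // The dependency is mocked, so the test runs in a contained environment.
        ExchangeRateClient client = mock(ExchangeRateClient.class);
        when(client.rateFor("USD")).thenReturn(1.10);

        PriceService service = new PriceService(client);

        assertEquals(11.0, service.priceIn("USD", 10.0), 0.0001);
    }
}

If the ExchangeRateClient were a live instance of the real remote service instead of a mock, the same test would fall under the basic integration test definition below.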

Basic Integration test
The target code runs in a "basic" environment that includes live instances of all its immediate dependencies such as databases, file systems, network I/O, messaging buses etc. Automation is implied.

Integration test
A test case that runs against a software entity, where that entity is running on the actual environment it is meant to run on in production. An integration test may be automated or manual.

Acceptance test
These are tests that are understood by the user of the output of the system. That user could be human or machine.

White box test
A white box test is one that is written by someone with full knowledge of, and access to, the solution's source code and architecture.

Black box test
A black box test is one that is written by someone who has no, or limited, knowledge of the solution's source code and architecture.

Test Spec or Test Specification
A list of tests. It should specify the use case, the inputs and the expected behaviour or output. For automated tests, the specification is a description of the test function.

Test suite
A collection of automated tests. Should be equivalent to a test specification.

Test Report
A summary of the output of running a test suite or executing a test specification.

Saturday, 21 October 2017

What's released trumps what's documented

What is released and is working in the live system, is what counts. Not what's written down.

A while back I had to do some Fault Slippage Analysis on a bug in a high-level component, ComponentA, in an application stack. A specific use case in our application had stopped working at a customer site after a delivery. Complicating the troubleshooting was the fact that there had been no delivery of ComponentA, where the symptom of the problem was presenting itself.

The root cause of the bug was a lower-level component, ComponentB, tightening constraints on a method call that ComponentA was using. Sure enough, the javadoc of ComponentB had indicated not to use the method in this way. However, the method had worked fine up to the point of adding the constraint. There wasn't a good reason to enforce the constraint - it was a theoretical, artificial limit. The ComponentB developer had changed the code while fixing another problem in the same file.

There was a long argument between the two teams that handled each component. Eventually the system architect made the call for corrective action. ComponentA had to change because it had violated what had been documented in ComponentB.

Ultimately this added considerably to the lead time for the customer/user.

It is my view that the incorrect call was made. The correct call was to revert the change in the delivered component.

I arrive at this view by applying the following guidelines:

  1. Interfaces don't terminate dependencies
    We also need to consider the run-time behaviour of a component behind its interface. This means that tightening constraints is a backwards incompatible change (see the sketch after this list).
  2. Defer the released implementation to the latest moment
    To be as correct as possible, you should implement as late as possible, but no later. At that point you have as much information as possible to hand to avoid unforeseen changes. In this case the provider of the method didn't have a good understanding of how clients were using the released method, or of the effects their changes would have on client use cases.
  3. Open/Closed principle on components
    Open for extension, closed to change. Give priority to working software at the customer side, even if the opposite was stated in the API documentation.
  4. YAGNI
    Don't write or publish your interface until there is a client to use it, and then it's the client who dictates how it changes. In this case, when the original method was introduced, it was needed for a future release. However, clients were forced to use it right away to run a constraints check. It was written too early for the providing ComponentB to work out all the detail and tests, thereby creating risk for calling components.
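As an illustration of guideline 1, here is a minimal sketch of how tightening a constraint breaks clients even though the published interface is unchanged. The method, values and behaviour are hypothetical and are not taken from the real ComponentA or ComponentB.

public class ComponentBSketch {

    // Release 1 (hypothetical): any non-negative timeout was accepted,
    // and callers such as ComponentA relied on passing 0 to mean "no timeout".
    public static long waitForReplyV1(long timeoutMillis) {
        if (timeoutMillis < 0) {
            throw new IllegalArgumentException("timeout must be >= 0");
        }
        return timeoutMillis;
    }

    // Release 2 (hypothetical): the constraint is tightened to "strictly positive".
    // The method signature is identical, yet every existing caller passing 0
    // now fails at run time - a backwards incompatible change hidden behind
    // an unchanged interface.
    public static long waitForReplyV2(long timeoutMillis) {
        if (timeoutMillis <= 0) {
            throw new IllegalArgumentException("timeout must be > 0");
        }
        return timeoutMillis;
    }

    public static void main(String[] args) {
        System.out.println("Release 1 accepts 0: " + waitForReplyV1(0));
        try {
            waitForReplyV2(0);
        } catch (IllegalArgumentException e) {
            System.out.println("Release 2 rejects the same call: " + e.getMessage());
        }
    }
}

Nothing in the compiler or the interface warns the client; only the released, running behaviour reveals the break, which is why what's released trumps what's documented.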

The benefits of Pair Programming

In this post I will outline some of the benefits I have witnessed of Pairing. Pairing is not limited to just programming and can be applied to creation of any artefact or task, for example an email, a presentation, a document, a course, etc. The challenge with pairing is always trying to get on the same wavelength as your collaborator.

Common benefits

  1. Output
    The quality of the product produced, in terms of both subjective and objective measures, has been better.

Benefits for the person

  1. Fun
    It's more fun and more social, once you are used to it.
  2. Focus
    There are fewer distractions from outsiders pulling you away from the task at hand.
  3. Work-life balance
    Work more set hours in conjunction with your teammates, and do less overtime.
  4. Feedback
    Constant stream of feedback from different sources. We are less likely to stray from the problem or to over deliver.
  5. Rapid up-skilling
    Knowledge transfer is super fast. Learn by doing.
  6. Professional Freedom/Mobility
    Since others have learned from you, you can move on to newer, more interesting work because your excellent performance hasn't locked you into your current project.

Benefits for the organisation

  1. Cost Neutral in short term
    Work is completed faster than if each individual worked on separate tasks. Almost twice as fast.
  2. Quality cost savings in longer term
    Bugs are avoided and a lower defect rate is injected into the product.
  3. Morale
    Once a critical mass of pro-pairing people is achieved, we have better morale and a happier workforce.
  4. Knowledge Transfer
    People learn by doing with the experts. It beats courses, books, conferences and videos for effective learning.
  5. Better "Bus-Factor"
    More people know how to do more of the tasks in a team.
  6. Natural enforcement of attendance policy
    There is a peer to peer commitment to work common hours.

Saturday, 23 September 2017

"I need my unit tests for code coverage..."

Fundamental point: Tests must only exist to test the product/component. All other types of "test" are waste.

The following conversation is based on true events, but it is fictional. However you will likely find it familiar.

Developer 1 is a developer who has just contributed to a component and is preparing their work for release by talking with an experienced developer from the team originally responsible for the component, Developer 2. Developer 1 has just run their work through the component's unit test suite.

Developer 1: OK. We've just gone green on the Jenkins jobs, let's release the component.

Developer 2: Hold on. We can't let that go anywhere without installing that package on a server and running a few bits on the UI.

Developer 1: Why? The tests have just gone green.

Developer 2: Ah well, those unit tests - we only wrote those because our manager requires us to have code coverage above 90%.

Developer 1: So our "real" tests are done manually?

Developer 2: Ya, we only find real issues, running on a real server. The unit tests always break when we change anything. They don't tell us very much at all.

Developer 1: OK. Where's the test spec? And where can I install the package?

Developer 2: We don't have test specs any more. QA only require us to have a green bar in Jenkins. We only have one real server left due to cost cutting by stupid managers, so you'll have to wait a few days for Developer 3 to finish troubleshooting that bug.

Developer 1: So you are telling me that I have unit tests I can't trust just because they are green and I've to follow a test spec that doesn't exist before I can release our component? On top of that I've to wait a few days just to get hold of an environment to run the tests?

Developer 2: Yes. That's right.

Developer 1: So what's the point of our vast suite of unit tests?

Developer 2: We are one of the highest performing teams in the company. Our test coverage metrics are best in class.

Here we have a pretty typical conversation I've witnessed (or been part of) in the last 10 years, since we've been adding automated tests to products. Typically the initial focus is on coverage, mainly because it's a very easy metric to track. Teams get rewarded for hitting coverage targets in a relatively short period of time. Successfully hitting coverage targets across many components and products drives decisions at higher levels to reduce or divert computing resources elsewhere. The developer time investment in automated test needs to be recouped in hard currency!

In this scenario we are locked into this waste. We cannot reduce coverage by simply removing the test suite. The bad test suite is the base reference used to add more tests to the system. Product quality and time to market suffer, but this is an expected side effect of adding new features to a product that has an ageing architecture anyway, so it's just accepted. This ultimately justifies the re-write that everyone wants, only for the same mistakes to be repeated on the re-write.

How does your organisation promote discussion on this scenario? By how much could you reduce time to market if you had a better suite? What's your answer to reduce the cost of change on a product/component?

Thursday, 18 May 2017

Code review 6: Code Review Flow

In this post I outline the optimal flow for good code reviews. It is important to strictly follow the process to prevent dysfunctional code reviews from taking place and to improve the experience and speed of code reviews for the entire team.

This article does not cover when code reviews should be called in your overall development process. However, it applies to any development process that includes a code review.

A code review always starts with an artefact that is “Ready for Review”. This could be a feature or part of a feature. It will likely include some code updates, some tests (hopefully all automated) and some documentation for end users and/or developers. “Ready for Review” means that the artefact is reasonably complete i.e. there is enough completed to do something useful and in the case of code, it executes. It is the author’s call as to when they are ready for review.

The author should look for and appoint two reviewers. You can choose to appoint more reviewers, but two should be sufficient. At least one reviewer should be an experienced developer. You may also be required to appoint a “guardian”.

The reviewers must commit to concluding the review as quickly as possible because we value completed work of the author that is closer to delivery to master, over features that are in progress in the reviewer’s backlog today. Hence the review is always treated as the highest priority task.

The reviewers must read and understand the original “Problem Statement”. What is it that the author is trying to achieve?

All supporting material should also be read and understood. The author may have completed some sort of technical report, design specification, whiteboard diagram of the solution etc. These artefacts should be all used to gain an understanding of the proposed solution. However, absence of these is not a deal-breaker for the review if the solution is the correct one.

Next the reviewers should review the tests that are written for the code update. Tests should be well described and their terminology should match the problem domain. Do the tests follow a Given-when-then format? Are the tests correctly coupled? Are they manual or automated? There should be some working tests available prior to reviewing the code.

The author will already have looked at the static analysis tooling and should have removed all the trivial violations as part of getting ready for the review. The reviewers should review any static analysis tooling output. Are there any new failures since the last build? Are there possible hotspots in the proposed solution? Are there any violations that are entirely reasonable to leave in the code?

Now that we have a good understanding of the problem, the overall solution proposed and all the automated static analysis has been done, the reviewers should review the code. All feedback should be ranked as major or minor and logged independently of each reviewer at this stage. Genuine and honest comments should be recorded with consideration for the feelings of the author.

Next the author and reviewers should come together, supported by tooling, and discuss the feedback that the reviewers have for the author. For each piece of feedback a decision must be agreed as to whether re-work is required or not. In the exceptional cases where consensus cannot be reached, another senior developer should be brought in to arbitrate. It is reasonable to assume that their decision would be final.

The outcome of the review is a three-way decision and it is the reviewers' call: the code and supporting artefacts are accepted and merged to master, minor re-work is required, or substantial re-work is required. The fact that re-work could have serious implications for the project awaiting the output means it needs to be handled professionally and with courage by the author, the reviewers and the team's leader. At all times we must be aware of our deadlines and take actions that are pragmatic given the time allowed and the implications for the user or customer.

Should minor re-work be required, the author should complete this as soon as possible. A quick glance over the re-work may be required by the original reviewers to ensure the work is satisfactorily completed; however this is optional and should have been clearly stated by the reviewers at the decision stage.

If substantial re-work is required, the story or task should be brought back to the beginning and any shortcomings addressed using the hands-on help of more senior developers or a mentor. It is preferable to assemble the same reviewers when the code is ready for review again as these reviewers already have an awareness of the problem being addressed. Otherwise it should be considered a brand new code review.

In summary, this article outlines a simple, but clearly defined flow or set of steps for performing excellent code reviews. Following it strictly will ensure consistent and high quality code reviews in your team, resulting in a better product coming out of your team.


Thanks to contributions from Shane McCarron on this article

Friday, 21 April 2017

Code Reviews 5: Good articles on code reviews

Let me start by saying "good" is subjective and it's based on my opinions of how code review must be done, if we are working together.

This post is a collection of resources I've come across on code reviews.

Good code review contributions

These articles align, by and large, with how I feel code reviews should be done.

  1. Preparing your code Review
  2. Code Review Tips
  3. How to be a better code reviewee
  4. How to be a better code reviewer
  5. Code Review Guidelines for Humans

Code review resources

These are generic inputs that you will need for code reviews

  1. A style guide for your source code. Google's Java Style Guide or the original Java style guide might not be the best; write one or pick one, but once you choose one for a particular product, stick to it consistently.

Code review tools

Effective tooling really helps enforce a proper code review flow and keep the quality of the review high. Here is a selection I've come across.

  1. Gerrit. This is the tool I'm currently using and I'm happy with it.
  2. Review4Eclipse. Another tool I've used for reviews. It's a lot better than having no code review tool.
  3. JetBrains Upsource. From the makers of IntelliJ. This is next on my list to try, I hope it's as good as IntelliJ!
  4. Smart Bear Collaborator. I haven't used this, but it looks interesting. It appears to be the most feature rich. One I would like to try out.