Monday, 18 December 2017

Observations of software development

These are some observations from various software development teams that I've been part of or seen working over the past 18 or so years.

  1. The opportunity cost of doing the current best practice well prevents us from moving quickly to a newer, better practice: mastery vs. agility.
  2. Existing culture is the prevailing limiter of change in mindset.
  3. The tax of using the construction-industry model for software development, while useful for scaling, makes it harder to get value from software in the short and medium term.
  4. We teach programming by getting people to write code, instead of by getting them to read code. This is the inverse of how other languages are taught.
  5. Our industry lacks a great "masterpiece" of design and code. Design patterns are a start, but not comprehensive enough.
  6. When groups and organisations transform their ways of working and something fails, instead of correcting the failure they fall back on what worked in the old way of working, even if that prevents the transformation from completing.
  7. Engineers will change tools more quickly than they change behaviour with the tool. In fact they will carry existing "mastery" into the new tool, unless the new tool explicitly prohibits it somehow.

Thursday, 14 December 2017

Team Trotters

Let's explore some of the good and bad qualities of a well-formed team, by debriefing a video of the team in action.

The team in question have been working together for many years. Have a look at the video, a few times if you need to. Then take 5 minutes to think about what the team does well. Now take 5 minutes to think about what they do badly.

Video

While the resulting work of the team turns out to be a funny disaster, this team demonstrates some good and bad attributes of a well-formed, performing team.

Good team attributes

  1. Knowledgeable
  2. Obvious leader
  3. Had enough people to do the job
  4. Delegation
  5. Defined roles
  6. Stronger men in the "right" place
  7. Confidence
  8. Right tools for the job
  9. Succeeded in getting the job in the first place
  10. Displayed expertise
  11. Trusted each other
  12. "good" communication
  13. Motivated - money, prestige
  14. Courage to take on the job
  15. One, very clear goal

Bad team attributes

  1. All planning was in Del's head
  2. Del's assumptions
  3. Other team members' reliance on Del's thinking
  4. Del's leadership style: Del was dismissive and belittling of the butler
  5. Hadn't actually done it before
  6. No real energy
  7. Hierarchical team
  8. Members didn't share the leader's motivations
  9. No continuous improvement
  10. Del always compensated for others' lack of competence
  11. Del didn't realise his own limitations
  12. Poor Communication - All talk, very little listening.
  13. Complacency - Members are overly familiar with each other
  14. Poor result - they break the chandelier

Reflect

Can you recognise any of these attributes in your team? What are you doing well? What aspects of your team could you improve?

Wednesday, 22 November 2017

A format for Release Planning

Summary

Output: An estimated backlog. A likely release date for the plan.

Overview: Start with creating understanding and clarifying the big picture with everyone. Use large macro sizing to roughly size everything. Refine these sizes by story-pointing a selection of stories. Allocate to future sprints using historical velocities as a predictor of future performance.

In this article I will outline a format for release planning that I have successfully used on a number of occasions.

Release Planning Day Agenda

The agenda is what the people in the room see. Focus on the results of each part of the day: at the start, people want to know what they will achieve, not how they will achieve it.
  1. Introduction
  2. Vision
  3. Estimation - First pass
  4. Estimation - second pass
  5. Backlog prioritisation
  6. Release plan - first pass
  7. Review and agree
  8. Retrospective

Release Planning Plan

Release planning is a necessary part of scaled software development. The customer might take your software every sprint, but they want a clear indication about how much you will deliver over the next 6 to 12 months. You will probably find that sprint planning and backlog grooming alone won't scale to this length of time.

Output

  • A fully estimated and refined backlog
  • A likely release plan
  • Identified Risks, Issues, Assumptions and Dependencies and what to do for each.
  • A better understanding of what is required by the team in the medium term

Preparation

Required people 

  1. Whole team including SM x 2. This ceremony will scale up to many teams, but be mindful of venue capacity.
  2. Product Owner
  3. Project Manager
  4. Facilitator x 2. I have found it useful to use pair facilitators.

Required inputs

  1. A Vision
    Prepared by the PO. What would excite our users? What would delight our customers?
  2. Averaged velocities and predictability metrics.
    Prepared by scrum masters.
  3. Drop Plan
    Sprint names and dates. Prepared by the Project Manager, Scrum of Scrums or Product Owner.
  4. Printed out backlog
    Broken down by the product owner as best they can, each story on an A4 sheet. (Product Owner or Scrum Master)
  5. A1 flipchart paper
    Facilitator or scrum master should bring this
  6. Post-its
    Facilitator or scrum master should bring these
  7. Markers
    Facilitator or scrum master should bring these
  8. Pens
    Facilitator or scrum master should bring these

Room layout

  1. A large round table per team
  2. Facilitator table
  3. Projector
  4. Lots of Wall space
  5. 1-2 flipchart stands per team

Day plan

For the facilitators. This is how you will achieve your objectives in a step-by-step way.

Start  End    Activity
09:00  09:15  Arrive at venue and arrange it. Scones and fruit for warm-up. People mingle, chat and get into an open frame of mind.
09:15  09:45  Intro, set the scene. Two truths, one lie. Set the objective for the day. Talk about estimation at a high level and how we will do it. Talk about commitment. Set the ground rules.
09:45  10:00  Product owner presents the vision.
10:00  10:30  First pass on the backlog. Read each story. Clarify doubts, document RAID items.
10:30  10:45  Tea break, fresh air.
10:45  11:15  Read the backlog. Talk to the PO. Write any clarifications directly on the A4 pages. The output of this phase is a clarified backlog that everyone has read.
11:15  12:30  Size the backlog using relative estimation. Take each story and compare it to the ones on the table: smaller stories on the left, larger stories on the right; if it is approximately the same size, place it on an existing story. The output of this phase is a number of piles of stories, each pile approximately the same size.
12:30  13:15  Lunch. Fresh air.
13:15  13:30  Reset exercise.
13:30  13:45  Review the piles of stories and the RAID items.
13:45  15:00  Select a story from each pile. Story-point these 5-8 stories using planning poker or whatever method the team is used to. Points are cascaded to similar-sized stories by writing them on the A4 pages. We now have an estimated backlog.
15:00  15:15  Tea break. Fresh air.
15:15  16:00  Scrum master presents the team's sprint history: their previous velocity and predictability. This will be used for release planning. Review the current Definition of Done. Product owners re-organise the piles of stories into priority order, highest priority at the top.
16:00  16:45  Using A1 sheets to represent sprints, allocate the prioritised stories according to the sprint capacities given by history (a worked example follows this table). Discuss RAID items. Plan prudently. Note any dependencies. Stories should be placed in the sprint where they will finish; if they must start in an earlier sprint, note that in writing on the story.
16:45  17:15  Stand back and review the plan. Talk to the project manager and PO. Ask: can we commit to this plan? Is this a likely plan or a committed plan?
17:15  17:30  Wrap up. Thank everyone. Make sure POs and SMs take the stories with them, noting which sprint each item lands in. Tidy up the room.
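
To make the allocation step concrete, here is a minimal sketch of the arithmetic in Java. It assumes a hypothetical team with an average velocity of 30 points per sprint and an already prioritised, estimated backlog; all names and numbers are illustrative.

    import java.util.ArrayList;
    import java.util.List;

    // Sketch: allocate a prioritised, estimated backlog to sprints, using the
    // team's average historical velocity as the capacity of each sprint.
    public class ReleasePlanSketch {
        public static void main(String[] args) {
            int velocity = 30; // average points per sprint, from team history
            int[] storyPoints = {13, 8, 8, 5, 13, 20, 8, 5, 3, 13}; // priority order

            List<List<Integer>> sprints = new ArrayList<>();
            List<Integer> current = new ArrayList<>();
            int used = 0;
            for (int points : storyPoints) {
                if (used + points > velocity && !current.isEmpty()) {
                    sprints.add(current); // sprint is full, start the next one
                    current = new ArrayList<>();
                    used = 0;
                }
                current.add(points);
                used += points;
            }
            if (!current.isEmpty()) {
                sprints.add(current);
            }

            // The sprint count plus the drop plan gives the likely release date.
            System.out.println("Sprints needed: " + sprints.size());
        }
    }

With these numbers the backlog fits in four sprints; the drop plan's sprint dates then turn "four sprints" into a likely release date.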

Friday, 10 November 2017

Testing definitions

There are lots of test terms bandied about. I find there is general inconsistency among software development practitioners in what each one means. Here are my definitions of the different types of test. I have tried to establish simple, clear definitions for each test term.

Manual test
One that is executed by a human against the target system.

Automated test
One that is run using some sort of scripting, with the scripting kicked off by a continuous integration system.

Unit test
A single test case where the target piece of functionality under test is running in a contained or mocked environment. Automation is implied.
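
As a minimal illustration of this definition, here is a sketch of a single unit test in the JUnit 4 style. The PriceCalculator class is hypothetical; the point is that the functionality under test runs in a contained environment with no live dependencies.

    import static org.junit.Assert.assertEquals;
    import org.junit.Test;

    public class PriceCalculatorTest {
        // A single test case, run by the CI system, needing no live dependencies.
        @Test
        public void appliesTenPercentDiscountAtOneHundred() {
            PriceCalculator calculator = new PriceCalculator(); // hypothetical class under test
            assertEquals(90.0, calculator.discountedPrice(100.0), 0.001);
        }
    }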

Basic Integration test
The target code runs in a "basic" environment that includes live instances of all its immediate dependencies, such as databases, file systems, network I/O, messaging buses etc. Automation is implied.

Integration test
A test case that runs against a software entity while that entity is running on the actual environment it is meant to run on in production. An integration test may be automated or manual.

Acceptance test
These are tests that are understood by the user of the output of the system. That user could be human or machine.

White box test
A white box test is one that is written by someone with full knowledge of, and access to, the solution source code and architecture.

Black box test
A black box test is one that is written by someone who has no or limited knowledge of the solution source code and architecture.

Test Spec or Test Specification
A list of tests. It should specify the use case, the inputs and the expected behaviour or output. For automated tests, the specification is a description of the test function.

Test suite
A collection of automated tests. Should be equivalent to a test specification.

Test Report
This is a summary of the output of running a test suite or test specification: which tests were executed, and which passed or failed.

Saturday, 21 October 2017

What's released trumps what's documented

What is released and is working in the live system, is what counts. Not what's written down.

A while back I had to do some Fault Slippage Analysis on a bug in a high-level component, ComponentA, in an application stack. A specific use case in our application had stopped working at a customer site after a delivery. Complicating the troubleshooting was the fact that there had been no delivery from ComponentA, where the symptom of the problem was presenting itself.

The root cause was a lower-level component, ComponentB, tightening the constraints on a method call that ComponentA was using. Sure enough, the Javadoc of ComponentB had indicated not to use the method in this way. However, the method had worked fine up to the point the constraint was added. There wasn't a good reason to enforce the constraint: it was a theoretical, artificial limit. The ComponentB developer had changed the code while fixing another problem in the same file.

There was a long argument between the two teams that handled each component. Eventually the system architect made the call on corrective action: ComponentA had to change, because it had violated what was documented in ComponentB.

Ultimately this added considerably to the lead time for the customer/user.

It is my view that the incorrect call was made. The correct call was to revert the change in the delivered component.

I arrive at this view by applying the following guidelines:

  1. Interfaces don't terminate dependencies
    We also need to consider the run-time behaviour of a component behind its interface. This means that tightening constraints is a backwards-incompatible change (see the sketch after this list).
  2. Defer the released implementation to the latest moment
    To be as correct as possible, you should implement as late as possible, but no later. At that point you have as much information as possible to hand to avoid unforeseen changes. In this case the provider of the method didn't have a good understanding of how clients were using the released method, or of the effects their changes would have on client use cases.
  3. Open/Closed principle on components
    Open to extension, closed to change. Priority goes to working software at the customer side, even if the opposite was documented in the API documentation.
  4. YAGNI
    Don't write or publish your interface until there is a client to use it. And then it's the client who dictates how it changes. In this case, when the original method was introduced, it was needed for a future release. However, clients were forced to use it right away to run a constraints check. It was written too early for all the detail and tests to be completed by the providing ComponentB, thereby creating risk for calling components.
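
To illustrate guideline 1, here is a hypothetical sketch of a tightened constraint; all the names are invented. The provider's change is source-compatible, yet it breaks a released client at run time:

    import java.util.Collections;
    import java.util.List;

    // A stand-in for ComponentB's method after the "tightening".
    public class ComponentB {
        public int process(List<String> items) {
            // New precondition, added while fixing an unrelated bug in this file.
            // The Javadoc always said "items must not be empty", but the released
            // implementation quietly tolerated it, and ComponentA relied on that.
            if (items.isEmpty()) {
                throw new IllegalArgumentException("items must not be empty");
            }
            return items.size();
        }

        public static void main(String[] args) {
            // ComponentA's call, unchanged since its last release, now fails:
            new ComponentB().process(Collections.emptyList());
        }
    }

No line of ComponentA changed, yet its use case broke in production: the run-time behaviour behind the interface was part of the contract.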

The benefits of Pair Programming

In this post I will outline some of the benefits of pairing that I have witnessed. Pairing is not limited to programming and can be applied to the creation of any artefact or task: an email, a presentation, a document, a course, etc. The challenge with pairing is always trying to get on the same wavelength as your collaborator.

Common benefits

  1. Output
    The quality of the product produced, in terms of both subjective and objective measures, has been better.

Benefits for the person

  1. Fun
    It's more fun, more social. When you are used to it.
  2. Focus
    There are fewer distractions from outsiders pulling you away from the task at hand.
  3. Work-life balance
    Work more set hours in conjunction with your team mates, and less overtime.
  4. Feedback
    Constant stream of feedback from different sources. We are less likely to stray from the problem or to over deliver.
  5. Rapid up skilling
    Knowledge transfer is super fast. Learn by doing.
  6. Professional Freedom/Mobility
    Since others have learned from you, you can move to newer more interesting work because your excellent performance hasn't locked you into your current project.

Benefits for the organisation

  1. Cost neutral in the short term
    A pair completes a single piece of work almost twice as fast as an individual would, so the same total work gets done in roughly the same person-hours as individuals working on separate tasks.
  2. Quality cost savings in longer term
    Bugs are avoided, and a lower defect rate is injected into the product.
  3. Morale
    Once a critical mass of pro-pairing people is achieved, we have better morale and a happier workforce.
  4. Knowledge Transfer
    People learn by doing with the experts. It beats courses, books, conferences and videos for effective learning.
  5. Better "Bus-Factor"
    More people know how to do more of the tasks in a team.
  6. Natural enforcement of attendance policy
    There is a peer to peer commitment to work common hours.

Saturday, 23 September 2017

"I need my unit tests for code coverage..."

Fundamental point: Tests must only exist to test the product/component. All other types of "test" are waste.

The following conversation is based on true events, but it is fictional. However you will likely find it familiar.

Developer 1 is a developer who has just contributed to a component and is preparing their work for release by talking with Developer 2, an experienced developer from the team originally responsible for the component. Developer 1 has just run their work through the component's unit test suite.

Developer 1: OK. We've just gone green on the Jenkins jobs, let's release the component.

Developer 2: Hold on. We can't let that go anywhere without installing that package on a server and running a few bits on the UI.

Developer 1: Why? The tests have just gone green.

Developer 2: Ah well, those unit tests, we only wrote those because our manager requires us to have code coverage above 90%.

Developer 1: So our "real" tests are done manually?

Developer 2: Ya, we only find real issues running on a real server. The unit tests always break when we change anything. They don't tell us very much at all.

Developer 1: OK. Where's the test spec? And where can I install the package?

Developer 2: We don't have test specs any more. QA only require us to have a green bar in Jenkins. We only have one real server left due to cost cutting by stupid managers, so you'll have to wait a few days for Developer 3 to finish troubleshooting that bug.

Developer 1: So you're telling me that I have unit tests I can't trust even when they're green, that I have to follow a test spec that doesn't exist before I can release our component, and on top of that I have to wait a few days just to get hold of an environment to run the tests?

Developer 2: Yes. That's right.

Developer 1: So what's the point of our vast suite of unit tests?

Developer 2: We are one of the highest performing teams in the company. Our test coverage metrics are best in class.

Here we have a pretty typical conversation I've witnessed (or been part of) over the last 10 years, since we've been adding automated tests to products. Typically the initial focus is on coverage, mainly because it's a very easy metric to track. Teams get rewarded for hitting coverage targets in a relatively short period of time. Successfully hitting coverage targets across many components and products drives decisions at higher levels to reduce or divert computing resources elsewhere. The developer time investment in automated test needs to be recouped in hard currency!

In this scenario we are locked into this waste. We cannot reduce coverage by simply removing the test suite. The bad test suite is the base reference used when adding more tests to the system. Product quality and time to market suffer, but this is an expected side effect of adding new features to a product with an ageing architecture anyway, so it's just accepted. Ultimately it justifies the re-write that everyone wants, only for the same mistakes to be repeated on the re-write.

How does your organisation promote discussion on this scenario? By how much could you reduce time to market if you had a better suite? What's your answer to reduce the cost of change on a product/component?

Thursday, 18 May 2017

Code review 6: Code Review Flow

In this post I outline the optimal flow for good code reviews. It is important to strictly follow the process to prevent dysfunctional code reviews from taking place and to improve the experience and speed of code reviews for the entire team.

This article does not cover when code reviews should be called in your overall development process. However, it applies to any development process that includes a code review.

A code review always starts with an artefact that is “Ready for Review”. This could be a feature or part of a feature. It will likely include some code updates, some tests (hopefully all automated) and some documentation for end users and/or developers. “Ready for Review” means that the artefact is reasonably complete i.e. there is enough completed to do something useful and in the case of code, it executes. It is the author’s call as to when they are ready for review.

The author should look for and appoint two reviewers. You can choose to appoint more reviewers, but two should be sufficient. At least one reviewer should be an experienced developer. You may also be required to appoint a “guardian”.

The reviewers must commit to concluding the review as quickly as possible because we value completed work of the author that is closer to delivery to master, over features that are in progress in the reviewer’s backlog today. Hence the review is always treated as the highest priority task.

The reviewers must read and understand the original “Problem Statement”. What is it that the author is trying to achieve?

All supporting material should also be read and understood. The author may have completed some sort of technical report, design specification, whiteboard diagram of the solution etc. These artefacts should be all used to gain an understanding of the proposed solution. However, absence of these is not a deal-breaker for the review if the solution is the correct one.

Next the reviewers should review the tests that are written for the code update. Tests should be well described and their terminology should match the problem domain. Do the tests follow a Given-When-Then format (see the sketch below)? Are the tests correctly coupled? Are they manual or automated? There should be some working tests available prior to reviewing the code.
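
As an illustration of the Given-When-Then shape the reviewers should expect, here is a hypothetical sketch in JUnit; the Account class and its methods are invented:

    import static org.junit.Assert.assertFalse;
    import org.junit.Test;

    public class AccountWithdrawalTest {
        @Test
        public void withdrawalIsRefusedWhenBalanceIsTooLow() {
            // Given an account with a balance of 50
            Account account = new Account(50); // hypothetical domain class

            // When a withdrawal of 100 is attempted
            boolean accepted = account.withdraw(100);

            // Then the withdrawal is refused
            assertFalse(accepted);
        }
    }

Note that the terminology matches the problem domain (accounts, withdrawals), not the implementation.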

The author will already have looked at the static analysis tooling and should have removed all the trivial violations as part of getting ready for the review. The reviewers should review any static analysis tooling output. Are there any new failures since the last build? Are there possible hotspots in the proposed solution? Any violations that are entirely reasonable to leave in the code?

Now that we have a good understanding of the problem, the overall solution proposed and all the automated static analysis has been done, the reviewers should review the code. All feedback should be ranked as major or minor and logged independently of each reviewer at this stage. Genuine and honest comments should be recorded with consideration for the feelings of the author.

Next the author and reviewers should come together, supported by tooling, and discuss the feedback that the reviewers have for the author. For each piece of feedback a decision must be agreed as to whether re-work is required or not. In the exceptional cases where consensus cannot be reached, another senior developer should be brought in to arbitrate. It is reasonable to assume that their decision is final.

The outcome of the review is a three-way decision and it is the reviewers' call: the code and supporting artefacts are accepted and merged to master; minor re-work is required; or substantial re-work is required. The fact that re-work could have serious implications for the project awaiting the output means this option needs to be handled professionally and with courage by the author, the reviewers and the team's leader. At all times we must be aware of our deadlines and take actions that are pragmatic given the time allowed and the implications for the user or customer.

Should minor re-work be required, the author should complete this as soon as possible. A quick glance over the re-work may be required by the original reviewers to ensure the work is satisfactorily completed; however, this is optional and should have been clearly stated by the reviewers at the decision stage.

If substantial re-work is required, the story or task should be brought back to the beginning and any shortcomings addressed using the hands-on help of more senior developers or a mentor. It is preferable to assemble the same reviewers when the code is ready for review again as these reviewers already have an awareness of the problem being addressed. Otherwise it should be considered a brand new code review.

In summary, this article outlines a simple, but clearly defined flow or set of steps for performing excellent code reviews. Following it strictly will ensure consistent and high quality code reviews in your team, resulting in a better product coming out of your team.


Thanks to contributions from Shane McCarron on this article

Friday, 21 April 2017

Code Reviews 5: Good articles on code reviews

Let me start by saying "good" is subjective and it's based on my opinions of how code review must be done, if we are working together.

This post is a collection of resources I've come across on code reviews.

Good code review contributions

These articles align, by and large, with how I feel code reviews should be done.

  1. Preparing your code Review
  2. Code Review Tips
  3. How to be a better code reviewee
  4. How to be a better code reviewer
  5. Code Review Guidelines for Humans

Code review resources

These are generic inputs that you will need for code reviews

  1. A style guide for our source code. Google's Java Style Guide, or the original Java style guide, might not be the best. Write one or pick one, but once you choose for a particular product, stick to it consistently.

Code review tools

Effective tooling really helps enforce a proper code review flow and keeps the quality of the review high. Here is a selection I've come across.

  1. Gerrit. This is the tool I'm currently using and I'm happy with it.
  2. Review4Eclipse. Another tool I've used for reviews. It's a lot better than having no code review tool.
  3. JetBrains Upsource. From the makers of IntelliJ. This is next on my list to try, I hope it's as good as IntelliJ!
  4. Smart Bear Collaborator. I haven't used this, but it looks interesting. It appears to be the most feature rich. One I would like to try out.

Code Reviews 4: Three Principles for doing effective Code Reviews

In order to do effective code reviews I propose the following three Principles of Code Review that every professional developer in your team must sign up to.

Code Review Principles for Developers

  1. Nothing gets merged to the master branch without the approval of a code review. Ever. Treat master with respect.

  2. After two[1] approvals have been obtained, the author merges the work to master.

  3. When I take on a fellow engineers' code review I must progress the review immediately as the most urgent item on my list.

Reasoning the principles

Professionals operate by certain principles and they apply them consistently and fairly. Let's explore some of the reasons why each of these principles is important.

  1. Nothing gets merged to the master branch without the approval of a code review. Ever. Treat master with respect.
    1. I am human and I make mistakes. I require the validation by my peers on what I have done.
    2. As a professional, I have a duty to educate others. A peer review is one valuable way to share knowledge.
  2. After two[1] approvals have been obtained, the author merges the work to master.
    1. The author should be responsible for the full completion of their own tasks.
    2. The author asks for and appoints people to complete the review. At least one should be senior and experienced. Those appointed people stay with the review until it is complete, i.e. merged to master or abandoned.
    3. All actions between the author and reviewers are closed pragmatically, with necessary rework inspected and complete. Code is never merged with any outstanding code review comments: every review comment is either implemented, or discussed with the reviewer and agreement reached as to why it is not acted on. The decision and the why should be recorded in the source code as a comment.
    4. Sending the review to the entire group leads to confusion as to who is doing the code review. If everyone is responsible, then no one is responsible. Be clear who exactly is doing the review and appoint them to it.
  3. When I take on a fellow engineers' code review I must progress the review immediately as the most urgent item on my list.
    1. This is context switching, but we value “done” tasks over “work in progress”. To help flow of tasks in my team, we realise that people at code review are closer to “done” than I am – so unblocking them is my top priority and the context switch is more valuable.
    2. If I have more urgent items to tend to, I do not take on the code review.
    3. Someone must take the code review, so the team might need to discuss priorities.
    4. If I do this review now, when my turn comes I can count on my team mates to drop their tasks in progress so that my code review progresses rapidly.

[1] I recommend two. You may choose whatever number suits your project, but when you choose, stick to it consistently; this is so we are fair to every developer working on the product. Picking too small a number means we don't get enough feedback. Picking too large a number requires interrupting more tasks in the team to progress the task in review.



Thanks for contributions from: Rachel O'Toole, Shane McCarron, David Hatton

Monday, 20 March 2017

Pair Programming: Why isn't it more widespread?

In my experience we've been pretty good at adopting XP. We agree, by and large, that co-location is a good thing. We seem to have difficulty agreeing a coding standard, but mostly we do. We struggle with writing unit tests first - often we are writing tests to hit a “coverage target” and after the fact, but again most people agree we need to write automated tests. We agree that we need to integrate more often, requiring automated integration tests and the adoption of a continuous integration system for our products and this doesn’t generate too much debate. 

But after twenty years of extreme programming practices, the one practice that seems to have the least widespread adoption is Pair Programming.

So why don't we pair?

In my opinion our resistance is rooted in three different causes; Organisational, Personal and Societal

Organisational Resistance

Halving our number of people
Simple. We cannot afford to have two people working on one thing. We couldn’t be efficient doing pairing. Many managers just can’t see the sense in it.

Rewards, Recognition and Promotion
All our corporate models for reward and recognition are based on the individual, not pairs. From performance reviews to bonuses, remuneration and promotion, all these are based on the individual. You are given your task(s); the better you execute them, and the more of them you get done on your own, the better you are rated. This feeds into your evaluation results. Ultimately your personal results and successes are reflected in higher monetary return. Most jobs and most industries are set up along this "Motivation 2.0" model: the better you do, on your own, the more reward you will have coming to you. If you need help, that kind of counts against you.

Personal Resistance

Loss of Freedom
A big challenge to overcome in pairing is the perception of giving up a large part of your personal freedom in work. The day we joined a company we were given a desk - your desk, your own piece of real estate. You are told when you are expected to be in the office but how you manage your time in the office is your business. You dictate when you go to the bathroom, when you take a 5 minute break to browse the daily paper or use social media… There is great satisfaction in having that autonomy and freedom within the confines of a professional environment. 

When we pair there is a sense of obligation to be at either your desk or your pair partner's desk during the working day. You will have to excuse yourself when you need a break. You perceive that you are accountable, on a social level at least, to someone else.

Arrogance
You'll be the one giving everything and you'll gain nothing in return. You feel that your expertise is getting diluted, and that expertise is the main reason you're valued by the company.

Imposter syndrome
Can you live up to your own reputation in the eyes of others? You are an engineer with many years' experience and many projects behind you, but they are all on older technologies.

Social difficulties
Pairing is socially difficult. You are forced to work closely with someone you might personally dislike either for physical or psychological reasons. We are social animals and we all carry our own biases, conscious and unconscious. We all have our own personalities and we tend to like people with similar personalities and dislike those that are different.

Societal resistance

The vast majority of roles and jobs in our society do not use pairing. In fact there are only a very few roles where pairing is widespread and established. A quick Google search for "jobs that are done in pairs" returns jobs for "au pairs", something entirely different. This means people are generally not well equipped or well practised in the skills that pairing demands of its participants.

Introducing Pair programming to your organisation

To successfully introduce pairing to your development organisation, you have to design your change program to address and overcome all these aspects of resistance. And for some people their resistance is so established in their minds, that it will take a significant effort to bring them on board.


Thanks to Contributors: Shane McCarron, Rachel O'Toole

Thursday, 9 February 2017

Assumptions I held about software development, that no longer hold true for me

That architecture is more valuable than tests

  1. Often what we do with software doesn't change that much. Good tests describe what we want to solve with software.
  2. Software changes greatly in many directions over time. The architecture we choose today will not "stretch" in every direction of change, requiring significant changes at some point.
  3. We can only sell features. Tests are a great description of features. We cannot sell an architecture.
  4. The right tests, written at the right level in the application, will outlive the architecture.

We need to re-write [insert any software product] from scratch

  1. It nearly always takes significant time to re-solve all the simple problems the old system does for your users.
  2. The expectation of the user is immense.
  3. Software is not construction; the foundations of our system are not set in concrete. It's our own perception of the complexity of the current solution, coupled with our desire to embrace new tools, that limits changing the current solution. We think it's easier to start from scratch.
  4. One of the Facade, Adapter or Proxy design patterns should be employed to bring fundamental change to a software system (see the sketch below).
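
As a minimal sketch of point 4, here is the Adapter pattern applied to this problem; the interfaces are hypothetical. New code is written against the interface, the adapter wraps the legacy system, and the implementation behind the seam can then be replaced piece by piece instead of re-writing from scratch.

    // The interface the rest of the (new) system is written against.
    interface CustomerStore {
        String findName(int customerId);
    }

    // Hypothetical legacy class we cannot change but must keep shipping.
    class LegacyCustomerDatabase {
        String fetchRecord(int id) {
            return "record-" + id; // stands in for the old system's behaviour
        }
    }

    // The adapter: fundamental change happens behind this seam.
    class LegacyCustomerStoreAdapter implements CustomerStore {
        private final LegacyCustomerDatabase legacy = new LegacyCustomerDatabase();

        @Override
        public String findName(int customerId) {
            return legacy.fetchRecord(customerId);
        }
    }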

That you can be faster if you do Agile in software development, but keep all your traditional stakeholder, requirement, release and business management processes.

  1. Agility means you are agile from the customers’ point of view, i.e. they are getting something delivered from you every 1-4 weeks.
  2. It's hard to release every X weeks, if you can't put work through your production pipeline in less than X weeks.
  3. The overhead of breaking work down to fit into short sprints actually makes the overall work take longer in the medium to long run.
  4. Agile allows you to maximise the amount of work not done. Can you choose what features not to do?

That I must enter "the zone" to be productive or that I work better on my own.

  1. We do require focus in a very busy world. But the zone has no feedback, so we often end up doing the wrong thing.
  2. It’s important to come out of the zone frequently so we don’t lose sight of our destination.
  3. We do need some quiet time to absorb information and reflect, but this isn’t productive time.

If I am efficient and productive, I am effective.

  1. Productive means you are active at building something.
  2. Efficient means that you are really great at building something.
  3. Effectiveness is measured by the receiver of the output of our work.
  4. We often build the wrong thing or we build a lot more than the user needs.
  5. It is better to build something small that is used and useful, than to build something large that is never used.

In software development, the "busy" team is the "best" team.

  1. It's nearly impossible to visualise the output of software teams.
  2. It's even harder to get objective measurement on that output.

Bad developers cause the most bugs.

  1. Bad developers usually deliver very little, and the issues they create rarely get past our first test loops, e.g. code review and basic integration test.
  2. Developers who produce the most code, produce the most bugs.
  3. The code that is not written, contains no bugs.

Unit tests test methods

  1. Most unit testing is written towards all the methods in every class in our component. It is very tightly coupled to our design.
  2. If I have to change the test when I change the source, I can't be sure the method/class/component worked like it did before.
  3. The "acid test" for a good unit test is that you can change the implementation without changing the test.
  4. Refactoring means that I re-structure and re-write code, but it does exactly the same job as it did before.
  5. A good unit test is written towards the public API (see the sketch below).
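
A minimal sketch of the difference, using a hypothetical Stack class: the test exercises only the public API (push and pop), so the internal storage can be refactored from, say, an array to a linked list without changing a single line of the test.

    import static org.junit.Assert.assertEquals;
    import org.junit.Test;

    public class StackTest {
        // Written against the public API, not against each private method.
        @Test
        public void popReturnsTheLastPushedValue() {
            Stack stack = new Stack(); // hypothetical class under test
            stack.push(1);
            stack.push(2);
            assertEquals(2, stack.pop());
        }
    }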

There are lots of good unit test and TDD articles/books and resources.

  1. Most say that for every method you write, you must have at least one unit test. This is wrong.
  2. Most say you must design for test. This is wrong. Good tests simulate real use.
  3. Most state that the "unit" is the method under test. The unit is the test itself.

Thanks to Contributors: Shane McCarron, Rachel O'Toole

Java interview questions - level advanced

This is the third in a series of articles on Java interview questions. Answering these questions correctly requires knowing subtle, but important, details of certain parts of Java. They are less obvious than the intermediate questions.
  1. Why do we favour the use of StringBuilder over String concatenation?
    String concatenation used to be expensive because it creates intermediate String objects. StringBuilder does not create as many objects and is unsynchronized. In later versions of Java the compiler optimises the concatenation operator to use StringBuilder automatically, so within a single expression it doesn't matter whether you use the "+" operator or StringBuilder; in a loop, an explicit StringBuilder still avoids creating a new builder on every iteration.
  2. Why is it important to implement hashCode() if we override equals()?
    It is important to override hashCode() because the Java collections framework relies on hash codes. If two objects are equal they must return the same hash code; it is not required that two unequal objects produce distinct results, but collections may be more efficient if they do (see the first sketch after this list).
  3. What is the contract for the equals() method?
    It is reflexive: for any non-null reference value x, x.equals(x) should return true.
    It is symmetric: for any non-null reference values x and y, x.equals(y) should return true if and only if y.equals(x) returns true.
    It is transitive: for any non-null reference values x, y and z, if x.equals(y) returns true and y.equals(z) returns true, then x.equals(z) should return true.
    It is consistent: for any non-null reference values x and y, multiple invocations of x.equals(y) consistently return true or consistently return false, provided no information used in equals comparisons on the objects is modified.
    For any non-null reference value x, x.equals(null) should return false.
  4. If I have two objects of the same type and I pass the first object to the second object, can the second object directly access the first object's private member fields?
    Yes it can; you do not need to use getter methods or any other "fancy" solution. Private is class-private, not object-private.
  5. Outline how you would use the Future interface in Java.
    A Future is used to get the result of an asynchronous computation from an ExecutorService. The ExecutorService gathers the return value of call() and hands it back to the main application via the Future interface (see the second sketch after this list).
  6. What is strictfp? Where would you use it, and what are its side effects?
    strictfp makes the JVM put restrictions on the floating point datatypes to ensure they are portable. Some processors can make more accurate floating point calculations and the JVM spec allows these to be taken advantage of. If you use strictfp your calculations may be less accurate, but they will be equally wrong on all platforms.
  7. How many times does finalize() get called?
    finalize() gets called at most once. If I manage to re-reference the object in my finalize() method, making it no longer eligible for garbage collection, then the next time the object comes around for garbage collection, finalize() will not be called again.
  8. Explain the substring memory leak problem in Java 1.6.
    In Java 6, String.substring() returned a String that shared the original string's backing character array, so a small substring could keep a very large array from being garbage collected. Java 7 (update 6) changed substring() to copy the characters. See: http://www.dzone.com/links/r/the_introduction_of_memory_leak_in_java.html
  9. What's the difference between Comparable and Comparator?
    Comparable has one method, compareTo(), and gives a class a natural ordering against other instances of the same type. A Comparator is a separate object that takes two objects of the same type and applies its own logic to compare one to the other (see the third sketch after this list).
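
First sketch (questions 2 and 3): a minimal, hypothetical value class where equals() and hashCode() are derived from the same field, so equal objects always agree on their hash code.

    import java.util.Objects;

    public final class Isbn {
        private final String value;

        public Isbn(String value) {
            this.value = value;
        }

        @Override
        public boolean equals(Object other) {
            if (this == other) return true;             // reflexive
            if (!(other instanceof Isbn)) return false; // handles null too
            return value.equals(((Isbn) other).value);  // symmetric and transitive
        }

        @Override
        public int hashCode() {
            return Objects.hash(value); // equal objects return the same hash code
        }
    }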
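
Second sketch (question 5): a minimal use of Future with an ExecutorService; the Callable's workload is a stand-in for real work.

    import java.util.concurrent.Callable;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.Future;

    public class FutureSketch {
        public static void main(String[] args) throws Exception {
            ExecutorService executor = Executors.newSingleThreadExecutor();

            // Submit a Callable; its call() runs asynchronously on the pool.
            Future<Integer> future = executor.submit(new Callable<Integer>() {
                @Override
                public Integer call() {
                    return 6 * 7; // stands in for slow work
                }
            });

            // get() blocks until call() has completed and returns its result.
            System.out.println("Result: " + future.get());
            executor.shutdown();
        }
    }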
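
Third sketch (question 9): a hypothetical Person class with a natural ordering by name (Comparable), sorted a second time by age using an external Comparator.

    import java.util.Arrays;
    import java.util.Comparator;

    public class ComparatorSketch {
        static final class Person implements Comparable<Person> {
            final String name;
            final int age;

            Person(String name, int age) {
                this.name = name;
                this.age = age;
            }

            // Comparable: the class defines its own natural ordering.
            @Override
            public int compareTo(Person other) {
                return this.name.compareTo(other.name);
            }
        }

        public static void main(String[] args) {
            Person[] people = {new Person("Bea", 40), new Person("Al", 9)};

            Arrays.sort(people); // natural order: by name

            Arrays.sort(people, new Comparator<Person>() { // external order: by age
                @Override
                public int compare(Person a, Person b) {
                    return Integer.compare(a.age, b.age);
                }
            });
        }
    }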

Tuesday, 3 January 2017

Code Reviews 3 - As an Author, When am I ready for review?

This is a checklist for the author calling the review. It helps avoid code reviews that only talk about the trivial stuff.

As an author, I should respect the time that my reviewers are putting into my submission. I therefore want my work to be as complete as possible, with a clear list of inputs ready for the reviewers, so as to get the most effective code review possible and not waste anyone's time.

Author's Code Review Checklist

  1. Tests complete and passing
  2. Code style adhered to
  3. Code complete
  4. Programmer documentation is complete
  5. Code, Test and Developer Documentation is committed
  6. All static analysis run and passing

Exploration

  1. Tests complete and passing. All legacy unit and integration tests are running and passing. Prove that our changes have no undesired side-effects on existing functionality. Any new tests that are relevant are written, running and passing. We apply the same source control standards to test code.
  2. Code style adhered to. I'm not saying that you have to use the same IDE as everyone else, but you should match the style that the majority of the code is written in as this enhances readability. So configure your IDE or run Checkstyle to ensure compliance.
  3. Code complete. A no brainer, obvious? Well the code should be complete enough to make sense, run and do something useful. It might not be the complete solution. The author should be pragmatic.
  4. Programmer documentation is complete. If you are writing a library, the API, javadoc and sample usages in the SDK are all complete.
  5. Code, Test and Developer Documentation is committed. Change management 101. Programmers share via source control. Typically it should be committed on a side branch, to be merged into main or master once the review is complete. The use of source control helps enforce discipline. It's obvious if the author changes the contents of the code up for review and helps prevent confusion for the reviewer.
  6. All static analysis run and passing. The reviewer shouldn't be looking for things that static analysers should be finding. You should have run PMD, FindBugs and/or SonarQube with all pre-agreed rule sets (a sample invocation follows this list).
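
For a Maven-built Java project, running the checks locally might be as simple as the following, assuming the Checkstyle and PMD plugins are already configured in the project's POM with the pre-agreed rule sets:

    mvn checkstyle:check pmd:check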

Once I have completed all six parts, I am ready to submit my code review to reviewers.

Supporting resources

Clean Code, Robert C. Martin
The Art of Readable Code, Boswell and Foucher
The Pragmatic Programmer, Hunt and Thomas
A Guide for Code Reviews
Code Review Matters and Manners
Peer Reviews in Software, Karl Wiegers
What to Look For in a Code Review, Trisha Gee

What makes a good check in?

Updated 04/01/2016

Updated 13/04/2017

Updated 23/09/2017

Happy new year 2017

Welcome to 2017! 2016 was a busy year. I spoke a little on programming, code review and a fair bit on teams, agile and communication.

2017 will continue the theme. I'll focus on production of software and more on pairing and code reviews. Looking forward to it!