Wednesday, 29 April 2015

Scrum ceremony cheat sheet

Daily stand up

Purpose: For the team to recommit daily and evaluate progress against the sprint plan. Each member tells the other team members what they are working on and where they could use help.
When: Daily, in the morning
Who: All team members. PO might be an observer.
Inputs: Sprint backlog, kept up to date to reflect each member's input.
Output: Everyone knows what everyone else is working on and where they can get help. We know what we need to do to secure the sprint goal.

Sprint planning

Purpose: Set the scope of work for the sprint.
When: First meeting of every new sprint.
Who: All team members and PO
Inputs: Groomed and estimated backlog.
Output: A sprint backlog. Sprint started. Commitment from team.

Backlog grooming

Purpose: Adjust the backlog estimates and break up big stories/epics.
When: 1-3 times per sprint. 1-2 hours per session
Who: All team members and product owner
Inputs: A selection of X stories from the top of the product backlog.
Output: An estimated subset of stories on the backlog. The full backlog will not be costed.

Release Planning

Purpose: Estimate the size of the backlog as of that date.
When: Every so many sprints; half to one full day.
Who: All team members, the product owner, and (optionally) a coach.
Inputs: Ordered, populated backlog (as much as possible).
Outputs: A fully estimated backlog.

Retrospective

Purpose: Reflect on how we worked in the last sprint so that we can introduce improvements in the coming sprint.
When: Last meeting of the sprint, after demo and the sprint is closed.
Who: Team members. Product owner optional.
Inputs: none required
Outputs: Actions for each team member to implement

Demo/Sprint review

Purpose: To show off the team's work and get feedback on completed work.
When: At the end of the sprint; the second-to-last meeting.
Who: Team members, Product owner. As many other stakeholders as can attend.
Inputs: "Done" stories.
Outputs: A list of feedback from stakeholders on the software just shown.


Updated 15th April, 2016

Monday, 27 April 2015

Great tests are like great wine, great architecture is like great cheese

A great automated test suite is one that allows you to change the internal architecture of the system in any way, with confidence that the system still behaves the same towards its clients after the change is committed. The more the system changes over time, the more the test proves its value over and over again. The older a great test gets, the more value it delivers in facilitating change. So a great test gets better with age. Very much like a great wine.

A great architecture is one that makes the system do what it needs to do and makes it easy and cheap to add many new features. Unfortunately, there are always trade-offs in the directions of change an architecture allows. Ultimately the architecture of a product, with some very limited exceptions, becomes the limiting factor in change. In this respect it's like a great cheese: it will age well for a while, but eventually degenerate and rot.

Thursday, 23 April 2015

The -ilities

In application development, there is a lot of talk about the -ilities. The purpose of this article is to gather a few definitions and explanations of the -ilities together.

Upgrade-ability - the ability to upgrade software without interrupting service. Often referred to as a "zero-downtime" upgrade.

Horizontal Scalability - The ability to increase or decrease capacity of the system by adding or removing instances of services. Ideally, we would like to achieve this without service interruption.

Vertical Scalability - The ability to increase or decrease the capacity of the system by adding or removing threads in the running instance of the service. Or to put it another way: how far the component itself can stretch before we have to add more instances of it.

Availability - The amount of time the service is available to users. Commonly measured as percentage uptime per year, for example 99.999%, which means the service can be unavailable for a total of only about 5.26 minutes per year.
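The 5.26-minutes figure falls straight out of the percentage. As a sketch, here is the arithmetic for a few common availability targets (using an average year of 365.25 days):

```python
# Convert an availability percentage into the allowed downtime per year.
MINUTES_PER_YEAR = 365.25 * 24 * 60  # ~525,960 minutes in an average year

def allowed_downtime_minutes(availability_percent):
    """Maximum downtime per year, in minutes, for a given availability target."""
    return MINUTES_PER_YEAR * (1 - availability_percent / 100)

for target in (99.0, 99.9, 99.99, 99.999):
    print(f"{target}% -> {allowed_downtime_minutes(target):.2f} minutes/year")
```

At "five nines" (99.999%) this yields roughly 5.26 minutes per year; each extra nine cuts the budget by a factor of ten.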

Stability - the amount of time the system runs as expected. Harder to quantify than availability.

Capability - the list of features that the system/service supports.

Capacity - the load limitations of the system/service. This can be measured scientifically, but different metrics apply to different types of services. For some services the number of operations per second matters; for others it is the amount of data processed per second (bytes per second).
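To make the ops-per-second metric concrete, here is a minimal sketch of measuring it; `do_operation` is a hypothetical stand-in for whatever call your service actually serves:

```python
import time

def do_operation():
    """Hypothetical placeholder workload standing in for a real service call."""
    sum(range(1000))

def ops_per_second(operation, duration=1.0):
    """Run `operation` in a tight loop for `duration` seconds and report the rate."""
    count = 0
    deadline = time.perf_counter() + duration
    while time.perf_counter() < deadline:
        operation()
        count += 1
    return count / duration

print(f"~{ops_per_second(do_operation):.0f} ops/second")
```

A real capacity test would drive the service over the network under realistic concurrency; this single-threaded loop only illustrates what the metric means.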