On September 25th I facilitated a Product-level retrospective. The purpose of this retrospective was to look back at how user stories find their way to production, and to find ways to shorten the process and increase quality.
The format of the retrospective was taken directly from the book Agile Retrospectives. This style works for iteration, release, or product cycles. It breaks the meeting into five components: Set the Stage, Gather Data, Generate Insights, Decide What To Do, and Closing. There are many activities that can be used in each phase. Knowing the goal of the retrospective makes it much easier to choose the right activities, put all the pieces together, and rein discussion back in if it veers too far off topic.
Our goal: understand a typical user story’s journey from conception to production, so that we may discover ways to shorten the trip while improving quality.
Set the Stage
Because it was going to be such a long meeting, we set the stage with the “ESVP” activity. Each person rates themselves as an Explorer (eager to discover), Shopper (browsing and hoping to get at least one good idea), Vacationer (checked out but glad not to be working), or Prisoner (forced to come). We had a good mix of Shoppers and Explorers, with only one Prisoner. I forgot to mention at the break that coming back was voluntary, so anyone who chose to return was no longer forced to attend. When asked whether this data suggested anything about how the team feels about work in general, the answer (besides that it only told us how we felt about the retro) was that the team is cautiously optimistic.
Gather Data
We did a value stream (process engineering) mapping exercise to look at how many steps a story goes through, and how long it is actively worked on versus how long it waits. This was done in teams of four, each choosing its own story to map. The idea was to map the most typical flow based on these instances, find the longest waits and discuss ways to shorten them, and find steps that could be eliminated or refactored for efficiency. This was too much work for a single retrospective and is something that will happen as time goes on, but we were able to discuss the various maps that were drawn, which helped later when deciding what to do.
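The arithmetic behind such a map can be sketched in a few lines: sum touch time versus wait time, compute the process-cycle efficiency, and pick out the longest wait as the first candidate for shortening. All step names and durations below are made up for illustration, not from the actual maps the teams drew.

```python
# Hypothetical value-stream map for one user story: each step records
# time actively worked ("touch") and time spent waiting, in hours.
# Step names and durations are invented for illustration.
steps = [
    ("write story",    {"touch": 2,  "wait": 8}),
    ("design review",  {"touch": 1,  "wait": 40}),
    ("implementation", {"touch": 16, "wait": 4}),
    ("code review",    {"touch": 1,  "wait": 24}),
    ("QA",             {"touch": 4,  "wait": 16}),
    ("deploy",         {"touch": 1,  "wait": 8}),
]

touch = sum(s["touch"] for _, s in steps)
wait = sum(s["wait"] for _, s in steps)
lead_time = touch + wait

# Process-cycle efficiency: the fraction of total lead time spent doing work.
efficiency = touch / lead_time

# The longest wait is the first candidate for shortening.
bottleneck = max(steps, key=lambda item: item[1]["wait"])

print(f"lead time: {lead_time}h, touch: {touch}h, efficiency: {efficiency:.0%}")
print(f"longest wait: {bottleneck[0]} ({bottleneck[1]['wait']}h)")
```

Even with generous made-up numbers, most of a story's lead time tends to be waiting, which is why the exercise focuses on the gaps between steps rather than the steps themselves.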
An attempt was made to take one of the stories and break it down into smaller pieces that would still deliver customer value. The idea was to identify a Minimal Marketable Feature: minimal in the sense that any smaller and it would no longer be marketable. This also did not happen during the meeting, but it is probably an effort we can implement immediately, and it came up as one of the team's actions.
A short exercise was then run to demonstrate how limiting Work In Progress (WIP) can increase throughput. In the teams of four, 20 cards are given to one person, who flips over all 20 and then hands (pushes) them to the next person, who flips them all, and so on down to the last person. The average time for everyone to flip all the cards was around a minute. At most slices in time there are 20 ‘things’ in process, with most people waiting to do their work. The exercise was run again, this time with the first person flipping a single card and allowing the next person to ‘pull’ it into their area and flip it. At any one time there are only four things in process, and the average time for everyone to flip them all was under 30 seconds.
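The timing difference falls out of simple pipeline arithmetic. A minimal sketch, assuming each flip takes one time unit (an idealization; the four people and 20 cards match the exercise):

```python
# Card-flipping exercise: batch push vs one-piece pull.
# Assumes each flip takes exactly one time unit.
WORKERS = 4
CARDS = 20

# Push: each person flips the entire batch of 20 before handing it on,
# so the batch crawls through the line one full stage at a time.
push_time = WORKERS * CARDS  # 4 stages x 20 flips each

# Pull (one-piece flow): the line becomes a pipeline. The first card
# takes WORKERS units to reach the end; after that, one card finishes
# per time unit.
pull_time = WORKERS + (CARDS - 1)

print(f"push (batch of {CARDS}): {push_time} units, WIP = {CARDS}")
print(f"pull (one at a time):  {pull_time} units, WIP = {WORKERS}")
```

The idealized ratio (80 units versus 23) is even more dramatic than the observed one minute versus 30 seconds, since real hand-offs add overhead, but the direction matches: smaller batches mean less waiting and faster total completion.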
Lastly, the team was asked how many things were in process, how big they are, and whether they have value to the customer. The answers: around 70-100 things are in process at any one time, of highly varying size, and all of them have value to the customer.
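Those numbers invite a back-of-the-envelope Little's Law estimate: average lead time = WIP / throughput. The WIP range comes from the team's answer above; the throughput figure below is a made-up assumption for illustration, not a team measurement.

```python
# Little's Law sketch: average lead time = WIP / throughput.
# WIP of 70-100 items is from the retro; 5 finished items per week
# is an invented throughput, purely for illustration.
throughput_per_week = 5

for wip in (70, 100):
    lead_time_weeks = wip / throughput_per_week
    print(f"WIP {wip:>3} -> average lead time {lead_time_weeks:.0f} weeks")
```

Under that assumed throughput, a story entering the system waits months on average before it ships, which is the intuition behind the "limit WIP" actions that follow.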
Generate Insights
We used the Force Field Analysis activity to look at ways to limit the work in progress and to shorten the value stream. The teams of four discussed what would support each idea and what might inhibit it. Afterwards the whole team shouted out answers while I wrote them on a huge sticky. Ideas were reinforced by darkening the lines underneath, with arrows pointing to the middle. The big supporters for limiting WIP were: don't go straight to ENG for stuff, swarm on a smaller number of features, include QA, and have visual tracking. Inhibitors were: too much process, too much switching, and high-severity production issues. For the value stream, the supporters were: limit WIP and break features into MMFs. Inhibitors were: external dependencies and tech leads unavailable due to meetings.
Decide What to Do
Even I was reaching my saturation point by this time, and the planned Circle of Questions activity seemed out of context. I tried to just blow by it (bad), and someone called me out to go through the list and pull out specific things to do. The team decided that visual tracking, formalizing hand-offs, breaking features into MMFs, and swarming on fewer things would be the best actions. The next question, naturally, was how to do all this, which I didn't want to get into straight away. I should have stuck to my guns and put off answering until the next meeting, as it was getting late and you could feel the low energy in the room. There was a healthy discussion about getting developers to run tests before checking in, and I suggested a quick workshop/walkthrough, which was canned.
Closing
The activity planned for this phase was Return on Time Invested (ROTI), where people vote on a scale of 0 to 4 on the benefit received for the time spent. We didn't actually vote that way, instead going for a thumbs up/down vote. Almost everybody was at thumb-sideways. The thumbs-down people were asked what they needed that they didn't get. Keeping the language simple and not speaking in agile mumbo jumbo was called out, as well as getting to the point quicker. Fewer exercises and half-hour weekly retros were also suggested. I also wanted to ask those between thumb-sideways and up what the benefit was, but we didn't get that far. Overall, I would agree and grade myself about a B-/C+ on the effort.
Actions
- Work with product to break features into minimal, marketable chunks.
- Value stream analysis
- Look for largest amount of waste
- Look for unnecessary process
- Enable engineers to run the test suites and habituate the process