Application Services provides a platform for IT Development and Maintenance professionals to discuss and gain insights into best practices, process innovations and emerging technologies that will shape the future of this profession.


October 30, 2014

How can 'Kanban' help in Agile development?

One reason many agile projects miss their release targets is that the team signs up for too many work items without moving each item to the 'Done' state. Items can stall because they are waiting on a dependency with other work items, pending with the testing team, or blocked by environment or build-break issues. During the last weeks of a sprint there is always a rush to claim the maximum story points committed for the sprint, so the team hastily starts new work items even though earlier ones are still pending. For example: the developers have completed coding for a user story and it is awaiting testing, but the testing team cannot test it because of an environment issue. The developers, in order to claim maximum story points, continue to sign up newer user stories anyway. At the end the team might achieve its target velocity for the sprint, yet there is no potentially shippable product. Velocity alone is therefore often not sufficient to judge the success of a sprint, and we need other metrics to track progress towards the sprint goal.

Teams can overcome this kind of scenario through Kanban. Kanban enforces continuous improvement and lean practice through metrics such as WIP, cycle time and throughput, which are transparent and actionable. Transparency here means visibility into the team's progress. Let us look at the definitions of these metrics as they apply to agile methodology.

WIP (Work in progress): all the tasks that lie between the 'To Do' and 'Done' states on the sprint task board.

Cycle time: the total time elapsed for a task to move from the 'To Do' state to the 'Done' state on the sprint task board.

Throughput: the number of 'Done' work items per unit of time (such as a day, week or iteration).
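As a rough illustration, these three metrics can be computed from a simple task log; the data and field names below are invented for the sketch:

```python
from datetime import date

# Hypothetical task log: each task records when it left 'To Do' and
# when it reached 'Done' (None means it is still in progress).
tasks = [
    {"id": "T1", "started": date(2014, 10, 1), "done": date(2014, 10, 4)},
    {"id": "T2", "started": date(2014, 10, 2), "done": date(2014, 10, 8)},
    {"id": "T3", "started": date(2014, 10, 3), "done": None},
    {"id": "T4", "started": date(2014, 10, 5), "done": None},
]

# WIP: tasks between 'To Do' and 'Done' on the board.
wip = sum(1 for t in tasks if t["done"] is None)

# Cycle time: elapsed days from leaving 'To Do' to reaching 'Done'.
cycle_times = [(t["done"] - t["started"]).days for t in tasks if t["done"]]
avg_cycle_time = sum(cycle_times) / len(cycle_times)

# Throughput: 'Done' items per unit of time (here, per 10-day window).
throughput = len(cycle_times) / 10

print(wip, avg_cycle_time, throughput)  # 2 4.5 0.2
```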

These three metrics are related through 'Little's law' which states that:


Average cycle time = Average work in progress/ Throughput


Thus a change in any one of these parameters results in a change in the others. If we want to decrease the cycle time of tasks, we need to decrease the WIP. To bring about a positive change we therefore need not undertake a complex transformation; we simply control the number of things being worked on at any point in time. For example: in the scenario above, when there is an environment issue at the testing team's end, instead of signing up for newer user stories the developers should help the testing team resolve the issue, and perhaps even test the user story themselves, so the testing team can clear the backlog that piled up because of the issue. This is the whole idea of Agile: team spirit and a self-organizing, cross-functional team whose members can switch roles when the situation demands it in order to achieve team goals, instead of the typical handoffs and blame game of traditional development models.
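Little's law can be checked with a quick calculation (illustrative numbers only); note that throughput must be expressed in the same time unit as the cycle time:

```python
def avg_cycle_time(avg_wip, throughput):
    # Little's law: average cycle time = average WIP / throughput.
    # Throughput here is in done items per day, so the result is in days.
    return avg_wip / throughput

# With throughput held at 2 items/day, halving WIP halves cycle time:
print(avg_cycle_time(8, 2))  # 4.0 days
print(avg_cycle_time(4, 2))  # 2.0 days
```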


Are our agile estimates really bad?

A frequent challenge faced by Agile teams is 'How do we improve our estimates?' The question usually arises when Agile teams are unable to deliver on their commitments for a sprint. In one of the projects I worked on, the team used the Fibonacci series for story point estimation of user stories. Story points were estimated during sprint planning meeting 2, while in sprint planning meeting 1 the team agreed with the product owner on which user stories could be committed for that sprint. During sprint planning meeting 2 the team would break each user story into the individual tasks necessary to realize it, and then estimate the hours needed to complete each task. If the total hours estimated for the sprint were less than the team's available capacity, they would sign up more user stories from the product backlog; if the total hours were more, the team would drop the lowest-priority items for that sprint.
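The sign-up logic from sprint planning meeting 2 might look like the following sketch; the story names, task hours and the 40-hour capacity are all made up:

```python
# Candidate stories for the sprint, highest priority first, each broken
# into estimated task hours (all figures are hypothetical).
backlog = [
    ("story-A", [8, 6, 4]),
    ("story-B", [10, 5]),
    ("story-C", [6, 6, 6]),
    ("story-D", [12]),
]

capacity_hours = 40  # team's available hours for the sprint

committed, total = [], 0
for story, task_hours in backlog:
    hours = sum(task_hours)
    if total + hours <= capacity_hours:
        committed.append(story)
        total += hours
    else:
        break  # drop the lower-priority items that do not fit

print(committed, total)  # ['story-A', 'story-B'] 33
```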

Although I have listed both scenarios above, the team almost always experienced the former: they would over-commit on user stories and every time had to drop a few of the lowest-priority ones at the end of the sprint. In our retrospectives we all agreed that our estimates were the problem and that we needed to improve them so we could stop missing sprint targets. In one retrospective the team decided to spend more effort on estimating "accurately". But we quickly realized this was not yielding the expected results: the more effort we spent on improving the estimates, the worse they became. This was also confirmed by Mike Cohn's graph of the impact of effort on estimation accuracy.


[Graph by Mike Cohn: impact of effort on estimation accuracy]

The graph clearly shows that beyond a certain point of effort, accuracy does not improve; it actually gets worse. The team also learnt that estimates are not commitments and can never be fully accurate. We just needed to come close to the actuals.

In another retrospective we decided to map hours to story points. Based on our data for the previous 7-8 iterations, we found that on average 1 story point equaled about 5 hours, and this conversion factor was then used to estimate the tasks. The result was no different, so we discarded that approach too.
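Deriving such a conversion factor from past iterations is straightforward; the figures below are invented to mirror the roughly 5 hours per point mentioned above:

```python
# Historical data per iteration: (completed story points, actual hours).
# These numbers are illustrative, not from the project described here.
history = [(20, 98), (18, 95), (22, 110), (21, 100), (19, 97),
           (23, 118), (20, 102)]

total_points = sum(points for points, _ in history)
total_hours = sum(hours for _, hours in history)

# Conversion factor: average hours of work behind one story point.
hours_per_point = total_hours / total_points

print(round(hours_per_point, 2))  # 5.03
```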

The team felt that, more than 'estimating accurately', the real issue was pressure from the team leads to improve estimation and from management to meet sprint targets. As the business analyst on the team I decided to help by doing some number crunching to identify what the issue really was. I gathered the estimates and the actual hours spent on user story development over the past 15-20 iterations, and found that in more than 50% of these iterations the estimates were close to the actual hours, with a gap of only about 10-15%; i.e., if a task was estimated at 10 hours, the actual effort was about 11-11.5 hours. In about 30% of the iterations the gap was 20-25%. So I concluded the following:

- The team was doing 'OK' at estimation, and there was no need to panic or worry too much about improving the estimates; after all, these are estimates and not commitments.

- The plans were overly ambitious, which made the team commit with an insignificant buffer or none at all. No sufficient buffer was planned at either the sprint or the release level, which pushed the team to sign up for more.

- The team should understand its true capacity based on its velocity over previous sprints.
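The number crunching behind these conclusions can be sketched as follows; the estimated/actual pairs are invented to mirror the proportions described above:

```python
# Estimated vs actual hours per iteration (illustrative numbers only).
iterations = [
    (100, 112), (90, 100), (110, 124), (95, 106), (80, 90), (120, 134),
    (100, 122), (90, 110), (110, 137),
    (100, 140),
]

def gap_pct(estimated, actual):
    # Percentage by which the actual effort exceeded the estimate.
    return (actual - estimated) / estimated * 100

gaps = [gap_pct(e, a) for e, a in iterations]
close = sum(1 for g in gaps if g <= 15)         # within ~10-15% of estimate
moderate = sum(1 for g in gaps if 15 < g <= 25)  # gap of ~20-25%

print(f"{close}/{len(gaps)} iterations within 15%, "
      f"{moderate}/{len(gaps)} within 15-25%")
```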

What did we learn from this exercise?

- Estimation involves not only the time factor for a user story but also the complexity involved, which is an unknown at the time of the estimation exercise. Estimates should therefore be updated as understanding improves, and properly buffered.

- Estimates are not commitments, and we can never estimate with 100% accuracy.

- There will be discrepancies between points and hours across sprints, and that is perfectly fine.

- The team should be confident about its estimates.
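One simple way to read 'true capacity' from velocity over previous sprints, as suggested above, is a rolling average; the velocities here are hypothetical:

```python
# Completed (not committed) story points over recent sprints.
velocities = [21, 18, 23, 20, 19, 22]

# A rolling average of the last three sprints smooths out one-off
# spikes and dips, giving a realistic figure to commit against.
window = 3
true_capacity = sum(velocities[-window:]) / window

print(true_capacity)  # points the team can realistically commit
```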

