This was the first Meetup since the Kanban Coaching Exchange was taken over by Dan Brown and Helen Meek from Ripplerock. It is meant to be a place where people can learn from each other and discuss things with others who are using similar practices.
The meeting opened with a presentation by Dan Brown and then moved into a discussion of what format we would like these meetings to have. The presentation was on “How do you start putting your data back to work for you?” How can you improve your forecasting and planning, and build better stakeholder trust? In this session, Dan explained ‘#NoEstimates’ and what you can do to replace the hidden value in estimation, while improving your predictability and allowing better conversations between stakeholders and Kanban teams. When you estimate how long a project will take through sizing and estimation, there is only a 60% chance that you will be in the right range, meaning the estimate is inaccurate 40% of the time. How can this be improved?
Dan mentioned Monte Carlo simulation, a technique that uses existing data to help calculate how long a similar project may take in the future. LeanKit have actually built a Monte Carlo forecasting engine on top of an existing kanban tool, and there is a Jira integration, so you can flow your Jira issues through LeanKit in order to run Monte Carlo forecasts. Troy Magennis has also presented on how to use this technique to give better estimates: http://vimeo.com/m/43479019.
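To make the idea concrete, here is a minimal sketch of a Monte Carlo forecast in Python. It is not LeanKit's engine, just an illustration of the core technique: sample past weekly throughput (items completed per week) with replacement until the backlog is empty, repeat many times, and read off percentiles. The throughput history and backlog size are hypothetical.

```python
import random

def forecast_weeks(throughput_history, backlog_size, trials=10000, seed=42):
    """Monte Carlo forecast: how many weeks to finish `backlog_size` items,
    sampling weekly throughput from historical data with replacement."""
    rng = random.Random(seed)
    results = []
    for _ in range(trials):
        remaining, weeks = backlog_size, 0
        while remaining > 0:
            remaining -= rng.choice(throughput_history)  # one simulated week
            weeks += 1
        results.append(weeks)
    results.sort()
    # Percentile answers: "with X% confidence we finish within N weeks"
    return {p: results[int(trials * p / 100) - 1] for p in (50, 85, 95)}

history = [3, 5, 2, 6, 4, 5, 3, 4]  # items completed per week (made up)
print(forecast_weeks(history, backlog_size=60))
```

Instead of a single-point estimate, you get a range with confidence levels, which is exactly the kind of answer that supports better stakeholder conversations.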
Modelling and Monte Carlo simulation allow rapid (and repeatable) “what-if” risk analysis to be performed on proposed or ongoing Lean/Kanban projects. This analysis leads to reliable forecasts of delivery dates, cost, and staffing requirements, and to informed risk management. It provides options that minimize cost and delivery time, whilst maximizing revenue for a project and portfolio.
Modelling and simulation give a platform for experimentation before and during a project. Modelling your development process and project allows you to simulate possible delivery date/cost outcomes thousands of times, then compare the results to quickly find the model inputs that have the greatest impact on the final result (cost, date, or cycle time), and manage your project accordingly.
It was a very interesting talk, but how much data do companies actually have to run this modelling? It would be good to go through a real situation to see how they did it. Is IT the only department not using modelling?