...

Some teams call this event a “Show and tell”. And we do “Show” and “Tell” what we’ve done (cross reference Definition(s) of Done¹). But that’s only part of what this event is for… The 3 pillars of Scrum are Transparency, Inspection and Adaptation. Showing and telling only addresses the first of those pillars.

It’s called a “Review” because it is primarily an opportunity for the team to invite stakeholders to give feedback on the work the team has done in the iteration and on the plans for the coming one. This is the Inspection aspect.

It is designed to be a highly engaging session, full of two-way communication. And it is meant to be transparent - highlighting any problems encountered, how those problems were handled, and how the team has tried to stop them happening again.

It is not designed to be a one-way, ‘Here’s what we’ve done’ status report. Any Review that feels more like that kind of event is an indication that the team think they have all the answers (a lack of intellectual humility) and that stakeholder feedback and advice is irrelevant (not embodying User-Centred Design). A true Agile anti-pattern.

¹ Done means more than just code that has been deployed to Production. Other things (and there are more, once you start thinking more laterally about Done) include:

  • here are our estimates on value / effort for this work (Epic, or coming User Story) - we’ve Done the estimation, but it might prompt a discussion about how it changes stakeholder perception of the value of doing the work. Work is sometimes more valuable when it is perceived to be small, but if we highlight that it is large, the perception of value might change - and vice versa (springing to mind here is the work we did during COVID to introduce 10.1 and 10.2 options in a drop-down list, which stakeholders thought would be a big change in TIS but which actually only took minutes; or the various things brought up in Trainee Review that team members race to see whether they can fix while the Review is still going on!).

  • results of Spikes and Investigations are also Done - the value to the user is the prompt for them to have an informed discussion with us: does it materially affect the value of doing the work, does it suggest different work to be considered, etc.

  • maximising the amount of work not done is a core Agile Principle - anything we’ve been able to class as Not Required is worth highlighting to stakeholders at Review.

This helps with:

  • getting internal understanding of work completed
    (transparency);

  • getting wider support for any impediments the Delivery Manager was unable to remove
    (transparency and inspection);

  • managing the expectations of stakeholders
    (transparency and adaptation);

  • getting advice on future prioritisation and direction based on the work completed
    (adaptation);

  • getting group feedback to inform future decisions
    (adaptation);

  • showing stakeholders the iterative nature of Agile team working - they give feedback in one Review, and they can often see the team act on that feedback in the next, or shortly afterwards
    (transparency and adaptation).

Meeting length:

Up to 2 hours (plenty of time to really discuss things with stakeholders).

Anything less than an hour indicates either a lack of focus on feedback, or taking on tickets that were too large to finish in two weeks.

Who leads:

The whole team leads the Review (i.e. the Product Manager and Delivery Manager as well as the rest of the Team). The best people to talk about any piece of work are the people who did the work!

...

The purpose is to:

  1. Agree as a team at a high level how long something is going to take (yes, we’re starting to talk about “time” here, but as a range of iterations given the number of unknowns). This is for the purpose of opening a dialogue with stakeholders who need to arrange activities associated with deploying the work. Obviously, the initial range of iterations we come up with will have a large element of guesswork and is likely to be quite wide to begin with. The process is designed to be reused to increase certainty as work progresses.

  2. Mind-map the work - we often use Coggle for this - teasing out all the elements. With 3 amigos / the whole Product Team, this mindmapping is the fastest way to shed light on all aspects of the work in one go; make the whole team aware of the detail and extent of all those aspects; and have an open team discussion.

  3. Have an initial think about how we would vertically slice this work into user-valuable increments - MVP and beyond.

  4. Estimate, in a range of iterations, the amount of work likely involved in each slice (this may require a more granular estimation of the work on each component part of the ‘slice’, totalling the estimates up based on what can and cannot be carried out in parallel - see the worked sketch after this list). Consider team availability, other work to focus on, being over-ambitious and such like.

  5. Come up with a ranged estimate for the next increment (MVP if you’re starting, or the next Product increment otherwise), and give a supporting narrative.

  6. For example, a new piece of work might be estimated to take between 7 and 13 iterations, assuming confidence levels between 60% and near 100% (we can never be 100% confident!):

    1. the lower number is an optimistic estimate: it assumes almost the whole team works on it, barring LiveDefects, maintenance work etc. Another way of looking at it is that this is an estimate with a 60% confidence level. [Beware that human nature gives us a natural positivity bias - please fight against this when coming up with the optimistic estimate.]

    2. the higher number is a pessimistic estimate: it assumes only a sub-set of the team works on it, in parallel with other work / the work emerging as more complex than first thought / compensating for our natural positivity biases / etc. Another way of looking at it is that this comes with a near 100% confidence level.

  7. This exercise is useful for managing the expectations of stakeholders who need lead time to help plan the roll-out of the Service. It can be repeated in order to home in on a more precise figure as we learn more and more about the work, and especially once we start working on the first ticket.
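To make the arithmetic in steps 4-6 concrete, here is a minimal sketch of how the per-element estimates from the mindmap might be rolled up into a ranged estimate. The component names, numbers and the idea of tagging parallel work streams are purely illustrative assumptions, not taken from a real Epic; the team’s actual roll-up is a conversation around the mindmap, not a script.

from __future__ import annotations
from dataclasses import dataclass

@dataclass
class Component:
    """One element teased out of the mindmap (names and numbers are hypothetical)."""
    name: str
    optimistic: float    # iterations if things go well (~60% confidence)
    pessimistic: float   # iterations allowing for emerging complexity (near 100% confidence)
    parallel_group: int  # components in the same group can be worked on at the same time

def ranged_estimate(components: list[Component]) -> tuple[float, float]:
    """Sum the estimates, counting only the longest component within each parallel group."""
    groups: dict[int, tuple[float, float]] = {}
    for c in components:
        lo, hi = groups.get(c.parallel_group, (0.0, 0.0))
        groups[c.parallel_group] = (max(lo, c.optimistic), max(hi, c.pessimistic))
    return (sum(lo for lo, _ in groups.values()),
            sum(hi for _, hi in groups.values()))

epic = [
    Component("API changes", 2, 4, parallel_group=1),
    Component("Front-end forms", 2, 3, parallel_group=1),  # can run alongside the API work
    Component("Data migration", 3, 5, parallel_group=2),
    Component("Bulk upload support", 2, 4, parallel_group=3),
]
lo, hi = ranged_estimate(epic)
print(f"Estimated range: {lo:.0f}-{hi:.0f} iterations "
      f"(~60% confidence at the low end, near 100% at the high end)")

Run as written this prints the 7-13 iteration range used in the example above; in practice the figures would come straight from the annotated mindmap and would be revisited as the work progresses.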


An example Coggle mindmap, for:

  • teasing out the initially known complexity (there’s always emerging complexity to consider as well);

  • discussing as a team;

  • estimating each component element, including whether elements can be worked on in parallel, or whether there are natural dependencies;

  • summing the total to get an estimate of the range of iterations we initially think it might take. Note the high levels of uncertainty at this point.


High-level estimation addresses the challenge of how to effectively convey the ‘uncertainty’ when predicting effort for Epics - uncertainty that stakeholders want to understand in terms of ‘time’, so that they can plan their activities around when we think we’ll complete something.


A way of visualising the team’s estimate at the end of the exercise.

The team are communicating to stakeholders confidence levels of 60% to near 100%. In this example the team estimate between 7 and 13 iterations to complete the work.

If stakeholders plan for the work completing in 7 iterations, they need to be clear that they have taken on considerable risk that it won’t, or that they will only be getting the highest priority elements at that point (and the rest will follow on after that).

...