Because Kanban focuses on visualising the workflow, highlighting work in progress and optimising throughput between steps, it is important that everyone is clear about two things:
how we define when something is fit to be pulled from one step in the workflow to the next;
how we respond as a team to common day-to-day scenarios.
One tool for achieving this is defining 'policies' as reference points for both of the above. These are designed to be organic (like the Definition of Done that we're already familiar with).
Following the team workshop around these policies, I've set them out here as a draft for you all to review, comment on or modify.
Policies by workflow step / scenario. For each step below, the policy points are listed together with a description of how we apply them.
Backlog
- What is the Backlog? A record of conversations (ideally 3 amigos conversations) about things to be worked on in the next 3-6 iterations.
- Who populates the Backlog? Anyone, as a result of having the conversation. Remember that a Jira ticket is the result of the 3 Cs process: card, conversation, confirmation.
- Should the Backlog be ordered? Everything should be ordered, both product-development and tech-improvement tickets. As a team we need to align these two sides of the backlog wherever possible (in Refinement meetings, Planning and Stand ups). Determining the order should follow a process (https://foldingburritos.com/product-prioritization-techniques/) that takes into account both the Value (to the customer and the business, the risk being mitigated, or the opportunity being enabled) and the Effort (complexity of the work, time it will take to complete, learning opportunity to facilitate, etc) - see the sketch below.
- What should be the size of the Backlog? Enough tickets for the team to work on for the next 3-6 iterations, maximum, with the tickets for the current and next iteration Refined. Our current velocity is c.13 tickets per iteration, so the Backlog should be between 39 and 78 tickets. It is currently 863, and needs radical pruning (duplicate tickets, placeholder tickets, and old tickets that we 'thought we might work on at some point' but that will never actually get prioritised).
- What goes into the Backlog? New skeleton tickets to be worked on in the next 3-6 iterations, plus tickets that have been through Refinement (and flagged as Refined). Periodically, the team will take Refined tickets from the top of the Backlog and add them to Ready for Development.
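By way of illustration only (not a prescribed process), the sketch below scores some hypothetical tickets by value over effort to suggest an ordering, and checks the backlog size range implied by our velocity; the ticket keys and scores are made up.

```python
# Illustrative only: order a backlog by a simple value/effort score and
# sanity-check its size against velocity. Tickets and scores are hypothetical.

VELOCITY = 13          # c.13 tickets per iteration (from the policy above)
HORIZON = (3, 6)       # keep 3-6 iterations of work, maximum

tickets = [
    {"key": "TIS-1", "value": 8, "effort": 3},   # value/effort are example scores (1-10)
    {"key": "TIS-2", "value": 5, "effort": 5},
    {"key": "TIS-3", "value": 9, "effort": 2},
]

# Highest value-for-effort first
ordered = sorted(tickets, key=lambda t: t["value"] / t["effort"], reverse=True)
print([t["key"] for t in ordered])                # ['TIS-3', 'TIS-1', 'TIS-2']

low, high = (VELOCITY * n for n in HORIZON)
print(f"Backlog should hold roughly {low}-{high} tickets")   # 39-78
```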
Refinement
- Which tickets should be refined? Anything at the top of the Backlog that has not been flagged as Refined is a prime candidate to move to the Refinement column. The Refinement column is also the designated column for any tickets generated as a result of a LiveDefect, to encourage us to refine and prioritise them and mitigate the LiveDefect recurring. When the number of tickets in Refinement exceeds the WIP limit as a result of tickets created following a LiveDefect, this should trigger an immediate ad-hoc Refinement session (straight after stand up is often a good time for it).
- How should we approach estimation? While we no longer formally estimate with Story points, we need some degree of estimation in order to decide whether a ticket is too big and needs splitting. Some degree of uncertainty is acceptable in a Refined ticket, but where that uncertainty is too great, create one or more SPIKE tickets* to get enough confirmation to proceed. Indicate elements to investigate early in a ticket, and allow for uncertainty and positivity bias when estimating.
- What sized tickets should we end up with? Focus on small vertical slices (no more mega-tickets). Vertical slicing can be hard to do; it can take some thought and even lateral thinking. Consider this framework for how to vertically slice work if nothing obvious comes to mind. Agile working with Kanban is about finding a happy balance between tickets that are small enough to move through the steps in the workflow rapidly, but that also encourage (require?) collaboration between team mates with different perspectives.
- What constitutes a shared understanding of the ticket? For a ticket to be flagged as Refined, it needs to be detailed enough that anyone in the team could lead on it (and could work out who else to collaborate with to complete it). There should be enough detail for someone to determine how to create one or more experiments towards an overall solution.
- When should something be a ticket / Acceptance Criteria / Sub-task? A ticket needs to meet the INVEST criteria. Acceptance Criteria should be a list of statements that can be turned into tests to determine whether the ticket is complete; the Gherkin format popular in the BDD approach is explained with examples in this article (see the sketch below). Sub-tasks should focus on how to split the ticket into pieces that enable collaboration: "Good stories involve multiple people. Each subtask only involves one person."
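To show what 'Acceptance Criteria that can be turned into tests' might look like in practice, here is a minimal, hypothetical sketch in the Given/When/Then style; the submit_application function and its behaviour are invented for illustration.

```python
# Illustrative only: an Acceptance Criterion expressed as an executable test.
# "submit_application" and its behaviour are hypothetical examples.

def submit_application(form: dict) -> dict:
    # Stand-in implementation so the sketch runs; the real code would live in the service.
    if not form.get("email"):
        return {"status": "rejected", "reason": "missing email"}
    return {"status": "accepted"}

def test_application_without_email_is_rejected():
    # Given an application form with no email address
    form = {"name": "A. Trainee", "email": ""}
    # When the form is submitted
    result = submit_application(form)
    # Then the application is rejected with a reason the user can act on
    assert result["status"] == "rejected"
    assert "email" in result["reason"]
```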
Ready for Development
- Anyone can pick the ticket up and make a start: essentially, has the ticket been Refined correctly?
- Clear, testable Acceptance Criteria: allow time at the start to enable a TDD approach to development; as a team we have committed to TDD by default (see the sketch below).
- Appropriately Sub-tasked: write Sub-tasks specifically to encourage collaboration (and to get multiple perspectives on the ticket from the start, avoiding siloed working and the natural assumptions that are made when working that way).
- Flagged as Refined: check that the ticket has been Refined; it should have been flagged, and appear as a yellow ticket with a red flag on it on the Kanban board.
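As a reminder of what 'TDD by default' means in miniature, here is a hedged red/green sketch; the normalise_reference helper and its behaviour are hypothetical.

```python
# Illustrative only: the TDD loop in miniature, with a hypothetical helper.

# Step 1 (red): write the test first. Before the helper exists, this test fails.
def test_reference_is_normalised_to_uppercase_without_spaces():
    assert normalise_reference(" gmc 1234567 ") == "GMC1234567"

# Step 2 (green): write the simplest implementation that makes the test pass.
def normalise_reference(raw: str) -> str:
    return raw.replace(" ", "").upper()

# Step 3 (refactor): tidy the code while keeping the test green.
```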
Implementing
- How will we test this has added value for the customer? As well as using TDD as a coding approach, consider how the ticket will be tested at the end of development to confirm it is adding value to the customer.
- Timely documentation: consider at what point to start, contribute to and complete the associated documentation (it might be easier to do some as you go, rather than as a task at the end of development).
- #Terraformfirst: #Terraformlast is hard, and when collaborating on a ticket it will often result in issues before you get round to it.
- Considered for 'exception' monitoring: does this work require specific monitoring (and logging)? See the sketch below.
- Avoid scope creep: notify POs / highlight in Stand up when adding more than one Sub-task after Refinement.
- Leave Slack for #fire-fire: [ Joseph (Pepe) Kelly - did you mean that fire-fire tickets should go straight to Documenting, and the resolution of them should be handled in the dedicated fire-fire channel for that specific incident? ]
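Where a ticket does warrant 'exception' monitoring, the sketch below illustrates the general idea of logging failures with a stack trace so that a log-based alert could pick them up; the job name and logger setup are placeholders, not our actual configuration.

```python
# Illustrative only: surface failures to logging/monitoring rather than
# swallowing them. The job and logger names are hypothetical.
import logging

logger = logging.getLogger("bulk-upload-job")
logging.basicConfig(level=logging.INFO)

def process_record(record: dict) -> None:
    if "id" not in record:
        raise ValueError("record has no id")
    logger.info("processed record %s", record["id"])

def run_job(records: list[dict]) -> None:
    for record in records:
        try:
            process_record(record)
        except Exception:
            # logger.exception includes the stack trace, which is what an
            # exception monitor or log-based alert would pick up.
            logger.exception("failed to process record: %r", record)

run_job([{"id": 1}, {"name": "missing id"}])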
Peer Review (PR)
- Clear breakdown of what's been done and why: it is often useful to restate the problem and describe the solution developed.
- If you merge it, you own it too (who checks the pipeline?): tickets should normally be written in a way that ensures collaboration, so as a team we need to appreciate that everyone involved in contributing to the ticket 'owns' that code. By default, anyone approving a PR should therefore merge the code. Recent developments enable flagging a PR for auto-merging on approval, which is an option to consider versus creating a draft PR.
- Do not amend and force push after comments have been added: comments are left as a proxy for a two-way constructive conversation. Respond with gratitude for the comment, or justification for your approach, and request confirmation, review or merging.
- Use the PR as a record of discussions: discussions on a ticket often happen in a Slack chat or on a Slack/Teams call. Make sure these are included, linked or summarised on the PR, for a full version history.
- Respect the reviewers: reviewing code shouldn't be regarded as a tick-box exercise; you should be encouraging reviewers to properly review the code, not just sign it off. Be aware that when you necessarily have to submit a PR of >500 lines, you are asking a colleague for a potentially significant investment of time and effort.
Verifying on Stage
- Check all amended services are up and working: use Prometheus as a systematic review that the services touched by the PR are all up and working (see the sketch below).
- All E2E tests passed: don't ignore E2E test errors.
- Independent verification: early in the Implementation step, ask a colleague to be ready to do the verification, so they are already aware of the work ahead of time.
- Manual check: log into the Stage app and manually check the changes have worked as anticipated and not broken anything obvious.
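As one way of doing that systematic Prometheus review, the sketch below queries the standard up metric over Prometheus's HTTP API and flags any of the touched services that are down; the Prometheus URL and job names are placeholders, not our real configuration.

```python
# Illustrative only: ask Prometheus which scrape targets are down.
# The URL and job names are placeholders.
import requests

PROMETHEUS_URL = "https://prometheus.example.com"   # placeholder
SERVICES_TOUCHED = {"tcs", "profile"}                # placeholder job names

resp = requests.get(
    f"{PROMETHEUS_URL}/api/v1/query",
    params={"query": "up"},   # 'up' is 1 when a target's last scrape succeeded
    timeout=10,
)
resp.raise_for_status()

down = [
    r["metric"].get("job", "unknown")
    for r in resp.json()["data"]["result"]
    if r["value"][1] == "0"
]
for job in down:
    if job in SERVICES_TOUCHED:
        print(f"WARNING: {job} appears to be down on Stage")
```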
Verifying on Prod
- Add a how-to for testing: where testing the new code is not obvious, help the verifier out by adding a how-to (e.g. the need to push a specific job into Lambda, attach test materials, etc - see the sketch below).
- Independent verification: as for Verifying on Stage, request that a different team member carries out verification on Prod.
- Review customer value added: this is the final opportunity to recheck the value added from the customer perspective (of course, this should already have been considered at Refinement and PR at least, if not all the way through the dev process).
- Manual check: log into the Prod app and manually check the changes have worked as anticipated and not broken anything obvious; this may be more difficult in the Live environment.
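For a 'push a specific job into Lambda' style how-to, the kind of snippet a verifier might be given could look like the boto3 sketch below; the function name, region and payload are entirely hypothetical.

```python
# Illustrative only: manually invoke a Lambda to kick off a test job.
# The function name, region and payload are hypothetical.
import json
import boto3

client = boto3.client("lambda", region_name="eu-west-2")   # region is a placeholder

response = client.invoke(
    FunctionName="example-sync-job",        # hypothetical function name
    InvocationType="RequestResponse",       # wait for the result
    Payload=json.dumps({"dryRun": True}).encode("utf-8"),
)

print(response["StatusCode"])               # 200 means the invoke succeeded
print(response["Payload"].read().decode("utf-8"))
```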
Documenting
- Ensure the affected repo's README file(s) are up to date: this may already have happened during Implementation.
- Ensure the Dev handbook is up to date: has anything you've done necessitated an update to the Dev handbook, and was this done during Implementation, or is it outstanding?
- Add to the Review page on Confluence: by default, tickets will normally be ones where we want to present our work at Review for feedback. Ensure we're clear how to walk stakeholders through what's been done, what feedback we're looking for, and any questions we want to ask stakeholders in terms of advising us on next steps.
Done
- Check DoD if unsure.
- Check the Review page is complete.
See https://hee-tis.atlassian.net/wiki/spaces/NTCS/pages/1286635576/Definition+s+of+Done and 2021 Reviews.
Tech improvement
- Who, how and where: periodic reviews of the Tech improvement spreadsheet; periodic feeding in of top items to the Trello roadmap; periodic feeding in of current items to the Jira Backlog.
Kanban board mechanics
- Pull system, not push: Kanban is a PULL model, whereas Scrum is a PUSH model. Tickets should only transition between steps in the workflow at the point that a team member is going to work on them in the new step, not be pushed to the next step when a team member has finished with them in the previous step. E.g. a ticket moves from PR to Verifying on Stage when a team member has approved and merged the PR and is going to start Verifying on Stage. (See the sketch below.)
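Purely as a toy illustration of the pull mechanic and WIP limits (the limits shown are made up; the column names match our board), a ticket only moves when someone has capacity in the next column and pulls it:

```python
# Illustrative only: a toy pull-based board. WIP limits are made-up numbers.

board = {
    "Ready for Development": ["TIS-3", "TIS-1"],
    "Implementing": ["TIS-7"],
    "Peer Review": [],
}
WIP_LIMITS = {"Implementing": 3, "Peer Review": 2}   # hypothetical limits

def pull(board: dict, from_col: str, to_col: str) -> str | None:
    """Pull the next ticket into to_col, but only if its WIP limit allows it."""
    if not board[from_col]:
        return None                       # nothing to pull
    if len(board[to_col]) >= WIP_LIMITS.get(to_col, float("inf")):
        return None                       # respect the WIP limit: finish work first
    ticket = board[from_col].pop(0)       # take from the top of the previous column
    board[to_col].append(ticket)
    return ticket

# A developer with capacity pulls the next ticket into Implementing.
print(pull(board, "Ready for Development", "Implementing"))   # TIS-3
```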
*A SPIKE ticket is a time-boxed (usually only a few hours) piece of work to research an element of uncertainty around the Refinement of a ticket, e.g. what software could we use to achieve the goal of the ticket; will a certain approach work with the infrastructure we have in place, or will we need to modify that as part of the ticket; what is the simplest thing we can program that will convince us we are on the right track?
The term ‘Spike’ is believed to have originated in rock-climbing, where the act of driving a spike into the rock does nothing to your progress to the top, other than to anchor you on the path and enable future climbing.