Understanding the Jira Burn-down Chart

by Christine Thompson

The Jira burn-down chart tracks the total work remaining in the sprint and projects the likelihood of achieving the sprint goal. By tracking the remaining work throughout the iteration, a team can manage its progress and respond accordingly. Having spent some time getting to grips with the intricacies of the burn-down chart, I thought that I would share my understanding. 

The green line is the burn-up line, which indicates the time spent, i.e. the sum of all the hours logged against the tasks in the sprint. The red line is the burn-down, which indicates the time remaining, i.e. the total estimated time in the sprint minus all the hours that have been logged. You might well expect that, as the sprint progresses, the time burnt down on the red line will equal the time burnt up on the green line, but it seems that this is often not the case.

Here’s a snippet from a recent burn-down chart for one of my teams, mid-sprint:

[Image: burndown1]

You can see that the time indicated on the axis for the burn-down is 84 hours but the time indicated on the axis for the burn-up is 60 hours. So we have logged less time than we estimated we would need. For example:

[Image: timetrack1]

If this happens consistently, it tells us that we are overestimating the time we need for our tasks. However, the converse would perhaps be of more concern. For example:

[Image: burndown2]

Here, we have burnt up more work than we have burnt down. This is because you can log more hours than you have estimated, for example:

[Image: timetrack2]

However, you cannot have a negative time remaining. If you log more than you estimated, the remaining stays at zero. This means that the total represented by the time spent (green) line can exceed the total represented by the time remaining (red) line and indicates that work is taking longer than we estimated. 
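
To make the arithmetic concrete, here is a minimal sketch of the behaviour described above (my own illustration with made-up numbers, not Jira's actual implementation):

// Hypothetical tasks: original estimate and hours logged so far.
var tasks = [
   { estimate: 8,  logged: 5 },  // under-spent: 3h remaining
   { estimate: 4,  logged: 6 },  // over-spent: remaining floors at 0, not -2
   { estimate: 10, logged: 0 }   // not started
];

// Burn-up (green line): simply the sum of all hours logged.
var timeSpent = tasks.reduce(function (sum, t) { return sum + t.logged; }, 0);

// Burn-down (red line): remaining time, which can never go below zero for a task.
var timeRemaining = tasks.reduce(function (sum, t) {
   return sum + Math.max(0, t.estimate - t.logged);
}, 0);

console.log("Time spent (burn-up): " + timeSpent + "h");           // 11h
console.log("Time remaining (burn-down): " + timeRemaining + "h"); // 3 + 0 + 10 = 13h

In this toy example 11h has been logged but only 9h of the 22h total estimate has actually burnt down – the same effect as in the second chart above.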

What happens if we increase our estimate because we find out, part way through, that a task will take longer than originally thought? This is where we see the upward spikes in the time remaining (red) line, as indicated above. The start point of the ideal work (grey) line and the start point of the burn-down line do not adjust on the axis to reflect that the additional work has been added, so this exacerbates the difference between the apparent burn-down progress and the amount of work completed on the burn-up.  

If your estimates were exactly correct, then the apparent burn-down value plus the time added for the scope increases should equal the amount of work logged in the burn-up. This isn't the case in the example above (i.e. 35h + 20h + 4h still does not equal 70h) because we are also under-estimating the time that the tasks will take.

Given this understanding of what the lines on the graph reflect, it appears that there is useful information to be had here about the accuracy of estimating, especially where we are taking longer on tasks than expected, which may need some further investigation. However, I would only be concerned if this were a regular trend and was matched by a similar discrepancy in the story points estimated and completed within the sprint.  

One final thought is that we do, of course, want the burn-down to reflect reality and not just match the ideal progress line. So being honest about the progress of the work in a sprint is far more important and useful to the team than artificially logging work to achieve a perfect burn down.

A helping start when moving from Silo’d working

by Tristan Bagnall

I often find that when teams move from working as a collection of individuals with a shared purpose, but in different silos, they need a helping start on when to communicate with each other.

Naturally there are points in time where Scrum creates the opportunities, but those opportunities are not enough for a performing team.

I don’t intend to go into how to communicate in the Scrum ceremonies, but rather to share some thoughts on how to get teams talking effectively.

Here are the guidelines I give teams:

When you are about to pick up a new user story, have a quick chat with the team – all the cross-functional roles (coding, testing, documenting, deploying, user experience, etc.) – to:

  • … ask if there is anything you can do on a currently in progress user story to help get that completed first.
  • … make sure the story is still good – incorporating anything we may have learnt so far in the sprint. Include the PO in this.
  • … make sure the plan on how to complete the user story is still right – it might need updating based on what we have learnt so far in the sprint.
  • … make sure the tasks represent the plan to complete the user story and have enough information to enable anyone to pick them up.
  • … ensure there are the right people available to work on the user story. If you have specialisms in the team, make sure there is someone from the cross-functional roles to work on the user story, such as a tester to ensure a coder knows what to produce to pass tests.

This conversation would normally take anywhere from 30 seconds to 5 minutes, unless we have learnt something that causes the plan to change considerably.

When you are about to pick up a new task:

  • … ask other team members if they need any help on what they are working on, to help them get their task completed.
  • … check with another team member that your approach is going to be correct for the task at hand. For example:
    • a coder may check with a tester before implementing code to ensure they understand the tests, edge cases, negative paths, etc. that they need to cover.
    • a tester may check with a coder or user experience specialist to see how they are thinking of adding a button to the user interface, to ensure their automated tests would capture it.

This is a start for teams, and once they get into the flow they will adjust and improve, probably without even thinking about it.

So, when is Alfresco moving to GitHub?

“When are you moving to Git/GitHub/Bitbucket/decentralised version control software/…” must be the most frequent question I get, both from people inside and outside of Alfresco.

I usually start by explaining that Alfresco is on GitHub! The Alfresco organisation there currently owns 35 public repositories, and many of them are very active. All our mobile code has been there forever; for instance: (iOS + Android) * (SDK + App), as well as Aikau, Gytheio, the Alfresco SDK, etc. The recent announcement that Google Code is going to shut down triggered another batch of migrations to GitHub, such as Jive Toolkit, dropbox integration, and more.

Of course, what the question really means is: “When is the Alfresco Platform codebase moving? Not some petty integration or client app.” :-) Well, we do have a mirror of the Community Edition source on GitHub. But as for moving the whole code base there, the answer is: never — at least in its current form. Here are a few reasons why.

  • It doesn’t fit!
    GitHub has a limit of 1GB per repository. We hit that limit for the mirror, and had to filter out a few big folders. (Even Linux had to cut down the history of the repository to move there!) Of course, we used to commit big jar files in the source. We don’t any more but, even so, we still cross the limit. And it’s not just GitHub but all DVCS: holding all the history of the repository in your local copy has a big impact if you have a 10-year-old code base of one million lines of code!
  • Continuity with past releases
    We could make a clean cut, leaving the Old World in Subversion and having the Brave New World in Git. That would work, but it would make merging modifications from earlier maintenance branches very hard – and most of our commits are of that kind! We have 4 active service pack branches (4.1.x, 4.2.x, 5.0.x) and 18 active hotfix branches (from 3.4.14.x to 5.0.1.x). All fixes have to be merged forward, so that people don’t get regressions when upgrading. Doing this is tedious enough without having to switch software in the middle!
  • Access rights
    Our Subversion repository is currently a patchwork of access rights, and DVCS don’t support that — the idea being that you spread your software in smaller repositories, and manage the rights for each repository. Even in a given branch, we have folders which are public (Community Edition), others which are reserved to customers (Enterprise Edition) and others which are private (Cloud Edition).
  • Big team, big problems
    I don’t want to manage a Git repository where 50 people (and counting!) commit daily. DVCS are inherently more complicated than centralised systems (think of what identifies a revision, for instance), which certainly allows for more power, but also more headaches in big teams where not everyone has a PhD in Gitology!

However, don’t despair. We are not stuck in this situation forever! As you saw earlier this week, we are currently working hard to make the code more modular, and to extract from our big codebase some independent, consistent pieces that can be released separately. You’ve seen this already with Aikau and Share (but it also happened a while ago with Records Management, and all the integrations).

As we extract smaller chunks, it can make sense to move them over to GitHub, because they will then be more manageable and will have a small team of people responsible for them. The goal here is twofold: externally, to release more frequently to our audience, and internally, to allow more parallel developments to happen and be more agile.

I hope you are as excited as we are about this change – this is much more interesting than just changing our SCM software!

Adding Views to Filtered Search

by Dave Draper

Introduction

One of the Alfresco Solutions Engineers recently contacted me to ask how easy it would be to add a table view into the new filtered search page in Alfresco 5.0. Fortunately this page is built using the Aikau framework, so this is actually an incredibly easy task to accomplish. This blog will take you through the process. If you have trouble following the steps or just want to try it out then download the example extension module from here.

Extension Module Creation

The best practice for customizing Alfresco Share is to first create an extension module, and for Aikau pages this is a very straightforward process. First of all ensure that Share is running in “client-debug” mode.

Now login to Share and perform a search so that the filtered search page is displayed.

[Image: Filtered search page]

Open the “Debug” drop-down menu and select “Toggle Developer View”.

[Image: Debug Menu]

You should see a page that looks like this:

[Image: Developer View]

Now click on the link at the very top of the page that says “Click to generate extension JAR”. This will generate a JAR file containing all files required to customize the filtered search page.

Unpack the JAR file and open the “/alfresco/site-webscripts/org/alfresco/share/pages/faceted-search/customization/faceted-search.get.js” file in your editor of choice.

Now go back to the filtered search page (still in developer view) and click on the info icon for the main list. It should display a tooltip indicating that the widget selected has an id of “FCTSRCH_SEARCH_RESULTS_LIST”.

[Image: Selecting the Search List]

Copy the “Find Widget Code Snippet”; it should be:

widgetUtils.findObject(model.jsonModel.widgets, "id", "FCTSRCH_SEARCH_RESULTS_LIST");

Paste this into the “faceted-search.get.js” file that is open in your editor. This snippet of code is all you need to target a widget on an Aikau page (obviously each snippet of code is different for each widget on the page), and in this case you have targeted the main search results list.

Understanding the extension

Lists in Aikau are used to manage data and delegate the rendering of that data to one or more views. We want to add an additional view into the search page.

There is lots of information in the Aikau tutorial on creating views, so I’m not going to repeat that information here, but if you’re not familiar with defining a list then you should certainly work your way through the tutorial.

To add a new view you just need to “push” a new widget declaration into the “widgets” array of the search list’s “config” object. You can create any view you like, but as a relatively simple example you could create the following (this would be the complete contents of the faceted-search.get.js file):

var widget = widgetUtils.findObject(model.jsonModel.widgets, "id", "FCTSRCH_SEARCH_RESULTS_LIST");
if (widget && widget.config && widget.config.widgets)
{
   widget.config.widgets.push({
      name: "alfresco/documentlibrary/views/AlfSearchListView",
      config: {
         viewSelectionConfig: {
            label: "Table View",
            iconClass: "alf-tableview-icon"
         },
         widgetsForHeader: [
            {
               name: "alfresco/documentlibrary/views/layouts/HeaderCell",
               config: {
                  label: "Name"
               }
            },
            {
               name: "alfresco/documentlibrary/views/layouts/HeaderCell",
               config: {
                  label: "Description"
               }
            }
         ],
         widgets: [
            {
               name: "alfresco/search/AlfSearchResult",
               config: {
                  widgets: [
                     {
                        name: "alfresco/documentlibrary/views/layouts/Row",
                        config: {
                           widgets: [
                              {
                                 name: "alfresco/documentlibrary/views/layouts/Cell",
                                 config: {
                                    additionalCssClasses: "mediumpad",
                                    widgets: [
                                       {
                                          name: "alfresco/renderers/SearchResultPropertyLink",
                                          config: {
                                             propertyToRender: "displayName"
                                          }
                                       }
                                    ]
                                 }
                              },
                              {
                                 name: "alfresco/documentlibrary/views/layouts/Cell",
                                 config: {
                                    additionalCssClasses: "mediumpad",
                                    widgets: [
                                       {
                                          name: "alfresco/renderers/Property",
                                          config: {
                                             propertyToRender: "description"
                                          }
                                       }
                                    ]
                                 }
                              }
                           ]
                        }
                     }
                  ]
               }
            }
         ]
      }
   });
}

We’re pushing in a new “alfresco/documentlibrary/views/AlfSearchListView” that uses the table view icon (“alf-tableview-icon”) and has a label of “Table View” (which we could have localized if we wanted).

The view has two header cells (for name and description) and each item in the list is rendered as an “alfresco/documentlibrary/views/layouts/Row” widget containing two “alfresco/documentlibrary/views/layouts/Cell” widgets.

The first cell contains “alfresco/renderers/SearchResultPropertyLink” that renders the “displayName” of the item and the second is a simple “alfresco/renderers/Property” that renders the description.

Testing out the view

Re-package the extension files as a JAR file, copy that JAR file into the “share/WEB-INF/lib” folder and then restart the server. When you perform a search you should see your table view as an option.

[Image: Selecting the view]

Selecting the table view will show the search results as:

[Image: Search Table View]

You can add more columns to your table view, but it’s important to understand that the API used on the search page only retrieves a very small set of Node data. The data that is available for each node found is:

  • displayName
  • description
  • mimetype
  • modifiedBy (user display name)
  • modifiedByUser (username)
  • modifiedOn
  • name
  • title
  • nodeRef
  • path (within a site)
  • site (if the node is in a site)
  • size (in bytes)
  • tags
  • type (e.g. “document”)

If you want to display more than this limited set of data then there are a couple of options available.

One approach that you could take is to use the “alfresco/documentlibrary/views/layouts/XhrLayout” widget that allows an initial version of the view to be rendered for an item (using the limited data set) and when that item is clicked the full node data is requested and the “full” view is then rendered using that data. However, this widget is only a prototype and should only be used as an example.

Another option would be to extend the “alfresco/documentlibrary/AlfSearchList” widget to request the full data for each node before the view is rendered. This would naturally slow down the rendering of search results but would allow you to display any of the data available for that node.

Deprecations

The example used in this blog will work on 5.0, but you should be aware that some of the widgets referenced have now been deprecated in later versions of Alfresco. The deprecated widgets won’t be removed for a long time, but if you’re customizing 5.0.1 onwards then you should look to use the latest versions. All deprecations are listed in the release notes for Aikau.


Release Agility – Update I – Share source location changes

If you are a member of the Alfresco development ecosystem or you attended one of the recent Alfresco events, you might have heard of the Release Agility project, a major investment we are making at Alfresco to improve the pace, quality and modularity of our development and release process.

And if you monitored the Alfresco Public SVN and/or you build Alfresco from sources, you might have noticed some substantial changes in our source code layout, specifically related to the Share webapp location.

In case you were guessing, yes, these two items are tightly connected, and in fact the SVN changes are just one of the initial steps of the larger process improvements driven by the Release Agility project.

So, in the spirit of open communication and in pure Sinek-ian fashion, let me give you a general idea of the changes, starting with why we are doing this.


Why Alfresco 5.0.d will be a game changer for UI development

by Dave Draper

Introduction

It was recently announced that Alfresco 5.0.d has been released. There is lots of great stuff in this release for the Alfresco Community to enjoy – but the thing that I’m most excited about is that 5.0.d has a dependency on artefacts created from the independent Aikau GitHub project. This is a significant change because, for the first time, it is going to allow Community users to have access to the latest UI updates and fixes, rather than needing to wait until the next Community release.

The Benefits of an Independent Aikau

Before I explain how unbelievably easy it is to upgrade the version of Aikau that is used in 5.0.d, let’s cover some of the reasons why you should be excited about this change if you customize or make enhancements to Alfresco Share.

First and foremost, you can get an updated version of Aikau every week – this means you get access to the latest widgets, improvements and bug fixes almost as soon as they are implemented. Those enhancements can even come directly from the Alfresco Community as we’re very happy to merge your pull requests into Aikau, if they meet our documented acceptance criteria.

This means that you don’t have to passively wait anywhere between 6 months and a year for a new release that may or may not contain a fix that you might be hoping for. Now you have the opportunity to raise bugs (and optionally provide the fixes for them) as well as raising feature requests for inclusion in future development sprints. This gives the Alfresco Community unprecedented influence on updates to the UI code.

The Aikau project backlog is public so you can see what we’re going to be working on in the near future, and can give us an indication of what you’d like to see implemented, by raising new issues or voting on specific issues.

How to update Aikau in 5.0.d

The best part is that you won’t even need to re-build anything in order to get updated versions of Aikau… you just need to follow these 3 simple steps:

  1. Download the JAR for the version you want from the Alfresco Maven repository
  2. Drop it into the “share/WEB-INF/lib” directory
  3. Restart your server.

That’s it.

No really, that’s it… Surf supports multiple versions of Aikau and will always use the latest version available (although you can still manually configure the version used with the Module Deployment page if you want to).

The Aikau project even provides a Grunt task called “clientPatch” for patching Aikau clients, if you’ve cloned the GitHub repository and want to verify your own changes before submitting a pull request. You can even configure a list of different clients and then pick which one you want to update.

Summary

With the release of 5.0.d you can now take advantage of the latest updates to Aikau as they happen. Your installation of Alfresco Community can keep up with UI related bug fixes and your customizations can leverage all the new features and widgets that get released every week.

Alfresco Community 5.0.d is a great release and is going to revolutionize Share UI development.


Story Points?

by Tristan Bagnall

Recently I have been asked quite a bit about story points, so here are some of the answers I have given.

To give some context and scope around this post, here are some quick facts I have learnt about story points:

  • For story points we use an altered Fibonacci sequence: 1, 2, 3, 5, 8, 13, 21, 40, 100. (Some tools use 20 instead of 21.)
  • Story points are abstracted from elapsed or ideal time.
  • They are like buckets of accuracy / vagueness.
  • The larger they are, the more assumptions they contain, and the larger the probable complexity and therefore the effort.
  • They are numbers, allowing the use of an empirical forecast
  • They are used by the Product Owner and enable them to do forecasting – the PO should find themselves being asked, “When will I be able to have a usable checkout (or other feature)?” (see the sketch after this list).
  • They are used on user stories a.k.a. product backlog items (PBI)
    • Epics are included as user stories, even though some tools have adopted a taxonomy that suggests Epics are different to user stories.
  • They show the relative effort and complexity of a chunk of work.
    • They are a vector – 8 story points is 4 times as much effort as 2 story points (4 x 2 = 8)
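
As a rough illustration of the kind of empirical forecast this enables (the numbers here are invented for the example):

// Illustrative forecast: how many sprints until roughly 55 points of "checkout" work is done,
// based on the team's observed velocity (story points completed per sprint).
var recentVelocities = [21, 18, 24];   // hypothetical last three sprints
var remainingPoints = 55;              // hypothetical points left on the feature

var averageVelocity = recentVelocities.reduce(function (a, b) { return a + b; }, 0) / recentVelocities.length;
var sprintsRemaining = Math.ceil(remainingPoints / averageVelocity);

console.log("Average velocity: " + averageVelocity + " points/sprint"); // 21
console.log("Forecast: about " + sprintsRemaining + " more sprints");   // ceil(55 / 21) = 3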

There is plenty of literature out there about story points, estimation, etc. This is not meant to be exhaustive, but I would encourage everyone to read about them.

Why not use man days instead?

Everyone has an opinion on what a man day is – it is kind of mythical as it means so many things to different people.

Man days suggest that there is little complexity and that we are certain about what needs doing – after all, days can be divided into hours (24ths), so they seem very accurate.

Man days also start to give an expectation of a delivery date, even if they are padded out by saying they are ideal man days. However, once you start with ideal man days you then get into the confusing realm of what is ideal and what is really happening. For example:

  • 1 man day, might be 2 ideal man days as the person is only spending 50% of their time on a team (a 50:50 split).
  • But in reality they are context switching every 30 minutes, so the time split is really less than 50% – context switching is very expensive and leads to poor quality work. So the real split might be something like 40:40:20.
  • This suggests that 5 man days are really 2 ideal man days (as sketched below).
  • At this point normally a large debate starts, with boasts about how easily someone can context switch and these (or any) figures are wrong.
  • At the end of the debate there is a lack of clarity and therefore the man days have become meaningless.
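
As a toy version of the arithmetic above (the focus fractions are illustrative assumptions, not measured data):

// Convert ideal man days into elapsed days, given the fraction of time actually focused on the team.
function elapsedDays(idealDays, focusFraction) {
   return idealDays / focusFraction;
}

console.log(elapsedDays(1, 0.5));  // a 50:50 split: 1 ideal day takes 2 elapsed days
console.log(elapsedDays(2, 0.4));  // a 40:40:20 split: 2 ideal days take 5 elapsed days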

It is generally accepted that it is better to work out the effort and then measure how a team progresses through that effort.

Why the sequence of numbers?

As we continue to have conversations about an item of work we get to know more about it: we learn about its complexity, remove uncertainty and get an idea of the effort involved in delivering it. While we do this we break the work down into more manageable parts. Through all of this we are testing assumptions – removing them, correcting them or validating them.

As we work through all these moving parts we can become more accurate about how much effort is needed. While an item is still big, it carries a lot of assumptions, and because of that our estimate is pretty vague.

So how does this tie back to the sequence of numbers?

As we can be more accurate with the smaller items we need more buckets that are closer together to put the chunks of work into. Therefore the first part of the sequence is ideal: 1, 2, 3, 5, 8.

Then we have the large chunks with lots of assumptions – the epics – that need to be broken down before we can work on them: 40, 100.

Then we have chunks that we have become more familiar with, partially broken down, but are still too big: 13, 21.

How small should a user story be before I start working on it?

Another way of putting the question is

  • how much uncertainty should remain;
  • how many assumptions should be cleared up;
  • how much effort should there be;

before I pull a user story into a sprint or into progress on a kanban board?

This depends on several factors:

  • How much uncertainty are you comfortable with?
  • How will the remaining assumptions affect your ability to deliver the chunk of work?
  • What is the mix of the sizes you are pulling into a sprint?

As with all things agile there are exceptions and generalisations. One observation I have made is that many teams think they can take large chunks of work into a sprint; however, this means there are lots of assumptions still to be worked out, and lots of vagueness and uncertainty. This leads to a lack of predictability and consistency in what the sprint delivers.

Therefore I have normally advised that the largest single chunk going into a sprint should be 8 story points, but that there should always be a mix of sizes going into a sprint.

A helpful technique to start estimating

by Tristan Bagnall

Not sure how to start sizing stories?

A clever agilist once showed me a really useful technique to help teams start with story points and break the initial barrier on estimating. Here it is with my twist:

  1. Pick a story that you think is small, perhaps even the smallest on the wall –
    • well understood,
    • not many assumptions,
    • understood by all the team,
    • little effort to get done
  2. Find all the similar size stories and label them all as small
  3. Look for the next significant size up and label them medium
  4. Look for the largest stories and label them large
  5. Now go back to the small stories
  6. Mark all the small stories on a scale of small, medium and large. Try to think of the medium as about twice as large as the small, and the large as about three times as large as the small.
  7. Move on to the medium-sized stories and mark them all as small, medium and large
  8. Move to the large stories and mark them as small, medium and large. Try to think of the medium as being half (or less) the size of the large.

You should now have user stories labelled and marked:

  • Small – Small
  • Small – Medium
  • Small – Large
  • Medium – Small
  • Medium – Medium
  • Medium – Large
  • Large – Small
  • Large – Medium
  • Large – Large

We can use these to translate to story points:

                 Small       Medium      Large
  Small            1            2           3
  Medium           5            8          13
  Large        20 or 21*       40         100

* Depending on your tool you may find support for 20 or 21
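
If you want to capture that translation somewhere, it is just a small lookup table (a sketch of the grid above; swap in 20 for 21 if your tool prefers it):

// Translate a "label – mark" pair from the exercise above into story points.
var storyPoints = {
   "Small":  { "Small": 1,  "Medium": 2,  "Large": 3   },
   "Medium": { "Small": 5,  "Medium": 8,  "Large": 13  },
   "Large":  { "Small": 21, "Medium": 40, "Large": 100 }
};

console.log(storyPoints["Medium"]["Large"]); // a "Medium – Large" story is 13 points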

The role of the Scrum Master in empowering the team

by Christine Thompson

Self-directed, cross-functional teams

Here’s one of the things that I love about Scrum. Scrum is a simple system which allows people to be intelligent within it. It assumes that team members do the best they can within the constraints of the system they work within. If something goes wrong, it is generally assumed that it is the process that is at fault and not the people. The Carrot & Stick approach doesn’t motivate people in skilled work. Instead, autonomy, mastery and purpose do.

Scrum teams should be self-managed, self-organized and cross-functional. Team members take their direction from the work to be done and not from the Scrum Master or stakeholders. To empower the team, they need authority, resources and information. Scrum itself values team success over individual performance.

Each scrum team should be made up of team members with cross-functional, “T-shaped” skills. Whilst people may have an area of speciality, they also have a set of broader skills which overlap with those of their team-mates. If a skill-set is over-stretched, then other people need to step in and fill it. If a skill-set is missing, then we need to train people up.

Finally, reforming teams frequently is wasteful as it takes a long time to establish a performant team.

Powerless teams

So what are the characteristics of a powerless team? They may be heavily directed by the Scrum Master and/or influenced by people outside of the team. They’re not making their own decisions, they’re being told what to do. Perhaps they get no value from the daily stand-up: they address the Scrum Master and use it as a status update. Individuals either don’t participate or they argue about everything. People work in isolation and just “do their own thing”. Communication happens indirectly, via comments in tools instead of face-to-face.

Empowered teams

So what does an empowered team look like? The team share an understanding of their tasks and what it takes to complete them and they find their own answers without having to revert to other authorities. Individuals offer to help each other out whenever and wherever they can. The team values its interactions and conversations; all the meetings they hold are considered of value. Everyone shows respect to everyone else, everyone in the team is valued equally and the whole team works towards completing their goals together.

Role of the Scrum Master

So what’s the role of the Scrum Master in empowering the team? The Scrum Master is not the same as a Team Leader or Tech Lead. They are a “Servant leader” – they facilitate but do not manage the team. They may question and challenge things but they have no authority because the team manages themselves. It’s important that the Scrum Master sets the tone of the team in their own behaviours and they also provide the social grease on the distributed team, encouraging teams to use the thickest form of communication available at any time.

For example, the Scrum Master disempowers the team when they:

  • Assign or ear-mark tasks for individuals – team members should decide what they will progress next themselves, based on the information they are given in the scrum meetings and from their understanding of the sprint backlog
  • Influence the sizing of tasks – unless they are performing a dual role as an engineer on the team, the Scrum Master does not take part in, or steer the outcome from, the sizing discussions
  • Make design / implementation decisions for the team – again, unless they are also an engineer on the team, the team members themselves should be making decisions about how a task will be implemented
  • Interfere with the flow of the sprint – if the team has all the information it needs about the priorities and tasks in the sprint, then there is no need for the Scrum Master to influence people on what tasks they should be working on and when
  • Chase progress instead of chasing blockers – the Scrum Master is there to facilitate and not to manage the team. Asking for progress updates does not engender trust between themselves and the team. Such information should be available from the task board and the Scrum Master should only be chasing impediments.

Instead, some examples of what the Scrum Master might do to empower the team:

  • Reduce / eliminate “command and control” practices so that teams can run their own sessions openly and honestly; ensure that dysfunctional meeting participants are controlled
  • Ensure that barriers between team members are removed
  • Work with the team to remove impediments effectively
  • Protect the team from stressful outside influences and unnecessary interruptions
  • Prove a level of true commitment to the team – teams will not feel truly empowered until they see that the Scrum Master is serious about the role

Final thought

The ultimate goal of the Scrum Master is to coach and support the team to the point at which it becomes truly self-organising, autonomous and empowered. In the words of Nanny McPhee: “There is something you should understand about the way I work. When you need me but do not want me, then I must stay. When you want me but no longer need me, then I have to go. It’s rather sad, really, but there it is.”

ScrumBan

by Christine Thompson

Why ScrumBan?

I first started looking into ScrumBan when I was working with a team who had been doing a prolonged period of feature development and had a well-established Scrum process. Everything was working well for us until we started to transition into a phase of bug fixing and support. Suddenly we found that we had too much support to have predictable sprints. We could never finish a sprint because the support tasks couldn’t be sized accurately. Our priorities were constantly changing, as new issues came in, and we couldn’t lock-down the sprint. Things went into and out of the sprint and our burn-down started to look like an electrocardiogram.

I started to question then whether we should be looking at a continuous workflow and moving over to Kanban. This way we would be able to respond quickly to priority changes, limit our work in progress and work on tasks that would take more than a sprint. But Scrum had worked so well for us that I was reluctant to move away from it completely. This is when I hit on ScrumBan.

What is ScrumBan?

ScrumBan combines the framework of Scrum with the principles of Kanban. It is more prescriptive than Kanban, which has no roles or meetings, but is more responsive to change than Scrum, where change can only be accommodated at the sprint boundary. ScrumBan retains all the roles and meetings of Scrum but uses the Kanban continuous workflow board. The daily stand-up focusses on the flow of tasks across the board and reviews what it would take to move each one forward. The workflow can even include both support and feature work items on the same board, for teams who have to progress tasks in both areas at once. This is a neat alternative to dividing the team in half, where those who end up on the support team are generally less impressed than those who remain on the feature team! It allows people to vary the type of task they pick up each time and to share the support load.

Using the Kanban board allowed us to take advantage of some of the lean principles of limiting work in progress and eliminating blockers. We had a limited number of “Ready” slots available on the board, which the Product Owner would fill with the top priority items. Should priorities change, or new requests come in, these could be swapped in and out as needed. Ready-to-progress items were ordered by priority and the team was asked to try to progress the top items first, wherever possible. This was a real exercise in team empowerment and collaboration, and people worked hard to pick up priority items first, rather than those which just looked the nicest! As the Scrum Master, my role remained to facilitate this process and to help eliminate the blockers that arose.
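
Here is a small sketch of the board mechanics just described (the limit and the function names are illustrative, not taken from any particular tool):

// A limited "Ready" column that the Product Owner tops up by priority,
// and that the team pulls from the top of.
var READY_LIMIT = 5;
var backlog = [];   // prioritised by the Product Owner, highest priority first
var ready = [];     // the limited "Ready" slots on the board

function topUpReady() {
   while (ready.length < READY_LIMIT && backlog.length > 0) {
      ready.push(backlog.shift());   // pull the next highest-priority item onto the board
   }
}

function pullNextItem() {
   topUpReady();
   return ready.shift();             // team members take the top priority item first
}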

ScrumBan activities

We kept many of the Scrum ceremonies in place, relatively unchanged. The daily stand-up reviewed the progress on the board and allowed individuals to exchange information and offer assistance, even though they weren’t interdependent through working on shared user stories. The stand-up also gave us the opportunity to review our work in progress, to ensure that individuals weren’t progressing too many tasks in parallel and that nothing was blocked.

We retained a weekly backlog sizing meeting to review the new tasks in the backlog. The sizing exercise was still of value in allowing conversations to be held and shared understanding to be reached on what the tasks entailed, even though we weren’t tracking our velocity as before. The Product Owner maintained around 10 items in the to-do list at any time, pulling in more from the backlog as soon as the list ran low. As new, high priority items came in, the Product Owner would add these to the top of the to-do list, moving lower priority items back onto the backlog as necessary. The Product Owner was also always on hand to answer questions about the requirements around the issues that were being addressed and the test coverage necessary to extend our regression suite.

In our case, we didn’t hold a sprint review-type meeting, because the increments were limited to bug fixes. However, I see no reason why there shouldn’t be value in this type of meeting, for sharing the solutions that had been implemented. And, of course, retrospectives were as valuable as ever in reviewing our process and making improvements as the team felt necessary.

Final thoughts

The advantage of moving from Scrum to ScrumBan, rather than to pure Kanban, is the retention of much of the Scrum framework. For a team that needs to move from feature work to support and bug fixing, and back again, this provides a less onerous transition as many of the meetings and the general heartbeat of the team remain unchanged. Further, even for teams who only ever do support, I still see a great deal of value in having the Scrum roles and ceremonies in place, as I believe these add a lot that could be missed in a pure Kanban environment.