Why Alfresco 5.0.d will be a game changer for UI development

by Dave Draper


It was recently announced that Alfresco 5.0.d has been released. There is lots of great stuff in this release for the Alfresco Community to enjoy – but the thing that I’m most excited about is that 5.0.d has a dependency on artefacts created from the independent Aikau GitHub project. This is a significant change because, for the first time, it is going to allow Community users to have access to the latest UI updates and fixes, rather than needing to wait until the next Community release.

The Benefits of an Independent Aikau

Before I explain how unbelievably easy it is to upgrade the version of Aikau that is used in 5.0.d, let’s cover some of the reasons why you should be excited about this change if you customize or make enhancements to Alfresco Share.

First and foremost, you can get an updated version of Aikau every week – this means you get access to the latest widgets, improvements and bug fixes almost as soon as they are implemented. Those enhancements can even come directly from the Alfresco Community as we’re very happy to merge your pull requests into Aikau, if they meet our documented acceptance criteria.

This means that you don’t have to passively wait anywhere between 6 months and a year for a new release that may or may not contain a fix that you might be hoping for. Now you have the opportunity to raise bugs (and optionally provide the fixes for them) as well as raising feature requests for inclusion in future development sprints. This gives the Alfresco Community unprecedented influence on updates to the UI code.

The Aikau project backlog is public so you can see what we’re going to be working on in the near future, and can give us an indication of what you’d like to see implemented, by raising new issues or voting on specific issues.

How to update Aikau in 5.0.d

The best part is that you won’t even need to re-build anything in order to get updated versions of Aikau… you just need to follow these 3 simple steps:

  1. Download the JAR for the version you want from the Alfresco Maven repository.
  2. Drop it into the “share/WEB-INF/lib” directory.
  3. Restart your server.

That’s it.

No really, that’s it… Surf supports multiple versions of Aikau and will always use the latest version available (although you can still manually configure the version used with the Module Deployment page if you want to).

The Aikau project even provides a Grunt task called “clientPatch” for patching Aikau clients, if you’ve cloned the GitHub repository and want to verify your own changes before submitting a pull request. You can even configure a list of different clients and then pick which one you want to update.


With the release of 5.0.d you can now take advantage of the latest updates to Aikau as they happen. Your installation of Alfresco Community can keep up with UI related bug fixes and your customizations can leverage all the new features and widgets that get released every week.

Alfresco Community 5.0.d is a great release and is going to revolutionize Share UI development.




Story Points?

by Tristan Bagnall

Recently I have been asked quite a bit about story points; here are some of the answers I have given.

To give some context and scope around this post, here are some quick facts I have learnt about story points:

  • For story points we use an altered Fibonacci sequence: 1, 2, 3, 5, 8, 13, 21, 40, 100. (Some tools use 20 instead of 21.)
  • Story points are abstracted from elapsed or ideal time.
  • They are like buckets of accuracy / vagueness.
  • The larger they are the more assumptions they contain, the larger the probable complexity and therefore effort.
  • They are numbers, allowing the use of an empirical forecast.
  • They are used by the Product Owner and enable them to do forecasting – the PO should find themselves being asked, “when will I be able to have a usable checkout (or other feature)?”
  • They are used on user stories a.k.a. product backlog items (PBI)
    • Epics are included as user stories, even though some tools have adopted a taxonomy that suggests Epics are different to user stories.
  • They show the relative effort and complexity of a chunk of work.
    • They are on a ratio scale – 8 story points is 4 times as much effort as 2 story points (4 × 2 = 8)

There is plenty of literature out there about story points, estimation, etc. This is not meant to be exhaustive, but I would encourage everyone to read more about them.

Why not use man days instead?

Everyone has an opinion on what a man day is – it is kind of mythical as it means so many things to different people.

Man days suggest that there is little complexity and that we are certain about what needs doing – after all, days can be divided into hours (24ths), so they appear very accurate.

Man days also start to give an expectation of a delivery date, even if they are padded out by saying they are ideal man days. However, once you start with ideal man days you get into confusing realms of what is ideal and what is really happening. For example:

  • 1 man day, might be 2 ideal man days as the person is only spending 50% of their time on a team (a 50:50 split).
  • But in reality they are context switching every 30 minutes, so the time split is really less than 50% – context switching is very expensive and leads to poor quality work. So the real split might be something like 40:40:20.
  • This suggests that 5 man days are really 2 ideal man days.
  • At this point normally a large debate starts, with boasts about how easily someone can context switch and these (or any) figures are wrong.
  • At the end of the debate there is a lack of clarity, and therefore the man days have become meaningless.

It is generally accepted that it is better to work out the effort and then measure how a team progresses through that effort.

Why the sequence of numbers?

As we continue to have conversations about an item of work we get to know more about it: we learn about its complexity, remove uncertainty and get an idea of the effort involved in delivering it. While we do this we break the work down into more manageable parts. Through all this we are testing assumptions – removing them, correcting them or validating them.

As we resolve all these moving parts we can become more accurate about how much effort is needed. While an item is big it carries a lot of assumptions, and because of that we can only be vague about it.

So how does this tie back to the sequence of numbers?

As we can be more accurate with the smaller items we need more buckets that are closer together to put the chunks of work into. Therefore the first part of the sequence is ideal: 1, 2, 3, 5, 8.

Then we have the large chunks with lots of assumptions – the epics – that need to be broken down before we can work on them: 40, 100.

Then we have chunks that we have become more familiar with, partially broken down, but are still too big: 13, 21.

How small should a user story be before I start working on it?

Another way of putting the question is

  • how much uncertainty should remain,
  • how many assumptions should be cleared up,
  • how much effort should there be,

before I pull a user story into a sprint or into progress on a kanban board?

This depends on several factors:

  • How much uncertainty are you comfortable with?
  • How will the remaining assumptions affect your ability to deliver the chunk of work?
  • What is the mix of the sizes you are pulling into a sprint?

As with all things agile there are exceptions and generalisations. One observation I have made is that many teams think they can take large chunks of work into a sprint; however, this means there are lots of assumptions still to be worked out, and lots of vagueness and uncertainty. This leads to a lack of predictability and consistency in what the sprint delivers.

Therefore I have normally advised the largest single chunk going into a sprint is 8 story points, but there should always be a mix of sizes going into a sprint.

A helpful technique to start estimating

by Tristan Bagnall

Not sure how to start sizing stories?

A clever agilist once showed me a really useful technique to help teams start with story points and break the initial barrier on estimating. Here it is with my twist:

  1. Pick a story that you think is small, perhaps even the smallest on the wall –
    • well understood,
    • not many assumptions,
    • understood by all the team,
    • little effort to get done
  2. Find all the similar size stories and label them all as small
  3. Look for the next significant size up and label them medium
  4. Look for the largest stories and label them large
  5. Now go back to the small stories
  6. Mark all the small stories on a scale of small, medium and large. Try to think of the medium as about twice as large as the small, and the large as about three times as large as the small.
  7. Move on to the medium-sized stories and mark them all as small, medium and large.
  8. Move to the large stories and mark them as small, medium and large. Try to think of the medium being half (or less) the size of the large.

You should now have user stories labelled and marked:

  • Small – Small
  • Small – Medium
  • Small – Large
  • Medium – Small
  • Medium – Medium
  • Medium – Large
  • Large – Small
  • Large – Medium
  • Large – Large

We can use these to translate to story points:

                Small        Medium     Large
  Small         1            2          3
  Medium        5            8          13
  Large         20 or 21*    40         100

(rows show the first label, columns the second mark)



* Depending on your tool you may find support for 20 or 21
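
The two-pass labels can be translated into story points by walking the altered Fibonacci sequence in order. Here is a minimal sketch of that translation; the one-to-one mapping of the nine label pairs onto the nine values, and the helper itself, are illustrative rather than part of any tool:

```javascript
// Sketch: translate the two-pass labels into story points by walking the
// altered Fibonacci sequence in order. The one-to-one mapping below is an
// assumption for illustration, not a rule from any tool.
const SEQUENCE = [1, 2, 3, 5, 8, 13, 21, 40, 100];
const LABELS = [
  "Small-Small", "Small-Medium", "Small-Large",
  "Medium-Small", "Medium-Medium", "Medium-Large",
  "Large-Small", "Large-Medium", "Large-Large"
];

// Build a lookup table from label pair to story points.
const POINTS = Object.fromEntries(LABELS.map((label, i) => [label, SEQUENCE[i]]));

console.log(POINTS["Medium-Medium"]); // 8
```

Note that “Large-Small” lands on 21 (or 20, depending on your tool), which matches the intuition that a partially broken-down epic is still too big to pull into a sprint.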

The role of the Scrum Master in empowering the team

by Christine Thompson

Self-directed, cross-functional teams

Here’s one of the things that I love about Scrum. Scrum is a simple system which allows people to be intelligent within it. It assumes that team members do the best they can within the constraints of the system they work within. If something goes wrong, it is generally assumed that it is the process that is at fault and not the people. The Carrot & Stick approach doesn’t motivate people in skilled work.  Instead, autonomy, mastery and purpose do.

Scrum teams should be self-managed, self-organized and cross-functional. Team members take their direction from the work to be done and not from the Scrum Master or stakeholders. To empower the team, they need authority, resources and information. Scrum itself values team success over individual performance.

Each scrum team should be made up of team members with cross-functional, “T-shaped” skills. Whilst people may have an area of speciality, they also have a set of broader skills which overlap with those of their team-mates. If a skill-set is over-stretched, then other people need to step in and fill it. If a skill-set is missing, then we need to train people up.

Finally, reforming teams frequently is wasteful as it takes a long time to establish a performant team.

Powerless teams

So what are the characteristics of a powerless team? They may be heavily directed by the Scrum Master and/or influenced by people outside of the team. They’re not making their own decisions, they’re being told what to do. Perhaps they get no value from the daily stand-up: they address the Scrum Master and use it as a status update. Individuals either don’t participate or they argue about everything. People work in isolation and just “do their own thing”. Communication happens indirectly, via comments in tools instead of face-to-face.

Empowered teams

So what does an empowered team look like? The team share an understanding of their tasks and what it takes to complete them and they find their own answers without having to revert to other authorities. Individuals offer to help each other out whenever and wherever they can. The team values its interactions and conversations; all the meetings they hold are considered of value. Everyone shows respect to everyone else, everyone in the team is valued equally and the whole team works towards completing their goals together.

Role of the Scrum Master

So what’s the role of the Scrum Master in empowering the team? The Scrum Master is not the same as a Team Leader or Tech Lead. They are a “Servant leader” – they facilitate but do not manage the team. They may question and challenge things but they have no authority because the team manages themselves. It’s important that the Scrum Master sets the tone of the team in their own behaviours and they also provide the social grease on the distributed team, encouraging teams to use the thickest form of communication available at any time.

For example, the Scrum Master disempowers the team when they:

  • Assign or ear-mark tasks for individuals – team members should decide what they will progress next themselves, based on the information they are given in the scrum meetings and from their understanding of the sprint backlog
  • Influence the sizing of tasks – unless they are performing a dual role as an engineer on the team, the Scrum Master does not take part in, or steer the outcome from, the sizing discussions
  • Make design / implementation decisions for the team – again, unless they are also an engineer on the team, the team members themselves should be making decisions about how a task will be implemented
  • Interfere with the flow of the sprint – if the team has all the information it needs about the priorities and tasks in the sprint, then there is no need for the Scrum Master to influence people on what tasks they should be working on and when
  • Chase progress instead of chasing blockers – the Scrum Master is there to facilitate and not to manage the team. Asking for progress updates does not engender trust between themselves and the team. Such information should be available from the task board and the Scrum Master should only be chasing impediments.

Instead, some examples of what the Scrum Master might do to empower the team:

  • Reduce / eliminate “command and control” practices so that teams can run their own sessions openly and honestly; ensure that dysfunctional meeting participants are controlled
  • Ensure that barriers between team members are removed
  • Work with the team to remove impediments effectively
  • Protect the team from stressful outside influences and unnecessary interruptions
  • Prove a level of true commitment to the team – teams will not feel truly empowered until they see that the Scrum Master is serious about the role

Final thought

The ultimate goal of the Scrum Master is to coach and support the team to the point at which it becomes truly self-organising, autonomous and empowered. In the words of Nanny McPhee: “There is something you should understand about the way I work. When you need me but do not want me, then I must stay. When you want me but no longer need me, then I have to go. It’s rather sad, really, but there it is.”


by Christine Thompson

Why ScrumBan?

I first started looking into ScrumBan when I was working with a team who had been doing a prolonged period of feature development and had a well-established Scrum process. Everything was working well for us until we started to transition into a phase of bug fixing and support. Suddenly we found that we had too much support to have predictable sprints. We could never finish a sprint because the support tasks couldn’t be sized accurately. Our priorities were constantly changing, as new issues came in, and we couldn’t lock-down the sprint. Things went into and out of the sprint and our burn-down started to look like an electrocardiogram.

I started to question then whether we should be looking at a continuous workflow and moving over to Kanban. This way we would be able to respond quickly to priority changes, limit our work in progress and work on tasks that would take more than a sprint. But Scrum had worked so well for us that I was reluctant to move away from it completely. This is when I hit on ScrumBan.

What is ScrumBan?

ScrumBan combines the framework of Scrum with the principles of Kanban. It is more prescriptive than Kanban, which has no roles or meetings, but is more responsive to change than Scrum, where change can only be accommodated at the sprint boundary. ScrumBan retains all the roles and meetings of Scrum but uses the Kanban continuous workflow board. The daily stand-up focusses on the flow of tasks across the board and reviews what it would take to move each one forward. The workflow can even include both support and feature work items on the same board, for teams who have to progress tasks in both areas at once. This is a neat alternative to dividing the team in half, where those who end up in the support team are generally less impressed than those who remain on the feature team! It allows people to vary the type of task they pick up each time and to share the support load.

Using the Kanban board allowed us to take advantage of some of the lean principles of limiting work in progress and eliminating blockers. We had a limited number of “Ready” slots available on the board, which the product owner would fill with the top priority items. Should priorities change, or new requests come in, these could be swapped in and swapped out as needed. Ready to progress items were ordered in priority and the team was asked to try to progress the top items first, wherever possible. This was a real exercise in team empowerment and collaboration, and people worked hard to pick up priority items first, rather than those which just looked the nicest! As the Scrum Master, my role remained to facilitate this process and assist to eliminate the blockers that arose.

ScrumBan activities

We kept many of the Scrum ceremonies in place, relatively unchanged. The daily stand-up reviewed the progress on the board and allowed individuals to exchange information and offer assistance, even though they weren’t interdependent through working on shared user stories. The stand-up also allowed the opportunity to review our work-in-progress, to ensure that individuals weren’t progressing too many tasks in parallel and that nothing was blocked.

We retained a weekly backlog sizing meeting to review the new tasks in the backlog. The sizing exercise was still of value in allowing conversations to be held and shared understanding to be reached on what the tasks entailed, even though we weren’t tracking our velocity as before.

The Product Owner maintained around 10 items in the to-do list at any time, pulling in more from the backlog as soon as the list ran low. As new, high-priority items came in, the Product Owner would add these to the top of the to-do list, removing lower-priority items back onto the backlog as necessary. The Product Owner was also always on hand to answer questions about the requirements around the issues being addressed and the test coverage necessary to extend our regression suite.

In our case, we didn’t hold a sprint review-type meeting, because the increments were limited to bug fixes. However, I see no reason why there shouldn’t be value in this type of meeting, for sharing the solutions that had been implemented. And, of course, retrospectives were as valuable as ever in reviewing our process and making improvements as the team felt necessary.

Final thoughts

The advantage of moving from Scrum to ScrumBan, rather than pure Kanban, is the retention of much of the Scrum framework. For a team that needs to move from feature work to support and bug fixing and back again, this provides a less onerous transition, as many of the meetings and the general heartbeat of the team remain unchanged. Further, even for teams who only ever do support, I still see a great deal of value in having the Scrum roles and ceremonies in place, as these add benefits which could easily be missed in a pure Kanban environment.

Size Matters: Estimating in Scrum

by Christine Thompson

One of the biggest bones of contention that I seem to have had with my various Scrum teams is around sizing and estimating. There seems to be a level of confusion about why to size, how to size, what units to use and so on. Whilst there are guidelines out there (for example, see “Agile Estimating and Planning” by Mike Cohn) there is, of course, no right or wrong way to do this and teams must settle on the solution that works for them. But let’s at least review what those guidelines are, as a starting point.

Sizing User Stories

User Stories are generally estimated at a high level, in story points. They are sized relative to each other, rather than in absolute terms. The size of the story, in story points, is a function of its complexity to implement, the effort involved and any risk associated with it. One neat example, given by an old colleague of mine, was a user story which required the operator to hit the X key once on the keyboard. This would be low complexity and low effort; but if the story increased to hitting the X key one million times, the complexity would stay the same (it’s still a simple task) while the effort involved would be significantly increased, and therefore the size of the story would increase too. So both of these factors influence the relative sizing of the story, along with any risks or unknowns that it contains.

Why size the story?

I would suggest two reasons for this. The first purpose of the estimate is to track velocity so that we can predict when features will be completed: if we know how many story points we can complete in a sprint, and we know how many story points remain on the backlog, we can forecast when features can be delivered. The second purpose of sizing is the conversation that it draws out of the team. For the individuals to agree on a size, they have to reach a shared understanding of what’s required in the story and they have to be able to agree the size they assign. The discussion it takes to reach consensus is of great value in ensuring a thorough and agreed understanding of the work. Even if we threw the estimate away at the end of the sizing, there would still have been value in the exercise.
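
The forecasting arithmetic is simple: average the velocity of recent sprints and divide the remaining backlog points by it. A minimal sketch (an illustrative helper, not part of any Agile tooling):

```javascript
// Sketch: forecast how many sprints remain by dividing the outstanding
// backlog points by the team's average velocity over recent sprints.
function sprintsRemaining(backlogPoints, recentVelocities) {
  const total = recentVelocities.reduce((sum, v) => sum + v, 0);
  const averageVelocity = total / recentVelocities.length;
  // Round up: a partially used sprint is still a sprint on the calendar.
  return Math.ceil(backlogPoints / averageVelocity);
}

// 120 points left on the backlog; the last three sprints delivered 18, 22 and 20:
console.log(sprintsRemaining(120, [18, 22, 20])); // 6
```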

Sizing Tasks

Once we have a user story of the right size (generally 8-13 points, depending on your own ‘value’ of a story point), we will often break this down into individual tasks. This allows the team to understand the individual pieces of work that will be performed to implement the story and to be able to parallelise the work with two or more people. So what about the sizing of these tasks? Tasks are generally sized in ideal time. They are low-level and small enough to know roughly how long we will spend on them. The purpose of sizing the tasks is to allow us to burn-down work during the sprint, so that we can follow our progress and identify early if we are not on track.

So if we add up the time taken for the tasks, we can calculate how long it takes to implement a story point, right?

Wrong! Velocity is an average over time and takes a number of sprints to establish. Some stories will take longer to complete than others of the same story point size. Velocity takes into account a whole team over a period of time; tasks relate to one person doing a small amount of work. You can’t equate the two, and they serve separate purposes.

So, in summary:

Stories are sized in Story Points which provide a high-level comparison of complexity and are externally visible to the stakeholders.

Tasks are sized in ideal time which provides a low-level measure of effort and are internally visible to the team, allowing them to track progress within a sprint.

I’ve worked with teams who cannot cope with the abstract concept of story points and need to work in the real world units of time. They estimate everything in either elapsed or in ideal time. They admit that, if forced to use story points, they will have a conversion factor in their head which will allow them to supply this. These are the teams who may benefit from a better understanding of the concept and abstraction of story points. I’ve also worked with teams who prefer to do everything in story points because they hate the idea of switching between the abstract complexities and the actual time taken. They feel that it makes them equate story points to time, which they want to avoid. This seems a slightly more understandable approach, although not always well supported in Agile project tracking software.

The final word on this, of course, has to belong to the individual team and they must do what works best for them in order to achieve their needs in terms of planning and tracking their work. If it works for them, then it isn’t broken.

“Eddie would go”

by Dave Draper

Today we released version 1.0.6 of Aikau and would really like to get some feedback on what we’ve done so far. If you tuned into Tech Talk Live 83 then you’ll know that we’ve been busy breaking Aikau out of Share and the Alfresco release life-cycle and into its own GitHub project.

We’ve done this so that we can iterate on Aikau faster to support Alfresco modules (such as Records Management) and to try to engage better with the Alfresco Community. The Aikau team are working in one-week Sprints with a release at the end of each Sprint (so you can expect a new release every Tuesday!). During each sprint we will be adding more features and fixing any bugs found in the previous sprint, but always prioritizing bugs over features.

For the last few releases we have predominantly been focusing on the infrastructure of the Aikau project, i.e. moving the code to GitHub and ensuring that Aikau can be developed and tested outside of the internal Alfresco eco-system. This is where you come in…

We’d be really grateful if you could do one of two things for us…

1. Clone the GitHub repository, follow the development environment setup instructions (available for Linux, Windows and Mac) and check that you can build Aikau, start the test app and run the unit test suite on the Vagrant test VM.

2. Start working your way through the tutorial that will take you through the process of creating a new standalone client using the new Maven archetype. We’ve written 20 chapters of the tutorial and have so far ported 6 into GitHub markdown format. In each sprint we’ll be porting more and then writing more chapters. The tutorial has been road tested by quite a few people internally but we’d really like some external feedback on it (e.g. if there are things that aren’t clear or steps that don’t work).

We’re also interested in your contributions, bug reports and feature requests. We’ve defined some contribution guidelines to try to make the criteria for accepting contributions as transparent as possible, and we will probably adjust them over time as necessary to encourage active participation.

There’s still a long way to go for the Aikau project – some of the widgets are still only beta quality, and some still only work when used within Alfresco Share – but we’re making good progress. Over the coming weeks you should hopefully see new and improved widgets, more tutorials, publicly accessible JSDocs and improved test coverage.

In the meantime, give it a go and let us know what you think. Please provide feedback via the comments section or Tweet me directly at @_DaveDraper – many thanks in advance!

The Alternate Realities of Share Development

By Dave Draper, Kevin Roast, Erik Winlöf and David Webster


Over the last 4 years (from versions 4.0 through to 5.0) there have been a number of changes in relation to Share development and customization.

From an outside perspective the decisions that have been made might appear confusing or frustrating depending upon your particular use case of Share. If you’ve written Share extensions or customizations for previous versions then you might have hit breaking changes between major releases and you might be fearful of it happening again.

It might seem that we don’t care about these issues. We’re sorry if you have experienced such problems but we can assure you that we try our very best to move Share forwards without breaking what has gone before.

In this post we’re going to highlight how things are better than they would have been if different decisions were made.

Decision #1 – Extensibility model

Back in 2010 we introduced an extensibility model that enabled us to dynamically change a default Share page via an extension module. This was an initially coarse customization approach that enabled Surf Components to be added, removed or replaced as well as our custom FreeMarker directives to be manipulated.

This in turn paved the way for us to refactor the Share WebScripts in 4.2 to remove the WebScript .head.ftl files and push the widget instantiation configuration into the JavaScript controller.

If we hadn’t taken this approach then the method of customization would still be copying and pasting complete WebScript components to the “web-extension” path.

This wouldn’t have stopped breaking changes between versions (e.g. the REST API changing, the Component calling a different WebScript, different properties or URL arguments being passed, etc) and would have required constant manual maintenance of those WebScripts with code from service pack fixes as necessary.

Decision #2 – New Java based client

We know that a few customers still use heavily customised versions of Explorer and the fact that we’ve finally removed it from 5.0 is going to cause pain to a few.

It has been suggested at various points over the last 4 years that we could create a brand new client to replace Share – even though Share does not yet have complete feature parity with Explorer. We recognise now that we need to invest in Share and improve it over time since creating a new client would ultimately introduce more problems than it solves.

However, when calls were strongest for writing a new client, the recommendation was to move to either GWT or Vaadin. We’re fairly sure all those people that lament the fact that this week we’re not using Angular would be horrified if they were now stuck with a Java based client that doesn’t have feature parity with Share (let alone Explorer).

A new Java based client would have guaranteed that all those customizations would have to be re-written from scratch.

Decision #3 – Aikau

In our opinion it feels like some people miss the point of Aikau. Often we field questions along the lines of:

  • “Why do we have to use Dojo?” (you don’t)
  • “Why have you written your own JavaScript framework?” (we haven’t)
  • “Why aren’t you using Angular/Ember/React/Web Components?” (many good reasons; customization and configuration requirements, framework stability, performance etc.)

As was said on the Product Managers’ Office Hours last week: “web technologies change every 3 years”. It’s probably even more often than that. In the short time that Share has existed there has been a constant changing of the guard for “best web development framework”.

Even if we started again tomorrow with Angular (the current populist choice), we’d be doing so in the knowledge that there will be breaking changes when Angular version 2.0 is released next year and that in a few years we’ll (allegedly) all be using Web Components anyway.

The long and short of it is that unless you’re writing an application that is only going to have the lifespan of the carton of milk in your fridge then binding yourself to a single JavaScript library will be a mistake.

With Aikau we’re trying to mitigate that problem through declarative page definition. Yes, you have to write a bit of AMD boilerplate but really the choice of JavaScript framework is entirely in your hands.
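
To make “declarative page definition” concrete, here is a minimal sketch of an Aikau page model as it would appear in a Surf WebScript JavaScript controller. The Logo widget is a stock Aikau example; treat the exact configuration as illustrative:

```javascript
// An Aikau page model as defined in a Surf WebScript JavaScript controller.
// The page is just a declarative JSON structure naming widget modules; the
// widgets themselves encapsulate whatever JavaScript they need internally.
// "model" is provided by Surf at runtime; it is stubbed here so the sketch
// is self-contained.
var model = {};

model.jsonModel = {
  widgets: [
    {
      name: "alfresco/logo/Logo",           // a stock Aikau widget
      config: {
        logoClasses: "alfresco-logo-only"   // selects which logo image to show
      }
    }
  ]
};
```

Because the page is data rather than code, tooling (and Surf itself) can manipulate, extend or replace parts of the model without the page author’s JavaScript framework of choice ever entering into it.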

Aikau isn’t tied to Share (theoretically it’s not even tied to Surf) so if we ever do switch to a new client then the existing widgets and your custom widgets will still be applicable. We’re also evaluating breaking Aikau out of the Share development life-cycle so that we can make new widgets and bug fixes available faster.

Will there be more breaking changes?

We’ve been talking about re-writing the Document Library using Aikau for a while now (and have a pretty good prototype already) along with the other site pages. However, just as the old header Component still exists in Share, the original pages will still remain so you’ll always be able to configure Share to use the current YUI2 Document Library with your customizations.

There is also a lot of talk about re-writing the rest of Share in Aikau. Whilst we think this is ultimately a good idea, we don’t think it’s worthwhile until we’ve gone to the trouble of evaluating and improving the current design… Do you really only want a carbon copy of the current Wiki, Blogs and Data List pages? We’ll get there in time, but we’re not sure there’s any great rush to get this all done for 5.1.


Ultimately, any web interface needs to keep modifying its underlying technology. Aikau gives us a way to do that with the least possible pain. We understand that some developers have gone through some suffering with breaking changes in the last few releases, but through the use of Aikau we expect this pain to decrease and customisations to become more stable, transferable and powerful.

We’re working to add transparency to the development process that will hopefully make what we’re working on more obvious and make it easier for external developers to predict what changes there may be in future Share releases.

Aikau – Using the AlfDocumentPreview Widget


This afternoon I saw a Tweet asking if there were any examples of how to use the AlfDocumentPreview widget. Aikau documentation is currently very thin on the ground (as you’re probably painfully aware), so I thought it would be worth writing up a quick blog post to describe how we use it and how you can too. If there’s anything you want more information on, it’s worth Tweeting me @_DaveDraper with a request – I can’t guarantee that I’ll be able to write it up as a blog post, but I will do my best as time allows!


Most of the Aikau widgets are completely new, some are “shims” around existing YUI2 based code and a few are direct ports of YUI2 widgets. The AlfDocumentPreview widget (and its associated plugins) is a good example of a ported widget. The original code was copied into an Aikau widget definition, then most of the YUI2 code was replaced, bugs were fixed and thorough JSLinting was applied.

You might wonder why we’d go to such lengths when a widget already existed. This essentially gets right to one of the fundamental points of Aikau as a framework. The code inside the widget really isn’t important – what’s important is defining an interface to a widget that performs a single specific task that can be referenced in a declarative model. The widget becomes an API to a piece of UI functionality – in this case, previewing a document.

No Aikau page model that references it will ever need to change – even if we decide to completely rewrite the widget to use jQuery, Angular, Web Components or whatever happens to be the current flavour of the month – the pages will function as they always have.

Where is the previewer used?

The rule of thumb that I tell anyone who asks me is that if Alfresco has used an Aikau widget in a product feature then it’s fair game for use in your application or extension. There are a number of widgets that are definitely beta quality (and we call these out in the JSDoc) which might be subject to change, but once a widget has been used in a feature we’re obliged to maintain backwards compatibility and fix any bugs with it.

The AlfDocumentPreview is currently being used in the new filtered search feature that is part of the 5.0 release (and you’ll also find it used in the Film Strip View that is part of the prototype Aikau based Document Library which is not yet a product feature!). If you click on the thumbnail of any document (that is not an image) then a new dialog is opened that contains a preview of that document. The preview will render the appropriate plugin (e.g. PDF.js, video, audio, etc) for the content type.

The filtered search page in Alfresco Share 5.0

A preview of a search result

How it works

Each row in the search results is an AlfSearchResult widget that contains a SearchThumbnail widget. When you click on the thumbnail widget (of the appropriate type) then a payload is published on the “ALF_CREATE_DIALOG_REQUEST” topic to which the AlfDialogService subscribes. The payload contains a JSON model of widgets to render in the dialog when it is displayed. The model is an AlfDocument widget that contains an AlfDocumentPreview widget.

widgetsContent: [
  {
    name: "alfresco/documentlibrary/AlfDocument",
    config: {
      widgets: [
        {
          name: "alfresco/preview/AlfDocumentPreview"
        }
      ]
    }
  }
]
The point of the AlfDocument widget is to ensure that all of the relevant Node data is available to pass to a child widget (in this case the AlfDocumentPreview – but it could be something else) for it to do something with.

One of the key things about the search page is that search requests only return a very limited amount of data about each node (unlike requests from the Document Library which are slower but contain much more information such as all the properties and the actions permitted for the current user).

An additional XHR request is required to obtain all the data required to preview the node. The payload published when clicking on the thumbnail also contains the publication to make once the dialog has been displayed:

publishOnShow: [
  {
    publishTopic: "ALF_RETRIEVE_SINGLE_DOCUMENT_REQUEST",
    publishPayload: {
      nodeRef: this.currentItem.nodeRef
    }
  }
]
The “ALF_RETRIEVE_SINGLE_DOCUMENT_REQUEST” topic is serviced by the DocumentService, and the AlfDocument subscribes to successful document-loaded publications (note that the SearchThumbnail will have a “currentItem” attribute set containing the limited data returned by the search request, which will include a “nodeRef” attribute).

The AlfDocument only processes its child widgets once it has some data about a specific node. Once the DocumentService has published the node data, the AlfDocument will process the AlfDocumentPreview widget. From that point on the AlfDocumentPreview will use the data that has been provided to create the appropriate plugin to preview the document.
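The decoupling that this flow relies on can be illustrated with a minimal publish/subscribe sketch. To be clear, Aikau itself builds on dojo/topic – the tiny bus and the stand-in “service” below are purely illustrative and not Aikau code:

```javascript
// Minimal publish/subscribe bus illustrating how Aikau widgets and services
// stay decoupled (illustrative only; Aikau uses dojo/topic internally).
var subscriptions = {};

function subscribe(topic, handler) {
  (subscriptions[topic] = subscriptions[topic] || []).push(handler);
}

function publish(topic, payload) {
  (subscriptions[topic] || []).forEach(function (handler) {
    handler(payload);
  });
}

// Stand-in for the DocumentService: it subscribes to retrieval requests and
// records the nodeRef it was asked to load.
var requestedNodeRefs = [];
subscribe("ALF_RETRIEVE_SINGLE_DOCUMENT_REQUEST", function (payload) {
  requestedNodeRefs.push(payload.nodeRef);
});

// Stand-in for the dialog's publishOnShow behaviour: once the dialog is
// displayed, a retrieval request is published with the nodeRef taken from
// the limited search result data.
publish("ALF_RETRIEVE_SINGLE_DOCUMENT_REQUEST", {
  nodeRef: "workspace://SpacesStore/some-node-id"
});
```

Neither the publisher nor the subscriber knows anything about the other – only the topic name and payload shape are shared, which is why widgets and services can be swapped independently.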

Other Ways to use AlfDocumentPreview

You don’t have to use an AlfDocumentPreview within an AlfDocument, you just need to ensure that you provide it with node data as the “currentItem” configuration attribute. So if you already have all the data (for example if you’ve made a request from within your JavaScript controller or if you are accessing it from a list that has been generated from the REST API used to service the Document Library) then you can configure it into the widget directly.
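For instance, if the node metadata is already in hand, the widget can be configured along these lines. The contents of “nodeData” below are an assumption for illustration – in practice it would be the full node metadata returned by the Document Library REST API:

```javascript
// Hedged sketch: configuring AlfDocumentPreview directly with node data you
// already hold, instead of wrapping it in an AlfDocument. The shape of
// "nodeData" is assumed for illustration.
var nodeData = {
  nodeRef: "workspace://SpacesStore/7d829b79-c9ba-4bce-a4df-7563c107c599"
};

var previewWidget = {
  name: "alfresco/preview/AlfDocumentPreview",
  config: {
    currentItem: nodeData
  }
};
```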

The following is an example of a simple Aikau page model that previews a document (obviously you need to swap in your own nodeRef!):

model.jsonModel = {
  services: ["alfresco/services/DocumentService"],
  widgets: [
    {
      name: "alfresco/documentlibrary/AlfDocument",
      config: {
        nodeRef: "workspace://SpacesStore/7d829b79-c9ba-4bce-a4df-7563c107c599",
        widgets: [
          {
            name: "alfresco/preview/AlfDocumentPreview"
          }
        ]
      }
    }
  ]
};
You don’t need to display it in a dialog either.

Once again this should hopefully demonstrate how you can re-use Aikau widgets to achieve very specific objectives – try using the YUI2 previewer in isolation and then you’ll understand why it’s been ported!


Hopefully this has provided a useful description of how we’re currently using the AlfDocumentPreview widget, as well as how we’ve configured pub/sub on the filtered search page to link widgets and services. If anything isn’t clear or you have further questions then please comment below.

Creating Aikau Site Pages for Share


The Aikau framework provides a simpler way of creating pages in Share where a page can be declaratively defined as a JSON model in a WebScript. This avoids the necessity to create the XML and FreeMarker files for Surf Pages, Templates and Components.
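As a minimal sketch, the JavaScript controller of such a WebScript (e.g. an example.get.js file) just builds a JSON model. The Label widget here is simply a convenient example widget:

```javascript
// Minimal sketch of an Aikau page WebScript controller. In a real Surf
// WebScript the "model" object is provided for you by the framework; it is
// declared here only so the sketch is self-contained.
var model = {};

// The entire page is declared as a JSON model - no Surf Page, Template or
// Component XML files are needed.
model.jsonModel = {
  widgets: [
    {
      name: "alfresco/html/Label",
      config: {
        label: "Hello from an Aikau page"
      }
    }
  ]
};
```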

I’ve been asked how you would create an Aikau page such that it is available as a site page in Share (e.g. a page that can be added via the Site Customization tooling in Share), so I thought it would be worth capturing this information in a blog post. This is one of those interesting use cases where the old and new approaches to Share development intersect…


In Share we use “pre-sets” configuration to provide default User and Site dashboards. These are XML configurations that define the Surf objects that can be used to “cookie-cut” new page instances (which are then stored on the Alfresco Repository).

The pre-sets can be found in this file and the “site-dashboard” pre-set contains a property (“sitePages”) that defines the initial set of pages for each site. Once the site is created, a new Surf Page instance is created on the Repository, and when you add or remove pages from the site it is this property that is updated (in the instance, not the pre-set).
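For illustration, the relevant part of the “site-dashboard” pre-set looks roughly like this. This is a simplified sketch rather than the exact file contents – consult the actual pre-sets file for the full definition:

```xml
<!-- Simplified sketch of the "site-dashboard" pre-set: the sitePages
     property holds the JSON list of pages that each new site starts with. -->
<preset id="site-dashboard">
   <pages>
      <page id="site/${siteid}/dashboard">
         <properties>
            <sitePages>[{"pageId":"documentlibrary"}]</sitePages>
         </properties>
      </page>
   </pages>
</preset>
```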

The “Customize Site” page lists both the available “Site Pages” and the “Current Site Pages” and the list of pages to choose from is defined in the “share-config.xml” file under the “SitePages” condition, e.g:

<config evaluator="string-compare" condition="SitePages">
    <page id="calendar">calendar</page>
    <page id="wiki-page">wiki-page?title=Main_Page</page>
    <page id="documentlibrary">documentlibrary</page>
    <page id="discussions-topiclist">discussions-topiclist</page>
    <page id="blog-postlist">blog-postlist</page>
    <page id="links">links</page>
    <page id="data-lists">data-lists</page>
</config>

It’s possible to extend this configuration to include additional pages, however the underlying code currently assumes that each page is mapped to a Surf object. This means that if you want to add in an Aikau page to this list then you need to create a Surf Page object (even though it won’t actually be used to render the page at all).


Say you want to add in a new Aikau page called “Example”. You need to create a Share configuration extension that defines the new page (one way of doing this would be to create a “share-config-custom.xml” file that you place in the “alfresco/web-extension” classpath).

The file would contain the following XML:

  <config evaluator="string-compare" condition="SitePages" replace="false">
      <page id="example">dp/ws/example</page>
  </config>

But you’d also need to create a Surf Page XML file (placed in the  “alfresco/site-data/pages” classpath) containing:

<?xml version='1.0' encoding='UTF-8'?>
<page>
    <title>Example Site Page</title>
    <description>Example of adding a new site page</description>
</page>

Which would result in the following being shown when customizing a site:

The Customize Site page showing an Aikau page.