The 7 traits of my “ideal” team

The best teams I've worked in or know of have the following traits and apply the accompanying practices.

[Figure: My ideal team]

My “ideal” team …

  • is customer oriented
    • cross-functional team composition
    • involves real users
  • understands requirements as assumptions
    • starts with minimal desirable product
    • experiment/data-driven decision making
  • is autonomous, has no external dependencies
    • self-organises
    • pull system
    • >= 3 members
    • “owns” their product end-to-end from ideation to operations
  • improves over time
    • learns constantly from each other (pairing, BBL, CoPs)
    • communicates well
    • regular process improvement and product feedback cycles (retros & reviews)
  • delivers efficiently
    • is aligned, works towards the same goal
    • <= 9 members
    • visualises to communicate internally and externally
  • delivers predictably
    • continuous flow (of stories on board)
    • predicts with lead time
    • approx. same size stories; no estimation
    • no planning meeting; backlog refinement as needed
  • delivers at will
    • highly automated tests / deployment / monitoring
    • clean (enough) codebase

What works best is highly contextual, so I won’t try to force any of this on a team. I realised though that most process improvement experiments I suggest are guided by my vision of this “ideal” team.


Velocity vs. Cycle time – or ‘How to predict how much work will get done by a given time’

Both velocity and cycle time make predictions based on work completed in the past. This approach is sometimes referred to as ‘yesterday’s weather’. For the remainder of this post I will assume that this work is chunked into user stories. Basing predictions on past events assumes a stable (enough) context. In our case the most important factor is a stable team.

In either case these user stories can be estimated – usually with story points – or split to roughly the same size and then simply counted. The latter is faster and more outcome-focussed. Assuming same-size stories simplifies measurement and calculation in both cases and is just as accurate as using more precise estimates, but requires more care when creating stories. For the remainder of this post I will assume that user stories are roughly the same size and counted rather than estimated in more detail. If you are a firm believer in estimates, the following still holds true, but requires an extra step of converting story points into a number of stories.

Velocity measures completed stories per iteration. The unit is work per time, e.g. 4 stories per 2 week iteration.

Cycle time measures the amount of time passed working on a story. The unit is time per work, e.g. 2.5 days per story.

Velocity and cycle time can be (sorta) converted into each other (neglecting the time between finishing one user story and starting the next, and in the simplified case of WIP = 1) and are therefore (kinda) equivalent in the information they carry. They merely represent different points of view on the same thing – how long it takes to get chunks of work done. For a more in-depth discussion of the actual math, check out the comments on this blog post.
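
Under those simplifying assumptions (WIP = 1, no gaps between stories) the conversion is straightforward. A small sketch with made-up dates and a 10-working-day iteration:

```python
from datetime import date

# Made-up completion log: (started, done) per story; WIP = 1, no gaps.
stories = [
    (date(2015, 3, 2), date(2015, 3, 4)),
    (date(2015, 3, 4), date(2015, 3, 9)),
    (date(2015, 3, 9), date(2015, 3, 11)),
    (date(2015, 3, 11), date(2015, 3, 13)),
]

# Cycle time: time per work, e.g. days per story.
cycle_time = sum((done - started).days for started, done in stories) / len(stories)

# Velocity: work per time, here stories per 10-day iteration.
iteration_days = 10
velocity = iteration_days / cycle_time

print(f"cycle time: {cycle_time:.2f} days/story")
print(f"velocity:   {velocity:.1f} stories/iteration")
```

The same completion log yields both numbers, which is the sense in which they carry the same information.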


Another planning concept is commonly tangled up in discussions about predictions, but is actually orthogonal to them: iteration planning vs. variable input queues.

Iteration planning fixes the scope for the next iteration, e.g. 4 stories in the next 2 weeks. Velocity is often used in this context, since the unit of velocity, work per time, can be used to support the decision of how much to forecast for the next iteration.

A variable input queue provides a steady stream of work. User stories can be re-sorted, added or removed anytime as long as work on them hasn’t started yet. This provides more flexibility compared to iteration planning. Measuring time per user story (cycle time) matches this well.

When using a variable input queue and no iteration planning, iteration goals and the associated commitment also disappear. This can be replaced with commitments to OKRs, which we do quarterly. Three months is long enough to accomplish significant things, while still short enough to maintain a sense of urgency.

Fixing the length of the input queue provides a mechanism to trigger refilling the queue, as well as preventing planning too far into the future. The optimal queue length depends on story size, team capacity and the predictability of stories. A number of stories that everyone can easily keep in their heads (around 7) prevents overhead in reviewing and re-prioritizing the queue. With a fixed input queue, planning is triggered by an empty slot in the queue rather than at specific intervals. It is also finer-grained – possibly one story at a time. When integrated into the daily standup, an extra planning meeting can be avoided.
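
The mechanics of a fixed-length input queue fit in a few lines. This is an illustrative sketch only; the class and its names are made up for this example:

```python
from collections import deque

class FixedInputQueue:
    """Input queue with a fixed capacity; an empty slot triggers refilling."""

    def __init__(self, capacity=7):
        self.capacity = capacity
        self.queue = deque()

    def open_slots(self):
        return self.capacity - len(self.queue)

    def add(self, story):
        if self.open_slots() == 0:
            raise ValueError("queue full - wait for a slot to open")
        self.queue.append(story)

    def pull(self):
        """Team pulls the next story; an opened slot triggers planning."""
        story = self.queue.popleft()
        if self.open_slots() > 0:
            print("slot open - pick the next story at standup")
        return story

q = FixedInputQueue(capacity=3)
for s in ["story A", "story B", "story C"]:
    q.add(s)
next_story = q.pull()  # opens one slot
```

Planning then happens one story at a time whenever a pull opens a slot, e.g. during the daily standup.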

In addition to tracking the time a story is worked on (cycle time), one can also easily measure the time from when a story is added to the queue until it is done (lead time). This makes it easier to get an idea of when a story will be finished once it is added. It is very effective to visualise this at the end of the queue with something like “Your expected wait time today from this point is between x and y days.”, aka Disneyland wait time.
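
Computing such a wait-time range from recent lead times is straightforward. A sketch using a nearest-rank percentile and made-up numbers:

```python
import math

# Made-up lead times (days from entering the queue until done)
# of the last ten finished stories.
lead_times = sorted([3, 4, 4, 5, 5, 6, 7, 8, 9, 12])

def percentile(sorted_values, p):
    """Nearest-rank percentile of an ascending list."""
    k = math.ceil(p / 100 * len(sorted_values)) - 1
    return sorted_values[max(0, k)]

# "Disneyland wait time": from typical (50th) to pessimistic (85th) percentile.
low, high = percentile(lead_times, 50), percentile(lead_times, 85)
print(f"Your expected wait time today from this point is "
      f"between {low} and {high} days.")
```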


Velocity is usually used with iteration planning, and cycle time with variable input queues. This feels more natural because measuring work per time is a good match for iterations, just as measuring time per work is for a variable input queue. But nothing prevents you from combining them differently if that makes more sense in your context.

Discussions about velocity vs cycle time are often a proxy for the real discussion: iteration planning vs variable input queues. That is where the real differences lie. Velocity and cycle time are merely different points of view on the same thing; they just happen to match one approach more than the other.

Cross-functionality is a function over time

This blogpost originally appeared on the STYLIGHT Engineering blog. In my last blog post I described why we formed cross-functional business teams. In this blog post I write about team composition, how it changes over time and the consequences of that.

When we talk about the composition of cross-functional teams we usually have something like this in mind. The labels usually read developer, tester, designer and UX researcher or something along those lines. For the sake of this article we abstract from the specific role and just call them red, orange, yellow and brown experts.

[Figure: constant expertise]

This visualisation of the team composition is ignoring the fact that the amount of needed expertise to build a product changes over time. In reality it looks something more like this. 

[Figure: needed expertise varies]

During product development there might be a phase where there is a lot of yellow work needed (Feb) while sometime later there is almost none (Apr) and then it’s picking up again. 

There is a certain threshold up to which it makes sense to have someone with a specific expertise full-time on the team. If the needed expertise falls below that threshold, that expert won’t be fully utilised. That is OK if it’s just a dip, but gets boring and frustrating if it persists.


Looking at this, one might argue that the yellow expert should leave the team by mid-February and just be available to the team as needed. The orange expert joins the team around that time. The brown expert would leave sometime later, around mid-March. This makes it effectively impossible to form a stable team that has the chance to gel and perform at its peak effectiveness.

Having T-shaped people on the team helps with this, since they can help out in disciplines other than their own. This lowers the threshold in our graphical visualisation.

T-shaped: T-shaped people have two kinds of characteristics, hence the use of the letter “T” to describe them. The vertical stroke of the “T” is a depth of skill that allows them to contribute to the creative process. That can be from any number of different fields: an industrial designer, an architect, a social scientist, a business specialist or a mechanical engineer. The horizontal stroke of the “T” is the disposition for collaboration across disciplines.
IDEO CEO Tim Brown 

[Figure: lower threshold]

Now it makes sense to keep the yellow and brown experts for longer and bring on the orange one sooner.

Having M-shaped people on the team helps even more since they combine two or more needed disciplines. This makes it easier to stay above the threshold.

M-shaped: Building on top of the metaphor of T-shaped persons, M-shaped persons have expertise in two or more fields.

So, if our yellow expert was also an expert in the brown discipline, she would combine the areas below both of these lines, resulting in the green line.

[Figure: combined disciplines]

Now, leaving the team because of under-utilisation is out of the picture.
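
The effect of combining two disciplines can be illustrated with a toy calculation. All numbers here are made up; demand is expressed as the fraction of a full-time person needed per month:

```python
# Made-up monthly demand for two disciplines, plus the utilisation
# threshold below which keeping a dedicated expert stops making sense.
months = ["Jan", "Feb", "Mar", "Apr", "May"]
yellow = [0.9, 0.8, 0.3, 0.1, 0.5]
brown  = [0.4, 0.3, 0.6, 0.5, 0.2]
threshold = 0.6

for m, y, b in zip(months, yellow, brown):
    combined = min(y + b, 1.0)  # one person can't exceed full-time
    status = "OK" if combined >= threshold else "below threshold"
    print(f"{m}: yellow {y:.1f}, brown {b:.1f}, M-shaped {combined:.1f} {status}")
```

In this toy data the yellow demand alone dips below the threshold in March and April, while the combined M-shaped demand stays above it every month.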

Apart from having people on the team who are valuable in more than one discipline, there are of course other options for dealing with slack than leaving the team. How about some Kaizen? Helping someone else with a stuck task, going to that conference, reading that book, finally doing that refactoring or writing that blog post are just a few of them.

Specialist teams are another option for experts who are needed here and there, but not constantly on one team. In order not to create dependencies and thereby cripple the autonomy of teams, these specialist teams should be enablers and teachers helping the teams. This means ownership stays with the teams, not with the specialists. At STYLIGHT we have, for instance, a platform team.


Forming stable cross-functional teams in the face of changing expertise needs over time is not trivial. Being aware of this and having strategies for dealing with slack in a specific discipline (T-shaped people, Kaizen) still makes it a viable approach though. For us the advantages of cross-functional teams outweigh these difficulties.

Tradeoff between cross-functional business teams and specialist teams

Pro cross-functional business teams
  • no handovers
  • faster learning about business
  • more innovation
  • shorter development cycles
  • broaden perspectives through diversity of experiences, expertise and knowledge
  • greater sense of purpose by working on the full (or at least a greater part of the) value chain
Pro specialist teams (aka silos)
  • get work done more efficiently when it can be described precisely and handovers are cheap
  • learning from specialists in same field
  • higher consistency of outcomes within silos
  • easier agreement with people that speak the same lingo
How to remedy the shortcomings of cross-functional feature teams
  • use communities of practice (CoP) for knowledge sharing amongst specialists
  • express yourself in the lingo of the addressed person when talking to a specialist in another field
  • get to a novice level of understanding in the specialist fields of your team mates (“become” T-shaped)

Providing transparency in a participative organization

At it-agile all data concerning the company is accessible to all employees. That includes salaries, expenses, earnings etc. But even though I can dig through all the data to get the information I am after, I wouldn’t say that the information is transparent. The data is distributed over several systems, like Google Docs, Dropbox, our Wiki and probably others. It is way too much effort to find and make sense of.

To share with my colleagues what I learned in my slack time last year, I decided to write a little web app to present the data. (Also because I have been desperate to try out Meteor for more than just todo lists. So should you. It’s awesome.)


Beyond what usually gets shared, namely what people have spent their time on, the app encourages people to describe

  • what they learned,
  • what their goal for the year was,
  • how much it cost,
  • how much time they invested and
  • if they would recommend it to other people.

With that I hope to make what people do in their slack time at it-agile more transparent and help sharing knowledge faster.

Apart from transparency about slack time usage, I am thinking about four other ways to support how we work together.

  • Giving anonymous feedback to colleagues while providing the option for the recipient to respond with clarifying questions
  • Ranking colleagues to support salary decisions
  • Show the economic status of the company
  • Keeping a repository of decisions we made / maybe also provide a decision mechanism


Like I said, we already do all this and the data is (somehow) accessible; I am merely trying to make the data more comprehensible. Right now I am just scratching my own itch and seeing if it catches on with my colleagues. If you are keen (and have a Google account, which you’ll need to log in), you can try it out yourself on

Lean Startup Distilled

[You can find a German translation here.]

There are quite a few books out there on the topic of Lean Startup, plus an endless stream of blog entries and conference talks to keep you busy reading as long as you wish. I have tried to summarize my understanding of Lean Startup in the following 5 methods and accompanying practices. Obviously they don’t describe everything there is to it, but for me this is the core of Lean Startup.

Lean Startup presumes that the business model your product or service is based on consists of a set of assumptions. These need to be validated by experiments and measurements. This approach is based on the scientific method, which essentially consists of forming a hypothesis, performing experiments and analyzing the collected data. The assumptions are collected in a business model canvas, which depicts the business model in an easy and clear manner by splitting it up into a few distinct parts.

This canvas is in no way a static document, but designed to invalidate wrong assumptions and replace them with new ones. These changes in strategy are called pivots.


The assumptions in the business model can’t be validated by speculating and arguing in your office. This is expressed by Steve Blank‘s pithy statement “Get out of the building!” To verify your assumptions you need to get feedback from real (future) customers.

In the very early stages of product development the easiest way to do this is interviews. Even though you can only reach very few users this way, you will get indispensable qualitative feedback. A possible structure for these interviews can be found in Ash Maurya‘s book Running Lean.

Generally one tries to (in-)validate one’s assumptions by running targeted experiments. It is important to define specific failure/success criteria before performing the experiment. Otherwise you will find yourself arguing about what the collected data means, most likely trying to justify the result you wanted to see. Confirmation bias is not your friend. Experimenting requires a company culture that allows for failure. The highest potential for learning lies in invalidating assumptions. If invalidated experiments are regarded as failures, learning is hampered.
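
Fixing the success criterion before collecting data can be as simple as a few lines. This is a sketch with made-up numbers and a hypothetical signup-conversion metric:

```python
# Agree on the criterion BEFORE running the experiment.
criterion = {
    "metric": "signup_conversion",
    "success_threshold": 0.05,
}

# Data collected afterwards (made up).
visitors, signups = 1200, 48

conversion = signups / visitors
validated = conversion >= criterion["success_threshold"]

print(f"{criterion['metric']}: {conversion:.1%} -> "
      f"{'validated' if validated else 'invalidated - time to learn'}")
```

Because the threshold was fixed up front, an invalidated assumption is an unambiguous learning, not a debate about what the numbers mean.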

To run experiments as cheaply and quickly as possible, one tries to keep the necessary product increment small. This is what’s called a Minimum Viable Product (MVP): something minimal that is allowed to be incomplete, but is enough to run the experiment and learn from it. For instance, doing costly calculations, like matching on a dating platform, by hand before automating them (concierge MVP).

Generally Lean Startup values learning more than the product or growth. The point is to figure out what the market really wants before building the product. Learning, rather than the number of implemented product features, therefore becomes the measure of progress. This can feel highly unsatisfying at first, because learning, as opposed to implemented requirements, is very intangible and hard to measure. The business model canvas is a way to visualise the learning progress.

The learning process is depicted by the build-measure-learn cycle. The cycle consists of three steps: building something minimal, measuring changed user behaviour and learning from it. An experiment is a full loop through this cycle. This full loop is what you should aim to shorten and optimize for, not any individual step. It doesn’t matter how quickly you can build a new feature if you can’t measure changed user behaviour. To optimize for learning, one plans an experiment in the opposite direction. First, decide what needs to be learned next, then what you need to measure to do so, and finally what needs to be built to make that measurement possible.

As depicted in the build-measure-learn cycle, one tries to learn from measurements of actual user behaviour.



A popular set of metrics to measure are the pirate metrics by Dave McClure. They got their name from the acronym of their phases, which reads AARRR. These phases form a funnel. In each step you lose users. By making changes to your product or service you try to reduce the percentage of users you lose in each step.

  • Acquisition: How do users find your product or service?
  • Activation: Are users executing your core functionality?
  • Retention: Are users coming back?
  • Revenue: How do you make money?
  • Referral: Are users referring you to others?
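
The funnel view is easy to express as a quick calculation. A sketch with made-up user counts per stage:

```python
# Made-up user counts at each stage of the AARRR funnel.
funnel = [
    ("Acquisition", 10000),
    ("Activation",   3000),
    ("Retention",    1200),
    ("Revenue",       300),
    ("Referral",      150),
]

# Conversion from each stage to the next; product changes aim to
# reduce the share of users lost at a given step.
for (stage, users), (_, next_users) in zip(funnel, funnel[1:]):
    print(f"{stage} -> next stage: {next_users / users:.0%} convert")
```

Tracking these stage-to-stage rates over time shows which step of the funnel a product change actually improved.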

Do you find that something essential is missing or something else could be left out? Let me know in the comments.

My perfect job

Working on a team of people I respect and learn from, on something that matters to me, in an open and empowering environment that leaves space for my personal life

  • People: shared values (integrity, learning, respect, the Agile Manifesto and the principles behind it), everyone is asked and willing to contribute on all levels, growth mindset
  • Purpose: something I genuinely believe will make the world a better place, even if just a little
  • Openness: Empowered employees, transparency
  • Organization: Self-organized teams, pull systems
  • Money: A competitive salary that gets the topic of money off the table without having to make me rich. The potential for big reward, if the team’s vision is fulfilled. A peer based compensation mechanism.
  • Improvement: Ample time and budget for continuous learning
  • Time: Freedom of time management, room for my personal life, working remotely
  • Startup culture: no bureaucracy, willingness to experiment, data-driven decisions
  • Cool technology stack: the web, current favourites: Meteor & CoffeeScript
  • Free choice of tools: laptop, phone, books
  • Good espresso: this is actually not just nice-to-have, but I am willing to provide my own
  • Healthy food options provided or nearby
  • Fitness room provided or nearby
Photo by allaboutgorge