The Problem with Maturity Models
Maturity models can be useful to kick off a reflection on where your team or organization is right now on its agile path.
However, we have seen such models take a confusing approach: they provide a number, which suggests that something was actually measured. In reality, though, it’s just a subjective value attached to a high-level, abstract question like “On a scale from 1 to 5, how much is your HR a catalyst for agile transformation?”. Uh? 3…-ish?
Our approach is different: we ask our clients quantitative questions, referencing the principles behind the Agile Manifesto:
“Outcome (e.g. working & valuable software), delivered in short cycles”
1: Our highest priority is to satisfy the customer through early and continuous delivery of valuable software.
3: Deliver working software frequently, from a couple of weeks to a couple of months, with a preference to the shorter timescale.
- What is your project’s definition of “delivery”?
- When/how often has your project delivered working software in the last 3 months?
- Was it actually shippable at those times as well?
- What is the cycle time of a feature in your project (measured from “came up with idea” to “delivered”)? (average & variance)
- When/how often have you integrated (into) the full system in the last 3 months?
- How many pieces of actual user feedback have you received in the last 3 months?
  - How many of these feedback items were turned into backlog items?
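Questions like the cycle-time one only become useful once the numbers are actually computed from real records. A minimal sketch of that computation, assuming a hypothetical list of (idea date, delivery date) pairs:

```python
from datetime import date
from statistics import mean, pvariance

# Hypothetical cycle-time records: (came up with idea, delivered)
features = [
    (date(2024, 1, 3), date(2024, 1, 24)),
    (date(2024, 1, 10), date(2024, 3, 1)),
    (date(2024, 2, 5), date(2024, 2, 19)),
]

# Cycle time in days for each feature
cycle_times = [(done - idea).days for idea, done in features]

print(f"average cycle time: {mean(cycle_times):.1f} days")
print(f"variance:           {pvariance(cycle_times):.1f}")
```

Reporting the variance alongside the average matters: a team that delivers in 14 to 51 days is in a very different situation than one that delivers in 27 to 30 days, even if both average out the same.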
2: Welcome changing requirements, even late in development. Agile processes harness change for the customer’s competitive advantage.
- How many changes have been proposed in the last 3 months?
- How many of them were implemented?
- Out of those that were NOT implemented, how many of them were rejected or postponed because
  - other backlog items were deemed to yield higher value (for the users, for the business etc.) or lower (risk of) loss (e.g. urgent security fixes, avoiding penalties etc.),
  - previously made commitments had to be kept (on less valuable items),
  - the roadmap couldn’t be changed,
  - established processes/policies didn’t allow for it (e.g. a strict Change Management Process, a definition that “an active sprint’s scope must not be changed” etc.),
  - the architecture/code base wasn’t flexible enough to implement the change, i.e. the effort for implementation was deemed disproportionate to the expected value,
  - it was deemed too risky, i.e. undetected regression bugs were likely / the effort for testing the change would have been disproportionate to the expected value,
  - the change couldn’t be broken down into a size that matches the release iteration length?
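The counting behind these questions can be sketched as a simple tally, assuming a hypothetical outcome label recorded for each change proposed in the last 3 months:

```python
from collections import Counter

# Hypothetical outcome recorded per proposed change; the part after ":" is the reason
outcomes = [
    "implemented", "implemented",
    "rejected: lower value than other backlog items",
    "postponed: roadmap could not be changed",
    "implemented",
    "rejected: architecture not flexible enough",
]

# Tally by outcome category, ignoring the reason suffix
tally = Counter(label.split(":")[0] for label in outcomes)
proposed = len(outcomes)

print(f"proposed: {proposed}")
print(f"implemented: {tally['implemented']}")
print(f"rejected: {tally['rejected']}, postponed: {tally['postponed']}")
```

Keeping the rejection reasons as part of the record is what allows the detailed breakdown above to be answered later, instead of reconstructing it from memory.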
“Face-to-Face, Business & Development collaboration”
4: Business people and developers must work together daily throughout the project.
6: The most efficient and effective method of conveying information to and within a development team is face-to-face conversation.
- How many business people are part of the team?
  - How much of their work time do they spend on the project (FTE)?
- When/how often have developers and business people had face-to-face conversations in the last 4 weeks?
- When have developers collaborated with relevant people outside the team (stakeholders, customers, business people, domain experts etc.) in the last 3 months?
  - Are they even allowed to?
- Face-to-face vs. text communication (sampled over the past 3 months):
- If you use Jira, Redmine or similar, how many comments do tickets have? (average & variance)
- How many of them have more than 5 comments?
- How many tickets have or refer to descriptions that are in contradiction to agreements in the comments?
- How many requirements have been implemented in the last 5 iterations?
  - How many of them have been personally presented and explained by business people to the complete team?
  - How many of them have been discussed with / invited questions from the team?
  - How many of them have been modified by the business people as a result of the team’s questions, feedback and discussion points?
- If development team members are NOT allowed to talk directly to business people or stakeholders: how many people/management layers are between them and customers, business people, domain experts, first-level support, marketing, the sponsors/cost center owners, …?
  For example: Developer -> (1) Offsite Manager -> (2) Onsite Manager -> (3) Sub-System Product Owner -> (4) System Product Owner -> (5) Cost Center Owner -> (6) Marketing -> Customer
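The face-to-face vs. text-communication questions above boil down to a few descriptive statistics over ticket comment counts. A minimal sketch, assuming a hypothetical sample of comment counts per ticket:

```python
from statistics import mean, pvariance

# Hypothetical number of comments per ticket, sampled over the past 3 months
comments_per_ticket = [0, 2, 7, 1, 12, 3, 0, 6]

avg = mean(comments_per_ticket)
var = pvariance(comments_per_ticket)
# Long comment threads often signal discussions that should have happened face to face
long_threads = sum(1 for n in comments_per_ticket if n > 5)

print(f"average comments per ticket: {avg:.2f} (variance: {var:.2f})")
print(f"tickets with more than 5 comments: {long_threads}")
```

Tools like Jira export comment counts per issue, so in practice `comments_per_ticket` would be filled from such an export rather than typed in by hand.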
9: Continuous attention to technical excellence and good design enhances agility.
- How do you assess the technical excellence in your project – AND which metrics/evidence is this assessment based on?
- When you think of the latest 3-10 changes, how much time, in % of the total effort, was spent on
  - understanding the existing code base,
  - refactoring the existing code,
  - working around issues in the code base,
  - fixing unexpected problems/regression bugs,
  - manual quality assurance work,
  - manual deployment work?
- In the past 6 months, how many change requests were rejected or postponed because the current code base does not allow for the required adaptations (in reasonable time/budget)?
  - What percentage of all change requests does that represent?
- How many backlog items exist for technical improvements / resolving technical debt?
  - What is their average Cycle Time (measured from “added to backlog” to “done”)?
- In the 5 latest iterations:
  - How many of said technical improvements / resolutions of technical debt have been rejected or postponed by the Product Owner or other decision makers outside the development team? (#1)
  - In how many of these iterations have the Product Owner and/or the Stakeholders been fully satisfied with the product’s quality, adaptability and the implementation speed? (#2)
  - If #1 is > 50% and #2 is < 50%, has the team discussed the contradiction between rejecting technical improvements / resolving technical debt vs. being unsatisfied with quality, adaptability or implementation speed?
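The consistency check in the last question can be expressed directly as a rule. A sketch with hypothetical figures for the 5 latest iterations (all numbers below are invented for illustration):

```python
# Hypothetical figures collected for the 5 latest iterations
tech_items_proposed = 8    # technical improvements / debt resolutions proposed
tech_items_rejected = 5    # of those, rejected or postponed by decision makers
iterations = 5
satisfied_iterations = 1   # iterations the PO/stakeholders were fully satisfied with

rejection_rate = tech_items_rejected / tech_items_proposed   # "#1"
satisfaction_rate = satisfied_iterations / iterations        # "#2"

# #1 > 50% and #2 < 50%: tech debt work is rejected, yet satisfaction is low
if rejection_rate > 0.5 and satisfaction_rate < 0.5:
    print("Contradiction: technical improvements are being rejected "
          "while satisfaction with quality/adaptability/speed is low "
          "-- worth a dedicated team discussion.")
```

The point of writing it down this bluntly: the two numbers are usually collected by different people (team vs. Product Owner), so the contradiction often goes unnoticed until someone puts them side by side.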
8: Agile processes promote sustainable development. The sponsors, developers, and users should be able to maintain a constant pace indefinitely.
- How many people have left the project in the last 6 months?
- How many new people have joined the project in the last 6 months?
- How long has the project been going on?
- How many different Product Owners, Scrum Masters and Architects have worked on the project so far?
- What has been the weekly work time for people over the last 3 months?
- Which people on the team CANNOT be replaced by a successor or compensated for by the remaining team within 4 weeks?
11: The best architectures, requirements, and designs emerge from self-organizing teams.
- How many decisions has the team made in the last 3 months?
  - How many of them were made by individual people (e.g. Scrum Master, Product Owner, Project Manager) vs. by the dev team as a whole?
  - How many of them had to be approved by someone outside the team before they became effective, e.g. by the Line Manager, Director of XYZ, CEO etc.?
  - How many of them opposed or went beyond company guidelines?
  - How many of them opposed the Product Owner’s / Project Manager’s, Line Manager’s etc. or any outside stakeholder’s opinion?
12: At regular intervals, the team reflects on how to become more effective, then tunes and adjusts its behavior accordingly.
- How often has the team met for reflection sessions in the last 3 months?
- Has the team or any outside stakeholder had the impression that the project was in a bad situation at least once in the last 3 months?
  - If yes, did the team meet for a reflection session / stop-the-line meeting outside their regular meeting schedule?
- How many and what changes to the way of working has the team tried in the last 3 months?
- How many proposed changes to the way of working were NOT tried in the last 3 months because, e.g.,
  - they were turned down by the Product Owner, the Scrum Master or an individual lead role on the team,
  - they were turned down by an outside stakeholder, e.g. the Head of Department,
  - the team did not pursue them for fear of rejection or to avoid the need for outside approval,
  - the time, budget or resources were lacking?