In accordance with the fixed-time, flexible-scope premise of Shape Up, at the end of a build cycle there will likely be gaps/differences between the pitched scope and the realized scope.
Do you revisit pitched vs. realized scopes? Do you log the gaps for any type of future use?
We do take stock of outcomes, but since scope is variable, I avoid thinking in terms of gaps or differences. IMO that is not positive language to measure the value of a cycle.
In fact, we do the opposite and celebrate realized scope: we highlight accomplishments (both internal and for customers/users) and share insights learned along the way. We broadcast this to the whole company. It boosts team morale and “sells” Shape Up internally.
If we do look at gaps, it’s more about the original pitch: whether defects in the pitch hindered the team’s progress. That helps us write better pitches in the future.
Unrealized scope could flow into the cool-down (if it’s tiny things) or be put into the next cycle as a continuation of the pitch, if appetite persists.
This is a great question @rflprr thank you for bringing it up! I’d be interested to hear if other teams do anything special to close out a cycle.
I’d frame this more about team performance than about tracking work.
We assume that any changes in scope are for the better — they are the result of learning what works better in reality, given the appetite. From that POV, we don’t care about the gap.
However, that assumes leadership is happy with what gets shipped. If the work that ships is for some reason not satisfying, then there are two paths to investigate:
Did something go wrong with the team’s decision making and/or execution?
Was there a problem from the beginning in the shaped work?
I’d view this more as a debriefing matter, and something we only do when a problem comes to our attention, rather than some standard procedure.
Of course, some review during the cycle mitigates this. Good times for review are (a) if/when some work is stuck uphill or (b) after a couple of meaningful scopes have rounded the hill.
I would imagine that 2, 4, and then 6 weeks after you ship a batch to users, you begin to collect data, write an outcome report, and share it out. That report can look like the following template:
OUTCOME REPORT:
(Post as a Heartbeat message on the Product HQ’s Message Board once it is documented and completed here. Then, archive this project. This outcome report is completed ~30-60 days after a product/solution/feature is shipped to users.)
YOUR BET
We believe <this feature/solution/capability>
for <x number of y users>
will result in <this outcome (a change in your users’ behavior)>.
We will know we have succeeded <when we see x measurable signal>.
RESULTS
(Qualitative and/or quantitative data.)
CONCLUSION
Did the results match your hypothesis? Y/N
Did the results contradict your hypothesis? Y/N
Are the results inconclusive? Y/N
NEXT STEPS
What, if anything, are we doing to improve the results of this experiment/product release?