What do you do when shaping needs a lot of research work, AKA when figuring out the rabbit hole patches is itself a lot of work?

For example, imagine you have trouble understanding the issues your customers hit with your service, and you want to implement universal logging across the backend, frontend, iOS, and Android apps so that you can get a closer look at what's happening where. That is very likely a valuable investment and definitely not huge in complexity (it certainly fits in a cycle), yet which solution to use (or several of them) is quite unclear, and it is exactly the process of evaluating the solutions that is complex.

Certainly some of the shaping could be done with just deep thinking and studying demos, but selecting the final option would need quite a non-trivial proof of concept: e.g. some logging solutions may be incompatible with iOS, some won't let you respect privacy regulations easily, you'll need to figure out exactly what kind of privacy popup will be good enough for iOS, and you'll want to simulate a significant volume of logs and see how useful the visualizations are in the different solutions.
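
To make the comparison concrete: the kind of throwaway scaffolding I have in mind for such a proof of concept is a thin facade that hides the candidate solutions, so the evaluation code never couples you to any one vendor. A minimal TypeScript sketch, where all the names (`Logger`, `ConsoleLogger`, `FanOutLogger`) are hypothetical and each real candidate would get its own adapter:

```typescript
// A thin facade for evaluating logging solutions behind one interface.
// All names here are hypothetical; write one adapter per candidate vendor.

type LogLevel = "debug" | "info" | "warn" | "error";

interface Logger {
  log(level: LogLevel, message: string, context?: Record<string, unknown>): void;
}

// Baseline adapter: just writes to the console.
class ConsoleLogger implements Logger {
  log(level: LogLevel, message: string, context?: Record<string, unknown>): void {
    console[level](message, context ?? {});
  }
}

// Fan-out adapter: sends every event to all candidates at once,
// so you can compare their dashboards on the same simulated traffic.
class FanOutLogger implements Logger {
  constructor(private readonly targets: Logger[]) {}

  log(level: LogLevel, message: string, context?: Record<string, unknown>): void {
    for (const target of this.targets) {
      target.log(level, message, context);
    }
  }
}

// Application code depends only on Logger, so the eventual bet
// (vendor X, or a home-grown solution) is a one-line swap.
const logger: Logger = new FanOutLogger([new ConsoleLogger()]);

// Simulate a burst of events to judge how the visualizations hold up.
for (let i = 0; i < 1000; i++) {
  logger.log("info", "checkout step completed", { step: i % 5, userId: i });
}
```

The point of the facade is that the research work (comparing vendors) and the implementation work (instrumenting the app) stay decoupled, so whatever the proof of concept concludes, the instrumentation isn't thrown away.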

What do you usually do in this sort of situation?

  • A. Do you consider it part of shaping, just unusually expensive shaping, and do the needed proof of concept out of cycle?
  • B. Or do you treat the needed proof of concept as a [small] batch, delivered not to the end user but to the shapers, to help them list the rabbit hole patches properly? And if a small batch is assumed to be enough, can you then bet on implementing the solution as the next small batch within the same cycle?
  • C. Or, if you are quite certain that some solution will be selected (we definitely want to log things, and we are sure the implementation will fit into a cycle), do you just assume that the research will take part of the cycle and the rest of the cycle will be spent implementing it?
  • D. Or something very different?

After reading @rjs's book, option A seems like the natural choice (if you are not clear how the rabbit holes will be patched, go research the options), yet in research-intensive tasks such research and proofs of concept could easily need some 70-80% of the total work and… then most of your work is done out of cycle, which doesn't look very predictable or manageable :confused:

P.S.
I guess the same question is relevant for most architecture-related decisions. In many, many cases it is choosing the database, monolith vs. microservices, this or that queueing solution, etc. that takes the time, while the final implementation can often be relatively trivial.

The product I was working on faced this exact issue a few months back. Some thoughts:

“Universal logging” is something the team broke into pieces. We started with the web front-end, then the back-end, then iOS (we don't do Android). We didn't try to tackle everything simultaneously in order to settle on one solution.

We also moved the risk to the front, meaning we started with the work that was most likely to dictate/constrain our logging architecture. For us, that was the web client. (Although our situation is a bit different, as we built our own logger rather than implementing a third-party solution.)
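
Not our actual code, but to give a feel for the "build your own" route: a minimal sketch of a home-grown web-client logger that batches events and ships them to the backend, assuming a hypothetical `POST /api/logs` endpoint that accepts JSON batches:

```typescript
// A minimal home-grown browser logger: buffers events and flushes them
// in batches. The /api/logs endpoint is a made-up example, not our real API.

interface LogEvent {
  ts: number; // epoch milliseconds
  level: "info" | "warn" | "error";
  message: string;
  context?: Record<string, unknown>;
}

class WebLogger {
  private buffer: LogEvent[] = [];

  constructor(private readonly endpoint: string, flushIntervalMs = 5000) {
    // Flush on a timer, and also when the page is hidden,
    // so batches aren't lost on navigation or tab close.
    setInterval(() => this.flush(), flushIntervalMs);
    document.addEventListener("visibilitychange", () => {
      if (document.visibilityState === "hidden") this.flush();
    });
  }

  log(level: LogEvent["level"], message: string, context?: Record<string, unknown>): void {
    this.buffer.push({ ts: Date.now(), level, message, context });
  }

  private flush(): void {
    if (this.buffer.length === 0) return;
    const batch = this.buffer;
    this.buffer = [];
    // sendBeacon survives page unloads better than fetch for telemetry.
    navigator.sendBeacon(this.endpoint, JSON.stringify(batch));
  }
}

const logger = new WebLogger("/api/logs");
logger.log("info", "search performed", { query: "shape up" });
```

Starting with the web client meant the first batch exercised the whole pipeline end to end (client buffering, transport, backend ingestion), which is exactly the part that would constrain the other platforms.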

And while we ultimately delivered that first batch of work to end users, we first piloted it internally with a beta customer as a proof of concept during the cycle.

So to answer your question, we did a mix of B and C (we were sure we were going to do logging). It was in-cycle work, but it was small and limited. The goal was to prove the feasibility of the logging architecture, the idea being that if we knew it worked for the web front-end, it would work for the rest. And if it didn't work, we only lost that little piece and not the whole thing.

Hope that helps. Good luck!