Becoming narrators, DALL-E and pattern language

David Friedberg makes an eloquent point here on the different roles “humans” have assumed over time. Here’s the clip: E99: Cheating scandals, Twitter updates, rapid AI advancements, Biden's pardon, Section 230 & more - YouTube (45:21-48:53). TL;DW: humans have more or less followed this progression on Earth over time:

  1. Passive
  2. Laborers - ploughed fields, walked distances
  3. Creators - once labor became more automated, we switched to knowledge work

  Up next as AI proliferates:

  4. Narrators - instead of creating a blueprint for a house, you describe the house you want and the technology (software) produces it.

The thesis has wide ranging implications but I couldn’t help but connect the dots to Shape Up and the pattern language used when creating Pitches. My (very) cursory understanding of DALL-E is that it uses something called CLIP where it is trained on hundreds of millions of images to make the correct associations of text <> image. E.g.

a teddy bear riding a skateboard in Times Square
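My understanding of how that association works, very roughly sketched: CLIP embeds both the text and the image into the same vector space and scores how close they are. Here's a toy illustration of that idea, with tiny hand-made vectors standing in for real CLIP embeddings (the vectors and captions are made up for illustration, not real model output):

```python
import math

def cosine(a, b):
    """Cosine similarity between two vectors: how aligned they are."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Hand-made 3-d vectors standing in for real CLIP embeddings (hypothetical values).
image_embedding = [0.9, 0.1, 0.2]  # imagine: a photo of a teddy bear on a skateboard
caption_embeddings = {
    "a teddy bear riding a skateboard in Times Square": [0.8, 0.2, 0.1],
    "a bowl of fruit on a table": [0.1, 0.9, 0.3],
}

# Pick the caption whose embedding sits closest to the image's embedding.
best = max(caption_embeddings, key=lambda c: cosine(image_embedding, caption_embeddings[c]))
print(best)  # → a teddy bear riding a skateboard in Times Square
```

The real model learns those embeddings from hundreds of millions of image-text pairs; the matching step at the end is the same basic idea.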

Similarly, here’s an excerpted pattern from Christopher Alexander’s “The Oregon Experiment” describing a pattern for one of his projects.

People use open space if it is sunny, and don’t use it if it isn’t, in all but desert climates. Therefore: Place buildings so that the open space intended for use is on the south side of the buildings; avoid putting open space in the shadow of buildings; and never let a deep strip of shade separate a sunny area from the building which it serves.

And finally, an example from a Pitch for a “Testimonies” feature:

Testimonies can be positive or negative. Depending on which sentiment the subscriber chooses to leave, we update the question in the testimony form.

You can start to imagine something like a “CLIP” model being applied to the app you work on. Instead of images, the model knows your affordances, major nouns, interactions, and so forth. For an app that is mostly CRUD operations, combined with the emergence of tools like GitHub Copilot, simply “Narrating” new features starts to come into focus.
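To make that concrete, here's a deliberately naive sketch of the idea: an app exposes its known nouns and operations, and a plain-English feature description is matched against them. Everything here (the affordance table, the matching logic) is a hypothetical stand-in; a real system would use a learned model, not keyword matching:

```python
# Hypothetical registry of an app's "major nouns" and the CRUD verbs each supports.
AFFORDANCES = {
    "testimony": ["create", "read", "update", "delete"],
    "subscriber": ["create", "read"],
}

def narrate(description):
    """Match a plain-English feature description against known nouns and verbs."""
    words = description.lower().split()
    return [
        (noun, verb)
        for noun, verbs in AFFORDANCES.items()
        for verb in verbs
        if noun in words and verb in words
    ]

print(narrate("let a subscriber create a testimony"))
# → [('testimony', 'create'), ('subscriber', 'create')]
```

The point isn't the keyword matching; it's that the vocabulary of a Pitch (nouns, affordances, interactions) is exactly the kind of structured "pattern language" such a model could be grounded in.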

I’ve always tried to predict where to spend effort learning new skills that will be relevant over the next 10-15 years in my career. If Friedberg’s prediction is correct and AI becomes this type of tool as described above, then mastering the art of Shaping (which can be compared to narrating) seems like a worthwhile pursuit in my little corner of the world. I was glad to see RJS continue to refine Shape Up here.


I think it is an excellent analogy. And just as you need to be able to write a good pitch for a good result, you need to be very good at prompt engineering to get the desired results from an AI.


I think it could also be compared with Github Copilot.

Copilot is to code what Shape Up is to product development: you describe what you want precisely enough while leaving flexibility. This is efficient for the writer and the executor alike.
