I wanted to understand the Big Model, but as an outsider who was never part of the Forge I don’t have the grounding in the language to make the intuitive leaps between topics as they’re presented in the wiki. I needed a map, so I drew one.
Concept Mapping is a technique developed by Joseph Novak. It looks superficially like Buzan’s Mind Mapping, but it is distinct in two subtle yet important ways:
- A C-Map is decentralised compared to a Mind Map: there is no implied hierarchy or priority.
- A C-Map has contextual (labelled) links between nodes, as in the sketch below.
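To make the structural difference concrete, here’s a minimal sketch in Python. The node names and link phrases are my own illustrations, not taken from the Big Model wiki: a Mind Map is essentially a tree hanging off one privileged root, while a C-Map is a labelled graph of propositions of the form (concept) --link phrase--> (concept).

```python
from collections import defaultdict

# A Mind Map is effectively a tree: one central topic with branches
# radiating outwards, so every node's meaning comes from its position.
# (Illustrative labels only, not the wiki's structure.)
mind_map = {
    "Big Model": {
        "Social Contract": {},
        "Exploration": {"Techniques": {}},
    }
}

# A C-Map is a labelled graph of propositions, with no privileged root.
c_map = defaultdict(list)  # concept -> list of (link phrase, concept)

def link(src: str, phrase: str, dst: str) -> None:
    """Record one contextual link (a proposition) between two concepts."""
    c_map[src].append((phrase, dst))

link("Techniques", "inform", "group norms")
link("group norms", "grant", "Authority")
link("Social Contract", "frames", "Exploration")

# Print every proposition in the map.
for src, edges in c_map.items():
    for phrase, dst in edges:
        print(f"{src} --{phrase}--> {dst}")
```

Keeping each edge as a (phrase, target) pair makes the propositions first-class, which is exactly the property the tree structure can’t express.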
C-Maps have more potential as pedagogical tools. As this post says:
> This map does not show how I intend thinking about this problem. Rather, it shows the results of my thinking.
However, this one is still pretty big, so it will take some effort to follow. The colour coding should help (see the legend in the bottom left corner).
PDF version here.
This doesn’t have everything in the Big Model wiki (but it has most of it). Some immediate impressions:
- It’s messier than the Big Model Onion diagram, but it does retain the high concept through the colour coding, though not the progression through layers.
- I don’t see everything as subordinate to the Social Contract (e.g. Whiff Factor).
- I am not sold on the layers, at least not as the discrete objects they are implied to be. There is feedback between the areas: Techniques must inform the norms established by the group, even if those norms (leading to permissions and Authority) are a social issue.
- I see a single principal motive, Reward; the motives for behaving badly (turtling, prima donna behaviour, railroading and deprotagonism) aren’t so clear. Are they a consequence of incoherence?
The big gap I can see in the model is that it barely acknowledges cognitive load or the role of decision making. It does in places (e.g. seek and handling time, IIEE), but a great deal of attention is spent on social issues. Could those issues, which frequently come from misalignment between players, be fixed by addressing cognitive load? Certainly “incoherence” and cognitive burden must be related, as incoherence involves orthogonal procedures and decision making.
The antidote to this incoherence in storygames is frequently to reduce: simplify around a set of base assumptions, and narrow the scope of action by making implied permissions explicit. But I think this has consequences: I’ve said before that Apocalypse World seems to drive every decision towards a Type II decision making model; with that in mind, the mantra that “every rule breaks immersion” becomes self-fulfilling.