I started by talking about online games that matter, followed by a report on the BreakthroughToCures game and some observations on that game; when the game results and report appeared, I reported on them here.
Earlier, two other games were played; I reported on neither. The first was CatalystsForChange, and the second was the U.S. Navy’s MMOWGLI. MMOWGLI was played on a platform licensed from IFTF and built at the Naval Postgraduate School; that platform is to be released as an open source project. In each of those games, I achieved the highest score on the leaderboard. I remain skeptical that being at the top of the leaderboard conveys much information beyond levels of persistence, and perhaps innate skill at gaming the scoring rubric. But that score can bear some meaning in a ValueMatrix sense when combined with other metrics, such as the count of “super interesting” marks that player achieved. An overall observation from having played four such games starting in early 2010 is this: the quality of players and their game moves is improving; that is an important trend.
The next section of this report relates to experience gained while playing another game, this one seeking ideas about next-generation hospitals. All of these games are based on the Institute for the Future’s Foresight Engine. To anticipate: game play in a Foresight Engine game, a kind of card game, involves choosing a particular card type (question, answer, etc.) and making a statement or asking a question. You never actually make a de novo game move; you are always responding to another card.
Comments on Gameplay
These comments relate to the hospital game. It’s extremely draining when you play to win, where winning is defined as sitting at the top of the leaderboard. I won; in fact, I helped #2 get there from #4 on the leaderboard just by playing directly with him in a fashion not unlike ping pong. Let me explain.
Ping pong, as you know, is a game in which two people swat at a flying ball, each trying to get the other to miss and so win points. David Bohm pointed to ping pong as an example of how not to conduct a conversation.
But I just said that ping pong with another player moved him from #4 to #2 on the leaderboard. Sounds like the opposite effect? Sure. In this case, we kept “hitting the ball” by responding to each other’s game moves, and each of us got lots of points for each swat of that ball (read: game move). Sounds like win-win, and it is. Why? The scoring rubrics.
In the past, I argued that the particular scoring mechanism in a Foresight Engine game encouraged the wrong kind of game play; today, I will argue otherwise. Here is what I mean.
Let me explain my use of the term ValueMatrix. Ordinarily, we think of things as valued in some way. But, sometimes the way in which we value something might not be appropriate to some context. Value ought to be a matrix; it is a high-dimensional concept.
Let me sketch the Foresight Engine. I view it through two lenses. First, the user experience is that of a card game. Then, the actual conversation is that of a tree, a particular kind of tree, one which structures the conversation along lines suggested by Issue-based Information Systems (IBIS), which came into play in the search for finding resolutions to wicked problems; visit Cognexus, DebateGraph, and Compendium to learn more (those are by no means the only resources–this is a rich and active field).
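To make that tree concrete, here is a minimal Python sketch of an IBIS-style conversation tree. The `Card` class, its card types, and the `play`/`depth` methods are my own illustrative assumptions, not the Foresight Engine’s actual data model:

```python
from dataclasses import dataclass, field

# Hypothetical sketch of an IBIS-style conversation tree. The card kinds
# (question, answer, pro, con) follow the card-game description above, but
# the names and structure here are illustrative only.
@dataclass
class Card:
    kind: str                                    # "question", "answer", "pro", "con", ...
    text: str
    replies: list = field(default_factory=list)  # child cards played off this one

    def play(self, kind, text):
        """Every move responds to an existing card; no de novo moves."""
        child = Card(kind, text)
        self.replies.append(child)
        return child

    def depth(self):
        """Length of the longest branch below this card (the height of a 'Build')."""
        return 1 + max((c.depth() for c in self.replies), default=0)

root = Card("question", "What should a next-generation hospital look like?")
a = root.play("answer", "Treat the hospital as a guild-run game world.")
a.play("pro", "Guilds could filter noise before publishing moves.")
print(root.depth())  # 3
```

The key structural property is that every card except the root hangs off some other card, which is exactly the “always responding” constraint described above.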
The scoring rubric says this (my interpretation, which could be weak or wrong):
- you get points for making a game move, but only when someone responds to your move with one of their own. In some sense, that’s a ValueMatrix-like approach: if nobody reads and responds to your card (game move), it isn’t worth anything, and that is one dimension in a ValueMatrix.
- when people respond to your card in droves, building a “tall tree structure” (called a “Build” in the game), the deeper they go into that tree (the longer the tree branch), the more valuable your own move(s) become (really, all moves in that tree). At this time, it is not clear whether points are allocated up the tree to all the cards in that branch, but they could be.
- if a “game runner” (an expert reading cards behind the curtain, a wizard as in Oz) likes a card, that card gets marked as “super interesting”, and that game move is now worth far more than otherwise, meaning game play off that card (see 2) is even more valuable.
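The rubric above can be caricatured in code. This toy Python sketch is my reading of rules (1) through (3); the point values, the “super interesting” multiplier, and the decision to weight by branch depth are all assumptions for illustration, not the engine’s actual rules:

```python
# Toy reading of the scoring rubric; BASE and SUPER_MULT are invented numbers.
BASE = 10
SUPER_MULT = 5  # assumed boost for a "super interesting" mark

def score(card, depth=1):
    """Return {card_id: points}. A card earns nothing unless someone replies
    to it (rule 1); replies deeper in the Build are worth more (rule 2); a
    "super interesting" mark multiplies what a card earns (rule 3)."""
    points = {}
    for reply in card["replies"]:
        earned = BASE * depth                  # deeper in the tree => bigger swats
        if card.get("super_interesting"):
            earned *= SUPER_MULT
        points[card["id"]] = points.get(card["id"], 0) + earned
        for cid, p in score(reply, depth + 1).items():
            points[cid] = points.get(cid, 0) + p
    return points

# A three-card ping-pong branch: c1 <- c2 <- c3
c3 = {"id": "c3", "replies": []}
c2 = {"id": "c2", "replies": [c3]}
c1 = {"id": "c1", "super_interesting": True, "replies": [c2]}
print(score(c1))  # {'c1': 50, 'c2': 20}
```

Note what the sketch makes visible: c3, the leaf, earns nothing until someone answers it, which is precisely the incentive to keep the rally going.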
My view is that (2) encourages ping pong. I tend to stick with that interpretation (eyes wide open to other ideas). Build tall trees to win. That’s what we did. As #4 rose to #2, our later moves appeared to be worth many hundreds of points each. It did help that at least one card in that tree had a “super interesting” notation (3).
Now, my original notion was that ping pong encourages chatter, mindless game moves just to get points. Indeed, I would argue that some game play is like that, and that was my complaint before now. Early in the Hospital game, I began to realize that mindless chatter is necessary. It is the noise in a Boltzmann machine that jiggles things around, letting them have opportunities to re-settle into different patterns. Let me unbundle that statement.
The setting is a fitness landscape. Think of a hill as a representation of a big problem you need to solve. You start climbing that hill. At some point, you find a plateau (it’s foggy) which you take to be the top of the hill, so you stop climbing, sit down, have lunch, and tweet your victory. In some cases, you are at the top; in others, you are not there yet and you need something to “kick you in the butt” and start you climbing again. Kennan Salinero reminded me of energy-minima landscapes, which allow movement across activation energies into new local minima, as in thermal motion.
Here’s the deal as I see it. Some fitness landscapes (hills) are gentle, others are rugged, steep, jagged. A Boltzmann machine is a learning machine that needs randomness to stir it up so it can re-settle to new patterns. That metaphor suggests that if you have a lot of noise on a gentle landscape, not much will happen. But, given a rugged fitness landscape, noise might actually knock you off the plateau and force you to start climbing again.
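That Boltzmann intuition can be sketched as a noisy hill climb. The rugged landscape, step size, and temperatures below are invented for the demo; the only point is that accepting occasional downhill moves (noise) lets a climber escape a local plateau that a quiet climber stays stuck on:

```python
import math
import random

def rugged(x):
    """A made-up rugged landscape: many local peaks, slowly rising overall."""
    return math.sin(3 * x) + 0.1 * x

def noisy_climb(x, temperature, steps=2000, seed=0):
    """Hill climb with Boltzmann-style acceptance of downhill moves.
    Returns the best point found along the way."""
    rng = random.Random(seed)
    best = x
    for _ in range(steps):
        candidate = x + rng.gauss(0, 0.2)
        delta = rugged(candidate) - rugged(x)
        # always accept uphill moves; accept downhill ones with probability
        # exp(delta / T) -- this is the "noise" that jiggles us off plateaus
        if delta > 0 or rng.random() < math.exp(delta / temperature):
            x = candidate
            if rugged(x) > rugged(best):
                best = x
    return best

quiet = noisy_climb(0.5, temperature=1e-6)  # essentially no noise: stuck on the first peak
noisy = noisy_climb(0.5, temperature=0.5)   # noise lets the walk reach higher peaks
print(rugged(noisy) >= rugged(quiet))
```

On a gentle landscape the noise buys little, as the paragraph above says; on this rugged one, the high-temperature walk finds better peaks than the quiet one.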
Map that back to game play. If you read lots of (nearly all) cards as I do, then you get ideas; the more the better. Mindless ping ponging might just offer the noise necessary to get me thinking outside whatever box is driving me at the moment.
But, this is also a picture of a double-edged sword. Here’s what I mean.
The context for this thought is that of a WorldOfWarcraft-like framework in which an imagined Hospital game posed a Quest, to be played by Guilds. If the Quest’s final game tree included mountains of mindless chit-chat, it’s hard to imagine very many “people who matter” reading that game tree. So, Guilds serve to generate game moves. It’s the task of Guild members to apply whatever mindless chatter it takes to evolve solid, concise, novel, profound game moves, and to keep that mindless chatter out of public view; only the final game moves are published to the Quest’s game tree. The thought here hints at the notion that we could conduct games in such a way that noise occurs where noise adds value, leading to crisp, clean, concise, and comprehensive game moves which reach those who need to see them.
Let me now return to the scoring rubric one more time. Consider (3), marking a game move as “super interesting”. Let me suggest that to be a kind of missed opportunity. Let me explain.
Some masked person marks my card as super interesting. On the surface, I now get more points, bragging rights, maybe a better place in line at the supermarket. But I have no clue what it is about my game move that made it interesting. Don’t get me wrong: “more points” is good! But that’s not the point of game play for me; in my view, the point of game play is to maximize our understanding of some situation (Quest) and perhaps make profound discoveries along the way. For that, we need all the information we can get. The label “super interesting” is useful as far as it goes: it serves as an attractor basin (ants on honey) that draws others to play that hand. It would be more valuable still if it included statements justifying the award.
I can anticipate a valid counter argument to what I just said: learning what a domain expert thinks of one’s move, in particular, their justifications might bias further game play. So, we are sitting in the middle of a mildly wicked problem, in which potential gains to the game players in their understandings (knowledge) might actually bias their game play such that we miss something they might otherwise have “thunk up”. It’s a tough call.
In my view, a key point is this: (3) is the only mechanism by which some game move is rated in terms of possible semantics in a direct way.
If we revisit ants on honey (attractor basins), game moves should be interesting, and in particular, they should gain ValueMatrix scores based precisely on why they are interesting. It is that why which counts. “Because someone thought so” is not quite as useful as why that person thought so. Just consider the echo chambers which serve political dialogue to see why this matters. We can build into our game engines all sorts of software agents that roam about a knowledgebase and form relations between a game move and other topics. Consider one of my “super interesting” cards (one of three I got).
With that card, I referenced the topics “other cards” (this game), “PTSD patients”, “benefits”, “game play”, “avatars”, “storytelling”, and “guild activity”. You could analyze that claim (the card’s statement) in as many different ways as there are stars in the universe, but we don’t need to do that. We can link those topics out to a trusted knowledgebase and appeal to those topics for ValueMatrix merits. For instance, consider PTSD. How does it stand in this precise context?
First, the overriding context is next-generation hospitals, and PTSD patients sometimes engage hospital ecosystems in one way or another. There you get some points. Next, PTSD, as a medical issue, is a hot topic. More points. After doing that analysis for all of those topics, you start looking at the coherence of the claim itself; all the mentioned topics seem to fit together as a coherent claim. More points.
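The topic-linking analysis just described might be sketched like this. The trusted knowledgebase, its per-topic attributes, and the three scoring dimensions are all hypothetical, invented to show what a multi-dimensional ValueMatrix score (rather than a single number) could look like:

```python
# Hypothetical trusted knowledgebase: which topics fit the hospital context,
# and which are currently "hot". The entries here are invented for the demo.
TRUSTED_KB = {
    "PTSD patients":  {"hot_topic": True,  "in_context": True},
    "avatars":        {"hot_topic": False, "in_context": True},
    "storytelling":   {"hot_topic": False, "in_context": True},
    "guild activity": {"hot_topic": False, "in_context": True},
}

def value_matrix(card_topics):
    """Score a card along several dimensions rather than as a single number."""
    known = [TRUSTED_KB[t] for t in card_topics if t in TRUSTED_KB]
    return {
        "context_fit": sum(1 for t in known if t["in_context"]),   # fits the Quest
        "topicality":  sum(1 for t in known if t["hot_topic"]),    # hot topics
        # crude coherence proxy: fraction of the card's topics the KB links
        "coherence":   len(known) / len(card_topics) if card_topics else 0.0,
    }

print(value_matrix(["PTSD patients", "avatars", "storytelling", "guild activity"]))
# {'context_fit': 4, 'topicality': 1, 'coherence': 1.0}
```

The output is deliberately a dictionary, not a scalar: the whole point of a ValueMatrix is that the why behind a score survives as separate dimensions instead of being collapsed away.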
Maybe that analysis is precisely what the game runner did; we shall never know. But we do know that the analysis is useful, so we strive to build such analytical capabilities into our platform. IBM’s Watson does something like this.
Overall, I believe that the road ahead for what I will call IBIS Games lies along three parallel but necessarily converging paths:
- User Experience
- Scoring Metrics
- Collaborative Games (Quests, Guilds, Avatars).
I would like to think that there are much larger uses for games, particularly in the fields of education and sensemaking (think: politics, health, research). Let me close with these questions: What would a manifesto on sensemaking/learning games look like? What roles might sensemaking games play when combined with MOOCs?