Archive for December, 2006

The Myth of Content Reuse…Confirmed, Plausible, or Busted?

Definite apologies to Mythbusters (one of my ongoing favorite shows on TV). They’ve had so many great episodes. Who knew that water is a good way to avoid bullets and that a fire extinguisher is a great way to chill a 6 pack of your favorite beverage?

Anyway…content reuse, object-oriented programming, or leftovers…whatever it’s called and whatever industry we’re talking about (learning, IT, cooking, etc.), is it really possible to get real value from reusability? Well, before I get to that, maybe we should get a few definitions in here. First, to be reusable, something must be able to be used over and over again in different settings without being changed itself.

The classic example (and also one of my favorite toys growing up and now) is Lego. The nearly 75-year-old Danish company’s core product may actually be one of the best examples of learning through play ever made. In fact, one of my favorite concepts for a business consulting company is Serious Play, which makes use of Legos in business. In any case, the single Lego block pictured to the right may be the best example of the smallest piece of content. It has things it knows about itself, including size (2×2, standard height) and color. It knows how to talk to other blocks (the top side) and knows how to listen to other blocks (the bottom side). Now, what it knows and what it can do are quite limited, but that’s part of the definition of reusable content. It must be as small as possible, but no smaller.
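If it helps to picture the block as content, here’s a tiny sketch in Python (my own names, purely illustrative) of an object that knows a few things about itself but nothing about its context:

```python
from dataclasses import dataclass

# A hypothetical sketch of the Lego block as the smallest content
# object: it knows its own attributes but carries no context about
# the larger structure it might join.
@dataclass(frozen=True)
class Block:
    width: int   # studs across, e.g. 2
    depth: int   # studs deep, e.g. 2
    color: str

    def stacks_on(self, other: "Block") -> bool:
        # The shared interface (studs on top, tubes underneath) is
        # what makes any block combinable with any other.
        return True

red = Block(width=2, depth=2, color="red")
print(red.stacks_on(Block(2, 4, "blue")))  # True
```

Whether it ends up in a wall or a spaceship is decided entirely by the structure it’s assembled into, not by anything inside the object itself.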

What’s really important for reusability, however, is that the object itself has no context. Is it a wall or a car? Is it a spaceship or a cathedral? It is none of these, but it can be part of any of these. Only once it becomes part of a bigger structure, like the one pictured to the left, and we add the context, can we know what it’s really a part of.

Let’s take a look at this in corporate learning. The smallest piece of content might be one of the corporate values, such as “Keep the Customer First” or “The Customer at the Center” or some such thing. Nearly every company has a mission, vision, or value with some variation on that theme. That value could obviously be reused in a course on values, and it also could come up in courses on sales, customer service, and building maintenance (i.e., let the customer walk by first rather than running them over with the inventory re-stocking cart). By having that value as an object, rather than re-typing it each time, the object could be housed in a central repository and called on each time it’s needed. If it ever needs to change (words added or removed, for example), it is changed once in the repository, and every place it’s used is automatically updated. This is a pretty simple example and maybe even a silly one, but it reasonably demonstrates several of the benefits and limitations of reuse.
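Here’s a minimal sketch, in Python, of that single-source idea (the keys and wording are hypothetical): course pages reference the value by key rather than copying its text, so one edit in the repository shows up everywhere it’s used.

```python
# A toy central repository of content objects (all names made up).
repository = {"value.customer_first": "Keep the Customer First"}

def render(template: str) -> str:
    # Substitute repository objects referenced as {key} in a course page.
    for key, text in repository.items():
        template = template.replace("{" + key + "}", text)
    return template

sales_course = "Our guiding value: {value.customer_first}."
maintenance_course = "Remember: {value.customer_first}."

print(render(sales_course))        # Our guiding value: Keep the Customer First.

# One change in the repository...
repository["value.customer_first"] = "Keep the Customer First, Always"
print(render(maintenance_course))  # ...and every course that uses it is updated.
```

The courses never store their own copy of the value, which is exactly what makes the one-change-updates-all benefit possible.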

Benefits of Reuse

  1. Quick & Easy Updates – Done right, true reuse promises to make updates simple. Change once for all. The more places an object is used and the more often it can or does change, the more powerful this benefit is. In fact, this advantage also points to one of the times when reusability should be considered. If an object is used a lot in many different places or it changes frequently, a structure and process for reuse should definitely be considered.
  2. Consistency – When something is retyped, restated, or reused often, it can be modified each time it’s passed on. It’s like the old game of telephone or duplicating old cassette tapes: each generation is a little lower quality than the one before. True reuse ensures consistency regardless of the number of times an object is used.
  3. Knowledge Repository – Sometimes just having the object in a central place can be a value in itself. It’s why libraries (just another assembly of knowledge objects) were so valuable back in the day. People could go to one place (and they knew where it was) to find information.

Limitations of Reuse

  1. Upfront Cost – No matter what the experts say, reuse has a huge upfront cost. It’s expensive to build a usable library. Objects have to have a place to live, be discoverable (easily), and be accessible regardless of the medium. This costs money and takes a lot of time and patience to create. Does it pay off? It depends on how long it continues and how well it’s created and used.
  2. Extended Dedication – To have value, all objects (or at least all important objects) must be in the library and cannot exist in other locations. As soon as exceptions are made to this, the library falls apart. If some objects are in one place and others in another, it reduces the value of the library. If one object gets an exception, soon all will follow. Since creating the library takes extra time and money, it takes special dedication to stick with it both in creation and usage, often for many years before the real value is received (especially if the objects change infrequently).
  3. Depth – It’s very difficult to decide what level to go to. In the values example above, is an individual value the right depth, or is it too granular or too generic? Well, the level below a value is an individual word, and that’s definitely too far. How about the level above, listing all of the values? Well, that depends on what’s being done with it. When one value is mentioned, will all of them be mentioned? It takes a lot of foresight and oversight to know what the right answer is.
  4. Context – As the altitude is increased, more and more context is wrapped around a piece of content. Each level of context added to an object reduces or eliminates its reusability. Using the values example, a value of Customer First is probably pretty reusable and has almost no context as to what to do with it. Taking a case study on how to deal with a customer during a product return and turning that into an object becomes less reusable, because it probably doesn’t apply in a course about building maintenance (even though the underlying principle does). Context reduces reusability.
  5. Consistency – If the library is large (which it likely will be if it’s going to provide the value), it probably will require multiple people to create. Those people must be exactly consistent on things like structure, tagging (labeling each item), and taxonomy so that, regardless of who’s doing the searching, objects can be found.
So, is the Myth of Content Reuse confirmed, plausible, or busted? I think the best I can do is call it plausible. There are many examples, especially in the areas of Information Technology and programming. However, too many LCMS, reusability, and object-oriented projects have failed to complete, or, when completed, have failed to provide the value they promised, for this myth to be called either busted or confirmed. Should a company pursue a reusability project? I think it depends a lot on the culture of the organization, the business strategy, the quantity of changes, the speed of changes, and the number of items that are truly reusable across the organization. As always, there’s no easy answer.

Eliminating transfer

Over the last few days I’ve been thinking and talking a lot about transfer. In the learning field, transfer is simply the ability to take what you’ve learned and apply it to the job. Using the often-debated, sometimes-despised Kirkpatrick levels, transfer would be measured on Level 3. Often data for measuring Level 3 can be extracted from an enterprise system (CRM, ERP, etc) or through 360-degree feedback by comparing a person’s performance before and after the training event. Retention is certainly part of transfer, but it’s not the only thing. It’s not enough to remember what was learned when returning to work, the employee must also apply what they’ve learned. So, with that we arrive at:

Transfer = Retention + Application

Most discussions of transfer focus almost solely on retention. Here are some examples of how retention has become the focus of our discussions on transfer:

  • Time – hold the class closer to when the person needs to use it. Certainly time is a factor in retention. We all know that we forget stuff very quickly, and the longer we wait the more we forget.
  • Frequency – increase the number of times the learner hears the information, maybe even spread out over weeks. Certainly repetition aids memory, but it doesn’t have anything to do with application.
  • Flash – make it memorable by adding zing, flair, and style…do it up big. I’m all for events or visually stunning programs. When done well, they are fun and have lots of benefits for corporate culture. However, people tend to remember the flash and not the substance. “Remember that time when the CEO…” But what was said? “I don’t remember, but it was a lot of fun.”
  • Smaller Chunks – smaller, bite-sized pieces are easy to digest. While it’s probably true that a 5-minute piece is easier to grasp than a half-day or even 1-hour lecture, it still doesn’t help the learner know what to do with it.

These methods assume that the gap shown in Figure 1 is going to exist, and so they try to fill it with various things to ensure that the learner retains the information long enough to get back to work. But transfer is not just about retention; it’s also about application. In fact, if application is done well, retention will happen automatically without the need for most of the tricks listed above. The best way to increase transfer is to increase application. Here are several approaches for increasing application:

  • Simulations – I’ve talked a lot about these. What’s really important about a good simulation is that it is as close to the work as you can get without actually doing the work. This approach increases transfer by reducing the gap between what is being taught and what needs to be done. The learner shouldn’t have to make any logical leap from the simulation to the job. This approach creates a situation similar to Figure 2, virtually eliminating the gap between the learning and the work. Transfer becomes less and less necessary in this approach.
  • Performance Support – wikis, job aids, search, and other tools all provide ways to keep the learner on the job where they can apply the information immediately. Is this training? I don’t know, and to some extent I don’t care what we call it. Does the person get the information they need, and are they then able to apply it to the job? Then it works for me, whatever we call it. Most people aren’t going to remember most of the content anyway, so why not just provide it via search in the first place? Maybe we need training on how to do good searches. That’s yet another blog entry.
  • Coaching – there may be nothing better than having a good coach: someone who watches the work, provides gentle but effective feedback in the moment, offers increasingly difficult challenges, and refers you to the next coach when their own abilities have been exceeded.

With these approaches, the need for transfer is significantly reduced or eliminated. There are many other approaches that could also be explored. For a long time, our training has been moving further and further away from the job. Lectures with whiteboards, flip charts, and PowerPoint are immensely far from the work. If you’re planning to do training using those or other similar methods, then be sure to include both Retention and Application in your Transfer plans. However, I think it’s time that we as an industry focus on application so much that the training is indistinguishable from the work.


Double Your Costs to Save Money…

Sound counter-intuitive? Not to Menlo Innovations. Located in the heart of downtown Ann Arbor’s Kerrytown district, just blocks away from another innovative business, Zingerman’s Deli, the two share many strategy and growth philosophies, but that’s another post. The Menlo name–after Thomas Edison’s famous New Jersey lab (now located at Greenfield Village in Dearborn, Michigan, pictured to the right)–was selected not only for its obvious marketing inferences, but also as a nod to their approach to work and innovation.

The office, newly located on the 3rd floor of the Kerrytown Shops, is an open loft with red brick walls and semi-unfinished floors. At its core, Menlo is a custom programming shop. While the projects they are working on are interesting from a programming perspective, it’s really the work environment and work process that make them unique. There are no cubicles, no walls, no offices…for anyone. All 50 or so staff members (programmers, project managers, designers) work in the same space. For programmers, it’s even more cozy. They each share a folding table and a computer. Two programmers per computer, working together all week long. At the beginning of each week the pairs are switched: sometimes to a different part of the same project, sometimes to a different project, and always with a different partner.

It’s not just that collaboration is built into the process, collaboration is the process. If programming in pairs fundamentally doubles the cost of writing code, why would a business do it?
  • Ownership – employees are more productive because they feel a responsibility to their partner to make each other look good
  • Quality – having two people review code as it’s being written increases the probability that an error will get caught early, which reduces costs
  • Innovation – putting two people together to solve complex problems generally creates a better solution
  • Flexibility – constant rotation of the staff means more people are familiar with the project and can jump on if time lines need to be shortened or requirements change. There’s almost no ramp up time when changes need to be made
  • Learning – Mentoring, coaching, and learning become inherent in the process. Within a few weeks something one person knows is organically spread to the entire organization.

Menlo has proved that even in a traditionally introverted environment, collaboration can improve quality, lower costs, and provide better results for the customer. Better yet, they’re not hiding their process; they’re encouraging others to use it by offering hands-on courses to programmers, project managers, and business leaders on how to apply this to their own work environments. What if all businesses, not just programming job shops, applied the same collaborative approach to work? What kind of innovation, quality, and efficiency could we gain?


Is Instructional Design Dead?

There’s quite a bit of talk going on in the blogosphere about whether or not instructional design is dead. There were even a few sessions on it at Learning 2006. Like Mark Oehlert, I’m a visitor to this land. My background is really in Information Technology and Business Management. I sort of fell into Learning and Training as outcroppings of those careers.

If by instructional design, we mean somebody designing a classroom or e-learning session, then if it’s not dead already it’s certainly breathing pretty hard. There’s often talk about the classroom having been the right choice for the Industrial Age, but that it doesn’t fit the Information Age. I’m not sure that it was ever the right choice. People learn best by doing. Yet we took them off the floor, out of the field and put them in a room with lectures and bullet points. The more we can move to on-the-job, performance support, coaching, simulations, informal learning, and other related techniques the more impact we will have on the business. The farther we get from the job (the classroom and traditional e-Learning) the more we have to worry about transfer (def: taking what you just learned and applying it to your work) which has also become a hot topic recently. The reason transfer is even a topic at all is that our training is too far away (physically and logically) from the work we have to do.

If by instructional design, we mean somebody designing work-integrated, longitudinal experiences that increase an employee’s skills and productivity, then long live instructional design (ID). However, the skills necessary to do this job are quite different from what’s often taught in our ID classrooms today. Here’s a partial list of some of the competencies for a modern ID:

  • Writing (clear, concise, readable, jargon free)
  • Project management (planning, scope, communication, budget)
  • Team building/leading
  • Visual design (usability, interface, standards)
  • Creativity
  • Standards (SCORM, AICC, etc)
  • Storytelling
  • Basic Office packages (Excel, Outlook, Word)
  • Broad format competencies (classroom, e-learning, simulations, podcasting, etc)
  • Strong communications (possibly to the level of being a communications person)
  • Modern instructional design (blended, experiential, longitudinal, on-the-job)
  • Business strategy execution
  • Leadership development
  • Performance support
  • Organizational development
  • Marketing
  • Communications
  • General business literacy
  • Plus, an expertise in whatever topic they are developing

What would you add to the list? So many of these lead to other blog entries. I can’t wait to dive in.

Interactive Museums?!?

Interactive Museums…okay, so you don’t hear those two words together too often. Some children’s museums are doing pretty well with interactive exhibits, but most of the rest of the museum exhibits that are billed as interactive are, well, a little lacking. The words Interactive Art Museum or Interactive Architecture Museum are even less likely to be found together. However, an underrated and certainly under-advertised exhibit is well worth your stop on a trip through St. Louis.

The City Museum in downtown St. Louis is part museum, part art exhibit, and part adult-sized HabiTrail. In a city more known for the Arch and riverboat casinos, the City Museum is one of St. Louis’ hidden gems. All of the pieces are made from reclaimed architectural features from within the city. Some of my favorite features include 2- and 3-story slides, a monster indoor/outdoor jungle gym (yes, that’s me dozens of feet above the ground), a large aquarium, some cool caves, and a fire pit (complete with marshmallows for roasting). All of the exhibits are built with some amazing pieces of architecture, each with its own story from historic St. Louis waiting for discovery and a truly hands-on experience.

If only we could find a way to make all learning this much fun. Next time you’re in St. Louis, I highly recommend you stop by. On Fridays and Saturdays, they’re even open until 1:00 am! (How many museums do you know that are open that late?)

Wine Reduces Violent Crime

Recent studies have shown that wine (red & white) can have significant health benefits–in moderation, of course. Now, Dmitry, a user of a new service called Swivel, has found a correlation between wine and violent crime: the red line is the trend of wine consumption over the last 25 years; the green line is the trend in violent crime. So, I think we all need to drink more wine in order to save the world.

The full study includes the data and the sourcing. Thanks to Mark Oehlert for pointing out this new service.

(Yes, I know correlation doesn’t mean causation).

Books You Should Read (if you do anything with simulation, gaming, or learning)

Many people have influenced my thinking over the last 7 or 8 years. Some of them have written books, and some I’m even lucky enough to call friends. Here are a few of the books that have had a big influence on my work and my point of view:

Jim Gee (bio), What Video Games Have to Teach Us
Jim is a professor at the University of Wisconsin-Madison and is the head of the GAPPS (Gaming and Professional Practice Simulations) group. The other people on his team are worth checking out as well including Kurt Squire (wikipedia bio) and Constance Steinkuehler both of whom are doing great research in the field.

Chris Crawford (bio, wikipedia bio), Interactive Storytelling
Chris was the founder of the Game Developers Conference and has been involved with the development of many games such as Balance of Power. Right now, Chris is working on a new tool called Storytron for helping people write their own interactive stories.

Marc Prensky (bio), Digital Game-based Learning
Marc’s original book was a ground-breaking, easy read. In it he established the concept of “digital natives” and “digital immigrants” to describe comfort issues with technology often ascribed to generational differences. In addition, he’s the founder of Games2Train, a company that tries to put the ideas into practice.

Janet Murray (bio, wikipedia bio), Hamlet on the Holodeck
Janet’s book was the first where I really began to understand interactivity and the importance of “agency” (becoming a direct participant in the story or interactivity). It’s a great book for anybody doing web design, simulations, or storytelling.

Alan Cooper (wikipedia bio), About Face 2.0
Often referred to as the “father of Visual Basic” (a programming language from Microsoft), Alan has had a huge influence on design, usability, visualization, and interactivity. His consulting company in San Francisco even offers training on the topics.

Don Tapscott (bio, wikipedia bio), Growing Up Digital
This book and his later ones provide a great deal of information on the changes taking place with various Internet-based technologies. Besides being an author, Don is also a consultant and a professor at the University of Toronto.

Jakob Nielsen (bio, wikipedia bio), Designing Web Usability
Jakob’s work on usability is the best in the space. His company has designed methodologies that are extremely valuable to the instructional designer, game designer, and web developer. Of course, any idea taken to the extreme can have negative results. The ideal is to find a balance between usability (function) and art (form) with neither being at the expense of the other.

Scott McCloud, (bio, wikipedia bio), Understanding Comics
Thanks to Mark Oehlert for the referral to this book a couple years ago. While the entire piece is incredible, Scott’s description in Chapter 2 of the use of visualization is a must read for any designer. Scott has written many books on the topic of comics and is worth going to see on his U.S. speaking tour.

What an incredible set of people and books. Most of them have other books that I would recommend as well, but these represent the best of their collective work so far. Enjoy!

Adaptive Simulations

Jim Gee‘s book, What Video Games Have to Teach Us about Learning and Literacy, was so groundbreaking when he wrote it in 2003 that it was destined to become a classic. In it he describes 36 Learning Principles that are demonstrated by most video games. I want to focus on just 2 of them here in order to describe the concept of adaptive simulations. The visual below is a representation of these concepts:

Here is the first of the principles as stated in his book:

14. “Regime of Competence” Principle
“The learner gets ample opportunity to operate within, but at the outer edge of, his or her resources, so that at those points things are felt as challenging but not undoable.”

The green area in the picture above represents an individual’s area of competence. There are a few key things to note about this area. First, as the simulation progresses, their competence is increasing. Practice and repetition allow participants to gain competence. At the same time, the threshold at which they become bored increases as well. If they are repeatedly required to do a task at which they are already expert, they will become bored. Most participants will tolerate a few moments of boredom, but if the simulation continues to stay below their level of competence, they will stop playing. The same is true for simulations that go above the area of competence into incompetence; in fact, players will stop playing even more quickly if the simulation is too difficult and they don’t seem to be making progress.

One other key: notice the relatively low threshold for entry at the start of the simulation. This allows participants to situate themselves before beginning. Other starting points would result in far fewer starts. For example, simulations that start too easy (in their boredom range) or too hard (in their incompetency range) will have people leave. It is also important that it starts low even within their competency range. If it starts high in their competency range, the expectation is that it will stay there, and it may be too exhausting to play. Which leads us to our next principle:

11. Achievement Principle
For learners of all levels of skill there are intrinsic rewards from the beginning, customized to each learner’s level, effort, and growing mastery and signaling the learner’s ongoing achievements.

Notice the sawtooth wave pattern. The left side of each tooth represents the increasing difficulty of the simulation. In each round, section, or level it builds to a climax. In gaming terms, this moment is often a confrontation with a “boss” where everything that was learned so far comes together. Once the boss is defeated, there is some form of reward such as points, extended play, new tools, and so on. However, there is a more subtle reward as well…relief. The difficulty level drops dramatically and the process starts over, but it doesn’t start back at the previous level of difficulty. It starts one step higher. This process is repeated throughout the simulation, moving the player’s competence level (and boredom level) progressively higher. This relief can almost be equated to the feeling after a good bike ride or a challenging run down the ski slope.
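For the programmers in the audience, the sawtooth is easy to sketch numerically (the numbers here are illustrative, not from Gee’s book): within each round, difficulty climbs toward the boss, then drops, and each new round starts one step higher than the last.

```python
def sawtooth_difficulty(rounds: int, steps_per_round: int,
                        climb: float = 1.0, step_up: float = 1.0) -> list:
    """Difficulty over time: each round climbs to a peak (the 'boss'),
    then drops for relief, restarting one step above the previous start."""
    levels = []
    start = 0.0
    for _ in range(rounds):
        for step in range(steps_per_round):
            levels.append(start + step * climb)
        start += step_up  # the next round begins slightly harder
    return levels

# Three rounds of four steps: climbs 0..3, restarts at 1, then at 2.
print(sawtooth_difficulty(rounds=3, steps_per_round=4))
```

Note the low entry point (difficulty 0) and the rising floor: the relief after each boss never takes the player all the way back to where they started.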

Simulation design requires both art and science. Where should it start? How steep should the sawtooth be? How far should it push the player? How big is the reward? How many repetitions or levels should there be? These variables are configured through dozens and maybe hundreds of rounds of testing with real participants to get the simulation just right.

So as the skills of the participant develop, the simulation gets progressively harder, stepping up the challenges. Some simulations even analyze the approach the player is using and adjust the curves accordingly, increasing (or decreasing) the slope or summit of each round to make it harder or easier to succeed based on the player’s competencies.
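One simple way such an adjustment could work (a hypothetical rule, not any particular game’s): nudge difficulty up after a success, and drop it faster after a failure, keeping the player near the edge of competence without tipping into frustration.

```python
def adapt(difficulty: float, success: bool,
          up: float = 0.5, down: float = 1.0) -> float:
    # Illustrative adaptive rule: small step up on success,
    # bigger drop on failure (never below zero).
    return difficulty + up if success else max(0.0, difficulty - down)

d = 3.0
for outcome in (True, True, False, True):  # two wins, a loss, a win
    d = adapt(d, outcome)
print(d)  # 3.5
```

The asymmetry (drop faster than you climb) mirrors the observation above that players quit a too-hard simulation faster than a too-easy one.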

In some ways, the adaptive nature of simulations is what makes them an ideal learning environment. Like a good teacher, they spur the participant on to greater and greater heights providing just the right challenge at just the right moment.

Open Seating on Southwest Air – pt. 2

OK, so this is less about Southwest than it is about boarding aircraft, but why does it take so long to board a plane? There has got to be a better way. Here are the current attempts to speed things up:

  1. Board by row – people don’t read their boarding pass and it still creates a backup
  2. Board by number – with 6 or 7 groups it seems to be a little easier for people to understand, but doesn’t seem to speed it up much. Often it’s the same as boarding by row when group 7 is rows 26-30, group 6 is 20-25, etc.
  3. Open seating, prioritized by groups – it’s hard to tell if this is actually faster or not. The cheaper fares of Southwest bring less-frequent fliers who aren’t accustomed to flying and so board more slowly anyway. It’s like flying to Orlando year round.
  4. Two boarding doors – many international flights and even some domestic flights have started boarding at two doors. For assigned seating this seems like it would speed things up. Of course, the middle of the plane becomes the new back of the plane, but it’s still better.

None of these seem adequate enough by themselves. Though, it could just be a people problem. I was on one flight in Newark. It was one of those end-of-the-day Friday, late winter flights when the weather is always iffy. The plane had been delayed and delayed and delayed again. Finally, it arrived and was ready to board. However, our crew had already been flying for nearly their legally allotted time. It was a full flight with about 150 business travelers, all with their luggage in tow. The gate attendant got on the microphone and explained the situation, saying, “We all want this plane to get off the ground. To make that happen, all of you have to be on the plane in 7 minutes,” and it happened. People were moving quickly, getting out of the aisle, helping each other move luggage, swapping seats; it was amazing to watch people when they were motivated. In the end, the flight was cancelled so it didn’t matter, but the boarding process was incredible.

So here are some thoughts (some doable and some not so much) on how to speed up the boarding process:

  1. No luggage allowed unless it fits under the seat. I know I don’t like this one either, but if they fixed the delays in getting baggage this could work.
  2. Mandatory training and certification for everybody. It could even be a simulation where they could practice going through security and boarding the plane before they go to the airport. We could even create performance support tools to deal with in the moment questions or difficulties.
  3. Line people up by row or even seat number at the gate. Rather than creating fewer groups, assign every seat a number and then make people line up in order.
  4. Rather than a single or double door, have people enter roller coaster style. Each row on the plane would have a place to stand at the airport. Each seat would have a little picture on the floor with the seat number so you knew exactly where to stand. (Added bonus: seat conflicts would be figured out in the concourse when they’re easier to solve than on the plane). Then, one whole side of the top of the plane would lift up and people would enter directly at their row.

I suppose I’m only hoping, but maybe somebody from Southwest or Northwest will read this and leverage an idea or two.

3 Approaches to Scenario-based Simulations

A few posts ago we talked a bit about simulations. Probably the most underutilized (online or offline) category of simulations is Scenario-based Simulations. Sometimes referred to as Conversation or Situational Simulations, these present participants with a problem to solve or a goal to achieve; participants then need to use conversations, relationships, and sometimes teamwork to accomplish the goal.

This category is notoriously hard to simulate (which may be why it’s so rarely used). Simulating hardware and software is relatively easy; they’re predictable and consistent. People are neither, and that’s probably why they are so hard to simulate. However, when used well, these can be the most powerful and effective simulations, since nearly everything we do involves working with other people. There are 3 basic approaches that have been used. Here’s a quick summary, including their advantages and disadvantages:


Approach 1: Real People

This approach has been around forever in formats like role playing, improv, and children’s make-believe. Nothing can simulate real people like real people. The best experience is gained by practicing an interview or a sales pitch with a real person on the other side of the table. They can adapt in the moment and analyze your weaknesses (and capitalize on them). This can even be done with teams. Companies use this approach for disaster recovery drills, emergency responder preparations, team presentations, and complex equipment changes. Some companies have begun to combine the use of other simulations (data/numeric/analytic) with Scenarios to deepen the experience. Enspire Learning‘s done a great job of combining these categories. This can even be done online using Massively Multiplayer Online Role Playing Game (MMORPG) technologies. Companies like Breakaway Games have had a lot of success with MMORPG simulations. In one of the upcoming posts, we’ll dive into RPGs & MMORPGs in more detail.

As with many things, the strength of this approach is also its weakness. While nothing can simulate real people like real people, it’s hard to find people who are able to be a good antagonist. For this to work, good or even great coaches are required, and good coaches are hard to come by. Even when you find one, they have limited time available and can only be in so many locations. The MMORPG approach alleviates this a little by allowing coaches (and participants) to be in any location, but coaches are still a limited resource. It’s also often difficult to bring people together for team-based activities. Travel restrictions, costs, and other commitments make it difficult to get people together at the same time.

In the end, if the disadvantages can be overcome, this is by far the most effective approach. Thankfully, if these hurdles are too high, there are other approaches with their own trade-offs.


If costs or logistics make in-person or live antagonists impractical, the next step is to use virtual characters of some sort. Virtual characters can be photos, drawings, animations, videos, etc. We'll do at least a couple of blog entries on the use and selection of virtual characters in the near future. For now, a virtual character is a substitute for a live person. If the participant were going to practice giving feedback, the virtual character would be one of their employees. At each step (a node, in branching terms) in the process, the participant decides which choice is the most appropriate. The picture on the right is an example of one node.

After each decision point, a resulting response is given by the virtual character. In the example above, if the participant had chosen "C: How are you feeling?", the virtual character might respond "I've been pretty tired lately." This leads to another set of decisions and a response. The process repeats until the situation has been resolved (successfully or unsuccessfully). With just two choices at each branch (A or B), and just 3 levels of depth, the result would be a picture like the one to the right. It's not too hard to see why the look of this "tree" gave "branching" its name. Notice that even with this small example, we already have 7 nodes. At 5 levels of depth, the number of nodes balloons to 31. It echoes the old 1970's shampoo commercial: "she told two friends, and she told two friends, and so on". (There seems to be some disagreement on whether it was Breck or Faberge. Let me know if you can confirm which one). It doesn't take long for this tree to escalate out of control. The amount of work to design and create these grows exponentially with each added level.
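The growth described above is easy to sketch in code. The node and choice names below are illustrative (loosely based on the feedback example), not from any particular authoring tool; the arithmetic is just the size of a full 2-way tree.

```python
# Minimal branching-scenario sketch. Each node pairs a virtual character's
# prompt with the participant's choices, which lead to further nodes.
class Node:
    def __init__(self, prompt, choices=None):
        self.prompt = prompt          # what the virtual character says or asks
        self.choices = choices or {}  # participant option -> next Node

# Hypothetical partial tree built around "C: How are you feeling?"
leaf_rest = Node("Employee agrees to take some time off.")
leaf_stall = Node("Employee shuts down and the conversation stalls.")
tired = Node("I've been pretty tired lately.",
             {"Suggest some time off": leaf_rest,
              "Press about missed deadlines": leaf_stall})
root = Node("You sit down with your employee.",
            {"C: How are you feeling?": tired})

def count_nodes(node):
    """Total nodes reachable below (and including) this one."""
    return 1 + sum(count_nodes(n) for n in node.choices.values())

def full_tree_size(depth, branching=2):
    """Nodes in a complete tree: 1 + b + b^2 + ... for each level."""
    return sum(branching ** level for level in range(depth))

print(count_nodes(root))   # 4 nodes in this partial tree
print(full_tree_size(3))   # 7  -- the 3-level example from the post
print(full_tree_size(5))   # 31 -- two more levels quadruples the authoring work
```

Every added level multiplies the number of new nodes by the branching factor, which is exactly why fully written-out trees become unmanageable so quickly.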

Sadly, even at this level of work, they are not "realistic enough". Rarely does a decision point have only 2 options, and rarely can every situation be resolved in only 5 steps. Over time, people have developed tricks to reduce the workload. Things like loop backs, mini games, and dynamic branches create a more realistic feel with somewhat less effort to create, but the depth is still limited.

While this approach has limitations, it does have quite a few benefits. A live antagonist is not needed, and it can be done anytime, anywhere. This reduces deployment costs and time significantly. Also, while these take a good amount of work to do well, they are comparatively less work than a state-based system. In addition, there are several tools, such as NexLearn‘s SimWriter (high end) and Adobe‘s Captivate (low end), that can dramatically reduce the pain of development. Several tools that fall between the two are also in active development.


The last approach has quite a few similarities to a game. In fact, many PC and console games use this approach, and many gaming engines are optimized for it. Rather than following pre-defined branches, participants in state-based simulations can typically go where they want. For example, if the character walks up to a door, the door has a "state" or, better, a series of "states": open or closed, locked or unlocked, transparent (has a window) or opaque, impenetrable or breakable, etc. The character can act on that door to change any or all of those states. Once changed, the states persist until something else changes them.

The same is true if the character walks up to a person. There are certainly physical states, but there are also emotional and resource states: happy or sad, hungry or full, knows about "X" or does not know about "X", etc. The states are often not polar (on or off, happy or sad); often they are reflected as a scale or a range. Let's say, for example, that on a scale of 1 to 100, the antagonist has a relationship with your character of 63. If the threshold for people to be willing to help your character is 60, then the person will give you helpful information. However, if you just shoved them, the relationship number might drop to 40, which may cause them to give you the wrong information — and they may shove you back.
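The door and relationship examples above can be sketched as a tiny state engine. The class names, the threshold of 60, and the 23-point penalty for shoving are all illustrative values drawn from the example, not from any real engine.

```python
HELP_THRESHOLD = 60  # minimum relationship score for helpful information (assumed)

class Door:
    """States persist until something acts on the door to change them."""
    def __init__(self):
        self.open = False
        self.locked = True

    def unlock(self):
        self.locked = False

    def push(self):
        if not self.locked:   # a locked door stays closed no matter how you push
            self.open = True

class Person:
    """Relationship is a 1-100 scale, not a polar happy/sad flag."""
    def __init__(self, relationship=63):
        self.relationship = relationship

    def shove(self):
        # Hostile actions change the state, and the new state persists.
        self.relationship = max(1, self.relationship - 23)

    def ask_for_help(self):
        if self.relationship >= HELP_THRESHOLD:
            return "helpful info"
        return "wrong info"

guard = Person(relationship=63)
print(guard.ask_for_help())  # 63 >= 60, so: helpful info
guard.shove()                # relationship drops to 40
print(guard.ask_for_help())  # now below the threshold: wrong info
```

Note that nothing here is a pre-authored branch: the outcome of `ask_for_help` falls out of whatever the current state happens to be, which is what lets state-based simulations handle sequences of actions the designer never explicitly scripted.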

These are very simple examples of state engines. Each object can have multiple states, and each state can be related to any other object's states. These dynamic relationships are what make the simulation interesting and, generally, more complex. The benefit is that the simulation feels more lifelike and can, in general, handle more scenarios than a branching story can. The difficulty is that state-based simulations are expensive to build and still don't model the complexities of the real world as well as the first approach.

Of course, as with any discussion that uses somewhat arbitrary categories, there are exceptions and combinations. Many simulations actually combine elements of all 3 of these approaches, taking as many of the advantages, and as few of the disadvantages, of each approach as possible to come up with a best-of-breed model.

Copyright © 1996-2010 thcrawford. All rights reserved.