Determining the Validity of Simulation Models
Picture the following: A group of managers is playing a simulation game in which they run a chain of retail convenience stores. This particular chain was growing rapidly until the last few years, when sales slowed dramatically. The managers decide to offer a major promotion and price discount to boost sales. Unexpectedly, the resulting sales show little increase (and in some regions a decrease) and are accompanied by a sharp drop in profits. “This doesn’t make sense!” says one frustrated manager. “Our customers look for bargains. Sales should have gone up. Where’s the increase?”
People have a variety of reactions to a computer simulation that produces an unexpected result. Some will immediately argue that the model is biased or inaccurate (“garbage in, garbage out”). Others will accept the result as a voice of authority, without critical thought. A third group will try to understand the assumptions driving the unexpected result (this may or may not be possible) and judge whether those assumptions make better sense than the ones they previously held.
A Question of Purpose
The number one thing to remember when discussing model validity is that there is no such thing as a valid model. Instead, models are more (or less) useful given a set of objectives and boundary conditions. It all depends on the purpose of the model.
Example: Forio’s PDA Sim
| Setting | |
|---|---|
| Learning Objectives | |
| Design | A high-level model of a consumer electronics company. This model contains cause-and-effect relationships similar to models we've built at real companies, and the resulting patterns of behavior are very similar to real-world products. The specific numbers are not representative of any real-world PDA manufacturer. |
Simulations that focus on strategy tend to cover a broad range of strategic issues but do not include a high level of operational detail. Conversely, simulations that focus on operational issues tend to be very precise about that detail but leave out business factors not relevant to the area of operational focus.
A version of PDA Sim intended as a market analysis tool for a specific company would be built around a model calibrated to a finer level of accuracy. A PDA Sim used by marketing staff in Germany to test specific promotions would have a detailed model of the German market but a much higher-level model of broader corporate issues.
One of my favorite writings on this issue is A Skeptic’s Guide to Computer Models (PDF), by MIT Professor John Sterman. In this article he notes: “Beware the analyst who proposes to model an entire social or economic system rather than a problem. For the model to be useful, it must address a specific problem and must simplify rather than attempting to mirror in detail an entire system… The art of model building is knowing what to cut out, and the purpose of the model acts as the logical knife.”
Best Practices in Model Validation
Experienced simulation designers go through several phases of model validation.
1. Verify the model assumptions
Working with subject matter experts, articulate and confirm each cause-and-effect relationship. Modeling disciplines such as System Dynamics help tremendously by representing assumptions in a visual diagram that can be easily verified. This is an iterative process performed alongside the construction of the model: it begins with design meetings and interviews of company managers, continues with reviews of the high-level structure with company generalists, and moves to detailed reviews of model assumptions with subject matter experts.
2. Verify the model technical structure
Check the simulation model against a set of technical rules. These rules, which vary depending on the type of model, ensure that the assumptions are consistent and the equations don’t violate requirements of the modeling discipline. For example, when I was hiking a few years ago, I started at a trailhead with a sign that said “8.6 miles to Highpoint Falls.” When I got to Highpoint Falls, a similar sign pointing back read “8.1 miles to trailhead.” These signs violated an obvious technical rule. (It’s worth noting that even if the signs had passed the “technical structure” test, other validation would still be needed to ensure they represent the right distance!)
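As a toy illustration of an automated structural check (not drawn from any real Forio model), the sketch below verifies that distances between points are the same in both directions, which is exactly the rule the trail signs broke.

```python
# Hypothetical consistency rule: the distance between any two points in the
# model must be the same in both directions (the rule the trail signs broke).
distances = {
    ("trailhead", "Highpoint Falls"): 8.6,
    ("Highpoint Falls", "trailhead"): 8.1,  # deliberately inconsistent
}

def check_symmetry(distances, tolerance=0.01):
    """Return (a, b, forward, reverse) tuples whose two directions disagree."""
    violations = []
    for (a, b), forward in distances.items():
        reverse = distances.get((b, a))
        # Only report each pair once (a < b) and allow a small rounding tolerance.
        if reverse is not None and a < b and abs(forward - reverse) > tolerance:
            violations.append((a, b, forward, reverse))
    return violations

for a, b, forward, reverse in check_symmetry(distances):
    print(f"Inconsistent: {a} -> {b} is {forward} miles, but {b} -> {a} is {reverse} miles")
```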
3. Validate the model behavior
Once the model is built, test whether the results of the simulation match expected behavior. Modelers can draw on a number of useful techniques, and a significant body of literature covers best practices in this area. Brief, hypothetical code sketches of each technique follow the list below.
- Extreme condition testing: Run the simulation with parameters set at extreme levels (a price of $0, a price of $1,000), checking the model results for consistency (for example, with a price of $0, revenue should be $0 and profit should be negative).
- Sensitivity testing: Run the simulation multiple times varying each parameter a bit higher and a bit lower. Look for parameters that cause the results to change significantly (and then pay extra attention to validating those parameters).
- Calibration & optimization: Use automated tools that apply algorithms such as Hill-Climbing or Genetic Algorithms to adjust each parameter until the result matches a predetermined value or time series.
4. Test business policies
In this final stage, modelers test alternative sets of decisions and policies, confirming that the simulation produces results that are realistic and make sense. (Often this step confirms that the scope of the simulation is appropriate to the problem.) If further revisions are needed, more verification or validation may be done.
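As a rough sketch of what this kind of policy test can look like in code (again using a hypothetical toy model rather than real PDA Sim logic), a modeler might run several candidate decision sets through the simulation and check whether the resulting outcomes rank in a plausible order.

```python
# Toy, hypothetical model: one period of profit given a price and marketing spend.
def simulate(price, marketing_spend, base_demand=1000.0,
             price_sensitivity=2.0, marketing_lift=0.02, unit_cost=120.0):
    units = max(base_demand - price_sensitivity * price
                + marketing_lift * marketing_spend, 0.0)
    return price * units - unit_cost * units - marketing_spend

policies = {
    "status quo":      {"price": 200.0, "marketing_spend": 10_000.0},
    "deep discount":   {"price": 140.0, "marketing_spend": 10_000.0},
    "premium + promo": {"price": 240.0, "marketing_spend": 30_000.0},
}

for name, decisions in policies.items():
    print(f"{name:16s} profit: ${simulate(**decisions):,.0f}")
# The modeler then asks: do these outcomes make business sense, and is the
# simulation's scope adequate to explain the differences between them?
```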
Management Training Simulations
Management training simulations are often built around a semi-fictional case study or scenario rather than a detailed predictive model. This allows managers to learn about the key issues in a short amount of time without being distracted by the need to analyze a detailed forecast of the real business. In addition, the simulation is run as a game, so managers may try strategies that they would never implement in the real business. Consequently, training simulations have less detail but broader possible outcomes than a decision-support or predictive model.
A simulation can also present its built-in logic as a means of helping managers understand how their decisions lead to their business results. PDA Sim has several “help” screens that show a typical product lifecycle curve and a diagram of the key cause-and-effect relationships in the model. It also features an “advisor” that highlights key business issues. (For example, “Your price is lower than your cost of goods. You are losing money on every sale.”) More complex simulations can have advisors with multiple perspectives: for example, a CFO, a VP of Marketing, and a customer.
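As a sketch of how a simple rule-based advisor like this might be implemented (the rules and thresholds below are hypothetical, not PDA Sim's actual logic), each rule inspects the current state of the simulated business and contributes a message when its condition fires.

```python
# Hypothetical rule-based advisor: each rule is a (condition, message) pair
# evaluated against the current state of the simulated business.
ADVISOR_RULES = [
    (lambda s: s["price"] < s["unit_cost"],
     "Your price is lower than your cost of goods. You are losing money on every sale."),
    (lambda s: s["inventory"] == 0,
     "You are out of stock. Customers who want to buy cannot find your product."),
    (lambda s: s["marketing_spend"] > 0.5 * s["revenue"],
     "Marketing spending is more than half of revenue. Check whether it is paying off."),
]

def advise(state):
    """Return the messages for every rule whose condition is true."""
    return [message for condition, message in ADVISOR_RULES if condition(state)]

state = {"price": 99.0, "unit_cost": 120.0, "inventory": 0,
         "marketing_spend": 40_000.0, "revenue": 200_000.0}
for message in advise(state):
    print("Advisor:", message)
```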
Conclusion
As the opening scenario illustrates, model validity in a training simulation is often questioned when unexpected results are encountered. Since the point of most training simulations is to help managers experience new strategies and ideas, this is a common occurrence!
Simulation designers and instructional facilitators need to represent a simulation for what it is: an approximation of reality. Good rules of thumb: articulate the purpose and learning objectives before building the simulation, apply best practices to model design and validation, and ensure the simulation is used appropriately. When questions about simulation assumptions arise (as they almost always will), guide learners in considering the impact of their decisions in the real world, and help them look for clues in the simulation that can provide additional insight.
In the retail simulation described above, the managers found a screen that described a customer survey of their chain and their competitors. They discovered that immediately after they lowered their prices, a major competitor did the same. Comparing notes on the impact of this price war, they found out that market shares had barely changed, while profits (and management morale) dropped at both sets of stores. This was very similar to a real-world dynamic that had occurred in their industry about five years before.
As a final thought, let me refer back to Professor Sterman in A Skeptic’s Guide (PDF):
Models should not be used as a substitute for critical thought, but as a tool for improving judgment and intuition.
The value in computer models derives from the differences between them and mental models. When the conflicting results of a mental and a computer model are analyzed, when the underlying causes of the differences are identified, both of the models can be improved. Computer modeling is thus an essential part of the educational process rather than a technology for producing answers.