Causal models – how to structure, represent and communicate them

A topic article in the Outcomes Theory Knowledge Base


Abstract

A number of different types of causal models (logic models, results maps, ends-means diagrams, outcomes models etc.) are used in outcomes systems (strategic planning, monitoring, performance management and evaluation systems). It is concluded that the best way of structuring, representing and communicating such models is to: 1) represent them as visual models (supplemented, where possible, by mathematical models); 2) include in such models any causal element (steps and outcomes) necessary to model the real outside world (i.e. do not restrict them to measurable steps and outcomes); 3) allow any causal element in such models to be able to cause any other causal element; and, 4) not obscure the visualization of the flow of causality in such models by also attempting to visualize the specific features of steps and outcomes (measurability, attribution and accountability) using the same visualization mode.

Introduction

There are many different types of causal models used in many contexts, e.g. managing for outcomes systems, monitoring systems, results-based systems, performance management systems, program evaluation systems, accountability systems, pay-for-performance systems, league-table systems, evidence-based practice systems, contracting for outcomes systems etc. The different types of causal models used in such systems (known as outcomes systems) go by names such as: logic models, results maps, logical frameworks, causal models, logframes, ends-means diagrams, strategy maps, intervention logics, program logics, results models etc. Within outcomes theory these models are called outcomes models. An important question regarding the use of causal models is how they should best be structured, represented and communicated. This question can be broken down into a set of sub-questions and answered as follows: 

1. What is the best way of representing a causal model (outcomes model)?

Causal models (outcomes models) can be represented in narrative text, as tables, as diagrams (visualizations) or as mathematical models (or combinations of these). Mathematical models are desirable because of their power to represent complex causality. However, it is often the case that not all of the relationships between variables within a model can be mathematically specified. In addition, mathematical models are often hard to communicate to stakeholders. Narrative text alone is usually not an efficient way of communicating multiple causal links between causal elements within a model. Tables can show causality more efficiently than simple narrative text; however, they become problematic as soon as an attempt is made to communicate multiple causal links between elements. Diagram-based visualizations of causal models have advantages over these other methods of representation. They also provide the easiest way of working with a stakeholder group when constructing a causal model. (See Simplifying terms used when working with outcomes for a discussion of the way in which working with visual causal models (outcomes models) solves many of the current problems faced in building causal models with stakeholder groups.)

A visual diagram-based method of representing a causal model has a number of advantages over the other methods of representation (narrative text-based, table-based and mathematically-based). A visual model can be supplemented and detailed by mathematical representations where sufficient information is available regarding the links between steps. Where needed, narrative descriptions can be developed from an underlying visual model. Using visual models (supplemented with mathematical models where feasible) is the preferred method of representing causal models (outcomes models) within outcomes theory.
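
Where enough is known about the links between steps, a visual model can be supplemented by a simple mathematical one. Below is a minimal sketch of what such a supplement might look like, assuming purely linear relationships; the step names, weights and values are all illustrative assumptions, not taken from outcomes theory.

```python
# Hypothetical linear supplement to a visual causal model: a higher-level
# outcome is modelled as a weighted sum of the steps feeding into it.
# All names, weights and values below are illustrative assumptions.

def outcome_level(step_values, weights):
    """Combine lower-level step values into a higher-level outcome score."""
    return sum(step_values[name] * w for name, w in weights.items())

# Illustrative lower-level steps feeding one higher-level outcome.
steps = {"units_built": 120, "subsidies_granted": 80}
weights = {"units_built": 0.7, "subsidies_granted": 0.3}

score = outcome_level(steps, weights)
print(score)  # 0.7*120 + 0.3*80 = 108.0
```

In practice, many links in a causal model cannot be specified this precisely, which is why the mathematical representation is treated as a supplement to the visual model rather than a replacement for it.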

2. What causal elements will be allowed within causal models?
Causal models (outcomes models) include causal elements of various types (i.e. things which cause other things to happen). These elements are known by the generic name 'steps and outcomes' within outcomes theory. In different approaches to building causal models, constraints are placed on what type of element is, and what type is not, allowed within a particular model. The justification for such constraints is often unexamined, even by those who insist on enforcing them. The existence of these constraints is revealed in comments such as: 'you can't put that in, it's a process'; or, 'you can't put that in, it's an activity'; or, 'you can't put that in, it's not measurable'; or, 'you can't put that in, because you can't prove that it's attributable to your program'. Such unexamined constraints often make the process of building causal models frustrating, as stakeholders are constantly being told that they cannot put particular elements into a model. This is often a very disempowering process for those involved in building such models.
Many different terms are used to describe the ‘types’ of causal elements which can potentially be included in causal models.  Unfortunately, different stakeholders in different settings use different terms in different ways. A useful way of escaping from such terminological confusion is to get to a point where the different types of element which could be included within a causal model can be unambiguously defined in conceptual terms. Ideally this should be done without having to use the current set of labels which are sometimes used in different ways in different settings (e.g. processes, activities, outcomes, impacts, intermediate outcomes etc.)  
It would be good if, when identifying the underlying features of the elements which can potentially be included in causal models, we could also capture the key conceptual distinctions which need to be made when working with steps and outcomes within outcomes systems. Having identified such a set of underlying features, we could then unambiguously define any type of causal element which could potentially be put into a causal model. If this can be done, then it is less of a problem when stakeholders use different terms to describe the elements they are allowing, or disallowing, from their causal models. We can always just get them to unambiguously identify the exact features of the type of element they are talking about independent of the term they are using to describe it.
The underlying features of the causal elements (called ‘steps and outcomes’ in outcomes theory) which can potentially go into causal models (outcomes models) have been defined within outcomes theory. They are set out in the article on the features of steps or outcomes. In summary, causal elements which can potentially go into causal models can have the following features: 
  • Relevant – relevant to the high-level outcomes it is hoped will be influenced by a program or intervention. This means that causal models could potentially include risks (things that you hope will not happen, written in the positive, e.g. 'the economy does not enter a melt-down') and assumptions (e.g. 'funding continues for the full program term'). 
  • Influenceable - able to be influenced by a program or intervention (this is different from actually demonstrating attribution in a particular case, see below). 
  • Controllable - only influenced by one particular program or intervention.
  • Measurable - able to be measured. Merely measuring that a step or outcome has occurred is a separate issue from whether it can be demonstrated that a change in that step or outcome has been caused by a particular program (see demonstrably attributable below).
  • Demonstrably attributable - able to be demonstrated that changes in the step or outcome can be attributed to one particular program or intervention (i.e. proved that only one particular program or intervention changed it). This is the claim that it can be proved that a particular program or intervention changed a higher-level step or outcome in a particular instance. This is a separate claim from the claim set out above that a higher-level step or outcome is influenceable by a program or intervention and a separate claim from the claim that it can be measured.
  • Accountable (rewardable or punishable) - something that a particular program or intervention will be rewarded or punished for. 
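
The six features above can be captured as simple flags on a causal element, which makes it easy to see that they are independent claims. A minimal sketch follows; the class and field names are illustrative, not part of outcomes theory's own terminology.

```python
from dataclasses import dataclass

@dataclass
class StepOrOutcome:
    """A causal element; each feature from the list above is an independent flag."""
    name: str
    relevant: bool = True                    # relevant to the high-level outcomes
    influenceable: bool = False              # able to be influenced by the program
    controllable: bool = False               # influenced only by this program
    measurable: bool = False                 # able to be measured at all
    demonstrably_attributable: bool = False  # change provably caused by this program
    accountable: bool = False                # program rewarded/punished for it

# An element may be measurable without being demonstrably attributable,
# illustrating that measurement and attribution are separate claims.
literacy = StepOrOutcome("improved literacy", influenceable=True, measurable=True)
print(literacy.measurable)                  # True
print(literacy.demonstrably_attributable)   # False
```

Representing the features independently like this mirrors the article's point: a constraint such as SMART can then be stated precisely as a predicate over these flags rather than as an ambiguous label.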

Having identified these potential features of the causal elements which can go into causal models, we can now unambiguously define any causal element (step or outcome) which is, or is not, allowed within a particular type of model. For instance, a widespread constraint on the elements which are allowed within causal models is the demand that they be SMART – Specific, Measurable, Achievable, Realistic and Time-bound [1]. This is usually put in the form of the statement that 'your objectives must be SMART'. The SMART criteria will only allow relevant, measurable elements to be included in a model. 

Using the underlying features of the causal elements which can potentially be included in an outcomes model identified above, we can examine the conceptual clarity of many of the demands about what is allowed to be included within causal models. For instance, in regard to the SMART approach, it is not clear whether objectives which meet the SMART criteria also have to be: controllable; demonstrably attributable; or accountable. Another example of such lack of conceptual clarity is the simple demand that 'intermediate outcomes' should be included within a causal model. Often it is implicitly assumed that such outcomes need to be measurable; however, it is often not clear whether they also need to be controllable, demonstrably attributable and/or accountable. Such lack of conceptual clarity regarding the underlying features of the elements allowed within causal models causes considerable confusion for those working with outcomes systems. For instance, see what is called the controllable demonstrable non-output intermediate outcome paradox in the article on Contracting for outcomes.

Features such as measurability are a result of the appropriateness, feasibility and affordability of measurement at a particular time in a particular setting. If these appropriateness, feasibility and affordability issues are allowed to dictate what can, and cannot, go into a causal model, then the model will no longer represent a model of the causal processes in the outside world. It will, instead, just reflect what is currently measurable. The same applies to the features of attributability and accountability. The issues of measurement, attributability and accountability all need to be dealt with in outcomes systems; however, it is undesirable to allow them to limit what is allowed within a causal model. If they do dictate what elements are allowed within a causal model, it will become a model of what can currently be measured and attributed and what parties are currently being held to account for. In contrast, causal models are most powerful if they are drawn to provide the most accurate current representation of causality in the outside world. Issues of measurability, attribution and accountability can then be dealt with after the basic causal model has been drawn. For instance, see Indicators – why they should always be mapped onto a visual outcomes model for a discussion of mapping measurements back onto a comprehensive outcomes model (or a short video showing how indicators can be mapped onto a visual model). See Contracting for outcomes for a discussion of working with attribution and accountability subsequent to building a comprehensive causal model (or a short video showing how visual models can be used in outcomes-focused contracting). See Simplifying terms used when working with outcomes for a general discussion of how to work in this way. 

A final point about the causal elements which may, or may not, be included in a causal model is the widespread demand that causal models exclude processes or activities. This demand has two origins. The first is a desire to encourage those producing such causal models to 'move further up the causal pathway' towards higher-level outcomes rather than just describing the lower-level internal activities or processes which people often tend to start with. Therefore, elements which are seen as being 'above' activities and processes, sometimes referred to as outputs, intermediate outcomes and/or final outcomes, are seen as being more desirable within a causal model than having the model made up solely of lower-level process and activity elements. The second origin of this demand is the attempt to fit an outcomes model onto a single printed page. If a large number of lower-level activities and processes are included in the model, there will simply not be enough room for outputs, intermediate outcomes and final outcomes. Both of these problems can be addressed if a visual modeling approach is used to work directly with stakeholders. In a visual modeling environment where causal models can be any size (e.g. when using software designed for this purpose such as DoView [2] outcomes and evaluation software) there can be as much modeling as desired at any level within the causal model. Different parts of the model can be used by different stakeholders for different purposes, and it is simply a pragmatic question as to how high or low one goes in drawing a model. For example, funders will be more interested in just looking at the higher levels of the model, leaving the lower levels (activities and processes) to providers. (See a short video on how this approach can be used when drawing outcomes models with a group.)

The conceptually simplest, and most generic, way of constructing a causal model is to allow any causal element which has one or more of the features set out above, to be included within the model so that it can provide an accurate representation of causality in the outside world. Such modeling can occur at any level (e.g. from activities right up to high-level outcomes). Measurement, attribution and accountability can be dealt with following the building of the original comprehensive causal model. This is the approach used within outcomes theory.

3. How much causal complexity will be able to be modeled in a causal model?

Causal models differ regarding how much causal complexity can be represented within them. This is usually an unexamined result of the method used to represent the model (e.g. narrative text, tables, visual diagrams, mathematical models). For instance, tables can provide a more efficient representation of causality than narrative text alone, but they tend to encourage 'siloization'. This occurs where a high-level outcome is put into the first column of a table and the second column is used to list the lower-level steps which contribute to it. This format often discourages the showing of links between the lower-level steps and other high-level outcomes which appear in subsequent rows of the table. In diagrammatic visual formats, the attempt is often made to represent causality on a single page using lines and arrows to visualize causality. This often means that only a very impoverished model can be built because there is not enough space to include all of the line and arrow links on the page. 

Conceptually the simplest approach to building a causal model is to allow any causal element in the model to potentially cause any other causal element. To do this within a visual model requires software which can allow such ‘many to many’ linking [3]. This is the approach used in outcomes theory.
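
In data-structure terms, the 'any element may cause any other' principle amounts to storing the model as a directed graph rather than a layered table. The sketch below shows this under illustrative element names (which are assumptions, not examples from the article); note that one lower-level step can feed several high-level outcomes, and that feedback loops are permitted.

```python
from collections import defaultdict

class CausalModel:
    """Directed graph of causal elements: any element can cause any other."""
    def __init__(self):
        self.causes = defaultdict(set)   # element -> set of elements it causes

    def link(self, cause, effect):
        self.causes[cause].add(effect)   # many-to-many: no layer restriction

    def effects_of(self, element):
        return self.causes[element]

model = CausalModel()
# One lower-level step feeding two high-level outcomes (no 'silo'):
model.link("run training sessions", "staff skills improved")
model.link("run training sessions", "service quality improved")
# A feedback loop, reversing the general flow of causality:
model.link("service quality improved", "run training sessions")
print(sorted(model.effects_of("run training sessions")))
```

A tabular or 'one outcome per row' representation cannot express the first two links without duplicating the step, which is exactly the siloization problem described above.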

4. What elements will be allowed to go in which places within a causal model?

Often within a causal model, there will be constraints regarding what type of element is allowed to go in particular places within the model. For instance, within a tabular model, there may be separate rows for inputs, outputs, intermediate outcomes and final outcomes. In a visual representation, this constraint is shown by the characteristic list of headings down the left-hand side of bottom-to-top visualized causal models, and a list along the top of left-to-right visualized models. Typically, terms such as inputs, outputs, intermediate outcomes and final outcomes are used to create 'layers' within a vertically laid-out causal model. From a conceptual point of view, what is being attempted through such 'layering' is to separate out different types of causal elements. This approach uses vertical position (in a vertically laid-out model) to indicate the 'type' of causal element. Using this mode of visualization can, in some instances, clash with another major aspect of such models which is also being communicated through vertical position within the model. This is the flow of the causal pathway within the model; in the case of vertically laid-out causal models, from the bottom to the top of the model. (It should be noted that this aspect of the visualization communicates just the general flow of causality in the model. This does not prevent feedback loops (which technically reverse the flow of causality back down the model) from being represented. An example of how feedback loops can be represented within an upwards flow of causality is available here: http://outcomesmodels.org/models/examples49-slices/examples49-2.html#controls.)

Attempting to visualize two things at once using the same mode of visualization (in this case vertical height within the model) creates a problem when there is a potential conflict between the two visualization demands. This occurs in causal models when there are elements which should naturally sit alongside each other in terms of the flow of causality but which, when thought of in terms of the possible types of causal elements, fit within different layers in the model. This is a particular issue with outputs (and with intermediate outcomes where it is assumed that these will be measurable). The definition of whether something is an output or not depends on its measurability, and measurability is a separate issue from the flow of causality in a model. In such cases the second visualization demand distorts the general visual flow of causality within the model. In order to avoid this problem, there needs to be a different way of representing each 'type' of causal element. We can be assured of achieving this as long as we have a way of visualizing each of the features of causal elements identified earlier in this article. A set of ways of visualizing these features, which does not rely on vertical position (which is being used to show the causal pathway within the model), is as follows:

  • Relevant – by inclusion in the model.
  • Influenceable - by a causal link between elements.
  • Controllable - by only one causal link coming into an element.
  • Measurable - by representing measurement as a separate visual element (e.g. having a separate icon for an indicator).
  • Demonstrably attributable - by a letter, numeral, icon or other visual sign showing that an element is demonstrably attributable to a program or intervention.
  • Accountable (Rewardable or Punishable) - by a letter, numeral, icon or other visual sign showing that an element is one for which a party will be held accountable. 
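
Two of these features follow directly from the link structure itself: an element is influenceable if it has at least one incoming causal link, and controllable if it has exactly one. A minimal sketch under that reading (the element names are illustrative assumptions):

```python
def incoming_links(links, element):
    """Count causal links arriving at an element.

    links is a list of (cause, effect) pairs representing the causal model.
    """
    return sum(1 for _, effect in links if effect == element)

links = [
    ("print brochures", "public awareness"),
    ("national media coverage", "public awareness"),
    ("hire coordinator", "program delivered"),
]

# 'program delivered' has exactly one incoming link: controllable.
print(incoming_links(links, "program delivered"))  # 1
# 'public awareness' has two incoming links: influenceable but not controllable.
print(incoming_links(links, "public awareness"))   # 2
```

Measurability, demonstrable attributability and accountability, by contrast, cannot be read off the link structure; this is why they are shown with separate icons or signs rather than by position or linking.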

The conceptually simplest approach is to let the vertical (or horizontal) flow of the model represent the general flow of the causal pathway in the real world and to visualize the 'types' of causal elements (defined in terms of the features discussed above) using a different mode of visualization rather than vertical (or horizontal) position. This is the approach adopted within outcomes theory. 

Conclusion

The best way of representing and communicating causal models has been discussed and it has been concluded that: 1) causal models are best represented as visual models (backed up, where possible, with mathematically-based models); 2) to provide the richest possible model of causality in the real world, there should be as few constraints as possible on which causal elements are allowed within a causal model; 3) any element within a causal model should be able to have a causal connection to any other element within the model; and, 4) the 'type' of causal element (as defined by its features) within a causal model needs to be able to be visualized, but this should not conflict with other aspects of the visualization (e.g. showing the upward flow of causality). A way in which this can be done when working with stakeholders is set out in the article on Simplifying terms used when working with outcomes.

Please comment on this article

This article is based on outcomes theory, which is still at a relatively early stage of development. Please critique any of the arguments laid out in this article so that they can be improved through critical examination and reflection.

Acknowledgment

A comment on this article by Rick Davies led the author to realize that the visualization of the flow of causality represents only a 'general' flow, since feedback loops must be allowed for and these run in the opposite direction to the general flow. The article was amended accordingly. 

Citing this article

Duignan, P. (2009). Causal models – the best way of structuring, representing and communicating them. Outcomes Theory Knowledge Base Article No. 239. (http://knol.google.com/k/paul-duignan-phd/causal-models-how-to-structure/2m7zd68aaz774/79).

[If you are reading this in a PDF or printed copy, the web page version may have been updated].

[Outcomes Theory Article #239 http://www.tinyurl.com/otheory239].

References

1. There are various versions of the SMART acronym; this is one.
2. Disclosure: the author is involved in the development of DoView outcomes and evaluation software.
3. DoView outcomes and evaluation software allows such 'many to many' linking.