What are Outcomes Models (Program logic models)?

A Topic Article within the Outcomes Theory Knowledge Base

Abstract

Outcomes models (also known as results maps, logic models, program logics, intervention logics, means-ends diagrams, logframes, theories of change, program theories, outcomes hierarchies and strategy maps, amongst other names) are attempts at spelling out in detail how it is believed a program or intervention will lead to improvements in higher-level outcomes. They should represent all of the high-level intended outcomes for the program and the lower-level steps which it is believed need to occur to achieve these outcomes. Such models can be represented in text, in tables, as printed diagrams, or, increasingly, as interactive models within software. In the past, there have been a number of largely unexamined ‘rules’ about the way in which outcomes models (logic models) should be drawn. This article starts by asking the question: what are outcomes models for? It then works back from this to identify the best way of drawing outcomes models so that they can be of the most use for a wide range of purposes. This is a topic article within the Outcomes Theory Knowledge Base.

Introduction [1]

Outcomes models are models used to show how a program or intervention works to achieve high-level outcomes. Examples can be found at OutcomesModels.org. They have a wide variety of names and can be presented in different formats (e.g. databases, textual tables, visualized models, mathematical models, or combinations of these). Some of the names they go by include: results maps, logic models, program logics, intervention logics, means-ends diagrams, logframes, theories of change, program theories, outcomes hierarchies and strategy maps.

In its most general sense, an outcomes model is a model of some sort which makes a claim about how the ‘world works’. More technically, it sets out the chain of causality which leads from lower-level steps to higher-level outcomes within an outcomes system or a performance management system of any type. An outcomes system is any system that attempts to deal with specifying, measuring, attributing and holding parties to account for changes in outcomes. Such systems go by a variety of names such as: results management systems, performance management systems, performance measurement systems, program evaluation, evidence-based practice systems, investment strategies, value-for-money exercises, benchmarking exercises, best practice sharing exercises, contracting for outcomes etc. While outcomes models tend to only be thought about explicitly in some of these types of outcomes systems – particularly program evaluation – they are actually a potential component of any of these outcomes systems whether or not they are formally recognized as such. Visual outcomes models built in a particular way can be seen to be the central component in a set of ‘building-blocks’ which lie behind any outcomes system. (See the building-blocks of all outcomes systems).
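
One way of making this structure concrete is to treat an outcomes model as a directed graph: steps and outcomes are nodes, and each link records the claim that one element contributes causally to another. The following is a minimal sketch of this idea in Python; the class name and the mountain-safety example are purely illustrative (they anticipate the figures below) and do not represent the data model of any particular software.

```python
# Minimal sketch: an outcomes model as a directed graph of claims
# about 'what it is believed causes what'. All names are illustrative.
from dataclasses import dataclass, field

@dataclass
class OutcomesModel:
    elements: set[str] = field(default_factory=set)           # steps and outcomes
    links: set[tuple[str, str]] = field(default_factory=set)  # (cause, effect)

    def link(self, cause: str, effect: str) -> None:
        """Record the claim that 'cause' contributes to 'effect'."""
        self.elements.update((cause, effect))
        self.links.add((cause, effect))

model = OutcomesModel()
model.link("Run safety workshops", "Climbers know the risks")
model.link("Climbers know the risks", "Fewer mountain accidents")
```

Everything else discussed in this article – indicators, attribution, levels – can then be layered onto such a graph rather than being built into its basic structure.
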
What purposes are outcomes models attempting to achieve?
In order to be clear about the best way of representing outcomes models, we need to be clear about the purposes they serve. They can have the following six purposes:
Purpose 1: To set out a comprehensive picture of ‘what it is believed causes what’, from the level of actions and steps taken before, during, and after an intervention, right up to the high-level outcomes which an intervention is seeking to improve.

Purpose 2: To provide information from evaluations and/or experience regarding the evidence (or rationale) for the links between the steps and outcomes in the case of a particular intervention (or type of intervention).

Purpose 3: To provide information about other factors which could influence an intervention achieving its outcomes (these are sometimes referred to as assumptions, risks, or external or exogenous factors).

Purpose 4: To provide information about measurements (often called indicators) which could be made of steps and outcomes, and about sources of information regarding these measurements (e.g. data collections, surveys); these are sometimes called ‘means of verification’. On occasion, these also include levels of these measurements (targets) and comparative levels (benchmarks). A sketch of how such indicator information might be attached to model elements appears after this list.

Purpose 5: To provide information about those measurements (indicators) for which one can demonstrate attribution to a particular intervention (i.e. prove that the intervention caused them to change).

Purpose 6: To act as a framework for structuring thinking about other important issues such as evaluation questions.
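
As a concrete illustration of Purpose 4 above, the sketch below shows one hedged way of attaching indicators, their information sources (‘means of verification’), targets and benchmarks to a named step or outcome. The field names and figures are invented for illustration only.

```python
# Minimal sketch: indicator information mapped onto a step or outcome
# (Purpose 4). All names and numbers are illustrative.
from dataclasses import dataclass

@dataclass
class Indicator:
    name: str
    source: str                      # 'means of verification', e.g. a survey
    target: float | None = None      # desired level, where one has been set
    benchmark: float | None = None   # comparative level, where available

indicators = {
    "Climbers know the risks": [
        Indicator(name="% of surveyed climbers who can name the key risks",
                  source="Annual climber survey",
                  target=80.0,
                  benchmark=65.0),
    ],
}
```
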

Timing and outcomes models
In terms of timing, there are two time frames for outcomes models:
  1. Before the fact (ex ante) where they represent what it is believed is likely to happen in the case of a particular intervention (or a type of intervention).
  2. After the fact (ex post) where they represent what it is believed has actually happened in regard to a particular intervention.  
Stakeholder perspective and outcomes models

Since outcomes models represent a claim about how the world works, in theory there could be a range of outcomes models representing the claims by different groups of stakeholders about how they believe a program will work (or is working). 

Some examples of outcomes models

The attempt is often made to fit outcomes models (logic models etc.) onto a single small page. This is a mistake. Outcomes models need to be large enough to represent all of what is happening within a program or organization in sufficient detail. The level of detail needs to be enough that those reading the model can see all of the important steps which are required in order to reach higher-level outcomes. Below are some examples of printed models. Outcomes models should also be represented in electronic versions so that it is easy to work with them in real time. Examples of electronic versions of models are available at OutcomesModels.org. Such models can be represented with line-and-arrow links showing causality or, as in the examples below, with the line-and-arrow links left out (if models are built in DoView outcomes processor software, causal links between the steps in such models can be stored and represented in various ways).

Figure 1: An overview model for a national mountain safety organization. This model has various sub-pages which drill down beneath the boxes (read from left to right)


Figure 2: A poster version of the model for a national mountain safety organization showing all of the drill-down pages on a single poster page

Figure 3: Part of an outcomes model for a national department of conservation showing projects mapped onto the higher levels of the model

What should we be trying to represent in an outcomes model?

Much of the complexity and confusion in discussing and working with outcomes models comes from attempting to achieve all of the six possible purposes set out above at the same time within a single model. In particular, many types of outcomes models suffer from the following problems: their ‘technology of representation’ (textual tables, single-page diagrams etc.) imposes severe limits on the complexity of the links between steps and outcomes that can be represented; they let the structure be determined by Purpose 4 above (measurement/indicators); or they let Purpose 5 above (demonstrating attribution) determine the structure of the model. This means that such models often fail to achieve the first purpose of an outcomes model (Purpose 1 above): to develop a comprehensive picture of ‘what it is believed causes what’. All of the purposes above should be achievable by the overall process of outcomes modeling, but it is a mistake to attempt to achieve them all too early in the process.
Some practical ways these ‘problems of representation’ present themselves are as follows:
  1. Outcomes models which cannot effectively represent the richness of causal connections within and between steps and outcomes in a model. For instance, a single cascading list of outcomes and steps as represented in a traditional textual table (e.g. the ‘logframe’ system widely used in international development) often cannot do justice to the complexity of the causal links between steps and outcomes in even a simple program.
  2. Outcomes models which only show currently measurable steps and outcomes. Such models are often appropriately criticized for being limited representations of the world of causes in which the program is operating. What is currently measurable is a result of the appropriateness, feasibility and affordability of measurement at a particular point in time.
  3. Outcomes models which only show those steps and outcomes which can be demonstrated as being attributable to a particular intervention. Such models are likely to be even more limited than those which are restricted just to the measurable (as in 2 above).
  4. A variation on 3 above is models which attempt to ‘hard-wire’ attribution into their horizontal or vertical structure. This is done in the traditional logic model used in evaluation, where often four levels are set out: inputs, outputs, intermediate outcomes and final outcomes. This approach determines the structure of the model on the basis of measurement and attribution – outputs are, by definition, measurable and attributable to the program. Such structuring often interrupts the free-flowing representation of causality which is needed in order to achieve Purpose 1 set out above – drawing a comprehensive picture of ‘what it is believed causes what’.

Example of the problem of forced horizontal layering

Figure 4 below shows an example of a traditionally horizontally structured outcomes model on the left and a more freeform model on the right. The model on the right can represent as many levels as required (sometimes the attempt is made to restrict outcomes models to single layers within each of the horizontal bands of a model such as the one on the left). Outputs (colored yellow in this case) do not have to be kept within a single horizontal band in the model on the right. The model on the right obviously allows a richer representation of the world while at the same time still allowing outputs to be identified.

Figure 4: A traditionally horizontally structured outcomes model on the left and a more freeform model on the right

How outcomes models should be represented
Outcomes theory suggests that in order to be useful for a range of purposes, outcomes models should be represented as follows:
  1. The model should firstly provide the richest possible representation of ‘what it is believed causes what’ in regard to an intervention. This representation should not be distorted by considerations of measurement or attribution. The ‘technology of representation’ should allow any step or outcome to have a link with any other step or outcome, and allow the model to be as large as is needed to represent all of the important steps and outcomes related to the intervention (including those which are not influenced by the intervention but are relevant to it, often referred to as assumptions, risks, or external or exogenous factors).
  2. In terms of structuring the model into higher and lower levels of causality, a visual representation should be used rather than a textual one (mathematical representations can be enhancements of a visual model). Textual representations rely on verbal labels to classify the level at which a step or outcome lies within an outcomes model. This often leads to discussions such as: ‘is this an intermediate or final outcome?’ With a visual representation, this issue does not have to be dealt with using verbal labels. Instead, it is dealt with by applying a simple rule as to where a step or outcome lies within the visual space. If a model (as is the convention in outcomes theory) runs from the highest-level outcomes at the top down to the lower-level steps below, then the simple rule for determining whether Step A lies above Step B is: ‘If Step A could be achieved immediately, would one bother with doing Step B?’ If the answer is ‘no’, then Step A lies above Step B within the outcomes model (a small sketch of this rule appears after this list). It is much easier to use this visual way of working with levels within an outcomes model than to spend time explaining to groups which are building models the nuances of the differences between verbal labels.
  3. Measurement and demonstration of attribution should be mapped back onto the model after it has been built.
  4. The ‘technology of representation’ of the outcomes model should put the minimum possible obstacles in the way of the model being worked with in a group; easily amended in the course of discussions; and represented in a variety of formats (e.g. printed, data-projected, web-based and on-screen) so that it can be used across all stages of program planning, monitoring, evaluation etc.
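
As a rough illustration of the rule in point 2 above, the sketch below derives each element's vertical level purely from the causal links: an element sits above everything that feeds into it, with no need for verbal labels such as ‘intermediate’ or ‘final’ outcome. The data and function names are illustrative only, not a prescribed algorithm.

```python
# Minimal sketch: deriving vertical levels from causal links alone.
# An element's level is the longest causal chain leading up to it,
# so it is always drawn above the steps it depends on.
from functools import lru_cache

links = {  # effect -> the causes which feed into it (illustrative)
    "Fewer mountain accidents": ["Climbers know the risks"],
    "Climbers know the risks": ["Run safety workshops"],
    "Run safety workshops": [],
}

@lru_cache(maxsize=None)
def level(element: str) -> int:
    causes = links.get(element, [])
    return 0 if not causes else 1 + max(level(c) for c in causes)

# Print the model top-down, highest-level outcomes first.
for name in sorted(links, key=level, reverse=True):
    print(f"level {level(name)}: {name}")
```
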
What are the elements that should be allowed to be put into outcomes models?
There are often long debates, when outcomes models are being built, about what ‘elements’ should, and should not, be allowed to go into a model. This discussion often takes the form of: ‘that can’t go in, it’s an activity’; ‘you can’t put that in because it’s not measurable’; ‘we can’t put that in because we can’t prove that we did it’. As discussed above, the issues of measurement and demonstration of attribution should be dealt with after the basic model is built.
In a technical sense, the ‘elements’ allowed within an outcomes model are steps and outcomes which have one or more of the following features (in relation to a particular program or intervention). Such steps and outcomes can be:

  • Relevant – to the outcomes it is hoped will be influenced by a program or intervention.
  • Influenceable – theoretically able to be influenced by a program or intervention (not necessarily actually demonstrated to be attributable).
  • Controllable – only influenced by one particular program or intervention.
  • Measurable – able to be measured.
  • Demonstrably attributable – it can actually be demonstrated that changes in them can be attributed to a particular program or intervention (i.e. it can be proved that the step or outcome has been changed by it).
  • Accountable – a particular program or intervention will be rewarded or punished for changes in the step or outcome.
These features are discussed in more detail in the article Features of steps and outcomes appearing within outcomes models. One way of recording them against model elements is sketched below.
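
As one hedged illustration, the features above can be recorded as flags against each step or outcome after the basic model has been built, rather than being used to decide what is allowed into the model in the first place. All names below are invented for illustration.

```python
# Minimal sketch: recording the features of steps and outcomes
# after the model is built. All names are illustrative.
from dataclasses import dataclass

@dataclass
class Features:
    relevant: bool = True
    influenceable: bool = False
    controllable: bool = False
    measurable: bool = False
    demonstrably_attributable: bool = False
    accountable: bool = False

features = {
    "Run safety workshops": Features(influenceable=True, controllable=True,
                                     measurable=True,
                                     demonstrably_attributable=True,
                                     accountable=True),
    "Climbers know the risks": Features(influenceable=True, measurable=True),
    "Fewer mountain accidents": Features(influenceable=True),
}
```
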
‘Full’ outcomes models versus ones only including the measurable and demonstrable (attributable)
Figure 5 below shows, first, a ‘full’ outcomes model which represents ‘what it is believed causes what’ (on the left) – the type of outcomes model recommended within outcomes theory. Second, in the center, it shows a ‘measurable only’ model which only includes steps and outcomes which have indicators (yellow icons and the word ‘indicator’) next to them. Third, on the right, it shows a ‘demonstrable (attributable) only’ model which only includes those steps for which it can be demonstrated that changes are attributable to the program (the green ones). As can be seen from this figure, the models in the center and on the right are much less rich than the model on the left. If ‘full’ outcomes models are drawn, then measurement and demonstration of attribution can be mapped onto them at a later stage.
Figure 5: Comparison between a ‘full’ outcomes model and ones restricted to the measurable or demonstrable (attributable)
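
The same point can be made in code: the ‘measurable only’ and ‘demonstrable (attributable) only’ models are simply filtered views of the ‘full’ model, so building the full model first loses nothing. A minimal sketch follows, with illustrative element names and flags.

```python
# Minimal sketch: restricted models as derived views of a 'full' model.
full_model = {  # element -> illustrative measurement/attribution flags
    "Run safety workshops":     {"measurable": True,  "attributable": True},
    "Climbers know the risks":  {"measurable": True,  "attributable": False},
    "Fewer mountain accidents": {"measurable": False, "attributable": False},
}

def view(model: dict, feature: str) -> set[str]:
    """The subset of elements visible in a restricted view."""
    return {name for name, flags in model.items() if flags[feature]}

measurable_only = view(full_model, "measurable")      # drops one element
attributable_only = view(full_model, "attributable")  # drops two elements
# The full model keeps all three elements; the views are mere overlays.
```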

Usefulness of outcomes models drawn as ‘full’ outcomes models
Outcomes models drawn as ‘full’ outcomes models as described here are much more useful than outcomes models which are limited to the measurable or demonstrable (attributable). By mapping measurement (indicators) and demonstrability (attribution) back onto the ‘full’ model after it has been built, ‘full’ models can have all of the functionality of more limited models without any of their downsides. The wide range of functions such models can be used for includes:
  • Working out how the program will work – this should focus on all the steps which it is believed need to happen to achieve high-level outcomes, not in the first instance just the measurable and demonstrable (attributable).
  • Discussing how the program works with stakeholders – stakeholders are interested in what it is believed will happen in the program (the left-hand model in Figure 5) much more than in restricted models like the ones on the right of that figure.
  • For high-level thinking in developing high-level policy. See here
  • For mapping a number of programs or interventions onto a common outcomes model.
  • Identifying what evidence and rationale there is for the links between the steps and outcomes in the model. 
  • Identifying what is currently measurable by mapping indicators onto the model – if the model has been drawn to show just the measurable, there is no way of identifying instances where a step or outcome is not currently being measured.
  • Identifying what is demonstrable (attributable) to a particular intervention and for what it should be held to account – the discussion about this is much more efficient when it is conducted against a ‘full’ outcomes model rather than being dealt with in other ways (see For contracting below).
  • Setting out a visual evaluation plan for evaluation planning and implementation. For an example see here
  • For planning economic evaluation. 
  • For reporting evaluation results.
  • For contracting – such discussions, particularly in the context of encouraging providers to focus on outcomes are likely to be more effective against a ‘full’ outcomes model. See Contracting for Outcomes for an example.

These ways in which a ‘full’ outcomes model can be used are described in the Easy Outcomes system which is an applied version of outcomes theory. For an article on Easy Outcomes see here.

Visualization of outcomes models

A convention for visualizing outcomes models which is suitable across strategic prioritization, program planning and implementation, performance management, monitoring, evaluation and contracting is set out in the article Conventions for visualizing outcomes models.
Examples of outcomes models which meet outcomes theory criteria
Some examples of outcomes models which meet the outcomes theory criteria for a well-represented outcomes model are available at www.OutcomesModels.org. These have been drawn according to a set of standards for drawing outcomes models and visualized in DoView [2] outcomes and evaluation software, which has been designed to allow models to be built which meet the outcomes theory criteria discussed in this article (models of any size, any step or outcome linked to any other, able to be amended while working with a group).
Working with groups to build outcomes models

The first video below is a short practical demonstration of how one can work with a group to build an outcomes model as described in this article; the second video shows how such models can be broken up into sub-pages so that models of any size can be developed.
Video 1: Building an outcomes model with a group
Video 2: Breaking a large outcomes model up into sub-pages

Conclusion

This article has described what outcomes models are by identifying the purposes which they serve; pointed to problems in the way many outcomes models are represented; and set out the way in which they should be represented. Outcomes models developed in this way can be used for a range of different purposes.
Citing this article
Duignan, P. (2009). What are outcomes models (program logic models)? Outcomes Theory Knowledge Base Article No. 224. (http://knol.google.com/k/paul-duignan-phd/what-are-outcomes-models-program-logic/2m7zd68aaz774/22) or (http://tinyurl.com/ot224).
[If you are reading this in a PDF or printed copy, the web page version may have been updated].
[Outcomes Theory Article #224: http://www.tinyurl.com/ot224]

References

  1. The author worked on aspects of outcomes theory while the New Zealand Senior Fulbright Scholar at the Urban Institute in Washington D.C. in 2005. Elements of outcomes theory have been presented at a variety of conferences, including the American Evaluation Association Conference, Atlanta, 2004; the European Evaluation Society Conference, Berlin, 2004; the Australasian Evaluation Society Conference, Perth, 2008; the Aotearoa New Zealand Evaluation Society Conference, Rotorua, 2008; the European Evaluation Society Conference, Lisbon, 2008; the United Kingdom Evaluation Society Conference, Bristol, 2008; and the American Evaluation Association Conference, Denver, 2008.
  2. Disclosure: The author is involved in the development of DoView outcomes software as a way of creating, working with, and reporting on outcomes structures.