Abstract
Outcomes models (also known as results maps, logic models, program logics, intervention logics, means-ends diagrams, logframes, theories of change, program theories, outcomes hierarchies and strategy maps, amongst other names) are attempts at spelling out in detail how it is believed a program or intervention will lead to improvements in higher-level outcomes. They should represent all of the high-level intended outcomes for the program and the lower-level steps which it is believed need to occur to achieve these outcomes. Such models can be represented in text, in tables, as printed diagrams or, increasingly, as interactive models within software. In the past, there have been a number of largely unexamined ‘rules’ about the way in which outcomes models (logic models) should be drawn. This article starts by asking the question: what are outcomes models for? It then works back from this to identify the best way of drawing outcomes models so that they are of the most use for a wide range of purposes. This is a topic article within the Outcomes Theory Knowledge Base.
Outcomes models are models used to show how a program or intervention works to achieve high-level outcomes. Examples can be found at OutcomesModels.org. They have a wide variety of names and can be presented in different formats (e.g. databases, textual tables, visualized models and mathematical models or combinations of these). Some of the names they go by include: results maps, logic models, program logics, intervention logics, means-ends diagrams, logframes, theories of change, program theories, outcomes hierarchies and strategy maps.
Purpose 3: To provide information about other factors which could influence an intervention achieving its outcomes (these are sometimes referred to as assumptions, risks, external or exogenous factors etc).
Purpose 5: To provide information about those measurements (indicators) for which one can demonstrate attribution to a particular intervention (i.e. prove that the intervention caused them to change).
- Before the fact (ex ante) where they represent what it is believed is likely to happen in the case of a particular intervention (or a type of intervention).
- After the fact (ex post) where they represent what it is believed has actually happened in regard to a particular intervention.
Attempts are often made to fit outcomes models (logic models, etc.) onto a single small page. This is a mistake. Outcomes models need to be large enough to represent all of what is happening within a program or organization in sufficient detail. The level of detail needs to be enough so that those reading the model can see all of the important steps which are required in order to get to higher-level outcomes. Below are some examples of printed models. Outcomes models should also be represented as electronic versions so that it is easy to work with them in real time. Examples of electronic versions of models are available at OutcomesModels.org. Such models can be represented with line-and-arrow links showing causality or, as in the examples below, with the line-and-arrow links left out (if models are built in DoView outcomes processor software, causal links between the steps in such models can be stored and represented in various ways).
Figure 3: Part of an outcomes model for a national department of conservation showing projects mapped onto the higher-levels of the model
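As a rough illustration of how the steps and causal links of such a model might be stored electronically, the sketch below represents an outcomes model as a simple directed graph. It is a minimal sketch only; the class and field names are hypothetical and it does not reflect DoView's actual data format.

```python
# Minimal sketch (hypothetical names) of an outcomes model stored as a directed graph.
from dataclasses import dataclass, field

@dataclass
class Step:
    """A single step or outcome in the model."""
    step_id: str
    title: str

@dataclass
class OutcomesModel:
    steps: dict = field(default_factory=dict)  # step_id -> Step
    links: set = field(default_factory=set)    # (from_id, to_id) causal links

    def add_step(self, step_id, title):
        self.steps[step_id] = Step(step_id, title)

    def add_link(self, from_id, to_id):
        """Record that achieving `from_id` is believed to contribute to `to_id`."""
        self.links.add((from_id, to_id))

# Example: two lower-level steps feeding one higher-level outcome.
model = OutcomesModel()
model.add_step("train", "Staff trained in new procedure")
model.add_step("use", "New procedure used consistently")
model.add_step("outcome", "Improved client outcomes")
model.add_link("train", "use")
model.add_link("use", "outcome")
```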
What should we be trying to represent in an outcomes model?
1. Outcomes models which cannot effectively represent the richness of causal connections between the steps and outcomes within a model. For instance, a single cascading list of outcomes and steps as represented in a traditional textual table (e.g. the logframe format widely used in international development) often cannot do justice to the complexity of the causal links between steps and outcomes in regard to even a simple program.
2. Outcomes models which only show currently measurable steps and outcomes. Such models are often appropriately criticized for being limited representations of the world of causes in which the program is operating. What is currently measurable is simply a function of the appropriateness, feasibility and affordability of measurement at a particular point in time.
3. Outcomes models which only show those steps and outcomes which can be demonstrated as being attributable to a particular intervention. Such models are likely to be even more limited than those restricted just to the measurable (as in 2 above).
4. A variation on 3 above is models which attempt to ‘hard-wire’ attribution into their horizontal or vertical structure. This is done in the traditional logic model used in evaluation, where four levels are often set out: inputs, outputs, intermediate outcomes and final outcomes. This approach determines the structure of the model based on measurement and attribution: outputs are, by definition, measurable and attributable to the program. Such structuring often interrupts the free-flowing representation of causality which is needed in order to achieve Purpose 1 set out above, drawing a comprehensive picture of ‘what it is believed causes what’.
- The model should firstly provide the richest possible representation of ‘what it is believed causes what’ in regard to an intervention. This representation should not be distorted by considerations of measurement or attribution. The ‘technology of representation’ should allow any step or outcome to have a link with any other step or outcome, and should allow the model to be as large as is needed to represent all of the important steps and outcomes related to the intervention (including those which are not influenced by the intervention but are relevant to it, often referred to as assumptions, risks, external or exogenous factors).
- In terms of structuring the model into higher and lower levels of causality, a visual representation should be used rather than a textual representation (mathematical representations can be enhancements on a visual model). Textual representations rely on verbal labels to classify the level at which a step or outcome lies within an outcomes model. This often leads to discussions such as: ‘is this an intermediate or final outcome?’ When using a visual representation, this issue does not have to be dealt with using verbal labels. Instead, it is dealt with by applying a simple rule as to where a step or outcome lies within the visual space. If a model (as is the convention in outcomes theory) runs from the highest-level outcomes at the top down to the lower-level steps below, then the simple rule for determining whether Step A is above Step B is: ‘If Step A could be achieved immediately, would one bother with doing Step B?’ If the answer is ‘no’, then Step A lies above Step B within the outcomes model (see the sketch after this list). It is much easier to use this visual way of working with levels within an outcomes model than to spend time explaining to groups building models the nuances of the differences between verbal labels.
- Measurement and demonstration of attribution should be mapped back onto the model after it has been built.
- The ‘technology of representation’ of the outcomes model should provide the minimum possible obstacles to it being worked with in a group, easily amended in the course of discussions, and represented in a variety of formats (e.g. printed, data-projected, web-based and on-screen) so that it can be used across all stages of program planning, monitoring, evaluation, etc.
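To illustrate the levelling rule described above, the sketch below (building on the hypothetical OutcomesModel from the earlier sketch) treats ‘Step A is above Step B’ as ‘there is a chain of causal links running from B up to A’. This is only one way of operationalizing the rule, which in practice is a question put to the group building the model.

```python
# Sketch of the levelling rule: A sits above B if B is only being done in order
# to (eventually) contribute to A, i.e. a chain of causal links runs from B to A.
def is_above(model, a_id, b_id, _seen=None):
    """True if a chain of causal links leads from b_id up to a_id."""
    _seen = _seen or set()
    for from_id, to_id in model.links:
        if from_id == b_id and to_id not in _seen:
            if to_id == a_id or is_above(model, a_id, to_id, _seen | {to_id}):
                return True
    return False

print(is_above(model, "outcome", "train"))  # True: the training step sits below the outcome
```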
- Relevant – to the outcomes it is hoped will be influenced by a program or intervention.
- Influenceable – theoretically able to be influenced by a program or intervention (not necessarily demonstrably attributable to it).
- Controllable – only influenced by one particular program or intervention.
- Measurable – able to be measured.
- Demonstrably attributable – it can actually be demonstrated that the step or outcome can be attributed to a particular program or intervention (i.e. it can be proved that the intervention changed it).
- Accountable – a particular program or intervention will be rewarded or punished for changes in the step or outcome.
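These characteristics could, for example, be recorded against each step or outcome in an electronic model. The sketch below (continuing the earlier hypothetical sketches, with invented field names and example values) shows one simple way of doing so.

```python
# Sketch of recording the characteristics above against individual steps/outcomes.
from dataclasses import dataclass

@dataclass
class StepCharacteristics:
    relevant: bool = True        # relevant to the outcomes the intervention targets
    influenceable: bool = False  # theoretically able to be influenced by it
    controllable: bool = False   # influenced only by this intervention
    measurable: bool = False     # able to be measured at present
    attributable: bool = False   # change can be demonstrated to be caused by it
    accountable: bool = False    # the intervention is rewarded/punished for change

characteristics = {
    "train": StepCharacteristics(influenceable=True, controllable=True,
                                 measurable=True, attributable=True, accountable=True),
    "outcome": StepCharacteristics(influenceable=True, measurable=True),
}
```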
Figure 2: Comparison between a ‘full’ outcomes model and ones restricted to the measurable or demonstrable (attributable)
- Working out how the program will work – this should focus on all the steps which it is believed need to happen to achieve high-level outcomes, not in the first instance just the measurable and demonstrable (attributable).
- Discussing how the program works with stakeholders – they are interested in what it is believed will happen in the program (the left-hand model in Figure 2) much more than in models like the ones on the right of that figure.
- For high-level thinking in developing high-level policy. See here.
- For mapping a number of programs or interventions onto a common outcomes model.
- Identifying what evidence and rationale there is for the links between the steps and outcomes in the model.
- Identifying what is currently measurable by mapping indicators onto the model – if the model has been drawn to show just the measurable, there is no way to identify those steps or outcomes which are not currently being measured (see the sketch following this list).
- Identifying what is demonstrable (attributable) to a particular intervention and for what it should be held to account – the discussion about this is much more efficient when it is conducted against a ‘full’ outcomes model rather than being dealt with in other ways (see For contracting below).
- Setting out a visual evaluation plan for evaluation planning and implementation. For an example see here.
- For planning economic evaluation.
- For reporting evaluation results.
- For contracting – such discussions, particularly in the context of encouraging providers to focus on outcomes, are likely to be more effective against a ‘full’ outcomes model. See Contracting for Outcomes for an example.
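As a small illustration of the indicator-mapping use listed above, the sketch below (continuing the earlier hypothetical model) maps indicators onto steps and lists any steps which currently have no indicator. The indicator names are invented for the example.

```python
# Sketch: map indicators onto the 'full' model, then list unmeasured steps.
indicators = {
    "train": ["% of staff completing training"],
    "outcome": ["client satisfaction score"],
    # "use" has no indicator mapped to it yet
}

unmeasured = [step.title for step_id, step in model.steps.items()
              if step_id not in indicators]
print(unmeasured)  # ['New procedure used consistently']
```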
Building an outcomes model with a group
Breaking a large outcomes model up into sub-pages
References
- The author worked on aspects of outcomes theory while the New Zealand Senior Fulbright Scholar at the Urban Institute in Washington D.C. in 2005. Elements of outcomes theory have been presented at a variety of conferences, including the American Evaluation Association Conference, Atlanta, 2004; the European Evaluation Society Conference, Berlin, 2004; the Australasian Evaluation Society Conference, Perth, 2008; the Aotearoa New Zealand Evaluation Society Conference, Rotorua, 2008; the European Evaluation Society Conference, Lisbon, 2008; the United Kingdom Evaluation Society Conference, Bristol, 2008; and the American Evaluation Association Conference, Denver, 2008.
- Disclosure: The author is involved in the development of DoView outcomes software as a way of creating, working with, and reporting on outcomes structures.