[This article is available at https://outcomestheory.wordpress.com/2010/09/16/207/]
Discussion regarding outcomes systems (systems which attempt to specify, measure, attribute, or hold parties to account for changes in outcomes of various types) is often confused by the diversity of systems used in different sectors; by different names being used for similar components in such systems; and by the wide range of disciplines involved in such discussions, each with its own technical language. Outcomes systems are known by names such as: performance management systems, results management systems, monitoring systems, indicator frameworks, program evaluation, evidence-based practice, outcomes-focused contracting, strategic planning, and priority-setting processes, amongst others. Prior to the development of outcomes theory, there was no common language for discussing such systems across all sectors and disciplines. This article outlines a simple conceptual model of the basic building-blocks of outcomes systems. This approach provides a sound conceptual basis for holding more coherent discussions regarding the nature, functioning, and improvement of particular outcomes systems.
The Building-Blocks/Evidence Types
The building-blocks/evidence types underpinning outcomes systems can be thought of in various ways. They can be seen as the key structural elements which should make up all outcomes systems. Comparing outcomes measurement to the allied field of accounting, accounting systems have basic building blocks (e.g. assets register, general ledger, depreciation schedule). In the same way, well-formed outcomes systems need to have a particular set of building blocks. The more an outcomes system has these building blocks in place, the better and more coherent the system is likely to be. Where they are missing from an outcomes system, the system normally suffers from structural problems. The building-blocks can also be thought of as the set of different types of information that can be provided about a program, project, organization, policy, or other intervention as evidence of whether or not ‘it works’.
The building-blocks are set out in Figure 1 below.
- Building-block 1: An outcomes model/intervention logic (e.g. a DoView)
An outcomes model sets out the logic of how it is believed that lower-level steps undertaken within a program or intervention will lead to higher-level outcomes. Such outcomes models are often referred to under different names, such as: intervention logics, outcomes hierarchies, program logics, logic models, program theories, theories of change, ends-means diagrams, strategy maps, or outcomes DoViews. Such models may, or may not, be justified by analysis and evidence supporting the links between steps and outcomes in the model. In terms of the features of the steps and outcomes which can be put into such models, outcomes models should not be restricted just to steps and outcomes that can be measured or attributed to a particular program (measurement and attribution are important but are best dealt with after an outcomes model is drawn). Outcomes models may be presented in various formats, e.g. textual narratives, tables, databases, or visual models. If outcomes models are to be fit for use within outcomes systems, they should be visualized according to a set of standards to ensure they are well formed and fit for purpose (see Conventions for visualizing outcomes models and Standards for drawing outcomes models).
- Building-block 2: Not-necessarily controllable indicators
These are indicators (measures of steps or outcomes) which track whether or not there have been any improvements in high-level outcomes. They are sometimes described as state, environmental, strategic, or progress indicators. They should not be restricted just to indicators which are controllable by the program or intervention; if they are, then for many interventions they will not reach up within the outcomes model to high-level outcomes. Mapping these not-necessarily controllable indicators back onto a comprehensive outcomes model is a powerful way of identifying those steps and outcomes in the model that are currently measurable and those that are not. This is a much better approach than the traditional, almost universal, approach of just working with a list of indicators and having no real idea of whether it is the important, or just the easily measurable, which is currently being measured (see the article on why performance measures should always be mapped back onto a visual outcomes model and the article on reviewing a list of performance indicators for more information). Tracking trends in not-necessarily controllable indicators is important for strategic planning, in order to know whether the outcomes being sought in the outside world are occurring. However, because these indicators are not controllable, merely showing an improvement in them does not establish that the improvement can be attributed to a particular program or intervention.
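The mapping idea above can be made concrete with a small sketch. The steps, indicators, and their mapping below are invented examples, not part of outcomes theory; the point is only to show how mapping indicators back onto the steps of an outcomes model immediately exposes which steps are currently measured and which are not.

```python
# Hypothetical steps in an outcomes model, from lower-level (controllable)
# up to a high-level outcome.
outcomes_model_steps = [
    "Training sessions delivered",   # lower-level, controllable
    "Participants gain skills",
    "Participants change behavior",
    "Community health improves",     # high-level outcome
]

# Each indicator is mapped to the step in the model it measures.
indicators = {
    "Number of sessions run": "Training sessions delivered",
    "Post-course test scores": "Participants gain skills",
}

# Steps covered by at least one indicator, and steps with no coverage.
measured = {step for step in outcomes_model_steps if step in indicators.values()}
unmeasured = [step for step in outcomes_model_steps if step not in measured]

print("Measured steps:", sorted(measured))
print("Currently unmeasured steps:", unmeasured)
```

In this toy case the mapping shows that only the lower-level steps are measured, so the indicator set says nothing about the high-level outcome, which is exactly the gap this building-block is designed to reveal.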
- Building-block 3: Controllable indicators
These are indicators which are under the control of the intervention. They tend to sit at a lower level within the outcomes model, because the closer an indicator is to an intervention within an outcomes model, the more likely it is to be controlled by the intervention (rather than also being affected by other factors). Such controllable indicators include what are often called outputs (or, more correctly, output indicators). Output indicators are often used as the basis for the accountability of a program or intervention. If controllable output indicators do not reach up to the highest-level outcomes within an outcomes model, merely monitoring the program through indicators will say nothing about the attribution of changes in high-level outcomes to the program. In these cases, if one is seeking to prove attribution, specific evaluations, rather than routine monitoring processes, need to be employed, as described in the next building-block.
- Building-block 4: Impact/outcome evaluation
This is impact/outcome evaluation that attempts to make the attributional claim that a program or intervention has actually changed one or more high-level outcomes, in the absence of controllable indicators reaching to the top of an outcomes model. Outcomes theory identifies a set of seven possible outcome/impact evaluation design types that can be used to make such an attributional claim. For any particular program or intervention, one or more of these designs may or may not be appropriate, feasible, and/or affordable. It cannot be assumed, before an analysis of the appropriateness, feasibility, and affordability of these design types has been undertaken, that one or more of them will be appropriate in a particular case. (See also the article on assessing when impact evaluation should be done.)
- Building-block 5: Implementation evaluation
Implementation evaluation (sometimes called formative evaluation) focuses on optimizing the implementation of a program or intervention. Because it usually takes place at the start of a program, it is often undertaken before the point at which impact/outcome evaluation of high-level, long-term outcomes can be done, since a considerable amount of time needs to elapse before such outcomes occur. Usually this type of evaluation does not make any high-level attributional claim to prove that a program or intervention has caused high-level outcomes to change. It does, however, provide rich, detailed ‘lower-level’ information about a program or intervention which is useful in its own right. It can be used to ensure that the lower levels of the outcomes model are being implemented in an appropriate way; to describe what happened in a program for future reference (an aspect of what is called process evaluation); and to assist in the interpretation of any impact/outcome findings from building-block 4 above – for instance, a program may fail to achieve its high-level outcomes because it was poorly implemented.
- Building-block 6: Economic and comparative evaluation
These are types of evaluation which look beyond the results of an individual program or intervention and compare it with other interventions. This type of evaluation includes comparisons between the effects of different types of programs. It also includes all economic evaluation, which potentially provides a way of comparing different programs or interventions on the basis of: the relative cost of the programs (economic costing); the cost of achieving a particular effect size (cost-effectiveness analysis); and the net cost or benefit of a program (cost-benefit analysis). For more information on types of economic analysis, see the article on Types of Economic Evaluation Analysis.
Using the Building-Blocks in Practice
In practice, not every outcomes system will be able to provide evidence and analysis from all of these building-blocks. If one of them can be done robustly for a particular outcomes system, there may be less need for one or more of the other building-blocks to be emphasized in that system. An important principle of outcomes theory, often violated by high-level stakeholders dealing with outcomes systems, is that it cannot be assumed (or demanded) in advance that any one of these building-blocks will provide robust information in a particular outcomes system. In particular, high-level stakeholders often think that the fourth building-block – impact/outcome evaluation – can always be undertaken within an outcomes system and that it is appropriate to routinely demand this (e.g. those funding projects demanding that impact/outcome evaluation be undertaken for every program they fund). For any particular program, an analysis of that particular case is needed to see whether outcome/impact evaluation is appropriate, feasible, and/or affordable. (See the article on
The building-block approach underpins thinking within outcomes theory about the best way of structuring an outcomes system. An overview, with examples, of how to put in place a comprehensive outcomes system which includes the building-blocks identified in this article is set out in the article on Duignan’s Outcomes-Focused Visual Strategic Planning. Its application more specifically to monitoring and evaluation is set out in the article on Duignan’s Visual Monitoring and Evaluation Planning. Articles which show aspects of how the outcomes system building-blocks approach can be used to better conceptualize issues in evaluation and performance management include: Reframing program evaluation as part of collecting strategic information for sector decision-making; Contracting for outcomes; and Distinguishing evaluation from other processes (e.g. monitoring, performance management, assessment, quality assurance). The approach can be used to analyze any outcomes system (strategic planning, performance management, evaluation, etc.) to identify technical problems with the system. (See the analysis of the UN Results-Based Management System.)
The outcomes system building-block framework can be used to analyze any outcomes system to identify gaps and weaknesses in the system and assist in improving it. It can also be used in the design of new outcomes systems to ensure that they are well constructed.
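A gap analysis of this kind can be sketched as a simple checklist exercise. The function and example system below are hypothetical illustrations of the idea, not a tool defined by outcomes theory: a system is described by the building-blocks it has in place, and the analysis reports the ones that are missing.

```python
# The six building-blocks described in this article.
BUILDING_BLOCKS = [
    "outcomes model",
    "not-necessarily controllable indicators",
    "controllable indicators",
    "impact/outcome evaluation",
    "implementation evaluation",
    "economic and comparative evaluation",
]

def gap_analysis(present_blocks):
    """Return the building-blocks missing from an outcomes system,
    given the list of blocks it currently has in place."""
    present = {block.lower() for block in present_blocks}
    return [block for block in BUILDING_BLOCKS if block not in present]

# A hypothetical system that has only a model and output indicators:
system = ["outcomes model", "controllable indicators"]
print("Missing building-blocks:", gap_analysis(system))
```

For this hypothetical system the analysis flags the absence of high-level indicators and of all three evaluation types, which is the kind of structural weakness the framework is intended to surface.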
Citing this article
Duignan, P. (2008). The building-blocks of outcomes systems. Outcomes Theory Knowledge Base Article No. 207. (https://outcomestheory.wordpress.com/2010/09/16/207/). [Outcomes Theory Article #207 (http://tinyurl.com/otheory207)].