The Building-Blocks of Outcomes Systems

[This article is available at https://outcomestheory.wordpress.com/2010/09/16/207/]


Introduction [1]

Discussion regarding outcomes systems (systems which attempt to specify, measure, attribute or hold parties to account for changes in outcomes of various types) is often confusing because of the diversity of systems used in different sectors; the different names used for similar components in such systems; and the wide range of disciplines involved in such discussions, each with its own technical language. Outcomes systems are known by names such as: performance management systems, results management systems, monitoring systems, indicator frameworks, program evaluation, evidence-based practice, outcomes-focused contracting, strategic planning, and priority-setting processes, amongst others. Prior to the development of outcomes theory, there was no common language for discussing such systems across all sectors and disciplines. This article outlines a simple conceptual model of the basic building-blocks of outcomes systems. This approach provides a sound conceptual basis for holding more coherent discussions regarding the nature, functioning and improvement of particular outcomes systems.

The Building-Blocks/Evidence Types
The building-blocks/evidence types underpinning outcomes systems can be thought of in various ways. They can be seen as the key structural elements which should make up all outcomes systems. To compare outcomes measurement with the allied field of accounting: accounting systems have basic building-blocks (e.g. an assets register, a general ledger, a depreciation schedule). In the same way, well-formed outcomes systems need to have a particular set of building-blocks. The more an outcomes system has these building-blocks in place, the better and more coherent the system is likely to be. Where they are missing from an outcomes system, the system normally suffers from structural problems. The building-blocks can also be thought of as the set of different types of information that can be provided about a program, project, organization, policy or other intervention as evidence as to whether or not ‘it works’.
The building-blocks are set out in Figure 1 below.

[Figure 1: The building-blocks of outcomes systems]
A description of each of the building-blocks follows:

  • Building-block 1: An outcomes model/intervention logic (e.g. a DoView)
    An outcomes model sets out the logic of how it is believed that lower-level steps undertaken within a program or intervention will lead to higher-level outcomes. Such outcomes models are often referred to under different names, such as: intervention logics, outcomes hierarchies, program logics, logic models, program theories, theories of change, ends-means diagrams, strategy maps or outcomes DoViews. Such models may, or may not, be justified by analysis and evidence supporting the links between steps and outcomes in the model. In terms of the features of the steps and outcomes which can be put into such models, outcomes models should not be restricted to just those steps and outcomes that can be measured or attributed to a particular program (measurement and attribution are important, but are best dealt with after an outcomes model is drawn). Outcomes models may be presented in various formats, e.g. textual narratives, tables, databases, or visual models. If outcomes models are to be fit for use within outcomes systems, they should be visualized according to a set of standards to ensure they are well formed and fit for purpose (see Conventions for visualizing outcomes models and Standards for drawing outcomes models).
  • Building-block 2: Not-necessarily controllable indicators
    These are indicators (measures of steps or outcomes) which track whether or not there has been any improvement in high-level outcomes. They are sometimes described as state, environmental, strategic or progress indicators. They should not be restricted just to indicators which are controllable [3] by the program or intervention. If they are, then in the case of many interventions they will not reach up to the high-level outcomes within the outcomes model. Mapping these not-necessarily controllable indicators back onto a comprehensive outcomes model is a powerful way of identifying those steps and outcomes in the model that are currently measurable and those that are not (a simple illustrative sketch of this mapping is given after this list). This is a much better approach than the traditional, almost universal, approach of just working with a list of indicators and having no real idea of whether what is currently being measured is what is important or merely what is easily measurable (see the article on why performance measures should always be mapped back onto a visual outcomes model and the article on reviewing a list of performance indicators for more information). Tracking trends in not-necessarily controllable indicators is important for strategic planning, in order to know whether the outcomes being sought in the outside world are occurring. Of course, merely showing an improvement in not-necessarily controllable indicators does not establish that the improvement can be attributed to a particular program or intervention; because these indicators are not controllable, their measurement alone does not show that the intervention caused the improvement. [4]
  • Building-block 3: Controllable indicators
    These are indicators which are under the control of the intervention. They tend to be at a lower level within the outcomes model, because the closer an indicator is to an intervention within an outcomes model, the more likely it is to be controlled by the intervention (rather than also being affected by other factors). Such controllable indicators include what are often called outputs (or, more correctly, output indicators). Output indicators are often used as the basis for accountability of a program or intervention. If controllable output indicators do not reach up to the highest-level outcomes within an outcomes model, the mere monitoring of the program through indicators will say nothing about the attribution of changes in high-level outcomes to the program. In these cases, if one is seeking to prove attribution, specific evaluations, rather than routine monitoring processes, need to be employed, as described in the next building-block.
  • Building-block 4: Impact/Outcome evaluation.
    This is impact/outcome evaluation that attempts to make the attributional claim that a program or intervention has actually changed one or more high-level outcomes, in the absence of controllable indicators reaching to the top of the outcomes model. There is a set of seven possible outcome/impact evaluation design types identified in outcomes theory that can be used to make such an attributional claim. In the case of any program or intervention, one or more of these designs may or may not be appropriate, feasible and/or affordable. It cannot be assumed, before an analysis of the appropriateness, feasibility and affordability of these design types has been undertaken, that one or more of these designs will be appropriate in the case of a particular program or intervention. (See also the article on assessing when impact evaluation should and should not be used.)
  • Building-block 5: Implementation evaluation
    Implementation evaluation (sometimes called formative evaluation) focuses on optimizing the implementation of a program or intervention. Because it usually takes place at the start of a program, it is often undertaken before the point at which impact/outcome evaluation can be done on high-level, long-term outcomes, since a considerable amount of time needs to elapse before such outcomes occur. Usually this type of evaluation does not make any high-level attributional claims about being able to prove that a program or intervention has caused high-level outcomes to change. It does, however, provide rich, detailed ‘lower-level’ information about a program or intervention which is useful in its own right. It can be used to: ensure that the lower levels of the outcomes model are being implemented in an appropriate way; describe what happened in a program for future reference (an aspect of what is called process evaluation); and assist in the interpretation of any impact/outcome findings from building-block 4 above – for instance, a program may not achieve its high-level outcomes because it was poorly implemented. [5]
  • Building-block 6: Economic and comparative evaluation.
    These are types of evaluation which look beyond the results of an individual program or intervention and compare it with other interventions. This type of evaluation includes comparisons between the effects of different types of programs. It also includes all economic evaluation, which potentially provides a way of comparing different programs or interventions on the basis of: their relative cost (economic costing); the cost of achieving a particular effect size (cost-effectiveness analysis); and the net cost or benefit of a program (cost-benefit analysis). For more information on types of economic analysis see the article on Types of Economic Evaluation Analysis.
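
To make the mapping discussed in building-blocks 1–3 concrete, here is a minimal sketch in Python (purely illustrative; the program, step names, indicator names and the simple Step structure are invented for this example and are not part of any DoView or outcomes-theory format). It represents a fragment of an outcomes model as steps linked to higher-level outcomes, maps indicators onto those steps, and reports which steps are unmeasured and which are covered only by controllable indicators.

from dataclasses import dataclass, field

@dataclass
class Step:
    """A step or outcome in an outcomes model (one box in a visual model)."""
    name: str
    leads_to: list = field(default_factory=list)    # names of higher-level steps/outcomes
    indicators: list = field(default_factory=list)  # (indicator name, controllable?) pairs

# Hypothetical fragment of an outcomes model for a smoking-cessation program
model = [
    Step("Run quit-smoking workshops",
         leads_to=["Participants attempt to quit"],
         indicators=[("Number of workshops delivered", True)]),   # controllable (an output indicator)
    Step("Participants attempt to quit",
         leads_to=["Reduced smoking prevalence"],
         indicators=[("Self-reported quit attempts", False)]),    # not necessarily controllable
    Step("Reduced smoking prevalence",
         leads_to=[],
         indicators=[]),                                           # high-level outcome, currently unmeasured
]

# Map the indicators back onto the model: which steps are measured, and how?
for step in model:
    if not step.indicators:
        print(f"NOT MEASURED: {step.name}")
    elif all(controllable for _, controllable in step.indicators):
        print(f"Controllable indicators only: {step.name}")
    else:
        print(f"Tracked by not-necessarily controllable indicators: {step.name}")

Run as written, the sketch flags the high-level outcome as unmeasured – exactly the situation in which routine monitoring cannot speak to attribution and a specific impact/outcome evaluation (building-block 4) would need to be considered.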

Using the Building-Blocks in Practice

In practice, not every outcomes system will be able to provide evidence and analysis from all of these building-blocks. If one of them can be done robustly for a particular outcomes system, there may be less need for one or more of the other building-blocks to be emphasized in that system. An important principle of outcomes theory, often violated by high-level stakeholders dealing with outcomes systems, is that it cannot be assumed (or demanded) before the fact that any one of these building-blocks will provide robust information in a particular outcomes system. In particular, high-level stakeholders often think that the fourth building-block – impact/outcome evaluation – can always be undertaken within an outcomes system and that it is appropriate to routinely demand it (e.g. funders demanding that impact/outcome evaluation be undertaken for every program they fund). For any particular program, an analysis needs to be carried out for that particular case to see whether outcome/impact evaluation is appropriate, feasible and/or affordable. (See the article on where impact evaluation should and should not be used.)
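
As a minimal illustration of this kind of case-by-case analysis, the Python sketch below (the verdicts and the appropriateness/feasibility/affordability flags are invented for one hypothetical program; this is not a prescribed outcomes-theory format) records which building-blocks are in place for an outcomes system and checks whether impact/outcome evaluation has actually been justified before being demanded.

# Hypothetical building-block checklist for a single outcomes system.
building_blocks = {
    "1. Outcomes model / intervention logic": True,
    "2. Not-necessarily controllable indicators": True,
    "3. Controllable indicators": True,
    "4. Impact/outcome evaluation": False,   # no appropriate, feasible and affordable design identified yet
    "5. Implementation evaluation": True,
    "6. Economic and comparative evaluation": False,
}

print("Building-blocks missing or not yet justified:")
for name, in_place in building_blocks.items():
    if not in_place:
        print(f"  - {name}")

# Before demanding building-block 4, an explicit analysis should be recorded;
# here feasibility fails, so impact/outcome evaluation cannot simply be required.
impact_evaluation_analysis = {"appropriate": True, "feasible": False, "affordable": True}
if not all(impact_evaluation_analysis.values()):
    print("Impact/outcome evaluation should not be demanded of this outcomes system as it stands.")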

 

Conclusion

The outcomes system building-block framework can be used to analyze any outcomes system to identify gaps and weaknesses in the system and assist in improving it. It can also be used in the design of new outcomes systems to ensure that they are well constructed.

 

 

  1. The building-blocks/types of evidence used in outcomes systems
  2. Types of claims able to be made regarding outcomes models (intervention logics/theories of change)
  3. Reconstructing a Community – How the DoView Visual Planning methodology could be used
  4. Simplifying terms used when working with outcomes
  5. Impact evaluation – where it should and should not be used
  6. Types of economic evaluation analysis
  7. Unequal inputs principle (‘level playing field’)
  8. Welcome to the Outcomes Theory Knowledge Base
  9. Organizational Requirements When Implementing the Duignan Approach Using DoView Within an Organization
  10. M & E systems – How to build an affordable simple monitoring and evaluation system using a visual approach
  11. Evaluation of Healthcare Information for All 2015 (HIFA2015) using a DoView visual evaluation plan and Duignan’s Visual Evaluation Planning Method
  12. DoView Results Roadmap Methodology
  13. Problems faced when monitoring and evaluating programs which are themselves assessment systems
  14. Reviewing a list of performance indicators
  15. Using visual DoView Results Roadmaps™ when working with individuals and families
  16. Proving that preventive public health works – using a visual results planning approach to communicate the benefits of investing in preventive public health
  17. Where outcomes theory is being used
  18. How a not-for-profit community organization can transition to being outcomes-focused and results-based – A case study
  19. Duignan’s Outcomes-Focused Visual Strategic Planning for Public and Third Sector Organizations
  20. Impact/outcome evaluation design types
  21. Introduction to outcomes theory
  22. Contracting for outcomes
  23. How a Sector can Assist Multiple Organizations to Implement the Duignan Outcomes-Focused Visual Strategic Planning, Monitoring and Evaluation Approach
  24. How community-based mental health organizations can become results-based and outcomes-focused
  25. Paul Duignan PhD Curriculum Vitae
  26. Integrating government organization statutory performance reporting with demands for evaluation of outcomes and ‘impacts’
  27. Non-output attributable intermediate outcome paradox
  28. Features of steps and outcomes appearing within outcomes models
  29. Principle: Three options for specifying accountability (contracting/delegation) when controllable indicators do not reach a long way up the outcomes model
  30. Outcomes theory diagrams
  31. Indicators – why they should be mapped onto a visual outcomes model
  32. What are Outcomes Models (Program logic models)?
  33. Methods and analysis techniques for information collection
  34. What are outcomes systems?
  35. The problem with SMART objectives – Why you have to consider unmeasurable outcomes
  36. Encouraging better evaluation design and use through a standardized approach to evaluation planning and implementation – Easy Outcomes
  37. New Zealand public sector management system – an analysis
  38. Using Duignan’s outcomes-focused visual strategic planning as a basis for Performance Improvement Framework (PIF) assessments in the New Zealand public sector
  39. Working with outcomes structures and outcomes models
  40. Using the ‘Promoting the Use of Evaluation Within a Country DoView Outcomes Model’
  41. What added value can evaluators bring to governance, development and progress through policy-making? The role of large visualized outcomes models in policy making
  42. Real world examples of how to use seriously large outcomes models (logic models) in evaluation, public sector strategic planning and shared outcomes work
  43. Monitoring, accountability and evaluation of welfare and social sector policy and reform
  44. Results-based management using the Systematic Outcomes Management / Easy Outcomes Process
  45. The evolution of logic models (theories of change) as used within evaluation
  46. Trade-off between demonstrating attribution and encouraging collaboration
  47. Impact/outcome evaluation designs and techniques illustrated with a simple example
  48. Implications of an exclusive focus on impact evaluation in ‘what works’ evidence-based practice systems
  49. Single list of indicators problem
  50. Outcomes theory: A list of outcomes theory articles
  51. Standards for drawing outcomes models
  52. Causal models – how to structure, represent and communicate them
  53. Conventions for visualizing outcomes models (program logic models)
  54. Using a generic outcomes model to implement similar programs in a number of countries, districts, organizational or sector units
  55. Using outcomes theory to solve important conceptual and practical problems in evaluation, monitoring and performance management
  56. Free-form visual outcomes models versus output, intermediate and final outcome ‘layered’ models
  57. Key outcomes, results management and evaluation resources
  58. Outcomes systems – checklist for analysis
  59. Having a common outcomes model underpinning multiple organizational activities
  60. What is best practice?
  61. Best practice representation and dissemination using visual outcomes models
  62. Action research: Using an outcomes modeling approach
  63. Evaluation questions – why they should be mapped onto a visual outcomes model
  64. Overly-simplistic approaches to outcomes, monitoring and evaluation work
  65. Evaluation types: Formative/developmental, process and impact/outcome
  66. Terminology in evaluation: Approaches, types (purposes), methods, analysis techniques and designs
  67. United Nations Results-Based Management System – An analysis
  68. Selecting impact/outcome evaluation designs: a decision-making table and checklist approach
  69. Definitions used in outcomes theory
  70. Balanced Scorecard and Strategy Maps – an analysis
  71. The error of limiting focus to only the attributable
  72. Reframing program evaluation as part of collecting strategic information for sector decision-making
  73. Distinguishing evaluation from other processes (e.g. monitoring, performance management, assessment, quality assurance)
  74. Full roll-out impact/outcome evaluation versus piloting impact/outcome evaluation plus best practice monitoring
  75. References to outcomes theory
  76. Techniques for improving constructed matched comparison group impact/outcome evaluation designs