Types of claims able to be made regarding outcomes models (intervention logic models/theories of change)

Summary

An outcomes model is a model that sets out the high-level outcomes, and the lower-level steps needed to achieve those outcomes, for any project, program, organization, collaboration, sector, region or country. Regardless of how it is represented (verbal description, text, table, database, diagram, mathematical model), it lies at the heart of every outcomes system. An outcomes system is any system that attempts to identify, prioritize, improve, attribute and hold parties to account for outcomes of any type. Outcomes models go under different names, such as intervention logics, logic models, program logics, results maps, theories of change and Outcomes DoViews. Such outcomes models 'make claims' about the way the world works in regard to a program or intervention. There are four types of claim which can be made using an outcomes model (or sub-parts of a model). These claims relate to the claimed causal logic of the program being revealed, evidence from previous programs, and/or evidence obtained regarding how the actual program itself is working. They can be used to mount arguments that stakeholders should have confidence that a particular program is improving high-level outcomes, and they correspond to four ascending levels of confidence stakeholders may have that this is the case.

Introduction

Outcomes models are models used to show how a program, or intervention, works to achieve high-level outcomes. They are an essential part of all outcomes systems.  They go by a variety of different names, including: logic models, program logics, intervention logics, means-ends diagrams, logframes, theories of change, program theories, outcomes hierarchies, strategy maps and Outcomes DoViews.

A more general discussion of outcomes models is available in an article which should be read together with this article – What are outcomes models (program logic models)?
That article contains references to a set of standards for drawing outcomes models. It is presumed that outcomes models discussed in this article conform to that set of standards. This current article focuses more narrowly on specifying the claims that can be made regarding an outcomes model and the arguments that can be mounted, based on these claims.

Claims that can be made regarding an outcomes model (or sub-parts of a model)

Four claims can be made regarding an outcomes model, or a sub-part of an outcomes model. The four claims, and what each is and is not claiming, are set out below:

Claim 1: This is how we believe that the program will work (is working). 

We are claiming:
This is the way we think high-level outcomes will be caused by a program or intervention when it is run.

We are not claiming:
1) That we have collated specific evidence from the past establishing that this is what will happen (this is not to say that there may not be evidence out there that supports at least some of the links in the model). 2) That we have demonstrated that what is set out in the outcomes model is what has actually occurred (or is occurring) in regard to this particular program.

Stakeholder Level of Confidence 1: Claim 1 means that stakeholders should have more confidence in the program than in one where Claim 1 had not been made, because they are in a position to assess whether what has been laid out in the outcomes model appears credible to them. In addition, if they wish, they can have the outcomes model reviewed by stakeholders, or peer reviewed by experts in the field, to see whether they also think it credible.

Claim 2: This is how we believe the program will work, and we have evidence that all, or an identified sub-set, of the causal links set out in the outcomes model do occur in situations like this.

We are claiming:
1) This is the way we think that high-level outcomes will eventuate from a program or intervention. 2) We have evidence that specific parts of a model (specified links between steps and outcomes) are likely to occur.

We are not claiming:
That we have demonstrated that what is set out in the model is what has actually occurred (or is occurring) in regard to this particular program. (Note that what stakeholders accept as 'evidence' is not specified here – that is a separate issue.)

Stakeholder Level of Confidence 2: Claim 2 means that (depending on how much of the outcomes model has supporting evidence and the level of confidence the stakeholder, or their peer reviewers, have in that evidence) stakeholders can have a higher level of confidence because there is evidence to back up the idea that the claimed causal mechanism set out in the model has actually occurred in other programs.

Claim 3: We have demonstrated that the causal links set out in the model are actually operating in practice in regard to the lower levels of the outcomes model for this particular program.

We are claiming:
That: 1) we can show that, in the case of this particular program, the lower levels of the outcomes model are actually occurring; and 2) on the basis of 1) above, and/or of Claim 2 type evidence, stakeholders should have confidence that it is likely that the rest of the outcomes model is also occurring in the case of this program.

We are not claiming:
That we have actually established that the higher-levels of the outcomes model are occurring in the case of this particular program.

Stakeholder Level of Confidence 3: Claim 3 means that stakeholders can have a higher level of confidence than in a Claim 2 situation because they can be assured that the lower levels of the outcomes model are actually occurring in the case of the particular program. They can add this additional confidence to whatever confidence they have from Claim 1 and Claim 2 in regard to the outcomes model being used for the program.

Claim 4: We have demonstrated that the causal links set out in the model (right to the top of the model) are actually operating in practice in regard to this particular program.

We are claiming:
That we can demonstrate that the program has worked in the way set out in the outcomes model (right to the top of the model) in the particular case of this program. The evidence for this claim differs from the evidence in regard to Claim 2 above in that it is based on evidence regarding the particular program, not on evidence regarding different programs as in Claim 2.

From a technical point of view, a claim that causal links have been demonstrated right up to the top level of the outcomes model for a particular program will require several of the types of evidence which can be used to show that a program works (also known as the building blocks of outcomes systems). The first type of critical evidence that a program has changed high-level outcomes is where controllable indicators reach up to the top of the model (often not the case). The second type, where the first is not available, is impact evaluation showing that changes in high-level outcomes can be attributed to the program. In some cases, non-impact evaluation (process evaluation) will also be needed to establish that the mechanisms set out in the outcomes model are actually operating in regard to the program.

It should be noted that Claim 4 goes beyond just doing impact/outcome evaluation: it claims not only that high-level outcomes have been changed by the program, but also that the mechanism which made these changes occur is known. This is in contrast to what are known as 'black-box' evaluations, where we know only that high-level outcomes have been changed, not the mechanism which changed them. Because it draws on a number of the five building-blocks, Claim 4 can be viewed, in one sense, as a method of summarizing other monitoring and evaluation findings rather than a tool for making a claim in its own right.

Stakeholder Level of Confidence 4: Claim 4 means that stakeholders can have confidence that it has been established that changes in high-level outcomes can be attributed to a particular intervention and that the causal mechanism which has brought this about is also known.
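The four-claim ladder can be sketched as a small data structure. This is only an illustrative sketch, not part of the theory itself: the `ClaimLevel` names, the `CausalLink` class and the weakest-link rule below are all assumptions introduced here for illustration.

```python
from dataclasses import dataclass
from enum import IntEnum

class ClaimLevel(IntEnum):
    """The four ascending claim types described above."""
    BELIEVED = 1            # Claim 1: stated causal logic only
    PRIOR_EVIDENCE = 2      # Claim 2: evidence from other programs
    LOWER_DEMONSTRATED = 3  # Claim 3: lower levels shown for this program
    FULLY_DEMONSTRATED = 4  # Claim 4: demonstrated right to the top

@dataclass
class CausalLink:
    """One step-to-outcome link in an outcomes model (names are hypothetical)."""
    from_step: str
    to_outcome: str
    claim: ClaimLevel

def model_claim_level(links):
    """One plausible aggregation rule (an assumption, not from the article):
    a causal chain is only as strong as its weakest link, so the claim for
    the model as a whole is the minimum claim level over its links."""
    return min(link.claim for link in links)
```

For example, if one link in a model is fully demonstrated but another is backed only by evidence from other programs, this rule would rate the model as a whole at Claim 2.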

Using these claims to mount arguments 

People may seek to mount arguments on the basis of the above claims regarding an outcomes model. These arguments can be seen to lie along a continuum relating to the level of confidence stakeholders should have that a program is actually improving high-level outcomes.

There are four arguments that can be made that a program is causing high-level outcomes to occur and these can be laid out in a rough order of increasing confidence that it is doing so, as follows:

First Level of Confidence

Argument A: Having a transparent mechanism for how we think the program works makes it more likely that it will work.
Stakeholders should have some confidence that the program is working because we can spell out exactly how we think that the program will work. This is based on Claim 1 above.

Second Level of Confidence

Argument B: There is evidence from other programs that what we have laid out in the outcomes model has occurred in the past in the way that we are attempting to do it now.
Stakeholders can have additional confidence because some, or the majority, of the outcomes model is backed up with evidence that the causal links in the model have occurred in the past in regard to other programs. This is based on an outcomes model which makes Claim 2 above.

Third Level of Confidence

Argument C: Same as Argument B but with the addition that we can demonstrate (from a Claim 3 type claim, but just in relation to the lower-levels of the outcomes model) that the lower-levels of the model are occurring in the case of our particular program.
Therefore stakeholders should have even more confidence that the program is likely to be improving high-level outcomes, since we have established that the lower levels of the model are occurring in the actual program, and there is evidence from previous programs that these lower levels lead on to achieving the higher levels. This is based on Claim 3 above.

Fourth Level of Confidence 

Argument D: We can demonstrate that all levels of the outcomes model are occurring in the case of a particular program.
As discussed under Claim 4 above, to be able to make this claim one will have to draw on a number of the building-blocks of outcomes systems. Therefore stakeholders should have even more confidence that the program is improving high-level outcomes.

This analysis should not be taken to imply that all stakeholders will, or should, accept that these arguments (apart, perhaps, from Argument D) actually demonstrate attribution of high-level outcomes to a program in the way that some types of outcome/impact evaluation design may do. The purpose of this analysis is to clarify the type of claim someone may be making so that stakeholders can be clear about their argument and then assess it for themselves.
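The continuum of arguments can likewise be sketched as a simple mapping from the evidence available to the strongest argument that can be mounted. The function name and the boolean flags are hypothetical, introduced only to illustrate the ordering of the four arguments above.

```python
def strongest_argument(model_specified=False, prior_evidence=False,
                       lower_levels_shown=False, all_levels_shown=False):
    """Return the strongest argument ('A'-'D') supported by the evidence
    flags, following the rough order of increasing confidence described
    above. Returns None if not even the causal logic has been spelled out."""
    if all_levels_shown:
        return "D"  # Claim 4: all levels demonstrated for this program
    if lower_levels_shown and prior_evidence:
        return "C"  # Claim 3 plus prior evidence for the higher levels
    if prior_evidence:
        return "B"  # Claim 2: evidence from other programs
    if model_specified:
        return "A"  # Claim 1: transparent causal logic only
    return None
```

Note that Argument C requires both the Claim 3 demonstration of the lower levels and the Claim 2 evidence that those lower levels lead on to the higher levels, matching the "Same as Argument B but with the addition" wording above.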

Conclusion

This article has set out the four claims that can be made in regard to outcomes models and how these can be used to mount arguments about how much confidence stakeholders should have that a program is improving high-level outcomes.

Please comment on this article

This article is based on the developing area of outcomes theory which is still at a relatively early stage of development. Please critique any of the arguments laid out in this article so that they can be improved through critical examination and reflection. Comments can be sent to paul (at) parkerduignan.com.

Citing this article

Duignan, P. (2009). Types of claims able to be made using outcomes models (program logic models/theories of change). Outcomes Theory Knowledge Base article No. 228. (https://outcomestheory.wordpress.com/2009/08/27/228/)
[If you are reading this in a PDF or printed copy, the web page version may have been updated].

[Outcomes Theory Article # 228]

  1. The building-blocks/types of evidence used in outcomes systems (Redirect)
  2. Types of claims able to be made regarding outcomes models (intervention logics/theories of change) (Redirect)
  3. Reconstructing a Community – How the DoView Visual Planning methodology could be used (Redirect)
  4. Simplifying terms used when working with outcomes (Redirect)
  5. Impact evaluation – where it should and should not be used (Redirect)
  6. Types of economic evaluation analysis (Redirect)
  7. Unequal inputs principle (‘level playing field’) principle
  8. Welcome to the Outcomes Theory Knowledge Base
  9. Organizational Requirements When Implementing the Duignan Approach Using DoView Within an Organization
  10. M & E systems – How to build an affordable simple monitoring and evaluation system using a visual approach
  11. Evaluation of Healthcare Information for All 2015 (HIFA2015) using a DoView visual evaluation plan and Duignan’s Visual Evaluation Planning Method
  12. DoView Results Roadmap Methodology
  13. Problems faced when monitoring and evaluating programs which are themselves assessment systems
  14. Reviewing a list of performance indicators
  15. Using visual DoView Results Roadmaps™ when working with individuals and families
  16. Proving that preventive public health works – using a visual results planning approach to communicate the benefits of investing in preventive public health
  17. Where outcomes theory is being used
  18. How a not-for-profit community organization can transition to being outcomes-focused and results-based – A case study
  19. Duignan’s Outcomes-Focused Visual Strategic Planning for Public and Third Sector Organizations
  20. Impact/outcome evaluation design types
  21. Introduction to outcomes theory
  22. Contracting for outcomes
  23. How a Sector can Assist Multiple Organizations to Implement the Duignan Outcomes-Focused Visual Strategic Planning, Monitoring and Evaluation Approach
  24. How community-based mental health organizations can become results-based and outcomes-focused
  25. Paul Duignan PhD Curriculum Vitae
  26. Integrating government organization statutory performance reporting with demands for evaluation of outcomes and ‘impacts’
  27. Non-output attributable intermediate outcome paradox
  28. Features of steps and outcomes appearing within outcomes models
  29. Principle: Three options for specifying accountability (contracting/delegation) when controllable indicators do not reach a long way up the outcomes model
  30. Outcomes theory diagrams
  31. Indicators – why they should be mapped onto a visual outcomes model
  32. What are Outcomes Models (Program logic models)?
  33. Methods and analysis techniques for information collection
  34. What are outcomes systems?
  35. The problem with SMART objectives – Why you have to consider unmeasurable outcomes
  36. Encouraging better evaluation design and use through a standardized approach to evaluation planning and implementation – Easy Outcomes
  37. New Zealand public sector management system – an analysis
  38. Using Duignan’s outcomes-focused visual strategic planning as a basis for Performance Improvement Framework (PIF) assessments in the New Zealand public sector
  39. Working with outcomes structures and outcomes models
  40. Using the ‘Promoting the Use of Evaluation Within a Country DoView Outcomes Model’
  41. What added value can evaluators bring to governance, development and progress through policy-making? The role of large visualized outcomes models in policy making
  42. Real world examples of how to use seriously large outcomes models (logic models) in evaluation, public sector strategic planning and shared outcomes work
  43. Monitoring, accountability and evaluation of welfare and social sector policy and reform
  44. Results-based management using the Systematic Outcomes Management / Easy Outcomes Process
  45. The evolution of logic models (theories of change) as used within evaluation
  46. Trade-off between demonstrating attribution and encouraging collaboration
  47. Impact/outcome evaluation designs and techniques illustrated with a simple example
  48. Implications of an exclusive focus on impact evaluation in ‘what works’ evidence-based practice systems
  49. Single list of indicators problem
  50. Outcomes theory: A list of outcomes theory articles
  51. Standards for drawing outcomes models
  52. Causal models – how to structure, represent and communicate them
  53. Conventions for visualizing outcomes models (program logic models)
  54. Using a generic outcomes model to implement similar programs in a number of countries, districts, organizational or sector units
  55. Using outcomes theory to solve important conceptual and practical problems in evaluation, monitoring and performance management
  56. Free-form visual outcomes models versus output, intermediate and final outcome ‘layered’ models
  57. Key outcomes, results management and evaluation resources
  58. Outcomes systems – checklist for analysis
  59. Having a common outcomes model underpinning multiple organizational activities
  60. What is best practice?
  61. Best practice representation and dissemination using visual outcomes models
  62. Action research: Using an outcomes modeling approach
  63. Evaluation questions – why they should be mapped onto a visual outcomes model
  64. Overly-simplistic approaches to outcomes, monitoring and evaluation work
  65. Evaluation types: Formative/developmental, process and impact/outcome
  66. Terminology in evaluation: Approaches, types (purposes), methods, analysis techniques and designs
  67. United Nations Results-Based Management System – An analysis
  68. Selecting impact/outcome evaluation designs: a decision-making table and checklist approach
  69. Definitions used in outcomes theory
  70. Balanced Scorecard and Strategy Maps – an analysis
  71. The error of limiting focus to only the attributable
  72. Reframing program evaluation as part of collecting strategic information for sector decision-making
  73. Distinguishing evaluation from other processes (e.g. monitoring, performance management, assessment, quality assurance)
  74. Full roll-out impact/outcome evaluation versus piloting impact/outcome evaluation plus best practice monitoring
  75. References to outcomes theory
  76. Techniques for improving constructed matched comparison group impact/outcome evaluation designs