Recent Articles

Principle: Ambiguity in interpreting outcomes or performance measures/indicators of regulatory intervention when also seeking to prevent breaches

Principle: Ambiguity in interpreting outcomes or performance measures/indicators of regulatory intervention when also seeking to prevent breaches (Also known as the Chameleon Regulatory Intervention Indicator Principle) The number of regulatory interventions is often used as an outcome or performance measure/indicator for organizations. This usually occurs in the public sector, but it can also occur in private sector settings where […]

Impact/outcome evaluation designs and techniques illustrated with a simple example

Introduction [Note: This article is still being developed. Please post any comments for improving it at the end of the article]. This article works through a simple illustrative example to show the range of possible impact/outcome evaluation designs and techniques for improving the similarity between comparison and intervention groups in impact/outcome evaluation. Impact/outcome evaluation is one […]
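As a concrete illustration of one technique for improving the similarity between comparison and intervention groups, the following minimal Python sketch constructs a comparison group by pairing each intervention unit with the unused candidate closest on a single baseline covariate. This sketch is illustrative only, not taken from the article; all data and names are hypothetical, and real matching would typically use multiple covariates or propensity scores.

```python
# Illustrative sketch: constructing a matched comparison group by
# nearest-neighbour matching on a single baseline covariate.
# All data and names are hypothetical, not from the article.

def match_comparison_group(intervention, candidates):
    """For each intervention unit, pick the unused candidate whose
    baseline score is closest; return the matched pairs."""
    available = list(candidates)
    pairs = []
    for unit in intervention:
        best = min(available, key=lambda c: abs(c["baseline"] - unit["baseline"]))
        available.remove(best)
        pairs.append((unit, best))
    return pairs

intervention = [{"id": "i1", "baseline": 52}, {"id": "i2", "baseline": 67}]
candidates = [{"id": "c1", "baseline": 50}, {"id": "c2", "baseline": 70},
              {"id": "c3", "baseline": 66}]

for treated, control in match_comparison_group(intervention, candidates):
    print(treated["id"], "matched with", control["id"])
```

This greedy one-to-one matching is the simplest possible case; the order in which units are matched can affect pair quality, which is one reason more elaborate designs exist.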

Principle: Providing the evidential basis for all estimates

Principle: Providing the evidential basis for all estimates The evidential basis for estimates of any sort should always be provided when estimates are given. This is so that anyone using such estimates can assess their likely accuracy. Failing to provide this evidential basis exposes decision-makers to the risk of thinking that the estimates they have been […]
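One way to operationalize this principle is to make it impossible to record an estimate without its evidential basis. A minimal sketch, assuming a simple record structure; the class and field names are illustrative assumptions, not from the article:

```python
# Illustrative sketch: an estimate that must carry its evidential basis.
# Field names and the validation rule are assumptions for illustration.
from dataclasses import dataclass

@dataclass(frozen=True)
class Estimate:
    description: str       # what is being estimated
    value: float           # the estimate itself
    evidential_basis: str  # the source/method the estimate rests on

    def __post_init__(self):
        # Refuse estimates supplied without any evidential basis,
        # so users of the estimate can always assess its likely accuracy.
        if not self.evidential_basis.strip():
            raise ValueError("An estimate must state its evidential basis.")

e = Estimate("Annual programme reach", 12_000,
             "Extrapolated from 2011 enrolment records (n=3,000) across 4 sites")
print(e.value, "-", e.evidential_basis)
```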

Anyone else think the way we do our M&E work is too cumbersome and painful? Using DoView Visual Strategic Planning & Success Tracking M&E Software: Simplifying, streamlining and speeding up planning, monitoring and evaluation

Duignan, P. (2012). Anyone else think the way we do our M&E work is too cumbersome and painful? Using DoView Visual Strategic Planning & Success Tracking M&E Software: Simplifying, streamlining and speeding up planning, monitoring and evaluation. The 1st Pan Asia-Africa M&E Forum RBM&E and Beyond: Increasing M&E Effectiveness. Bangkok, 26-28 2012. This virtual conference […]

Reconstructing a Community – How the DoView Visual Planning methodology could be used

The Canterbury region in New Zealand is currently being reconstructed following two major earthquakes in 2010 and 2011. In response, the New Zealand government has set up the Canterbury Earthquake Recovery Authority (CERA). This provides an example of a community needing to be reconstructed on a number of levels. In such instances of social reconstruction, which arise from natural disasters and other causes, it is important that productive discussions are facilitated at various levels about the goals and coordination of the reconstruction. The key issues which need to be addressed are:

1. involving stakeholders in major decisions regarding the reconstruction, rather than having it dominated by national authorities;
2. determining the best practical tool/process for underpinning the high-level strategic direction discussions at various levels; and
3. identifying gaps and overlaps between the multiple projects being undertaken by multiple parties (both from within the location being reconstructed and from outside it), either for the reconstruction as a whole or for particular sectors and sub-areas within it.

The DoView® Visual Planning™ process is a visually based tool/process which could potentially be used to ensure that discussions about the direction and priorities for reconstruction are undertaken in a way that facilitates clear strategic thinking. It could be considered for use in reconstruction processes at various levels, for instance at a high community-wide level, or more specifically in regard to service provision for particular sectors within the community being reconstructed. More information about DoView Visual Planning is available at http://outcomescentral.org.

Types of economic evaluation analysis

An article in the Outcomes Theory Knowledge Base

A set of types of economic evaluation analysis can be identified and categorized in a new way, based on whether or not effect-size estimates are available for changes in high-level outcomes brought about by the program or intervention being examined. This approach makes it easier to decide which type of economic analysis is most appropriate, given the information available from impact evaluation about the attribution of changes in outcomes to the intervention being examined. The three major types of economic analysis (cost of intervention analysis, cost-effectiveness analysis and cost-benefit analysis) are further sub-divided on the basis of three levels of information being available about effect sizes. These levels are: 1) no attributable effect-size information available apart from the cost of the intervention; 2) attributable effect-size information available on mid-level outcomes; and 3) attributable effect-size information available on high-level outcomes.
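The categorization above can be read as a small decision rule mapping the level of attributable effect-size information to a broad type of economic analysis. The following Python sketch is one illustrative reading of that mapping, not the article's exact sub-division; all names are assumptions:

```python
# Illustrative decision rule: level of attributable effect-size
# information -> broad type of economic analysis. Names are assumptions;
# the mapping paraphrases the categorization described above.
from enum import Enum

class EffectSizeInfo(Enum):
    NONE = 1        # only the cost of the intervention is known
    MID_LEVEL = 2   # attributable effect sizes on mid-level outcomes
    HIGH_LEVEL = 3  # attributable effect sizes on high-level outcomes

def economic_analysis_type(info: EffectSizeInfo) -> str:
    if info is EffectSizeInfo.NONE:
        return "cost of intervention analysis"
    if info is EffectSizeInfo.MID_LEVEL:
        return "cost-effectiveness analysis (mid-level outcomes)"
    return "cost-effectiveness or cost-benefit analysis (high-level outcomes)"

print(economic_analysis_type(EffectSizeInfo.MID_LEVEL))
```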

Impact evaluation – when it should and should not be used

A topic article in the Outcomes Theory Knowledge Base

Impact evaluation should always be considered in evaluation design, but it should not be assumed that impact evaluation (also known as high-level outcome/impact attribution evaluation) should always be attempted. Impact evaluation attempts to prove that changes in high-level outcomes can be attributed to a particular intervention. The appropriateness, feasibility and affordability of doing impact evaluation for any intervention should always be carefully assessed. It is often better to save precious evaluation resources for use only on selected high-priority impact evaluations, or for non-impact evaluation (e.g. implementation/formative evaluation). Attempting impact evaluation where it is not appropriate, feasible or affordable can lead to pseudo-impact evaluations which appear to be impact evaluations but do not provide information robust enough to satisfy key stakeholders that changes in outcomes can actually be attributed to the particular program.

Simplifying terms used when working with outcomes

A topic article in the Outcomes Theory Knowledge Base

People use many different terms when working with outcomes systems (results, monitoring, performance management, evaluation, evidence-based practice and strategic planning systems) and building outcomes models (logic models, results chains, strategy maps, intervention logics). This is partially a result of the range of different disciplines involved. Outcomes theory attempts to identify the smallest number of terms essential for doing outcomes-related work. One of outcomes theory’s insights is that the purpose of a number of terminological distinctions (such as vision / mission, final outcomes / intermediate outcomes, process / outcomes, outcomes / impacts) can be better achieved by working directly with a visual outcomes model and showing the causal position of boxes within the model. This approach avoids having to insist that stakeholders use specific terms (e.g. the outcome / impact distinction) in very specific ways. The diversity of the disciplines and settings in which people work with outcomes, plus the continued widespread common-sense interpretation of a term such as ‘outcome’, makes tight language control a somewhat futile strategy at the current time. Given that the same results can be achieved by just using a visual model, it is suggested that little energy should be put into arguing about terminological distinctions at the moment. (The substance of this article formed the basis for: Duignan, P. (2009). Rejecting the traditional outputs, intermediate and final outcomes logic modeling approach and building more stakeholder-friendly visual outcomes models. American Evaluation Association Conference, Orlando, Florida, 11-14 November 2009.)

The Building-Blocks of Outcomes Systems

A topic article in the Outcomes Theory Knowledge Base

There is a set of building-blocks which underlies all outcomes systems. Outcomes systems are any systems which attempt to specify, measure, attribute, or hold parties to account for changes in outcomes. These systems go by a range of names, such as results management systems, performance management systems and evaluation systems. The building-blocks are: 1) an outcomes model/intervention logic; 2) not-necessarily controllable indicators; 3) controllable indicators; 4) high-level impact/outcome evaluation; 5) implementation evaluation; and 6) economic and comparative evaluation. This conceptual model can be used to clarify, critique and improve outcomes systems of any type in any sector.
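To show how the six building-blocks might be held together when analyzing a concrete outcomes system, here is a minimal Python sketch; the class, field and method names are illustrative assumptions, not part of the knowledge base:

```python
# Illustrative sketch: the six building-blocks as one record describing
# an outcomes system. All names are assumptions for illustration only.
from dataclasses import dataclass, field

@dataclass
class OutcomesSystem:
    outcomes_model: str  # 1) outcomes model / intervention logic
    uncontrollable_indicators: list[str] = field(default_factory=list)  # 2)
    controllable_indicators: list[str] = field(default_factory=list)    # 3)
    impact_evaluations: list[str] = field(default_factory=list)         # 4)
    implementation_evaluations: list[str] = field(default_factory=list) # 5)
    economic_evaluations: list[str] = field(default_factory=list)       # 6)

    def gaps(self) -> list[str]:
        """Name any building-block left empty - a crude way to critique a system."""
        blocks = {
            "indicators (not-necessarily controllable)": self.uncontrollable_indicators,
            "indicators (controllable)": self.controllable_indicators,
            "impact/outcome evaluation": self.impact_evaluations,
            "implementation evaluation": self.implementation_evaluations,
            "economic/comparative evaluation": self.economic_evaluations,
        }
        return [name for name, items in blocks.items() if not items]

system = OutcomesSystem(outcomes_model="Regional literacy programme model",
                        controllable_indicators=["workshops delivered"])
print("Missing building-blocks:", system.gaps())
```

Listing the empty building-blocks in this way gives a very simple version of the "clarify, critique and improve" use of the conceptual model described above.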

  1. The building-blocks/types of evidence used in outcomes systems (Redirect)
  2. Types of claims able to be made regarding outcomes models (intervention logics/theories of change) (Redirect)
  3. Reconstructing a Community – How the DoView Visual Planning methodology could be used (Redirect)
  4. Simplifying terms used when working with outcomes (Redirect)
  5. Impact evaluation – where it should and should not be used (Redirect)
  6. Types of economic evaluation analysis (Redirect)
  7. Unequal inputs principle (‘level playing field’)
  8. Welcome to the Outcomes Theory Knowledge Base
  9. Organizational Requirements When Implementing the Duignan Approach Using DoView Within an Organization
  10. M & E systems – How to build an affordable simple monitoring and evaluation system using a visual approach
  11. Evaluation of Healthcare Information for All 2015 (HIFA2015) using a DoView visual evaluation plan and Duignan’s Visual Evaluation Planning Method
  12. DoView Results Roadmap Methodology
  13. Problems faced when monitoring and evaluating programs which are themselves assessment systems
  14. Reviewing a list of performance indicators
  15. Using visual DoView Results Roadmaps™ when working with individuals and families
  16. Proving that preventive public health works – using a visual results planning approach to communicate the benefits of investing in preventive public health
  17. Where outcomes theory is being used
  18. How a not-for-profit community organization can transition to being outcomes-focused and results-based – A case study
  19. Duignan’s Outcomes-Focused Visual Strategic Planning for Public and Third Sector Organizations
  20. Impact/outcome evaluation design types
  21. Introduction to outcomes theory
  22. Contracting for outcomes
  23. How a Sector can Assist Multiple Organizations to Implement the Duignan Outcomes-Focused Visual Strategic Planning, Monitoring and Evaluation Approach
  24. How community-based mental health organizations can become results-based and outcomes-focused
  25. Paul Duignan PhD Curriculum Vitae
  26. Integrating government organization statutory performance reporting with demands for evaluation of outcomes and ‘impacts’
  27. Non-output attributable intermediate outcome paradox
  28. Features of steps and outcomes appearing within outcomes models
  29. Principle: Three options for specifying accountability (contracting/delegation) when controllable indicators do not reach a long way up the outcomes model
  30. Outcomes theory diagrams
  31. Indicators – why they should be mapped onto a visual outcomes model
  32. What are Outcomes Models (Program logic models)?
  33. Methods and analysis techniques for information collection
  34. What are outcomes systems?
  35. The problem with SMART objectives – Why you have to consider unmeasurable outcomes
  36. Encouraging better evaluation design and use through a standardized approach to evaluation planning and implementation – Easy Outcomes
  37. New Zealand public sector management system – an analysis
  38. Using Duignan’s outcomes-focused visual strategic planning as a basis for Performance Improvement Framework (PIF) assessments in the New Zealand public sector
  39. Working with outcomes structures and outcomes models
  40. Using the ‘Promoting the Use of Evaluation Within a Country DoView Outcomes Model’
  41. What added value can evaluators bring to governance, development and progress through policy-making? The role of large visualized outcomes models in policy making
  42. Real world examples of how to use seriously large outcomes models (logic models) in evaluation, public sector strategic planning and shared outcomes work
  43. Monitoring, accountability and evaluation of welfare and social sector policy and reform
  44. Results-based management using the Systematic Outcomes Management / Easy Outcomes Process
  45. The evolution of logic models (theories of change) as used within evaluation
  46. Trade-off between demonstrating attribution and encouraging collaboration
  47. Impact/outcome evaluation designs and techniques illustrated with a simple example
  48. Implications of an exclusive focus on impact evaluation in ‘what works’ evidence-based practice systems
  49. Single list of indicators problem
  50. Outcomes theory: A list of outcomes theory articles
  51. Standards for drawing outcomes models
  52. Causal models – how to structure, represent and communicate them
  53. Conventions for visualizing outcomes models (program logic models)
  54. Using a generic outcomes model to implement similar programs in a number of countries, districts, organizational or sector units
  55. Using outcomes theory to solve important conceptual and practical problems in evaluation, monitoring and performance management
  56. Free-form visual outcomes models versus output, intermediate and final outcome ‘layered’ models
  57. Key outcomes, results management and evaluation resources
  58. Outcomes systems – checklist for analysis
  59. Having a common outcomes model underpinning multiple organizational activities
  60. What is best practice?
  61. Best practice representation and dissemination using visual outcomes models
  62. Action research: Using an outcomes modeling approach
  63. Evaluation questions – why they should be mapped onto a visual outcomes model
  64. Overly-simplistic approaches to outcomes, monitoring and evaluation work
  65. Evaluation types: Formative/developmental, process and impact/outcome
  66. Terminology in evaluation: Approaches, types (purposes), methods, analysis techniques and designs
  67. United Nations Results-Based Management System – An analysis
  68. Selecting impact/outcome evaluation designs: a decision-making table and checklist approach
  69. Definitions used in outcomes theory
  70. Balanced Scorecard and Strategy Maps – an analysis
  71. The error of limiting focus to only the attributable
  72. Reframing program evaluation as part of collecting strategic information for sector decision-making
  73. Distinguishing evaluation from other processes (e.g. monitoring, performance management, assessment, quality assurance)
  74. Full roll-out impact/outcome evaluation versus piloting impact/outcome evaluation plus best practice monitoring
  75. References to outcomes theory
  76. Techniques for improving constructed matched comparison group impact/outcome evaluation designs