Simplifying terms used when working with outcomes


For a simple summary of this article see here.

Summary

People use many different terms when working with outcomes systems (results, monitoring, performance management, evaluation, evidence-based practice and strategic planning systems) and building outcomes models (logic models, results chains, strategy maps, intervention logics). This is partly a result of the range of disciplines involved. Outcomes theory attempts to identify the smallest number of terms essential for doing outcomes-related work. One of outcomes theory's insights is that the purpose of a number of terminological distinctions (such as vision / mission, final outcomes / intermediate outcomes, processes / outcomes, outcomes / impacts) can be better achieved by working directly with a visual outcomes model and showing the causal position of boxes within the model. This approach avoids having to insist that stakeholders use specific terms (e.g. the outcome / impact distinction) in very precise ways. The diversity of the disciplines and settings in which people work with outcomes, plus the persistence of common sense interpretations of a term such as outcome, makes tight language control a somewhat futile strategy at the current time. Given that the same results can be achieved by just using a visual model, it is suggested that little energy should be put into arguing about terminological distinctions at the moment. (The substance of this article formed the basis for: Duignan, P. (2009). Rejecting the traditional outputs, intermediate and final outcomes logic modeling approach and building more stakeholder-friendly visual outcomes models. American Evaluation Association Conference, Orlando, Florida, 11-14 November 2009.)

Introduction

Many different disciplines are involved in working with outcomes systems (results, monitoring, performance management, evaluation, strategic planning and evidence-based practice systems) and building outcomes models (logic models, results chains, program theories, intervention logics, logframes, strategy maps, ends-means diagrams etc.). Those involved include managers, performance managers, evaluators, strategic planners, policy analysts, economists, HR specialists, quality control specialists and others. Because of the number of professions involved, there is great diversity in the language used by those working with outcomes systems.

As with any theory attempting to give the simplest accurate account of the world, one of the primary tasks of outcomes theory is to reduce the language used when working with outcomes down to the minimum number of terms necessary for working effectively with outcomes systems. One of the insights of outcomes theory is that the number of terms needed can be minimized by working directly with visual outcomes models. As will be discussed below, some of the terms used in traditional outcomes systems exist to perform functions which can more easily be performed using a visual outcomes model. For a discussion of why visual models should be used over other types of models see Causal models – how to structure, represent and communicate them. Given the current diversity of language and disciplines involved in working with outcomes, it is preferable to eliminate the need for a term (for instance, by achieving the same effect visually) rather than attempting to force often very busy and distracted stakeholders to use terms in very precise ways. Such terminological discipline is particularly difficult where there are ongoing common sense meanings of a word like outcome, which will persist regardless of demands that people use the term in particular ways. At the moment, a significant amount of the time spent working on outcomes goes into protracted discussions with stakeholders about the use of particular terms. This terminological argument is largely a waste of time once you move to a fully visual approach to working with outcomes.

A small ‘toolkit’ (a set of terms and ways of working) for use with outcomes systems of any type

One of the challenges for outcomes theory is to come up with the smallest possible ‘toolkit’ for use in talking about and working with outcomes systems. This toolkit should consist of both a set of terms (as few as possible) and, equally importantly, ways of working, which will allow stakeholders to do everything they need to do with outcomes sets.

As with any toolkit, what is needed within it depends on what it is going to be used for. Outcomes theory identifies the basic set of needs which stakeholders have when dealing with outcomes of any sort. It then identifies the most efficient tool for meeting each of these needs.

The basic set of stakeholder needs when working with outcomes is as follows:

1. Having a way of indicating hierarchical causal structure within an outcomes set. In traditional systems this is done by drawing distinctions between terms such as processes, intermediate outcomes, outcomes, impacts etc. In an outcomes theory approach, the structure of the outcomes set is communicated, not by using labels, but by always using a drawn outcomes model as the central organizing element of an outcomes system. In addition to helping rapid communication of the structure of the model to stakeholders, a drawn model allows two things to be done which mean that a number of traditional terms used within outcomes systems are no longer needed. These two things are:

  • Getting stakeholders to focus higher up the outcomes model and to identify higher-level outcomes rather than just lower-level activities. When using a visual outcomes model this is achieved by simply pointing to the next box in the causal sequence and saying: 'why do you want to do this activity? Tell me and I will put the reason in this next box'. Getting stakeholders to focus further along the outcomes model is part of the purpose of many distinctions made with terms traditionally used when working with outcomes. All of the following distinctions function, in part, to encourage stakeholders to move their attention higher up an outcomes model: vision versus mission; impacts versus outcomes; final outcomes versus intermediate outcomes; intermediate outcomes versus outputs; results versus activities; and outcomes versus processes. More generally, this is the purpose of distinctions such as: why not how; why not what; and ends rather than means. A fully visual approach eliminates the need to use these terms.
  • Knowing what should go at which hierarchical level within an outcomes model. When working with a fully visual approach, this is achieved by applying the simple rule: 'If Box A occurred on its own, would we bother doing Box B?' If the answer is no, then Box B sits before Box A (this rule is sketched in code below). Traditionally, hierarchical structuring of outcomes has been done using a narrative or table-based approach structured around terms such as outputs, intermediate outcomes, final outcomes etc. The same result can be achieved within a visual model by using this simple rule, without having to use any of the traditional terms apart from the words steps and outcomes – or, if you want to be more radical, just the term boxes. Practical instructions for how to build an outcomes model (an Outcomes DoView) using a fully visual approach are outlined here.
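To make the ordering rule concrete, here is a minimal sketch in Python of how the 'would we bother?' test places one box relative to another. The function and box names are invented for illustration only; they are not part of DoView or any other outcomes software.

    # Minimal sketch of the box-ordering rule; all names are illustrative.
    def box_b_sits_before_box_a(would_still_do_b_if_a_occurred):
        # Rule: 'If Box A occurred on its own, would we bother doing Box B?'
        # If the answer is no, Box B is a means to Box A and sits before it.
        return not would_still_do_b_if_a_occurred

    # Example: Box A = 'participants adopt safer practices',
    # Box B = 'run training workshop'. If safer practices had already
    # occurred on their own, there would be no point running the workshop,
    # so the workshop box sits before the safer-practices box.
    print(box_b_sits_before_box_a(False))  # True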

2. Having a way of making a distinction between an outcome and its measurement. One distinction which it is important to keep clear in all outcomes systems is that between a step or outcome and its measurement. This is done within outcomes theory by using the term indicator to describe a measurement of a step or outcome. In a visual model, a step or outcome and its measurement should be kept visually separate (e.g. by representing the indicator as a separate icon).

3. Having a way of differentiating changes in steps or outcomes which are controllable by an intervention from those which are not. This is dealt with in outcomes theory by making a distinction between controllable indicators and not-necessarily controllable indicators. The beauty of a controllable indicator is that the mere measurement that it has occurred proves that it was caused by the party that controls it. Sometimes the controllable / not-necessarily controllable distinction can be thought of in terms of an attributable / not-necessarily attributable distinction. [4]

4. Having a way of dealing with the question of what a program should be held accountable for. This is usually dealt with in outcomes systems by holding programs and organizations directly accountable only for controllable indicators as defined in 3 above. In the public sector it usually makes sense to restrict direct accountability to controllable indicators. This can be accompanied by the requirement that parties are, in addition, accountable for showing that they are focusing their activity on influencing priority non-controllable outcomes. In outcomes theory this is called being accountable for showing 'line-of-sight'. In the private sector, however, individuals and organizations are sometimes held accountable for not-necessarily controllable indicators; for more detail see Contracting for outcomes.

In traditional, non-visual approaches to working with outcomes, the term outputs is usually used to meet this need for clearly defining what a program or organization will be held directly accountable for. An output is a final good or service produced by an organization. It has all of the benefits of a controllable indicator in that its mere measurement means that it was caused by a particular party (assuming that no fraud is involved). An output is subtly different from a controllable indicator in that a controllable indicator can reside further along an outcomes model than an output. For instance, an increase in knowledge among workshop participants may be a controllable indicator for a workshop trainer, but his or her output would just be running one workshop.

It should be noted that controllable indicators can also be located further down an outcomes model than an output (e.g. the room having been booked for the workshop would be a controllable indicator for the trainer but not an output according to the technical definition of an output as a final good or service produced by a party). Within outcomes systems, outputs are often regarded as the things that a particular party is directly accountable for. Outcomes theory suggests that a clearer way of thinking about direct accountability in an outcomes-focused age is to make parties directly accountable for controllable indicators. This means that direct accountability may rise above the level of outputs if there are controllable indicators above them (see the sketch at the end of this section).

There is no technical problem with identifying outputs within a visual outcomes model; this can be done by simply marking up those boxes (or indicators, if one is thinking in terms of indicators) which are outputs. However, it is a technical visualization mistake, one which is almost always made, to require that outputs be located at a certain position within a visual outcomes model. Typically this mistake takes the form of restricting outputs to a 'column' or row in a visual outcomes model. To do this is to attempt to use the same visualization mode (horizontal position in a left-to-right outcomes model) to represent both the flow of causality and whether or not a box is an output. Forcing output boxes into a particular column within a visual outcomes model distorts the natural flow of causality and confuses the reader, who is looking for the left-to-right dimension to represent the flow of causality.
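As a rough illustration of the controllable indicator / output distinction discussed in this section, the following minimal Python sketch derives direct accountability by filtering a model's indicators on whether they are controllable. The indicator names and flags are invented for illustration; this is not any official outcomes schema.

    # Sketch: deriving direct accountability from controllable indicators.
    from dataclasses import dataclass

    @dataclass
    class Indicator:
        name: str
        controllable: bool  # controllable by the party being held to account?
        is_output: bool     # a final good or service produced by the party?

    indicators = [
        Indicator("room booked for workshop",   True,  False),  # below output level
        Indicator("one workshop run",           True,  True),   # the output itself
        Indicator("participant knowledge gain", True,  False),  # above output level
        Indicator("safer community practices",  False, False),  # not controllable
    ]

    # Direct accountability attaches to controllable indicators, which may
    # sit above or below the output level in the model.
    print([i.name for i in indicators if i.controllable])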

5. Having a way of showing where the current focus of priorities and activity is within an outcomes model. This is done visually either by highlighting (with color, letter codes etc.) the steps in an outcomes model which are current priorities for effort (e.g. A, B, C, BAU – Business As Usual), or by mapping activities (projects) back onto the higher levels of an outcomes model, which shows the outcomes that a number of activities or projects have in common (both are sketched below).
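Both of these mechanisms amount to attaching simple tags to the boxes in the model. The following Python fragment sketches this with invented box and project names (the priority codes A, B, C and BAU are those mentioned above):

    # Sketch: tagging boxes with priority codes and mapping projects onto
    # shared higher-level outcomes. Box and project names are invented.
    priority_codes = {
        "run workshops":                    "A",    # top current priority
        "participants' knowledge improves": "B",
        "routine venue administration":     "BAU",  # Business As Usual
    }

    projects_to_outcomes = {
        "school workshop project": ["participants adopt safer practices"],
        "media campaign project":  ["participants adopt safer practices"],
    }

    # Inverting the mapping shows the outcomes that several projects share.
    shared = {}
    for project, outcome_list in projects_to_outcomes.items():
        for outcome in outcome_list:
            shared.setdefault(outcome, []).append(project)
    print(shared)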

6. Having a way of integrating evaluation questions asked about an outcomes model with the monitoring activity being undertaken on it. Items 2, 3 and 4 above are concerned mainly with monitoring – the routine collection of information about program performance. In contrast, evaluation activity is often a more one-off, or in-depth, activity focused on specific questions. In an outcomes theory approach this is dealt with by mapping evaluation questions back onto the outcomes model and by using the framework provided by the Five building-blocks of outcomes systems.

The five key technical terms needed in the ‘toolkit’

Using the approach set out above, which combines a few selected technical terms with a heavy reliance on a visual way of working with outcomes, there are only five key terms which need to be taught to stakeholders to meet most of the needs set out above. These five key terms are:

A step – any 'cause' (i.e. something which makes something else happen) which appears in an outcomes model.

An outcome – the top step in an outcomes model (there may be more than one of these). Outcomes can be referred to as the highest-level boxes if anyone objects to the use of the word outcome (e.g. because they think that the highest level should be called impacts).

An indicator – a measure of a step or outcome.

A controllable indicator – an indicator for which changes are controlled by a particular intervention.

An evaluation question – a question about aspects of the program (e.g. whether or not it is improving outcomes).

These terms are all that are needed in cases where a program is being held to account only for controllable indicators. There are, of course, many other terms used in outcomes theory for specific purposes (see the definitions list), but the five terms defined above are all that is needed in most cases to successfully build outcomes models with stakeholders and meet the needs set out above (a minimal data model illustrating the five terms is sketched below).
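To show how little machinery the five terms require, here is a minimal, hypothetical Python data model covering steps, outcomes, indicators (with their controllable flag) and evaluation questions. It is a sketch only; it is not DoView's internal format, and all names are invented.

    # Hypothetical data model for the five-term toolkit.
    from dataclasses import dataclass, field

    @dataclass
    class Step:
        name: str
        leads_to: list = field(default_factory=list)    # causal links onward
        indicators: list = field(default_factory=list)  # (name, controllable)
        evaluation_questions: list = field(default_factory=list)

    def outcomes(model):
        # An outcome is simply a top step: one with no onward causal link.
        return [step for step in model if not step.leads_to]

    workshop = Step("run workshop", indicators=[("workshops run", True)])
    knowledge = Step("participants' knowledge improves",
                     indicators=[("pre/post test scores", True)])
    practice = Step("participants adopt safer practices",
                    indicators=[("community survey results", False)],
                    evaluation_questions=["Is the program improving outcomes?"])
    workshop.leads_to.append(knowledge)
    knowledge.leads_to.append(practice)

    print([s.name for s in outcomes([workshop, knowledge, practice])])
    # -> ['participants adopt safer practices']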

If you want to go even further and apply a radically simplified approach based only on visual modeling, you can, when working with appropriate outcomes and evaluation software such as DoView®, reduce the number of technical terms used down to a single one. You can do this by just saying 'let's put the things you are doing in boxes' (instead of mentioning the terms steps and outcomes) and 'let's use the yellow element (the indicator icon you can insert in DoView®) to show how we can measure what's in your boxes' (instead of mentioning the term indicators), and then 'let's put in questions about the model', using the green evaluation question icons. This takes the number of specifically defined technical terms needed to work successfully with outcomes down to only one of the five above – controllable indicators – which you need in order to explain what it is, and is not, appropriate to hold most parties (public and third sector programs, and much of the private sector) accountable for.

Resources for working with groups in this way

Resources are available to assist in practically working with groups in this way. Refer to DoView.com/plan and OutcomesCentral.org and the video below.

Video: Building outcomes models with a group

Conclusion

It has been argued that with five relatively simple terms, and by relying on a visual outcomes model, most stakeholder needs in regard to working with outcomes can be met. This is the practical approach used when working with the applied version of outcomes theory – DoView Visual Planning and Management.

Please comment on this article

This article is based on outcomes theory, which is still at a relatively early stage of development. Please critique any of the arguments laid out in this article so that they can be improved through critical examination and reflection.

Citing this article

Duignan, P. (2009). Simplifying the use of terms when working with outcomes. Outcomes Theory Knowledge Base Article No. 236. (https://outcomestheory.wordpress.com/article/simplifying-terms-used-when-working-2m7zd68aaz774-73/). The substance of this article formed the basis for the following presentation, and the argument can be cited as: Duignan, P. (2009). Rejecting the traditional outputs, intermediate and final outcomes logic modeling approach and building more stakeholder-friendly visual outcomes models. American Evaluation Association Conference, Orlando, Florida, 11-14 November 2009.

[If you are reading this in a PDF or printed copy, the web page version may have been updated].

References

  1. For information on how to do such prioritization and mapping of projects or activities onto an outcomes model see Duignan, P. (2010). Duignan’s Outcomes-Focused Visual Strategic Planning for Public and Third Sector Organizations. Outcomes Theory Knowledge Base Article No. 283. (https://outcomestheory.wordpress.com/article/duignan-s-outcomes-focused-visual-2m7zd68aaz774-162/).
  2. See Duignan, P. (2010). M&E Systems – How to Build an Affordable Simple Monitoring and Evaluation System Using a Visual Approach. Outcomes Theory Knowledge Base Article No. 267. (https://outcomestheory.wordpress.com/article/m-e-systems-how-to-build-an-affordable-2m7zd68aaz774-134/).
  3. Disclosure: The author is involved in the development of DoView outcomes and evaluation software and has developed the DoView Visual Planning approach.
  4. In some contexts this can be thought of in terms of attribution, and so the distinction is made between demonstrably attributable indicators and not-necessarily demonstrably attributable indicators. By definition, if an indicator is controllable by an intervention and it has been measured, then it is attributable to that intervention. See the article Features of steps and outcomes appearing within outcomes models.

 
