Types of economic evaluation analysis

Summary

Discussion of the economic evaluation of programs and interventions usually starts from the perspective of economists and economics. This article takes a different perspective, one more user-friendly to those whose primary interest in the subject is its application to outcomes measurement and outcomes theory issues rather than economics. (Outcomes theory is the general theory of identifying, measuring, attributing and holding parties to account for outcomes.) The article first makes the point that economic evaluation analyses should be more transparent about which costs and benefits are being measured, and argues that the most accessible way of doing this is to use a visual outcomes model of what the program is trying to achieve to quickly show what is (and, equally importantly, is not) being measured. Secondly, the article categorizes types of economic evaluation based on the robustness of the information available on the effect of a program or intervention (called effect-size estimates, sourced from impact evaluation). Traditionally, economic discussions of economic evaluation have neglected this fundamental aspect of the undertaking. They tend to conflate situations where robust empirically derived effect-size estimates are available with those where effect-size estimates are being plucked out of thin air. This article makes decision-making about economic evaluation options easier because it starts by identifying whether robust effect-size estimates are available from impact evaluation for mid-level or high-level outcomes, and works forward from that point to identify the most appropriate type of economic analysis for each situation. The three major types of economic analysis (cost of intervention analysis, cost-effectiveness analysis and cost-benefit analysis) are further sub-divided on the basis of three levels of information being available about effect-sizes. These levels are: 1) no attributable effect-size information available apart from the cost of the intervention; 2) attributable effect-size information available on mid-level outcomes; and 3) attributable effect-size information available on high-level outcomes.
Introduction

Economic evaluation analyses are an important type of analysis, often promoted as a way of helping decision-makers select between different types of intervention. Outcomes theory, the general theory of identifying, measuring, attributing and holding parties to account for outcomes, can make two contributions to thinking about economic evaluation. The first is to insist on more transparency around which costs and benefits have been included within any economic analysis. The second is to provide a way of thinking about the types of economic evaluation that could be used in a specific case, based on what type of effect-size estimates are available from impact evaluation.

Economic analyses can be divided into: cost of intervention analyses, which simply show what it cost to run an intervention; cost-effectiveness analyses, which show what it cost to achieve a certain effect; and cost-benefit analyses, which show the overall costs and benefits of an intervention. Undertaking economic evaluation usually requires specialist technical expertise, and either experts in the area or the relevant literature should be consulted when undertaking such evaluations. The intention of this article is not to explain how to do such economic evaluation, but to provide an easy way of working out which types of economic evaluation are appropriate in which cases.

First a note about outcomes theory

Outcomes theory is a new ‘mid-level’ theory which provides a conceptual framework for thinking about work related to outcomes of any type in any sector. It helps with thinking about outcomes specification, measurement, attribution and accountability. It brings together a set of best-practice principles in a generic cross-disciplinary format to improve outcomes work being undertaken in any discipline (policy analysis, economics, performance management, program evaluation, organizational development, evidence-based practice etc.). The outcomes theory best-practice principles which are particularly relevant to this article are as follows:

  • Outcomes work is much easier if it is based on a comprehensive visual model of the intervention being examined. 
  • In outcomes theory such visual models are called outcomes models (within applied outcomes theory work they are also known as Outcomes DoViews). This distinguishes them from the range of other types of intervention logic diagrams that can be drawn to describe interventions (e.g. logic models, ends-means diagrams, strategy maps etc.).
  • Outcomes models are drawn according to a set of 13 rules which ensure that they are fit-for-purpose for all aspects of outcomes work. 
  • The applied version of outcomes theory is known as DoView Visual Outcomes Planning. The approach consists of a set of steps for accomplishing a range of different project and organizational tasks tightly aligned around intervention outcomes.

Increased transparency about which costs and benefits are included within any economic analysis

It can be difficult for the non-economist decision-maker reading a cost-benefit analysis to quickly work out which costs and which benefits have been included in the analysis. If there are costs or benefits which the decision-maker considers important, but which are excluded from the cost-benefit analysis, the decision-maker needs to be cautious in the way they use the results of the analysis in their decision-making.
A powerful way of quickly communicating exactly what has, and has not, been included in any economic analysis is to mark up the variables included in the analysis on a visual outcomes model of the intervention. Such models show in a visual format all of the high-level outcomes being sought by an intervention and all of the lower-level steps which it is believed are needed in order to achieve them. Such models are drawn according to a set of 13 rules. These rules ensure that an intervention’s outcomes model accurately represents its steps and outcomes. It is important that an outcomes model for an intervention presents a full picture of what it is thought will be happening in regard to the intervention. For instance, the outcomes model should include all outcomes, whether or not they are: measurable; fully controllable by the intervention; intended or unintended.
An outcomes model should be included at the start of every cost-benefit analysis so that the variables used in the analysis can be marked up onto the intervention’s outcomes model. If this were routinely done in economic analyses it would provide the busy reader with a tool for quickly determining which variables are, and are not, included in the analysis. Attempting to communicate this information in narrative form is far less efficient than using a quickly accessible visual outcomes model.
The routine use of an outcomes model in economic analyses would also make it much easier to compare different cost-benefit reports on the same type of intervention. The marked-up outcomes model at the start of each cost-benefit analysis would help the reader quickly understand whether differences in the results of different cost-benefit analyses stem from particular variables having been left out of one or other of the analyses.

Transparency regarding where effect-size estimates are empirically based and where they are not

The second issue on which outcomes theory argues for improved transparency in economic analysis concerns the evidential status of the effect-size estimates being put into economic analyses. From the point of view of outcomes theory, an effect-size is formally defined as the amount of change in a higher-level outcome within an outcomes model that can be fully attributed to the causal effect of a lower-level step (or a set of steps – i.e. an intervention) within the same outcomes model.

Traditional discussion of cost-effectiveness and cost-benefit analysis in the economic literature tends to neglect, or move quickly over, the question of deriving effect-size estimates within economic analyses. In such discussions the emphasis tends to be on the methodology by which cost-effectiveness or cost-benefit analysis should be carried out. This type of discussion covers points such as the appropriate discount rate to be applied (the rate at which estimates of future dollar savings are reduced to reflect the fact that the current expenditure producing them, once expended, is no longer available for alternative uses). The issue which is often not comprehensively addressed is the empirical foundation for deriving effect-sizes for the effect of an intervention on outcomes, which will then be used to derive estimates of future benefits.
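To make the discounting point concrete, here is a minimal sketch in Python. All figures (the 5% discount rate, the $1,000 annual benefit, the three-year horizon) are hypothetical and purely illustrative:

```python
# Minimal sketch of discounting: future dollar benefits are reduced
# ("discounted") back to present-value terms before being compared with
# current costs. All figures below are hypothetical.

def present_value(amount: float, discount_rate: float, years_ahead: int) -> float:
    """Value today of a benefit received `years_ahead` years from now."""
    return amount / ((1 + discount_rate) ** years_ahead)

annual_benefit = 1000.0  # assumed benefit per participant, per year
rate = 0.05              # assumed discount rate

# A benefit received each year for three years
npv_of_benefits = sum(present_value(annual_benefit, rate, year)
                      for year in range(1, 4))
print(f"Present value of benefits: ${npv_of_benefits:,.2f}")
# -> Present value of benefits: $2,723.25
```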

Of course, cost-effectiveness and cost-benefit analyses stand or fall on the accuracy of such estimates. Significant error in such estimates can render any cost-effectiveness or cost-benefit analysis worse than useless, because it is likely to provide decision-makers with a false sense of certainty about its essentially arbitrary conclusions. A lack of discipline and transparency around striking such estimates simply encourages ‘gaming’, in which almost any result a client would wish to see can be obtained by plugging in different estimates for key variables. Criticism of the estimates which are put into an economic evaluation is often seen as merely a technical matter, and the critique is lost on the vast majority of people who go on to use the results of the analysis.

A key focus of outcomes theory is on what is known and what is not known about the effects of interventions, and on dealing with these issues in the most transparent way possible, both when we have information and in the many instances when we do not. Crucially, the approach attempts to clearly convey to decision-makers when they are operating in these two very different environments – a high-quality information environment, or an environment where there is little robust information about effect-sizes. An outcomes theory approach to economic analysis therefore pays particular attention to quickly communicating the empirical solidity of what is being claimed in any analysis. At the current time, within economic analysis, the vehicle for communicating uncertainty around estimates is the sensitivity analysis. A sensitivity analysis runs the analysis a number of times, varying key estimates about which there may be doubt, to see what effect this has on the final result.

However, such sensitivity analyses tend to be buried in the bowels of complex economic evaluation reports and are often not even read by busy decision-makers, who simply look at the bottom-line result from the cost-effectiveness or cost-benefit analysis and use this in their decision-making. Most use of cost-benefit analysis results involves the headline figure alone, without those communicating it highlighting the sensitivity analysis.

While the use of sensitivity analysis should always be encouraged, outcomes theory proposes a complementary approach: making transparent the distinction between types of economic analysis based on the empirical foundation of the estimates used within them. The first type is those for which good empirical effect-size estimates are available from robust impact/outcome evaluation having been carried out. The second is those for which good empirical effect-size estimates are lacking. The outcomes theory argument for making this distinction is that without it there is no quick way for decision-makers to fully understand the level of risk they are carrying due to uncertainty. They need to be quickly informed whether they are in a situation of low decision-making risk, because good empirical data is available, or one of much higher risk due to uncertainty. One could view this as a ‘truth in labeling’ approach to economic analysis: the label describing the type of economic analysis should immediately convey the soundness of the estimates on which the whole analysis rests.

An outcomes theory approach therefore encourages a classification of types of economic analysis determined by the empirical basis of the estimates used within them. However, such an approach faces an obvious problem: how to proceed in those cases where there is a great deal of uncertainty about what the effect-sizes for a particular intervention actually are. A hard-line approach would be to say that no cost-effectiveness or cost-benefit analysis should be done in such cases, because it simply creates the illusion that there is more certainty than there actually is.

An argument can be made in support of this hard-line approach in certain situations where there are many costs and benefits, all of which are likely to have significant error associated with their estimates. At a certain point, the honest approach is simply to say that it is not possible to produce a coherent economic evaluation analysis because of the level of uncertainty. This is consistent with the outcomes theory approach applied in the case of impact/outcome evaluation, where it is not assumed before the fact that it will be possible to do an impact/outcome evaluation (see When impact/outcome evaluation should and should not be used). The intention of outcomes theory is to establish the expectation, among both clients and those they commission to look at doing these types of analysis (impact/outcome evaluation or cost-benefit analysis), that it is perfectly reasonable to decide not to do them where they cannot be done to an adequate level of rigor – that is, where so much error is likely to be built into the analysis that it will be of no value to decision-makers and will just create a false sense of certainty. The argument here is that a bad impact/outcome evaluation or cost-benefit analysis is worse than no impact/outcome evaluation or cost-benefit analysis at all. Such pseudo impact/outcome evaluations and pseudo cost-benefit analyses do nothing except dilute the evidence base in any domain by filling it with studies which have no value and which skew overall findings. The aim, then, is to teach clients of such analyses to beware of anyone who always offers to do an impact/outcome evaluation or a cost-benefit analysis, in contrast to someone who first offers to assess whether doing such an analysis is going to be of value, and only then goes on to do it in those cases where it is likely to actually add value.

However, this having been said, there may well be situations where many of the estimates used in such analyses have a reasonable basis, but where there is major uncertainty, or simply a lack of information, about the main effect-size of the intervention on the key outcome it is trying to achieve.

A case can be made for allowing such economic evaluation analyses when there is little or no data on the main outcome effect-size, provided they are clearly labeled as ‘hypothetical’ or ‘assumed’ effect-size analyses. This type of analysis needs to be clearly differentiated from more robust economic evaluation analysis. The two approaches which can then be adopted are as follows. In the first, where there is sound empirical data on effect-sizes, decision-makers can be presented with a statement such as: ‘this is what the costs and benefits of this program would look like, based on empirical estimates of the program effect.’ The second case can be described as: ‘this is what the costs and benefits would look like if you decided to simply assume an effect-size of X; this assumption-based approach is being used because there is not enough empirical information to robustly determine what the effect-size actually is.’

The second type of analysis can be seen as, in effect, bringing the sensitivity analysis up into the actual cost-benefit analysis results: because of the uncertainty around the effect-size estimate, results are presented for a number of different assumed effect-sizes (e.g. for an assumed effect-size of X(1) the cost-benefit result would be Y(1); for an assumed effect-size of X(2), it would be Y(2)).
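As a minimal sketch of what presenting results in this way could look like, the following Python snippet reports the headline result for several assumed effect-sizes rather than a single figure. The intervention cost and the dollar value per percentage point of effect are assumptions invented purely for illustration:

```python
# Hedged sketch: present the cost-benefit result for a range of assumed
# effect-sizes instead of a single headline figure. All numbers are
# hypothetical assumptions, not empirical estimates.

cost_per_participant = 1000.0   # assumed cost of the intervention
value_per_point = 110.0         # assumed dollar benefit per percentage point of effect

for assumed_effect_size in (5, 10, 20):  # the X(1), X(2), X(3) of the text, in percent
    benefit = assumed_effect_size * value_per_point
    net = benefit - cost_per_participant  # the corresponding Y(i)
    print(f"Assumed effect-size {assumed_effect_size}%: "
          f"net result {net:+,.0f} dollars per participant")
```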

The list of types of economic analysis set out below is therefore based on distinguishing between situations where there is an empirical foundation for effect-sizes and those where there is no such foundation.

The starting point for using this list of types of economic analysis below is to ask the question:

‘To what level within the outcomes model (a visual model of a program showing all of the important steps which lead up to high-level outcomes) do we have good information about the size of the effect on an outcome which can be attributed to the intervention?’

Putting the question in this way also means that this framework can be used as a practical tool by those coming to economic evaluation from an outcomes and program evaluation perspective rather than a specialist economics perspective. Knowing what type of estimates they will be able to produce from the evaluation they are conducting, they can use this to identify the type of economic analysis which is appropriate. The above question can have one of three different answers: 1) no empirical attributable outcomes effect-size information available above the intervention level; 2) empirical attributable outcomes effect-size information available on mid-level outcomes; and 3) empirical attributable outcomes effect-size information available on high-level outcomes. Those wishing to use the list of economic evaluation analyses set out below can, on the basis of the answer to this question, proceed straight to the relevant section of the list and select the appropriate type of economic analysis.

The analyses are grouped into three sets – those that can be done when there are no actual empirical effect-size estimates for attributable outcomes above the intervention level; those that can be done when there are empirically derived estimates available for mid-level outcomes; and those that can be done where there are empirically derived estimates for high-level attributable outcomes. In summary, the first grouping can only be done if there are estimates available of the cost of the intervention; for the second grouping there also need to be empirically derived estimates available for mid-level outcome effect-sizes; and for the third grouping there need to be empirically derived estimates for high-level outcome effect-sizes.
It should be noted that moving through the three overall groups of analyses, if a later analysis can be done, then by definition one of the corresponding earlier analyses can also be done. So if Analysis 3.2 can be done, then 2.2, 2.1 and all of the Analyses 1.1-1.4 can also be done.
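This selection logic can be sketched in a few lines of Python. The function name and level labels are purely illustrative; the analysis numbering follows the list below:

```python
# Sketch of the selection logic described above: the level of empirical
# effect-size evidence determines which analysis types are available.
# Labels and function name are illustrative only.

ANALYSES_BY_LEVEL = {
    "cost_only":          ["1.1", "1.2", "1.3", "1.4"],
    "mid_level_effects":  ["2.1", "2.2"],
    "high_level_effects": ["3.1", "3.2", "3.3", "3.4"],
}

def applicable_analyses(evidence_level: str) -> list[str]:
    """Return all analysis types available at a given evidence level.

    A later analysis implies the earlier ones are also possible
    (if 3.2 can be done, so can 2.1, 2.2 and 1.1-1.4), so lower
    levels are always included.
    """
    order = ["cost_only", "mid_level_effects", "high_level_effects"]
    if evidence_level not in order:
        raise ValueError(f"unknown evidence level: {evidence_level}")
    available: list[str] = []
    for level in order:
        available += ANALYSES_BY_LEVEL[level]
        if level == evidence_level:
            break
    return available

print(applicable_analyses("mid_level_effects"))
# -> ['1.1', '1.2', '1.3', '1.4', '2.1', '2.2']
```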
The ten economic evaluation analysis types, grouped into three groups

1: Analyses when no empirical impact/outcome evaluation effect-size information above the intervention level is available

Analysis 1.1 Cost of Intervention Analysis, Single Intervention. Cost of intervention analysis just looks at the cost of an intervention, not its effectiveness (cost-effectiveness is how much it costs to change an outcome by a certain amount) or its benefits (the result of subtracting the estimated dollar cost of the program from its estimated dollar benefits). This analysis just allows you to say what the estimated cost of the intervention is (e.g. $1,000 per participant).

Analysis 1.2 Cost of Intervention Analysis, Multi-Intervention Comparison. Same as 1.1 but a multi-intervention comparison. This analysis allows you to compare the costs of different interventions (e.g. Program 1 – $1,000 per participant; Program 2 – $1,500 per participant; or, to put it another way, Program 2 costs 1.5 times as much as Program 1 per participant).

Analysis 1.3 Cost (Assumed) Benefit Analysis (Assumed Effect-Sizes for High-Level Outcomes) Single Intervention. Even where you cannot establish any attributable outcomes above the intervention level, if you have an estimate of the cost of the intervention (from 1.1 above) you can use assumed, arbitrary (hypothetical) effect-sizes in an analysis, provided it is clearly labeled as such. In this analysis, estimates are available for the cost of the intervention, the costs and benefits of all outcomes can be reasonably accurately determined in dollar terms, and the effect-size of the main effect of the program is assumed to be at a specified level. This is an assumed (hypothetical) cost-benefit analysis (e.g. for a hypothetical effect-size of 5%, 10% or 20%). It is essential that this type of assumed (hypothetical) analysis is clearly distinguished from Analysis 3.3, which is based on estimates from actual measurement of effect-sizes. This analysis allows you to estimate the overall benefit (or loss) of running the intervention if any of the hypothetical effect-sizes were achieved (e.g. there would be a loss of $500 per participant for a 5% effect-size, a gain of $100 for a 10% effect-size, and a gain of $600 per participant for a 20% effect-size).


Analysis 1.4 Cost (Assumed) Benefit Analysis (Assumed Effect-Sizes for High-Level Outcomes) Multi-Intervention Comparison. Same as 1.3 but a multi-intervention comparison. This analysis allows you to compare the overall loss or gain from more than one program for various hypothetical effect-sizes (e.g. for a 5% effect-size, Program 1 would have an estimated loss of $500 per participant whereas Program 2 would have a gain of $200, and so on). You could even vary the assumed effect-sizes if there was some reason to believe that there might be differences – for example, a general population program is likely to have a lower effect-size than an intensive one-on-one program. By itself, however, this says nothing about the overall loss or gain when comparing two such programs, because the cost per person is likely to be higher in the case of the one-on-one intervention (see the sketch below). Once again, it is essential that this type of assumed (hypothetical) analysis is clearly distinguished from Analysis 3.4, which is based on estimates from actual measurement of effect-sizes through certain types of impact/outcome evaluation.
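A minimal sketch of the trade-off just mentioned, with hypothetical costs, assumed effect-sizes and an assumed dollar value per percentage point of effect:

```python
# Sketch: a cheaper population-level program with a smaller assumed
# effect-size can still out-perform a costly one-on-one program.
# All figures are hypothetical assumptions.

programs = {
    # name: (assumed cost per participant $, assumed effect-size %)
    "Population program": (200.0, 2),
    "One-on-one program": (1500.0, 10),
}
value_per_point = 110.0  # assumed dollar benefit per percentage point of effect

for name, (cost, effect) in programs.items():
    net = effect * value_per_point - cost
    print(f"{name}: assumed effect {effect}%, net {net:+,.0f} dollars per participant")
# Population program: assumed effect 2%, net +20 dollars per participant
# One-on-one program: assumed effect 10%, net -400 dollars per participant
```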

2: Analyses when empirical impact/outcome evaluation effect-size information for attributing mid-level outcomes is available

Analysis 2.1 Cost (Empirical) Effectiveness Analysis (Empirical Effect-Size Estimates for Mid-Level Outcomes) Single Intervention. In this analysis, estimates are available of the attributable effect-size of the intervention on mid-level outcomes. When combined with the estimated cost of the intervention, this allows you to work out the cost of achieving a certain level of effect on mid-level outcomes (e.g. a 6% increase in X costs approximately $1,000 per participant). It is important that readers of such analyses understand that these are mid-level outcomes, and a visual outcomes model should be used to clearly indicate the level at which the effect is located.

Analysis 2.2 Cost (Empirical) Effectiveness Analysis (Empirical Effect-Size Estimates for Mid-Level Outcomes) Multi-Intervention Comparison. Same as 2.1 but a multi-intervention comparison. This analysis lets you work out the cost of achieving a certain level of effect on mid-level outcomes for a number of interventions (e.g. a 6% increase in X costs approximately $1,000 per participant for Program 1 whereas it costs $1,500 for Program 2). It is likely that the measured mid-level effect-sizes of different interventions will vary; you may therefore need to adjust estimates to a common base, for instance cost per percentage point of change (see the sketch below). This may or may not reflect what would happen in regard to the actual programs in reality. It is important that readers of such analyses understand that these are mid-level outcomes, and a visual outcomes model should be used to clearly indicate the level at which the effect is located.
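A minimal sketch of adjusting such results to a common base. This assumes, strongly, that the effect is roughly linear over the relevant range; all figures are illustrative:

```python
# Sketch: normalize cost-effectiveness results to a common base
# (cost per percentage point of change in the mid-level outcome X).
# Assumes approximate linearity of effect; all figures hypothetical.

results = {
    # program: (cost per participant $, measured effect-size % on X)
    "Program 1": (1000.0, 6.0),
    "Program 2": (1500.0, 6.0),
}

for program, (cost, effect) in results.items():
    print(f"{program}: ${cost / effect:,.2f} per percentage point of change in X")
# Program 1: $166.67 per percentage point of change in X
# Program 2: $250.00 per percentage point of change in X
```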

3: Analyses when empirical impact/outcome evaluation effect-size information for attributing high-level outcomes is available

Analysis 3.1 Cost (Empirical) Effectiveness Analysis (Empirical Effect-Size Estimates for High-Level Outcomes) Single Intervention. Same as 2.1 except you can work out the cost of achieving a high-level outcome effect-size of a certain amount for a particular intervention.

Analysis 3.2 Cost (Empirical) Effectiveness Analysis (Empirical Effect-Size Estimates for High-Level Outcomes) Multi-Intervention Comparison. Same as 2.2 except you can work out the cost of achieving a high-level outcome effect size of a certain amount and compare this across more than one intervention.

Analysis 3.3 Cost (Empirical) Benefit Analysis (Empirical Effect-Size Estimates for High-Level Outcomes) Single Intervention. In this analysis, estimates are available for the cost of the intervention and its attributable effect on high-level outcomes, and the costs and benefits of all outcomes can be reasonably accurately determined in dollar terms. If this information is not available, this type of analysis cannot be done and your only option is to fall back to analysis type 1.3. This analysis lets you work out the overall loss or gain from running the program (e.g. the program costs $1,000 per participant and other negative impacts of the program are estimated at $1,000, while the benefits of the program are estimated at $2,500 per participant; therefore there is an overall benefit of $500 per participant).
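The arithmetic of this example can be sketched as follows (all figures are the illustrative ones used above):

```python
# Sketch of the cost-benefit arithmetic in the Analysis 3.3 example.
# Figures are the illustrative ones from the text.

cost_of_program = 1000.0         # per participant
other_negative_impacts = 1000.0  # per participant, in dollar terms
estimated_benefits = 2500.0      # per participant, from empirical effect-sizes

net_benefit = estimated_benefits - (cost_of_program + other_negative_impacts)
print(f"Overall benefit: ${net_benefit:,.0f} per participant")  # -> $500
```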

Analysis 3.4 Cost (Empirical) Benefit Analysis (Empirical Effect-Size Estimates for High-Level Outcomes) Multi-Intervention Comparison. Same as 3.3 but a multi-intervention comparison. This analysis lets you work out the overall cost or benefit for a number of programs compared (e.g. Program 1 has an overall benefit of $500 per participant whereas Program 2 has an overall benefit of only $200 per participant).

[Note: This list of designs is still provisional within outcomes theory, and the way they are named may well be able to be improved. There are theoretically other types; for instance, there could be a ‘Cost (Assumed) Benefit Analysis (Assumed Effect-Sizes for Mid-Level Outcomes)’ type, but it is not clear why anyone would do this rather than 1.3 or 1.4, which use assumed high-level effect-sizes. Comment on whether this is actually an exhaustive list of economic evaluation designs would be appreciated (please use this contact form).]


Conclusion

Using an approach based on what information is available about effect-sizes on outcomes from impact/outcome evaluation, a list of ten possible types of economic evaluation analysis has been identified for use in determining what economic analysis should be undertaken in the case of a particular organization, program or other intervention.

Please comment on this article

This article is based on the developing area of outcomes theory, which is still at a relatively early stage of development. Please critique any of the arguments laid out in this article so that they can be improved through critical examination and reflection. Send comments here please.

Citing this article

Duignan, P. (2009-2012). Types of economic evaluation analysis. Outcomes Theory Knowledge Base article No. 251. https://outcomestheory.wordpress.com/article/types-of-economic-evaluation-analysis-2m7zd68aaz774-110/.

[If you are reading this in a PDF or printed copy, the web page version may have been updated].


[Outcomes Theory Article # 251, Update 29 Sept 2012]