Economic evaluation analyses are an important type of analysis, often promoted as a way of helping decision-makers select between different types of intervention. Outcomes theory, the general theory of identifying, measuring, attributing and holding parties to account for outcomes, can make two contributions to thinking about economic evaluation. The first is to insist on more transparency about which costs and benefits have been included in any economic analysis. The second is to provide a way of thinking about the types of economic evaluation that could be used in a particular case, based on what effect-size estimates are available from impact evaluation in that case.
Economic analyses can be divided into: cost of intervention analyses, which simply show what it cost to run an intervention; cost-effectiveness analyses, which show what it cost to achieve a certain effect; and cost-benefit analyses, which show the overall costs and benefits of an intervention. Undertaking economic evaluation usually requires specialist technical expertise, and either experts in the area or the relevant literature should be consulted when undertaking such evaluations. The intention of this article is not to explain how to do economic evaluation, but to provide an easy way of working out which types of economic evaluation are appropriate in which cases.
Outcomes theory is a new ‘mid-level’ theory which provides a conceptual framework for thinking about work related to outcomes of any type in any sector. It helps with thinking about outcomes specification, measurement, attribution and accountability. It brings together a set of best-practice principles in a generic cross-disciplinary format to improve outcomes work being undertaken in any discipline (policy analysis, economics, performance management, program evaluation, organizational development, evidence-based practice etc.). The outcomes theory best-practice principles which are particularly relevant to this article are as follows:
- Outcomes work is much easier if it is based on a comprehensive visual model of the intervention being examined.
- In outcomes theory such visual models are called outcomes models (within applied outcomes theory work they are also known as Outcomes DoViews). This distinguishes them from the range of other types of intervention logic diagrams that can be drawn to describe interventions (e.g. logic models, ends-means diagrams, strategy maps etc.).
- Outcomes models are drawn according to a set of 13 rules which ensure that they are fit-for-purpose for all aspects of outcomes work.
- The applied version of outcomes theory is known as DoView Visual Outcomes Planning. The approach consists of a set of steps for accomplishing a range of different project and organizational tasks tightly aligned around intervention outcomes.
Increased transparency about which costs and benefits are included within any economic analysis
Transparency regarding where effect-size estimates are empirically based and where they are not
The second issue on which outcomes theory argues for improved transparency in economic analysis concerns the evidential status of the effect-size estimates being put into economic analyses. From the point of view of outcomes theory, an effect-size is formally defined as the amount of change in a higher-level outcome within an outcomes model that can be fully attributed to the causal effect of a lower-level step (or a set of steps, i.e. an intervention) within the same outcomes model.
Traditional discussion of cost-effectiveness and cost-benefit analysis in the economic literature tends to neglect, or move quickly over, the question of how effect-size estimates within economic analyses are derived. In such discussions the emphasis tends to be on the methodology by which the analysis itself should be carried out. This covers points such as the appropriate discount rate to be applied (the rate at which estimated future dollar savings are reduced because they are the result of current expenditure which, once expended, is no longer available for alternative uses at the current time). The issue which is often not comprehensively addressed is the empirical foundation for deriving effect-sizes for the effect of an intervention on outcomes, which are then used to derive estimates of future benefits.
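The discounting step described above can be sketched in code. This is an illustrative sketch only: the function name and all figures (a $1,000 benefit, a 3% discount rate, a 10-year horizon) are hypothetical assumptions, not values taken from the article.

```python
def present_value(future_amount, discount_rate, years):
    """Discount a future dollar amount back to today's terms."""
    return future_amount / (1 + discount_rate) ** years

# Hypothetical example: $1,000 of benefit arriving 10 years from now,
# discounted at an assumed 3% per year.
pv = present_value(1000, 0.03, 10)
print(round(pv, 2))  # 744.09
```

The same benefit is worth noticeably less in present-value terms, which is why the choice of discount rate is itself a standard target of sensitivity analysis.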
Of course, cost-effectiveness and cost-benefit analyses stand or fall on the accuracy of such estimates. Significant error in such estimates can render any cost-effectiveness or cost-benefit analysis worse than useless, because it is likely to provide decision-makers with a false sense of certainty about its essentially arbitrary conclusions. A lack of discipline and transparency around striking such estimates simply encourages 'gaming', in which almost any result a client would wish to see can be obtained by plugging in different estimates for key variables. Criticism of the estimates put into an economic evaluation is often seen as a merely technical matter, and the critique is lost on the vast majority of people who go on to use the results of the analysis.
A key focus of outcomes theory is the issue of what is known, and what is not known, about the effects of interventions, and how to deal with these issues as transparently as possible, both when we have information and in the many instances when we do not. Crucially, the approach attempts to convey clearly to decision-makers when they are operating in these two very different environments: a high-quality information environment, or an environment where there is little robust information about effect-sizes. An outcomes theory approach to economic analysis therefore pays particular attention to quickly communicating the empirical solidity of what is being claimed in any analysis. At the current time, within economic analysis, the vehicle for communicating uncertainty around estimates is the sensitivity analysis, which runs the analysis a number of times while varying key estimates about which there may be doubt, in order to see what effect this has on the final result.
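A minimal sketch of such a sensitivity analysis follows. All names and figures are hypothetical assumptions for illustration: a simple linear benefit model is re-run for several candidate effect-sizes to show how the bottom-line result moves with the uncertain estimate.

```python
def net_benefit(cost, effect_size, benefit_per_point):
    """Net benefit per participant under an assumed linear benefit model."""
    return effect_size * benefit_per_point - cost

# Hypothetical figures: $1,000 cost per participant, $110 of dollar
# benefit per percentage point of effect. Re-running the analysis across
# plausible effect-sizes shows how sensitive the result is to that estimate.
for effect in (5, 10, 15):
    print(f"{effect}% effect-size -> net ${net_benefit(1000, effect, 110)}")
```

Under these assumptions the program flips from a loss to a gain between a 5% and a 10% effect-size, which is exactly the kind of threshold a decision-maker needs to see rather than a single bottom-line figure.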
However, such sensitivity analyses tend to be buried in the bowels of complex economic evaluation reports and are often not even read by busy decision-makers, who simply take the bottom-line result from the cost-effectiveness or cost-benefit analysis and use it in their decision-making. Most use of cost-benefit analysis results involves the headline figure alone, without those communicating it highlighting the sensitivity analysis.
While the use of sensitivity analysis should always be encouraged, outcomes theory proposes a complementary approach: making transparent the distinction between types of economic analysis based on the empirical foundation of the estimates used within them. The first type is those for which good empirical effect-size estimates are available from robust impact/outcome evaluation; the second is those for which good empirical effect-size estimates are lacking. The outcomes theory argument for making this distinction is that, without it, decision-makers have no quick way to understand the level of risk they are carrying due to uncertainty. They need to be quickly informed whether they face low decision-making risk, because good empirical data exists, or much greater risk due to uncertainty. One could view this as a 'truth in labeling' approach to economic analysis: the label describing the type of economic analysis should immediately convey the soundness of the estimates on which the whole analysis rests.
An outcomes theory approach therefore encourages a classification of types of economic analysis determined by the empirical basis of the estimates used within them. However, such an approach faces an obvious problem: how to proceed in cases where there is a great deal of uncertainty about what the effect-sizes actually are for a particular intervention. A hard-line approach would be to say that no cost-effectiveness or cost-benefit analysis should be done in such cases, because it simply creates the illusion of more certainty than actually exists.
An argument can be made in support of this hard-line approach in situations where there are many costs and benefits, all of which are likely to carry significant estimation error. At a certain point, the honest approach is simply to say that it is not possible to produce a coherent economic evaluation analysis because of the level of uncertainty. This is consistent with the outcomes theory approach to impact/outcome evaluation, where it is not assumed in advance that an impact/outcome evaluation will be possible (see When impact/outcome evaluation should and should not be used). The intention of outcomes theory is to establish the expectation, among clients and those they commission, that it is perfectly reasonable to decide not to do these types of analysis (impact/outcome evaluation or cost-benefit analysis) where they cannot be done to an adequate level of rigor: that is, where so much error is likely to be built into the analysis that it will be of no value to decision-makers and will just create a false sense of certainty. The argument here is that a bad impact/outcome evaluation or cost-benefit analysis is worse than none at all. Such pseudo impact/outcome evaluations and pseudo cost-benefit analyses do nothing except dilute the evidence-base in a domain, filling it with studies which have no value and skew overall findings. The aim, then, is to teach clients of such analyses to beware of anyone who always offers to do an impact/outcome evaluation or a cost-benefit analysis, in contrast to someone who first offers to assess whether doing such an analysis is going to be of value, and only then goes on to do it in those cases where it is likely to actually add value.
However, this having been said, there may well be situations where many of the estimates used in such analyses have a reasonable basis, but there is major uncertainty, or simply a lack of information, about the main effect-size of an intervention on the key outcome it is trying to achieve.
A case can be made for allowing such economic evaluation analyses when there is little or no data on the main outcome effect-size, but clearly labeling them as 'hypothetical' or 'assumed' effect-size analyses. This type of analysis needs to be clearly differentiated from more robust economic evaluation analysis. The two approaches can then be presented as follows. First, where there is sound empirical data on effect-sizes, one can present decision-makers with a statement such as: 'this is what the costs and benefits of this program would look like based on empirical estimates of the program effect.' The second case can be described as: 'this is what the costs and benefits would look like if you decided simply to assume an effect-size of X; this assumption-based approach is being used because there is not enough empirical information to robustly determine what the effect-size actually is.'
The second type of analysis can be seen as, in effect, bringing the sensitivity analysis up into the actual cost-benefit analysis results, presenting the results for a number of different assumed effect-sizes because of the uncertainty around the estimate (e.g. for an assumed effect-size of X(1) the cost-benefit would be Y(1); for an assumed effect-size of X(2), it would be Y(2)).
The list of types of economic analysis below is therefore based on distinguishing between situations where there is an empirical foundation for effect-sizes and those where there is no such foundation.
The starting point for using this list of types of economic analysis below is to ask the question:
‘To what level within the outcomes model (a visual model of a program showing all of the important steps which lead up to high-level outcomes) do we have good information about the size of the effect on an outcome which can be attributed to the intervention?’
Putting the question in this way also means that this framework can be used as a practical tool by those coming to economic evaluation from an outcomes and program evaluation perspective rather than a specialist economics perspective. Knowing what type of estimates they will be able to produce from the evaluation they are conducting, they can use this to identify the type of economic analysis which is appropriate. The question above can have one of three different answers: 1) no empirical attributable outcomes effect-size information available above the intervention level; 2) empirical attributable outcomes effect-size information available on mid-level outcomes; and 3) empirical attributable outcomes effect-size information available on high-level outcomes. On the basis of the answer to this question, those wishing to use the list of economic evaluation analyses set out below can proceed to the relevant section and select the appropriate type of economic analysis.
1: Analyses when no empirical impact/outcome evaluation effect-size information above the intervention level is available
Analysis 1.1 Cost of Intervention Analysis, Single Intervention. Cost of intervention analysis looks only at the cost of an intervention, not its cost-effectiveness (how much it costs to change an outcome by a certain amount) or its net benefit (the estimated dollar benefits of the program minus its estimated dollar cost). This analysis just allows you to say what the estimated cost of the intervention is (e.g. $1,000 per participant).
Analysis 1.2 Cost of Intervention Analysis, Multi-Intervention Comparison. Same as 1.1 but a multi-intervention comparison. This analysis allows you to compare the costs of different interventions (e.g. Program 1 – $1,000 per participant; Program 2 – $1,500 per participant; or, put another way, Program 2 costs 1.5 times as much as Program 1 per participant).
Analysis 1.3 Cost (Assumed) Benefit Analysis (Assumed Effect-Sizes for High-Level Outcomes) Single Intervention. Even where you cannot establish any attributable outcomes above the intervention level, if you have an estimate of the cost of the intervention (from 1.1 above) you can use assumed (hypothetical) effect-sizes in an analysis, provided it is clearly labeled as such. In this analysis, estimates are available for the cost of the intervention, and the costs and benefits of all outcomes can be reasonably accurately determined in dollar terms, with the effect-size of the main effect of the program assumed to be at a specified level. This is an assumed (hypothetical) cost-benefit analysis (e.g. for a hypothetical effect-size of 5%, 10% or 20%). It is essential that this type of assumed (hypothetical) analysis is clearly distinguished from Analysis 3.3, which is based on actual measurement of effect-sizes. This analysis allows you to estimate the overall benefit (or loss) of running the intervention if any of the hypothetical effect-sizes were achieved (e.g. there would be a loss of $500 per participant for a 5% effect-size, a gain of $100 for a 10% effect-size and a gain of $600 per participant for a 20% effect-size).
Analysis 1.4 Cost (Assumed) Benefit Analysis (Assumed Effect-Sizes for High-Level Outcomes) Multi-Intervention. Same as 1.3 but a multi-intervention comparison. This analysis allows you to compare the overall loss or gain from more than one program for various hypothetical effect-sizes (e.g. for a 5% effect-size, Program 1 would have an estimated loss of $500 per participant whereas Program 2 would have a gain of $200, and so on). You could even vary the assumed effect-sizes between programs if there is some reason to believe they might differ; for example, a general-population program is likely to have a lower effect-size than an intensive one-on-one program. This in itself may not determine the overall loss or gain when comparing two such programs, because the cost per person intervened with is likely to be higher in the case of the one-on-one intervention. Once again, it is essential that this type of assumed (hypothetical) analysis is clearly distinguished from Analysis 3.4, which is based on actual measurement of effect-sizes through certain types of impact/outcome evaluation.
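The multi-intervention assumed-effect-size comparison in 1.4 might be computed as in the following sketch. The function and parameters (per-participant costs of $1,000 and $300, and $100 of dollar benefit per percentage point) are hypothetical assumptions, chosen so the 5% case echoes the figures in the example above.

```python
def assumed_net_benefit(cost, benefit_per_point, effect_size):
    """Net gain (or loss) per participant for an ASSUMED effect-size."""
    return effect_size * benefit_per_point - cost

# Hypothetical parameters for two programs (not empirical estimates).
programs = {"Program 1": 1000, "Program 2": 300}  # cost per participant
benefit_per_point = 100  # assumed dollar benefit per percentage point

for effect in (5, 10, 20):  # assumed (hypothetical) effect-sizes
    for name, cost in programs.items():
        net = assumed_net_benefit(cost, benefit_per_point, effect)
        print(f"assumed {effect}%: {name} net ${net} per participant")
```

Presenting the whole grid of assumed effect-sizes, rather than a single bottom-line figure, is what keeps the hypothetical nature of the analysis visible to decision-makers.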
2: Analyses when empirical impact/outcome evaluation effect-size information from attributing mid-level outcomes is available
Analysis 2.1 Cost (Empirical) Effectiveness Analysis (Empirical Effect-Size Estimates for Mid-Level Outcomes) Single Intervention. In this analysis, estimates are available of the attributable effect-size of the intervention on mid-level outcomes. When combined with the estimated cost of the intervention, this allows you to work out the cost of achieving a certain level of effect on mid-level outcomes (e.g. a 6% increase in X cost approximately $1,000 per participant). It is important that readers of such analyses understand that these are mid-level outcomes, and a visual outcomes model should be used to clearly indicate the level at which the effect is located.
Analysis 2.2 Cost (Empirical) Effectiveness Analysis (Empirical Effect-Size Estimates for Mid-Level Outcomes) Multi-Intervention Comparison. Same as 2.1 but a multi-intervention comparison. This analysis lets you work out the cost of achieving a certain level of effect on mid-level outcomes for a number of interventions (e.g. a 6% increase in X cost approximately $1,000 per participant for Program 1 whereas it cost $1,500 for Program 2). It is likely that the measured mid-level effect-sizes of different interventions will vary, so you may need to adjust estimates to a common base; this adjusted comparison may or may not reflect what would happen with the actual programs in reality. It is important that readers of such analyses understand that these are mid-level outcomes, and a visual outcomes model should be used to clearly indicate the level at which the effect is located.
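One simple way of adjusting differing measured effect-sizes to a common base is cost per percentage point of change. A sketch follows, with hypothetical figures: Program 1's 6% for $1,000 echoes the example above, while Program 2's assumed 4% for $1,500 is invented here to show why normalization is needed.

```python
def cost_per_point(cost, effect_size_points):
    """Cost of achieving one percentage point of attributable change."""
    return cost / effect_size_points

# Hypothetical figures: Program 1 achieved a measured 6% increase for
# $1,000 per participant; Program 2 a measured 4% for $1,500.
p1 = cost_per_point(1000, 6)
p2 = cost_per_point(1500, 4)
print(round(p1, 2), round(p2, 2))  # 166.67 375.0
```

Note the caveat in the text above: this linear scaling is an accounting convenience, and may not reflect how the actual programs would behave at different scales.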
3: Analyses when empirical impact/outcome evaluation effect-size information for attributing high-level outcomes is available
Analysis 3.1 Cost (Empirical) Effectiveness Analysis (Empirical Effect-Size Estimates for High-Level Outcomes) Single Intervention. Same as 2.1 except you can work out the cost of achieving a high-level outcome effect-size of a certain amount for a particular intervention.
Analysis 3.2 Cost (Empirical) Effectiveness Analysis (Empirical Effect-Size Estimates for High-Level Outcomes) Multi-Intervention Comparison. Same as 2.2 except you can work out the cost of achieving a high-level outcome effect size of a certain amount and compare this across more than one intervention.
Analysis 3.3 Cost (Empirical) Benefit Analysis (Empirical Effect-Size Estimates for High-Level Outcomes) Single Intervention. In this analysis, estimates are available for the cost of the intervention and its attributable effect on high-level outcomes, and the costs and benefits of all outcomes can be reasonably accurately determined in dollar terms. If this information is not available, this type of analysis cannot be done and your only option is to fall back to analysis type 1.3. This analysis lets you work out the overall loss or gain from running the program (e.g. the program cost $1,000 per participant and other negative impacts of the program are estimated at $1,000, while the benefits of the program are estimated at $2,500 per participant; therefore there is an overall benefit of $500 per participant).
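The arithmetic in the worked example above can be sketched as follows. The function name is illustrative; the figures are those given in the example.

```python
def overall_benefit(program_cost, other_negative_impacts, benefits):
    """Net benefit per participant: dollar benefits minus all dollar costs."""
    return benefits - program_cost - other_negative_impacts

# Figures from the worked example: $1,000 program cost, $1,000 of other
# negative impacts, $2,500 of estimated benefits, all per participant.
print(overall_benefit(1000, 1000, 2500))  # 500
```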
Analysis 3.4 Cost (Empirical) Benefit Analysis (Empirical Effect-Size Estimates for High-Level Outcomes) Multi-Intervention Comparison. Same as 3.3 but a multi-intervention comparison. This analysis lets you work out the overall cost or benefit for a number of programs compared (e.g. Program 1 has an overall benefit of $500 per participant whereas Program 2 has an overall benefit of only $200 per participant).
[Note: This list of designs is still provisional within outcomes theory, and the way they are named could probably be improved. There are theoretically other types: for instance, there could be a 'Cost (Assumed) Benefit (Assumed Effect-Sizes for Mid-Level Outcomes)' type, however it is not clear why anyone would do this rather than 1.3 or 1.4, which use assumed high-level effect-sizes. Comment on whether this is actually an exhaustive list of economic evaluation designs would be appreciated (please use this contact form).]
Using an approach based on what information is available about effect-sizes on outcomes from impact/outcome evaluation, a list of ten possible types of economic evaluation analysis has been identified for use in determining what economic analysis should be undertaken in the case of a particular organization, program or other intervention.
Please comment on this article
This article is based on the developing area of outcomes theory, which is still at a relatively early stage of development. Please critique any of the arguments laid out in this article so that they can be improved through critical examination and reflection. Send comments here please.
Citing this article
Duignan, P. (2009-2012). Types of economic evaluation analysis. Outcomes Theory Knowledge Base article No. 251. https://outcomestheory.wordpress.com/article/types-of-economic-evaluation-analysis-2m7zd68aaz774-110/.
[If you are reading this in a PDF or printed copy, the web page version may have been updated].