During the initial pilot phase, HISP was introduced in 100 rural villages. Of the 4,959 households in the baseline sample, 2,907 enrolled in HISP, and the program operated successfully through its two-year pilot stage. All health clinics and pharmacies serving the 100 villages accepted patients under the insurance program, and surveys showed that most enrolled households were satisfied with it. Data were collected before the start of the pilot and again at the end of the two-year period, from the same sample of 4,959 households.
PROOF OF IMPACT
Has HISP affected the out-of-pocket health expenditures of poor rural households? It has, and the effect can be demonstrated rigorously. The impact evaluation approach taken for HISP was to select the most rigorous method feasible given the specifics of the project.
The HISP implementation case study offers the following "menu" of impact evaluation methods:
• randomized assignment;
• instrumental variables;
• regression discontinuity design;
• difference-in-differences;
• matching.
All of these approaches aim to identify a valid comparison group so that the true impact of the program on the out-of-pocket health expenditures of poor households can be estimated.
Here we pick up at the stage where the evaluation indicators have been selected and elaborated in detail, the data collection plan is ready, and the data have been properly collected.
We will review the evaluation methodology selected for this case by introducing the concept of the counterfactual, that is, what would have happened to the participants in the absence of the program. Then, within the framework of this article, we will give an overview of the most rigorous evaluation method proposed for HISP and tested on this program.
Two concepts are integral to making accurate and reliable impact evaluations: causation and the counterfactual.
First of all, questions of social impact are questions of causation, that is, of finding answers to questions such as:
Does teacher training improve student test scores? Do additional funding programs for health facilities result in better health outcomes for children? Do vocational training programs increase a trainee's income?
Finding answers to these questions can be difficult. For example, in the context of a vocational training program, simply observing that a trainee's income rises after completing the program is not sufficient to establish a causal relationship. The trainee's income could have increased even without the training: through the trainee's own efforts, because of changing conditions in the labor market, or because of many other factors that affect income.
The challenge is to find a method that will allow us to establish a causal relationship. We must empirically determine to what extent a particular program — and that program alone — contributed to the change in outcome. The methodology must exclude all external factors.
The basic question of impact evaluation is: what is the impact, or causal effect, of the program (P) on the outcome of interest (Y)? The answer is given by the basic impact evaluation formula:
Δ = (Y | P = 1) − (Y | P = 0).
This formula states that the causal effect (Δ) of the program (P) on the outcome (Y) is the difference between the outcome (Y) with the program (in other words, when P = 1) and the same outcome (Y) without the program (in other words, when P = 0).
For instance, if P denotes a training program and Y denotes income, then the causal effect of the training program (Δ) is the difference between a trainee's income (Y) after participating in the program (in other words, when P = 1) and that same person's income (Y) at the same point in time had he or she not participated (in other words, when P = 0).
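To make the arithmetic concrete, here is a minimal Python sketch of the ideal computation the formula describes. The income figures are hypothetical, invented for this illustration:

# Hypothetical potential outcomes for one trainee (illustrative numbers only).
income_with_program = 25_000     # Y when P = 1: income after completing the training
income_without_program = 20_000  # Y when P = 0: the same person, same moment, untrained

# Basic impact evaluation formula: delta = (Y | P = 1) - (Y | P = 0)
delta = income_with_program - income_without_program
print(f"Causal effect of the program: {delta} RUB")  # 5000 RUB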
If this were possible, we would observe how much income the same person would have at the same point in time both with and without the program, so that the program would be the only possible explanation for any difference in that person’s income. By comparing the same person with himself or herself at the same time, we would be able to exclude any external factors that could also explain the difference in outcomes.
But unfortunately, measuring two versions of the same unit at the same time is impossible: at a particular point in time, the person either participated or did not participate in the program.
This phenomenon is called the counterfactual problem: how can we measure what would happen if other circumstances prevailed?
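To see the problem in data terms, here is a minimal sketch (all names and numbers are hypothetical, invented for this example): even if both potential outcomes existed on paper, the evaluator's dataset would contain exactly one of them per person, with the counterfactual missing.

import pandas as pd

# Hypothetical potential outcomes for five people (illustrative numbers only).
# y1 = income if trained (P = 1); y0 = income if not trained (P = 0).
potential = pd.DataFrame({
    "trained": [1, 1, 0, 0, 1],
    "y1": [26_000, 24_000, 22_000, 25_000, 23_000],
    "y0": [20_000, 21_000, 19_000, 22_000, 18_000],
})

# What the evaluator actually observes: only the outcome under the state
# that occurred; the other potential outcome is missing for every person.
observed = pd.DataFrame({
    "trained": potential["trained"],
    "income_if_trained": potential["y1"].where(potential["trained"] == 1),
    "income_if_untrained": potential["y0"].where(potential["trained"] == 0),
})
print(observed)  # each row has exactly one NaN: the unobservable counterfactual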
COMPARISON AND TREATMENT GROUPS
In practice, the task of impact evaluation is to identify a treatment group and a comparison group that are similar in their characteristics, except that one participates in the program and the other does not. Then any difference in outcomes must be due to the program.
The treatment and comparison groups should be the same in at least three respects:
1. The baseline characteristics of the groups should be identical. For example, the mean age of the treatment group should be the same as that of the comparison group.
2. The program should not affect the comparison group, either directly or indirectly.
3. The outcomes in the comparison group should change in the same way as the outcomes in the treatment group if both groups were (or were not) enrolled in the program. That is, the groups should respond to the program in the same way. For example, if income in the treatment group rose by RUB 5,000 thanks to the training program, then income in the comparison group would also have risen by RUB 5,000 had it received the training.
When the above three conditions are met, any difference in outcomes between the two groups can be attributed to the program. A minimal sketch of how the first condition might be checked is given below.
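As an illustration of how the first condition might be checked in practice, the sketch below compares a baseline characteristic (age) across simulated treatment and comparison groups; the data and the choice of a t-test are assumptions made for this example, not part of the HISP evaluation itself.

import numpy as np
from scipy import stats

# Simulated baseline ages (illustrative only): both groups drawn from
# the same distribution, as randomized assignment would tend to produce.
rng = np.random.default_rng(0)
treatment_age = rng.normal(loc=35, scale=10, size=500)
comparison_age = rng.normal(loc=35, scale=10, size=500)

# Condition 1 check: is the mean baseline age statistically
# indistinguishable between the two groups?
t_stat, p_value = stats.ttest_ind(treatment_age, comparison_age)
print(f"mean(treatment) = {treatment_age.mean():.1f}, "
      f"mean(comparison) = {comparison_age.mean():.1f}, p = {p_value:.3f}")
# A large p-value is consistent with balance on this characteristic.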