We have all heard some version of the joke that "50% of statistics are made up" (with the percentage itself made up), but it is a little worrying to think about how such numbers influence our work and even our daily lives. My work in Monitoring and Evaluation (M&E) in Indonesia has made me think more about what numbers really mean, how data are used in global health and development, and how data influence programming toward evidence-based approaches.
When I first arrived in Jakarta, I was assisting with the final evaluation of a project focused on improving breastfeeding practices. Exclusive breastfeeding for the first six months of life and early initiation of breastfeeding have been shown to be critical for the health of both mother and child. Despite this evidence, the exclusive breastfeeding rate in Indonesia fell eight percentage points, from 40% in 2003 to 32% in 2007, and continues to decline (DHS 2007). Over the same period, bottle feeding within the first six months increased by 11%, and only 44% of infants in Indonesia were breastfed within the first hour of life (DHS 2007).
These rapid declines in breastfeeding practices have been driven largely by aggressive marketing from infant formula companies and by misinformation. This considerable shift in behavior has prompted a variety of innovative interventions aimed at promoting improved breastfeeding practices.
The project I was assisting with focused on creating supportive environments that would encourage mothers to breastfeed. One element of the program was the use of mother support groups to increase rates of exclusive breastfeeding and early initiation. A cluster randomized controlled trial, combined with before-and-after data, assessed the effectiveness of the mother support group model in improving breastfeeding practices. The groups had a significant positive effect in one intervention area, no significant effect in a second, and were associated with a significant decline in practices in the third.
With so many pieces of data in the evaluation (three intervention areas, each measured on both early initiation and exclusive breastfeeding), different people interpreted the results in different ways. Many chose to quote mainly the figures that supported the intervention and to dismiss the negative findings from the other areas as the result of differences in implementation. That is only one possible reading, however; the findings could just as plausibly indicate that the model is not effective overall.
The mother support group model – the very model being evaluated – was then presented in another project proposal using the most favorable interpretation of the results, and it was selected for implementation. The model will therefore be replicated in a new project in Indonesia, even though it may not be the most effective approach.
As I began to work on other projects, I noticed others' difficulties with the M&E process. The importance of randomization, how to determine a sample size, and how to choose a sampling method were often unclear to project M&E officers, who may not have formal training in statistics or research. For example, some surveys used a sample size of 20, or compared "before and after" data that had actually been collected in two different areas and so were not comparable. There was sometimes little awareness of these data collection problems, yet still a need to have numbers to report.
Through my experience in Indonesia, I have come to think that it is difficult to have a clear and transparent M&E process without also strengthening global health M&E requirements at a broader level. If data can be interpreted in different ways, there is an incentive to use positive interpretations. For instance, NGOs may not be completely objective when presenting the results of their projects because they want to win more funding from donor organizations.
Similarly, without more rigorous guidelines, many organizations simply do the best they can with the resources and capacity they have, without recognizing the gaps in their data collection process. If the process were more standardized across global health projects (through donor requirements or an independent evaluation organization), we could also more easily compare projects and build a stronger knowledge base about global health interventions.
Data are useful for decision-making, but in applying my epidemiology education during my practicum I have also come to appreciate the importance of understanding their potential problems. When we use numbers to make important decisions, particularly about resource allocation, we must ensure they are as objective and accurate as possible. The organization I am working for is making a strong individual effort to improve its monitoring and evaluation process, and my experience has taught me to always ask more about where statistics come from and how we can improve their quality.