Cambridge, Massachusetts - When a person lives on less than $2 a day - as some 2.7 billion people around the world do - there isn’t room for a product like a solar lantern or a water filter to fail.

It’s a challenge development agencies, nongovernmental organizations, and consumers themselves face every day: With so many products on the market, how do you choose the right one?

Now MIT researchers have released a report that could help answer that question through a new framework for technology evaluation. Their report — titled “Experimentation in Product Evaluation: The Case of Solar Lanterns in Uganda, Africa” — details the first experimental evaluations designed and implemented by the Comprehensive Initiative on Technology Evaluation (CITE), a U.S. Agency for International Development (USAID)-supported program led by a multidisciplinary team of faculty, staff, and students.

Building an evaluation framework

CITE’s framework is based on the idea that evaluating a product from a technical perspective alone is not enough, according to CITE Director Bishwapriya Sanyal, the Ford International Professor in MIT’s Department of Urban Studies and Planning.

“There are many products designed to improve the lives of poor people, but there are few in-depth evaluations of which ones work, and why,” Sanyal says. “CITE not only looks at suitability — how well does a product work? — but also at scalability — how well does it scale? — and sustainability — does a product have sticking power, given social, economic, and environmental context?”

CITE seeks to integrate each of these criteria — suitability, scalability, and sustainability — to develop a deep understanding of what makes products successful in emerging economies. The program’s evaluations and framework are intended to better inform the development community’s purchasing decisions.

“CITE’s work is incredibly energizing for the development community,” says Ticora V. Jones, director of the USAID Higher Education Solutions Network. “These evaluations won’t live on a shelf. The results are actionable. It’s an approach that could fundamentally transform the way we choose, source, and even design technologies for development work.”

Evaluating solar lanterns in Uganda

In summer 2013, a team of MIT faculty and students set off for western Uganda to conduct CITE’s evaluation of solar lanterns. Researchers conducted hundreds of surveys with consumers, suppliers, manufacturers, and nonprofits to evaluate 11 locally available solar lantern models.

To assess each product’s suitability, researchers computed a ratings score from 0 to 100 based on how the product’s attributes and features fared. “Attributes” included characteristics inherent to solar lanterns, such as brightness, run time, and time to charge. “Features” included less-central characteristics, such as a lantern’s ability to charge a cellphone.
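
To make the scoring idea concrete, here is a minimal sketch of how a weighted 0-to-100 rating could be computed from normalized attribute and feature measurements. The attribute names, weights, and normalization below are illustrative assumptions only, not the actual weighting scheme used in the CITE report.

```python
# Illustrative sketch only: a simple weighted-average score on a 0-100 scale.
# The attributes, features, and weights are hypothetical, not CITE's actual scheme.

def suitability_score(measurements, weights):
    """Combine normalized measurements (each 0.0-1.0) into a 0-100 score."""
    total_weight = sum(weights.values())
    weighted_sum = sum(weights[k] * measurements[k] for k in weights)
    return 100 * weighted_sum / total_weight

# Example lantern: core attributes (brightness, run time, charge time) are
# weighted more heavily than a secondary feature (cellphone charging).
lantern = {
    "brightness": 0.8,      # normalized against the brightest model tested
    "run_time": 0.6,        # normalized against the longest run time
    "charge_time": 0.7,     # 1.0 = fastest charge among the models tested
    "phone_charging": 1.0,  # 1.0 if the lantern can charge a cellphone
}
weights = {"brightness": 3, "run_time": 3, "charge_time": 2, "phone_charging": 1}

print(round(suitability_score(lantern, weights)))  # prints 73
```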

The importance of cellphone charging was a surprising and noteworthy finding, Sanyal says.

“One of the things that stuck with me was that [consumers] were most concerned with whether or not the solar lantern charged their cellphone. It was a feature we never expected would be so important,” Sanyal says. “For some, having connections may be more valuable than having light.”

Learning from partnerships

CITE worked with USAID to select solar lanterns as the product family for its first evaluation. Sanyal says evaluating solar lanterns allowed CITE to learn from USAID’s existing partnership with Solar Sister, a social enterprise that distributes solar lanterns in Uganda, a country where few people have access to light after dark. 

CITE researchers also worked closely with Jeffrey Asher, a former technical director at Consumer Reports, to learn from an existing product-evaluation model.

Evaluating products in a laboratory at MIT or Consumer Reports is very different from evaluating them in rural Uganda, but both are important, says Asher, who is a co-author of the CITE report.

“Consumer Reports’ greatest challenge has been evaluating products that are currently in the U.S. marketplace,” Asher says. “CITE has found that, in developing countries, we have to be even more nimble to keep up with an ever-changing market.”

Putting CITE’s results to work

Over the next two years, CITE will hone its approach, using experimental evaluations of technologies like water filters, post-harvest storage solutions, and malaria rapid-diagnostic tests to design a replicable approach that development professionals can use in their day-to-day work, Sanyal says.

“We’re aiming to make our evaluation process leaner, less expensive, and more nimble, while maintaining rigor. That’s our challenge, looking forward,” Sanyal says.

David Nicholson, director of the environment, energy, and climate change technical support unit at the international development organization Mercy Corps, says evaluation tools like CITE’s can be invaluable in making procurement decisions, especially when organizations are working with finite resources.

“Development agencies like Mercy Corps are increasingly looking to the commercial sector for solutions to long-term development challenges,” says Nicholson, who did not participate in the CITE research. “Evaluations like this can help program managers make informed decisions on which commercial products are most suitable for the program goals and the target communities.”

CITE’s research is funded by the USAID U.S. Global Development Lab. CITE is led by MIT’s Department of Urban Studies and Planning and supported by MIT’s D-Lab, Public Service Center, Sociotechnical Systems Research Center, and Center for Transportation and Logistics.

In addition to Sanyal and Asher, co-authors on the CITE report include Daniel Frey, Derek Brine, Jennifer Green, Jonars Spielberg, Stephen Graves, Olivier de Weck, and Jarrod Goentzel.