There is tremendous power in data; it enables better decision making and wiser resource allocation. There is good reason that we’re having a national conversation about how the social sector can become more data driven. Much is at stake. Many believe that billions of “impact” dollars might be unleashed if social enterprises—whether they are for-profit, nonprofit, cooperatives (co-ops), or B Corps—can better demonstrate their performance and impact. The public sector is aiming increasing resources at programs that provide “evidence” of effectiveness. Programs such as the Social Innovation Fund (SIF) and the Department of Education’s Investing in Innovation (i3) Fund reward evidence-based programs with higher funding levels.

However, progress is slow. Our conversations are bogged down by seeking agreement on things such as top-down industry standards that might apply to vastly different types of organizations and efforts. Standardization sounds good, but the social sector is full of diversity. We create tools that require organizations to fill out survey forms with self-reported data, yet the submitters rarely see any real benefit. Many times, their data never makes its way back to them, let alone in a form that can lead to actionable insights. Is it any wonder that these databases are hard to populate?

Topics such as “big data,” although thrilling us with their potential for breakthrough insights, do not help us focus on the building blocks of a desperately needed data infrastructure. The road to building data infrastructure for the social sector is through tools that create tangible benefits for enterprises and, we would argue, for whole sectors as well. The data must enable better decisions and unleash resources, either through cost savings or by attracting more capital to worthy programs. These tangible benefits will drive the culture change inside organizations that is necessary for data to become a strategic driver of performance in the social economy. In the end, being data driven is all about culture.

Organizations around the country have begun experimenting with “bottom-up” approaches that unlock the more tangible and immediate benefits of better performance data. Here we discuss two such systems, CoMetrics (formerly CoopMetrics) and HomeKeeper, and although we are still learning what it takes to make these tools sustainable, we think that they illustrate an important approach for using data to drive both social and financial performance in the social sector.


In the late 1990s, Whole Foods Market was growing very fast. This growth was, in many ways, a triumph for a retail sector that had been promoting healthier food for decades, but many of the neighborhood-based, community-owned co-op stores that pioneered the sector were now struggling to compete with this well-capitalized and centrally managed chain. Leaders in the co-op grocery sector realized that they would have to change to keep their social enterprises alive.

The food co-ops came together to create a new financial data platform, now called CoMetrics, which allowed them to work together to improve the financial health of their individual businesses and their sector as a whole. The new tool allowed each store to track its own financial performance on a set of standard metrics and to see the detailed performance of its peers. Sharing data in this way made it possible for the co-ops to create a shared purchasing program based on shared financial risk. The program gave them access to goods and credit on terms that made them competitive with a national chain.

CoMetrics has grown beyond the natural foods sector and built a suite of tools for organizations ranging from ethanol producers to affordable housing developers that want to better understand their own performance, and measure that against a set of peers. CoMetrics pulls financial information from disparate accounting systems and maps it to a common chart of accounts. To submit their data, participants only have to export existing files. There are no spreadsheets to fill in and no separate forms to keep track of. In fact, some accounting programs automatically send trial balances straight to CoMetrics with the click of a button. The common chart standardizes the data, which is stored in a multidimensional database for ease of analysis. The initial work of agreeing on the common chart and mapping the data for the first time takes some effort, but once that is done, quarterly uploads of the data become routine. Furthermore, the process forces issues of common definitions and accounting best practices to the forefront. When successfully resolved, greater standardization of terms and practices will benefit the whole sector.
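
The mapping step described above can be pictured with a few lines of code. This is a hypothetical sketch, not CoMetrics code: the account codes, category names, and function are invented to show how a local trial balance might be rolled up to a common chart of accounts.

```python
# Hypothetical sketch of mapping a store's local chart of accounts to a
# shared, standardized chart. All codes and category names are invented.

# One-time mapping from local account codes to the common chart
LOCAL_TO_COMMON = {
    "4010": "revenue.grocery",
    "4020": "revenue.produce",
    "5010": "cogs.grocery",
    "5020": "cogs.produce",
}

def standardize_trial_balance(trial_balance):
    """Roll a local trial balance (account code -> balance) up to the common chart."""
    common = {}
    for account, balance in trial_balance.items():
        key = LOCAL_TO_COMMON.get(account)
        if key is None:
            # Surfacing unmapped accounts is what forces questions of common
            # definitions and accounting practice to the forefront.
            raise KeyError("Unmapped account: " + account)
        common[key] = common.get(key, 0.0) + balance
    return common
```

Once the mapping table exists, each quarterly upload is just another pass through the same translation, which is why the process becomes routine after the initial effort.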

CoMetrics creates interactive reports that provide a standardized view of the financial performance of each participating business and enables companies to gauge their performance against their closest peers. In addition, this sector-wide data platform allows networks, trade associations, and funders to gain a high-level overview of the financial strengths (and weaknesses) in a sector.

The tools created by CoMetrics are being used to create data cultures that improve the financial performance of enterprises and sectors. Admittedly, creating tools for financial data is easier than creating tools for impact data, but the same principles can apply to both. The next example, the HomeKeeper Project, demonstrates that a very similar approach can be used to unlock the power of social impact data.


HomeKeeper is a data system created by Cornerstone Partnership, a program of Capital Impact Partners, with key support from the Ford Foundation. Cornerstone Partnership is working to build and strengthen the field of nonprofit and government agencies that help lower-income families to purchase homes while preserving lasting affordability of those homes for future generations of homebuyers.

The HomeKeeper application helps manage the day-to-day tasks of running an affordable homeownership program. It is built on the Salesforce.com platform and available on the Salesforce AppExchange. For most users, the HomeKeeper app replaces half a dozen or more different spreadsheets users previously maintained to keep track of all the moving parts in their programs. However, unlike other administrative data systems, HomeKeeper was built from the ground up to answer key questions about the long-term social impact of these programs. Each instance of the HomeKeeper application automatically submits anonymous transaction-level data on each home sale to the HomeKeeper National Data Hub. The data that is shared includes household demographic data (stripped of identifying information); the size and age of the home purchased; the purchase price; and detailed information about financing and public subsidy sources (again, stripped of identifying information). Cornerstone uses the aggregated data to produce standardized and accessible social impact reports that help individual programs understand their social performance relative to other participating organizations. At the same time, by standardizing data among many organizations, Cornerstone is able to understand and analyze the impact of the sector as a whole.
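
The stripping of identifying information before submission can be sketched in code. This is illustrative only; HomeKeeper's actual field names and submission mechanics are not shown here, and every field name below is assumed.

```python
# Illustrative sketch (not actual HomeKeeper code) of preparing a home-sale
# record for submission to a shared data hub. Field names are invented.

IDENTIFYING_FIELDS = {"buyer_name", "street_address", "phone", "email"}

def anonymize_sale(record):
    """Return a copy of the sale record with identifying fields removed."""
    return {k: v for k, v in record.items() if k not in IDENTIFYING_FIELDS}

sale = {
    "buyer_name": "Jane Doe",        # identifying: stripped before submission
    "street_address": "12 Elm St",   # identifying: stripped before submission
    "household_income": 42000,       # demographic: retained
    "household_size": 3,
    "purchase_price": 185000,
    "home_sqft": 1100,
    "home_year_built": 1978,
}
submitted = anonymize_sale(sale)
```

Because the stripping happens automatically at submission time, programs contribute comparable transaction-level data to the hub without any extra data-entry burden.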

HomeKeeper and CoMetrics are different in many ways but they share an underlying approach that has proved powerful in both cases and has the potential to be truly transformative.

Performance Management for a Whole Sector

These two systems grew in response to two problems common to social enterprises. First, small organizations have as much to gain from the use of performance data as large corporations do, but small organizations generally cannot afford to build the kinds of systems necessary to address their most important performance-management needs. Second, to bring in new resources to expand a sector, it is helpful to have better data on the overall performance of the sector; however, it is almost impossible to compare and consolidate data from many small organizations that each track different metrics.

CoMetrics and HomeKeeper both grew from a common recognition that these two problems are easier to solve together than separately. That is, the way to solve the second problem and obtain quality standardized data on the performance of a whole sector is to help individual social enterprises work together on a shared solution to the first problem so that they can each gather better data to manage their performance internally.

Teaming Up to Capture Enterprise Performance Data

The Champlain Housing Trust (CHT) in Burlington, Vermont, sells homes at deeply discounted prices to lower-income families that would otherwise be priced out of homeownership. However, they make an agreement with those families that is becoming increasingly common as homeownership subsidy sources run scarce: in exchange for help in buying a home, CHT maintains an equity stake in the property, the value of which is passed to the next family that needs help to purchase the unit when it is sold. When the organization was founded in 1984, this was still a very new idea and no one knew how well it would work. For example, could homeowners build meaningful wealth while the program preserved affordability?

By 2008, when CHT had sold more than 400 houses and seen 200 of those homes resold by the original buyers to other lower-income buyers, they decided that they had enough experience to finally ask hard questions about whether the experiment was working. CHT dedicated significant resources for an entire year to tracking down its paper files on each home sale, entering the relevant data into a spreadsheet, and analyzing the results. The resulting report focused on whether the program had delivered on its initial promises: Did owners build wealth? Did the program preserve affordability? Were subsequent buyers able to purchase without any new public assistance?[1] They found that the homes became slightly more affordable with time (reselling at prices that were affordable to a lower-income group than the initial buyers), while the sellers realized enough equity gain that 70 percent were able to purchase market-rate homes with no public assistance. These results were encouraging but also quite surprising. Because the work of compiling the performance data was so daunting, they had operated for decades without really knowing if their program was doing what it was designed to do.

CHT is not alone in facing this driving-in-the-dark dilemma; in fact, they are unusual among comparable homeownership programs only in that they made the resources available to answer these big questions.

There is a trend in philanthropy to provide nonprofit staff with training in designing systems for data collection and evaluation and then expecting each organization to manage an ongoing research effort on its own. For the smaller organizations that constitute the bulk of the sector, this is an unrealistic expectation given limited resources and the lack of data cultures. When Cornerstone Partnership was formed, we realized that it was not practical to expect every homeownership program to undertake the kind of research project that CHT had, but we wanted every organization to be able to see its performance in the same way.[2]

Rather than asking each organization to construct its own “theory of change” and related impact metrics, and then build a unique data collection program, Cornerstone convened stakeholders from more than 100 organizations to identify shared values and common social impact goals. Cornerstone then constructed a set of social impact metrics focused on the social goals shared by most of these programs. These standardized measures are specific to this particular type of housing program, and most would not be relevant to any other type of program. At the same time, they surely do not capture every impact that is important to every participating organization. Just as the food co-ops did through CoMetrics, by bringing together a large number of sufficiently similar programs, we were able to spread costs in a way that made practical the kind of thoughtful and robust data collection project that no single organization could have undertaken alone.

Understanding Sector Performance

At the launch of Cornerstone Partnership, it was clear that growth in the sector would require better long-term performance data—not just in one community but across the country. Cornerstone commissioned the Urban Institute to conduct a formal evaluation of seven shared equity programs; the study concluded that other programs could achieve similar results in very different housing markets and with different affordability mechanisms.[3] However, the process took a full year and thousands of hours of staff time. Although this kind of formal evaluation is essential, HomeKeeper was born out of the recognition that we needed a more everyday approach to performance measurement.

To obtain relevant data about long-term outcomes, we needed all the programs to collect a standardized set of data at the time that they were providing service. Cornerstone convened a working group to define metrics and design a data system that would consolidate outcome data from the entire sector. Most of the participating organizations reported that they already collected the key data necessary to track the common impact metrics, and we initially assumed that the primary challenge would be to convince participating organizations to change their existing databases to collect the data in a more standardized format. However, as we looked more closely, we found that none of the organizations had anything approaching a formal data system. They were tracking different data in different systems and, in many cases, key information was not tracked in any electronic system. If we wanted impact data, we would have to help our members build better administrative data systems. At the time, this seemed like a setback, but in hindsight, it was a lucky break. We wanted a top-down view of the sector, but to find a useful view we had to start from the bottom.

Although HomeKeeper is still under revision, the users consistently mention how much they love it. It has yet to develop the bells and whistles that people have come to expect from slick software, but it is designed from the ground up around the very specific tasks that an administrator of a homeownership program has to complete every day. It makes people’s jobs easier. Putting all of their data in one place makes it possible to answer questions in seconds that previously took hours to answer—if they could be answered at all.

Each HomeKeeper user has his or her own version of HomeKeeper, which can be customized and modified as needed. Only the relatively small number of fields that are used in calculating the social impact metrics cannot be modified by users. HomeKeeper was built so that as programs sell homes, the system continuously submits transaction data to the national data hub. Users have the ability to see and correct data that is being submitted, but they don’t have to take any special action to submit it.

In the HomeKeeper National Data Hub, data are aggregated from all the participating programs and standardized social impact reports are produced. These reports help people see how their programs are doing on the common impact metrics and benchmark their performance against the results from their peers.

On the strength of the Urban Institute research and the initial investment in HomeKeeper, Cornerstone was able to secure a $5 million competitive grant from the federal Social Innovation Fund (SIF). The SIF was designed to support social program innovations that are backed by evidence of effectiveness. Cornerstone Partnership is working with the Urban Institute to complete a formal evaluation of its SIF investments, and HomeKeeper is providing a ready platform for data collection for that study. Although most HomeKeeper users are not involved in the SIF grant program, the Urban Institute is able to access data collected through HomeKeeper by the SIF grantees to conduct its evaluation.

Figure 1. Sample Risk Matrix Report. This visualization makes it easy to pinpoint underperformers, who are then given technical assistance to improve their performance. The multidimensional database underlying the presentation of the data is used for deeper analysis of problems and can point to resolutions. Throughout the Great Recession, no defaults occurred under the national purchasing agreement. This outcome was possible only because co-ops had access to data in an actionable form and the ability to respond quickly to problems.


In the food sector, which operates on notoriously slim margins, the national organization that sponsors CoMetrics, National Cooperative Grocers Association (NCGA), has created a $1.6 billion shared purchasing program that delivers an increase of more than 1 percentage point in gross margin, on average, for its members. Because the organizations share risk to gain this kind of value, NCGA must regularly take the pulse of members’ financial performance. CoMetrics creates a quarterly risk matrix report (Figure 1), which gives a snapshot of performance and an easier, faster way to identify potential trouble.

Some Lessons

Although both of these experiments in shared data are relatively new, they point in a very promising direction. These two projects have developed some practices that seem worthy of widespread implementation in any shared data project.

Telling a Story with the Data

As hard as it is to collect useful data, it is even harder to put the data to work to change policy or practice. People want to use data, but when they are confronted with complex tables and charts, meaning can be elusive. The human mind processes stories more readily. A key challenge for data projects such as these is to assemble data into a narrative that is relevant and actionable. This is easier said than done because with most data there is no obvious narrative.

This challenge can be addressed in two important ways. First, organize data analysis around specific, plain-language questions that practitioners have identified as important. Second, provide peer benchmarks, which put these answers in context, make them more concrete, and provide a natural narrative framework that makes it easier for people to understand and act on the data.

Leading With Questions That Matter

Before building the HomeKeeper data system or designing the Social Impact Report, Cornerstone convened more than 100 industry stakeholders in three different daylong meetings to discuss what success looks like for an affordable homeownership program.[4] Although Cornerstone ultimately developed mathematical formulas that produce standardized “metrics,” it started with plain-language statements about what an ideal program “should” accomplish. For example, everyone seemed to agree that a successful program should serve families that were otherwise underserved and should have a low foreclosure rate.

For each of these “should” statements, there is a corresponding performance question (e.g., “Who did you serve?” or “How many foreclosures were there?”). To build the HomeKeeper system, Cornerstone identified the data that organizations currently were or easily could be collecting that were relevant to these questions. Next came intensive technical work to develop metrics with precise definitions for each of the elements. However, when we designed the HomeKeeper Social Impact Report, we returned to the big-picture questions. The HomeKeeper reports are structured around 27 plain-language questions. For each question, there are one to three charts or metrics that are meant to answer the question (Figure 2).

Figure 2. Sample HomeKeeper Report


The questions include:

  • What are the income levels of homebuyers?
  • Are buyers paying more than they can afford?
  • Is the program preserving affordability?
  • How often were homes sold?
  • What return on investment did sellers receive at resale?
  • How many buyers still own a home after five years?
  • Are foreclosures common?

These reports are automatically generated by our data system with little or no human editing, but they were designed to follow a clear storyline as much as possible. In many cases, in addition to charts and graphs with the relevant metrics, the report includes a plain-language restatement of the finding in the chart so that the “answer” appears twice—once as a chart and once in a sentence. The hope is that by structuring the meaning into the presentation in this way, the data becomes more accessible and ultimately easier for people to use to improve their work.

Finding Common Cause

The power of sharing performance data is not always immediately obvious. At one point, CoMetrics founder Walden Swanson was conducting a data dive with a peer group of produce managers from dozens of retailers in the Northeast. To Swanson’s surprise, a general manager (GM) of one of the stores walked into the meeting.

The GM pulled Swanson aside to let him know why he was there. The reports generated by CoMetrics showed a decline in performance of his store’s produce department. He wanted to fire his produce manager, and he was there to find and recruit the best performing produce manager in the region.

As the GM watched from the back of the room, Swanson led the produce managers through their numbers. Because results are standardized, individual store performance can easily be compared with the peer group. It did not take long to discover that everyone’s produce department performance was down. The group grappled with the reasons why and concluded that there was a common cause: it was an El Niño weather year and rising produce prices had pushed everyone’s margins down.

The store the GM managed was performing in the top quartile; as it turned out, he already had one of the highest performers. Without this kind of comparative benchmark, though, it is impossible to understand what is driving overall performance.

Closing the Loop: Taking Data in a Full Circle

Too much of the social sector’s data flows in one direction only—away from the people doing the work and toward the people funding the work. Obviously, funders have a legitimate need for data. However, there is a concern that some funders cannot make real use of the data that they collect because neither the funder nor the grantees have any confidence that the data accurately reflect what is happening on the ground.

There are many reasons for this lack of confidence. Whenever we try to aggregate data from multiple organizations, real work is involved in translating the way each organization codes its information into whatever the standard is. In the worst cases, grantees are left to their own devices to struggle with this problem and, in the face of limited resources, they do whatever is easiest even though that might generate misleading data. However, even in the best cases in which grantees diligently attempt to fit their square-peg data into the funder’s round holes, they have to make many assumptions just to make things work. The grantees never know if they are doing it “right.” Whoever receives and attempts to analyze the resulting data cannot know what those assumptions were and, in all likelihood, different grantees made differing assumptions. The result is that the aggregate data are less and less useful to the funder. This is an inherent challenge facing any data aggregation project, no matter how well designed and executed.

HomeKeeper and CoMetrics have developed a similar response to this challenge. Rather than pull data one way only, we take the data around a full circle; after being aggregated and analyzed, the data returns to the hands of the very people who created it. Sometimes what they see makes sense to them and sometimes the results look very wrong and they speak up. Sometimes we have to change the way we interpret the data that they are providing and sometimes they have to change the way they enter the data in the first place. Either way, we both end up with greater confidence that the end analysis is “right.”

When the data makes a round trip, end users often discover outcomes previously unrealized. There was wide agreement among HomeKeeper users that all homeownership programs should be ensuring that their homeowners were paying no more than an “affordable” share of their monthly income for housing costs. The Department of Housing and Urban Development (HUD) considers any household that pays more than 30 percent of its income for housing to be “cost burdened,” but most HUD programs do not strictly prohibit selling to buyers who will be cost burdened so long as they meet other standards. Although many of our stakeholders believed that 33 percent or even 35 percent might be a more appropriate standard, they agreed that this measure was a key part of evaluating a program’s performance. Therefore, when data started flowing into the HomeKeeper hub, we were surprised to see that nearly 20 percent of homebuyers were paying housing costs that initially represented more than 33 percent of their income.
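
The cost-burden arithmetic behind this finding is simple. A minimal sketch, with invented income and housing-cost figures; HUD's 30 percent threshold is the default here, and the 33 percent variant that stakeholders discussed can be passed in:

```python
# Cost-burden check: housing cost as a share of gross income.
# The 0.30 default reflects HUD's "cost burdened" threshold.

def cost_burden_ratio(monthly_housing_cost, monthly_gross_income):
    return monthly_housing_cost / monthly_gross_income

def is_cost_burdened(monthly_housing_cost, monthly_gross_income, threshold=0.30):
    return cost_burden_ratio(monthly_housing_cost, monthly_gross_income) > threshold

# A household earning $3,500/month and paying $1,200/month for housing:
ratio = cost_burden_ratio(1200, 3500)  # about 0.343, above both thresholds
```

The subtlety the article goes on to describe is not in this arithmetic but in its inputs: how income is recorded (Social Security, child support) can make a household's ratio look higher than it really is.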

Because the program administrators themselves are a key audience for the HomeKeeper Social Impact Reports, we were able to receive quick and clear responses to this finding. Users viewing the HomeKeeper report can click on individual data points and “drill down” to the underlying transaction data, and one further click will open their Salesforce account to the relevant homebuyer’s record so that they can make changes or explore further. Users confronted with this unexpected outcome could easily perform a “reality check” on the average and tell us if we were doing something wrong or if they were.

What we learned was that the problem resulted from a number of different situations that fell into three general categories:

  1. Some programs were delegating to mortgage lenders the job of ensuring that purchases were affordable. In the past, lenders had been unwilling to lend to buyers who would be cost burdened, but like so many other lending standards, this one was relaxed significantly in the early 2000s. In these cases, the impact report was doing its job and pointing out a failure on the part of the programs.
  2. Just as often, we found that complexities related to the entry of a household’s income (particularly related to income sources such as Social Security or child support) made buyers who actually were not cost burdened appear to our report to be paying a higher share of income than they really were. This was a failure of our data system to consistently capture all the relevant information.
  3. But after accounting for both of these cases, a large number remained in which programs had knowingly allowed buyers to purchase even when their total housing costs exceeded the cost-burden standard. Most frequently, this occurred because the families had been facing an even higher cost burden in their prior housing situation. HUD rules generally allow this kind of exception but we were surprised by how many buyers fell into this category. This is a failure of the standard itself. As housing costs have risen, families have become accustomed to paying far more than one-third of their income for housing, and what was once a rare exception has become something of the norm.

Without this kind of open-ended exploration of the data directly alongside the end users, we could never have made sense of this result. We would have had to choose between wrongly concluding that the data pointed to an enormous failure of our programs to meet an appropriate standard or wrongly concluding that there was no cause for concern because our data was flawed. As it turned out, there was some cause for concern. The data called out a practice that was leading some programs to sell to buyers who might be in over their heads, but the practice was nowhere near as widespread as it looked at first. Only by bringing the data full circle back to the ground level data providers could we have developed enough confidence in the results to call attention to the real problem.

The F. B. Heron Foundation appreciates how interacting around the data can contribute to better performance management of its investees. It has broken from the pack by investing in CoMetrics to create a common platform through which recipients of its newest investment product, Philanthropic Equity (PE), will report their results. PE is funding targeted to the growth of enterprises (nonprofit, for-profit, or other legal forms) and not to specific projects. CoMetrics has created a common chart of accounts onto which each investee has been mapped. Each quarter, investees will submit both financial and social impact data, which can be standardized, reported to Heron through an interactive web interface, and then returned to investees for their own analysis.


Everyone wants better data, but it is hard to justify taking scarce resources away from delivering social impact and putting them into measuring social impact. In the private sector, data has already won this rhetorical battle: There is a widespread recognition that companies that have invested in better data have frequently been able to use that data to drive improvements in financial performance that more than justify even very significant data system costs. Better data helps companies do everything else more effectively and efficiently.

But in the social sector, we have not yet proven this point. HomeKeeper and CoMetrics show that by working together, social enterprises can marshal data to drive meaningful insights, but we have yet to fully see those insights consistently hitting the (social) bottom lines of participating organizations. Data projects such as these will be fully sustainable only when participating organizations tap the power of the data to drive regular and ongoing incremental improvements in how they deliver social impact. Unless organizations can use better data to make more of a difference, better data will be an expensive luxury both for organizations and their funders.

[1]   J. E. Davis and A. Stokes, Lands in Trust, Homes That Last: A Performance Evaluation of the Champlain Housing Trust (Burlington, VT: Champlain Housing Trust, 2009).

[2]   At the time, Rick Jacobus was Director of Cornerstone Partnership and Annie Donovan was Chief Operating Officer of Capital Impact Partners.

[3]   K. Temkin, B. Theodos, and D. Price, Balancing Affordability and Opportunity: An Evaluation of Affordable Homeownership Programs with Long-term Affordability Controls (Washington, DC: Urban Institute, 2010).

[4]   Cornerstone Partnership, “Stewardship Principles for Affordable Homeownership.” http://affordableownership.org/principles/.