Historically, governance decisions have been informed by a range of data, from crop yields to military inventory to population size. As early communities shifted from hunter-gatherer to agrarian societies, the need for data and record-keeping often served as the impetus to create systems of writing, and the systematic use of data allowed for increased complexity in governance. The data practitioner can take pride in the notion that civilization is often built at least in part on measurement.
As our capacity to measure has increased, though, so have information clutter and the difficulty of zeroing in on what matters most for our decision-making processes. Community indicators projects serve a critical public purpose in distilling data into a prioritized set of measures that are chosen to shape action and policy responses.
In creating a community indicators project, participants select a set of data points that describe the well-being of their community. This set can range from a dozen to well over 100 measures that provide a snapshot of the state of the community. The process is repeated on a regular basis, generally annually, as the group reviews both the trend lines and the story the data tell about the community—where progress is being made and where the situation is worsening.
In the last three decades, the rapid development of community indicator systems has accelerated understanding of how to select and use data to generate community change. The creation of different frameworks, and the resulting dialogue from distinct perspectives, adds to the knowledge base as communities search for strong measures to inform policy and action. Two key, interrelated questions have emerged from the debate: Which measures matter, and—perhaps more importantly—who decides which measures matter?
What to Measure: Finding the “Perfect” Indicators
That which gets measured has a significant impact on priority-setting, decision-making, and policy. Finding the right metrics has long been a staple of management textbooks and political campaigns. For the indicators movement, finding the right measures has been an ongoing effort. This effort begins with understanding the characteristics of indicators: data whose qualities distinguish them as more effective than other measures. A single measure outside of a framework is only a statistic. Good indicators are statistics with direction: data whose trend lines tell a story of movement and identify the distance between the actual and the desired. Great indicators add context and allow for projection of future outcomes; by examining anticipated trend lines, communities can develop policy and take action to bend those lines. Finding good indicators takes work, but many good indicators exist. Great indicators are much more difficult to home in on. The community indicators movement is in search of great indicators.
For example, the infant mortality rate for Duval County, at nine per 1000 live births, is a statistic. Adapting that statistic to become a good indicator means situating it in a trend line that can tell a story—for instance, time series data can show that the infant mortality rate in Duval County declined by 25 percent in a four-year period. A great indicator goes beyond trend reporting to create priorities for community action. A great indicator may help identify that the racial disparity in infant mortality rates increased in the same time period, and that African American infants in Duval County are still more than twice as likely to die before their first birthday as white infants. Another approach might place the rate in context with state and national rates to identify whether the county faces geographically based disparities.
A third might look at smaller geographical subsets to identify “hotspots” of negative outcomes. Great indicators can thus highlight specific community needs and challenges, and can point to potential targets for intervention.
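The progression from statistic to good indicator to great indicator can be sketched in a few lines of code. The numbers below are illustrative stand-ins that loosely echo the Duval County narrative above; they are not official vital statistics:

```python
# Illustrative infant mortality rates per 1,000 live births
# (hypothetical values echoing the narrative, not official data).
rates_by_year = {2010: 12.0, 2014: 9.0}

# A statistic: a single number with no direction.
current_rate = rates_by_year[2014]

# A good indicator adds a trend line: percent change over the period.
pct_change = (rates_by_year[2014] - rates_by_year[2010]) / rates_by_year[2010] * 100
print(f"Four-year change: {pct_change:.0f}%")  # a 25 percent decline

# A great indicator adds context, such as a racial disparity ratio.
rate_black_infants = 13.0   # hypothetical subgroup rates
rate_white_infants = 6.0
disparity_ratio = rate_black_infants / rate_white_infants
print(f"Disparity ratio: {disparity_ratio:.1f}")  # more than twice as likely
```

The same pattern extends to the other two approaches named above: comparing the county rate against state and national rates, or computing the rate for smaller geographies to surface hotspots.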
Great indicators are responsive to changes and have strong reliability and validity. They are clear in what they measure, filtering out extraneous factors to focus on the issue. They measure an important community condition and are able to be affected by public policy or community action. Moreover, they are powerful storytellers of the community, evoking a response among the public. They are compelling narrators of community conditions and are accessible to a broad range of actors. They anticipate community problems with enough time for action to create tangible outcomes. They spur change and respond with results.
The challenge with indicators is that they are, by nature, descriptive and not prescriptive. They describe what is, but not what needs to be done about what is or how to create what should be. The focus of every indicator project—their reason for existence—is to prompt policy change. Indicators are more than just data curiosities; they are intended to impact policy discussion and action. To be great, then, indicators must both describe a relevant aspect of the community and be linked within a framework that invites a policy response. The difference between interesting and inspiring data may be small, but it is a critical distinction.
What to Measure and Who Decides?
A common thread among different indicator projects is the desire to find the right measures to influence policy and action. Thousands of indicator sets have been created, each reflecting a fundamental desire to identify the metrics that would shed light on an important issue and influence the decision-making process of the appropriate governance systems. The questions of what measures matter—and more recently, who is allowed to take part in deciding what measures matter—undergird the community indicators movement. Because indicators influence public policy, those who select the indicators will have a stronger influence on public policy than those who do not. Indicator frameworks are often designed by people with vastly different community roles. For example, an activist and an academic will differ not only in the decisions they make, but also in their decision-making processes. These differences persist even as projects develop their own “indicator selection guidelines” to inform their decision making. Factors that influence who participates in measurement selection and what measures are selected include the following:
Some indicator systems take a neutral approach to geography: They measure key factors and report these factors for different geographies for convenience and comparison. The geographic unit does not motivate measure selection. It is not part of who decides what to measure or what is measured. For example, the Annie E. Casey Foundation’s KIDS COUNT report is designed to measure the well-being of children, wherever they live in the United States.
Other indicator systems are actively focused on a geographic area (e.g., state, metropolitan, or neighborhood level). In these systems, what matters is specific to a particular geography; those who determine what to measure are based within that geography, both aided by the strengths and constrained by the limitations of extensive localized knowledge. The Jacksonville example discussed later falls into this category.
When geography is the driving force, the data system may include rich specificity in localized issues. The trade-off here is that comparing local indicators with other geographies, or placing local trends within the context of broader factors, may be difficult because the localized data may not exist at larger scales. On the other hand, national or global indicator sets may have less applicability to local issues and may be less useful for local decision-making processes.
Framework and Focus
The organizing framework selected often influences what indicators are measured. Since the 1990s, four frameworks have been generally used to determine what mattered most: quality-of-life, sustainability, healthy community, and government benchmarking. Quality-of-life projects described a broad array of issues in the external environment of a city. Sustainability initiatives began with an environmental emphasis, whereas healthy community projects began by looking at health and associated determinants. Government performance benchmarking efforts evaluated the effectiveness and efficiency of public efforts to improve the community.
In the first decade of the 2000s, a fifth framework, centered on subjective well-being, gained momentum. These initiatives focused measures on questions of happiness in a population and sought policy changes to improve public happiness. Noted examples are Bhutan’s Gross National Happiness Index, the Organisation for Economic Cooperation and Development’s (OECD) Better Life Initiative, and the United Nations’ (UN) World Happiness Report.
The framework of a system drives who is involved in the indicator selection. An indicator system operating from a sustainability framework may engage a number of environmentalists in determining the best measures to describe progress toward sustainability. Another system, developed in a healthy community framework, is more likely to include public health officials in making decisions about what is or is not measured. The same is true for those indicator reports with a focus on economic development, or those framed around social responsibility or racial disparities—the framework influences (and is influenced by) those who share common values.
A primary implication of setting the framework is that indicators that don’t align well with the chosen framework may be excluded.
In restricting the indicator set’s scope, the potential for the indicators to point to innovative or cross-sectoral solutions to issues may be limited. In short, the power of the indicators to illuminate the state of the community and influence changes in programs or policy may be curtailed precisely because the intended policies can be hard-wired into the indicator selection process, leaving little room for unanticipated learning.
Additionally, a strong framework focus can lead to the inclusion of suboptimal indicators to capture particular aspects of a hard-to-measure issue. For example, if the framework suggests that vibrancy in cultural arts is an essential desired quality for the community, but the community lacks effective ways to measure cultural vibrancy, substitute indicators such as “museum attendance” may be shoehorned into the indicator set to plug a perceived hole in the measurement system. This can occur even when the identified indicator is seen as merely a weak approximation of the desired aspect of community life.
Rigor Versus Relevance
Many initiatives identify indicators in a process that is in tension between the ideal and the available, the possible and the practical. In the 1980s, the community indicators movement began by engaging local residents to self-determine the measurements that matter for progress. For initiatives in which decisions are made by citizens, the selection process is generally influenced by a desire for familiar, simple, easily understood measures. By contrast, academic research on data for improved decision making often points to measures with greater rigor, which may also have increased complexity.
As a result, community-based decisions about what to measure can frustrate a researcher—one famously referred to community indicators as a “folk movement” and argued for increased scientific rigor in their methodologies. The metrics may be seen as too simplistic, or too limited, to answer questions of policy importance, and may not lend themselves to robust policy analysis. Conversely, decisions made by researchers may produce results that are too far removed from the lived experiences of residents, isolating residents both from the information presented and from the opportunities for policy action. For example, using the Gini coefficient rather than poverty rates has implications for how inequality is understood in a community; the former provides more information but is less accessible to the layperson, whereas the latter has numerous flaws in construction but has the advantage of familiarity.
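The Gini-versus-poverty-rate trade-off can be made concrete. The sketch below computes both measures for a small income list; the incomes and the poverty threshold are invented for illustration:

```python
def gini(incomes):
    """Gini coefficient from the standard sorted-rank formula."""
    xs = sorted(incomes)
    n, total = len(xs), sum(xs)
    # Equivalent to the mean absolute difference divided by twice the mean.
    weighted = sum((2 * i - n + 1) * x for i, x in enumerate(xs))
    return weighted / (n * total)

def poverty_rate(incomes, threshold):
    """Share of households below a fixed income threshold."""
    return sum(1 for x in incomes if x < threshold) / len(incomes)

incomes = [12_000, 18_000, 25_000, 40_000, 60_000, 95_000]  # hypothetical
print(f"Gini coefficient: {gini(incomes):.2f}")              # richer, less accessible
print(f"Poverty rate: {poverty_rate(incomes, 20_000):.0%}")  # familiar, cruder
```

The Gini coefficient summarizes the whole income distribution in one number, while the poverty rate simply counts who falls below a line; the code makes visible how much more the layperson must understand to interpret the former.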
Many projects have attempted to simplify the complex by using indices, which have the advantage of associating a single number to a concept and allowing a general trend to be understood as easily as a letter grade on a report card. However, as on a report card, a single grade may not provide sufficient information on what specific policy areas need to be addressed—merely that overall performance is unsatisfactory. Unpacking an index into its component parts, on the other hand, may create policy focused on just one piece of a larger puzzle without regard to the interdependencies among multiple factors influencing outcomes.
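The report-card analogy can be illustrated with a tiny sketch: an equal-weighted index collapses several component indicators into one number, and unpacking it reveals which component needs attention. All names and scores here are hypothetical:

```python
# Hypothetical component indicators, each already scaled to 0-100
# (higher is better; names and values are invented for illustration).
components = {
    "median_income": 72.0,
    "graduation_rate": 85.0,
    "public_safety": 40.0,
    "air_quality": 63.0,
}

# The index: one equal-weighted number, as easy to read as a letter grade.
index = sum(components.values()) / len(components)
print(f"Overall index: {index:.0f}")  # 65 -- but which area needs work?

# Unpacking the index shows the component dragging the score down.
weakest = min(components, key=components.get)
print(f"Weakest component: {weakest}")  # public_safety
```

The sketch also exposes the interdependency problem noted above: acting only on the weakest component treats it in isolation, even though (for example) public safety and graduation rates may move together.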
Politics and Power
Some indicator projects are designed to justify a course of action more than to identify one. The use of data for marketing or advocacy is not a new concept; the data a chamber of commerce uses to promote a city are different from the data selected to advocate for unmet social needs in the community. Political pressures to promote the positive and downplay the negative can make public reporting of some information difficult.
The political implications of data reporting can thus shape data selection. If the decisions on what to measure are made by someone attempting to create or preserve a political legacy, the indicators selected may be different from those that might be chosen by someone with a different or competing agenda. This distinction often plays out in national political debates on the state of the economy. For example, the party in power tends to emphasize measures that show positive economic movement, whereas the opposing party emphasizes measures of misery—and the choices of which indicators to measure change with the political tides.
In addition, the availability or quality of information provided can be affected if the data appear to challenge or threaten the institutions responsible for gathering and providing the information. If those in power do not like what the data show, they might adjust definitional frameworks, limit funding for data collection, restrict access to data, or replace the measures entirely. Data that support institutional values may result in an increased use of those data, even if better data could be made available.
At times, the indicators selected for reporting may reinforce an existing course of action—sometimes referred to as decision-driven data-making. On the other hand, if those with the political power to make policy changes are not involved in the selection of indicators, they may be less inclined to use the data in their decision-making process.
In summary, indicator systems are not neutral collections of statistics. They are shaped by the interests and values of the people and institutions included in making decisions about data selection. This has profound implications both for the utility of these systems for influencing community action and for the inclusion of community in the decision-making process.
Case Study: Jacksonville, Florida
Jacksonville, Florida, has the nation’s oldest and longest-running community-based indicator system, the “Quality of Life Progress Report.” This project began in 1985 with a group of 94 citizen volunteers attempting to define and measure the quality of life of the community, creating an initial indicator set of 83 measures covering topics including the economy, public safety, health, education, natural environment, mobility, government/political environment, social environment, and the cultural/recreational environment. (Figure 1)
The project was motivated by a strong desire for community improvement and began with the assumption that the factors that would be important to community well-being and amenable to improvement through policy or program changes are both measurable and accessible in the external community environment. The committee recognized from the beginning that they could not accurately measure some aspects of community life, owing to existing data limitations. Over time, some (but not all) of these limitations have been addressed; for example, the project has expanded to allow for measurement of how poverty affects access to health care, but it still lacks an adequate measure for religious harmony and cooperation.
At the project’s outset, the committee made key decisions that have continued to influence how indicators are selected, maintained, and used:
The indicators would be selected by citizens in the community, as informed by experts. Every year since 1985, the indicator set has been reviewed by a citizens’ committee. This process has directly improved the usability of the data for the community. For instance, terms such as “per capita” were changed to “per person” because, as one committee member explained, “If you mean ‘per person,’ just say so. Don’t make it harder for people to understand what you’re talking about.” Measures such as “age-adjusted death rate” or “years of potential life lost” were rejected because they were too technical and took too much effort to explain to laypeople.
The project was designed to measure Jacksonville’s progress over time, and not to compare Jacksonville’s progress with other cities. Although regional-, state-, and national-level data are provided for basic context-setting, the focus is on internal progress and change. This means that the project is influenced more by internal trends and needs rather than efforts to find common measures that enable cross-jurisdictional comparison. This has both decreased the capacity for strong comparative data and increased the opportunity for creative, local-specific measures.
The data set from the first report was explicitly designed to be open to adaptation. In its annual review, both the quality and effectiveness of each of the indicators are up for discussion, and revisions are made quickly as better data become available. For example, in the first report, the data underlying several mobility indicators were generated by having volunteers drive fixed routes and time themselves, and then averaging the results the volunteers obtained to determine changes in commuting times from year to year. As better data have become available, other methods for calculating commuting times have been used.
The project continues to be shepherded by the Jacksonville Community Council, Inc., an independent, nonpartisan, nonprofit organization. The project has remained outside of the political process, enabling it to survive through multiple changes in local political administrations. The trade-off for independence and sustainability, however, has meant that the project has never been a core initiative of any one political leader, a situation that requires a constant effort to educate and encourage elected officials to embrace and use the data.
The audience for the project was defined broadly from the beginning to include neighborhood activists, human services planners, media, politicians, students, grant writers, and philanthropists. The organizers recognized that no single presentation format could meet the needs of this wide variety of intended audiences, so the project offers multiple presentation options for the indicators, from simple one-page briefings and brochures to more complex, deeper reports, as well as an interactive web-based mapping system.
The intent of the project was to spur action, not just report trends. In its annual review prior to publication, a citizens’ committee assigns “gold stars” to the trend lines moving in a positive direction and “red flags” to trend lines indicating trouble. In any given year, the organization uses one or more of the red flags to mobilize the community for action through a shared learning, engagement, and advocacy process. Meanwhile, organizations that are contributing to positive change are highlighted annually, reinforcing a shared community responsibility for improving the trend lines.
A shared language about measurement is evolving out of two key collaborations among organizations working to improve understanding of measurement systems and their community impact.
Community Indicators Consortium: In 2003, the International Society for Quality of Life Studies sponsored a conference on community indicators, leading practitioners and researchers from various backgrounds, using different frameworks, to recognize the need for a multidisciplinary conversation to advance the science and practice of community indicators. In 2004, organizations and individuals from sustainability, healthy community, quality-of-life, government benchmark, and other perspectives met to create cross-fertilization of ideas and find synergies across efforts. The result was the creation of the Community Indicators Consortium, which continues to sponsor conferences, webinars, training, and research on best practices for community-based measurement and impact.
The Consortium developed a descriptive model on integrating community indicators with government performance measures that encouraged these government benchmarking initiatives to communicate more fully with other frameworks. Government benchmarking initiatives tend to examine internal government processes and practices to determine their effectiveness, whereas community indicators frameworks tend to focus on community outcomes to gauge the success of existing policies. Integrating the two approaches may result in greater effectiveness in community improvement than using a single framework alone.
The Consortium has brought open data and big data proponents into dialogue with community indicator developers to build common understanding of the possibilities and pitfalls of increasing data accessibility and to share lessons learned from community data systems as new actors launch data advocacy efforts. In the process, increased blending among sustainability, healthy community, and quality-of-life frameworks has resulted, as the interconnectivity of metrics has been explored.
Beyond GDP: The limitations of using gross domestic product (GDP) as a sole measure of societal progress have been recognized for decades. Robert Kennedy in 1968 famously said that it “measures everything, in short, except that which makes life worthwhile.” Building on the many initiatives that have tried to find better measures of well-being that include economic progress as well as societal well-being and environmental sustainability, the Beyond GDP movement started to coalesce in 2007 at the Beyond GDP Conference at the European Parliament.
In 2008, French President Nicolas Sarkozy put together a commission led by economists Joseph Stiglitz, Amartya Sen, and Jean-Paul Fitoussi. The report that resulted continued to drive the debate forward on how to build a new global measure of progress, and initiatives in the OECD and the UN are well underway to find a workable global answer.
In this area, local community indicators systems are serving as laboratories for implementing measures of progress that allow for great creativity and flexibility. As research in understanding new measures of happiness, or internal well-being, continues on the global front, local communities are discovering how to integrate these measures and create new indicator frameworks. The opportunity to share information, from global research to local application and back again, can strengthen both local initiatives and international debate.
Challenges for the Field
From inception, the greatest strength of the community indicators movement was perhaps the ability to hyper-localize measures of community well-being. Many early community indicator projects were driven by a desire to democratize data: to make information more available to the general public. The projects were undergirded by the belief that better decision making, public accountability, and community dialogue would result if everyone in the community had access to the same information. Since then, much has changed. The primary challenge today is not making data available to the public. More information is available on the cell phone of the average resident of a community than any organization could hope to publish in the 1980s. The new challenge is sifting through the incredible complexity of available data to discover what is meaningful and what is powerful—the data that shed light on community conditions and inspire action toward improvement.
In addition, the movement now has better information about which indicators have been more effective than others at creating change. There is an increasing demand for standardization of metrics and the creation of a national index of well-being, such as those other countries have developed. This creates a natural (and healthy) tension between local creativity and experimentation in indicators development and national and global accumulation of expertise in effective measurement systems. It also directly impacts the question of who determines what success looks like in communities, and who is involved in selecting the measures to define, report, and hold the community accountable for reaching that success.
The push to create a common index also highlights a tension between the desire for simple measures that aggregate myriad data and the need for granular data that can more accurately reflect complex aspects of communal life. The challenge for the community indicators movement is to advocate for the usability of indicators to create collective impact and influence change, which requires (in most cases) a disaggregation of data to focus public priorities. Greater data availability will allow for more disaggregation across dimensions such as poverty, race and ethnicity, gender, age, and small-scale geography to allow the community to be more precise in targeted interventions and measurement of results. Community indicators at the core are designed to be more than description—they need to compel action.
The likely short-term future for the community indicators movement is increasing diversity of local measurements informed by national and international debates about indicator systems and frameworks. The growth in data literacy, facilitated by easier-to-use tools and clearer data visualization opportunities, allows for local data choices to respond to national and global research about data effectiveness, reliability, and clarity. At the same time, because more people are familiar with and use data in their own organizational decision making, the opportunities for creative data creation strategies are outpacing capacity to analyze effectiveness of these data solutions. In short, more is countable, and more of what is countable can be used to answer local questions about community progress.
A movement that began somewhat idealistically with hope for democratizing data is now focused on how to use this shared data in public decision making, and increasingly on understanding the linkages between sharing information and sharing decision-making power. Once upon a time, the thought was simply that information is power; today, it is perhaps more accurate to say that information, along with the tools to use that information, creates powerful opportunities for change. Indicators, in other words, are a necessary but insufficient portion of a community change model. A primary challenge for the movement is to become intentional about how the indicators fit into a theory of change and create measurable action. Projects that only report information risk irrelevancy if they do not build the collaborative partnerships necessary to ensure targeted use of the data.
The movement is beginning to explore the strengths and challenges of bringing together aspects from different measurement frameworks, and is wrestling with the coordination and tradeoffs that this effort requires. Increased transparency and trust among organizations are needed to work toward a shared vision of community improvement that builds on the values of the community. Integration of subjective and objective measures, as well as of externalized community aspects with internal satisfaction measures, is already beginning to happen. A key question that should be at the forefront of these discussions is: How will these measures be used to influence policy?
Over the last 30 years, the community indicators movement has become more widespread and more effective at identifying indicators that are broadly accessible to the public and useful for generating positive change. The next step for the movement is to evaluate and endorse higher-quality indicators that have greater efficacy while encouraging continued local-level experimentation with new measures that will continue to expand the knowledge base of the field.
For Further Reading
Council of Europe, “Involving Citizens and Communities in Securing Societal Progress for the Well-Being of All: Methodological Guide” (Strasbourg, France: Council of Europe Publishing, 2011).
Jon Hall and Louise Rickard, “People, Progress and Participation: How Initiatives Measuring Social Progress Yield Benefits Beyond Better Metrics” (Berlin, Germany: Bertelsmann Stiftung, 2013).
David Swain and Danielle Hollar, “Measuring Progress: Community Indicators and the Quality of Life,” International Journal of Public Administration, 26 (7) (2003): 789–814.
Milan Dluhy and Nicholas Swartz, “Connecting Knowledge and Policy: The Promise of Community Indicators in the United States,” Social Indicators Research, 79 (1) (2006): 1–23.
Benjamin Warner, “The Jacksonville, Florida Experience.” In Community Quality-of-Life Indicators: Best Cases II, edited by Don Rahtz, David Swain, and M. Joseph Sirgy (Dordrecht, Netherlands: Springer, 2006).