The Data Backlash

I’m a huge fan of Chapin Hall at the University of Chicago, which is why I participated in their recent webcast on measuring child well-being at the neighborhood level. I expected a tutorial in new (perhaps GIS-based) methods of gathering demographic information, but got more of a snapshot of where one city, Chicago, is in its efforts to produce and share data in a way that actually improves outcomes for people living in distressed communities. We should note that Chapin Hall has played a starring role in human services research, creating the model for understanding the characteristics of kids in the child welfare system in Illinois and surrounding states, and how they fare both in the system and after leaving it. Cities and towns with less robust research players, and agencies not lucky enough to be part of well-funded, comprehensive neighborhood quality-of-life projects, are likely to struggle with more basic problems concerning evaluation. But still, the webcast panel provided some useful takeaways. Points made by the panelists include these (with some elaboration from me):

  • The data that agencies are required to collect typically isn’t used to plan, implement or manage projects; it’s used only to ‘tell the story’ of what happened once a project is over. But telling the story doesn’t have anything to do with actually improving client outcomes, since at that point the story is over. Why should agencies have to bother with it, in that case? That fundamental question has produced a backlash against the ever-greater push toward using data to measure program impacts.
  • That said, organizations are awash in data — data they generate themselves; data on their city or county generated by outside groups; community demographic data extracted and packaged by the Census Bureau. But organizations don’t necessarily know how to analyze that data, or even feel that they need to analyze it.
  • Why? A bunch of reasons. Because the data tells only part of the story; because it’s hard to vouch for the reliability of all of it, given that collection and input aren’t necessarily consistent; because it feels like local intuition and knowledge are being subordinated to a cookie-cutter measurement rubric supplied by someone else; and because, in the end, data that supposedly ‘proves’ program impact might prove the opposite, and ultimately be used to punish the agency.
  • Yet providers really do need to analyze it, because the qualitative information they tend to rely on to make internal decisions — complaints from clients, for instance, or anecdotes from caseworkers about what they see working — is important but insufficient, particularly if the program is part of a larger collaborative seeking to make many kinds of change.
  • Though it seems like we have an awful lot of data, it’s not necessarily easy to get the right data. For instance, if you’re part of a coalition working on strengthening families in your community, you need a good indicator of family functioning that tells you if your interventions are working. It turns out school attendance is such an indicator, at least for younger children. But school attendance records at the sub-neighborhood level (say, the block level) are hard to get, and particularly so when they’re broken down by age ranges. There might be only 10 families living in a particular block with kids in grade 6 or lower; finding out that those particular kids have high absentee rates comes perilously close to identifying them as individuals. To get that data, you may very well need parental consent. Since chances are you can’t get it, you can’t use this indicator very effectively. It’s possible to solve such issues, of course, and an initiative in Chicago has even done it, but it takes committed players that include the school system. Someone has to think it’s important enough to do, and follow through with doing it.
  • Agencies need to create ‘data leadership’ in their programs. They need to engage staff in creating evaluation systems and lead those staff to use data differently than they do now. That’s not necessarily easy, and it costs some money. Foundations at the forefront of this work are recognizing that getting agencies to use data better is really an issue of building their capacity to do so. It’s about hiring dedicated staff to collect and input the numbers; it’s about teaching staff what evaluation is and getting them to frame some of their own research questions so they’re bought in. Finally, it’s about using data to make mid-course corrections, and knowing you won’t be penalized for doing so by the very funder who asked you to collect it in the first place.
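The small-numbers problem described above — a block with only 10 families, where publishing absentee rates comes close to naming individuals — is commonly handled with small-cell suppression: counts below a minimum threshold are simply withheld before data is shared. A minimal sketch of the idea in Python, where the threshold of 5 and the field names are illustrative assumptions, not drawn from any particular school system:

```python
# Small-cell suppression: withhold any block-level count below a
# minimum size so individual families can't be identified.
MIN_CELL_SIZE = 5  # illustrative threshold; real policies vary

def suppress_small_cells(rows, threshold=MIN_CELL_SIZE):
    """Replace counts below `threshold` with None (suppressed)."""
    cleaned = []
    for row in rows:
        count = row["chronically_absent"]
        cleaned.append({
            "block": row["block"],
            "chronically_absent": count if count >= threshold else None,
        })
    return cleaned

# Hypothetical block-level data: 3 chronically absent kids on block A
# (too few to release safely), 12 on block B (large enough to release).
data = [
    {"block": "A", "chronically_absent": 3},
    {"block": "B", "chronically_absent": 12},
]

print(suppress_small_cells(data))
```

Suppression protects privacy but also illustrates the panel’s point: the finer the geography, the more cells fall below the threshold, and the less usable the indicator becomes without cooperation (and consent procedures) from the school system itself.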

As government and private foundations move further toward demanding rigorous evaluation of programs, it’s good to acknowledge that funders themselves need to work a little harder to make sure agencies can use data rather than just generate it.

There are some good intermediary organizations out there that collect and package data, and help groups tailor public data for their own uses. The list below, provided by Chapin Hall, is Chicago-heavy, but still valuable for anyone wondering what’s possible:

I would of course add Youth Catalytics’ own training and technical assistance on evaluation, customized to your own particular needs. Check it out here.

~ Melanie Wilson, Youth Catalytics Research Director
