We face a choice between two models for donating data: one governed by corporations and one determined by grassroots civic action. The winner will decide how much control we have over our digital information.
By Lucy Bernholz & Brigitte Pawliw-Fry | Winter 2022
(Illustration by the Project Twins) 
When vaccines against COVID-19 first became available in the United States, demand far outpaced supply. Public-health websites were difficult to use, varied by county, and lacked the user-friendly navigation mastered by ecommerce. Across the country, people with web-scraping skills set about designing easier ways to find and make vaccination appointments. They called themselves vaccine hunters.
By March 2021, similar efforts were under way in Canada and elsewhere. A web developer named Andrew Young kicked off the vaccine-hunter effort in Toronto after struggling to make a vaccine appointment for his father. He and several colleagues launched Vaccine Hunters Canada (VHC) with little more than a Twitter account and a Discord server hosting updated information on vaccine availability.1 With these two tools and a team of committed volunteers, VHC quickly created a way to source and organize up-to-date information on where to get vaccinated so that people could find and make appointments easily. The volunteers used data they found on pharmacy and public-health websites, as well as information that people at the vaccination sites sent to VHC’s Twitter account through either direct messages or tagged tweets. This collective effort enabled VHC to alert people when sites ran out of vaccines and to provide information on which types of vaccine were available at what sites, what languages were spoken at a particular site, and up-to-the-minute information on eligibility. Torontonians who got vaccinated reported back about new rules at the sites, sometimes including information and changes that even city councillors or members of parliament didn’t know about.2
VHC is an example of people using data to power collective action. It relies on information from a variety of donors—including companies like Walmart and grocery chain Sobeys—that host vaccination sites, public-health agencies, and individuals who send updates based on their own experiences. This mix of data sources is integrated for ease of use, and site-level observations from people keep it up to date. The VHC team is extremely careful with the information it collects and uses, focusing on collective availability and access, not on any particular person’s action.
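To make the mechanics concrete, here is a minimal sketch of the kind of integration layer such an effort might run. VHC has not published its internals, so the record type, field names, and two-source model here are our assumptions; the point is simply that scraped corporate data and community tips are treated as peers, ranked by recency.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class SiteReport:
    """One hypothetical report about one vaccination site."""
    site_id: str             # e.g., a pharmacy branch
    doses_available: bool
    vaccine_types: list[str]
    languages: list[str]
    seen_at: datetime        # when the report was made
    source: str              # "website_scrape" or "community_tip"

def merge_reports(reports: list[SiteReport]) -> dict[str, SiteReport]:
    """Keep only the freshest report per site, so a volunteer's
    on-the-ground tip can supersede a stale website scrape."""
    latest: dict[str, SiteReport] = {}
    for report in reports:
        current = latest.get(report.site_id)
        if current is None or report.seen_at > current.seen_at:
            latest[report.site_id] = report
    return latest
```

The design choice worth noticing is that a tip sent by a person standing in line carries the same weight as a corporate database, and wins whenever it is newer.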
Collective action driven by data is an increasingly common practice, one that includes examples from civic science, medical research, art and culture, consumer advocacy, and other domains. Running through these diverse initiatives is the idea that people can deliberately contribute digital data to a cause greater than themselves, in the way in which people have long given money and time. We already have laws and norms for giving money and time, including those governing nonprofits, charitable giving, and volunteering. Among other considerations, these laws have to be written in ways that distinguish intentional donations of time from exploitative labor practices and financial contributions from extortion or fraud. Both instances rest on the donor’s making a deliberate choice. Individuals have control over whether or not they give money or time to a cause, and a set of protections is in place for both the donor and the receiving organization. Rules about giving time and money have existed for so long that many people don’t think about them much, save for policy advocates, scholars, and reformers. But when it comes to giving data, we are starting anew, and the decisions we make will matter to all of us. The fight over if and how we give our data, and who gives it, is just beginning.
Two very different futures hang in the balance. One future resembles efforts like VHC, in which people seek information that is hard to aggregate or locked behind proprietary walls. We’ll look at a range of cases in which people share everything from bird photographs to cable bills to health data. In each of these examples, communities are setting the rules about data use, managing the technology, and providing new services or analyses. This approach is people-driven—decisions about what data to collect, under what terms, and with what protections are often made by people whose own data are involved. The software code underlying it all is often open source, and caretakers pay close attention to what can and cannot be done with the data.
The other future has already been evident for more than a decade. It is a top-down, corporate-led model of data philanthropy, a term coined by the United Nations. In UN parlance, data philanthropy happens when “the private sector shares data to support more timely and targeted policy action.” 3 In this approach, companies with large data sets (which is just about all companies these days) allow researchers access to that data to address a particular challenge. Examples here include telecom companies analyzing aggregated phone location data to track people’s movements during pandemics or analyzing social media feeds to assist in disaster relief. This approach enshrines the companies’ complete control over their data streams and allows them to reap goodwill by conducting research on some externally defined question. People whose data are in that stream have no say over what happens to it, external review of the analysis or the underlying data rarely happens, and the companies’ own terms of service capture the decision-making process.
Allowing corporations to set the agenda for giving data tramples the free-choice aspect of donations. It also runs counter to democratic interests by opening loopholes through which companies can avoid public accountability and constrict civil society. In 2021 alone, Facebook shut down academic researchers (at NYU) and civil society organizations (AlgorithmWatch) by simply cutting off access to the data they were using. The company has started to limit access to CrowdTangle, a tool that it owns and that scholars use widely to study social media. Even more worrisome, Facebook announced to researchers in September 2021 that data sets the company shared on election studies were flawed. The revelation raised questions about the researchers’ findings and showed just how many ways the company has to put its thumb on the scales of oversight.4 Because Facebook (and Google, Amazon, Apple, Twitter, and others) control their data, they have undue say in research and oversight. They have the power to cut off research they don’t want done. By controlling what data (if any) are made available for study, the companies can influence what questions get asked and answered. To avoid these restrictions, we must separate the need for public accountability and research integrity from the domain of philanthropy and ensure that companies not receive opportunities or incentives to blur the two.
At stake in the fight between these two futures is whether we will have the ability to make intentional choices about using our data for a public cause. The first path, that of community-driven models for giving data, is crowded with ways to do so. They’re not yet very visible or consistent, and there are still too few protections against misuse or fraud. The space is fragmented and not easy to navigate. But it contains a tremendous amount of creative energy—and all sorts of possibilities for becoming easier, more common, and more consistent. Although the “hows” of individuals giving data are still emerging, one thing they all share is a commitment to intentionality. The participants developing methods and rules want to be as clear as possible that participation is voluntary and deliberate—that you, and only you, are choosing to contribute your data. Just as rules about volunteer time prevent worker exploitation, rules about giving data will need to prevent further data extraction.
Many possibilities exist for using digitized data for public benefit and for requesting data contributions. The vaccine hunters, for example, show how usable data sets can be created by drawing together information from corporate databases and individuals. These different data are then curated and managed to help people take action. Other examples illustrate alternative problems, questions, and answers that are worth considering in some detail. While they rely on different types of data, engage different communities, and aim to address very different issues, they explore some cross-cutting questions. First, they must find ways to invite participation. Then they must dig into the details of the data itself, people’s range of privacy concerns, and many questions of control. Digital data often represent relationships between people as much as they do individual behavior. Emails have senders and recipients; DNA represents people across generations—how will those relationships be protected? What types of consent or opt-in mechanisms do people understand and trust? What protections and promises can be made about data security, secondary use, or deletion? How might we create processes to contribute data that respect people’s individual interests and collective privacy? The examples that follow come from iNaturalist, Consumer Reports, and medical researchers. The processes they’re developing can inform what must be collectively decided to create safe and equitable ways to contribute data.
Cat Chang started using the phone app iNaturalist because mushroom guidebooks are heavy. A Native Hawaiian, Cat carries two distinct universes of knowledge in her head. She remembers walking with her grandmother on the community land her family inhabited for generations, spending time talking to and carefully caressing and inspecting every plant. Cat and her family moved to Northern California when she was in high school. In college, she turned to botany and horticulture—once again learning about plants, but this time through the lens of Western science and Latin taxonomies.5 As a landscape architect, Cat became fascinated with soil health and mushrooms, which she examines on long hikes. Carrying a backpack full of guidebooks on a trek in the woods is tough. “When a friend told me about iNaturalist—a phone app that would let me leave the guidebooks at home—I was hooked,” she says. “I don’t use a lot of apps, and I’m not very techy, but this one was for me.”
Cat uses the iNaturalist app almost every day. She uploads high-quality photos and blurry ones. When she can, she includes her best guess at identifying what she’s looking at. “But that’s just the beginning,” she says. “Once the photos are uploaded, the online community checks them out. They suggest identifications. If enough people agree, the photo is tagged as identified. If it’s a good-quality photo and it’s properly identified, it gets marked as ‘research grade.’” Those photos are then marked in the iNaturalist database so scientists can use them. Even if a photo is not research grade, however, Cat is likely to learn something.
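Cat’s description of the “research grade” tag amounts to a consensus rule. The sketch below shows one simplified version of such a rule; the two-thirds threshold and the metadata checks are illustrative, not iNaturalist’s exact criteria, which also weigh factors such as whether an organism is captive or cultivated.

```python
from collections import Counter

def community_id(suggestions: list[str], threshold: float = 2 / 3) -> str | None:
    """Return the taxon the community agrees on, if strictly more than
    `threshold` of the suggested identifications match; otherwise None."""
    if not suggestions:
        return None
    taxon, votes = Counter(suggestions).most_common(1)[0]
    return taxon if votes / len(suggestions) > threshold else None

def is_research_grade(has_date: bool, has_location: bool,
                      suggestions: list[str]) -> bool:
    """A photo qualifies only with basic metadata plus an agreed ID."""
    return has_date and has_location and community_id(suggestions) is not None
```

Under this rule, a photo whose three suggestions split two to one does not clear the two-thirds bar and stays unconfirmed until more identifiers weigh in.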
Every photo is a donation of digital data. As of July 2021, the iNaturalist data set contained more than 78 million photos, videos, and audio recordings. The app, the database that powers it, and the community of four million users includes experts and amateurs in every field of natural science. The uploaded images are sorted by the metadata (information about location, time, and date) that phone cameras automatically embed in photos.
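For readers curious what that embedded metadata looks like, the following sketch uses the Pillow imaging library to print the EXIF tags a phone camera typically writes. The file name is hypothetical, and this is a reading exercise, not iNaturalist’s own code.

```python
from PIL import Image
from PIL.ExifTags import TAGS, GPSTAGS

img = Image.open("mushroom.jpg")  # a hypothetical donated photo
exif = img.getexif()

# Top-level tags: camera model, capture date and time, and so on.
for tag_id, value in exif.items():
    print(TAGS.get(tag_id, tag_id), value)

# GPS coordinates live in their own sub-directory, the GPSInfo tag (0x8825).
for tag_id, value in exif.get_ifd(0x8825).items():
    print(GPSTAGS.get(tag_id, tag_id), value)
```

It is exactly this automatic location-and-time stamp that lets the iNaturalist database sort millions of donated photos into a record of ecosystems over time.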
Scientists use iNaturalist’s data set to track change over time in ecosystems around the world. Amateurs use iNaturalist to quickly get the name for something they’ve seen, share what they’ve learned, and help answer questions. The photos are all donations; the time people spend helping answer questions is all volunteered. The results are extraordinary. Millions of people get their curiosity sated, an incomparable biodiversity database grows from donated pictures, and communities of like-minded people connect online and off. Data from iNaturalist have also contributed to scientific breakthroughs. Other communities, such as those that use an iNaturalist-type app called eBird, have helped build a database from which hundreds of peer-reviewed science papers have been independently published. When it comes to data donations, small contributions have a big impact.
People on iNaturalist are there because they want to be. People have to download an app or seek out the website to participate. The team at iNaturalist has to be as clear as possible about what happens with uploaded photos and must give people choices at every step to opt in, to remove information from the photos, or to set up an account. For each of these decisions, the team is weighing what’s better for science and what’s better for people in the community. For example, if you want to upload a photo to get an identification but not share your location information with the site, you can do that. It downgrades the photo for research purposes but confirms that the photographer controls the data. Users of iNaturalist are in a reciprocal relationship. They upload photos to get their questions answered, and their contribution of data helps scientists study the world around us.
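That location choice can also be made on the donor’s side, before a photo ever leaves the phone. Here is a minimal sketch, again with Pillow and a hypothetical file name, that removes only the GPS block while keeping the capture date, so the photo can still be placed in time but not in space (it assumes a recent Pillow version, which accepts an Exif object in save()):

```python
from PIL import Image

img = Image.open("mushroom.jpg")
exif = img.getexif()

GPS_IFD_TAG = 0x8825  # the EXIF sub-directory that holds coordinates
if GPS_IFD_TAG in exif:
    del exif[GPS_IFD_TAG]  # drop location, keep everything else

img.save("mushroom_no_location.jpg", exif=exif)
```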
A further example of people-centered donations of data comes from Consumer Reports (CR), a US-based consumer protection group known for its rigorous testing of appliances and cars. In July 2021, CR launched an effort called Fight for Fair Internet. The goal is to learn about cable service in different parts of the country as experienced by customers, not as advertised by cable companies. To achieve this objective, CR encouraged people to contribute data from their cable bills. The CR team then analyzed the bills for hidden fees and location-based differences in pricing. Working with a dozen organizational partners, CR developed an intake process for the bills that protects contributors’ privacy. It also created an optional survey for participants to provide additional information about income, demographics, and political identity (hypothesizing that broadband access was a bipartisan issue). More than 36,000 people have participated. CR thanked participants for their donations by giving them a free membership to CR.
In setting up this research effort, CR drew on some of the practices of civic science and medical research. Participation is entirely voluntary. Clear statements explain what data are being collected, how they’re being used, and what will result from the effort. Data contributors are respected and acknowledged with a gift. Their data—which could be useful to CR in many other ways—is used only for the purposes of the study. The staff of CR’s Digital Lab, which is running the study, has worked with marketing, membership, and research staff across the organization to protect the integrity of the data donors’ relationship to the study.
CR built reciprocity into its study of cable bills, providing memberships in return for data but also answering a society-wide question that the individual participants care about. Like iNaturalist’s work, the research on discriminatory pricing is intended to benefit society. Reciprocity is also part of the power of Worker Info Exchange, a London-based group using data from rideshare drivers to help understand the power dynamics between gig platforms and the people who drive for them. These are efforts to use data to level the playing field between corporations and people.
As often happens, new regulations are both the cause and the result of such activism. The General Data Protection Regulation (GDPR), the European Union’s data-protection law, gives people the right to demand copies of their data, as does the California Consumer Privacy Act (CCPA). Both laws create new practices for requesting people’s data from a company or having someone else do so for them as a proxy. A 2020 report by two EU researchers found dozens of examples of proxies aggregating data as a tool for building community and creating new data sets.6 As research and advocacy efforts like CR and Worker Info Exchange grow, we will likely see these proxy relationships rise in importance as tools for aggregating data. They might even become as important to this kind of research as Freedom of Information Act filings are to researchers and journalists.
The emergence of data proxies as a tool for aggregating data is one way in which civil society organizations are responding to new rules about data. Community organizers, nonprofits, and philanthropy are well suited to thinking through these dynamics, because they prioritize trustworthy relationships and operate under norms and regulations that favor engaging people by choice. In a somewhat odd irony, the rules that ensure financial donors’ privacy, choice, and influence might provide a template for how nonprofits think about data donors. The translation of practices is not straightforward, however, because digital data don’t function the way money does and the harms of misuse can extend far beyond simple fraud. One of the most important principles guiding the design of norms and mechanisms for data contributions should be a commitment to harm reduction.
While digital data are newly ubiquitous, medical research is one field that has long depended on donated information. The history of gathering data in medicine is rife with horrors, especially those perpetrated against racialized populations and women, just some of which are captured in books like The Immortal Life of Henrietta Lacks. Only in the past half century have we seen deliberate, structured efforts to protect the most basic rights of people interacting with the medical system in general or participating in research in particular. These structures include professional oaths, ethical review boards, licensure, standards of practice, and enforcement. They’re fallible, but they do exist, which is more than most domains can claim. The medical research community has generally welcomed the proliferation of self-generated digital data as a potential boon for breakthroughs—so long as the data can be collected, used, protected, and secured in ways that protect the individual while contributing to the aggregate.
A big community of people is focused on creating new kinds of institutions that we might trust to hold our data and let us determine how they get used. This effort includes experimenting with new organizations such as data cooperatives, civic data trusts, and open collectives. These organizations are uncommon now, but some of them are bound to succeed and become as familiar as nonprofits are today. Think about it: Nonprofit corporations have proliferated in the United States as trusted institutions that put donated time and money to work creating change. We’re now seeing great efforts to create new forms of purpose-built institutional structures that enable safe and trusted digital data donations.
For instance, in 2015, Apple introduced ResearchKit, a software framework that allows people to share—by choice—data from their phones with medical researchers. One early study on Parkinson’s disease, mPower, involved a team of researchers from a Seattle-based nonprofit, Sage Bionetworks. They wanted to collect movement data—how much a person moves around each day—to see how the tremors that mark Parkinson’s disease change over the course of a day and whether relationships between tremors and exercise level exist.
The team at Sage Bionetworks spends its time nowadays working on ways to build trust, respect, and protection into research procedures. This is hard enough to do when the researchers and the people participating in the study have a face-to-face relationship; doing it for tens of thousands of people through an app presents another level of difficulty. This task is the job of Vanessa Barone, a research scientist for outreach and engagement at Sage Bionetworks. She holds a master’s degree in public health and has spent much of her career in clinical research. Before she started at Sage, Barone’s professional experience had been on clinical trials in which she could develop “a pretty personal relationship with the research participants.” Her job was encouraging people, in person and with email or phone follow-up, to get involved. She spent time listening to their concerns, talking them through the protocols, and answering their questions. She was attracted to the job at Sage because she knew mobile health would “be an interesting way to engage with people on a different level.” She also wanted to wrestle with some of the challenges that recruiting for research poses, “whether they be ethical challenges or just overall recruitment and retention, which is a beast in itself that no one can really solve completely.”
Barone, a Black woman, knows the field’s history of harm and has committed her career to trying to develop trustworthy ways to diversify research participation. Mobile health doesn’t solve either trust or diversity issues; rather, the familiar difficulties persist and take on new forms. Recruiting digitally and on a national scale means Barone can no longer reach out to people in person. Sage could use the same digital marketing tools that advertisers use to get specific in targeting its recruitment efforts. But Sage’s focus on privacy protection and individual consent makes those tools suspect for research purposes.
Digital technologies and data create and complicate the promise of mHealth. They also generate new problems. Not everyone has a smartphone, so recruiting via app stores introduces a new form of discrimination. Widespread contemporary data practices and uses of artificial intelligence are deeply entwined with white supremacy, and medical research has its own unrepaired history in this regard. The links between white supremacy and AI have been documented by scholars Safiya Noble, Ruha Benjamin, and Timnit Gebru, among others, and the history of medical ethics includes horror stories of racialized and gender-based violence that continues into the present day.7 Applying these systems as they exist to mHealth would cause harm, not progress.
We can learn a lot from the field of medical research, but we’ll want to put some of it aside. Both Barone’s experience recruiting people of color for mHealth studies and the work vaccine hunters do remind us that human relationships are critical. Digital data might augment these relationships, making it easier for grandchildren to help grandparents find and access a vaccine, or for skeptical potential participants in a research study to have their questions answered. But networks of human relationships, and her decision not to use certain data, are what ultimately help Barone succeed.
The examples we’ve examined come from different domains, use wildly different types of data, and are structured in a range of organizational options. But all of them put people first. The questions of data governance—what data are being used, what will be done with them, how will people come to understand them, how can people change their minds—are essential to the choices these groups make, but they are all in service of creating something that people will choose to use. Data are not taken from people, no one is automatically signed up, and exit signs are clear—you can stop and leave at any point. These steps may seem small, but they differentiate a voluntary contribution from an extractive practice—and you won’t find them in most data interactions with corporate or government websites. The idea of informed consent is complicated, and the digital world makes it more so. But all of our examples, from VHC to Sage Bionetworks, show that the answer to the question “How might people trust us enough to participate?” drives the answers to later questions about data governance.
Important characteristics of networked digital data distinguish them from time and money as donatable things. First, data are relational—not only your digitized DNA, but also your texts, emails, and contact lists. They can exist in many places at once, which is what happens when you send copies of digital photos from your phone to your friends. They can be stored in enormous quantities in many places and mixed and matched with other types of data. It’s currently very hard to know who has access to what data, where they are, and what anyone is doing with them. This opacity is by design, and it serves those collecting digitized data. These characteristics mean that “giving” data is really about granting access to them, not about relinquishing your hold on them. This idea challenges our assumptions about ownership—as well as our assumptions about philanthropy, giving, and control. Sharing data is often easy and painless—and that’s precisely why people are fearful of what’s being done with them and why it’s not like sharing time or money.
For these reasons, the topic of digital data has reinvigorated discussions about governance of a commons and data stewardship. Given how hard it is to define who owns data, what if we thought about the question differently? Counter to traditions of intellectual property and individual ownership, commons governance practices and public trusts can inspire alternatives for governing digital data. As we imagine ways in which we might share our data to find new insights, we can also imagine new ways of governing data—not as things one person owns and another doesn’t, but perhaps as things they can both access, under mutually agreeable and sustainable rules.
A larger context about data and power confronts us when we approach giving data. Every day we experience the damage—to individuals, communities, democratic governance, and planetary health—of the concentration of power and wealth promoted by the corporatist approach to digital data. Collective harms—global warming, viral pandemics, rising authoritarianism, a decline in shared truths—can open our eyes and imagination in ways that inspire us to pursue alternative approaches to governance, knowledge, finance, and power. The need to define and set rules for data donations presents a rare opportunity to imagine and implement new systems on a global scale. We can apply the wisdom of some Indigenous knowledge traditions that center on relationships and reciprocity, two values that “fit” networked data more easily than they do private ownership.8 Designing systems for facilitating data donations is part and parcel of larger efforts to develop data governance processes in the public interest more broadly—especially to the extent to which such systems privilege people and communities, not corporations.9 We can align efforts to repair past harms and pursue equitable futures with the challenges of deciding if, when, and how to enable, allow, incentivize, or prevent data donations, and when not to do so, as well as deciding whose data to donate in the first place. People who have experienced the most harm from our existing practices of economic, political, and social power and extraction should lead in imagining and designing systems for data donations—as those who are closest to the harms are wisest about alternative solutions.
 Donating data is rife with moral questions and multigenerational timelines. It raises immediate and long-term equity issues. We can see this predicament in debates about technologies that have been built on racially discriminatory training data. The question of whether to improve the ability of facial recognition systems to identify all people equally well might “solve” the short-term racially discriminatory aspect of these systems, but doing so will also embed racism ever more deeply into the surveillance apparatus of governments and companies. If we hope to create systems for donating data—especially if we envision the use of those donations as steps toward remedying shared harm—the first step must be to assume that already oppressive systems do not have the answer. Structures to facilitate data donations will entrench already powerful and dangerous extractive relationships unless they are intentionally designed to counter the racialized and gendered, surveillant, and discriminatory dynamics that have been designed into and define our current data economy.
There are other reasons to focus on people whom the current data economy has harmed in any planning about giving data. We already know that some uses of digitized data cause harm; these uses should be banned, as is beginning to happen with the collection and use of data for facial recognition and predictive policing. Philanthropy and civil society assume that people have a choice, that their participation in these activities is something they control. We need to bring this mindset of careful choice—instead of forced extraction—to considering both whether and how we give our data. We’ll need to think about timelines and uses for the donations, as well as degrees of control and choice assigned to different actors. Fortunately, many of these questions have already been answered within the system of financial philanthropy. When it comes to giving money to nonprofits or setting up philanthropic organizations, financial donors choose whether, when, what, and how much to give; for what purposes and for how long; and whether they want to be identified or not. They also have numerous avenues of recourse should they be dissatisfied with the results. Somewhat surprisingly, these rules about donating money (which were developed largely to benefit the wealthy) offer us scaffolding for thinking about donating data (which could benefit everyone).
We need to reimagine much about our current systems of philanthropy, and potential reforms can reach far beyond the tax code. Those who are seeking a more just society, and those interested in using philanthropy to do so, might use the opportunity presented by data donations to privilege different philanthropic dynamics altogether. As individuals, we’re all rich in digitized data—we generate it in every interaction with our phones or tablets, and as we drive our cars, read our ebooks, and move through our cities. But as a society, we lack frameworks for if, when, and how data donations should work. We need to create new organizational requirements, responsibilities, and laws to guide the processes of data donations. To the extent to which we root these questions in an imagination shaped by modern-day financial philanthropy, we limit ourselves. We can expand our horizon of options by considering traditions of community care, relational understandings about information and knowledge, and leaders and communities most familiar with both.
We confront many unknowns about giving data, but we are familiar with the harms of extractive data relations. We should assume that these harms will continue—or, at best, morph into new forms—if we rely on the systems and structures that created our current data ecosystem to attempt to craft a new one. Therefore, the first step in imagining systems for data donations is to put different people in charge. Social movements, history, theory, and even the best practices of financial philanthropy teach us that those closest to the problems have the most invested in finding solutions. Those most harmed by extant data economics should lead us toward creating new systems. This approach would center on Black people, Indigenous communities, people of color, women, queer people, people with disabilities, refugees, and people with low incomes, as well as people from a multitude of faith traditions.
Imagine a truly global effort to develop new practices for using digital data for shared public challenges. Insights about consent, agency, control, relationality, and representation raised by scholars across disciplines as diverse as communications, African American studies, Indigenous studies, and engineering would take on practical applications.10 Protective tactics developed by organizers fighting against caste discrimination, labor abuses, disenfranchisement, or environmental injustice could serve as frameworks for safety protocols. Organizational structures such as open collectives, mutual-aid associations, cooperatives, and civic trusts would find new resonance. Accountability could take on new meaning as the lines of relationships run horizontally and across time, rather than in a simple hierarchical fashion. And the power of pluralistic practices—for different people, different data, different communities, and different public purposes—would gain recognition as a way to protect people and provide choice.11
In our age of ubiquitous digital data, figuring out how we might choose to donate it should interest all of us. Giving data will require us to develop new rules for philanthropy. Just over a century ago, during the Progressive Era’s reaction to Gilded Age excess, the United States crafted such rules and established the modern philanthropic foundation. We face a similar challenge today, as people far and wide seek ways to contribute digital data safely and voluntarily for public purposes.
The structure and practice of what some now call data philanthropy mangle some of the most basic assumptions of giving. In the UN’s current vision of data philanthropy, big companies control everything—the purpose for which the data will be used, what is included in the data set, who gets access to it, and what insights or findings might be made available to partners or the public. This approach gives people no choice about the causes, no recourse about the donation, and no insights into or even sense of satisfaction from having helped out in some way. The same coercive dynamics that the companies exert over the people who use their services extend to this use of the data. That the companies then claim credit for their largesse adds insult to injury.
One important step away from this model is to shift the decision-making power from companies to people and communities as both donors and users of the data. This tack gives civil society an opportunity to lead in the digital age by establishing safe, equitable practices that might unlock real change for communities. Figuring out where the limits are, what data should not be donatable, and what predictable harms we must prevent is work being done by the people using and making iNaturalist, those at Sage Bionetworks, and those powering vaccine hunters. Other nonprofits, people, and community groups can step in with their own contributions to defining data donations—not only because the rules for doing so will reshape their own domain, but because doing so can open our eyes to more equitable and just philanthropic practices overall. The nonprofit Mozilla Foundation recently introduced a new website tool called Rally, which gives people a way to donate to research studies that interest them, directly through the Firefox browser; donor choice and control are the central features.
As with all philanthropy, how we design data philanthropy will say a lot about the broader society we want. For example, in the United States, the privileges afforded to large, private foundations reflect the codification of preferences for private action. Intentionally or not, these privileges allow those who would starve public coffers by minimizing the tax responsibilities of the wealthy to reify a preference for private choice, instead of democratic public action. If we allow commercial companies to dictate the rules of data philanthropy, we should anticipate the same dynamic. Leaving to companies the choice of sharing data for public purposes will further cement corporate preference over public need. In addition, it will weaken nascent efforts of societies around the globe to assert necessary public oversight over companies whose data practices have harmed public discourse, democratic participation, and the very lives of individuals and communities. Alternatively, relying on civil society to design the bounds of data philanthropy, especially associations of people most imperiled by concentrated commercial data extraction, reveals a commitment to individual and communal freedom and agency.
The possibilities for using digitized data for public benefit are rife with political and moral questions, short- and long-term equity issues, and opportunities to further entrench already powerful and dangerous extractive relationships. They also offer the possibility of imagining very different futures, with positive potential for more equitable approaches to digital data in every sphere of life. People and communities interested in economic systems that value collective health over individualistic advancement have the expertise to use digitized data for human betterment. For any of us to be able to decide how our digital data are used, all of us must come to recognize the importance of this opportunity. One step toward data practices that respect people as people is for each of us to see that we all have a stake in defining how we give data.
1 Discord is an app that allows people to communicate about specific issues using dedicated servers. People can communicate on the app using either voice or text. It was originally intended to help online gamers talk to each other; it is now widely used for hosting discussions of many types, moderated and organized by communities. The “places” where these discussions occur and where the data that drive them are hosted are called Discord servers.
2 Courtney Shea, “‘At This Point, We’ve Probably Helped Thousands of People Book Shots’: Q&A with Joshua Kalpin of Vaccine Hunters Canada, the Viral Website for Vaccine Appointment Intel,” Toronto Life, April 15, 2021.
3 Anoush Rima Tatevossian, “Data Philanthropy: Public & Private Sector Data Sharing for Global Resilience,” UN Global Pulse, September 16, 2011.
4 Davey Alba, “Facebook Sent Flawed Data to Misinformation Researchers,” The New York Times, September 10, 2021.
5 Shea, “At This Point.”
6 For a growing list of examples, see the annex to René Mahieu and Jef Ausloos, “Recognising and Enabling the Collective Dimension of the GDPR and the Right of Access,” LawArXiv, July 2, 2020.
7 See also Yarden Katz, Artificial Whiteness: Politics and Ideology of Artificial Intelligence, New York: Columbia University Press, 2020; Jessie Daniels, “The Manifest Destiny of Computing,” Public Books, July 27, 2021; and Harriet A. Washington, Medical Apartheid: The Dark History of Medical Experimentation on Black Americans from Colonial Times to the Present, New York: Doubleday, 2007.
8 Aaron Perzanowski and Jason Schultz, The End of Ownership: Personal Property in the Digital Economy, Cambridge, Massachusetts: MIT Press, 2016.
9 Jathan Sadowski, Salomé Viljoen, and Meredith Whittaker, “Everyone Should Decide How Their Digital Data Are Used—Not Just Tech Companies,” Nature, July 1, 2021.
10 For example, see the work of Ruha Benjamin, Safiya Noble, Marisa Duarte, Joy Buolamwini, Deb Raji, Sabelo Mhlambi, and Jasmine McNealy. This syllabus from NYU’s Center for Critical Race and Digital Studies provides additional resources: CriticalRaceDigitalStudies.com/syllabus.
11 See Matt Prewitt, “A View of the Future of Our Data,” Noema, February 23, 2021.
Lucy Bernholz (@p2173) is a senior research scholar and the director of the Digital Civil Society Lab at the Stanford Center on Philanthropy and Civil Society. Her latest book, How We Give Now: A Philanthropic Guide for the Rest of Us, discusses data donations in more depth.
Brigitte Pawliw-Fry (@brigittepfry) works as a researcher at the Digital Civil Society Lab at the Stanford Center on Philanthropy and Civil Society and hosts the podcast Queer Devotions.