Written by Chirag Arora and Juan Duran (Technology, Policy and Management (TPM) Faculty, TU Delft)
Computational social science (CSS) promises to revolutionize our ability to predict and explain social behavior using vast datasets and advanced algorithms. While these technical advances hold great potential, the ethical stakes are equally significant. Attention has deservedly focused on issues like fairness, bias, and privacy. However, another crucial element of ethical significance has received less attention: conceptual clarity and transparency. Just as in other fields, poorly defined concepts can undermine the integrity of research (Bringmann et al., 2022). While the methodological significance of consistent conceptual definitions, for example in building solid theoretical foundations, is already well recognized, the ethical salience of such concepts requires more explicit attention. How researchers define and operationalize key terms such as fairness, bias, and privacy does not just affect research outcomes; it fundamentally shapes the societal impacts of CSS, making this ethical consideration critical.
Concepts Are Not Ethically Neutral
In CSS, terms like “fairness” and “bias” are commonly used, but their meanings are not fixed or universally agreed upon. Depending on how these concepts are defined, the ethical implications of a model’s outcomes can vary significantly. The challenge is not necessarily that CSS researchers overlook these distinctions, but that the ethical significance of selecting one definition over another, and of communicating this choice explicitly, is often underappreciated. Conceptual clarity and transparency are crucial not only for ensuring methodological rigor, but also for facilitating deliberation and reflection over how research findings are applied in decision-making. Without conceptual clarity, the societal impacts of CSS risk being misaligned with the intended ethical norms and standards.
The Case of COMPAS and Multiple Definitions of Fairness
An example that highlights the importance of conceptual choices in CSS is the COMPAS (Correctional Offender Management Profiling for Alternative Sanctions) algorithm, used in the United States criminal justice system to predict the likelihood of recidivism (reoffending) (Angwin et al., 2016). The algorithm became the center of controversy when ProPublica’s investigative journalism revealed that it disproportionately flagged Black defendants as high risk for reoffending compared to white defendants. This discrepancy raised important ethical concerns, but the ensuing debate turned out to be rooted in underlying disagreements over how we should understand and operationalize concepts like fairness.
In this case, at least two main fairness criteria were in conflict. One criterion, often called separation, suggests that an algorithm is fair if the risk scores it produces are independent of demographic variables like race, given actual outcomes (Hummel, 2023). In other words, people from different groups should be treated similarly by the algorithm if they have similar actual behaviors (e.g., recidivating or not). Another fairness criterion, called sufficiency, focuses on whether the predicted risk scores are equally accurate across different groups, meaning that if two people from different demographic groups receive the same risk score, they should have the same likelihood of reoffending.
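To make these two criteria concrete, here is a minimal sketch in Python. It uses entirely synthetic data and an invented scoring rule, not anything from COMPAS itself: separation is checked by comparing error rates across groups conditional on the actual outcome, and sufficiency by comparing how often people flagged as high risk actually reoffend in each group.

```python
import numpy as np

# Illustrative sketch with synthetic data (not the real COMPAS scores):
# two demographic groups, an observed outcome, and a binary "high risk" flag.
rng = np.random.default_rng(0)
n = 10_000
group = rng.choice(["A", "B"], size=n)                           # demographic attribute
base_rate = np.where(group == "A", 0.5, 0.3)                     # groups differ in base rates
reoffended = rng.random(n) < base_rate                           # observed outcome
score = np.clip(0.3 * reoffended + 0.7 * rng.random(n), 0, 1)    # noisy risk score
high_risk = score > 0.6                                          # the algorithm's flag

for g in ("A", "B"):
    in_g = group == g
    # Separation: conditional on the actual outcome, error rates should match across groups.
    fpr = high_risk[in_g & ~reoffended].mean()    # false positive rate (non-reoffenders flagged)
    fnr = (~high_risk)[in_g & reoffended].mean()  # false negative rate (reoffenders not flagged)
    # Sufficiency (calibration): among those flagged high risk,
    # the actual reoffense rate should match across groups.
    ppv = reoffended[in_g & high_risk].mean()     # positive predictive value
    print(f"group {g}: FPR={fpr:.2f}  FNR={fnr:.2f}  PPV={ppv:.2f}")
```

When base rates differ across groups, as in this toy example, a well-known result in the algorithmic fairness literature is that an imperfect predictor cannot in general satisfy both criteria at once; this is precisely the tension at the heart of the COMPAS dispute.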
In the COMPAS case, the creators of the algorithm argued that it satisfied sufficiency, as the risk scores it produced were equally accurate across Black and white defendants (Hellman, 2020; Hummel, 2023). Critics such as ProPublica, however, pointed out that the algorithm failed to meet the separation criterion: Black defendants who did not reoffend were more likely to be labeled high risk than their white counterparts (Hummel, 2023).
This example illustrates how different conceptualizations of fairness lead to different ethical conclusions. Both perspectives may be defended as legitimate but reflect distinct ethical priorities: one focuses on ensuring that outcomes are equally accurate across groups, while the other emphasizes minimizing bias in how risk is assigned to different groups. The COMPAS case demonstrates that fairness is not a single, universal concept but a plural one, where multiple definitions can coexist, each with its own ethical implications. The choice of which definition to prioritize should not be taken lightly, as it affects how we view fairness in outcomes and processes.
The Ethical Stakes of Conceptual Choices
The COMPAS case is not an isolated example, nor is fairness the only value-laden concept that admits multiple definitions. As CSS research becomes increasingly influential in areas such as healthcare, education, and public policy, the ethical implications of how key concepts are defined become even more significant. Concepts like fairness, bias, and privacy are not purely technical terms but are shaped by ethical values. Different definitions reflect different priorities, such as balancing equity with efficiency, or accuracy with fairness. This means that CSS researchers are making ethical decisions when they choose which conceptual definitions to apply, even if they are not always aware of or explicit about it.
For instance, when researchers design predictive algorithms to minimize false positives (predicting something will happen when it does not), they might inadvertently increase false negatives (failing to predict something that does happen). In a criminal justice context, prioritizing fewer false positives (e.g., incorrectly predicting someone will reoffend) could reduce unnecessary incarceration but might also lead to higher rates of recidivism. These are not just technical trade-offs but ethical decisions that impact real lives.
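As a rough illustration of this trade-off, the hypothetical sketch below varies the decision threshold of a toy risk model: raising the threshold reduces false positives but increases false negatives. All data and numbers here are invented for illustration, not drawn from any real system.

```python
import numpy as np

# Hypothetical sketch: the false positive / false negative trade-off at different thresholds.
rng = np.random.default_rng(1)
n = 10_000
will_reoffend = rng.random(n) < 0.4                               # unobserved ground truth
score = np.clip(0.4 * will_reoffend + 0.6 * rng.random(n), 0, 1)  # imperfect risk score

for threshold in (0.3, 0.5, 0.7):
    flagged = score > threshold
    fp = (flagged & ~will_reoffend).mean()   # flagged, but would not have reoffended
    fn = (~flagged & will_reoffend).mean()   # not flagged, but goes on to reoffend
    print(f"threshold {threshold}: false positives {fp:.2f}, false negatives {fn:.2f}")
```

Which point on this curve is acceptable is not a question the model itself can answer; it depends on how one weighs unnecessary incarceration against preventable reoffending.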
Recognizing and reflecting on these conceptual choices can ensure that the ethical implications are taken seriously. When researchers choose one definition of fairness over another, they are implicitly deciding which outcomes matter more—whether it is the fair treatment of individuals across different groups or the accuracy of predictions for everyone. The question is not just about which model performs better, but about what values are being embedded in these models.
The Need for Transparency and Reflexivity
Given the ethical complexity of conceptual choices in CSS, it is essential that researchers are transparent about how they are making such conceptual choices (and why). Transparency helps ensure that research is reproducible and that others can critically engage with the methods and conclusions. Additionally, by clearly stating the conceptual foundations of their work, researchers provide important guidance for policymakers and other stakeholders who might rely on their findings to make decisions.
For example, if the creators of COMPAS had explicitly acknowledged that their definition of fairness prioritized accuracy over the separation of racial groups, it would have been easier for deployers, critics, and policymakers to understand the ethical trade-offs involved and decide whether the chosen fairness criterion was appropriate for the criminal justice system. This kind of transparency benefits not only the research community but also those who must make real-world decisions based on this research.
Moreover, transparency paves the way for both individual and collective ethical reflection. Researchers must consistently question how their definitions of fairness, bias, and other key terms influence the outcomes of their work, but this reflection should not be limited to the individual researcher. Transparency enables the broader research community to engage in an ongoing and open dialogue about these definitions, allowing for a deeper understanding of how different conceptual choices function across contexts and over time. This shared reflection could help make visible the conceptual diversity within the field and reveal the varied ethical consequences of different approaches.
Moving Forward
As CSS continues to influence vital aspects of society, the ways in which researchers define and communicate key concepts will only grow in importance. We can’t expect terms like fairness or bias to have fixed, universally accepted meanings. For better or worse, researchers are tasked with the responsibility of defining these terms in ways that make their values and priorities explicit. The ethical standards we value—individually, socially, and culturally—are woven into the choices researchers make, whether explicitly or otherwise. While conceptual transparency may not resolve all ethical debates, it’s an essential step in fostering a meaningful process of deliberation over these values. CSS practitioners must approach their work with intention, be transparent about their methods, and remain open to dialogue.
Ethics, moreover, should not be seen as an endpoint where we evaluate whether the “right” definitions were chosen. Instead, it’s a process—a continuous effort to make choices grounded in thoughtful, contextual considerations and communicated transparently. There are still open questions about how to cultivate these reflective skills, especially in early-career researchers. Moreover, this task isn’t solely for CSS experts; insights from ethics, philosophy, and other disciplines are invaluable in helping CSS practitioners navigate these complex issues. Building a framework that supports this kind of interdisciplinary collaboration and skill-building will be crucial for the future of ethical, responsible computational social science.
References
- Angwin, J., Larson, J., Mattu, S., & Kirchner, L. (2016, May 23). Machine Bias. ProPublica. https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing
- Bringmann, L. F., Elmer, T., & Eronen, M. I. (2022). Back to Basics: The Importance of Conceptual Clarification in Psychological Science. Current Directions in Psychological Science, 31(4), 340–346. https://doi.org/10.1177/09637214221096485
- Hellman, D. (2020). Measuring Algorithmic Fairness. Virginia Law Review, 106(4), 811–866.
- Hummel, P. (2023). Algorithmic Fairness as an Inconsistent Concept. American Philosophical Quarterly, 62(1).
Picture by Tahir Osman on Unsplash