Algorithms, rules written into software, shape key moments in our lives: who gets hired or admitted to a top public school, who goes to jail, who receives scarce public benefits. Today, high-stakes software is rarely open to scrutiny, yet its code navigates moral questions: Which of a person’s traits are fair to consider in a job application? Who deserves priority in accessing scarce public resources, whether school seats, housing, or medicine? When someone first appears in a courtroom, how should their freedom be weighed against the risks they might pose to others?
Policymakers and the public often find algorithms complex, opaque, and intimidating, and it can be tempting to pretend that hard moral questions have simple technological answers. But that approach leaves technical experts holding the moral microphone and keeps people without technical expertise from making their voices heard. Today, policymakers and scholars are seeking better ways to share the moral decision-making embedded in high-stakes software, exploring ideas like public participation, transparency, forecasting, and algorithmic audits. But there are few real examples of those techniques in use.
In Voices in the Code, scholar David G. Robinson tells the story of how one community built a life-and-death algorithm in a relatively inclusive, accountable way. Between 2004 and 2014, a diverse group of patients, surgeons, clinicians, data scientists, public officials, and advocates collaborated and compromised to build a new transplant matching algorithm: a system for offering donated kidneys to particular patients from the U.S. national waiting list.
Drawing on interviews with key stakeholders, unpublished archives, and a wide scholarly literature, Robinson shows how this new Kidney Allocation System emerged and evolved over time, as participants gradually built a shared understanding both of what was possible and of what would be fair. Robinson finds much to criticize, but also much to admire, in this story. It ultimately illustrates both the promise and the limits of participation, transparency, forecasting, and auditing of high-stakes software. The book’s final chapter draws out lessons for the broader struggle to build technology in a democratic and accountable way.
Background
David Robinson is a visiting scholar at the Social Science Matrix at the University of California, Berkeley. He is also a member of the faculty at Apple University, where he teaches seminars on the intersection of technology and the liberal arts—a concept that aligns with Steve Jobs' vision of Apple. At Apple, Robinson advises on creating inclusive products. Previously, he co-founded and led Upturn, an NGO collaborating with civil rights organizations to advance equity and justice in the design, governance, and application of digital technologies. Robinson also contributed to the White House's recent crafting of the AI Bill of Rights.
Voices in the Code examines how one community created a life-and-death algorithm with inclusivity and accountability at its core—a rare, positive example in the field.
Setting the Scene
The story begins at an airport Marriott in Dallas, Texas, on a chilly February morning in 2007. Hundreds gather for a preview of a life-saving software product developed over several years: a new version of an algorithm that determines which of the roughly 100,000 patients on the U.S. transplant waiting list should receive a donated kidney. Although a prior version is in use and already saving lives, many attendees criticize it, believing they can improve it both ethically and technologically. They recognize that public trust in this technology is essential for its success.
History and Development of the Algorithm
The Dallas meeting is part of a public process examining the ethics embedded within the software for kidney allocation. The need to involve a broader community in these moral decisions has historical roots going back to 1960, when Dr. Belding Scribner faced a dilemma. At the University of Washington in Seattle, he invented a shunt that allowed patients to receive dialysis indefinitely, but he had only four dialysis machines and faced overwhelming demand.
To resolve the issue, Scribner made a bold decision: he handed over the responsibility of choosing which patients to save to a community committee, believing the choice should be shared beyond just medical professionals. This lay committee, though controversial for its perceived biases, set a precedent for involving a broader community in ethically challenging decisions.
This tradition ultimately led to the development of the Kidney Allocation System (KAS), the outcome of the entire process the book chronicles.
Evolution of the National Transplant System
In 1972, Congress mandated that dialysis be available to all who needed it, sidestepping the need to choose patients for dialysis. But kidney transplants, unlike dialysis, became the preferred treatment for kidney failure, and there weren’t enough kidneys available to meet demand. Congress subsequently called for a national system to allocate organs via a computerized algorithm, directing public input to guide the allocation priorities.
Initially, the system prioritized time on the waiting list, a seemingly fair approach. But the proposal unveiled in 2007 would shift priority toward saving the most life years: maximizing the years of life gained from each transplant.
The Life Years from Transplant (LYFT) Proposal
The Dallas proposal was called LYFT (Life Years from Transplant) and aimed to allocate kidneys to those predicted to benefit most in terms of additional life years. This required complex calculations, considering factors like how long each patient might survive with or without a transplant, and the probability of needing dialysis again if the transplant failed.
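The core idea behind a LYFT-style score can be sketched in a few lines. This is a hypothetical illustration, not the actual LYFT model: the real system used detailed statistical survival curves, while here the patient fields and survival estimates are invented stand-ins, and the transplant benefit is simply discounted by the chance of graft failure.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    years_with_transplant: float  # hypothetical predicted survival if transplanted
    years_on_dialysis: float      # hypothetical predicted survival without a transplant
    graft_failure_prob: float     # chance the transplant fails and dialysis resumes

def lyft_score(c: Candidate) -> float:
    """Rough sketch: expected extra life years a transplant would add."""
    expected_with_transplant = (
        (1 - c.graft_failure_prob) * c.years_with_transplant
        + c.graft_failure_prob * c.years_on_dialysis
    )
    return expected_with_transplant - c.years_on_dialysis

candidates = [
    Candidate("A", years_with_transplant=30.0, years_on_dialysis=12.0, graft_failure_prob=0.1),
    Candidate("B", years_with_transplant=14.0, years_on_dialysis=8.0, graft_failure_prob=0.2),
]

# Under a pure LYFT rule, the kidney goes to whoever gains the most life years.
best = max(candidates, key=lyft_score)
```

Note how such a score, taken alone, systematically favors candidates with more potential years to gain, which is exactly the distributional tension the Dallas debate turned on.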
Medical experts favored LYFT, seeing it as an efficient way to maximize benefit: the proposed algorithm was projected to add more than 10,000 life years annually. However, this approach would shift transplants toward younger patients, who had more potential life years to gain, reducing the share of transplants going to older patients.
Public Response and Debate: Clive Raw’s Objection
During the meeting, Clive Raw, a Los Angeles traffic engineer with polycystic kidney disease (PKD), voiced his concerns. He argued that age, used as a proxy for health in the algorithm, was not always accurate, and that the system’s reliance on it was unfair. Clive’s deeper moral objection was that patients like him, who had maintained their health to extend the time before needing a transplant, would be penalized, while those who deteriorated earlier might benefit more. He worried that his daughter, who also had PKD, might feel pressured to neglect her health for a better chance at a transplant when she was younger.
Equity and Social Determinants of Health
Clive’s objection touched on broader issues. The preference for healthier candidates inherently favored people with better healthcare access, who tended to be wealthier and, on average, whiter. The higher incidence of kidney failure in African-American communities, driven in part by higher rates of diabetes and high blood pressure that are themselves shaped by social determinants of health, raised additional equity concerns. The proposal to prioritize life years could, in effect, marginalize disadvantaged communities.
The Compromise Solution
The eventual compromise (KAS) balanced maximizing life years with equity, offering the healthiest kidneys to those likely to live the longest while giving all patients a reasonable chance at a transplant. This new approach improved racial equity and gave older patients a fairer chance. The compromise followed nearly a decade of debate and reflects the challenge of balancing efficiency and fairness.
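The shape of such a compromise rule can be sketched as follows. This is a toy illustration, not the actual KAS policy: the 20% quality cutoff, the field names, and the fallback to waiting time are all invented for the example, chosen to show how longevity matching for the best kidneys can coexist with a rule that keeps every patient in the running.

```python
def allocate(kidney_quality_percentile: float, candidates: list[dict]) -> dict:
    """Toy compromise rule, loosely inspired by KAS-style longevity matching.

    If a kidney ranks among the very best (top 20% quality here, an
    illustrative threshold), offer it to the candidate with the longest
    predicted survival, so the healthiest organs yield the most life years.
    Otherwise fall back to waiting time, so older or sicker patients
    retain a reasonable chance at a transplant.
    """
    if kidney_quality_percentile <= 20:
        return max(candidates, key=lambda c: c["predicted_survival_years"])
    return max(candidates, key=lambda c: c["years_waiting"])

candidates = [
    {"name": "young", "predicted_survival_years": 30, "years_waiting": 2},
    {"name": "older", "predicted_survival_years": 12, "years_waiting": 9},
]

top_kidney_match = allocate(10, candidates)   # high-quality kidney: longevity wins
average_kidney_match = allocate(55, candidates)  # typical kidney: waiting time wins
```

The design point is that neither value, efficiency nor equity, governs every allocation; each rules over a slice of the organ supply, which is roughly the kind of split the decade of debate produced.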
Broader Implications for AI and Algorithmic Governance
Robinson sees kidney allocation as an example of a well-governed system, contrasting with other high-stakes algorithms that often lack inclusive, ethical oversight. He explores four governance strategies for ethical AI in his book:
1. Participation: Engaging a diverse set of stakeholders.
2. Transparency: Providing clear information on how algorithms work.
3. Forecasting: Anticipating the impact of algorithms before implementation.
4. Auditing: Regularly assessing outcomes to ensure alignment with ethical standards.
These strategies are particularly useful when public trust and equitable resource allocation are essential, as in public school admissions or public-sector AI.
Lessons Learned
Robinson’s research offers lessons for other high-stakes areas:
• Algorithmic Influence on Moral Decisions: Algorithms can shape ethical debates by highlighting specific values.
• Deliberative Participation: Engaging the public can lead to shared understanding, but it requires time, resources, and a commitment to education.
• Challenges of Transparency: Mere data disclosure is insufficient; building public understanding requires meaningful communication.
• The Limits of Perfect Fairness: Quantification provides an efficient but imperfect approach to allocating limited resources, balancing moral considerations with practical limitations.
• Knowledge Isn’t Always Power: While stakeholder engagement is crucial, ultimate control often resides in top-level governance structures like legislation and court rulings.
Conclusion
Robinson concludes that AI governance can draw valuable insights from this model, balancing technological efficiency with inclusivity and accountability in ethical decision-making. The seminar ends with a lively question-and-answer session, emphasizing the practical challenges and moral complexities of creating fair, transparent, and accountable systems in AI and beyond.