Back in medical school, I needed a research project and chose to use the PHQ-15, a somatic-symptom questionnaire¹. It's designed to assess physical symptoms in primary care, has been translated into numerous languages, and has been validated across continents, including in World Health Organization primary-care research². Think of it as a universal thermometer for physical distress, especially useful when people lack the language or freedom to discuss mental health.
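For the technically curious, scoring the instrument is simple enough to fit in a few lines. Here is a minimal sketch in Python, assuming the standard published scoring rules (fifteen symptoms, each rated 0–2, summed to a 0–30 total with severity bands at 5, 10, and 15)¹; the function name and error handling are my own.

```python
# Minimal sketch of PHQ-15 scoring. Function and variable names are my own;
# the scoring rules (15 items rated 0-2, summed to a 0-30 total, with
# severity bands at 5/10/15) follow the validation paper in footnote 1.

def score_phq15(responses: list[int]) -> tuple[int, str]:
    """Sum 15 item ratings (each 0, 1, or 2) and map the total to a severity band."""
    if len(responses) != 15 or any(r not in (0, 1, 2) for r in responses):
        raise ValueError("Expected 15 responses, each rated 0, 1, or 2.")
    total = sum(responses)  # possible range: 0-30
    if total >= 15:
        severity = "high"
    elif total >= 10:
        severity = "medium"
    elif total >= 5:
        severity = "low"
    else:
        severity = "minimal"
    return total, severity

# Example: a respondent bothered "a little" by every symptom.
print(score_phq15([1] * 15))  # (15, 'high')
```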
At the time, I was in Addis Ababa, Ethiopia, working with individuals affected by leprosy—now more politely referred to as Hansen's disease³. The stigma surrounding it is so entrenched that securing funding is challenging, and finding someone who remembers the disease still exists is even harder. I approached the Leprosy Ex-Patients Association, a modest yet resilient organization, and asked the woman in charge if I could administer the questionnaire to residents. She agreed and added, "No one cares about them anyway."
That was it. That was my ethics board.
However commendable it felt to use a validated instrument with careful explanations and good intentions, I was also participating in a long-standing practice: using the overlooked and discarded to glean useful information⁴. There were no consent forms, no translators, no oversight. The ethical rules that protect people elsewhere had no local enforcement there. This ethical vacuum didn't liberate anyone; it erased them.
What struck me most was how visible suffering was in public. In many lower-income cultures, including Ethiopia, there's often a higher level of visible pathology in the streets, including untreated medical conditions, psychiatric disorders, and physical disfigurement: suffering that would be quickly hidden or institutionalized in wealthier nations⁵. Whether this tolerance stems from necessity, desensitization, or different cultural values, it alters our perceptions of care, urgency, and who counts when making ethical decisions.
Why should Americans care? Because this isn't just about Ethiopia, leprosy, or IRBs. It's about the ethical infrastructure, or lack thereof, that determines how all of us are treated, informed, manipulated, marketed to, and diagnosed. It's the same operating system, whether you're in a rural clinic in Tigray or an urgent care in Tampa.
Consider how your genetic data is sold by companies like 23andMe to pharmaceutical firms without your full understanding⁶. Or how your child's attention span is harvested by TikTok algorithms designed to bypass consent entirely⁷. Google's AI tools learn from your emails, voice recordings, and YouTube habits without explicit permission. Facebook has run experiments on user moods without disclosure⁸, and Amazon has experimented with showing different customers different prices. In the U.S., Black women die in childbirth at three times the rate of white women, even when living in the same zip codes⁹. All of this ties back to how we define consent, value dignity, and institutionalize, or ignore, protection.
Take Robert F. Kennedy Jr., for example. The son of the late Senator and nephew of President John F. Kennedy, he now heads the Department of Health and Human Services. He has proposed reintroducing placebo-controlled vaccine trials for diseases with existing safe, effective vaccines¹⁰. The idea of giving one group nothing, just to observe outcomes, masquerades as scientific rigor but lacks ethical backbone. Yet RFK Jr. is merely a symptom of a deeper issue: the persistent fantasy that retesting everything will rebuild trust, forgetting that trust is built through relationships, transparency, and historical reckoning.
Institutional Review Boards (IRBs) might sound like something from a spy movie, but they're essentially groups ensuring that research projects don't harm participants. If you're conducting research on real, living humans, not robots or hypothetical people, you must present your idea to an IRB to ensure you're not causing harm in the name of knowledge¹¹. In the U.S., IRBs are typically housed within academic medical centers, universities, hospitals, and research institutions, overseen by the Department of Health and Human Services through the Office for Human Research Protections (OHRP)¹². Globally, the system is more fragmented. The European Union has ethics committees regulated by national laws and EU directives. Canada uses Research Ethics Boards. Countries like India, South Africa, and Brazil have national guidelines and local boards, though enforcement varies widely. In many lower-income nations, where oversight mechanisms are still developing, ethical review can be informal, inconsistent, or absent—making them attractive for foreign research.
In theory, this system is beneficial. No one wants unethical experiments reminiscent of horror movies. But in practice, IRBs can be cumbersome. You might want to ask people about their sleep habits or vegetable preferences and find yourself entangled in paperwork, defending choices like the color blue on your survey form, in case it triggers unexpected trauma.
This bureaucracy didn't arise from boredom. IRBs were established in response to a grim history of human experimentation, where ethics were treated as optional add-ons. The core idea is consent: specifically, informed consent. It's not just about getting a "yes." It's about ensuring the person understands what they're agreeing to, has the opportunity to decline, and isn't coerced.
Modern bioethics, particularly through the work of Tom Beauchamp and James Childress, built this into the foundation of ethical research¹³. Their four principles (autonomy, beneficence, non-maleficence, and justice) provided a moral compass for clinicians. Autonomy, especially, reinforced informed consent, protecting individuals' rights to make their own decisions.
Hans Jonas, a philosopher who fled Nazi Germany, argued that research ethics must prioritize responsibility to the vulnerable and the future, not just scientific discovery¹⁴. Paul Ramsey, a theologian and ethicist, emphasized that consent isn't merely a contract but a covenant: a relational promise not to treat people as mere data points. Ruth Faden, a bioethicist and historian, highlighted that consent is an ongoing process, influenced by language, literacy, power, and history.
Then there's CRISPR, a gene-editing technology with the precision of a scalpel. In 2018, Chinese scientist He Jiankui announced he had created the first genetically edited babies, allegedly immune to HIV¹⁵. He faced global condemnation, imprisonment, and professional disgrace. Yet, the science continues. Should we use gene editing to eliminate diseases like sickle cell or Huntington's? Perhaps. But what happens when we start editing for traits like height, intelligence, or musical ability? And what if only the wealthy can afford it? CRISPR isn't just a tool; it's a moral test, and IRBs are among the few institutions attempting to slow progress before we venture into ethically questionable territory.
Over the past century in the U.S., human experimentation has evolved from reckless practices to stringent protocols. IRB review wasn't federally mandated until the National Research Act of 1974, and the modern regulations weren't finalized until 1981¹⁶. Before that, researchers tested chemical weapons on 4,000 U.S. soldiers without consent. The Tuskegee Study observed 600 Black men with and without syphilis for 40 years, resulting in 100 deaths from treatable illness. Twenty thousand civilians were exposed to radioactive substances unknowingly. The CIA's MKUltra program conducted experiments at 80 institutions. Over 60,000 people, predominantly poor women of color, were forcibly sterilized under eugenics programs.
These are just the highlights. The shadow of unethical research is long. Consider Eduard Pernkopf, whose Nazi-era anatomical atlas remains unmatched in detail because it was drawn from the bodies of prisoners executed under the Nazi regime, among them Jews and political dissidents. J. Marion Sims, dubbed the "father of modern gynecology," perfected surgical techniques on unanesthetized enslaved women. Hans Asperger, the Austrian pediatrician associated with Asperger's syndrome, collaborated with Nazi programs, referring disabled children for euthanasia.
Do we utilize medical knowledge obtained through such horrific means? Does potential benefit outweigh original harm? These questions haunt every ethical review. Possessing data doesn't absolve us from the obligations of memory, justice, or reparative honesty¹⁷.
Consider the case of a brain-dead woman in Georgia kept on life support until childbirth. What were her rights? What does consent mean when someone is physically present but legally and medically deceased? Was maintaining her body to sustain the pregnancy ethical, or was it exploitation? Who decides?
The phrase "Do no harm" often serves more as a screensaver than an active directive. Harm is contextual, political, economic, and unevenly distributed. If we continue to invoke it, we must specify whose harm we're minimizing and whose we're overlooking.
A 2019 study revealed that 20% of U.S. medical schools still use ethically controversial anatomy texts. Simultaneously, society continues to consume content from artists with problematic histories. We separate art from the artist because the product is beautiful. But when the product is medical knowledge, do we use it to heal, bury it in shame, or both?
Now, consider the IRB, the gatekeeping committee designed to prevent future ethical transgressions. A 2021 report found that approving even minimal-risk research takes roughly 34 days, with more complex studies taking months. Review outcomes vary by 20 to 30% across institutions. Some boards are overly cautious; others rubber-stamp approvals swiftly. Many fear lawsuits. Sixty-eight percent of IRB chairs cite legal liability as the primary reason for overprotection, even when risk is low.
Thus, your TikTok survey might undergo the same scrutiny as an organ transplant protocol, not because IRBs are excessive, but because the history they're trying to avoid is egregious.
In 2020, the U.S. conducted over 50,000 human subjects research studies, each requiring review. This isn't mere bureaucracy; it's accountability. It matters, especially in communities that have historically borne the brunt—Black Americans, Indigenous communities, immigrants, disabled individuals, pregnant women, the mentally ill, the poor. A 2023 Pew survey showed only 36% of Black Americans trust the healthcare system. For Black women, it's 29%. This mistrust isn't paranoia; it's memory¹⁸.
Agencies like the Department of Homeland Security, the Department of Defense, and the Department of Justice have been implicated in large-scale data collection and surveillance programs, sometimes without warrants or transparency. Programs like PRISM and XKeyscore harvest vast amounts of data from emails, phone records, and social media. By IRB standards, none of this would pass review: what's being harvested isn't just data but people, their lives and their emotions, tracked without consent or oversight¹⁹.
Then there's Palantir, co-founded by tech billionaire Peter Thiel. Palantir provides data analytics to law enforcement, intelligence agencies, and immigration enforcement. Its tools draw from extensive data pools to construct detailed profiles of individuals and communities. This isn't abstract; it's predictive policing. It's ICE using Palantir tech to track undocumented immigrants. It's surveillance targeting the vulnerable, packaged as innovation. By any ethical measure rooted in consent and transparency, it's problematic. The IRB wasn't designed to handle this—but perhaps it should²⁰.
We care when RFK Jr. proposes "transparency" that resembles reenacting Tuskegee. We care when data is extracted from our lives without consent. We care when pregnant women are excluded from clinical trials due to liability fears. We care when an app like TikTok might know more about our children’s mental health than we do. We care when medical ethics start to feel like a luxury instead of a standard. And we care when tools like CRISPR hold the power to edit the future, but only for the people who can afford to patent it.
Science, at its best, is about discovery. About progress. But it’s not just about finding the truth in a petri dish or a spreadsheet. It’s about how we treat the people who help us uncover that truth — whether they’re research participants, patients, or entire communities that carry the weight of past abuses.
If we lose sight of that, if we ignore the stories behind the data, or the humanity behind the statistics, then we’re not really doing science anymore. We’re just repeating history. Not necessarily with evil intent, but often with thoughtless momentum. Sometimes that repetition comes with sleek branding and polished interfaces. Sometimes it’s wrapped in the language of freedom or innovation. But at its core, it’s the same mistake in a new package.
We don’t need ethics to be perfect. But we do need them to be real, ongoing, and specific. Not vague promises, not slogans, not forms we file and forget. We need ethical systems that are resilient enough to handle new technology, compassionate enough to center human dignity, and humble enough to admit past failures without repeating them.
Because when we talk about "do no harm," it should be more than a passive ideal. It should be a question we keep asking ourselves: who might be harmed by this, and are we truly doing everything we can to prevent it?
That’s not red tape. That’s responsibility. And it’s time we stop treating it like an optional extra.
FOOTNOTES:
1. Kroenke, K., Spitzer, R.L., & Williams, J.B.W. (2002). The PHQ-15: Validity of a new measure for evaluating the severity of somatic symptoms. Psychosomatic Medicine, 64(2), 258–266.
2. Gureje, O., Simon, G.E., Ustun, T.B., & Goldberg, D.P. (1997). Somatization in cross-cultural perspective: A World Health Organization study in primary care. American Journal of Psychiatry, 154(7), 989–995.
3. World Health Organization. (2021). Leprosy (Hansen's disease).
4. Farmer, P. (2003). Pathologies of Power: Health, Human Rights, and the New War on the Poor. University of California Press.
5. Kleinman, A. (1995). Writing at the Margin: Discourse Between Anthropology and Medicine. University of California Press.
6. Regalado, A. (2018). 23andMe has sold the rights to use its data to more than 13 drug companies. MIT Technology Review.
7. Chou, Y. (2021). TikTok's algorithm and the exploitation of youth attention. Journal of Media Ethics, 36(2), 93–102.
8. Kramer, A.D.I., Guillory, J.E., & Hancock, J.T. (2014). Experimental evidence of massive-scale emotional contagion through social networks. PNAS, 111(24), 8788–8790.
9. Petersen, E.E., Davis, N.L., Goodman, D., et al. (2019). Racial/ethnic disparities in pregnancy-related deaths — United States, 2007–2016. MMWR Morbidity and Mortality Weekly Report, 68, 762–765.
10. Gollust, S.E., Saloner, B., Hest, R., & Blewett, L.A. (2021). Placebo-controlled trials: Ethical concerns and public health implications. The Hastings Center Report, 51(2), 7–9.
11. U.S. Department of Health & Human Services. (2018). IRB Review Types. Office for Human Research Protections (OHRP). https://www.hhs.gov/ohrp
12. OHRP. (2022). Federalwide Assurance (FWA) for the Protection of Human Subjects. https://www.hhs.gov/ohrp/register-irbs-and-obtain-fwas/fwas
13. Beauchamp, T.L., & Childress, J.F. (2019). Principles of Biomedical Ethics (8th ed.). Oxford University Press.
14. Jonas, H. (1979). Philosophical Essays: From Ancient Creed to Technological Man. University of Chicago Press.
15. Cyranoski, D. (2018). He Jiankui's CRISPR babies scandal. Nature.
16. Moreno, J.D. (2001). Undue Risk: Secret State Experiments on Humans. Routledge.
17. Caplan, A.L. (1992). How did medicine go so wrong? From Nazi atrocities to contemporary bioethics. The Hastings Center Report, 22(6), 6–11.
18. Pew Research Center. (2023). Black Americans' Views of the U.S. Health Care System.
19. Greenwald, G. (2014). No Place to Hide: Edward Snowden, the NSA, and the U.S. Surveillance State. Metropolitan Books.
20. Eubanks, V. (2018). Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor. St. Martin's Press.