From Harm to Healing: Reimagining AI Through the Lens of Global Health and Moral Clarity
As a clinician and someone who’s spent years working across health systems—from the operating room to global policy—I find myself returning, again and again, to a simple question: what does it really mean to “do no harm”?
For decades, this principle—primum non nocere—has anchored the moral foundation of medicine. And yet, the reality on the ground often diverges sharply from that ideal. In the United States alone, an estimated 250,000 people die each year from preventable medical errors. The scale is staggering: nearly 700 deaths a day, as if a fully loaded jumbo jet were crashing every single day. And somehow, we've normalized it.
But these are only the most visible wounds. The deeper, systemic harm stems from absence—not from negligence in treatment, but from the sheer lack of access, equity, and dignity in care. A child with a cleft lip born in Bihar may face a lifetime of preventable suffering, while another child with the same condition born in Boston receives coordinated, multidisciplinary care within weeks. Not because their needs are different, but because their context is.
This, to me, is where the conversation about artificial intelligence must begin—not with abstract forecasts or fears, but with a clear-eyed look at what already exists: unjust systems, broken pipelines, and profound disparities. AI doesn’t enter a blank slate; it enters this reality. It can mirror injustice or help us see our blind spots—depending entirely on how we choose to engage with it.
Too often, we're caught between two poles: inflated optimism that sees AI as the cure for all ills, and defensive perfectionism that stalls progress until every risk is eliminated. Meanwhile, patients continue to wait. Clinicians continue to burn out. Communities continue to carry burdens they did not choose and cannot control.
I don’t believe the core challenge is technical. It’s moral. We have not yet aligned our extraordinary technological capacity with an equally coherent moral and emotional framework. The real deficit in healthcare is not innovation—it’s integration. Integration of values. Of systems. Of people.
When I’ve worked with communities—whether in rural clinics, urban hospitals, or through global partnerships—I’ve seen firsthand that care isn’t just about diagnosis and treatment. It’s about accompaniment. It’s about building trust. It’s about honoring the dignity of those we serve. AI, if rightly deployed, must be rooted in this same ethic. It must help us uplift human capacity—not bypass it.
This perspective requires us to move away from a model of top-down implementation and toward one of co-creation. We must ask not only what AI can do, but whom it serves. Who decides? Who is accountable? Who is left behind?
Technology must not be imposed upon communities as a finished product. It must emerge from a process that invites participation, cultivates local ownership, and builds the capacity of people to use it meaningfully. This is not a romantic ideal. It is the only path to sustainability.
And that means thinking differently about scale. I often hear calls to “scale innovation,” but I’ve come to believe that the more urgent work is scaling relationships. Scaling trust. Scaling shared frameworks for consultation, reflection, and action.
In global health, we’ve too often confused funding with development. But lasting change rarely begins with money—it begins with vision. A vision that sees communities not as passive recipients but as protagonists. A vision that measures progress not just by throughput or efficiency, but by our ability to foster self-reliance, resilience, and collective well-being.
AI can support this vision—but only if we design with humility. Only if we lead with emotional and spiritual intelligence. Systems don’t have empathy, but we do. And that is where governance begins: not in regulatory documents, but in the hearts and minds of those who lead and build.
This means including impact assessments that ask human questions before we ask technical ones. Training clinicians not just in prompt engineering or workflow redesign, but in ethical discernment and community-centered leadership. And above all, it means resisting the forces of fragmentation—those pressures that pit sectors, ideologies, or disciplines against one another.
I’ve seen how division, especially in times of stress, erodes our ability to collaborate. The most meaningful advances in global health have not come from silos or blame—they’ve come from coalitions. From the hard work of building unity across difference. From shared service, not shared ideology.
As we confront the growing complexity of health systems—rising costs, climate shocks, fragile infrastructures—our response cannot be reactive. It must be constructive. Constructive action means funding models that prioritize long-term resilience over short-term visibility. It means forging trustworthy, values-driven partnerships—including in the private sector—that align with the public good. And it means ensuring that AI doesn’t widen the gap between rich and poor, connected and unconnected, but instead helps close it.
This is not a passive hope. It is a choice. A moral imperative.
I believe we are standing at a threshold. The road ahead will not be easy. But the tools are here. The knowledge is here. The communities are ready. What remains is the will to act—with integrity, with consultation, and with courage.
If we choose to meet this moment with clarity and unity, AI will not save us—but it will walk with us. It will help us see what we’ve overlooked. It will amplify the wisdom already present in communities. And it will become not a symbol of dominance, but of our shared commitment to heal—not just bodies, but systems, and ultimately, the social fabric itself.
Dr. Salim Afshar