AI and society specialist Mutale Nkonde spoke on why ophthalmologists need an ethical lens when deploying AI—and how to develop one.
When Mutale Nkonde took the stage at The Association for Research in Vision and Ophthalmology 2025 Annual Meeting (ARVO 2025) in Salt Lake City, she wasn’t there to discuss the latest breakthroughs in retinal imaging or glaucoma treatment.
Instead, the CEO of AI for the People and Cambridge University PhD candidate offered something equally valuable to the vision scientists gathered on Day 1 of the meeting: a framework for thinking about the ethical implications of AI in their field.
READ MORE: AI Tools in Ophthalmology: From Virtual Scribes to Surgical Planning
“The big theme was always this idea that the science that we advance should at least, at the very least, do no harm and at the very best improve the human experience,” Ms. Nkonde explained during her packed-to-the-rafters keynote address. “But this isn’t an ‘AI is all bad’ conversation. This is an ‘AI is a great opportunity if we’re prepared to think about how to do this well.’”
The race for responsible AI
As AI applications proliferate in ophthalmology, especially the investigational variety on display at ARVO 2025, Ms. Nkonde’s message couldn’t be more timely. She began by laying out a social-technical framework that offers vision scientists a structured approach to considering the broader implications of their work with AI.
The framework begins with awareness. “Whenever AI products come into a field or we’re thinking about AI considerations, cognitively, we need to be thinking about the fact that these products are designed in a social context,” she said.
According to Ms. Nkonde, this means acknowledging that AI systems inherit the biases, assumptions and limitations of their creators and the data they’re trained on—and what they inherit depends largely on dominant sociocultural factors.
For vision scientists, she explained that this has particular relevance when developing systems that analyze diverse populations. An algorithm trained primarily on light-colored eyes, for example, might perform poorly on darker irises, potentially leading to misdiagnoses in certain ethnic groups. Similarly, facial recognition systems used in ocular biometrics have demonstrated troubling disparities in accuracy across racial groups.
The second element of Ms. Nkonde’s framework is research culture itself. “It can feel very awkward, it can feel very challenging to bring up social considerations of technology,” she acknowledged.
“How do we create the cultures within labs and within research that allow for these social considerations to be given the same type of weighting as the scientific questions at hand?”
Critical ethical challenges
Such a cultural shift is essential because, as Ms. Nkonde put it, “AI is neither good, nor bad—nor is it neutral. How we use AI, how we integrate AI into our science will really be testament to the ways in which we can either capitalize on opportunities or create deeper inequalities.”
Ethical challenges abound for AI in vision science, and as in all fields, privacy concerns and data licensing loom large. To illustrate the point, Ms. Nkonde referenced Henrietta Lacks, a woman of color whose cells were harvested without her consent.
READ MORE: AI in Ophthalmology: Maximizing Potential while Ensuring Data Safety
These cells, the immortalized HeLa line, have become fundamental to countless medical advances. Neither she nor her family has reportedly ever been compensated.
The parallel to modern data collection is clear. “We need data to develop our models,” said Ms. Nkonde. “And in medical settings, at least in the United States and I’m sure in other countries, there are privacy issues around medical data. So one of the big questions is: How can we ethically get that?”
For ophthalmologists collecting retinal scans or other biometric data, Ms. Nkonde thinks there could be one key avenue: trust. “How can we create the trust needed in these systems?” she asked. “How can we educate patient groups? How can we encourage doctors and other scientists to consider not just these implications, but move us towards an area which we trust and that we are being ethical around how we’re using information?”
Beyond ethical data harvesting, environmental sustainability presents another, often-overlooked, challenge. The computational power required for advanced AI systems comes at a significant environmental cost in terms of electricity and water usage—and who ends up bearing this cost is a key issue in the evolving field of AI and social justice that Ms. Nkonde champions.
“I was horrified to find out, but not surprised, that [data centers] sit in the global south. They sit in poor communities in the U.S.,” Ms. Nkonde revealed.
For vision scientists developing resource-intensive AI systems, this raises important considerations about when and how to deploy such technologies. “If we’re prompting everything all the time without ceasing, we’re actually going to get to a point in the very close future where we’re creating climatic challenges for ourselves,” Ms. Nkonde warned.
Looking twice at computer vision
Computer vision, a field particularly relevant to the ARVO members and researchers in the audience, also received special attention in Ms. Nkonde’s address. Using AI-powered doorbell cameras as an example, she showed how technologies with roots in vision science can have far-reaching social implications.
“In the field of robotics, there’s a huge amount for vision sciences, because the ways we’re thinking about robotics at the moment is through a humanized experience. We want a seeing machine—computer vision,” she explained. “At the base of so many products like doorbell cameras is computer vision, and it would be the science that you all do that’s drawn upon.”
Ms. Nkonde used the doorbell camera example to highlight the privacy and surveillance concerns that emerge when public (or even supposedly private) data is shared with law enforcement, a practice recently ruled unconstitutional. The risks when such data is harvested by malicious actors are unprecedented, and they are amplified by AI systems purpose-built to analyze this kind of data at scale.
READ MORE: What We Need to Know about AI and Big Data Analytics in Ophthalmology
There is a workaround, however, and it speaks volumes about how the people building this technology feel about it. “I don’t have this kind of technology for privacy reasons. So many people who work in tech are low tech like this. Now that’s a fun fact,” she mused.
A call for collaboration
In the end, the solution to the social, ethical and environmental concerns Ms. Nkonde raised during her time at the podium is one that should be familiar to a room full of brilliant minds and visionary researchers: stop, collaborate and think.
“There is a rush to integrate AI into everything all the time. And that could be a very exciting world, but it could also be potentially a less safe world,” Ms. Nkonde said. “It’s not just where the chatbots can go wrong, but where are they best deployed? Where do we think that they’re going to move our science or move our clinical practice forward? And how do we balance that against these larger issues?”
Ms. Nkonde’s call to action homed in on collaboration to make AI work for all. “We have to collaborate. We have to collaborate with each other within borders. We have to collaborate with each other outside borders,” she urged.
Equally important is interdisciplinary collaboration. “Beyond collaborating within the eye science field, we all need to collaborate with people outside of traditional fields—people like me, a social scientist, and others. We don’t get to social-technical solutions without reaching across fields.”
She also encouraged greater participation in policy discussions, an especially hot-button topic given the current political climate and cuts to science funding.
“The scientists are not in the room when we think about governing science, and they should be,” she said. Vision scientists have a unique contribution to make to these conversations about governing AI and beyond. “Particularly for clinicians in this community, the Hippocratic Oath creates an ethical framework by which we can develop these technologies that do not exist in other fields.”
As AI continues to transform ophthalmology and vision science, Ms. Nkonde’s framework offers an intriguing approach to ensuring these technologies fulfill their promise of improving human well-being while minimizing potential harm. And for the vision scientists gathered in Salt Lake City for ARVO 2025, it was a reminder that the most powerful innovations often emerge at the intersection of technical excellence, ethical foresight and difficult conversations.
READ MORE: Get daily ARVO 2025 updates from Utah on PIE and CAKE Magazine websites.
Editor’s Note: Reporting for this story took place during the annual meeting of The Association for Research in Vision and Ophthalmology (ARVO 2025) being held from 4-8 May in Salt Lake City, Utah, United States.