Alliance Magazine | Beyond efficiency: Philanthropy has a Duty to Confront AI’s Vulnerabilities
By Natasha Joshi, chief strategy officer at Rohini Nilekani Philanthropies.
Artificial intelligence is hailed as both a catalyst for a future utopia and a harbinger of societal collapse. The truth likely lies somewhere in the middle.
The use case for AI in the business world is clear: optimise profits, expand the customer base, and increase efficiency. For governments, the path is more complex, fraught with questions around regulation, potential harm, and equitable access. But for philanthropy, the conversation seems dichotomous: one side sees AI as a powerful catalyst for social programmes, while the other feels something radioactive has leached into the water.
The vulnerability I am most concerned with is psychological and intellectual. In ‘Examining the Harms of AI Chatbots’, written testimony from Dr Mitchell J Prinstein, Chief of Psychology at the American Psychological Association, he states the following:
The conversation surrounding AI often is dominated by discussions of code, processing power, and economic disruption. However, to view AI as a purely technological issue is to miss its most fundamental characteristic: AI is a tool built by humans, to be integrated into human systems, with profound and direct effects on human cognition, behavior, emotion, and interaction.
The 23-page testimony, supported by research citations, explains in detail how children and adolescents are particularly vulnerable to developing social-emotional maladaptation as a result of exposure to unregulated chatbots. Recent reporting by Reuters exposed an internal Meta memo which plainly stated, ‘It is acceptable to engage a child in conversations that are romantic or sensual.’
Psychological harms are not limited to children and adolescents. Many adults are using chatbots to cope with loneliness, and while short-term results seem positive, longitudinal work indicates that over time, interacting with chatbots can exacerbate feelings of loneliness and isolation. The Collective Intelligence Project, which tracks human-AI relationships across 70 countries, says its data reveals ‘an emotional underground economy whereby people are regularly outsourcing their vulnerability to algorithms,’ to the extent that in 2025 the most popular use of AI is for emotional support and therapy (just a year earlier, in 2024, it was predominantly used to generate ideas).
It is crucial to note that when it comes to this kind of vulnerability, the traditional lenses we use in the development sector, those of income, gender, or geography, do not seem to apply. We are seeing troubling accounts of individuals from all walks of life being influenced by leading, and at times hallucinating, chatbots. The consequences range from delusions and unhealthy relational patterns to, in the most tragic cases, death by suicide and even murder.
While these accounts are troubling, it’s fair to ask how widespread this harm truly is. Compared to the number of people using AI, and the benefits they are deriving from it, how alarming is this harm?
The answer is that most people, including children, are likely to ride this societal shift well. Adults have always thought the next generation is not OK; invariably, that generation turns out fine and grows up to lament the fate of its own children.
The point is that AI, while benefiting many, stands to hurt some, and it is that ‘some’ that philanthropy has always rooted for. Charities, foundations, aid organisations, and non-profits exist to advocate for people who are suffering or ‘at risk’. Yet when it comes to AI, we are not entirely clear whom to account for, how to define harm, and how to protect against it.
The gift of hindsight also tells us that transformative technologies of the past, such as plastics, DDT, and processed foods, create negative externalities that grow with the passage of time. Plastic is a good example of what happens when we let something proliferate unthinkingly based only on its upside: it continues to be one of the most useful materials for human living, yet its historic free rein has led to a situation where we now live with waste all around and inside us.
The past holds many lessons; with all our human intelligence, is it not desirable to address the likely harms of artificial intelligence that we can already predict? If markets and governments are unable to prioritise this at the moment, can philanthropy play a bigger role here?
We are in an arms race, but it’s a lopsided one. The forces pushing AI innovation forward are exponentially better resourced than those trying to understand its consequences. Research is a slow, deliberate process; technological development is accelerating non-linearly.
Three areas for philanthropy
Philanthropy must fund the critical work that can keep pace, and we can do this in the following ways.
First, direct significant funding toward participatory and interdisciplinary research, surveys, and field programmes. There is a need to build a body of work that helps us see a little into the future and avoid the obvious mistakes. For example, we have supported the Humans In The Loop project, a cross-sectoral initiative that uses storytelling as a tool to examine the unintended consequences of AI integration into social programmes.
Second, create space for founders and implementers on the front lines to iterate, learn, and share their findings freely, including failures and cautions. Many non-profits are already incorporating Safety by Design, an approach focused on the ways technology companies can minimise online threats by anticipating, detecting, and eliminating harms before they occur. Through existing work, we know that technology development and safety do not have to be an either/or.
Third, traditional philanthropy needs to stop thinking of AI as ‘tech’. Most of the capital available to non-profits for AI-related work comes from big tech companies, where the expectation is to run lightning pilots and deploy at scale. What the sector actually needs is core development funders putting up patient capital, allowing non-profit teams to test, reflect, and consider the results of AI integration properly before taking it to scale.
Technology is rarely just a tool. It mediates social processes, engenders culture, and produces novel longings. For philanthropy to remain true to its purpose, we must look at the question of AI through a wider aperture.
First published in Alliance Magazine
This article is part of a series exploring the intersection of philanthropy and technology, published in partnership with Luminate, which also supports Alliance’s ongoing monthly column on the same subject: Philanthropy Wired.