IDR | The limits of AI in social change

September 25, 2025
Article

– Gautam John, CEO, Rohini Nilekani Philanthropies

More actors—from grantmaking to service delivery—are exploring the use of AI. However, the excitement around scale and efficiency often overshadows a critical question: What does it mean to bring machine-generated abstraction into systems built on trust, context, and relationship?

In systems of social change, we grapple with an enduring tension: connection versus abstraction. Connection is slow, human, and relational. It thrives on trust, listening, and collaboration. Abstraction, on the other hand, simplifies complexity into patterns, insights, and models. It is fast, scalable, and efficient.

Both serve a purpose, but they pull in opposite directions. And now, with the rise of AI tools like large language models (LLMs), this tension has reached new heights. LLMs thrive on abstraction; they reduce human interaction into data points, surface patterns, and generate outputs.

While LLMs are not intelligent in the sense of reasoning or self-awareness, they can serve as tools that reframe, rephrase, and reorganise a person’s ideas in ways that feel expressive. This can enable creativity and reflection, but let’s be clear: It’s not agency. The tool reshapes inputs but does not make meaning.

In market-based systems, where efficiency is paramount, this might work. But in social systems, where relationships, context, and trust are everything, abstraction risks losing what makes systems real and resilient.

This essay is a case for vigilant embrace. It asks how we can keep tools in service to relationship, not the other way round. It draws from our country’s experience of the self-help group (SHG) movement and its microfinance offshoots, tests it against the new frontier of LLMs in the social sector, and distils a few design rules for keeping the work human in an age of speed.

Connection as infrastructure

Decades ago, India’s SHG movement reframed finance as a relationship first, and a product second. Groups formed through affinity; members saved together; rules emerged from context; repayment schedules matched rhythms of life and livelihood; and trust was the collateral. Over time, SHG–bank linkage became a way to bring formal finance into places where formal institutions had no legitimacy of their own. It only worked because process mattered.

As Aloysius Prakash Fernandez (long‑time leader in the SHG movement with MYRADA and a key architect of its practice) has argued, SHGs built economies of connection. The time it took to form an SHG was not friction to be eliminated; the months of meetings, savings discipline, conflict resolution, and learning to keep books and hold each other accountable were the point. That slow work created legitimacy and resilience, so that when crisis struck, the relational fabric held.

Then came the turn. As microfinance commercialised, much of the field shifted from SHG thinking to microfinance institution (MFI) thinking—from affinity to acquisition, from place to product, from presence to process compliance. Loans became standardised, repayment cycles rigid, and growth a KPI. Speed, greed, and standardisation (to borrow Aloysius’s pithy phrasing) took what was relational and made it transactional.

The results were predictable. Repayment rates looked spectacular—until they didn’t. In many places, risks were accumulating: multiple lending without visibility on household cash flows, incentives that pushed volume over suitability, and the slow erosion of trust with lenders treating people as portfolios rather than participants. Products scaled, but belonging did not. The social infrastructure that had once underwritten financial inclusion was being displaced by numbers that looked like progress.

It is tempting to narrate this simply as a story of ‘bad actors’, but that misses the deeper point. Even well‑meaning institutions slide here because their structures privilege the measurable: gross loan portfolio, on‑time repayment, and cost to serve. The things that make SHGs work—mutuality, ownership, repair—resist instrumentation, and become, quite literally, less valuable.

If this sounds familiar to those working at the intersection of LLMs and social systems, it’s because we’re watching the same film again.

The question, then, is this: Where, if at all, do LLMs belong in the work of social change? And what can we learn from the SHG/MFI shift?

LLMs and the mechanistic view of wisdom

There are now many LLM-based tools designed to abstract and synthesise insights from human interactions, promising to amplify collective wisdom. In social change systems, where resources are stretched and problems are vast, this promise is tempting and does have some strengths.

  • They organise and systematise human insights into building blocks.
  • They surface diverse perspectives, tracing inputs back to their sources to ensure inclusion and accountability.
  • They accelerate decisions, offering actionable outputs at scale.

But these strengths are also their greatest weaknesses: they turn messy, situated conversations into neat patterns, abstracting away the human process of sense-making. This comes at a cost.

  1. Loud voices and flattened complexity: They risk over-representing frequent or louder perspectives while erasing nuance, dissent, and marginal views.
  2. Loss of relational insight: Wisdom doesn’t arise from patterns alone. It comes from the trust, tension, and emotional connection born of human interaction.
  3. Hollow consensus: Outputs that bypass relational work may appear actionable, but they lack the trust and shared ownership that give decisions their power.

The result? Systems that look efficient but feel hollow because tools, frameworks, and processes sever the relational ties that make systems real.

Recent empirical evidence seems to confirm what we sense intuitively about these limits. When researchers systematically tested LLM reasoning capabilities through controlled puzzles, they discovered something profound: As problems grow more complex, these models don’t just struggle but collapse entirely. Even more telling, when complexity increases, they begin to reduce their effort, as if giving up. They find simple solutions but then overthink them, exploring wrong paths.

Perhaps this is a window into the fundamental nature of these systems. They excel at pattern matching within familiar territories but cannot genuinely reason through novel complexity. And social change? It lives entirely in that space of the new and the complex, where contexts shift, where culture matters, where every community brings unprecedented challenges. If these models collapse when moving discs between pegs, how can we trust them with the infinitely more complex work of moving hearts, minds, and systems?

Apply the narrow versus wide lens

To navigate this challenge, the tension between connection and abstraction must be examined through another dimension: narrow versus wide. While connection and abstraction often feel like irreconcilable opposites, the narrow–wide lens helps bridge this gap by revealing how different kinds of tools can play meaningful roles in social change.

  • Narrow tools are specific and targeted, solving well-bounded problems.
  • Wide tools are generalised and scalable, seeking to address large systems.

Combining these two axes in a 2×2 framework gives us four distinct spaces where LLMs can, or cannot, play a meaningful role.

1. Narrow connection (Relational amplifiers)

  • What it is: Tools that deepen human relationships by enhancing context-specific, targeted work.
  • Example: A frontline caseworker uses an LLM to synthesise notes across multiple user visits in order to personalise their follow-ups. The LLM helps amplify memory and insight, but the relationship remains human.
  • Why it works: These tools augment human connection by surfacing insights without replacing relational work. They stay rooted in the specific, bounded context of their application.
  • Key use case: Tools for case management in social services. For instance, LLMs help social workers tailor interventions to individual users based on their unique needs and histories.
  • Key question: Does this tool augment connection, or does it replace it?

2. Wide connection (Relational ecosystems)

  • What it is: Tools that map and visualise relationships across broader ecosystems, enabling collaboration without eroding the human work of trust-building.
  • Example: Stakeholder mapping tools that reveal community networks and power dynamics.
  • Why it works: Wide connection tools respect the complexity of human systems, helping actors navigate and strengthen relationships without reducing them to transactions.
  • Key use case: Network mapping for advocacy coalitions. LLMs can surface insights about overlapping efforts, potential collaborators, or areas of conflict, but the work of building those connections remains human.
  • Key question: Does this tool illuminate relationships, or does it flatten them into transactions?

3. Narrow abstraction (Efficiency tools)

  • What it is: Tools that automate repetitive, bounded tasks, freeing up time for relational or contextual work.
  • Example: A grant officer uses an LLM to scan 100 applications for missing documentation or budget inconsistencies and flags files for review, but leaves decisions to humans.
  • Why it works: Narrow abstraction tools stay within well-defined parameters, ensuring that the abstraction doesn’t undermine human judgement or erode trust.
  • Key use case: Administrative automation in nonprofits. AI can handle routine data entry or flag missing information in grant proposals, allowing staff to focus on strategic decisions and relationships.
  • Key question: Has the process of abstraction removed necessary details that deserve human consideration?

4. Wide abstraction (Context flatteners)

  • What it is: Broad, generalised tools that prioritise scale and efficiency, but risk erasing context and relationships.
  • Example: A philanthropic CRM tool employs LLMs to rank grantees on ‘impact potential’ using prior grant reports. The rankings reward well-written, funder-aligned language rather than contextually important work.
  • Why it fails: Wide abstraction tools produce outputs that are disconnected from the lived realities of the people and systems they aim to serve. They often impose generic solutions that lack local resonance or trust.
  • Key risk: Policy recommendations generated by LLMs that ignore cultural nuance, power dynamics, or local histories.
  • Key question: Does this tool flatten complexity, producing solutions no one truly owns?

Wide abstraction tools fail social systems because social systems are built on trust, context, and relationships. Change doesn’t emerge from patterns or averages; it emerges from the slow, messy, human work of showing up, listening, and building together.

This requires moral discernment, cultural fluency, and the ability to hold space for uncertainty. Even the most sophisticated tools are not capable of these things. A tool cannot sense the difference between a pause of resistance and a pause of reflection. It cannot understand silence or the weight behind a hesitant request.

LLMs can play a role in social change, but they must stay narrow, supportive, and grounded in connection. They can amplify relationships (narrow connection), reveal patterns in systems (wide connection), or automate tasks that don’t require human judgement (narrow abstraction). But they cannot replace the relational processes that make systems real.

Designing for a human age

The promise of LLMs is seductive. It offers speed, efficiency, and a sense of control—qualities we crave in complex, uncertain systems. But if we think of connection as the foundational infrastructure and abstraction as a tool, how do we build (and fund) accordingly?

Four clusters of practice follow from the analysis:

1. Placement and scope

  • Keep it narrow (bounded contexts) when automating.
  • Hold it wide and human when mapping relationships.
  • Avoid wide abstraction in relational domains (welfare, justice, health, community development). If you must use it, treat outputs as hints, never decisions.
  • Assume drift; design for it.

2. Process and ownership

  • Process matters. If a ‘consensus’ tool removes dissent and dialogue, it is producing hollow agreement.
  • Ownership signals reality. If a decision is not of the group but about it, expect distance and eventual resistance.
  • Messiness test. Did we stay in the mess to listen, disagree, compromise? If not, the outcome may travel poorly. Consensus that bypasses repair will not hold.

3. Measurement and accountability

  • Measure what you can while protecting what you can’t. Build explicit guard rails so that unmeasurable goods (trust, belonging, repair) are not crowded out.
  • Use AI where failure is acceptable. Drafting, summarising, data hygiene: yes. Decisions about dignity, safety, or entitlements: no.
  • Allow override without justification. People closest to the context must be free to resist machine outputs.
  • Capture moments of failure. Document not just technical bugs, but also when people forget how to act without the tool.

4. Funding and institutional practice

  • Finance the foundational layers. Budget for convening, accompaniment, group formation, and follow‑through, and not just transactions.
  • Reward stewardship, not throughput. Celebrate organisations that prune, pause, and repair, not just those that scale.
  • Create collision spaces. Funders should host containers for connection—open‑ended gatherings where practitioners make meaning together, not just report up.
  • Reframe accountability. Shift from counting outputs to honouring conditions: psychological safety, trust density, and role clarity across the network.

The work we do in the sector is the work of belonging, and it does not scale by flattening. It scales like a forest: root by root, mycelium by mycelium, canopy by canopy, alive and adaptive, held together by relationships we cannot always see and must never forget.
