#333 The AI Debate
Two perspectives on AI and India, and Policy Reasons for the Anaemic Box Office in India
India Policy Watch #1: The Coming P(AI)N
Insights on current policy issues in India
—RSJ
The US market has been roiled by fears of AI-led disruption in the past two weeks. Much of the commentary last year focused on massive upstream investments in LLMs, computing power, and data centres, which fuelled fears of an AI bubble.
Instead of the bubble bursting, the mania seems to have spread downstream on how AI could disrupt traditional data or information-centric industries. The traditional (or perhaps, last year’s) narrative was that creating enterprise-ready AI agents that can scale will be a time-consuming affair. AI companies will have to work with their clients to understand traditional industries, learn the workflow, protect sensitive information and then develop enterprise-wide agentic platforms that can automate a lot of it - so the thinking went.
Well, in the past month or so, the upstream AI-model builders like Anthropic (Cowork), Gemini (project Genie) and OpenAI in the US and Alibaba (Rynnbrain) and Bytedance (Seedance 2.0) unveiled models that are aimed at directly disrupting sectors and functions as diverse as software services, legal and compliance work, and manufacturing.
Now there are two ways of looking at it. One is how the markets have responded, with sector after sector selling off because of news of an AI upstart showing agentic capabilities that couldn’t have been imagined even a couple of quarters back (Altruist wealth advisory and Insurify being two good examples of this).
There is no method to this. The punters can’t keep going long on the usual Big Tech forever. So the next best option is to short traditional industries. That’s the trade playing out now. One day it is traditional IT services (SaaSpocalypse as it is being called), the next it is wealth and investment management, and then another day it is ratings agencies or even construction equipment companies. After a while, this too shall pass, and we will realise things aren’t as disruptive as they were thought to be.
The other way of looking at it is to actually take these developments at face value and test things for yourself. Forget all the opinion pieces about how electricity put candlemakers out of a job, but then created a whole new industry of entertainment, or that downstream AI application kind of work will create newer kinds of jobs that we can’t imagine now. Leave all that aside and just go out and use the latest plug-in from Claude Cowork or Code Assist.
You won’t need to be especially farsighted to see where these tools could eventually take you once you have used their current, already fairly impressive versions. Now that Claude Cowork has changed the game in how AI agents can directly interact with enterprise workflows and automate routine knowledge work without extensive training on native datasets, the question is how many similar generative AI apps will be launched over the next 12 months that come to be seen as category killers across the services space. Everybody will want to build a universal Cowork-like plug-in for the industry or function they understand.
There is a degree of scepticism among Indian policymakers and businesses about the speed and scale of the impact this will have on Indian IT and business process services, and on Indian corporates. I have spent a fair amount of time in the past month seeing firsthand the scale of impact already in organisations in the Indian subcontinent. These organisations aren’t in some kind of “proof of concept hell” where they have multiple little successful experiments that aren’t scaling up, as many would like to believe. For proof, look at the financials of the larger BFSI firms in the previous two quarters, where balance sheet growth is upwards of 10 per cent, but employee headcount is down and wage cost has remained flat (excluding the one-time impact of the labour code).
The differences between previous tech hype cycles and this one are evident from my interactions.
First, the nature of AI disruption is such that you don’t need an intermediary to set you up to use it. Almost every previous big idea (cloud, digital, virtualisation, et al) needed organisations to reach out to integrators or providers to help them get off the ground. As much as there is a belief among IT service providers that AI will create similar demand for them, I find this notion too optimistic. Most large organisations have given their engineering teams access to AI tools faster than the larger IT services firms have. So, in many cases, it is the client organisation that’s ahead of the service providers in thinking through the possibilities of using AI.
Second, more than in any other tech disruption in the past, AI tools have first targeted the individual user to demonstrate their capabilities. The CXO is now using a Copilot, seeing the productivity gains firsthand and then asking why this can’t be scaled up faster. This is quite unlike, say, a large ERP implementation, with two years of painful rollout during which end users see only demos and wireframes before they ever use the system for a transaction (by which time the world has moved on, and it’s time for an upgrade). This inversion, where individual users get comfortable with the tools and then look for enterprise use cases, is speeding up adoption and creating more demand for enterprise-wide plug-ins, to which the AI players are now responding.
Third, there is a notion that the cost of tokens, training staff and streamlining data architecture will mean large-scale AI adoption in organisations fails the business case test until the price of tokens falls significantly. That will take time, considering the huge upstream investments already made by AI players, whose rationale is being questioned. I think there is some merit in this argument, but only up to a point. Global organisations outsourced technology services because they didn’t have access to a large pool of engineers in their local markets, and any attempt at hiring large numbers would have raised their costs prohibitively. This ceding of control over key programs never sat well with them, but who could argue with the business case of outsourcing to India?
As the global delivery model matured, the clients realised they could set up their own centres in India and get the same economic benefit while retaining control. That’s what has led to the boom in GCCs in the past decade. AI coding tools now allow them to take the average engineer out of the equation almost entirely. Most early users of these tools have seen reductions of 30-40 per cent in human coding effort (if not more). That kind of saving, without having to cede control of IT development, is compelling enough already to take the high token cost in stride. It will only get easier and more compelling in the near future.
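The business case above can be made concrete with a toy break-even calculation. The 30 per cent effort reduction is from the paragraph above; every other number here (headcount, fully loaded engineer cost, per-seat AI spend) is an illustrative assumption, not a figure from any company:

```python
# Hypothetical break-even sketch for AI coding tools in a captive centre (GCC).
# All inputs except the 30% effort-reduction figure are illustrative assumptions.

def annual_net_saving(engineers, cost_per_engineer, effort_reduction, ai_cost_per_seat):
    """Net annual saving (USD) from deploying AI coding tools.

    engineers          -- headcount whose coding effort the tools augment
    cost_per_engineer  -- fully loaded annual cost per engineer
    effort_reduction   -- fraction of human coding effort removed (0.30 = 30%)
    ai_cost_per_seat   -- annual token/licence spend per engineer
    """
    gross_saving = engineers * cost_per_engineer * effort_reduction
    ai_spend = engineers * ai_cost_per_seat
    return gross_saving - ai_spend

# A 1,000-engineer centre at $40,000 per engineer, 30% effort reduction,
# and a generous $3,000 per seat in annual AI spend:
saving = annual_net_saving(1000, 40_000, 0.30, 3_000)
print(saving)  # 9000000.0 -- the case clears even at today's token prices
```

Even with a deliberately high per-seat AI cost, the saving dominates, which is why high token prices alone are unlikely to stall adoption.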
We are in the first stage of disruption right now. Most large IT services players are walking the fine line between protecting their existing revenue lines (by resisting AI as much as they can) and pitching for new, AI-enabled lines of work. This explains the pockets of optimism we see among select players based on a few wins.
The mid-sized IT services firms that don’t have as much existing revenue to protect are going out and cannibalising the revenue streams of larger players by pitching AI-assisted programs for the same work to those clients. This explains the relatively faster growth seen by these players. Soon, smaller players and startups will start eating the lunch of these mid-sized players, too.
The past four quarters have made it clear that whichever way this plays out, the net headcount of the IT services sector will only come down in the medium term. This, by itself, is a significant headwind for the Indian economy for the next couple of years. The second stage of disruption is already upon us. AI firms will only speed up in coming up with enterprise-enabled tools, which will make adoption easier for large service organisations or in the horizontal functions (finance, legal, professional services) of organisations. This will be significantly margin accretive for these firms while reducing their headcount. There is possibly a third stage, filled with autonomous agents, AI-reimagined workflows and the like, which I think we can leave aside for the moment.
What’s already out there and getting adopted is disruptive or transformative (take your pick) for the economy. This impact (negative in the short term) is not modelled into any forecast for the Indian economy. I sense that by the end of this calendar year, we will be forced to contend with it.
Another Perspective: Code as Handicraft
— Pranay Kotasthane
This was the week when the penny dropped for me after using Claude Code for some pilot projects. Going beyond the now-ubiquitous chatbox, LLMs and their many wrappers are becoming amazing tools. One still must know how to code to guide these systems effectively, but it is no longer a gating condition. Anyone can quickly spin up a hobby website, an applet, or a project.
Would this, then, be the end of coding? Here’s how I think of it.
As machine learning becomes more capable, “hand-crafted” code could become rarer and command a premium. Coding might quite literally join the ranks of other handicrafts at the higher end. But what about the low end? Everyone would be able to code half-decent stuff with basic knowledge. This is a major productivity gain. The flip side will be what Amit Varma calls the “Cavendish Banana” problem—everyone’s code will start looking similar, and the variance and novelty might reduce.
Let me explain this AI boon with another analogy from public finance. When one level of government gives grants to another level, there can be two types of effects. The receiving government could substitute its own spending in that domain with the money it receives (why pay attention if someone else is paying for the work to be done). On the other hand, grants can stimulate further spending by a recipient government buoyed by the additional financial support.
Whether grants have a substitution or a stimulation effect depends on the grant design and conditionalities. Similarly, today’s LLM tools have both substitution and stimulation effects from an individual’s perspective. We can use them to substitute for our own intelligence by relying exclusively on the summaries of books, histories, and ideas that these tools excel at. By letting go of the process of forming connections across disciplines, which happens non-linearly over time through a diversified knowledge base, we will lose something vital. This effect again has the Cavendish Banana problem. At the lower end, all of us can know something about everything—I can give a full presentation on quantum computing (of which I understand almost nothing) tomorrow. But at the higher end, those who do not use LLMs to replace their thinking will command a premium.
On the other hand, these tools can stimulate our intelligence by allowing us to push the boundaries of what we can do on our own. For example, a non-coder can easily create their own apps and websites. Sure, you won’t be a good coder this way, but you will be able to add to the repertoire of things you can make. As a knowledge worker, the most valuable skills are asking the right questions and planning how to approach the answers, rather than going down the rabbit hole of implementation.
The challenge for us would be to balance these stimulating and substituting effects. Ideally, we would use these tools to stimulate outcomes in areas we enjoy working on and to substitute our efforts in areas we don’t like. But these tools will tempt us to substitute our cognition and learning in all domains. Figuring this balance will be a personal journey.
On that note, I have also noticed that some very smart people are caught in an ethical dilemma: should I use these tools for unserious tinkering, given their energy-intensive footprint and the resulting adverse environmental impact? This is a fair concern—the current implementation is indeed energy-hungry. But we needn’t look at this from an Ehrlichian angle of scarcity. There is a lot of research going into alternative compute architectures, cleaner energy sources, and better models. Just as fossil-fuel-powered vehicles replaced horses, thereby enabling previously unimaginable things, AI tools can have a stimulating effect on what we do today.
The real challenge is to remember the handicraft analogy. By refusing to use these tools, which will commoditise many skills, some people can still enjoy the prestige that “hand-crafted” clothes makers do today. But that prestige will be reserved for a small number of people who demonstrate excellence and long-term commitment to their craft.
Now, back to the macro questions that RSJ has written about. I have a more optimistic take to offer from an Indian perspective. Some SaaS companies might indeed feel the heat (hail creative destruction), but SaaS isn’t going away. Deploying production-ready enterprise AI will still need integrators, and that’s where SaaS comes in. Yes, the labour arbitrage model, which counts billable work hours, might well take a hit. But India’s software services industry has weathered many changes in the past and plays this game at a global level without any protectionism. Even if some lagging companies fall away, there is enough Indian talent to create new AI SaaS models relevant to the smaller, nimbler customers of tomorrow.
The point about the client organisation being ahead of the service providers in thinking through the possibilities of using AI is quite interesting. It does feel that companies might well rely more on in-house teams that co-team with AI rather than hand over wholesale implementation to service providers. But here again, staying ahead of their customers was never the value proposition of India’s software services firms; it was high-fidelity implementation. SaaS contractors are often embedded in bigger employee teams at client locations. I don’t think that is going away anytime soon.
For more, check out the latest Puliyabaazi on this very theme.
India Policy Watch #2: Filmy Anomaly
Insights on current policy issues in India
—Pranay Kotasthane
Earlier this week, I was intrigued to read this Hindustan Times report, quoting the film actor Aamir Khan, who rues the lack of film screens in India. This shortage results in smaller box office collections compared with other major markets like the US and China. Whereas India’s many film industries produce more than twice as many films as Hollywood, their total collections are less than 20 per cent of the American market.
Now, comparing with a country 30 times richer than India is somewhat unfair. The comparison with China is more instructive. Chinese industries are renowned worldwide for manufacturing most of the things in our lives—except movies. Perhaps hobbled by excessive self-censorship, well-known Chinese films tend to be set either in the distant past or to focus on super-safe contemporary themes. And yet China has roughly 90,000 movie screens while India has about 9,000. Chinese box office collections dwarf India’s, even after accounting for differences in living costs.
The contrast becomes starker when you examine trajectories. Between 2010 and 2016, China added an average of 16 screens per day, bringing the total to over 40,000. During the same period, India’s screen count declined from around 10,000 to 8,500.
To a large extent, this difference can be attributed to income levels. But as is often the case, there is a public policy story here too.
A majority of India’s screens are single-screen theatres. They comprise more than 60 per cent of the total screens and over 80 per cent of the total seats. They were struggling even before the pandemic, but COVID-19 broke their back entirely. A 2018 Deloitte report, Screen Density – Key to Unlocking the Hidden Box Office Potential, explains the origins of the problem:
“The film exhibition industry in India is mainly comprised of single screen and multiplexes. Most single screen properties in India are more than 50 years old and run by the second or third generation entrepreneurs. These properties were built in the pre-digital era when the analogue cinematography equipment and the physical reels posed exorbitant costs, upwards of INR 50,000. These together with the distribution costs necessitated a huge audience in order to be profitable per show. The result was that theatres (single screens) were built with huge capacities – 500 to 1,000 seats per screen. Such properties also required huge catchment area to drive footfalls to such a huge capacity theatre. However, with the advent of digital cinema, the print costs reduced to almost one fifth making it profitable to have smaller theatres. As a result, most of the theatres that have come up in the last decade are small (up to 250 seats per screen) theatres. The reduced distribution costs, growing supply of content and appetite of the Indian consumer have also allowed them to have more than one screen per property.” [Deloitte, 2018]
Technology changed, but the old theatres remained stranded assets from a bygone era. You have likely seen them in your city: dilapidated, forlorn single-screens that have been closed down. Why don’t their owners simply sell the land and move on?
Because they can’t. Here is the Deloitte report again:
On one hand, in a bid to prevent the theatres from closing down, certain states such as Maharashtra have introduced exit regulations restricting single screen properties from redeveloping into commercial properties. To redevelop a cinema theatre with a capacity of 1,000 seats, it is mandatory for exhibitors to build a smaller cinema theatre of 330 seats at the same place prior to re-development.
On the other hand, the single screen theatres fail to qualify the infrastructure requirements for multiplexes such as parking space. Hence it is difficult to convert most of these single screen theatres into multiplexes. Renewal of licenses for single screen theatres is also challenging as they are expected to comply with the requirements for modern theatre properties.
These theatres can’t close, can’t convert, and can’t compete. They simply decay and become zombie properties on valuable urban land, trapped by regulations meant to protect them.
Other self-destructive policies play supporting roles. Several states impose price caps on movie tickets, compressing margins and reducing incentives for new investment. Linguistic protectionism in many states means restrictions on how many screens can show films in other languages. And the standard Indian malaise of regulatory delays means multiplex chains routinely have to wait two years or more for operating licenses after construction is complete.
The cumulative effect is an industry struggling to build the infrastructure it needs. India produces more films than any country on earth, but has fewer screens per capita than almost any major economy. Cinema is one domain where India should be flexing its muscles. Instead, we have chosen to tie our own hands.
HomeWork
Reading and listening recommendations on public policy matters
[Article] Nitin Pai makes the case that there are multiple paths to AI power. Money quote:
“Yet much of the global debate is framed in narrow terms. It focuses obsessively on frontier innovation: who builds the biggest models, who owns the most advanced chips, who dominates the performance benchmarks. These questions matter, but they do not exhaust the sources of technological power. Power also flows from being an agile adopter, an efficient diffuser and a competitive enabler of adoption. You do not need to invent engines to run a great airline. Over the long term, especially for a general-purpose technology like AI, adoption can matter as much as invention.” [Nitin Pai]
[Paper] This Takshashila discussion document proposes ideas for plurilateral blocs that India can co-construct to manage the world disorder.
[Book] Richard D Brown’s The Strength of a People: The Idea of an Informed Citizenry in America, 1650-1870 has several insights to offer on today’s debates about democracy, civil society, and free speech.


I guess because IT firms typically bill by time and material, there is a perverse disincentive to adopting AI in their day-to-day work, since the time savings need to be passed on to the customers!
And so I guess they use AI only in their AI practices? As an outsider, this sounds weird to me!
Like Ravi Shastri says, something’s got to give.
Strong piece. The most important shift you highlight is the inversion of adoption: AI winning at the individual user level first and then forcing enterprise scale-up. That is structurally different from cloud/ERP cycles and does compress timelines.
That said, two constraints may slow the “P(AI)N” curve in India more than markets currently assume:
• Wage arbitrage still matters: AI must beat already-low offshore costs, not US salaries.
• Enterprise reality (data hygiene, auditability, exception handling) remains the hidden friction layer.
Near term, this looks more like headcount deceleration plus margin expansion than outright disruption. Medium term is where the real pressure builds, especially in commoditized coding, testing, and L1/L2 work.
The signal to watch is simple:
Revenue growth vs. net headcount over the next 4 to 6 quarters.
If that gap keeps widening, the thesis moves from narrative to inevitability.