#218 TechTalk
Regulating AI, China's export controls on Gallium and Germanium, and India's Electronics Manufacturing Needs a Helpful Trade Policy.
Course Advertisement: Admissions for the Sept 2023 cohort of Takshashila’s Graduate Certificate in Public Policy programme are now open! Apply soon for a 10% early bird scholarship. Visit this link to apply.
World Policy Watch: Regulating AI
Insights on global issues relevant to India
— RSJ
How the State should regulate artificial intelligence (AI) has become an important policy question across the world over the past two months. The AI boom that followed OpenAI’s decision to make its large language model (LLM) based chatbot, ChatGPT, available to any end-user has led to intense debate on how one should think of the human future as these tools become more sophisticated. One camp believes we need to control the speed at which we develop more powerful AI systems because we don’t fully appreciate the risk of such systems going out of control or being used by bad actors to hurt humanity. Those seeking this kind of pause aren’t Luddites, really. They include some of the biggest names in AI research, like Yoshua Bengio and Stuart Russell, and many pioneers of the field working in large Silicon Valley companies or in university labs. In the other camp are the proponents of AI who believe in its transformative power to improve human lives and who suggest that the risks of AI can be managed without stifling its progress. To them, we seem to have read or watched too much sci-fi to imagine that a ‘Terminator 3: Rise of the Machines’ kind of scenario is possible in reality.
Either way, governments have become interested in regulating AI. In May, a US Senate subcommittee on privacy, technology, and the law held a hearing on the potential and risks of AI featuring Sam Altman (chief executive of OpenAI), Christina Montgomery (chief privacy and trust officer, IBM), and Professor Gary Marcus. Altman made headlines with his suggestion that there should be government regulation of AI, with some kind of licensing requirement for any player building foundational AI models. This was sweet music to Senate members who have seen the downside risks of unregulated social media platforms over the past decade. Altman seemed to suggest that the regulations should focus on the safety requirements of the AI models, with some kind of auditing of the model, rather than get into the details of the technology itself. In June, Senate majority leader Chuck Schumer held three classified briefings for Senate members on artificial intelligence, including a specific session on American leadership in this space. The European Union is already a couple of steps ahead here. Last month, the European Parliament approved the EU AI Act to "build safeguards on the development and use of these technologies to ensure we have an innovation-friendly environment for these technologies such that society can benefit from them.” This is the first step towards a comprehensive AI law across the member states. The Act requires generative AI companies to submit their systems or foundational models to the government for review, with details of the data sources used, time spent on training the models, and performance benchmarks. Further, AI applications will be classified by the level of risk of their end-use into four categories. The highest-risk applications, those involving personal data, privacy, and surveillance use cases, are banned.
The Act has also placed heavy restrictions on any third party using an API to access LLMs or other open AI software for their own use case. Even this will need government certification of the underlying technology, the training models used, and other details, which will sit awkwardly with the almost anarchic nature of open-source software. Quite simply, this means providers of LLM-based services like ChatGPT will need to disclose a huge amount of information about their software for it to run in Europe. There is already the expected pushback from these companies.
As always, it will be good to approach the problem of regulating AI from first principles. The first question to ask is: what is the risk of unregulated AI that we want to control? Is alarmism of the kind where we think the future of humanity is at stake justified: that intelligent machines will learn faster, get better than their human masters, and take over the world one day? Or are there more real and immediate issues of AI deepening the malaise of misinformation, widening the divide between digital haves and have-nots, and allowing state and non-state actors to prey on personal data for commercial or political benefit? What is it that we should be worried about at this moment? I thought it would be useful to get the answer from a very high-profile name in AI who recently warned about the dangers of recent developments in this field. Geoffrey Hinton is widely regarded as the godfather of AI, whose pioneering work on neural networks and deep learning is the foundation for many current AI systems. A cognitive psychologist and a computer scientist, Dr. Hinton is a Turing Award winner who spent the last decade at Google (or Alphabet) helping software learn from huge amounts of digital text and pictures and building neural networks that could predict text as humans do. And while the machines got better at predicting, he could also sense where the arc of this progress was leading, which got him very worried. His interview with the NYT, where he outlines his concerns, is a good starting point to understand the problem AI regulation should address. The NYT article summarises it well:
His immediate concern is that the internet will be flooded with false photos, videos and text, and the average person will “not be able to know what is true anymore.” He is also worried that A.I. technologies will in time upend the job market. Today, chatbots like ChatGPT tend to complement human workers, but they could replace paralegals, personal assistants, translators and others who handle rote tasks. “It takes away the drudge work,” he said. “It might take away more than that.”
Down the road, he is worried that future versions of the technology pose a threat to humanity because they often learn unexpected behaviour from the vast amounts of data they analyse. This becomes an issue, he said, as individuals and companies allow A.I. systems not only to generate their own computer code but actually run that code on their own. And he fears a day when truly autonomous weapons — those killer robots — become reality.
“The idea that this stuff could actually get smarter than people — a few people believed that,” he said. “But most people thought it was way off. And I thought it was way off. I thought it was 30 to 50 years or even longer away. Obviously, I no longer think that.”
But Dr. Hinton believes that the race between Google and Microsoft and others will escalate into a global race that will not stop without some sort of global regulation. But that may be impossible, he said. Unlike with nuclear weapons, he said, there is no way of knowing whether companies or countries are working on the technology in secret. The best hope is for the world’s leading scientists to collaborate on ways of controlling the technology. “I don’t think they should scale this up more until they have understood whether they can control it,” he said.
Broadly, three problems: a) misinformation, because the bots will get better at fake news, morphing deep fakes, and propagating lies; b) job losses, because they will wipe out mediocre providers of services with mass-produced work that is better and faster; and c) machines getting smarter than humans.
The next question is: what should governments look to regulate? The foundation platforms, like LLMs, or their use cases? The problem with trying to regulate the foundation platforms is twofold. First, it is almost impossible, in an open-source world where changes are made unsupervised, data gets added at unprecedented speed, and algorithms are refined almost continuously, for any regulator to keep track of or stay abreast of changes in the platform. Even Altman, who seems to be advocating some kind of licensing regime, wants it done once, with an annual audit for renewing the licence, rather than continuous monitoring of the platform. My suspicion is that he is also championing the licence model to create artificial entry barriers for newer players entering his space. Being a kind of incumbent, he will probably get the licence first, and then it is in his interest to make things onerous for others. And that would be a bad idea, because what we have learned from our experience with social media platforms is that these tend to be natural monopolies, and it is in the interest of regulators to make it as easy as possible for newer players to enter and upend the incumbents. So, regulating the foundation platforms too tightly will be counterproductive. Second, keeping regulation light has the other problem that Hinton is worried about: who will make sure these platforms don’t go rogue in the kind of algorithms they are building?
I can think of two ways to manage this. One, take the commercial incentive out of the providers of the foundation platform. Think of generative AI or LLMs as disciplines, like physics or biology. Nobody really owns them. There are global bodies of scientists, networks of universities, or agencies that work with non-profit motives to continue the research in them. They could be funded by governments, trusts, and philanthropic organisations. Taking the commercial interest out of the foundation platform will ensure there is no quest for monetisation leading them down the same path as social media platforms. These platforms can then be lightly regulated on technical aspects, keeping the field open to newer players keen to disrupt.
Two, leave monetisation to the use cases of AI built by players who use APIs to access the foundation platform. These can be tightly regulated to ensure there are clear ‘fit and proper’ criteria for those using the APIs and building the use cases, the performance benchmarks they anticipate, and the way they will use user data and maintain privacy standards. There can be an international body that drafts guidelines for monitoring the foundation platforms, and all countries that are signatories to it will have access to these platforms. It is likely that different countries will want to build their own foundation platforms (China will, India might). If these platforms then want a footprint in other countries, for companies to use APIs to connect to them, they will have to certify their platforms to global standards. The companies working on use cases on top of these platforms can continue to be regulated by the framework specific to their domiciled countries. At a macro level, this approach keeps the foundation platforms open, lightly regulated, and free to innovate, while placing the onus of good behaviour on those building the use cases, who can be tightly regulated.
Dr. Hinton’s second concern, job losses because of AI, is not new. The market system has mediated the adoption of multiple breakthrough technologies over many centuries that have eventually been a net positive for humanity. There will be a loss of jobs as generative AI gets adopted widely, but it is almost certain that this will free up humanity to solve a different set of problems. In the worst case, job losses will be accompanied by a simultaneous increase in productivity, which will mean an ability to support, for some time, the surplus capacity that is freed up. I don’t think this is a concern that needs regulation to be allayed.
Lastly, the scenario of singularity, where machines take over our world and become our overlords, is a bit of a science fiction fantasy at the moment. Generative AI will become very good at what it does and produce work of tremendous originality. But the leap from there to turning against us is quite big. To build regulations today for a dystopian future of that kind will stifle the tremendous potential of AI to help solve the many problems that beset us today. We should be in no hurry to pause today because of these fears.
Matsyanyaaya: China’s Counter to High-tech Export Controls
Big fish eating small fish = Foreign Policy in action
— Pranay Kotasthane
This week, China imposed export controls on two elements, Gallium and Germanium. This came two days after Japan’s export controls on semiconductor manufacturing equipment came into force and three days before the US Treasury Secretary’s upcoming visit to China.
Both Gallium and Germanium are used to make semiconductors for specific applications in critical areas such as space and defence. Chinese companies—as is the case with many materials—hold a dominant position in extracting these elements from bauxite and zinc ores, respectively.
By July 8, Gallium prices had jumped 27% to reach $326/kg, while global Germanium prices rose by only 1.9%. This has led many analysts to do two things: first, revisit the periodic table in their browser tabs; and second, conclude that these export controls are a reminder of China’s strength in the high-tech domain.
I disagree with the consensus view. I’ll go out on a limb and say that these controls will not significantly impact the critical sectors in other countries. Moreover, these controls expose China's weaknesses rather than strengths.
The US export controls on AI chips differ qualitatively from China’s export controls on Gallium and Germanium. Cutting off the supply of the former is a much bigger deal. It means China has to innovate on its own, without significant international support, across various stages of the semiconductor supply chain. A favourable geopolitical climate was a key reason behind China’s tech catch-up over the last two decades. With that no longer the case, China will find it difficult to tackle simultaneous export controls on EDA tools, semiconductor manufacturing equipment, and design intellectual property all at once. While achieving a comparative advantage in any one of these stages is challenging enough, creating an entire parallel supply chain is nearly impossible.
Import substitution of each of these components (top-end EDA tools, AI training chips, ASML EUV machines, advanced packaging, etc.) needs huge investment and several "knowledge decades". Now, investment has never been a big problem for China, given the resources at the party-state’s disposal. However, having to innovate in all these stages simultaneously is a different challenge altogether. You can’t turn up tomorrow with another Big Chip Fund and import-substitute all these areas, even if you are the Chinese Communist Party.
On the other hand, China’s dominance in the production of Gallium and Germanium is impressive but not difficult to substitute if China opts for stricter export restrictions. China dominates the production of these upstream materials because it has been able to absorb the costs (labour, environmental, and so on) while other countries were willing to let their production decline.
With China imposing export controls on these items, prices of these commodities will rise. Consumers will be adversely affected globally. But from a strategic angle, countries will prioritise the production of these materials. The increase in prices will automatically incentivise diversification.
In fact, China’s own Gallium production prowess is the result of a similar phase ten years ago. By 2011, the smartphone boom had skyrocketed Gallium prices to $1,000/kg. More suppliers, including in China, quickly came up to fill the gap. Today, Gallium prices hover around $300/kg. The same story played out with Ukrainian Neon and Russian Palladium once the war started: prices rose, and alternative supplies came up in other countries.
China does not control any closely guarded know-how that prevents other countries from extracting these elements themselves. I expect other countries to develop alternative production sites and techniques soon. And the fact that China chose to impose controls on upstream elements rather than knowledge products shows that it has a weak hand.
P.S.: Read these two primers on Gallium and Germanium before you make up your mind.
India Policy Watch #1: Trade as an Aid for Domestic Manufacturing
Insights on issues relevant to India
— Pranay Kotasthane
I came across an update in Business Standard earlier this week:
A report released by the Indian Cellular and Electronics Association (ICEA) and Mobile and Electronics Devices Export Promotion Council on Thursday said the increase in import tariffs on components to make mobile devices between 2020 and 2023 had led to an escalation in the cost of materials by 5.59 per cent and the cost by 3.6 per cent.
It said the government had provided a financial incentive scheme ranging from 4 per cent to 6 per cent on a sliding scale for five years to mobile device manufacturers, and that was “being supported by indirect revenue from increased indirect taxes from the same sector, thereby increasing the costs for the same”.
Readers might relate to this argument. We have written earlier on many occasions about how the money paid through industrial policy instruments like PLIs goes back to the government on account of rising import tariffs. The high tariffs mean that Indian firms have less money to spend on building R&D capabilities and increasing export competitiveness.
So a big gaping hole in India’s push for manufacturing is its stance on the Information Technology Agreement of the WTO. Anupam Manur and I wrote about this issue in an article for MoneyControl, assessing India’s position on trade in electronics. What follows is an unedited version.
(This article, co-written with my colleague Anupam Manur, was first published in MoneyControl, on June 26)
India is caught up in a quarrel over tariffs on information and communication technology (ICT) goods. The EU filed a WTO dispute alleging that India has applied tariffs of up to 20 percent on certain ICT goods, such as mobile phones and accessories, in violation of the Information Technology Agreement-1 (ITA-1), to which India is a signatory.
Signatories to ITA-1 are obliged to levy a maximum tariff of zero percent on a set of pre-agreed ICT goods. India claims that the goods on which it levies a tariff are not covered under ITA-1. Besides the EU, Japan and Taiwan also filed similar cases against India. The WTO has ruled against India in all three disputes.
India approached the appellate body (which has been dysfunctional for some time now) in the dispute filed by Japan on May 25, while the EU is threatening to apply retaliatory tariffs.
Keeping this tariff squabble aside, there is a bigger question of whether India should join the ITA-2, which expands on the scope of the original agreement to include software and digital content, touch screens, GPS navigation equipment, etc.
The Indian government seems to have taken a firm stance against joining the expanded agreement, justifying it by highlighting India’s “most discouraging” experience with ITA-1, in which “the real gainer from that agreement has been China”. It further states that, given the government’s current manufacturing push, this is the “time for us to incubate our industry rather than expose it to undue pressures of competition.”
It is indeed the case that China’s electronics industry gained more from ITA-1 than India’s. In particular, Chinese companies came to dominate the market for cheap mobile phones and electronic products, and Indian domestic manufacturers could not compete, disappearing from the market by 2017.
This presumed causality underlies India’s adamance on ITA-1 and its reluctance with respect to ITA-2. However, the government is drawing the wrong lessons from prior experience, for three reasons.
First, the government is mistaking correlation for causation. It is wrong to attribute the dismal performance of the Indian electronic manufacturing sector to the ITA-1. The domestic phone makers’ business model was to rebrand imported phones from China, which would not be profitable for long.
In fact, it is the ITA that enabled companies to import electronic components cheaply, giving them an opportunity to move up the assembly value chain. These companies quickly captured nearly 30 percent of the smartphone market share.
However, the lack of investment in R&D or industrial innovation meant these companies had no competitive advantage. It was not so much a failure of an infant industry but the inability of incumbents to catch up technologically.
This story isn’t unique to electronic products. Even in segments to which the ITA doesn’t apply, such as machine tools, textiles, or toys, Indian companies couldn’t stand international competition. Surely, the problem lies in India’s large-scale manufacturing troubles, not the ITA. Fundamental – and tougher to resolve – factors such as the unfriendly business climate, archaic land and labour laws, complicated tax system, and lack of financing options underlie India’s past manufacturing underperformance.
Second, if India wants to be a manufacturing and exporting powerhouse, it cannot do so by protecting “champions” from “undue pressures” of competition, as we did before 1991. The electronics industry heavily relies on the frictionless flows of goods, capital, and human resources across borders.
Aatmanirbharta in the electronics sector is a myth, as competitive exports require cheap imports. By disregarding the ITA, India ensures that products manufactured here cannot compete in the international market. Even industrial policy and targeted subsidies are rendered ineffective by high import tariffs.
An analysis by the industry body of phone manufacturers shows that higher import tariffs have meant that a large portion of the money companies receive under PLI gets re-routed to pay these tariffs, ultimately making production cost-prohibitive. This is why companies such as Apple have been seeking duty exemptions for some electronic components.
Tariffs are also a major sticking point in the India-Taiwan Free Trade Agreement. A unilateral reduction in tariffs by following ITA is thus in India’s interest.
Third, it’s important to derive the right lessons from China’s success under the ITA-1 regime. China joined ITA-1 in a position of strength in 2003, with a fairly well-established electronics manufacturing sector. By then, China was the third-largest exporter of ITA products and the fourth-largest importer.
The ITA supercharged this advantage and propelled China to become a global leader. Twenty years later, India is in a similar position to China in 2003. It has managed to kickstart its electronics assembly industry, and the global winds are in India’s favour.
Moreover, by virtue of its manufacturing strength, China could enter the ITA-1 and negotiate favourable exemptions – a delayed phase-out period and exemptions on certain goods. It has also been able to negotiate some exemptions under the expanded ITA-2.
By categorically dismissing the ITA-2, India has lost its seat at the table and its negotiating rights. Given that India is both a large market and more deeply integrated with global value chains, with the manufacturing presence of Samsung and Apple, it could negotiate important waivers and co-shape the ITA-2 product list.
Finally, reducing tariffs helps in greater adoption of ICT products that play a key role in increasing productivity and efficiency not just in the electronics sector but also in all other sectors that use ICT as an input, such as the wider digital economy.
Cheaper ICT products also increase consumer choice and welfare and spur economic growth. A liberal import policy on electronic items will help India attract domestic and foreign investment in electronics manufacturing and embed Indian companies in global value chains.
A liberal, rules-based trading order is beneficial to India. Instead of a doctrinaire opposition to ITA, India must carefully weigh the costs and benefits of staying out of ITA-2.
HomeWork
Reading and listening recommendations on public policy matters
[Podcast] On Puliyabaazi, Manoj Kewalramani critiques China’s three major global-scale initiatives — GCI, GSI, and GDI.
[Book] If you want to understand Indian federalism, check out this classic, The Political Economy of Federalism in India, by M Govinda Rao and Nirvikar Singh.
I've been appreciating your writing for a while. I have to say though that handwaving AI x-risk as "science fiction fantasy" is a bit disappointing, especially as you quote someone like Hinton explicitly warning about it. Either something more substantial as to why x-risk isn't a worry or something less committal as to its possibility would have felt more on-point.
Hi Pranay,
Good article. However, some thoughts....
On "take out the commercial incentive from the providers of the foundation platform" - won't that just kill innovation? AI is one field where industry has been streets ahead of academia. Unlike physics or biology, the cost to innovate in AI has come down drastically. The good thing about AI is that the technology is fast getting open-sourced. In fact, the "We Have No Moat" memo, purportedly from someone in Google, alluded to the tech giants playing catch-up with open source.
My belief is that regulating AI has to be based on outcomes. Can regulation be applied equitably across domains? I see that as a challenge. The use cases of financial loss vs identity loss vs poor medical care have different outcomes and different impacts, and each has to be dealt with separately.
Also, it would be intriguing to figure out how the states intend to regulate all AI code. Assuming that all tech will use AI in some form or the other, that would require both state and compute capacity beyond what is possible now.