Why tech giants want to strangle AI with bureaucracy

One of the joys of writing about business is that rare moment when you realize that conventions are changing right in front of you. It sends a chill down your spine. With vainglory, you begin to scribble down every detail of your surroundings, as if you were writing the first lines of a best-seller. It happened to your columnist recently in San Francisco, sitting in the pristine offices of Anthropic, a darling of the artificial-intelligence (AI) scene. When Jack Clark, one of Anthropic’s co-founders, drew an analogy between the Baruch Plan, a (failed) effort in 1946 to put the world’s atomic weapons under UN control, and the need for global coordination to prevent the proliferation of harmful AI, there was that old familiar tingle. When entrepreneurs compare their creations, even tangentially, to nuclear bombs, it feels like a turning point.

Since ChatGPT burst onto the scene late last year, there has been no shortage of angst over the existential risks posed by AI. But this is different. Listen to some of the pioneers in this field and you’ll see that they are less concerned about a dystopian future in which machines are smarter than humans, and more about the dangers lurking within the things they are making now. ChatGPT is an example of “generative” AI, which creates human-like content based on its analysis of text, images and sounds on the Internet. Sam Altman, CEO of OpenAI, the startup that created it, said at a Congressional hearing this month that regulatory intervention is critical to managing the risks of the increasingly powerful “large language models” (LLMs) behind the bots.

In the absence of rules, some of their counterparts in San Francisco say they have already set up backchannels with government officials in Washington, D.C., to discuss potential harms discovered by examining their chatbots. These range from toxic material, such as racism, to dangerous capabilities, such as child-grooming or bomb-making. Mustafa Suleyman, co-founder of Inflection AI (and a board member of The Economist’s parent company), plans in the coming weeks to offer generous rewards to hackers who can discover vulnerabilities in his company’s digital talking companion, Pi.

Such caution makes this nascent tech boom seem different from the past, at least on the surface. As usual, venture capital is pouring in. But unlike the “move fast and break things” approach of yesteryear, many startup pitches are now about safety first and foremost. The old Silicon Valley adage about regulation, that it is better to ask forgiveness than permission, no longer applies. Startups like OpenAI, Anthropic and Inflection are so keen to convey the idea that they won’t sacrifice safety just to make money that they have put in place corporate structures that limit profit maximization.

Another way this boom looks different is that the startups creating their proprietary LLMs are not out to overthrow the existing Big Tech hierarchy. In fact, they may help consolidate it. That is because their relationships with the tech giants leading the race for generative AI are symbiotic. OpenAI is linked to Microsoft, a large investor that uses the former’s technology to improve its software and search products. Alphabet’s Google has a sizable stake in Anthropic; on May 23, the startup announced its latest funding round of $450 million, which included further investment from the tech giant. Strengthening the business ties further, the young companies rely on Big Tech’s cloud-computing platforms to train their models on oceans of data, allowing chatbots to behave like human interlocutors.

Like startups, Microsoft and Google are keen to show that they are serious about security, even as they fiercely battle each other in the chatbot race. They also argue that new rules are needed and that international cooperation to supervise LLMs is essential. As Alphabet CEO Sundar Pichai said, “AI is too important not to regulate, and too important not to regulate well.”

Such proposals may be perfectly justified by the risks of disinformation, electoral manipulation, terrorism, employment disruption and other potential dangers that increasingly powerful AI models can generate. However, it is worth keeping in mind that regulation will also bring benefits to the tech giants. This is because it tends to reinforce existing market structures, creating costs that traditional operators find easier to bear and raising barriers to entry.

This is important. If Big Tech uses regulation to strengthen its position at the commanding heights of generative AI, there is a trade-off. The giants are more likely to deploy the technology to improve their existing products than to replace them entirely. They will seek to protect their core businesses (enterprise software in Microsoft’s case, search in Google’s). Rather than ushering in an era of Schumpeterian creative destruction, the boom may serve as a reminder that large incumbents currently control the innovation process, in what some call “creative accumulation”. The technology may end up being less revolutionary than it could be.

LLaMA on the loose

That outcome is not a foregone conclusion. One of the wild cards is open-source AI, which has proliferated since March, when LLaMA, the LLM developed by Meta, was leaked online. The rumor in Silicon Valley is that open-source developers are already capable of building generative AI models that are almost as good as existing proprietary ones, at a hundredth of the cost.

Anthropic’s Clark describes open-source AI as a “very worrying concept”. While it is a good way to accelerate innovation, it is also inherently difficult to control, whether in the hands of a hostile state or a 17-year-old ransomware maker. Such concerns will be weighed as the world’s regulators grapple with generative AI. Microsoft and Google (and, by extension, the startups they back) have much deeper pockets than open-source developers to handle whatever the regulators come up with. They also have more at stake in preserving the stability of the information-technology system that has made them titans. For once, the desire for safety and the desire for profit may be aligned.

© 2023, The Economist Newspaper Limited. All rights reserved.

From The Economist, published under license. Original content can be found at www.economist.com


Updated: October 3, 2023, 12:44 pm IST
