Elon Musk's $6 billion xAI fundraise shows he's serious about building an AI contender

Musk raised the money from a set of usual suspects who have backed his other projects, including his purchase of Twitter. These include Saudi Prince Alwaleed Bin Talal’s Kingdom Holdings, Valor Equity Partners, Vy Capital, Andreessen Horowitz, and Sequoia Capital. Perhaps these investors see xAI, which draws on live data from X and is closely integrated with the platform, as a way of reviving the business prospects of the struggling social network. Whatever their motivation, what’s more interesting is what the raise says about Musk’s intentions: The billionaire is really serious about building a major AI contender.

Until recently, there’s been some doubt about this. For one thing, xAI’s only product to date, the open-source chatbot Grok, seemed more like political posturing than a serious commercial effort. Grok was born out of Musk’s contention that other tech companies were imposing too many politically correct guardrails on their AI chatbots. In contrast, he pledged to create “anti-woke” AI and trained Grok on X posts and who knows what else. (This fit Musk’s libertarian streak and his apparently sincere belief that AI could become a dangerously powerful tool for anyone wishing to police not just speech but thought. The only way to guard against this, he has said, is for every person to have their own personal AI, free from any restrictions on the kinds of discourse in which it can engage.) The result was Grok, a chatbot that was marginally more provocative than its competitors—but which did not score at the top of the LLM leaderboards on other capabilities, such as reasoning, translation, and summarization. It was just an occasionally racist chatbot. This may suit Musk’s politics and brand, but it hardly makes him an AI pioneer.

Then there was the fact that Musk’s whole xAI project seemed largely driven by sour grapes. Musk seemed miffed that he wasn’t getting enough credit for having been instrumental in OpenAI’s founding—and bitter that he’d walked away from the startup in 2018 after losing a bid to gain more direct control over the lab. Musk has said he’s alarmed by the for-profit, product-focused direction in which cofounder Sam Altman has pushed OpenAI since Musk’s departure from OpenAI’s nonprofit board and by the fact that OpenAI, which was founded to prevent a single big tech company (Google at the time) from controlling superpowerful AI, has now become intimately bound to a single big tech company, Microsoft. Musk has also said he’s disturbed by the fact that OpenAI, once dedicated to being as transparent as possible about its research, now publishes few details about the AI models it creates. Musk has even sued OpenAI along with Altman and Greg Brockman, an OpenAI cofounder and president, claiming that they breached promises made to him when setting up what was initially a nonprofit AI research lab.

But Musk is vulnerable to accusations of hypocrisy, given that he seems interested in having xAI create products too. And with outside investors’ money at stake now, it’s likely that xAI’s efforts will also be serving commercial ends, such as helping to power new features for X, and perhaps Tesla, too. One can’t help feeling that what Musk really resents is not OpenAI’s commercial turn or its lack of openness, but simply Altman’s success, especially since most of it has come from decisions he made after Musk parted ways with OpenAI.

Now, using the massive chip on your shoulder as the foundation for a company is not an entirely unheard-of path to business success. But as a mission to attract the best and brightest, it might not be so compelling. (Google DeepMind: “Solve intelligence, and then use it to solve everything else.” OpenAI: “Build AGI for the benefit of all humanity.” xAI: “Help repair Elon’s bruised ego.” Where would you rather work?)

Shortly after announcing the new funding for xAI, Musk got into a spat on X with Yann LeCun, Meta’s chief scientist and Turing Award-winning “godfather of AI.” Musk had used the announcement to post a call for AI researchers to join xAI. LeCun pointed out Musk’s reputation for being an extremely difficult boss and his inconsistency in having signed the 2023 letter calling for a six-month pause in further AI development while now pushing xAI to create superpowerful AI models. He also noted that Musk had said xAI’s mission was to seek the truth, even as Musk himself endorsed conspiracy theories on X. Musk threw shade back at LeCun, implying he was simply doing the bidding of Meta CEO Mark Zuckerberg and that his days of doing cutting-edge AI research were behind him.

The spat got lots of attention. But it’s silly and misses an important point. All of the leading AI efforts are now closely linked to big financial interests—whether it is Microsoft, Google, Meta, Amazon, or X and Musk’s newfound funders. Can we really trust any of these companies to have humanity’s best interest at heart?

That’s the point former OpenAI nonprofit board members Helen Toner and Tasha McCauley made in an editorial they published over the weekend in The Economist. The two used last week’s Scarlett Johansson-OpenAI-voice controversy as a jumping-off point to say they remain convinced that the board had been in the right when it tried to fire Altman last November. They said the entire chain of events—which saw Altman reinstated as CEO and ultimately back on the board and, as the ScarJo incident shows, continuing to be “less than fully candid,” at least with the public—demonstrated that corporate governance structures and self-regulation were too weak to protect the public from AI risks. There was too much money at stake for any of the AI labs to ever put purpose ahead of profit, they argued. So what was needed was government regulation and oversight.

I agree. We desperately need a regulator with enough expertise and authority to look over the shoulder of these companies and ensure they aren’t building systems that pose extreme risks—whether that’s because they are too capable (supercharging cyberattacks, automating fraud, or making it easier to produce bioweapons, for instance) or not capable enough (providing dangerously inaccurate medical advice, for example). The new AI Safety Institutes in the U.S., and particularly the U.K., are a step in that direction, but they need more power than they currently have. If that means slowing down AI development slightly or making it more difficult to release AI models as open-source software, that is a price worth paying.

With that, here’s more AI news.

Jeremy Kahn
jeremy.kahn@fortune.com
@jeremyakahn

AI IN THE NEWS

OpenAI confirms it's training its next frontier AI model and announces new safety committee. The AI company announced in a blog post that it has begun training a successor to GPT-4, which is currently the most capable AI model on the market across many metrics. The company said the new model would bring “the next level of capabilities,” but did not specify what exactly those might be. It was also unclear whether this new model is GPT-5 or a different model that goes even further. Having been stung recently by the departure of several of its AI safety researchers, at least one of whom accused the company of prioritizing product launches over safety research, OpenAI also announced the formation of a new board-level Safety and Security Committee that will be responsible for evaluating and further developing the company’s safety and security processes, with initial recommendations expected in 90 days. The new committee will include board members Nicole Seligman, Adam D’Angelo, Bret Taylor, and CEO Sam Altman, along with several key AI researchers and scientists from the company.

OpenAI signals that maybe just getting to AGI is hard enough, without needing to worry about “superintelligence.” The company’s vice president of global affairs Anna Makanju gave an interview to the Financial Times in which she reaffirmed that OpenAI’s mission is to achieve artificial general intelligence, or AGI, which it has defined as AI as capable at cognitive tasks as humans. But she said, “I would not say our mission is to build superintelligence,” which she defined as AI that would be many times more intelligent than all human beings combined. Altman had told the paper back in November that he spent half his time researching how to build superintelligence. And the company had said a year ago that figuring out how to control superintelligence was so important that it was creating a whole team dedicated to researching that task and devoting 20% of its computing power to that goal. But in recent weeks, amid the departure of both co-leads of that team, the research group has been disbanded and its remaining researchers assigned to other teams. Meanwhile, I reported last week that OpenAI never provided the team with anything close to 20% of its computing power, and repeatedly turned down the team’s requests for access to computing resources.

Google AI Search feature plagued by inaccurate information. The internet and social media last week were chock-a-block with users posting the most laughable or disturbing instances of Google’s new generative AI-powered search feature, called AI Overviews, providing inaccurate information. In a few instances, such as misidentifying a toxic mushroom as an edible one, the errors could be dangerous. Google defended the product, saying that in the majority of cases the information it provided was “high-quality” and that some of the examples of inaccuracies came from “uncommon queries.” In other cases, the company said it suspected the answers had been doctored and that it had been unable to reproduce them. But it also said it was working to improve the product in cases where it had found inaccuracies. You can read more in this story in Quartz.

SoftBank plans to commit $9 billion annually to AI deals. That’s according to the Financial Times, which interviewed the Japanese tech conglomerate’s CFO Yoshimitsu Goto. The company’s founder Masayoshi Son believes AI will be one of the most transformative technologies in history and has been willing to spend big to back companies in the space, even as the group has cut back on investments in other areas.

EYE ON AI RESEARCH

How to get humans and AI software to work together to achieve a common goal. As we increasingly work with AI copilots in our professional lives and as these copilots are given more and more “agentic” properties (meaning they can take actions for us beyond just generating content), the challenge of how we can work together to achieve a common goal becomes more and more important. The most obvious way for a human and AI system to communicate is with natural language. And there have been some interesting efforts in recent years to couple a large language model with a planning and strategizing AI model. This is how Noam Brown, when he was at Meta’s Fundamental AI Research lab, developed Cicero, the AI system that could compete with top human players at the strategy game Diplomacy. But in that case, Cicero was competing against humans and not working alongside them. (Brown, who is now at OpenAI, is believed to be working on AI models that will combine language and planning abilities for realms beyond just strategy games.)

A group of researchers at the University of Texas at Austin and Carnegie Mellon have just published a paper on the non-peer-reviewed research repository arxiv.org showing a more general way of getting an AI system to coordinate with people. A bit like Cicero, it involves a large language model combined with a planning module. But in this case, the language model interprets the human’s written communication and uses it to select one of a fixed set of intentions, which the planning engine then reasons about. The researchers tested the system using a search-and-find maze game called Gnomes at Night and found that the AI was able to discern human intentions far faster with the help of language (which is a fairly intuitive finding). The reason to highlight this research, though, is simply to applaud researchers for looking specifically at ways humans and AI can complement one another, rather than always framing AI as a direct substitute for, and competitor to, humans. This human-machine teaming is going to be increasingly relevant to all of our lives over the next few years, so it is good that researchers are working on generalizable ways to create AI software that can be a partner to humans, allowing both to achieve a shared goal.
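To make the two-stage setup concrete, here is a minimal sketch, in Python, of the kind of pipeline described above: a language model classifies a teammate’s message into one of a small fixed set of intentions, and a simple planner then turns the chosen intention into moves. The intention labels, the prompt, the grid representation, and the function names are illustrative assumptions on my part, not details taken from the paper.

```python
# Minimal sketch (not the paper's actual code): a language model maps a teammate's
# message to one of a fixed set of intentions, and a planner acts on that intention.
# The intention labels and grid details below are illustrative assumptions.
from collections import deque

INTENTIONS = ["explore_left", "explore_right", "go_to_exit", "wait"]  # assumed label set

def infer_intention(message: str, llm_call) -> str:
    """Ask a language model (any text-in/text-out callable) to pick one known intention."""
    prompt = (
        "Classify the teammate's message into exactly one of these intentions: "
        f"{', '.join(INTENTIONS)}.\nMessage: {message!r}\nIntention:"
    )
    answer = llm_call(prompt).strip().lower()
    # Fall back to a safe default if the model returns something unexpected.
    return answer if answer in INTENTIONS else "wait"

def plan_path(grid, start, goal):
    """Breadth-first search over a 2D grid of 0 (free) / 1 (wall) cells."""
    rows, cols = len(grid), len(grid[0])
    queue, parents = deque([start]), {start: None}
    while queue:
        cell = queue.popleft()
        if cell == goal:
            path = []
            while cell is not None:
                path.append(cell)
                cell = parents[cell]
            return path[::-1]
        r, c = cell
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nxt
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0 and nxt not in parents:
                parents[nxt] = cell
                queue.append(nxt)
    return []  # no path found

def act_on_message(message, grid, agent_pos, targets, llm_call):
    """Glue step: the language model picks the intention, the planner produces the moves."""
    intention = infer_intention(message, llm_call)
    if intention == "wait":
        return []
    return plan_path(grid, agent_pos, targets[intention])
```

The appeal of this division of labor is that language only has to resolve which of a handful of intentions the human has in mind; everything downstream is handled by a conventional planner, which keeps the language model’s job small and easy to check.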

FORTUNE ON AI

AI skills that pay the bills: Tech-savvy knowledge workers looking at significant wage boosts —by Sheryl Estrada

Elon Musk is recruiting xAI staffers who are ‘without regard to popularity or political correctness’ a day after raising $6 billion from investors —by Christiaan Hetzner

You don’t have to be a programmer to cash in on artificial intelligence. AI skills in these non-tech professions come with massive wage increases —by Jason Ma

Top VC Kai-Fu Lee says his prediction that AI will displace 50% of jobs by 2027 is ‘uncannily accurate’ —by Jason Ma

AI CALENDAR

June 5: FedScoop’s FedTalks 2024 in Washington, D.C.

June 25-27: 2024 IEEE Conference on Artificial Intelligence in Singapore

July 15-17: Fortune Brainstorm Tech in Park City, Utah (register here)

July 30-31: Fortune Brainstorm AI Singapore (register here)

Aug. 12-14: Ai4 2024 in Las Vegas

BRAIN FOOD

What the discovery of a possible Chinese military dataset means. Researchers at the University of California at Berkeley stumbled upon an unusual dataset that someone had left on Roboflow, a site run by a U.S. company that allows people to host machine learning data and models. The dataset was titled Zhousidun, which means “Zeus’s Shield” in Chinese, and consisted of 608 overhead images of U.S. and NATO destroyers and frigates with bounding boxes drawn around their radar antennas and missile launch tubes. The dataset may have been mistakenly left by developers working on a Chinese military AI system—which seems likely—or it might simply be the work of ship-spotting enthusiasts, or even a deliberate attempt to seed a dataset the Chinese might use with erroneous data or adversarial examples (images subtly manipulated in ways that fool AI systems into misclassifying objects). Either way, it is a reminder that as much as we write about the battle over AI capabilities in the consumer and enterprise space, there is also an extremely high-stakes race taking place largely in the shadows over which country will be able to equip its military with the most cutting-edge AI. This arms race gets far less coverage than it should, and it’s likely to be at least as consequential as the one taking place in the commercial arena. You can read more about the dataset and what researchers have been able to determine about it in this paper on the research repository arxiv.org.
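For readers unfamiliar with the adversarial examples mentioned above, here is a minimal sketch of the classic fast gradient sign method, which nudges every pixel slightly in the direction that increases a classifier’s loss so that the image still looks unchanged to a person but is misclassified by the model. The choice of model and perturbation size here are illustrative assumptions; this is not code from, or connected to, the Zhousidun analysis.

```python
# Minimal fast-gradient-sign-method sketch using an off-the-shelf classifier.
# The model and epsilon are illustrative assumptions for demonstration only.
import torch
import torch.nn.functional as F
import torchvision.models as models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

def fgsm_perturb(image: torch.Tensor, true_label: int, epsilon: float = 0.03) -> torch.Tensor:
    """Return an adversarially perturbed copy of a 3xHxW image tensor scaled to [0, 1]."""
    image = image.clone().unsqueeze(0).requires_grad_(True)  # add batch dimension
    loss = F.cross_entropy(model(image), torch.tensor([true_label]))
    loss.backward()
    # Step each pixel by +/- epsilon in the direction that increases the loss.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).squeeze(0).detach()
```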

This is the online version of Eye on AI, Fortune's weekly newsletter on how AI is shaping the future of business. Sign up for free.
