Read the casual emails between Elon Musk and Sam Altman that kicked off OpenAI
- Elon Musk amended his lawsuit against OpenAI and Sam Altman, adding Microsoft as a defendant.
- Musk included emails between himself and Altman from 2015 in which they discussed starting OpenAI.
- Musk said OpenAI and Microsoft have formed a "de facto merger."
Elon Musk unveiled several emails between himself and fellow billionaire Sam Altman this week as evidence in the Tesla CEO's lawsuit against Altman and OpenAI, shining a light on the casual beginnings of the now-$157 billion company.
On Thursday, Musk amended his suit against the AI startup and its CEO, adding Microsoft as an additional defendant and accusing the two companies of forming a "de facto merger" and engaging in anticompetitive practices in the AI sector.
A spokesperson for OpenAI did not immediately respond to a request for comment from Business Insider.
Musk helped found OpenAI in 2015 but has frequently criticized the company and its leaders in the years since. The SpaceX CEO left the AI startup in 2018.
In his amended complaint filed in federal court this week, Musk outlined the beginnings of OpenAI, including the initial emails Altman sent him proposing the idea in 2015.
Lawyers for Musk characterized Altman's initial outreach as "testing the waters," an attempt to convince the Tesla CEO to bring his funding and connections on board.
In early March 2015, the two tech leaders drafted an open letter to the US government highlighting the need for regulation in the safe creation of AI, the complaint says.
Altman sensed an "opportunity" following the letter, Musk's lawyers allege, leading him to email Musk the following:
Been thinking a lot about whether it's possible to stop humanity from developing AI. I think the answer is almost definitely not. If it's going to happen anyway, it seems like it would be good for someone other than Google to do it first. Any thoughts on whether it would be good for YC to start a Manhattan Project for AI? My sense is we could get many of the top ~50 to work on it, and we could structure it so that the tech belongs to the world via some sort of nonprofit but the people working on it get startup-like compensation if it works. Obviously we'd comply with/aggressively support all regulation. Sam
Musk responded:
Probably worth a conversation
A month later, Altman followed up on Musk's "noncommittal" response with a detailed proposal for a new AI lab, according to the court documents.
1.) The mission would be to create the first general AI and use it for individual empowerment—ie, the distributed version of the future that seems the safest. More generally, safety should be a first-class requirement.

2.) I think we'd ideally start with a group of 7-10 people, and plan to expand from there. We have a nice extra building in Mountain View they can have.

3.) I think for a governance structure, we should start with 5 people and I'd propose you, Bill Gates, Pierre Omidyar, Dustin Moskovitz, and me. The technology would be owned by the foundation and used "for the good of the world", and in cases where it's not obvious how that should be applied the 5 of us would decide. The researchers would have significant financial upside but it would be uncorrelated to what they build, which should eliminate some of the conflict (we'll pay them a competitive salary and give them YC equity for the upside). We'd have an ongoing conversation about what work should be open-sourced and what shouldn't. At some point we'd get someone to run the team, but he/she probably shouldn't be on the governance board.

4.) Will you be involved somehow in addition to just governance? I think that would be really helpful for getting work pointed in the right direction getting the best people to be part of it. Ideally you'd come by and talk to them about progress once a month or whatever. We generically call people involved in some limited way in YC "part-time partners" (we do that with Peter Thiel for example, though at this point he's very involved) but we could call it whatever you want. Even if you can't really spend time on it but can be publicly supportive, that would still probably be really helpful for recruiting.

5.) I think the right plan with the regulation letter is to wait for this to get going and then I can just release it with a message like "now that we are doing this, I've been thinking a lot about what sort of constraints the world needs for safefy." I'm happy to leave you off as a signatory.

Sam
Musk responded:
Agree on all
And thus, OpenAI was born.
Musk stepped down from OpenAI's board of directors in 2018. Years later, Semafor reported that Musk wanted to run the company on his own to beat Google. When his request was rejected, Musk pulled his funding and left, the outlet reported.
He first sued OpenAI and its cofounders in March, alleging the company had shirked its nonprofit mission by partnering with Microsoft, which has invested more than $13 billion in the company.
OpenAI at the time called the suit "incoherent" and "contradictory." Musk dropped the lawsuit but filed a new complaint against the company in August, alleging its executives "deceived" him into joining the venture by playing on his concerns about the existential risks of AI.