Last week, Meta and IBM officially launched a new group called the ‘AI Alliance’, which aims to promote open-source AI development.
Although the formation of the group is in itself a talking point, much of the chatter around the announcement has centred on who isn’t involved, including Microsoft, OpenAI and Google.
In fact, the AI Alliance’s advocacy of an ‘open-science’ approach to AI actually puts it at odds with these firms. So, why are big tech companies taking sides and who is likely to win? Let’s take a look.
What’s the difference between the two camps?
The AI industry is now divided into two camps: the ‘open’ camp, which believes that the underpinning technology should be publicly available for free, and the ‘closed’ camp, which believes that such openness isn’t safe.
Advocates for open-source AI favour an approach that is ‘not proprietary and closed’. In other words, these companies believe that AI code, models and data should be available for anyone to examine, modify and build upon.
This is the view of the new AI Alliance, which includes IBM, Meta, Dell, Sony, AMD, Intel and several universities and startups.
According to Darío Gil, a senior vice-president at IBM, this group is “coming together to articulate, simply put, that the future of AI is going to be built fundamentally on top of the open scientific exchange of ideas and on open innovation, including open source and open technologies”.
Members of the group, including Meta’s chief AI scientist Yann LeCun, have previously taken swipes at OpenAI, Google and Anthropic for lobbying lawmakers in favour of the ‘closed’ approach.
Earlier this autumn, LeCun stated on social media that these companies were taking part in “massive corporate lobbying” to write the rules in a way that benefits their high-performing AI models and could concentrate their power over the technology’s development.
LeCun said on X, formerly Twitter, that he worried fearmongering from fellow scientists about AI “doomsday scenarios” was giving ammunition to those who want to ban open-source research and development.
“In a future where AI systems are poised to constitute the repository of all human knowledge and culture, we need the platforms to be open source and freely available so that everyone can contribute to them,” LeCun wrote. “Openness is the only way to make AI platforms reflect the entirety of human knowledge and culture.”
What does the ‘closed’ side believe?
OpenAI, Google and Anthropic, along with OpenAI’s key partner Microsoft, have formed their own industry group called the Frontier Model Forum.
Despite its name, OpenAI, the company behind ChatGPT and the image generator DALL-E, builds AI systems that are decidedly closed.
This is unsurprising (if a little confusing) given Microsoft’s involvement: developing ‘closed’ programs has always been part of Microsoft’s business model, and the company long opposed open-source software that could compete with Windows or Office.
According to Ilya Sutskever, OpenAI’s chief scientist and co-founder, companies in the closed-source AI camp believe they should be able to pursue their near-term commercial incentives by making money from their creations.
However, the debate about whether AI should be ‘open’ or ‘closed’ goes far beyond discussions about money. After all, these companies are also arguing about whether AI models that can be ‘mind-bendingly powerful’ should be made available to the public.
While open-source advocates believe the technology can be used for good and that more can be achieved through collaboration and code inspection, proponents of the ‘closed’ approach counter that current models could be manipulated and abused, and that the danger will only increase as models become more advanced.
The Center for Humane Technology, a longtime critic of Meta’s social media practices, believes that “as there are no guardrails in place right now, it’s just completely irresponsible to be deploying these models to the public”.
Where does Elon Musk stand in all of this?
Good question. As you may have guessed, he’s largely doing his own thing. Although he was a co-founder of OpenAI in 2015, he left the company three years later.
He’s now launched his own AI startup, xAI, which he describes as ‘pro humanity’. He believes the company is building a system that is safe because it will be “maximally curious” about humanity rather than having moral guidelines programmed into it.
He’s previously said that “it’s actually important for us to worry about a Terminator future in order to avoid a Terminator future.” But he’s also confessed that it would be a “while” before xAI reaches the level of OpenAI or Google.
In order to advance progress and keep pace with rivals, he’s currently looking to raise $1bn in equity funding, according to a filing with the US Securities and Exchange Commission.
Last month, the company released its first AI model, a chatbot with a “rebellious streak” called Grok that’s inspired by The Hitchhiker’s Guide to the Galaxy.
Who will win?
Well, the answer to this question (somewhat boringly) is entirely dependent on which side regulators take.
After all, while an increasingly public debate has emerged over the benefits and dangers of an open-source approach to AI development, regulators have so far stayed out of the argument while they wait for further information.
In a recent sweeping executive order on AI, President Joe Biden referred to open models by the technical name ‘dual-use foundation models with widely available weights’ and said they needed further study (weights are the numerical parameters that determine how an AI model behaves).
When those weights are publicly posted on the internet, “there can be substantial benefits to innovation, but also substantial security risks, such as the removal of safeguards within the model,” Biden’s order said. He gave the commerce secretary, Gina Raimondo, until July to talk to experts and come back with recommendations on how to manage the potential benefits and risks.
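For readers unfamiliar with the term, here is a minimal, purely illustrative Python sketch of what ‘weights’ are. The three-number toy model below is a hypothetical stand-in for intuition only; it is not drawn from the executive order or from any real foundation model, which would hold billions of such parameters across many layers.

```python
# Toy illustration: "weights" are the learned numbers inside a model.
# (Hypothetical values chosen for this example only.)

weights = [0.8, -0.3, 0.5]  # numerical parameters learned during training
bias = 0.1

def predict(inputs):
    """Combine the inputs with the weights to produce the model's output."""
    return sum(w * x for w, x in zip(weights, inputs)) + bias

print(predict([1.0, 2.0, 3.0]))  # prints 1.8

# "Widely available weights" means these numbers are published for anyone
# to download, so anyone can run the model locally -- and also change them,
# which is how built-in safeguards could be removed (the security risk
# Biden's order flags).
```

In a real open-weights release, it is these published parameters (rather than this toy arithmetic) that let anyone run, fine-tune or alter the model without the original developer’s involvement.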
Over in Europe, the EU has reached a provisional agreement on the AI Act, billed as the world’s first comprehensive AI law. However, although the deal has been described as ‘historic’ and reportedly extends beyond AI to cover social media and search engines, specifics have not been published and the new law is not expected to come into force until at least 2025.
Why is this important?
It matters because the battle between open-source AI and closed AI is also a battle for the future of the technology.
Although a move towards open-source AI would foster greater collaboration, innovation and transparency in the AI community, there are drawbacks. The same openness that lets users inspect, verify and audit the systems they use also lets bad actors modify the models and strip out their safeguards, making misuse and abuse easier.
Meanwhile, a move towards closed-source AI favours security, privacy and profitability for owners and developers. If companies keep their code secret, they can prevent unauthorised access and monetise their products. Keeping development in-house can also support quality assurance and reliability, since the vendor’s own protocols and guidelines govern every release.
That said, there are also drawbacks associated with a move towards closed-source AI. For example, if code and data are always hidden, AI companies will miss out on valuable feedback, insights and improvements. Similarly, closed-source AI will also make it harder for users to understand, verify or audit the AI systems they use, which can reduce trust, accountability and ethical standards.
So, what’s the answer? Well, until regulators get involved more decisively, we’re likely to see a continued balance between openness and closedness. But one thing is beyond doubt: as AI becomes more powerful, we need to ensure that it is developed and used in a responsible and ethical manner.
This will likely require more collaboration and communication between open-source and closed-source AI stakeholders, as well as more regulation and governance from governments around the world.
Tom Brook
When he's not crafting content, Tom's obsessed with all things sport, particularly football, cricket, golf and F1.