With GPT-4, OpenAI has released the long-awaited new version of its language model, which shows remarkable progress, for example in processing image and document input. But the success is accompanied by criticism and disappointment, because the developers are becoming increasingly secretive.
OpenAI has provided a large number of benchmarks and demos for GPT-4. What is missing, however, is any information about the data the model was trained on. Details on energy costs and on the hardware and methods used to build the model are also absent. OpenAI does not hide its reasons, but openly cites competition as the argument. Specifically, the technical report (PDF) states:
Given both the competitive landscape and the safety implications of large-scale models like GPT-4, this report contains no further details about the architecture (including model size), hardware, training compute, dataset construction, training method, or similar.
AI researchers have criticized the approach. Ben Schmidt of Nomic AI argued on Twitter that there can be no more talk of "open" if OpenAI does not disclose any information about the training data. Other researchers argue that OpenAI should therefore change its name.
Please @OpenAI change your name ASAP. It's an insult to our intelligence to call yourself "open" and release that kind of "technical report" that contains no technical information whatsoever. https://t.co/WdXAq4a309
— David Picard (@david_picard) March 14, 2023
Others, like Lightning AI CEO William Falcon, show at least some understanding. For a company, the decision is legitimate, he told VentureBeat; it just should not be passed off as research. In this way, OpenAI sets a precedent that could damage the industry if, for example, startups follow suit.
OpenAI: Competition leads to secrecy
OpenAI started in 2015 as a non-profit organization whose stated mission was that the general public should benefit from artificial intelligence and the findings of AI research. But signs of a change of course appeared as early as 2019, when OpenAI founded a subsidiary, OpenAI Limited Partnership (OpenAI LP), that can operate for profit. The step was justified by the enormous costs of AI research: OpenAI was to become more open to investors. The structure also makes it possible to reward employees with shares in the company.
Profits are capped, however: investors can receive at most 100 times their stake. Nevertheless, this decision paved the way for the partnership with Microsoft, with its investment of up to 10 billion US dollars, and for the integration of the GPT language models into Microsoft's product portfolio. The most recent announcement was the Office 365 Copilot, an AI assistant based on GPT-4 that is intended to make everyday office work easier.
In the process, however, OpenAI had to move away from its commitment to transparency. Even if the original mission officially remains the top priority, the idealism of the early days could not be sustained: the pressure in the fiercely competitive field is too strong, the MIT Technology Review analyzed back in 2020. Even then, the consequence was that OpenAI employees were no longer allowed to publish on certain topics, much as at Google or Meta, because the findings were considered a competitive advantage and were therefore to remain secret.
“We were wrong”
With the GPT-4 paper, this tendency has now reached a new peak. It is a change of course that the company openly defends. "We were wrong," says Ilya Sutskever, OpenAI's chief scientist and co-founder, in an interview with The Verge. Anyone who believes, as OpenAI does, in an extremely powerful AI future will conclude that an open-source approach is not a good idea.
Sutskever does concede that open-source models would have advantages when it comes to developing safety measures: the more people work with a model, the more is learned about it. That is why OpenAI wants to give academic and research institutions access to the model.