GPT-4: What we know so far about OpenAI’s new language model

by Andrea Bergonzi, Data Scientist at Dataskills

We recently dedicated an article to ChatGPT, the generative language model widely regarded as the true avant-garde of conversational artificial intelligence. There, we explained that the heart of the chatbot is currently GPT-3.5, an enhanced version of the original GPT-3 that will soon be replaced by a new release: GPT-4.
Keeping track of the possible developments tied to this new technology is not easy: we are living through a phase in which innovation is so rapid and disruptive that its real effects are almost impossible to predict.

What we know is that the launch of GPT-4 by OpenAI (the formally non-profit organization that created ChatGPT and specializes in friendly AI) will most likely take place in the coming months, following a series of revolutionary releases: from DALL-E 2, the text-to-image model made available in July 2022, to Whisper, the automatic speech recognition (ASR) model released even more recently. Both products impressed with their robustness, precision and innovation.

Even if there are no certainties about GPT-4’s features, it is easy to imagine that the release will respond to the needs of a market that, now more than ever, demands language models that are extremely accurate and versatile, but also better optimized computationally and more secure.

In this sense, the remarks made by Sam Altman, CEO of OpenAI, about what to expect from GPT-4 are very interesting. In an interview given a few weeks ago to StrictlyVC and subsequently picked up by The Verge, Altman explains first of all that the new release will happen “when we are confident that we can launch it in a safe and responsible way”. There is therefore no firm date, not even as to the quarter of the year in which GPT-4 might appear on the market.

Altman then adds that the rumor mill that went viral on Twitter, comparing the number of GPT-3 parameters (175 billion) with those attributed to GPT-4 (100 trillion), is “completely ridiculous. I don’t know where it all comes from. People are begging to be disappointed, and they will be.”
The excessive hype is therefore far from reality, although OpenAI’s intention to eventually build an artificial intelligence model capable of generating video appears to be confirmed, building on work already carried out in this direction by giants such as Meta and Google. Here too, however, the timing is completely unknown.

As for the considerations on AI and “prejudice” (one of ChatGPT’s current weak points is precisely the involuntary perpetuation of misleading and socially harmful content, such as racism and sexism, internalized from its training data), the CEO of OpenAI comments that, within a world-system governed by very broad absolute rules, people should be able to interact with an artificial intelligence capable of interpreting their point of view and their values.

“If you want the never-offensive, super-safe model for work, you should be able to get it. Likewise, if you want one that is bolder, more creative and exploratory, and therefore also able to say things that might make you uncomfortable, you should be able to get it. My view is that many systems will exist with different settings for the values they want to promote. In the long run, what a user should be able to do is write down what they want, what their values are and how they want the AI to behave in relation to them, and obtain results consistent with those requests, so that the system is effectively their artificial intelligence.”

This statement is fully in line with the core mission of OpenAI and similar companies: mitigating bias by preventing artificial intelligence systems from absorbing and repeating it, in order to create technologies that are genuinely positive and constructive for society as a whole.

And what does Altman have to say about the theory that ChatGPT will eventually oust Google from the podium?
“I think whenever someone talks about one technology being the end of another, they are making a mistake. I also believe there is a shift in search underway, and that at some point this change will become dominant, but not as dramatically as people believe, and not in the short term.”

In short, Altman reveals little or nothing about GPT-4, at least in this pre-release phase. Precisely for this reason, speculation continues.
Below, we therefore summarize what we know so far, and what we can only assume.


First of all, it seems that GPT-4 will not be much bigger than GPT-3, again according to OpenAI’s CEO. Its parameter count is therefore assumed to fall somewhere between 175B and 280B, roughly in line with that of DeepMind’s language model, Gopher.

This speculation stems from Altman’s statement that the OpenAI development team’s focus is on improving the performance of smaller language models, since large ones require very large datasets, extremely complex implementations and huge computing resources. Considering that deploying large models would be prohibitively expensive, as well as ineffective, even for companies, it is easy to see why OpenAI has chosen to move in the opposite direction.

On the parameterization front, we also know that large language models are still largely poorly optimized and very expensive to train, forcing a considerable trade-off between cost and accuracy. A practical example is GPT-3 itself, which was trained only once despite its errors: hyperparameter optimization was never performed, precisely because of the unsustainable cost that process would have entailed.

The most credible hypothesis about the optimal compute budget is instead the following: for GPT-4, OpenAI could increase the number of training tokens to around five trillion, which would require roughly 10-20x the FLOPs used to train GPT-3 in order to reach minimal loss.
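As a rough sanity check on this hypothesis, we can use the standard back-of-the-envelope approximation for dense-transformer training compute, FLOPs ≈ 6 × parameters × tokens. Plugging in GPT-3’s published figures (175B parameters, roughly 300B training tokens) against a hypothetical same-size model trained on five trillion tokens gives a ratio inside the claimed 10-20x range. The numbers below are an illustration of that arithmetic, not OpenAI data:

```python
def training_flops(n_params: float, n_tokens: float) -> float:
    """Standard estimate for dense transformers:
    roughly 6 FLOPs per parameter per training token."""
    return 6.0 * n_params * n_tokens

# GPT-3's published training setup: 175B parameters, ~300B tokens.
gpt3_flops = training_flops(175e9, 300e9)

# Hypothetical GPT-4: a similar parameter count, ~5T training tokens.
gpt4_flops = training_flops(175e9, 5e12)

print(f"GPT-3:  {gpt3_flops:.2e} FLOPs")
print(f"GPT-4?: {gpt4_flops:.2e} FLOPs")
print(f"ratio:  {gpt4_flops / gpt3_flops:.1f}x")  # ~16.7x, within 10-20x
```

Note that with a parameter count at the upper end of the rumored range (280B) the ratio would exceed 20x, which is one reason the lower end of the size estimate fits the hypothesis better.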
Will GPT-4 be multimodal? Here the answer already seems definitive: no. Altman explained that the model will be text-only, because a good-quality multimodal model would require a very demanding combination of textual and visual information in order to outperform GPT-3 and DALL-E 2.

It is therefore advisable not to expect particular innovations on this front either. More attention will instead go to safety and to the value alignment of the system, so as to address the problems already highlighted around bypassing safety guardrails, misinformation and bias, which today are (understandably) central concerns for OpenAI.

In summary, we can do nothing but wait, for a time not yet determined, for a model about which we know practically nothing in terms of architecture (although it is rumored to remain identical to its predecessor), size and training data.


GPT-4 will be used, like its predecessor, for the usual language applications: code generation, text summarization, tabulation, translation, classification, chatbots and grammar correction, with performance that is less biased, better aligned with human values, more accurate and more robust.
This outlook may sound like a work in progress, and perhaps a little disappointing, but it reflects the reality we find ourselves in today: language models are not yet completely reliable, they cannot understand the physical world in all its nuances (to say nothing of the abstract one, which remains completely unknown to them), and they certainly cannot understand or interpret the psychological dimensions of human beings.
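It is worth noting how varied these applications look while all reducing to the same text-in/text-out interface: each task is just a different prompt wrapped around one completion request. The sketch below illustrates this pattern with a hypothetical payload builder; the model name, parameters and prompt templates are assumptions for illustration, not confirmed GPT-4 details:

```python
# Each "application" is just a different prompt template wrapped
# around the same text-completion request body.
PROMPTS = {
    "summarize": "Summarize the following text in one sentence:\n\n{text}",
    "translate": "Translate the following text into French:\n\n{text}",
    "fix_grammar": "Correct the grammar of the following text:\n\n{text}",
}

def build_request(task: str, text: str, model: str = "gpt-4") -> dict:
    """Return a completion-style request body (model name is a placeholder)."""
    return {
        "model": model,
        "prompt": PROMPTS[task].format(text=text),
        "max_tokens": 256,
        "temperature": 0.2,  # low temperature suits deterministic, factual tasks
    }

req = build_request("summarize", "GPT-4 is expected to be text-only.")
print(req["prompt"].splitlines()[0])  # prints the task instruction line
```

The design point is that the promised improvements (less bias, better alignment, more robustness) would arrive transparently to all of these use cases at once, since they share a single underlying interface.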

As scientist and entrepreneur Gary Marcus put it well in an article published in Communications of the ACM at the beginning of January, we should picture GPT-4 as a bull in a china shop: clumsy, reckless, difficult to control and almost impossible to predict. A system which, although capable of closing several of the gaps GPT-3 showed in rational, medical and scientific reasoning, will continue to show significant shortcomings when it comes to human psychology, mathematics and, in part, even science.
In practice, the time has not yet come for a complete alignment between what human beings expect from machines and what machines are actually capable of doing.

There is room, then, for moderate enthusiasm, without forgetting that GPT-4 cannot replace humans in rhetoric, diplomacy or reasoning, and certainly cannot be entrusted with arbitrary decision-making.
Before an artificial intelligence that humans can truly trust can exist, genuinely innovative architectures must emerge, capable of incorporating both explicit knowledge and the innumerable models of the world we live in.



