ChatGPT is all anyone is able to talk about lately. Powered by the GPT-3.5 family of language models (with a faster Turbo variant for Plus subscribers), the AI chatbot has grown by leaps and bounds in what it can do. However, a lot of people have been waiting with bated breath for an upgraded model that pushes the envelope. Well, OpenAI has now made that a reality with GPT-4, its latest multimodal LLM that comes packed to the brim with improvements and unprecedented tech in AI. Check out all the details below!
The newly announced GPT-4 model by OpenAI is a major milestone in artificial intelligence. The biggest thing to mention is that GPT-4 is a large multimodal model. This means that it can accept both image and text inputs, giving it a deeper understanding of prompts. OpenAI mentions that even though the new model is less capable than humans in many real-world scenarios, it can still exhibit human-level performance on various professional and academic benchmarks.
GPT-4 is also deemed to be a more reliable, creative, and efficient model than its predecessor, GPT-3.5. For instance, the new model passed a simulated bar exam with a score around the top 10% of test takers (~90th percentile), while GPT-3.5's score landed around the bottom 10%. GPT-4 is also capable of handling far more nuanced instructions than the 3.5 model. OpenAI compared both models across a variety of benchmarks and exams, and GPT-4 came out on top. Check out all the cool things ChatGPT can do right here.
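If you're a developer wondering what "more nuanced instructions" means in practice, here's a minimal sketch of how one might steer the new model through OpenAI's existing Python chat API. Keep in mind the GPT-4 API is currently behind a waitlist, so this assumes you already have access; the Socratic-tutor instruction mirrors one of OpenAI's own demos.

```python
# A minimal sketch, assuming GPT-4 API access via OpenAI's Python library.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder, use your own key

response = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[
        # The "system" message sets nuanced behavior rules. GPT-4 is said
        # to stick to instructions like these far better than GPT-3.5 did.
        {
            "role": "system",
            "content": "You are a Socratic tutor: never give the answer "
                       "directly, only guide the student with questions.",
        },
        {"role": "user", "content": "How do I solve 3x + 5 = 20?"},
    ],
)

print(response["choices"][0]["message"]["content"])
```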
As mentioned above, the new model can accept prompts made up of both text and images. Compared to restricted text-only input, GPT-4 fares much better at understanding inputs that contain both text and images. These visual inputs work consistently across various kinds of documents, including pages mixing text and photos, diagrams, and even screenshots.
OpenAI showcased this by feeding GPT-4 an image alongside a text prompt asking it to describe what’s funny about the image. As seen above, the model successfully read a random image from Reddit, answered the user’s prompt, and correctly identified the humorous element. However, GPT-4’s image inputs are still not publicly available and remain a research preview.
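Since image inputs are still in research preview, there's no public interface for them just yet. Purely as an illustration, a combined text-and-image request could end up looking something like the sketch below; the message structure and image handling here are assumptions for illustration, not OpenAI's confirmed API.

```python
# Hypothetical sketch of a text + image prompt to GPT-4.
# Image inputs are still in research preview, so the content format
# below is an assumption for illustration, not a documented API.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder, use your own key

response = openai.ChatCompletion.create(
    model="gpt-4",  # assumed; no vision-enabled model name is public yet
    messages=[
        {
            "role": "user",
            # Assumed structure: a list mixing a text part and an image URL
            "content": [
                {"type": "text", "text": "What is funny about this image?"},
                {"type": "image_url", "image_url": "https://example.com/meme.jpg"},
            ],
        }
    ],
)

print(response["choices"][0]["message"]["content"])
```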
While GPT-4 is a sizeable leap from its previous iteration, some problems still exist. For starters, OpenAI mentions that it is still not fully reliable and is prone to hallucination. This means that the AI can make reasoning errors, so its outputs should be handled with great care and with human oversight. It might also be confidently wrong in its predictions, which can lead to errors. However, GPT-4 does reduce hallucinations compared to previous models; specifically, the new model scores 40% higher than GPT-3.5 in the company’s internal evaluations.
From the looks of it, GPT-4 is shaping up to be an extremely appealing language model, even with some chinks in its armor. For those looking for even more detailed information, we already have something in the works. So stay tuned for more.
Combining his love for Literature and Tech, Upanishad dived into the world of technology journalism with fire. Now he writes about anything and everything while keeping a keen eye on his first love of gaming. Often found chronically walking around the office.