GPT-4 Is Here: Multimodal AI That Passes the Bar Exam


OpenAI just unveiled GPT-4, and it's a staggering leap beyond GPT-3.5. The new model is multimodal — it can accept both text and image inputs — and demonstrates human-level performance on a range of professional and academic benchmarks. On the Uniform Bar Exam, GPT-4 scores around the top 10% of test takers, where GPT-3.5 landed around the bottom 10%. It also posts passing scores on AP exams including Computer Science, Biology, and Calculus.

The safety improvements are equally significant. Compared to its predecessor, GPT-4 is 40% more likely to produce factual responses and 82% less likely to respond to requests for disallowed content. OpenAI spent six months on safety work and iterative alignment before public deployment.

ChatGPT Plus subscribers get immediate GPT-4 access, and Microsoft confirmed that the new Bing AI has been running on GPT-4 since launch. Third-party developers can access GPT-4 via the OpenAI API. One caveat: image input (vision) remains in limited preview. Still, the demonstrations are extraordinary — GPT-4 can describe a complex diagram, explain a meme, or generate code from a hand-drawn sketch.
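For developers curious what API access looks like, here is a minimal sketch using the `openai` Python package (the launch-era `ChatCompletion` interface). The prompt text and the helper function `build_request` are illustrative assumptions, not from the announcement; the model identifier `gpt-4` is the one OpenAI documented:

```python
# Minimal sketch of calling GPT-4 through the OpenAI API.
# Assumes the `openai` Python package (v0.27-era interface) and an
# API key in the OPENAI_API_KEY environment variable.
import os


def build_request(prompt: str) -> dict:
    """Assemble a chat-completion payload for a single user prompt.

    `build_request` is a hypothetical helper for illustration; only the
    payload shape and the "gpt-4" model name come from OpenAI's docs.
    """
    return {
        "model": "gpt-4",
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": prompt},
        ],
    }


if __name__ == "__main__":
    payload = build_request("Summarize the GPT-4 announcement in one sentence.")
    if os.environ.get("OPENAI_API_KEY"):
        import openai  # requires `pip install openai`

        openai.api_key = os.environ["OPENAI_API_KEY"]
        response = openai.ChatCompletion.create(**payload)
        print(response["choices"][0]["message"]["content"])
    else:
        # No key configured; just show which model the payload targets.
        print(payload["model"])
```

Note that text-only requests use the same chat endpoint; vision input, as mentioned above, is not generally available through the API yet.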

GPT-4 is not just an incremental update. It’s a paradigm shift in what AI can do.

Source: OpenAI Research
