OpenAI’s enhanced GPT-4 turbo is “more direct, less verbose”

And, it can accept image inputs!

OpenAI has upgraded its GPT-4 Turbo model with a new version called GPT-4 Turbo with Vision, which adds computer vision capabilities for analysing image content. The model is accessible to developers as an API and is available to ChatGPT Plus subscribers, who pay $20 (approximately ₹1,650) per month.
OpenAI writes in a post on X, “When writing with ChatGPT [with the new GPT-4 Turbo], responses will be more direct, less verbose and use more conversational language.” The updated model is said to bring improvements in writing, maths, logical reasoning, and coding, along with a more current knowledge base.

Trained on publicly available data up to December 2023, the latest version of GPT-4 Turbo offers significant enhancements over its predecessor, which had a data cut-off in April 2023. This upgraded AI model has a context window of 128,000 tokens.

As part of the update, GPT-4 Turbo can now accept image inputs. With the integration of vision technology, the model can process and analyse images alongside text, providing detailed and insightful responses. This expansion into computer vision opens up possibilities for developers, enabling them to build applications across a wide range of industries.
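To make the vision capability concrete, here is a minimal sketch of the request body a developer would send to the Chat Completions API to pair a text prompt with an image. The model identifier and image URL are illustrative placeholders; the payload shape follows OpenAI's documented `image_url` content-part format.

```python
import json

def build_vision_request(prompt: str, image_url: str) -> dict:
    """Build a chat-completion payload that pairs text with an image input."""
    return {
        "model": "gpt-4-turbo",  # assumed model identifier
        "messages": [
            {
                "role": "user",
                # A content list lets one message mix text and image parts.
                "content": [
                    {"type": "text", "text": prompt},
                    {"type": "image_url", "image_url": {"url": image_url}},
                ],
            }
        ],
    }

payload = build_vision_request(
    "What is shown in this picture?",
    "https://example.com/photo.jpg",  # placeholder URL
)
print(json.dumps(payload, indent=2))
```

In practice this payload would be sent with an official client library and an API key; only the structure is shown here.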

The latest update also highlights JSON mode and function calling, which let developers receive structured JSON output from the model and connect it to tasks within their applications. These features aim to streamline workflows and increase efficiency, making it easier to integrate GPT-4 Turbo with Vision into a variety of projects.
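The paragraph above can be illustrated with a sketch of a request that enables JSON mode and declares a tool for function calling, following the documented Chat Completions `tools` and `response_format` fields. The function name `get_order_status`, its parameters, and the model identifier are hypothetical examples, not a fixed API surface.

```python
import json

def build_tool_request(user_message: str) -> dict:
    """Build a chat-completion payload using JSON mode and a function tool."""
    return {
        "model": "gpt-4-turbo",  # assumed model identifier
        "response_format": {"type": "json_object"},  # JSON mode
        "messages": [
            # JSON mode expects the prompt to mention JSON explicitly.
            {"role": "system", "content": "Reply in JSON."},
            {"role": "user", "content": user_message},
        ],
        "tools": [
            {
                "type": "function",
                "function": {
                    "name": "get_order_status",  # hypothetical app function
                    "description": "Look up the status of an order.",
                    "parameters": {
                        "type": "object",
                        "properties": {
                            "order_id": {"type": "string"},
                        },
                        "required": ["order_id"],
                    },
                },
            }
        ],
    }

request = build_tool_request("Where is order 1234?")
print(json.dumps(request, indent=2))
```

When the model decides the tool applies, it returns the function name with JSON-encoded arguments, which the application then executes itself.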