Google’s next-gen AI model Gemini outperforms GPT-4

Google has unveiled Gemini, a cutting-edge AI model that stands as the company’s most capable and versatile to date.

Demis Hassabis, CEO and Co-Founder of Google DeepMind, introduced Gemini as a multimodal model that is capable of seamlessly understanding and combining various types of information, including text, code, audio, image, and video.

Gemini comes in three optimised versions: Ultra, Pro, and Nano. The Ultra model boasts state-of-the-art performance, becoming the first model to surpass human experts on the MMLU (massive multitask language understanding) benchmark and leading on tasks ranging from coding to multimodal reasoning.

What sets Gemini apart is its native multimodality, eliminating the need to stitch together separate components for different modalities. This approach, refined through large-scale collaboration across Google teams, positions Gemini as a flexible and efficient model capable of running on everything from data centres to mobile devices.

One of Gemini’s standout features is its sophisticated multimodal reasoning, enabling it to extract insights from vast datasets with remarkable precision. The model’s prowess extends to understanding and generating high-quality code in popular programming languages.

However, as Google ventures into this new era of AI, responsibility and safety remain paramount. Gemini undergoes rigorous safety evaluations, including assessments for bias and toxicity. Google is actively collaborating with external experts to address potential blind spots and ensure the model’s ethical deployment.

Gemini 1.0 is now rolling out across various Google products – including the Bard chatbot – with plans for integration into Search, Ads, Chrome, and Duet AI. However, the Bard upgrade will not be released in Europe pending clearance from regulators.

Developers and enterprise customers can access Gemini Pro via the Gemini API in Google AI Studio or Google Cloud Vertex AI. Android developers will also be able to build with Gemini Nano via AICore, a new system capability available in Android 14.
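
For developers taking the Google AI Studio route, the snippet below is a minimal sketch of calling Gemini Pro from Python. It assumes the google-generativeai SDK and an API key created in AI Studio; the key, prompt, and output handling are illustrative only, and the Vertex AI and on-device Nano/AICore paths use their own client libraries not shown here.

```python
# Minimal sketch: calling Gemini Pro via the google-generativeai SDK (Google AI Studio path).
# Assumes `pip install google-generativeai` and an API key from AI Studio.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # placeholder API key

# "gemini-pro" is the text model exposed through the Gemini API at launch.
model = genai.GenerativeModel("gemini-pro")

response = model.generate_content(
    "In two sentences, explain the difference between Gemini Ultra, Pro, and Nano."
)
print(response.text)
```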

(Image Credit: Google)

See also: AI & Big Data Expo: AI’s impact on decision-making in marketing

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with Cyber Security & Cloud Expo and Digital Transformation Week.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

Tags: ai, artificial intelligence, bard, deepmind, gemini, gemini nano, gemini pro, gemini ultra, google gemini, Model, multimodal