Google launches new Gemini AI model with interactive answers
Alphabet Inc.’s Google on Tuesday debuted an updated version of its artificial intelligence model, Gemini, that executives said represents a “massive jump” in reasoning and coding ability.
The new model, Gemini 3, will be available immediately across all of Google’s major products, including search, and can answer questions with interactive graphics. Gemini 3, like its predecessor, can process text, images, and other media as well as solve complex science and math problems, the company said. It has dramatically improved its ability to reason and respond based on that input, Chief Executive Officer Sundar Pichai said in a statement.
In just two years, “AI has evolved from simply reading text and images to reading the room,” Pichai said.
Google is working to reassert leadership in the fast-moving generative AI race, where rivals like OpenAI and Anthropic PBC have recently rolled out major upgrades to their own models. By weaving Gemini 3 into core products and developer tools all at once, the company is betting that tighter integration of its newest AI will showcase a faster payoff from its years of investment in the technology.
“This is our most intelligent model,” Koray Kavukcuoglu, Google DeepMind chief technology officer, said in a briefing with reporters. It will help people “bring any idea that they have to life.”
Gemini 3 is able to transform information between formats and generate visuals or apps from single prompts, Google executives said in the briefing. For instance, when asked for a travel plan, Gemini 3 can generate visualizations that include clickable, interactive components, executives demonstrated.
“It’s not just about how Gemini 3 can understand input. It’s also how it can output things in entirely new ways,” said Josh Woodward, head of Google’s Gemini team and its product incubator, Labs.
The new model will be used to answer the hardest queries in Google Search or AI Mode, while simpler questions will continue to rely on other Gemini models.
Google also unveiled Antigravity, a new development platform for building AI-driven coding agents, which is available on a preview basis. The system lets developers delegate tasks to autonomous agents that can write, test and verify code across an editor, terminal and browser. Google executives said on a call with reporters that when a developer wants to bring an idea to life — like building a flight tracking web application — the agent is able to complete the task by working across those tools.
Google is also introducing Gemini 3 Deep Think, an enhanced reasoning mode that tests multiple hypotheses in parallel and selects the best answer. The company says it can handle advanced, multistep problems such as coding, scientific research, or complex planning. Deep Think will be available first to subscribers of Google AI Ultra, the company’s highest-tier paid plan for its AI technology, which costs $249.99 per month.
Google also said the latest model is its most secure yet. It’s trained to resist prompt injection attacks, which occur when an attacker slips malicious instructions into a prompt or other content to trick the AI into ignoring its safety controls, leaking sensitive information or producing harmful content.




