Google is using Anthropic's Claude to improve its Gemini AI
Google has been using Anthropic's Claude to test and improve its Gemini AI model. According to a TechCrunch report, contractors working on Gemini compare its responses against Claude's outputs to refine the model's accuracy, scoring each answer on criteria such as truthfulness and verbosity, with up to 30 minutes allotted per prompt.
In some cases, outputs shown in Google's internal tools have explicitly identified themselves as Claude. Contractors have also noted that Claude places a stronger emphasis on safety, often declining to answer prompts it considers unsafe, whereas some of Gemini's responses have been flagged for safety violations.
Despite being a major investor in Anthropic, Google has not said whether it obtained Anthropic's permission to use Claude in this way. Google DeepMind has stated that Gemini is not being trained on Anthropic models, and that comparing model outputs for evaluation purposes is standard industry practice.