Anthropic, an Alphabet-backed AI safety and research company, has released Claude, its new AI assistant.
Claude is designed to perform tasks similar to ChatGPT’s, responding to prompts with human-like text output. The company says the new AI was developed differently, however, with a focus on being “helpful, honest, and harmless.”
Tech companies have been grappling with safety concerns around chatbots, which do not understand the meaning or implications of the language they produce.
To keep it from producing harmful content, Claude’s creators adopted a new strategy against “prompt engineering,” in which users talk their way past a chatbot’s restrictions. They built the AI on a set of guiding principles and trained it on massive volumes of text data. Rather than trying to sidestep potentially dangerous topics, Claude is designed to explain its objections based on those principles.
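Anthropic has publicly described this training approach as “Constitutional AI”: the model critiques and revises its own draft answers against a written list of principles, and the revised outputs feed back into training. The sketch below illustrates that self-critique loop in outline only; the `generate` function and the example principle are hypothetical placeholders, not Anthropic’s actual implementation or API.

```python
# Illustrative sketch of a Constitutional-AI-style self-critique loop.
# `generate` stands in for any text-generation model call; it is a
# hypothetical placeholder, not Anthropic's real API.

PRINCIPLES = [
    # Example principle, paraphrasing the stated goal of being
    # "helpful, honest, and harmless".
    "Choose the response that is most helpful, honest, and harmless.",
]

def generate(prompt: str) -> str:
    """Hypothetical model call; swap in a real LLM invocation."""
    raise NotImplementedError

def constitutional_revision(user_prompt: str) -> str:
    # 1. Draft an initial answer to the user's prompt.
    draft = generate(user_prompt)
    for principle in PRINCIPLES:
        # 2. Ask the model to critique its own draft against a principle.
        critique = generate(
            f"Critique this response using the principle: {principle}\n\n"
            f"Prompt: {user_prompt}\nResponse: {draft}"
        )
        # 3. Ask the model to revise the draft in light of the critique.
        draft = generate(
            f"Revise the response to address the critique.\n\n"
            f"Critique: {critique}\nResponse: {draft}"
        )
    # Revised responses like this one are then used as training data,
    # so refusals can be explained in terms of the principles themselves.
    return draft
```

The design choice this illustrates is that the safety behavior comes from the principles the model was trained against, rather than from a hand-maintained list of forbidden topics.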
“There was nothing scary. That’s one of the reasons we liked Anthropic,” Richard Robinson, chief executive of Robin AI, a London-based startup with early access to Claude that uses AI to analyze legal contracts, told Reuters in an interview.
Robinson added that his company had tried applying OpenAI’s technology to contracts but found that Claude was better at understanding dense legal jargon and less prone to generating strange results.