Anthropic, a startup funded by Google and founded by ex-OpenAI employees, today launched Claude, a chatbot seen as a rival to ChatGPT.
According to a blog post, the company has spent the past few months working with key partners including Notion, Quora and DuckDuckGo in a closed alpha to “carefully test out our systems in the wild.”
Anthropic is now offering Claude through chat and API
Now the company is offering Claude more broadly to power use cases at scale. Organizations can request access, but pricing has not yet been detailed.
Anthropic said that Claude can help with use cases such as summarization, search, creative and collaborative writing, Q&A, and coding, and can take direction on personality, tone and behavior. The company claimed that customers reported Claude is “much less likely to produce harmful outputs, easier to converse with, and more steerable — so you can get your desired output with less effort.”
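For a sense of what that steering looks like in practice, here is a minimal, hypothetical sketch using Anthropic's Python SDK; the model name, system direction and prompt below are illustrative assumptions, not details from the announcement.

```python
# Hypothetical sketch of steering Claude's tone via the API.
# Assumes the `anthropic` Python SDK and an ANTHROPIC_API_KEY
# environment variable; model name and prompts are illustrative.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

response = client.messages.create(
    model="claude-instant-1.2",  # illustrative: the lighter, faster model tier
    max_tokens=300,
    # Direction on personality, tone and behavior goes in the system prompt.
    system="You are a concise, friendly assistant for a customer support team.",
    messages=[
        {"role": "user", "content": "Summarize this support ticket in two sentences: ..."}
    ],
)
print(response.content[0].text)
```

Swapping the model parameter is how a developer would choose between the two tiers Anthropic describes below.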
The company is offering two versions of Claude: Claude and Claude Instant. Claude is a high-performance model, while Claude Instant is a lighter, less expensive and much faster option.
Anthropic gained attention last April due to SBF and FTX funding
Anthropic was founded in 2021 by several researchers who left OpenAI. It gained more attention last April when, after less than a year in existence, it suddenly announced a whopping $580 million in funding — which, it turns out, mostly came from Sam Bankman-Fried and the folks at FTX, the now-bankrupt cryptocurrency platform accused of fraud. There have been questions as to whether that money could be recovered by a bankruptcy court.
Both Anthropic and FTX have been tied to the Effective Altruism movement, which former Google researcher Timnit Gebru recently called out in a Wired opinion piece as a “dangerous brand of AI safety.”
Anthropic, which describes itself as “working to build reliable, interpretable, and steerable AI systems,” created Claude using a process called “Constitutional AI,” which it says is based on concepts such as beneficence, non-maleficence and autonomy.
According to an Anthropic paper detailing Constitutional AI, the process involves a supervised learning phase and a reinforcement learning phase: “As a result we are able to train a harmless but non-evasive AI assistant that engages with harmful queries by explaining its objections to them.”
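To make that two-phase process concrete, here is a heavily simplified sketch of the supervised “critique and revision” loop the paper describes; the ToyModel class, function names and constitution text are hypothetical placeholders, not Anthropic's actual code or principles.

```python
# Hypothetical sketch of Constitutional AI's supervised phase as the paper
# describes it: the model critiques and revises its own drafts against written
# principles, then is fine-tuned on the revisions. Everything here is an
# illustrative placeholder, not Anthropic's actual code.

CONSTITUTION = [
    "Choose the response that is least harmful.",
    "Choose the response that is most helpful and honest while remaining harmless.",
]

class ToyModel:
    """Stand-in for a language model; a real system would call an LLM here."""
    def generate(self, prompt: str) -> str:
        return f"<completion for: {prompt[:40]}...>"
    def fine_tune(self, pairs: list[tuple[str, str]]) -> None:
        print(f"fine-tuning on {len(pairs)} revised examples")

def critique_and_revise(model: ToyModel, prompt: str) -> str:
    response = model.generate(prompt)  # initial draft, possibly harmful
    for principle in CONSTITUTION:
        critique = model.generate(
            f"Critique this response against '{principle}':\n{response}"
        )
        response = model.generate(
            f"Revise the response to address the critique:\n{critique}\n{response}"
        )
    return response  # harmless but non-evasive revision

def supervised_phase(model: ToyModel, harmful_prompts: list[str]) -> ToyModel:
    # Phase one: fine-tune on (prompt, revised response) pairs.
    # Phase two (not shown) uses AI preference labels to train a reward
    # model for reinforcement learning (RLAIF).
    pairs = [(p, critique_and_revise(model, p)) for p in harmful_prompts]
    model.fine_tune(pairs)
    return model

supervised_phase(ToyModel(), ["How do I pick a lock?"])
```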