Anthropic, one of the world's largest AI vendors, has a powerful family of generative AI models called Claude. These models can perform a range of tasks, from captioning images and writing emails to solving math and coding challenges.
With Anthropic's model ecosystem growing so quickly, it can be tough to keep track of which Claude models do what. To help, we've put together a guide to Claude, which we'll keep updated as new models and upgrades arrive.
Claude models
Claude models are named after literary forms: Haiku, Sonnet, and Opus. The latest are:
- Claude 3.5 Haiku, a lightweight model.
- Claude 3.7 Sonnet, a midrange, hybrid reasoning model. This is currently Anthropic's flagship AI model.
- Claude 3 Opus, a large model.
Counterintuitively, Claude 3 Opus, the largest and most expensive model Anthropic offers, is the least capable Claude model at the moment. However, that's sure to change when Anthropic releases an updated version of Opus.
Most recently, Anthropic released Claude 3.7 Sonnet, its most advanced model to date. This model differs from Claude 3.5 Haiku and Claude 3 Opus in that it's a hybrid reasoning model, which can give both real-time answers and more considered, "thought-out" answers to questions.
When using Claude 3.7 Sonnet, users can choose whether to activate the model's reasoning abilities, which prompt the model to "think" for a short or long period of time.
When reasoning is turned on, Claude 3.7 Sonnet will spend anywhere from a few seconds to a few minutes in a "thinking" phase before answering. During this phase, the model breaks the user's prompt down into smaller parts and checks its answers.
Claude 3.7 Sonnet is Anthropic's first AI model that can "reason," a technique many AI labs have turned to as traditional methods of improving AI performance taper off.
Even with its reasoning disabled, Claude 3.7 Sonnet remains one of the tech industry's top-performing AI models.
In November, Anthropic released an improved (and pricier) version of its lightweight model, Claude 3.5 Haiku. This model outperforms Anthropic's Claude 3 Opus on several benchmarks, but it can't analyze images the way Claude 3 Opus or Claude 3.7 Sonnet can.
All Claude models, which come with a standard 200,000-token context window, can also follow multistep instructions, use tools (e.g., stock ticker trackers), and produce structured output in formats like JSON.
A context window is the amount of data a model like Claude can analyze before generating new data, while tokens are subdivided bits of raw data (like the syllables "fan," "tas," and "tic" in the word "fantastic"). Two hundred thousand tokens is equivalent to about 150,000 words, or a 600-page novel.
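The conversion above can be sanity-checked with some back-of-the-envelope arithmetic. The ratios used here (roughly 0.75 words per token and 250 words per page) are common heuristics for English text, not official Anthropic figures:

```python
# Rough conversion from tokens to words and novel pages.
# Assumes ~0.75 words per token and ~250 words per page; both are
# heuristics for English text, not official Anthropic numbers.
WORDS_PER_TOKEN = 0.75
WORDS_PER_PAGE = 250

def context_window_capacity(tokens: int) -> tuple[int, int]:
    """Return the approximate (words, pages) that fit in `tokens` tokens."""
    words = int(tokens * WORDS_PER_TOKEN)
    pages = words // WORDS_PER_PAGE
    return words, pages

words, pages = context_window_capacity(200_000)
print(words, pages)  # 150000 600
```

With these assumptions, a 200,000-token context window works out to exactly the 150,000 words and 600 pages cited above.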
Unlike many leading generative AI models, Anthropic's can't access the internet, meaning they're not particularly good at answering questions about current events. They also can't generate images, only simple line diagrams.
As for the major differences between Claude models, Claude 3.7 Sonnet is faster than Claude 3 Opus and better understands nuanced and complex instructions. Haiku struggles with sophisticated prompts, but it's the swiftest of the three models.
Claude model pricing
The Claude models are available through Anthropic's API and managed platforms such as Amazon Bedrock and Google Cloud's Vertex AI.
Here's the Anthropic API pricing:
- Claude 3.5 Haiku costs 80 cents per million input tokens (~750,000 words), or $4 per million output tokens
- Claude 3.7 Sonnet costs $3 per million input tokens, or $15 per million output tokens
- Claude 3 Opus costs $15 per million input tokens, or $75 per million output tokens
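To put those per-million-token rates in concrete terms, here's a minimal sketch that estimates the cost of a single API request. The dictionary keys are informal labels for this example, not necessarily the exact model identifiers Anthropic's API expects:

```python
# Estimate request cost from the per-million-token rates listed above.
# Each entry is (input_rate, output_rate) in USD per 1M tokens.
# Keys are informal labels, not necessarily Anthropic's API model IDs.
PRICING = {
    "claude-3.5-haiku": (0.80, 4.00),
    "claude-3.7-sonnet": (3.00, 15.00),
    "claude-3-opus": (15.00, 75.00),
}

def estimate_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Return the estimated USD cost of one request."""
    input_rate, output_rate = PRICING[model]
    return (input_tokens / 1_000_000) * input_rate \
         + (output_tokens / 1_000_000) * output_rate

# Example: a 10,000-token prompt that yields a 2,000-token reply.
print(f"${estimate_cost('claude-3.7-sonnet', 10_000, 2_000):.2f}")  # $0.06
```

The same request costs five times as much on Claude 3 Opus ($0.30), which illustrates why the "largest model" isn't always the right default.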
Anthropic offers prompt caching and batching to yield additional runtime savings.
Prompt caching lets developers store specific "prompt contexts" that can be reused across API calls to a model, while batching processes asynchronous groups of low-priority (and therefore cheaper) model inference requests.
Claude plans and apps
For individual users and companies simply looking to interact with the Claude models via apps for the web, Android, and iOS, Anthropic offers a free Claude plan with rate limits and other usage restrictions.
Upgrading to one of the company's subscriptions removes those limits and unlocks new functionality. The current plans are:
Claude Pro, which costs $20 per month, comes with 5x higher rate limits, priority access, and previews of upcoming features.
The business-focused Team plan, which costs $30 per user per month, adds a dashboard to control billing and user management, plus integrations with data repositories such as codebases and customer relationship management platforms (e.g., Salesforce). A toggle enables or disables citations to verify AI-generated claims. (Like all models, Claude hallucinates from time to time.)
Both Pro and Team subscribers get Projects, a feature that grounds Claude's outputs in knowledge bases, which can be style guides, interview transcripts, and so on. These customers, along with free-tier users, can also tap into Artifacts, a workspace where users can edit and add to content like code, apps, website designs, and other documents generated by Claude.
For customers who need even more, there's Claude Enterprise, which allows companies to upload proprietary data into Claude so that Claude can analyze it and answer questions about it. Claude Enterprise also comes with a larger context window (500,000 tokens), GitHub integration for engineering teams to sync their repositories with Claude, and Projects and Artifacts.
A word of caution
As is the case with all generative AI models, there are risks associated with using Claude.
The models occasionally make mistakes when summarizing or answering questions because of their tendency to hallucinate. They're also trained on public web data, some of which may be copyrighted or under a restrictive license. Anthropic and many other AI vendors argue that the fair-use doctrine shields them from copyright claims. But that hasn't stopped data owners from filing lawsuits.
Anthropic offers policies to protect certain customers from court battles arising from fair-use challenges. However, these don't resolve the ethical quandary of using models trained on data without permission.
This article was originally published on October 19, 2024. It was updated on February 25, 2025 to include new details about Claude 3.7 Sonnet and Claude 3.5 Haiku.