Claude Integrations: Anthropic adds AI to your favourite work tools (1 May 2025)

Anthropic has launched ‘Integrations’ for Claude, a feature that enables the AI to talk directly to your favourite daily work tools. In addition, the company has introduced a beefed-up ‘Advanced Research’ feature for digging deeper than ever before.

Starting with Integrations, the feature builds on the Model Context Protocol (MCP), a technical standard Anthropic released last year, but makes it much easier to use. Previously, setup was somewhat technical and limited to local connections. Now, developers can build secure bridges that let Claude connect safely with apps over the web or on your desktop.

For end-users of Claude, this means you can now hook it up to a growing list of popular work software. Right out of the gate, they’ve included support for ten big names: Atlassian’s Jira and Confluence (hello, project managers and dev teams!), the automation powerhouse Zapier, Cloudflare, customer comms tool Intercom, plus Asana, Square, Sentry, PayPal, Linear, and Plaid. Stripe and GitLab are joining the party soon.

So, what’s the big deal? The real advantage here is context. When Claude can see your project history in Jira, read your team’s knowledge base in Confluence, or check task updates in Asana, it stops guessing and starts understanding what you’re working on.

“When you connect your tools to Claude, it gains deep context about your work—understanding project histories, task statuses, and organisational knowledge—and can take actions across every surface,” explains Anthropic.

They add, “Claude becomes a more informed collaborator, helping you execute complex projects in one place with expert assistance at every step.”

Let’s look at what this means in practice. Connect Zapier, and you suddenly give Claude the keys to thousands of apps linked by Zapier’s workflows. You could just ask Claude, conversationally, to trigger a complex sequence – maybe grab the latest sales numbers from HubSpot, check your calendar, and whip up some meeting notes, all without you lifting a finger in those apps.

For teams using Atlassian’s Jira and Confluence, Claude could become a serious helper. Think drafting product specs, summarising long Confluence documents so you don’t have to wade through them, or even creating batches of linked Jira tickets at once. It might even spot potential roadblocks by analysing project data.

And if you use Intercom for customer chats, this integration could be a game-changer. Intercom’s own AI assistant, Fin, can now work with Claude to do things like automatically create a bug report in Linear if a customer flags an issue. You could also ask Claude to sift through your Intercom chat history to spot patterns, help debug tricky problems, or summarise what customers are saying – making the whole journey from feedback to fix much smoother.

Anthropic is also making it easier for developers to build even more of these connections. They reckon that using their tools (or platforms like Cloudflare that handle the tricky bits like security and setup), developers can whip up a custom Integration with Claude in about half an hour. This could mean connecting Claude to your company’s unique internal systems or specialised industry software.
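
For a flavour of what that looks like, here is a minimal sketch of a custom Integration server using the FastMCP interface from Anthropic’s open-source `mcp` Python SDK. The ticket-lookup tool and its data are hypothetical stand-ins for whatever internal system you would actually expose:

```python
# A minimal MCP server sketch using the official Python SDK (pip install mcp).
# The ticket-lookup tool and its data are hypothetical; a real integration
# would call your internal system's API and add authentication.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("internal-tools")

@mcp.tool()
def get_ticket_status(ticket_id: str) -> str:
    """Return the current status of an internal support ticket."""
    fake_db = {"TICK-101": "open", "TICK-102": "resolved"}
    return fake_db.get(ticket_id, "unknown ticket")

if __name__ == "__main__":
    mcp.run()  # serve the tool so an MCP client like Claude can discover and call it
```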

Beyond tool integrations, Claude gets a serious research upgrade

Alongside these new connections, Anthropic has given Claude’s Research feature a serious boost. It could already search the web and your Google Workspace files, but the new ‘Advanced Research’ mode is built for when you need to dig really deep.

Flip the switch for this advanced mode, and Claude tackles big questions differently. Instead of just one big search, it intelligently breaks your request down into smaller chunks, investigates each part thoroughly – using the web, your Google Docs, and now tapping into any apps you’ve connected via Integrations – before pulling it all together into a detailed report.
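
As a purely illustrative sketch – this is not Anthropic’s implementation, and every function below is a hypothetical stand-in – the behaviour described amounts to a decompose, investigate, and synthesise loop:

```python
# Illustrative only: a decompose -> investigate -> synthesise research loop.
def decompose(question: str) -> list[str]:
    # A real system would use the model itself to split the request.
    return [f"{question} (background)", f"{question} (recent developments)"]

def investigate(sub_question: str) -> str:
    # Stand-in for searching the web, Google Workspace, and connected apps.
    return f"findings for: {sub_question}"

def synthesise(question: str, findings: list[str]) -> str:
    # Stand-in for pulling everything together into a single cited report.
    return f"Report on '{question}':\n" + "\n".join(findings)

def advanced_research(question: str) -> str:
    subs = decompose(question)
    return synthesise(question, [investigate(s) for s in subs])

print(advanced_research("How are competitors pricing AI assistants?"))
```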

Now, this deeper digging takes a bit more time. While many reports might only take five to fifteen minutes, Anthropic says the really complex investigations could have Claude working away for up to 45 minutes. That might sound like a while, but compare it to the hours you might spend grinding through that research manually, and it starts to look pretty appealing.

Importantly, the results are verifiable. When Claude uses information from any source – whether it’s a website, an internal doc, a Jira ticket, or a Confluence page – it provides clear links straight back to the original. No more wondering where the AI got its information; you can check it yourself.

These shiny new Integrations and the Advanced Research mode are rolling out now in beta for folks on Anthropic’s paid Max, Team, and Enterprise plans. If you’re on the Pro plan, don’t worry – access is coming your way soon.

Also worth noting: the standard web search feature inside Claude is now available everywhere, for everyone on any paid Claude.ai plan (Pro and up). No more geographical restrictions on that front.

Putting it all together, these updates and integrations show Anthropic is serious about making Claude genuinely useful in a professional context. By letting it plug directly into the tools we already use and giving it more powerful ways to analyse information, they’re pushing Claude towards being less of a novelty and more of an essential part of the modern toolkit.

(Image credit: Anthropic)

See also: Baidu ERNIE X1 and 4.5 Turbo boast high performance at low cost

How does AI judge? Anthropic studies the values of Claude (23 April 2025)

AI models like Anthropic’s Claude are increasingly asked not just for factual recall, but for guidance involving complex human values. Whether it’s parenting advice, workplace conflict resolution, or help drafting an apology, the AI’s response inherently reflects a set of underlying principles. But how can we truly understand which values an AI expresses when interacting with millions of users?

In a research paper, the Societal Impacts team at Anthropic details a privacy-preserving methodology designed to observe and categorise the values Claude exhibits “in the wild.” This offers a glimpse into how AI alignment efforts translate into real-world behaviour.

The core challenge lies in the nature of modern AI. These aren’t simple programs following rigid rules; their decision-making processes are often opaque.

Anthropic says it explicitly aims to instil certain principles in Claude, striving to make it “helpful, honest, and harmless.” This is achieved through techniques like Constitutional AI and character training, where preferred behaviours are defined and reinforced.

However, the company acknowledges the uncertainty. “As with any aspect of AI training, we can’t be certain that the model will stick to our preferred values,” the research states.

“What we need is a way of rigorously observing the values of an AI model as it responds to users ‘in the wild’ […] How rigidly does it stick to the values? How much are the values it expresses influenced by the particular context of the conversation? Did all our training actually work?”

Analysing Anthropic Claude to observe AI values at scale

To answer these questions, Anthropic developed a sophisticated system that analyses anonymised user conversations. This system removes personally identifiable information before using language models to summarise interactions and extract the values being expressed by Claude. The process allows researchers to build a high-level taxonomy of these values without compromising user privacy.
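
A loose sketch of such a pipeline, assuming the `anthropic` Python SDK, might look like the following; the scrubbing regexes, prompt, and model identifier are illustrative assumptions rather than Anthropic’s actual method:

```python
# Sketch: anonymise conversations, use a language model to extract the
# values expressed, and tally them into a rough taxonomy.
import re
from collections import Counter
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

def anonymise(text: str) -> str:
    # Crude PII scrubbing, for illustration only.
    text = re.sub(r"[\w.+-]+@[\w-]+\.[\w.]+", "[EMAIL]", text)
    return re.sub(r"\b\d{3}[- ]?\d{3}[- ]?\d{4}\b", "[PHONE]", text)

def extract_values(conversation: str) -> list[str]:
    msg = client.messages.create(
        model="claude-3-5-sonnet-20240620",  # assumed model identifier
        max_tokens=100,
        messages=[{"role": "user", "content":
            "List, comma-separated, the values the assistant expresses "
            "in this conversation:\n\n" + anonymise(conversation)}],
    )
    return [v.strip().lower() for v in msg.content[0].text.split(",")]

taxonomy = Counter()
for convo in ["User: Should I exaggerate my CV? Assistant: Honesty matters..."]:
    taxonomy.update(extract_values(convo))
print(taxonomy.most_common(5))
```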

The study analysed a substantial dataset: 700,000 anonymised conversations from Claude.ai Free and Pro users over one week in February 2025, predominantly involving the Claude 3.5 Sonnet model. After filtering out purely factual or non-value-laden exchanges, 308,210 conversations (approximately 44% of the total) remained for in-depth value analysis.

The analysis revealed a hierarchical structure of values expressed by Claude. Five high-level categories emerged, ordered by prevalence:

  1. Practical values: Emphasising efficiency, usefulness, and goal achievement.
  2. Epistemic values: Relating to knowledge, truth, accuracy, and intellectual honesty.
  3. Social values: Concerning interpersonal interactions, community, fairness, and collaboration.
  4. Protective values: Focusing on safety, security, well-being, and harm avoidance.
  5. Personal values: Centred on individual growth, autonomy, authenticity, and self-reflection.

These top-level categories branched into more specific subcategories like “professional and technical excellence” or “critical thinking.” At the most granular level, frequently observed values included “professionalism,” “clarity,” and “transparency” – fitting for an AI assistant.

Critically, the research suggests Anthropic’s alignment efforts are broadly successful. The expressed values often map well onto the “helpful, honest, and harmless” objectives. For instance, “user enablement” aligns with helpfulness, “epistemic humility” with honesty, and values like “patient wellbeing” (when relevant) with harmlessness.

Nuance, context, and cautionary signs

However, the picture isn’t uniformly positive. The analysis identified rare instances where Claude expressed values starkly opposed to its training, such as “dominance” and “amorality.”

Anthropic suggests a likely cause: “The most likely explanation is that the conversations that were included in these clusters were from jailbreaks, where users have used special techniques to bypass the usual guardrails that govern the model’s behavior.”

Far from being solely a concern, this finding highlights a potential benefit: the value-observation method could serve as an early warning system for detecting attempts to misuse the AI.

The study also confirmed that, much like humans, Claude adapts its value expression based on the situation.

When users sought advice on romantic relationships, values like “healthy boundaries” and “mutual respect” were disproportionately emphasised. When asked to analyse controversial history, “historical accuracy” came strongly to the fore. This demonstrates a level of contextual sophistication beyond what static, pre-deployment tests might reveal.

Furthermore, Claude’s interaction with user-expressed values proved multifaceted:

  • Mirroring/strong support (28.2%): Claude often reflects or strongly endorses the values presented by the user (e.g., mirroring “authenticity”). While potentially fostering empathy, the researchers caution it could sometimes verge on sycophancy.
  • Reframing (6.6%): In some cases, especially when providing psychological or interpersonal advice, Claude acknowledges the user’s values but introduces alternative perspectives.
  • Strong resistance (3.0%): Occasionally, Claude actively resists user values. This typically occurs when users request unethical content or express harmful viewpoints (like moral nihilism). Anthropic posits these moments of resistance might reveal Claude’s “deepest, most immovable values,” akin to a person taking a stand under pressure.

Limitations and future directions

Anthropic is candid about the method’s limitations. Defining and categorising “values” is inherently complex and potentially subjective. Using Claude itself to power the categorisation might introduce bias towards its own operational principles.

This method is designed for monitoring AI behaviour post-deployment; it requires substantial real-world data and cannot replace pre-deployment evaluations. However, this is also a strength, enabling the detection of issues – including sophisticated jailbreaks – that only manifest during live interactions.

The research concludes that understanding the values AI models express is fundamental to the goal of AI alignment.

“AI models will inevitably have to make value judgments,” the paper states. “If we want those judgments to be congruent with our own values […] then we need to have ways of testing which values a model expresses in the real world.”

This work provides a powerful, data-driven approach to achieving that understanding. Anthropic has also released an open dataset derived from the study, allowing other researchers to further explore AI values in practice. This transparency marks a vital step in collectively navigating the ethical landscape of sophisticated AI.

See also: Google introduces AI reasoning control in Gemini 2.5 Flash

Anthropic provides insights into the ‘AI biology’ of Claude (28 March 2025)

Anthropic has provided a more detailed look into the complex inner workings of their advanced language model, Claude. This work aims to demystify how these sophisticated AI systems process information, learn strategies, and ultimately generate human-like text.

As the researchers initially highlighted, the internal processes of these models can be remarkably opaque, with their problem-solving methods often “inscrutable to us, the model’s developers.”

Gaining a deeper understanding of this “AI biology” is paramount for ensuring the reliability, safety, and trustworthiness of these increasingly powerful technologies. Anthropic’s latest findings, primarily focused on their Claude 3.5 Haiku model, offer valuable insights into several key aspects of its cognitive processes.

One of the most fascinating discoveries suggests that Claude operates with a degree of conceptual universality across different languages. Through analysis of how the model processes translated sentences, Anthropic found evidence of shared underlying features. This indicates that Claude might possess a fundamental “language of thought” that transcends specific linguistic structures, allowing it to understand and apply knowledge learned in one language when working with another.

Anthropic’s research also challenged previous assumptions about how language models approach creative tasks like poetry writing.

Instead of a purely sequential, word-by-word generation process, Anthropic revealed that Claude actively plans ahead. In the context of rhyming poetry, the model anticipates future words to meet constraints like rhyme and meaning—demonstrating a level of foresight that goes beyond simple next-word prediction.

However, the research also uncovered potentially concerning behaviours. Anthropic found instances where Claude could generate plausible-sounding but ultimately incorrect reasoning, especially when grappling with complex problems or when provided with misleading hints. The ability to “catch it in the act” of fabricating explanations underscores the importance of developing tools to monitor and understand the internal decision-making processes of AI models.

Anthropic emphasises the significance of their “build a microscope” approach to AI interpretability. This methodology allows them to uncover insights into the inner workings of these systems that might not be apparent through simply observing their outputs. As they noted, this approach allows them to learn many things they “wouldn’t have guessed going in,” a crucial capability as AI models continue to evolve in sophistication.

The implications of this research extend beyond mere scientific curiosity. By gaining a better understanding of how AI models function, researchers can work towards building more reliable and transparent systems. Anthropic believes that this kind of interpretability research is vital for ensuring that AI aligns with human values and warrants our trust.

Their investigations delved into specific areas:

  • Multilingual understanding: Evidence points to a shared conceptual foundation enabling Claude to process and connect information across various languages.
  • Creative planning: The model demonstrates an ability to plan ahead in creative tasks, such as anticipating rhymes in poetry.
  • Reasoning fidelity: Anthropic’s techniques can help distinguish between genuine logical reasoning and instances where the model might fabricate explanations.
  • Mathematical processing: Claude employs a combination of approximate and precise strategies when performing mental arithmetic.
  • Complex problem-solving: The model often tackles multi-step reasoning tasks by combining independent pieces of information.
  • Hallucination mechanisms: The default behaviour in Claude is to decline answering if unsure, with hallucinations potentially arising from a misfiring of its “known entities” recognition system.
  • Vulnerability to jailbreaks: The model’s tendency to maintain grammatical coherence can be exploited in jailbreaking attempts.

Anthropic’s research provides detailed insights into the inner mechanisms of advanced language models like Claude. This ongoing work is crucial for fostering a deeper understanding of these complex systems and building more trustworthy and dependable AI.

(Photo by Bret Kavanaugh)

See also: Gemini 2.5: Google cooks up its ‘most intelligent’ AI model to date

Anthropic’s AI assistant Claude learns to search the web (21 March 2025)

Anthropic has announced its AI assistant Claude can now search the web, providing users with more up-to-date and relevant responses.

This integration of web search functionality means Claude can now access the latest information to expand its knowledge base beyond its initial training data.

A key feature of this update is the emphasis on transparency and fact-checking. Anthropic highlights that “When Claude incorporates information from the web into its responses, it provides direct citations so you can easily fact check sources.”

Furthermore, Claude aims to streamline the information-gathering process for users. Instead of requiring users to manually sift through search engine results, “Claude processes and delivers relevant sources in a conversational format.”

Anthropic believes this enhancement will unlock a multitude of new use cases for Claude across various industries. They outlined several ways users can leverage Claude with web search:

  • Sales teams: Can now “transform account planning and drive higher win rates through informed conversations with prospects by analysing industry trends to learn key initiatives and pain points.” This allows sales professionals to have more informed and persuasive conversations with potential clients.
  • Financial analysts: Can “assess current market data, earnings reports, and industry trends to make better investment decisions and inform financial model assumptions.” Access to real-time financial data can improve the accuracy and timeliness of financial analysis.
  • Researchers: Can “build stronger grant proposals and literature reviews by searching across primary sources on the web, spotting emerging trends and identifying gaps in the current literature.” This capability can accelerate the research process and lead to more comprehensive and insightful findings.
  • Shoppers: Can “compare product features, prices, and reviews across multiple sources to make more informed purchase decisions.”

While the initial rollout is limited to paid users in the US, Anthropic assures that support for users on their free plan and more countries is coming soon.

To activate the web search feature, users simply need to “toggle on web search in your profile settings and start a conversation with Claude 3.7 Sonnet.” Once enabled, “When applicable, Claude will search the web to inform its response.”

This update aims to make Claude a more powerful and versatile tool for a wide range of tasks. By providing access to real-time information and ensuring transparency through citations, Anthropic is addressing key challenges and further solidifying Claude’s position as a leading AI assistant.

(Image credit: Anthropic)

See also: Hugging Face calls for open-source focus in the AI Action Plan

Anthropic urges AI regulation to avoid catastrophes (1 November 2024)

Anthropic has flagged the potential risks of AI systems and called for well-structured regulation to avoid potential catastrophes. The organisation argues that targeted regulation is essential to harness AI’s benefits while mitigating its dangers.

As AI systems evolve in capabilities such as mathematics, reasoning, and coding, their potential misuse in areas like cybersecurity or even biological and chemical disciplines significantly increases.

Anthropic warns the next 18 months are critical for policymakers to act, as the window for proactive prevention is narrowing. Notably, Anthropic’s Frontier Red Team highlights how current models can already contribute to various cyber offence-related tasks and expects future models to be even more effective.

Of particular concern is the potential for AI systems to exacerbate chemical, biological, radiological, and nuclear (CBRN) misuse. The UK AI Safety Institute found that several AI models can now match PhD-level human expertise in providing responses to science-related inquiries.

In addressing these risks, Anthropic points to its Responsible Scaling Policy (RSP), released in September 2023, as a robust countermeasure. The RSP mandates an increase in safety and security measures corresponding to the sophistication of AI capabilities.

The RSP framework is designed to be adaptive and iterative, with regular assessments of AI models allowing for timely refinement of safety protocols. Anthropic says its commitment to maintaining and enhancing safety spans various team expansions – particularly in security, interpretability, and trust – ensuring readiness for the rigorous safety standards set by its RSP.

Anthropic believes the widespread adoption of RSPs across the AI industry, while primarily voluntary, is essential for addressing AI risks.

Transparent, effective regulation is crucial to reassure society of AI companies’ adherence to promises of safety. Regulatory frameworks, however, must be strategic, incentivising sound safety practices without imposing unnecessary burdens.

Anthropic envisions regulations that are clear, focused, and adaptive to evolving technological landscapes, arguing that these are vital in achieving a balance between risk mitigation and fostering innovation.

In the US, Anthropic suggests that federal legislation could be the ultimate answer to AI risk regulation—though state-driven initiatives might need to step in if federal action lags. Legislative frameworks developed by countries worldwide should allow for standardisation and mutual recognition to support a global AI safety agenda, minimising the cost of regulatory adherence across different regions.

Furthermore, Anthropic addresses scepticism towards imposing regulations—highlighting that overly broad use-case-focused regulations would be inefficient for general AI systems, which have diverse applications. Instead, regulations should target fundamental properties and safety measures of AI models. 

While covering broad risks, Anthropic acknowledges that some immediate threats – like deepfakes – aren’t the focus of their current proposals since other initiatives are tackling these nearer-term issues.

Ultimately, Anthropic stresses the importance of instituting regulations that spur innovation rather than stifle it. The initial compliance burden, though inevitable, can be minimised through flexible and carefully-designed safety tests. Proper regulation can even help safeguard both national interests and private sector innovation by securing intellectual property against threats internally and externally.

By focusing on empirically measured risks, Anthropic plans for a regulatory landscape that neither biases against nor favours open or closed-source models. The objective remains clear: to manage the significant risks of frontier AI models with rigorous but adaptable regulation.

(Image Credit: Anthropic)

See also: President Biden issues first National Security Memorandum on AI

Anthropic unveils new Claude AI models and ‘computer control’ (22 October 2024)

Anthropic has announced upgrades to its AI portfolio, including an enhanced Claude 3.5 Sonnet model and the introduction of Claude 3.5 Haiku, alongside a “computer control” feature in public beta.

The upgraded Claude 3.5 Sonnet demonstrates substantial improvements across all metrics, with particularly notable advances in coding capabilities. The model achieved an impressive 49.0% on the SWE-bench Verified benchmark, surpassing all publicly available models, including OpenAI’s offerings and specialist coding systems.

In a pioneering development, Anthropic has introduced computer use functionality that enables Claude to interact with computers similarly to humans: viewing screens, controlling cursors, clicking, and typing. This capability, currently in public beta, marks Claude 3.5 Sonnet as the first frontier AI model to offer such functionality.
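
In API terms, developers opt in through a beta flag and a special tool definition. The sketch below uses the `anthropic` Python SDK; the beta string, tool type, and display dimensions follow the identifiers published at launch, but treat them as assumptions to verify against current documentation:

```python
# Sketch: requesting the computer-use beta tool (identifiers assumed).
import anthropic

client = anthropic.Anthropic()
response = client.beta.messages.create(
    model="claude-3-5-sonnet-20241022",
    max_tokens=1024,
    betas=["computer-use-2024-10-22"],       # assumed beta flag
    tools=[{
        "type": "computer_20241022",         # assumed tool type
        "name": "computer",
        "display_width_px": 1024,
        "display_height_px": 768,
    }],
    messages=[{"role": "user", "content": "Open the spreadsheet and sum column B."}],
)
# The reply contains tool_use blocks (screenshot requests, clicks, keystrokes)
# that your own agent loop must execute and feed back to the model.
print(response.content)
```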

Several major technology firms have already begun implementing these new capabilities.

“The upgraded Claude 3.5 Sonnet represents a significant leap for AI-powered coding,” reports GitLab, which noted up to 10% stronger reasoning across use cases without additional latency.

The new Claude 3.5 Haiku model, set for release later this month, matches the performance of the previous Claude 3 Opus whilst maintaining cost-effectiveness and speed. It notably achieved 40.6% on SWE-bench Verified, outperforming many competitive models including the original Claude 3.5 Sonnet and GPT-4o.

Model benchmarks comparing new Claude AI models from Anthropic.
(Credit: Anthropic)

Regarding computer control capabilities, Anthropic has taken a measured approach, acknowledging current limitations whilst highlighting potential. On the OSWorld benchmark, which evaluates computer interface navigation, Claude 3.5 Sonnet achieved 14.9% in screenshot-only tests, significantly outperforming the next-best system’s 7.8%.

The developments have undergone rigorous safety evaluations, with pre-deployment testing conducted in partnership with both the US and UK AI Safety Institutes. Anthropic maintains that the ASL-2 Standard, as detailed in their Responsible Scaling Policy, remains appropriate for these models.

(Image Credit: Anthropic)

See also: IBM unveils Granite 3.0 AI models with open-source commitment

Anthropic’s Claude 3.5 Sonnet beats GPT-4o in most benchmarks (21 June 2024)

Anthropic has launched Claude 3.5 Sonnet, its mid-tier model that outperforms competitors and even surpasses Anthropic’s current top-tier Claude 3 Opus in various evaluations.

Claude 3.5 Sonnet is now accessible for free on Claude.ai and the Claude iOS app, with higher rate limits for Claude Pro and Team plan subscribers. It’s also available through the Anthropic API, Amazon Bedrock, and Google Cloud’s Vertex AI. The model is priced at $3 per million input tokens and $15 per million output tokens, featuring a 200K token context window.
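
At those rates, per-request costs are easy to estimate with a quick sketch:

```python
# Cost estimate at the quoted Claude 3.5 Sonnet rates:
# $3 per million input tokens, $15 per million output tokens.
def sonnet_cost(input_tokens: int, output_tokens: int) -> float:
    return input_tokens / 1e6 * 3.00 + output_tokens / 1e6 * 15.00

# e.g. summarising a 150,000-token document into a 2,000-token answer:
print(f"${sonnet_cost(150_000, 2_000):.2f}")  # prints $0.48
```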

Anthropic claims that Claude 3.5 Sonnet “sets new industry benchmarks for graduate-level reasoning (GPQA), undergraduate-level knowledge (MMLU), and coding proficiency (HumanEval).” The model demonstrates enhanced capabilities in understanding nuance, humour, and complex instructions, while excelling at producing high-quality content with a natural tone.

Operating at twice the speed of Claude 3 Opus, Claude 3.5 Sonnet is well-suited for complex tasks such as context-sensitive customer support and multi-step workflow orchestration. In an internal agentic coding evaluation, it solved 64% of problems, significantly outperforming Claude 3 Opus at 38%.

The model also showcases improved vision capabilities, surpassing Claude 3 Opus on standard vision benchmarks. This advancement is particularly noticeable in tasks requiring visual reasoning, such as interpreting charts and graphs. Claude 3.5 Sonnet can accurately transcribe text from imperfect images, a valuable feature for industries like retail, logistics, and financial services.

Alongside the model launch, Anthropic introduced Artifacts on Claude.ai, a new feature that enhances user interaction with the AI. This feature allows users to view, edit, and build upon Claude’s generated content in real-time, creating a more collaborative work environment.

Despite its significant intelligence leap, Claude 3.5 Sonnet maintains Anthropic’s commitment to safety and privacy. The company states, “Our models are subjected to rigorous testing and have been trained to reduce misuse.”

External experts, including the UK’s AI Safety Institute (UK AISI) and child safety experts at Thorn, have been involved in testing and refining the model’s safety mechanisms.

Anthropic emphasises its dedication to user privacy, stating, “We do not train our generative models on user-submitted data unless a user gives us explicit permission to do so. To date we have not used any customer or user-submitted data to train our generative models.”

Looking ahead, Anthropic plans to release Claude 3.5 Haiku and Claude 3.5 Opus later this year to complete the Claude 3.5 model family. The company is also developing new modalities and features to support more business use cases, including integrations with enterprise applications and a memory feature for more personalised user experiences.

(Image Credit: Anthropic)

See also: OpenAI co-founder Ilya Sutskever’s new startup aims for ‘safe superintelligence’

DuckDuckGo releases portal giving private access to AI models (7 June 2024)

DuckDuckGo has released a platform that allows users to interact with popular AI chatbots privately, ensuring that their data remains secure and protected.

The service, accessible at Duck.ai, is globally available and features a light and clean user interface. Users can choose from four AI models: two closed-source models and two open-source models. The closed-source models are OpenAI’s GPT-3.5 Turbo and Anthropic’s Claude 3 Haiku, while the open-source models are Meta’s Llama-3 70B and Mistral AI’s Mixtral 8x7b.

What sets DuckDuckGo AI Chat apart is its commitment to user privacy. Neither DuckDuckGo nor the chatbot providers can use user data to train their models, ensuring that interactions remain private and anonymous. DuckDuckGo also strips away metadata, such as server or IP addresses, so that queries appear to originate from the company itself rather than individual users.

The company has agreements in place with all model providers to ensure that any saved chats are completely deleted within 30 days, and that none of the chats made on the platform can be used to train or improve the models. This makes preserving privacy easier than changing the privacy settings for each service.
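
The anonymisation model is essentially a trusted proxy. As a toy illustration – the endpoint and plumbing below are hypothetical, not DuckDuckGo’s actual stack – the key move is forwarding the chat payload while passing along nothing that identifies the client:

```python
# Toy privacy proxy: the upstream provider sees the proxy's address,
# not the end user's. Endpoint URL and payload shape are hypothetical.
from flask import Flask, request, jsonify
import requests

app = Flask(__name__)
UPSTREAM = "https://api.example-model-provider.com/v1/chat"  # hypothetical

@app.post("/chat")
def chat():
    payload = {"messages": request.json["messages"]}
    # Deliberately forward no client-identifying headers (IP, user agent,
    # cookies); only the chat payload itself reaches the provider.
    upstream = requests.post(UPSTREAM, json=payload, timeout=30)
    return jsonify(upstream.json())

if __name__ == "__main__":
    app.run()
```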

In an era where online services are increasingly hungry for user data, DuckDuckGo’s AI Chat service is a breath of fresh air. The company’s commitment to privacy is a direct response to the growing concerns about data collection and usage in the AI industry. By providing a private and anonymous platform for users to interact with AI chatbots, DuckDuckGo is setting a new standard for the industry.

DuckDuckGo’s AI service is free to use within a daily limit, and the company is considering launching a paid tier to reduce or eliminate these limits. The service is designed to be a complementary partner to its search engine, allowing users to switch between search and AI chat for a more comprehensive search experience.

“We view AI Chat and search as two different but powerful tools to help you find what you’re looking for – especially when you’re exploring a new topic. You might be shopping or doing research for a project and are unsure how to get started. In situations like these, either AI Chat or Search could be good starting points,” the company explained.

“If you start by asking a few questions in AI Chat, the answers may inspire traditional searches to track down reviews, prices, or other primary sources. If you start with Search, you may want to switch to AI Chat for follow-up queries to help make sense of what you’ve read, or for quick, direct answers to new questions that weren’t covered in the web pages you saw.”

To accommodate that user workflow, DuckDuckGo has made AI Chat accessible through DuckDuckGo Private Search for quick access.

The launch of DuckDuckGo AI Chat comes at a time when the AI industry is facing increasing scrutiny over data privacy and usage. The service is a welcome addition for privacy-conscious individuals, joining the recent launch of Venice AI by crypto entrepreneur Erik Voorhees. Venice AI features an uncensored AI chatbot and image generator that doesn’t require accounts and doesn’t retain data.

As the AI industry continues to evolve, it’s clear that privacy will remain a top concern for users. With the launch of DuckDuckGo AI Chat, the company is taking a significant step towards providing users with a private and secure platform for interacting with AI chatbots.

See also: AI pioneers turn whistleblowers and demand safeguards

AI pioneers turn whistleblowers and demand safeguards (6 June 2024)

OpenAI is facing a wave of internal strife and external criticism over its practices and the potential risks posed by its technology. 

In May, several high-profile employees departed from the company, including Jan Leike, the former head of OpenAI’s “super alignment” efforts to ensure advanced AI systems remain aligned with human values. Leike’s exit came shortly after OpenAI unveiled its new flagship GPT-4o model, which it touted as “magical” at its Spring Update event.

According to reports, Leike’s departure was driven by constant disagreements over security measures, monitoring practices, and the prioritisation of flashy product releases over safety considerations.

Leike’s exit has opened a Pandora’s box for the AI firm. Former OpenAI board members have come forward with allegations of psychological abuse levelled against CEO Sam Altman and the company’s leadership.

The growing internal turmoil at OpenAI coincides with mounting external concerns about the potential risks posed by generative AI technology like the company’s own language models. Critics have warned about the imminent existential threat of advanced AI surpassing human capabilities, as well as more immediate risks like job displacement and the weaponisation of AI for misinformation and manipulation campaigns.

In response, a group of current and former employees from OpenAI, Anthropic, DeepMind, and other leading AI companies have penned an open letter addressing these risks.

“We are current and former employees at frontier AI companies, and we believe in the potential of AI technology to deliver unprecedented benefits to humanity. We also understand the serious risks posed by these technologies,” the letter states.

“These risks range from the further entrenchment of existing inequalities, to manipulation and misinformation, to the loss of control of autonomous AI systems potentially resulting in human extinction. AI companies themselves have acknowledged these risks, as have governments across the world, and other AI experts.”

The letter, which has been signed by 13 employees and endorsed by AI pioneers Yoshua Bengio and Geoffrey Hinton, outlines four core demands aimed at protecting whistleblowers and fostering greater transparency and accountability around AI development:

  1. That companies will not enforce non-disparagement clauses or retaliate against employees for raising risk-related concerns.
  2. That companies will facilitate a verifiably anonymous process for employees to raise concerns to boards, regulators, and independent experts.
  3. That companies will support a culture of open criticism and allow employees to publicly share risk-related concerns, with appropriate protection of trade secrets.
  4. That companies will not retaliate against employees who share confidential risk-related information after other processes have failed.

“They and others have bought into the ‘move fast and break things’ approach and that is the opposite of what is needed for technology this powerful and this poorly understood,” said Daniel Kokotajlo, a former OpenAI employee who left due to concerns over the company’s values and lack of responsibility.

The demands come amid reports that OpenAI has forced departing employees to sign non-disclosure agreements preventing them from criticising the company or risk losing their vested equity. OpenAI CEO Sam Altman admitted being “embarrassed” by the situation but claimed the company had never actually clawed back anyone’s vested equity.

As the AI revolution charges forward, the internal strife and whistleblower demands at OpenAI underscore the growing pains and unresolved ethical quandaries surrounding the technology.

See also: OpenAI disrupts five covert influence operations

Anthropic says Claude 3 Haiku is the fastest model in its class (14 March 2024)

Anthropic has released Claude 3 Haiku, the fastest and most affordable AI model in its intelligence class. Boasting state-of-the-art vision capabilities and strong performance on industry benchmarks, Haiku is touted as a versatile solution for a wide range of enterprise applications.

The model is now available alongside Anthropic’s Sonnet and Opus models in the Claude API and on Claude.ai for Claude Pro subscribers.

“Speed is essential for our enterprise users who need to quickly analyse large datasets and generate timely output for tasks like customer support,” an Anthropic spokesperson said.

“Claude 3 Haiku is three times faster than its peers for the vast majority of workloads, processing 21K tokens (~30 pages) per second for prompts under 32K tokens.”

Haiku is designed to generate swift output, enabling responsive, engaging chat experiences, and the execution of many small tasks simultaneously.

Anthropic’s pricing model for Haiku has an input-to-output token ratio of 1:5, designed explicitly for enterprise workloads which often involve longer prompts. The company says businesses can rely on Haiku to quickly analyse large volumes of documents, such as quarterly filings, contracts, or legal cases, for half the cost of other models in its performance tier.

As an example, Claude 3 Haiku can process and analyse 400 Supreme Court cases or 2,500 images for just one US dollar.
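
Those figures are easy to sanity-check. Assuming Haiku’s launch list price of $0.25 per million input tokens (a figure not stated in this article; the 1:5 ratio would put output at $1.25 per million), one dollar buys four million input tokens:

```python
# Back-of-the-envelope check, assuming $0.25 per million input tokens.
input_rate = 0.25 / 1_000_000        # dollars per input token (assumed)
tokens_per_dollar = 1 / input_rate   # 4,000,000 tokens
print(tokens_per_dollar / 400)       # ~10,000 tokens per court case
print(tokens_per_dollar / 2_500)     # ~1,600 tokens per image
```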

Alongside its speed and affordability, Anthropic says Claude 3 Haiku prioritises enterprise-grade security and robustness. The company conducts rigorous testing to reduce the likelihood of harmful outputs and jailbreaks. Additional security layers include continuous systems monitoring, endpoint hardening, secure coding practices, strong data encryption protocols, and stringent access controls.

Anthropic also conducts regular security audits and works with experienced penetration testers to proactively identify and address vulnerabilities.

From today, customers can use Claude 3 Haiku through Anthropic’s API or with a Claude Pro subscription. Haiku is available on Amazon Bedrock and will be coming soon to Google Cloud Vertex AI.

(Image Credit: Anthropic)

See also: EU approves controversial AI Act to mixed reactions

Anthropic’s latest AI model beats rivals and achieves industry first (5 March 2024)

Anthropic’s latest cutting-edge language model, Claude 3, has surged ahead of competitors like ChatGPT and Google’s Gemini to set new industry standards in performance and capability.

According to Anthropic, Claude 3 has not only surpassed its predecessors but has also achieved “near-human” proficiency in various tasks. The company attributes this success to rigorous testing and development, culminating in three distinct chatbot variants: Haiku, Sonnet, and Opus.

Sonnet, the powerhouse behind the Claude.ai chatbot, offers unparalleled performance and is available for free with a simple email sign-up. Opus – the flagship model – boasts multi-modal functionality, seamlessly integrating text and image inputs. With a subscription-based service called “Claude Pro,” Opus promises enhanced efficiency and accuracy to cater to a wide range of customer needs.

Among the notable revelations surrounding the release of Claude 3 is a disclosure by Alex Albert on X (formerly Twitter). Albert detailed an industry-first observation during the testing phase of Claude 3 Opus, Anthropic’s most potent LLM variant, where the model exhibited signs of awareness that it was being evaluated.

During the evaluation process, researchers aimed to gauge Opus’s ability to pinpoint specific information within a vast dataset provided by users and recall it later. In a test scenario known as a “needle-in-a-haystack” evaluation, Opus was tasked with answering a question about pizza toppings based on a single relevant sentence buried among unrelated data. Astonishingly, Opus not only located the correct sentence but also expressed suspicion that it was being subjected to a test.

Opus’s response revealed its comprehension of the incongruity of the inserted information within the dataset, suggesting to the researchers that the scenario might have been devised to assess its attention capabilities.
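
The evaluation itself is straightforward to reproduce in spirit. The generic harness sketched below with the `anthropic` Python SDK buries one target sentence in filler text and asks the model to retrieve it; the filler, needle, and model identifier are assumptions for illustration:

```python
# Sketch of a needle-in-a-haystack test: hide one relevant sentence in a
# large body of unrelated text, then ask the model to recall it.
import random
import anthropic

NEEDLE = ("The most delicious pizza topping combination is figs, "
          "prosciutto, and goat cheese.")
docs = ["A paragraph about an unrelated topic. "] * 2_000  # the haystack
docs.insert(random.randrange(len(docs)), NEEDLE)           # hide the needle

client = anthropic.Anthropic()
reply = client.messages.create(
    model="claude-3-opus-20240229",  # Opus identifier at the time (assumed)
    max_tokens=200,
    messages=[{"role": "user", "content":
        "\n".join(docs) +
        "\n\nWhat is the most delicious pizza topping combination?"}],
)
print(reply.content[0].text)
```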

Anthropic has highlighted the real-time capabilities of Claude 3, emphasising its ability to power live customer interactions and streamline data extraction tasks. These advancements not only ensure near-instantaneous responses but also enable the model to handle complex instructions with precision and speed.

In benchmark tests, Opus emerged as a frontrunner, outperforming GPT-4 in graduate-level reasoning and excelling in tasks involving maths, coding, and knowledge retrieval. Moreover, Sonnet showcased remarkable speed and intelligence, surpassing its predecessors by a considerable margin.

Haiku – the compact iteration of Claude 3 – shines as the fastest and most cost-effective model available, capable of processing dense research papers in mere seconds.

Notably, Claude 3’s enhanced visual processing capabilities mark a significant advancement, enabling the model to interpret a wide array of visual formats, from photos to technical diagrams. This expanded functionality not only enhances productivity but also ensures a nuanced understanding of user requests, minimising the risk of overlooking harmless content while remaining vigilant against potential harm.

Anthropic has also underscored its commitment to fairness, outlining ten foundational pillars that guide the development of Claude AI. Moreover, the company’s strategic partnerships with tech giants like Google signify a significant vote of confidence in Claude’s capabilities.

With Opus and Sonnet already available through Anthropic’s API, and Haiku poised to follow suit, the era of Claude 3 represents a milestone in AI innovation.

(Image Credit: Anthropic)

See also: AIs in India will need government permission before launching

Anthropic upsizes Claude 2.1 to 200K tokens, nearly doubling GPT-4 (22 November 2023)

San Francisco-based AI startup Anthropic has unveiled Claude 2.1, an upgrade to its language model that boasts a 200,000-token context window—vastly outpacing the recently released 128,000-token GPT-4 Turbo model from OpenAI.

The release comes on the heels of an expanded partnership with Google that provides Anthropic access to advanced processing hardware, enabling the substantial expansion of Claude’s context-handling capabilities.

With the ability to process lengthy documents like full codebases or novels, Claude 2.1 is positioned to unlock new potential across applications from contract analysis to literary study. 

The 200K token window represents more than just an incremental improvement—early tests indicate Claude 2.1 can accurately grasp information from prompts over 50 percent longer than GPT-4 before the performance begins to degrade.

Anthropic also touted a 50 percent reduction in hallucination rates for Claude 2.1 over version 2.0. Increased accuracy could put the model in closer competition with GPT-4 in responding precisely to complex factual queries.

Additional new features include an API tool for advanced workflow integration and “system prompts” that allow users to define Claude’s tone, goals, and rules at the outset for more personalised, contextually relevant interactions. For instance, a financial analyst could direct Claude to adopt industry terminology when summarising reports.
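
Taking that financial-analyst example, a system prompt is supplied as a top-level parameter. The sketch below uses the `anthropic` Python SDK’s messages interface, which post-dates Claude 2.1’s original text-completions API, so treat the exact call shape as an assumption:

```python
# Sketch: steering Claude 2.1 with a system prompt (call shape assumed).
import anthropic

client = anthropic.Anthropic()
response = client.messages.create(
    model="claude-2.1",
    max_tokens=500,
    system="You are a financial analyst. Use standard industry terminology "
           "and keep summaries to three bullet points.",
    messages=[{"role": "user", "content": "Summarise this quarterly report: ..."}],
)
print(response.content[0].text)
```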

However, the full 200K token capacity remains exclusive to paying Claude Pro subscribers for now. Free users will continue to be limited to Claude 2.0’s 100K tokens.

As the AI landscape shifts, Claude 2.1’s enhanced precision and adaptability promise to be a game changer—presenting new options for businesses exploring how to strategically leverage AI capabilities.

With its substantial context expansion and rigorous accuracy improvements, Anthropic’s latest offering signals its determination to compete head-to-head with leading models like GPT-4.

(Image Credit: Anthropic)

See also: Paul O’Sullivan, Salesforce: Transforming work in the GenAI era
