tools Archives - AI News https://www.artificialintelligence-news.com/news/tag/tools/ Artificial Intelligence News Thu, 01 May 2025 17:02:35 +0000 en-GB hourly 1 https://wordpress.org/?v=6.8.1 https://www.artificialintelligence-news.com/wp-content/uploads/2020/09/cropped-ai-icon-32x32.png tools Archives - AI News https://www.artificialintelligence-news.com/news/tag/tools/ 32 32 Claude Integrations: Anthropic adds AI to your favourite work tools https://www.artificialintelligence-news.com/news/claude-integrations-anthropic-adds-ai-favourite-work-tools/ https://www.artificialintelligence-news.com/news/claude-integrations-anthropic-adds-ai-favourite-work-tools/#respond Thu, 01 May 2025 17:02:33 +0000 https://www.artificialintelligence-news.com/?p=106258 Anthropic just launched ‘Integrations’ for Claude that enables the AI to talk directly to your favourite daily work tools. In addition, the company has launched a beefed-up ‘Advanced Research’ feature for digging deeper than ever before. Starting with Integrations, the feature builds on a technical standard Anthropic released last year (the Model Context Protocol, or […]

The post Claude Integrations: Anthropic adds AI to your favourite work tools appeared first on AI News.

]]>
Anthropic just launched ‘Integrations’ for Claude that enables the AI to talk directly to your favourite daily work tools. In addition, the company has launched a beefed-up ‘Advanced Research’ feature for digging deeper than ever before.

Starting with Integrations, the feature builds on a technical standard Anthropic released last year (the Model Context Protocol, or MCP), but makes it much easier to use. Before, setting this up was a bit technical and local. Now, developers can build secure bridges allowing Claude to connect safely with apps over the web or on your desktop.

For end-users of Claude, this means you can now hook it up to a growing list of popular work software. Right out of the gate, they’ve included support for ten big names: Atlassian’s Jira and Confluence (hello, project managers and dev teams!), the automation powerhouse Zapier, Cloudflare, customer comms tool Intercom, plus Asana, Square, Sentry, PayPal, Linear, and Plaid. Stripe and GitLab are joining the party soon.

So, what’s the big deal? The real advantage here is context. When Claude can see your project history in Jira, read your team’s knowledge base in Confluence, or check task updates in Asana, it stops guessing and starts understanding what you’re working on.

“When you connect your tools to Claude, it gains deep context about your work—understanding project histories, task statuses, and organisational knowledge—and can take actions across every surface,” explains Anthropic.

They add, “Claude becomes a more informed collaborator, helping you execute complex projects in one place with expert assistance at every step.”

Let’s look at what this means in practice. Connect Zapier, and you suddenly give Claude the keys to thousands of apps linked by Zapier’s workflows. You could just ask Claude, conversationally, to trigger a complex sequence – maybe grab the latest sales numbers from HubSpot, check your calendar, and whip up some meeting notes, all without you lifting a finger in those apps.

For teams using Atlassian’s Jira and Confluence, Claude could become a serious helper. Think drafting product specs, summarising long Confluence documents so you don’t have to wade through them, or even creating batches of linked Jira tickets at once. It might even spot potential roadblocks by analysing project data.

And if you use Intercom for customer chats, this integration could be a game-changer. Intercom’s own AI assistant, Fin, can now work with Claude to do things like automatically create a bug report in Linear if a customer flags an issue. You could also ask Claude to sift through your Intercom chat history to spot patterns, help debug tricky problems, or summarise what customers are saying – making the whole journey from feedback to fix much smoother.

Anthropic is also making it easier for developers to build even more of these connections. They reckon that using their tools (or platforms like Cloudflare that handle the tricky bits like security and setup), developers can whip up a custom Integration with Claude in about half an hour. This could mean connecting Claude to your company’s unique internal systems or specialised industry software.

Beyond tool integrations, Claude gets a serious research upgrade

Alongside these new connections, Anthropic has given Claude’s Research feature a serious boost. It could already search the web and your Google Workspace files, but the new ‘Advanced Research’ mode is built for when you need to dig really deep.

Flip the switch for this advanced mode, and Claude tackles big questions differently. Instead of just one big search, it intelligently breaks your request down into smaller chunks, investigates each part thoroughly – using the web, your Google Docs, and now tapping into any apps you’ve connected via Integrations – before pulling it all together into a detailed report.

Now, this deeper digging takes a bit more time. While many reports might only take five to fifteen minutes, Anthropic says the really complex investigations could have Claude working away for up to 45 minutes. That might sound like a while, but compare it to the hours you might spend grinding through that research manually, and it starts to look pretty appealing.

Importantly, you can trust the results. When Claude uses information from any source – whether it’s a website, an internal doc, a Jira ticket, or a Confluence page – it gives you clear links straight back to the original. No more wondering where the AI got its information from; you can check it yourself.

These shiny new Integrations and the Advanced Research mode are rolling out now in beta for folks on Anthropic’s paid Max, Team, and Enterprise plans. If you’re on the Pro plan, don’t worry – access is coming your way soon.

Also worth noting: the standard web search feature inside Claude is now available everywhere, for everyone on any paid Claude.ai plan (Pro and up). No more geographical restrictions on that front.

Putting it all together, these updates and integrations show Anthropic is serious about making Claude genuinely useful in a professional context. By letting it plug directly into the tools we already use and giving it more powerful ways to analyse information, they’re pushing Claude towards being less of a novelty and more of an essential part of the modern toolkit.

(Image credit: Anthropic)

See also: Baidu ERNIE X1 and 4.5 Turbo boast high performance at low cost

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post Claude Integrations: Anthropic adds AI to your favourite work tools appeared first on AI News.

]]>
https://www.artificialintelligence-news.com/news/claude-integrations-anthropic-adds-ai-favourite-work-tools/feed/ 0
Meta beefs up AI security with new Llama tools  https://www.artificialintelligence-news.com/news/meta-beefs-up-ai-security-new-llama-tools/ https://www.artificialintelligence-news.com/news/meta-beefs-up-ai-security-new-llama-tools/#respond Wed, 30 Apr 2025 13:35:22 +0000 https://www.artificialintelligence-news.com/?p=106233 If you’re building with AI, or trying to defend against the less savoury side of the technology, Meta just dropped new Llama security tools. The improved security tools for the Llama AI models arrive alongside fresh resources from Meta designed to help cybersecurity teams harness AI for defence. It’s all part of their push to […]

The post Meta beefs up AI security with new Llama tools  appeared first on AI News.

]]>
If you’re building with AI, or trying to defend against the less savoury side of the technology, Meta just dropped new Llama security tools.

The improved security tools for the Llama AI models arrive alongside fresh resources from Meta designed to help cybersecurity teams harness AI for defence. It’s all part of their push to make developing and using AI a bit safer for everyone involved.

Developers working with the Llama family of models now have some upgraded kit to play with. You can grab these latest Llama Protection tools directly from Meta’s own Llama Protections page, or find them where many developers live: Hugging Face and GitHub.

First up is Llama Guard 4. Think of it as an evolution of Meta’s customisable safety filter for AI. The big news here is that it’s now multimodal so it can understand and apply safety rules not just to text, but to images as well. That’s crucial as AI applications get more visual. This new version is also being baked into Meta’s brand-new Llama API, which is currently in a limited preview.

Then there’s LlamaFirewall. This is a new piece of the puzzle from Meta, designed to act like a security control centre for AI systems. It helps manage different safety models working together and hooks into Meta’s other protection tools. Its job? To spot and block the kind of risks that keep AI developers up at night – things like clever ‘prompt injection’ attacks designed to trick the AI, potentially dodgy code generation, or risky behaviour from AI plug-ins.

Meta has also given its Llama Prompt Guard a tune-up. The main Prompt Guard 2 (86M) model is now better at sniffing out those pesky jailbreak attempts and prompt injections. More interestingly, perhaps, is the introduction of Prompt Guard 2 22M.

Prompt Guard 2 22M is a much smaller, nippier version. Meta reckons it can slash latency and compute costs by up to 75% compared to the bigger model, without sacrificing too much detection power. For anyone needing faster responses or working on tighter budgets, that’s a welcome addition.

But Meta isn’t just focusing on the AI builders; they’re also looking at the cyber defenders on the front lines of digital security. They’ve heard the calls for better AI-powered tools to help in the fight against cyberattacks, and they’re sharing some updates aimed at just that.

The CyberSec Eval 4 benchmark suite has been updated. This open-source toolkit helps organisations figure out how good AI systems actually are at security tasks. This latest version includes two new tools:

  • CyberSOC Eval: Built with the help of cybersecurity experts CrowdStrike, this framework specifically measures how well AI performs in a real Security Operation Centre (SOC) environment. It’s designed to give a clearer picture of AI’s effectiveness in threat detection and response. The benchmark itself is coming soon.
  • AutoPatchBench: This benchmark tests how good Llama and other AIs are at automatically finding and fixing security holes in code before the bad guys can exploit them.

To help get these kinds of tools into the hands of those who need them, Meta is kicking off the Llama Defenders Program. This seems to be about giving partner companies and developers special access to a mix of AI solutions – some open-source, some early-access, some perhaps proprietary – all geared towards different security challenges.

As part of this, Meta is sharing an AI security tool they use internally: the Automated Sensitive Doc Classification Tool. It automatically slaps security labels on documents inside an organisation. Why? To stop sensitive info from walking out the door, or to prevent it from being accidentally fed into an AI system (like in RAG setups) where it could be leaked.

They’re also tackling the problem of fake audio generated by AI, which is increasingly used in scams. The Llama Generated Audio Detector and Llama Audio Watermark Detector are being shared with partners to help them spot AI-generated voices in potential phishing calls or fraud attempts. Companies like ZenDesk, Bell Canada, and AT&T are already lined up to integrate these.

Finally, Meta gave a sneak peek at something potentially huge for user privacy: Private Processing. This is new tech they’re working on for WhatsApp. The idea is to let AI do helpful things like summarise your unread messages or help you draft replies, but without Meta or WhatsApp being able to read the content of those messages.

Meta is being quite open about the security side, even publishing their threat model and inviting security researchers to poke holes in the architecture before it ever goes live. It’s a sign they know they need to get the privacy aspect right.

Overall, it’s a broad set of AI security announcements from Meta. They’re clearly trying to put serious muscle behind securing the AI they build, while also giving the wider tech community better tools to build safely and defend effectively.

See also: Alarming rise in AI-powered scams: Microsoft reveals $4B in thwarted fraud

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post Meta beefs up AI security with new Llama tools  appeared first on AI News.

]]>
https://www.artificialintelligence-news.com/news/meta-beefs-up-ai-security-new-llama-tools/feed/ 0
Alibaba Cloud targets global AI growth with new models and tools https://www.artificialintelligence-news.com/news/alibaba-cloud-global-ai-growth-new-models-and-tools/ https://www.artificialintelligence-news.com/news/alibaba-cloud-global-ai-growth-new-models-and-tools/#respond Tue, 08 Apr 2025 17:56:13 +0000 https://www.artificialintelligence-news.com/?p=105235 Alibaba Cloud has expanded its AI portfolio for global customers with a raft of new models, platform enhancements, and Software-as-a-Service (SaaS) tools. The announcements, made during its Spring Launch 2025 online event, underscore the drive by Alibaba to accelerate AI innovation and adoption on a global scale. The digital technology and intelligence arm of Alibaba […]

The post Alibaba Cloud targets global AI growth with new models and tools appeared first on AI News.

]]>
Alibaba Cloud has expanded its AI portfolio for global customers with a raft of new models, platform enhancements, and Software-as-a-Service (SaaS) tools.

The announcements, made during its Spring Launch 2025 online event, underscore the drive by Alibaba to accelerate AI innovation and adoption on a global scale.

The digital technology and intelligence arm of Alibaba is focusing on meeting increasing demand for AI-driven digital transformation worldwide.

Selina Yuan, President of International Business at Alibaba Cloud Intelligence, said: “We are launching a series of Platform-as-a-Service(PaaS) and AI capability updates to meet the growing demand for digital transformation from across the globe.

“These upgrades allow us to deliver even more secure and high-performance services that empower businesses to scale and innovate in an AI-driven world.”

Alibaba expands access to foundational AI models

Central to the announcement is the broadened availability of Alibaba Cloud’s proprietary Qwen large language model (LLM) series for international clients, initially accessible via its Singapore availability zones.

This includes several specialised models:

  • Qwen-Max: A large-scale Mixture of Experts (MoE) model.
  • QwQ-Plus: An advanced reasoning model designed for complex analytical tasks, sophisticated question answering, and expert-level mathematical problem-solving.
  • QVQ-Max: A visual reasoning model capable of handling complex multimodal problems, supporting visual input and chain-of-thought output for enhanced accuracy.
  • Qwen2.5-Omni-7b: An end-to-end multimodal model.

These additions provide international businesses with more powerful and diverse tools for developing sophisticated AI applications.

Platform enhancements power AI scale

To support these advanced models, Alibaba Cloud’s Platform for AI (PAI) received significant upgrades aimed at delivering scalable, cost-effective, and user-friendly generative AI solutions.

Key enhancements include the introduction of distributed inference capabilities within the PAI-Elastic Algorithm Service (EAS). Utilising a multi-node architecture, this addresses the computational demands of super-large models – particularly those employing MoE structures or requiring ultra-long-text processing – to overcome limitations inherent in traditional single-node setups.

Furthermore, PAI-EAS now features a prefill-decode disaggregation function designed to boost performance and reduce operational costs.

Alibaba Cloud reported impressive results when deploying this with the Qwen2.5-72B model, achieving a 92% increase in concurrency and a 91% boost in tokens per second (TPS).

The PAI-Model Gallery has also been refreshed, now offering nearly 300 open-source models—including the complete range of Alibaba Cloud’s own open-source Qwen and Wan series. These are accessible via a no-code deployment and management interface.

Additional new PAI-Model Gallery features – like model evaluation and model distillation (transferring knowledge from large to smaller, more cost-effective models) – further enhance its utility.

Alibaba integrates AI into data management

Alibaba Cloud’s flagship cloud-native relational database, PolarDB, now incorporates native AI inference powered by Qwen.

PolarDB’s in-database machine learning capability eliminates the need to move data for inference workflows, which significantly cuts processing latency while improving efficiency and data security.

The feature is optimised for text-centric tasks such as developing conversational Retrieval-Augmented Generation (RAG) agents, generating text embeddings, and performing semantic similarity searches.

Additionally, the company’s data warehouse, AnalyticDB, is now integrated into Alibaba Cloud’s generative AI development platform Model Studio.

This integration serves as the recommended vector database for RAG solutions. This allows organisations to connect their proprietary knowledge bases directly with AI models on the platform to streamline the creation of context-aware applications.

New SaaS tools for industry transformation

Beyond infrastructure and platform layers, Alibaba Cloud introduced two new SaaS AI tools:

  • AI Doc: An intelligent document processing tool using LLMs to parse diverse documents (reports, forms, manuals) efficiently. It extracts specific information and can generate tailored reports, such as ESG reports when integrated with Alibaba Cloud’s Energy Expert sustainability solution.
  • Smart Studio: An AI-powered content creation platform supporting text-to-image, image-to-image, and text-to-video generation. It aims to enhance marketing and creative outputs in sectors like e-commerce, gaming, and entertainment, enabling features like virtual try-ons or generating visuals from text descriptions.

All these developments follow Alibaba’s announcement in February of a $53 billion investment over the next three years dedicated to advancing its cloud computing and AI infrastructure.

This colossal investment, noted as exceeding the company’s total AI and cloud expenditure over the previous decade, highlights a deep commitment to AI-driven growth and solidifies its position as a major global cloud provider.

“As cloud and AI become essential for global growth, we are committed to enhancing our core product offerings to address our customers’ evolving needs,” concludes Yuan.

See also: Amazon Nova Act: A step towards smarter, web-native AI agents

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post Alibaba Cloud targets global AI growth with new models and tools appeared first on AI News.

]]>
https://www.artificialintelligence-news.com/news/alibaba-cloud-global-ai-growth-new-models-and-tools/feed/ 0
OpenAI targets business sector with advanced AI tools https://www.artificialintelligence-news.com/news/openai-targets-business-sector-advanced-ai-tools/ https://www.artificialintelligence-news.com/news/openai-targets-business-sector-advanced-ai-tools/#respond Fri, 24 Jan 2025 13:30:53 +0000 https://www.artificialintelligence-news.com/?p=16955 OpenAI, the powerhouse behind ChatGPT, is ramping up efforts to dominate the enterprise market with a suite of AI tools tailored for business users. The company recently revealed its plans to introduce a series of enhancements designed to make AI integration seamless for companies of all sizes. This includes updates to its flagship AI agent […]

The post OpenAI targets business sector with advanced AI tools appeared first on AI News.

]]>
OpenAI, the powerhouse behind ChatGPT, is ramping up efforts to dominate the enterprise market with a suite of AI tools tailored for business users.

The company recently revealed its plans to introduce a series of enhancements designed to make AI integration seamless for companies of all sizes. This includes updates to its flagship AI agent technology, expected to transform workplace productivity by automating complex workflows, from financial analysis to customer service.

“Businesses are looking for solutions that go beyond surface-level assistance. Our agents are designed to provide in-depth, actionable insights,” said Sarah Friar, CFO of OpenAI. “This is particularly relevant as enterprises seek to streamline operations in today’s competitive landscape.”

OpenAI’s corporate strategy builds on its ongoing collaborations with tech leaders such as Microsoft, which has already integrated OpenAI’s technology into its Azure cloud platform. Analysts say these partnerships position OpenAI to rival established enterprise solutions providers like Salesforce and Oracle.

AI research assistant tools 

As part of its enterprise-focused initiatives, OpenAI is emphasising the development of AI research tools that cater to specific industries. 

For instance, its AI models are being trained on legal and medical data to create highly specialised assistants that could redefine research-intensive sectors. This focus aligns with the broader market demand for AI-driven solutions that enhance decision-making and efficiency.

Infrastructure for expansion 

OpenAI’s rapid growth strategy is supported by a robust infrastructure push. The company has committed to building state-of-the-art data centers in Europe and Asia, aiming to lower latency and improve service reliability for global users. These investments reflect OpenAI’s long-term vision of becoming a critical enabler in the AI-driven global economy.

Challenges and issues

However, challenges persist. The company faces mounting pressure from regulators concerned about data privacy and the ethical implications of deploying powerful AI tools. Critics also question the sustainability of OpenAI’s ambitious growth targets, given its significant operational costs and strong competition from other tech giants.

Despite these hurdles, OpenAI remains optimistic about its trajectory. With plans to unveil its expanded portfolio at the upcoming Global AI Summit, the company is well-positioned to strengthen its foothold in the burgeoning AI enterprise market.

(Editor’s note: This article is sponsored by AI Tools Network)

See also: OpenAI argues against ChatGPT data deletion in Indian court

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post OpenAI targets business sector with advanced AI tools appeared first on AI News.

]]>
https://www.artificialintelligence-news.com/news/openai-targets-business-sector-advanced-ai-tools/feed/ 0
Cerebras vs Nvidia: New inference tool promises higher performance https://www.artificialintelligence-news.com/news/cerebras-vs-nvidia-inference-tool-promises-higher-performance/ https://www.artificialintelligence-news.com/news/cerebras-vs-nvidia-inference-tool-promises-higher-performance/#respond Thu, 29 Aug 2024 09:42:34 +0000 https://www.artificialintelligence-news.com/?p=15882 AI hardware startup Cerebras has created a new AI inference solution that could potentially rival Nvidia’s GPU offerings for enterprises. The Cerebras Inference tool is based on the company’s Wafer-Scale Engine and promises to deliver staggering performance. According to sources, the tool has achieved speeds of 1,800 tokens per second for Llama 3.1 8B, and […]

The post Cerebras vs Nvidia: New inference tool promises higher performance appeared first on AI News.

]]>
AI hardware startup Cerebras has created a new AI inference solution that could potentially rival Nvidia’s GPU offerings for enterprises.

The Cerebras Inference tool is based on the company’s Wafer-Scale Engine and promises to deliver staggering performance. According to sources, the tool has achieved speeds of 1,800 tokens per second for Llama 3.1 8B, and 450 tokens per second for Llama 3.1 70B. Cerebras claims that these speeds are not only faster than the usual hyperscale cloud products required to generate these systems by Nvidia’s GPUs, but they are also more cost-efficient.

This is a major shift tapping into the generative AI market, as Gartner analyst Arun Chandrasekaran put it. While this market’s focus had previously been on training, it is currently shifting to the cost and speed of inferencing. This shift is due to the growth of AI use cases within enterprise settings and provides a great opportunity for vendors like Cerebras of AI products and services to compete based on performance.

As Micah Hill-Smith, co-founder and CEO of Artificial Analysis, says, Cerebras really shined in their AI inference benchmarks. The company’s measurements reached over 1,800 output tokens per second on Llama 3.1 8B, and the output on Llama 3.1 70B was over 446 output tokens per second. In this way, they set new records in both benchmarks.

Cerebras introduces AI inference tool with 20x speed at a fraction of GPU cost
Cerebras introduces AI inference tool with 20x speed at a fraction of GPU cost.

However, despite the potential performance advantages, Cerebras faces significant challenges in the enterprise market. Nvidia’s software and hardware stack dominates the industry and is widely adopted by enterprises. David Nicholson, an analyst at Futurum Group, points out that while Cerebras’ wafer-scale system can deliver high performance at a lower cost than Nvidia, the key question is whether enterprises are willing to adapt their engineering processes to work with Cerebras’ system.

The choice between Nvidia and alternatives such as Cerebras depends on several factors, including the scale of operations and available capital. Smaller firms are likely to choose Nvidia since it offers already-established solutions. At the same time, larger businesses with more capital may opt for the latter to increase efficiency and save on costs.

As the AI hardware market continues to evolve, Cerebras will also face competition from specialised cloud providers, hyperscalers like Microsoft, AWS, and Google, and dedicated inferencing providers such as Groq. The balance between performance, cost, and ease of implementation will likely shape enterprise decisions in adopting new inference technologies.

The emergence of high-speed AI inference, capable of exceeding 1,000 tokens per second, is equivalent to the development of broadband internet, which could open a new frontier for AI applications. Cerebras’ 16-bit accuracy and faster inference capabilities may enable the creation of future AI applications where entire AI agents must operate rapidly, repeatedly, and in real-time.

With the growth of the AI field, the market for AI inference hardware is also expanding. Accounting for around 40% of the total AI hardware market, this segment is becoming an increasingly lucrative target within the broader AI hardware industry. Given that more prominent companies occupy the majority of this segment, many newcomers should carefully consider important aspects of this competitive landscape, considering the competitive nature and significant resources required to navigate the enterprise space.

(Photo by Timothy Dykes)

See also: Sovereign AI gets boost from new NVIDIA microservices

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post Cerebras vs Nvidia: New inference tool promises higher performance appeared first on AI News.

]]>
https://www.artificialintelligence-news.com/news/cerebras-vs-nvidia-inference-tool-promises-higher-performance/feed/ 0
Primate Labs launches Geekbench AI benchmarking tool https://www.artificialintelligence-news.com/news/primate-labs-launches-geekbench-ai-benchmarking-tool/ https://www.artificialintelligence-news.com/news/primate-labs-launches-geekbench-ai-benchmarking-tool/#respond Fri, 16 Aug 2024 09:13:49 +0000 https://www.artificialintelligence-news.com/?p=15773 Primate Labs has officially launched Geekbench AI, a benchmarking tool designed specifically for machine learning and AI-centric workloads. The release of Geekbench AI 1.0 marks the culmination of years of development and collaboration with customers, partners, and the AI engineering community. The benchmark, previously known as Geekbench ML during its preview phase, has been rebranded […]

The post Primate Labs launches Geekbench AI benchmarking tool appeared first on AI News.

]]>
Primate Labs has officially launched Geekbench AI, a benchmarking tool designed specifically for machine learning and AI-centric workloads.

The release of Geekbench AI 1.0 marks the culmination of years of development and collaboration with customers, partners, and the AI engineering community. The benchmark, previously known as Geekbench ML during its preview phase, has been rebranded to align with industry terminology and ensure clarity about its purpose.

Geekbench AI is now available for Windows, macOS, and Linux through the Primate Labs website, as well as on the Google Play Store and Apple App Store for mobile devices.

Primate Labs’ latest benchmarking tool aims to provide a standardised method for measuring and comparing AI capabilities across different platforms and architectures. The benchmark offers a unique approach by providing three overall scores, reflecting the complexity and heterogeneity of AI workloads.

“Measuring performance is, put simply, really hard,” explained Primate Labs. “That’s not because it’s hard to run an arbitrary test, but because it’s hard to determine which tests are the most important for the performance you want to measure – especially across different platforms, and particularly when everyone is doing things in subtly different ways.”

The three-score system accounts for the varied precision levels and hardware optimisations found in modern AI implementations. This multi-dimensional approach allows developers, hardware vendors, and enthusiasts to gain deeper insights into a device’s AI performance across different scenarios.

A notable addition to Geekbench AI is the inclusion of accuracy measurements for each test. This feature acknowledges that AI performance isn’t solely about speed but also about the quality of results. By combining speed and accuracy metrics, Geekbench AI provides a more holistic view of AI capabilities, helping users understand the trade-offs between performance and precision.

Geekbench AI 1.0 introduces support for a wide range of AI frameworks, including OpenVINO on Linux and Windows, and vendor-specific TensorFlow Lite delegates like Samsung ENN, ArmNN, and Qualcomm QNN on Android. This broad framework support ensures that the benchmark reflects the latest tools and methodologies used by AI developers.

The benchmark also utilises more extensive and diverse datasets, which not only enhance the accuracy evaluations but also better represent real-world AI use cases. All workloads in Geekbench AI 1.0 run for a minimum of one second, allowing devices to reach their maximum performance levels during testing while still reflecting the bursty nature of real-world applications.

Primate Labs has published detailed technical descriptions of the workloads and models used in Geekbench AI 1.0, emphasising their commitment to transparency and industry-standard testing methodologies. The benchmark is integrated with the Geekbench Browser, facilitating easy cross-platform comparisons and result sharing.

The company anticipates regular updates to Geekbench AI to keep pace with market changes and emerging AI features. However, Primate Labs believes that Geekbench AI has already reached a level of reliability that makes it suitable for integration into professional workflows, with major tech companies like Samsung and Nvidia already utilising the benchmark.

(Image Credit: Primate Labs)

See also: xAI unveils Grok-2 to challenge the AI hierarchy

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post Primate Labs launches Geekbench AI benchmarking tool appeared first on AI News.

]]>
https://www.artificialintelligence-news.com/news/primate-labs-launches-geekbench-ai-benchmarking-tool/feed/ 0
Zoom now wants to be known ‘as an AI-first collaboration platform’ https://www.artificialintelligence-news.com/news/zoom-wants-to-be-known-ai-first-collaboration-platform/ https://www.artificialintelligence-news.com/news/zoom-wants-to-be-known-ai-first-collaboration-platform/#respond Wed, 03 Jul 2024 09:41:46 +0000 https://www.artificialintelligence-news.com/?p=15177 COVID-19 has, in a sense, transformed Zoom from a business-only tool into a household name. Now, the $19 billion video-calling giant is looking to redefine itself, which means leaving behind much of what has made it a mainstay throughout its decade-plus history. Graeme Geddes, Zoom’s chief growth officer, recently told Fortune, “Zoom is so much […]

The post Zoom now wants to be known ‘as an AI-first collaboration platform’ appeared first on AI News.

]]>
COVID-19 has, in a sense, transformed Zoom from a business-only tool into a household name. Now, the $19 billion video-calling giant is looking to redefine itself, which means leaving behind much of what has made it a mainstay throughout its decade-plus history.

Graeme Geddes, Zoom’s chief growth officer, recently told Fortune, “Zoom is so much more than just video meetings. Video is our heritage—so we’re going to continue to lean in there, push the market, there’s a lot of innovation that we’re doing—but we’re so much more than that.”

The company’s new aspiration? “We want to be known as an AI-first collaboration platform,” Geddes declared. Though the rush to adopt AI is now a staple in the tech industry—with giants like Alphabet and Microsoft regularly discussing the technology on earnings calls—Zoom’s shift neatly dovetails with its efforts to extend its reach beyond simple video conferencing, aiming to enhance overall productivity.

In an effort to better cater to the needs of a hybrid world, Zoom introduced its suite of tools earlier this year for both remote and in-person employees, named Zoom Workplace. This platform includes everything from virtual whiteboards and guest check-ins to workspace booking and tech solutions, as well as feedback forms. Zoom also recently acquired the employee engagement platform Workvivo for approximately €250 million ($272 million). This acquisition, as Geddes points out, “has nothing to do with video.”

Zoom’s evolution extends to customer-facing solutions as well. “We’re helping our customers in the way that their customers show up to their website, having a chatbot automation service that can escalate into a phone call,” Geddes explained. “A lot of workflows that have no video involved.”

This strategic shift comes at a crucial time for Zoom. As businesses increasingly distance themselves from pandemic-era work styles and implement return-to-office mandates, the demand for remote video conferencing has decreased. Consequently, Zoom’s stock has returned to pre-pandemic levels, dropping from a peak of $559 in October 2020 to around $60 currently.

Jacqueline Barrett, an economist and founder of the Bright Arc, reflects on the initial pandemic response: “At the start of the pandemic, I think there were tons of people who flocked to Zoom. There was probably a little bit of overexcitement in terms of the stock, with people anticipating that the growth was going to be like that indefinitely.”

The market landscape has also become more competitive. “There’s so many other players in the market that are offering these new features that have already bundled things together or that are constantly unveiling new features with generative AI,” Barrett added.

“If it’s not the legacy players like Google or Microsoft or Cisco, there’s so many startups that are focused on pretty much every little niche imaginable with generative AI.”

The challenge Zoom faces with this response is not one-dimensional, as evidenced by its varied features. The company is expanding its products and utilising AI to amplify its technical capabilities. For example, as Geddes recounted, Zoom’s AI companion can automate note-taking and brief the next steps or action items during a meeting, whether all attendees are present in the conference room.

However, what’s most intriguing is that this is only the beginning of Zoom’s AI applications; it is also exploring the creation of digital twins or deepfake avatars. Eric Yuan, the founder and CEO of Zoom, stated that the AI-powered avatars would replicate the real owner’s voice and appearance, and also act independently during meetings, making business decisions for the owner.

“Today we all spend a lot of time either making phone calls, joining meetings, sending emails, deleting some spam emails, and replying to some text messages, still very busy,” Yuan explained. “But in the future, I can send a digital version of myself to join so I can go to the beach.”

While this technology is still in development, it has already proven to be a useful AI feature for Zoom. Geddes shared how he used the Zoom smart summary feature to stay informed about meetings during his international travels, enabling him to make important decisions and keep projects on schedule.

As it transitions, Zoom clearly aims to do more than just adjust to the post-pandemic world; it is actively setting the course for the future of work and collaboration. By adopting AI-driven solutions and moving beyond its traditional video conferencing base, Zoom is dedicated to keeping its leading position in business communication and productivity tools as the workplace evolves.

(Photo by LinkedIn Sales Solutions)

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post Zoom now wants to be known ‘as an AI-first collaboration platform’ appeared first on AI News.

]]>
https://www.artificialintelligence-news.com/news/zoom-wants-to-be-known-ai-first-collaboration-platform/feed/ 0
Meta releases PyTorch Live for creating mobile ML demos ‘in minutes’ https://www.artificialintelligence-news.com/news/meta-releases-pytorch-live-creating-mobile-ml-demos-minutes/ https://www.artificialintelligence-news.com/news/meta-releases-pytorch-live-creating-mobile-ml-demos-minutes/#respond Thu, 02 Dec 2021 11:12:10 +0000 https://artificialintelligence-news.com/?p=11452 Meta has announced PyTorch Live, a library of tools designed to make it easy to create on-device mobile ML demos “in minutes”. PyTorch Live was unveiled during PyTorch Developer Day and enables anyone to build mobile ML demo apps using JavaScript, the world’s most popular programming language. While on-device AI demos cannot currently be shared, […]

The post Meta releases PyTorch Live for creating mobile ML demos ‘in minutes’ appeared first on AI News.

]]>
Meta has announced PyTorch Live, a library of tools designed to make it easy to create on-device mobile ML demos “in minutes”.

PyTorch Live was unveiled during PyTorch Developer Day and enables anyone to build mobile ML demo apps using JavaScript, the world’s most popular programming language.

While on-device AI demos cannot currently be shared, Meta says that functionality is on the way. Developers can start building custom machine learning models to later share with the broader PyTorch community.

PyTorch was publicly launched by Meta back in January 2017, when the company was still known as Facebook. The open-source machine learning library quickly became a firm favourite among the developer and data science communities.

As the PyTorch name suggests, the main library’s interface is designed around Python but it also has a C++ interface. 

The once-dominant machine learning library, TensorFlow, had a two-year headstart on PyTorch but has been falling behind in usage in recent years.

In 2018, GitHub’s Octoverse report highlighted the growth of PyTorch as an open-source project outpacing that of TensorFlow. PyTorch grew by 2.8x that year compared to TensorFlow’s still not insubstantial 1.8x.

That edge for PyTorch appears to be eating into TensorFlow’s early mover advantage.

TensorFlow appeared in three times more job listings in Indeed, Monster, SimplyHired, and LinkedIn as PyTorch in April 2019. However, TensorFlow’s edge in job-listing mentions dropped to 2x in 2020.

Over the past year, PyTorch has also overtaken TensorFlow in worldwide Google searches:

PyTorch Live looks set to accelerate the success of the machine learning library. The tools use React Native for building cross-platform visual user interfaces and PyTorch Mobile powers on-device inference.

Anyone wanting to get started with PyTorch Live can do so through its command-line interface setup and/or its data processing API.

(Image Credit: Meta)

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo North America on 11-12 May 2022.

The post Meta releases PyTorch Live for creating mobile ML demos ‘in minutes’ appeared first on AI News.

]]>
https://www.artificialintelligence-news.com/news/meta-releases-pytorch-live-creating-mobile-ml-demos-minutes/feed/ 0
Algorithmia’s latest tools aim to solve ML governance challenges https://www.artificialintelligence-news.com/news/algorithmia-latest-tools-aim-solve-ml-governance-challenges/ https://www.artificialintelligence-news.com/news/algorithmia-latest-tools-aim-solve-ml-governance-challenges/#comments Wed, 03 Mar 2021 12:15:29 +0000 http://artificialintelligence-news.com/?p=10326 Algorithmia is today debuting new reporting tools which aim to solve machine learning (ML) governance challenges. Research conducted by the company found that the number one challenge organisations are facing with their deployments is governance. 56 percent of the IT leaders surveyed by Algorithmia ranked governance, security, and auditability issues as a major concern. 67 […]

The post Algorithmia’s latest tools aim to solve ML governance challenges appeared first on AI News.

]]>
Algorithmia is today debuting new reporting tools which aim to solve machine learning (ML) governance challenges.

Research conducted by the company found that the number one challenge organisations are facing with their deployments is governance.

56 percent of the IT leaders surveyed by Algorithmia ranked governance, security, and auditability issues as a major concern. 67 percent report having to comply with multiple regulations—of which the penalties of failing to do so can be severe.

Diego Oppenheimer, CEO of Algorithmia, said:

“We’re still in the early days of ML governance, and organisations lack a clear roadmap or prescriptive advice for implementing it effectively in their own unique environments.

Regulations are undefined and a changing and ambiguous regulatory landscape leads to uncertainty and the need for companies to invest significant resources to maintain compliance. Those that can’t keep up risk losing their competitive edge.

Furthermore, existing solutions are manual and incomplete. Even organisations that are implementing governance today are doing so with a patchwork of disparate tools and manual processes. Not only do such solutions require constant maintenance, but they also risk critical gaps in coverage.”

Organisations test ML models, quite rightly, prior to deployment. However, Algorithmia says that IT leaders, business line leaders, CIOs and chief risk officers have realised over the past year – following an acceleration in ML deployments – that what happens after a model is deployed “is even more important than pre-deployment testing.”

As of today, Algorithmia’s Enterprise product has added the following reporting and governance capabilities to help manage operational risk:

  • Cost and usage reporting on infrastructure, storage and compute consumption within Algorithmia to understand and manage the overall cost of maintaining the platform.
  • Enhanced chargeback and showback reporting for monthly costs of storage, CPU and GPU consumption and usage billing.
  • Algorithm usage reporting with details of the algorithm used, so organizations can bill users for their usage.
  • Enhanced audit reports and logs so examiners and auditors can review model results, history of changes, and a record of data errors or past model failures and actions taken.
  • Advanced reporting panel for Algorithmia admins that provide an overview of all available metrics and usage reporting, ability to build reports and export reports and metrics to systems of record.

The new reporting and governance tools added to Algorithmia will help IT decision-makers tackle one of the biggest challenges they face with deployments today.

(Photo by Belhadj lamine on Unsplash)

Interested in hearing industry leaders discuss subjects like this? Attend the co-located 5G Expo, IoT Tech Expo, Blockchain Expo, AI & Big Data Expo, and Cyber Security & Cloud Expo World Series with upcoming events in Silicon Valley, London, and Amsterdam.

The post Algorithmia’s latest tools aim to solve ML governance challenges appeared first on AI News.

]]>
https://www.artificialintelligence-news.com/news/algorithmia-latest-tools-aim-solve-ml-governance-challenges/feed/ 2