Amazon Archives - AI News
https://www.artificialintelligence-news.com/news/tag/amazon/

Amazon Nova Act: A step towards smarter, web-native AI agents (1 April 2025)
https://www.artificialintelligence-news.com/news/amazon-nova-act-step-towards-smarter-web-native-ai-agents/

Amazon has introduced Nova Act, an advanced AI model engineered for smarter agents that can execute tasks within web browsers.

While large language models popularised the concept of “agents” as tools that answer queries or retrieve information via methods such as Retrieval-Augmented Generation (RAG), Amazon envisions something more robust. The company defines agents not just as responders but as entities capable of performing tangible, multi-step tasks in diverse digital and physical environments.

“Our dream is for agents to perform wide-ranging, complex, multi-step tasks like organising a wedding or handling complex IT tasks to increase business productivity,” said Amazon.

Current market offerings often fall short, with many agents requiring continuous human supervision and their functionality dependent on comprehensive API integration—something not feasible for all tasks. Nova Act is Amazon’s answer to these limitations.

Alongside the model, Amazon is releasing a research preview of the Amazon Nova Act SDK. Using the SDK, developers can create agents capable of automating web tasks like submitting out-of-office notifications, scheduling calendar holds, or enabling automatic email replies.

The SDK aims to break down complex workflows into dependable “atomic commands” such as searching, checking out, or interacting with specific interface elements like dropdowns or popups. Detailed instructions can be added to refine these commands, allowing developers to, for instance, instruct an agent to bypass an insurance upsell during checkout.

To further enhance accuracy, the SDK supports browser manipulation via Playwright, API calls, Python integrations, and parallel threading to overcome web page load delays.
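
As a rough illustration of how such an agent might be scripted, the minimal sketch below chains a few atomic commands together. It is based only on the article's description of the research-preview SDK: the package name, the NovaAct class, the starting_page parameter, and the act() method are assumptions rather than a confirmed API reference, and the site and prompts are placeholders.

```python
# Illustrative sketch only. The nova_act package, NovaAct class, starting_page
# parameter, and act() method are assumed from the article's description of the
# research-preview SDK; the URL and prompts are placeholders.
from nova_act import NovaAct

with NovaAct(starting_page="https://www.example-grocer.test") as agent:
    # Each act() call is intended to map to one "atomic command".
    agent.act("search for a garden salad")
    agent.act("add the first result to the basket")
    # Detailed instructions can refine a command, for example skipping an upsell:
    agent.act("proceed to checkout and decline any insurance or delivery upsell offers")
```

In principle, a script like this could later be wrapped in a scheduler or exposed behind an API once it behaves reliably, which is the deployment pattern the article describes below.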

Nova Act: Exceptional performance on benchmarks

Unlike other generative models that showcase middling accuracy on complex tasks, Nova Act prioritises reliability. Amazon highlights its model’s impressive scores of over 90% on internal evaluations for specific capabilities that typically challenge competitors. 

Nova Act achieved a near-perfect 0.939 on the ScreenSpot Web Text benchmark, which measures how well models follow natural language instructions for text-based interactions, such as adjusting font sizes. Competing models such as Claude 3.7 Sonnet (0.900) and OpenAI’s CUA (0.883) trail behind by significant margins.

Similarly, Nova Act scored 0.879 in the ScreenSpot Web Icon benchmark, which tests interactions with visual elements like rating stars or icons. While the GroundUI Web test, designed to assess an AI’s proficiency in navigating various user interface elements, showed Nova Act slightly trailing competitors, Amazon sees this as an area ripe for improvement as the model evolves.

Amazon stresses its focus on delivering practical reliability. Once an agent built using Nova Act functions as expected, developers can deploy it headlessly, integrate it as an API, or even schedule it to run tasks asynchronously. In one demonstrated use case, an agent automatically orders a salad for delivery every Tuesday evening without requiring ongoing user intervention.

Amazon sets out its vision for scalable and smart AI agents

One of Nova Act’s standout features is its ability to transfer its user interface understanding to new environments with minimal additional training. Amazon shared an instance where Nova Act performed admirably in browser-based games, even though its training had not included video game experiences. This adaptability positions Nova Act as a versatile agent for diverse applications.

This capability is already being leveraged in Amazon’s own ecosystem. Within Alexa+, Nova Act enables self-directed web navigation to complete tasks for users, even when API access is not comprehensive enough. This represents a step towards smarter AI assistants that can function independently, harnessing their skills in more dynamic ways.

Amazon is clear that Nova Act represents the first stage in a broader mission to craft intelligent, reliable AI agents capable of handling increasingly complex, multi-step tasks. 

Expanding beyond simple instructions, Amazon’s focus is on training agents through reinforcement learning across varied, real-world scenarios rather than overly simplistic demonstrations. This foundational model serves as a checkpoint in a long-term training curriculum for Nova models, indicating the company’s ambition to reshape the AI agent landscape.

“The most valuable use cases for agents have yet to be built,” Amazon noted. “The best developers and designers will discover them. This research preview of our Nova Act SDK enables us to iterate alongside these builders through rapid prototyping and iterative feedback.”

Nova Act is a step towards making AI agents truly useful for complex, digital tasks. From rethinking benchmarks to emphasising reliability, its design philosophy is centred around empowering developers to move beyond what’s possible with current-generation tools. 

See also: Anthropic provides insights into the ‘AI biology’ of Claude

Amazon Bedrock gains new AI models, tools, and features (5 December 2024)
https://www.artificialintelligence-news.com/news/amazon-bedrock-gains-new-ai-models-tools-and-features/

Amazon Web Services (AWS) has announced improvements to bolster Bedrock, its fully managed generative AI service.

The updates include new foundational models from several AI pioneers, enhanced data processing capabilities, and features aimed at improving inference efficiency.

Dr Swami Sivasubramanian, VP of AI and Data at AWS, said: “Amazon Bedrock continues to see rapid growth as customers flock to the service for its broad selection of leading models, tools to easily customise with their data, built-in responsible AI features, and capabilities for developing sophisticated agents.

“With this new set of capabilities, we are empowering customers to develop more intelligent AI applications that will deliver greater value to their end-users.”

Amazon Bedrock expands its model diversity

AWS is set to become the first cloud provider to feature models from AI developers Luma AI and poolside, while also incorporating Stability AI’s latest release.

Through its new Amazon Bedrock Marketplace, customers will have access to over 100 emerging and specialised models from across industries, ensuring they can select the most appropriate tools for their unique needs.

  • Luma AI’s Ray 2 

Luma AI, known for advancing generative AI in video content creation, brings its next-generation Ray 2 model to Amazon Bedrock. This model generates high-quality, lifelike video outputs from text or image inputs and allows organisations to create detailed outputs in fields such as fashion, architecture, and graphic design. AWS’s presence as the first provider for this model ensures businesses can experiment with new camera angles, cinematographic styles, and consistent characters with a frictionless workflow.

  • poolside’s malibu and point

Designed to address challenges in modern software engineering, poolside’s models – malibu and point – specialise in code generation, testing, documentation, and real-time code completion. Importantly, developers can securely fine-tune these models using their private datasets. Accompanied by Assistant – an integration for development environments – poolside’s tools allow engineering teams to accelerate productivity, ship projects faster, and increase accuracy.

  • Stability AI’s Stable Diffusion 3.5 Large  

Amazon Bedrock customers will soon gain access to Stability AI’s text-to-image model Stable Diffusion 3.5 Large. This addition supports businesses in creating high-quality visual media for use cases in areas like gaming, advertising, and retail.  

Through the Bedrock Marketplace, AWS also enables access to over 100 specialised models. These include solutions tailored to fields such as biology (EvolutionaryScale’s ESM3 generative model), financial data (Writer’s Palmyra-Fin), and media (Camb.ai’s text-to-audio MARS6).

(Image: the expanded AI model catalogue in the Amazon Bedrock Marketplace)

Zendesk, a global customer service software firm, leverages Bedrock’s marketplace to personalise support across email and social channels using AI-driven localisation and sentiment analysis tools. For example, they use models like Widn.AI to tailor responses based on real-time sentiment in customers’ native languages.

Scaling inference with new Amazon Bedrock features

Large-scale generative AI applications require balancing the cost, latency, and accuracy of inference processes. AWS is addressing this challenge with two new Amazon Bedrock features:

  • Prompt Caching

The new caching capability reduces redundant processing of prompts by securely storing frequently used queries, saving on both time and costs. This feature can lead to up to a 90% reduction in costs and an 85% decrease in latency. For example, Adobe incorporated Prompt Caching into its Acrobat AI Assistant to summarise documents and answer questions, achieving a 72% reduction in response times during initial testing.  

  • Intelligent Prompt Routing

This feature dynamically directs prompts to the most suitable foundation model within a family, optimising results for both cost and quality. Customers such as Argo Labs, which builds conversational voice AI solutions for restaurants, have already benefited. While simpler queries (like booking tables) are handled by smaller models, more nuanced requests (e.g., dietary-specific menu questions) are intelligently routed to larger models. Argo Labs’ use of Intelligent Prompt Routing has not only improved response quality but also reduced costs by up to 30%. A brief code sketch after this list illustrates how both features are typically invoked.
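
As a hedged illustration of how these two inference features are usually reached in code, the sketch below uses the Bedrock Converse API via boto3. The cachePoint marker used for Prompt Caching and the practice of passing a prompt router's ARN as the modelId are assumptions drawn from AWS's public documentation of these features rather than details confirmed in this article; the ARN, region, and prompts are placeholders.

```python
# Hedged sketch: the "cachePoint" field and the prompt-router ARN format are
# assumptions based on AWS's public documentation, not confirmed by this article.
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

# Prompt Caching: mark the end of a long, frequently reused prompt prefix so the
# service can cache it instead of reprocessing it on every request.
system_prompt = [
    {"text": "You are a restaurant booking assistant. <long, reusable policy text>"},
    {"cachePoint": {"type": "default"}},  # assumed cache-checkpoint marker
]

# Intelligent Prompt Routing: pass a prompt router's ARN as the modelId and let
# the router choose a smaller or larger model in the family for each request.
router_arn = "arn:aws:bedrock:us-east-1:123456789012:default-prompt-router/example"  # placeholder

response = bedrock.converse(
    modelId=router_arn,
    system=system_prompt,
    messages=[{"role": "user", "content": [{"text": "Do you have gluten-free options?"}]}],
)
print(response["output"]["message"]["content"][0]["text"])
```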

Data utilisation: Knowledge bases and automation

A key attraction of generative AI lies in its ability to extract value from data. AWS is enhancing its Amazon Bedrock Knowledge Bases to ensure organisations can deploy their unique datasets for richer AI-powered user experiences.  

  • Using structured data 

AWS has introduced capabilities for structured data retrieval within Knowledge Bases. This enhancement allows customers to query data stored across Amazon services like SageMaker Lakehouse and Redshift using natural-language prompts, which are automatically translated into SQL queries behind the scenes. Octus, a credit intelligence firm, plans to use this capability to provide clients with dynamic, natural-language reports on its structured financial data. (A sketch of querying a Knowledge Base appears after this list.)

  • GraphRAG integration 

By incorporating automated graph modelling (powered by Amazon Neptune), customers can now generate and connect relational data for stronger AI applications. BMW Group, for instance, will use GraphRAG to augment its virtual assistant MAIA. This assistant taps into BMW’s wealth of internal data to deliver comprehensive responses and premium user experiences.
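
For orientation, the hedged sketch below shows the general shape of a natural-language query against an Amazon Bedrock Knowledge Base using the existing RetrieveAndGenerate API in boto3. The knowledge base ID and model ARN are placeholders, and the newer structured-data (SQL) and GraphRAG configuration options described above are not shown here.

```python
# Minimal sketch of querying a Bedrock Knowledge Base in natural language.
# IDs and ARNs are placeholders; the structured-data and GraphRAG options
# announced here are omitted for brevity.
import boto3

agent_runtime = boto3.client("bedrock-agent-runtime", region_name="us-east-1")

response = agent_runtime.retrieve_and_generate(
    input={"text": "Summarise last quarter's revenue by region."},
    retrieveAndGenerateConfiguration={
        "type": "KNOWLEDGE_BASE",
        "knowledgeBaseConfiguration": {
            "knowledgeBaseId": "EXAMPLEKBID",  # placeholder
            "modelArn": "arn:aws:bedrock:us-east-1::foundation-model/amazon.nova-pro-v1:0",  # placeholder
        },
    },
)
print(response["output"]["text"])
```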

Separately, AWS has unveiled Amazon Bedrock Data Automation, a tool that transforms unstructured content (e.g., documents, video, and audio) into structured formats for analytics or retrieval-augmented generation (RAG). Companies like Symbeo (automated claims processing) and Tenovos (digital asset management) are already piloting the tool to improve operational efficiency and data reuse.

The expansion of Amazon Bedrock’s ecosystem reflects its growing popularity, with the service recording a 4.7x increase in its customer base over the last year. Industry leaders like Adobe, BMW, Zendesk, and Tenovos have all embraced AWS’s latest innovations to improve their generative AI capabilities.  

Most of the newly announced tools – such as inference management, Knowledge Bases with structured data retrieval, and GraphRAG – are currently in preview, while notable model releases from Luma AI, poolside, and Stability AI are expected soon.

See also: Alibaba Cloud overhauls AI partner initiative

Big tech’s AI spending hits new heights (22 November 2024)
https://www.artificialintelligence-news.com/news/big-tech-ai-spending-hits-new-heights/

In 2024, Big Tech is all-in on artificial intelligence, with companies like Microsoft, Amazon, Alphabet, and Meta leading the way.

Their combined spending on AI is projected to exceed a jaw-dropping $240 billion. Why? Because AI isn’t just the future—it’s the present, and the demand for AI-powered tools and infrastructure has never been higher. The companies aren’t just keeping up; they’re setting the pace for the industry.

The scale of their investment is hard to ignore. In the first half of 2023, the tech giants poured $74 billion into capital expenditure, and by the third quarter of that year the running total had jumped to $109 billion. In the first half of 2024, spending reached $104 billion, a remarkable 47% rise over the same period a year earlier, and by the third quarter of 2024 the total hit $171 billion.

If this pattern continues, Q4 might add another $70 billion, bringing the total to a truly staggering $240 billion for the year.

Why so much spending?

AI’s potential is immense, and companies are making sure they’re positioned to reap the rewards.

  • A growing market: AI is projected to create $20 trillion in global economic impact by 2030. In countries like India, AI could contribute $500 billion to GDP by 2025. With stakes this high, big tech isn’t hesitating to invest heavily.
  • Infrastructure demands: Training and running AI models require massive investment in infrastructure, from data centres to high-performance GPUs. Alphabet increased its capital expenditures by 62% last quarter compared to the previous year, even as it cut its workforce by 9,000 employees to manage costs.
  • Revenue potential: AI is already proving its value. Microsoft’s AI products are expected to generate $10 billion annually—the fastest-growing segment in the company’s history. Alphabet, meanwhile, uses AI to write over 25% of its new code, streamlining operations.

Amazon is also ramping up, with plans to spend $75 billion on capital expenditure in 2024. Meta’s forecast is not far behind, with estimates between $38 and $40 billion. Across the board, organisations recognise that maintaining their edge in AI requires sustained and significant investment.

Supporting revenue streams

What keeps the massive investments coming is the strength of big tech’s core businesses. Last quarter, Alphabet’s digital advertising machine, powered by Google’s search engine, generated $49.39 billion in ad revenue, a 12% year-over-year increase. This solid foundation allows Alphabet to pour resources into building out its AI arsenal without destabilising the bottom line.

Microsoft’s diversified revenue streams are another example. While the company spent $20 billion on AI and cloud infrastructure last quarter, its productivity segment, which includes Office, grew by 12% to $28.3 billion, and its personal computing business, boosted by Xbox and the Activision Blizzard acquisition, grew 17% to $13.2 billion. These successes demonstrate how AI investments can support broader growth strategies.

The financial payoff

Big tech is already seeing the benefits of its heavy spending. Microsoft’s Azure platform has seen substantial growth, with its AI income approaching $6 billion. Amazon’s AI business is growing at triple-digit rates, and Alphabet reported a 34% jump in profits last quarter, with cloud revenue playing a major role.

Meta, while primarily focused on advertising, is leveraging AI to make its platforms more engaging. AI-driven tools, such as improved feeds and search features, keep users on its platforms longer, resulting in new revenue growth.

AI spending shows no signs of slowing down. Tech leaders at Microsoft and Alphabet view AI as a long-term investment critical to their future success. And the results speak for themselves: Alphabet’s cloud revenue is up 35%, while Microsoft’s cloud business grew 20% last quarter.

For the time being, the focus is on scaling up infrastructure and meeting demand. However, the real transformation will come when big tech unlocks AI’s full potential, transforming industries and redefining how we work and live.

By investing in high-quality, centralised data strategies, businesses can ensure trustworthy and accurate AI implementations, and unlock AI’s full potential to drive innovation, improve decision-making, and gain competitive edge. AI’s revolutionary promise is within reach—but only for companies prepared to lay the groundwork for sustainable growth and long-term results.

(Photo by Unsplash)

See also: Microsoft tries to convert Google Chrome users

Walmart and Amazon drive retail transformation with AI (16 September 2024)
https://www.artificialintelligence-news.com/news/walmart-amazon-drive-retail-transformation-ai/

Walmart and Amazon are harnessing AI to drive retail transformation with new consumer experiences and enhanced operational efficiency.

According to analytics firm GlobalData, Walmart is focusing on augmented reality and AI-enhanced store management. Amazon, meanwhile, is leading advancements in customer personalisation and autonomous systems.

Kiran Raj, Practice Head of Disruptive Tech at GlobalData, notes: “Walmart and Amazon are no longer competing for market share alone. Their AI strategies are reshaping the entire retail ecosystem—from Walmart’s blend of digital and physical shopping experiences to Amazon’s operational automation.”

GlobalData’s Disruptor Intelligence Center, utilising its Technology Foresights tool, has identified the strategic focus of these retail titans based on their patent filings.

Walmart has submitted over 3,000 AI-related patents, with 20% of these filed in the last three years, indicating a swift evolution in its AI capabilities. In contrast, Amazon boasts more than 9,000 patents, half of which were filed during the same timeframe, underpinning its leadership in AI-driven retail innovations.

AI-powered retail transformation

Walmart is deploying AI-driven solutions like in-store product recognition while making notable strides in AR applications, including virtual try-ons. The company’s progress in smart warehouses and image-based transactions denotes a shift towards fully automated retail, enhancing both speed and precision in customer service.

Amazon stands out with its extensive deployment of AI in customer personalisation and autonomous systems. By harnessing technologies such as Autonomous Network Virtualisation and Automated VNF Deployment, the company is advancing its operational infrastructure and aiming to set new standards in network efficiency and data management.

Walmart’s development of intelligent voice assistants and automated store surveillance emphasises its aim to provide a seamless and secure shopping experience. Concurrently, Amazon’s progress in AI for coding and surveillance is pushing the boundaries of enterprise AI applications and enhancing security capabilities.

“Walmart and Amazon’s aggressive innovation strategies not only strengthen their market positions but also set a blueprint for the future of the retail sector,” Raj explains.

“As these two giants continue to push the boundaries of retail AI, the broader industry can expect ripple effects in supply chain innovation, customer loyalty programmes, and operational scalability—setting the stage for a new era of consumer engagement.”

(Photo by Marques Thomas)

See also: Whitepaper dispels fears of AI-induced job losses

Amazon partners with Anthropic to enhance Alexa (2 September 2024)
https://www.artificialintelligence-news.com/news/amazon-partners-anthropic-enhance-alexa/

Amazon is gearing up to roll out a revamped version of its Alexa voice assistant, which is expected to be available this October, right before the US shopping rush.

Internally referred to as “Remarkable,” the new technology will be powered by Anthropic’s Claude AI models. Sources close to the matter have indicated that this shift occurred due to the underperformance of Amazon’s in-house software.

The enhanced Alexa will operate using advanced generative AI to handle more complex queries. Amazon plans to offer the new Alexa as a subscription service, priced between $5 and $10 per month, while the classic version of Alexa will remain free. This approach marks a significant change for Amazon and suggests that the company aims to turn this voice assistant into a profitable venture after years of limited success in generating revenue through this platform.

Amazon’s decision to quickly adopt an external model, Claude, indicates a strategic shift. Amazon typically prefers to build everything in-house to minimise its dependence on third-party vendors, avoiding outside influence over customer behaviour, business strategy, and control of data. However, it seems that Amazon’s traditional approach could not provide the AI capability needed quickly enough, or that the company has simply realised it needs more powerful AI. It is also worth noting that rival developer OpenAI already works closely with major technology companies such as Microsoft and Apple on AI.

The launch of the “Remarkable” Alexa is anticipated during Amazon’s annual devices and services event in September, though the company has not confirmed the exact date. This event will also mark the first public appearance of Panos Panay, the new head of Amazon’s devices division, who has taken over from long-time executive David Limp.

The updated Alexa is intended to be a more interactive and intuitive assistant, with the new functionality stemming from its conversational mode. The assistant is envisioned to do more than just recognise patterns in people’s speech; it would be able to hold conversations that build on previous interactions. The most likely features include personalised shopping advice, news aggregation, and more advanced home automation. Whether customers will pay for Alexa likely depends on the final set of available features, a question that is particularly pressing for Amazon given that customers already pay for Prime membership.

The future for Alexa is quite ambitious, but it also bears significant risks. For the new version to be successful, internal performance benchmarks must be met. While estimates for “Remarkable” Alexa suggest that even a small percentage of current users paying for the premium version could become a substantial income stream for Amazon, the likelihood of achieving the expected outcomes remains uncertain.

However, Amazon’s partnership with Anthropic is currently under regulatory review, largely due to an investigation by the UK’s antitrust regulator. The impending upgrade announcement and the regulator’s response could significantly influence the company’s future activities.

Amazon’s initiative to adopt an AI solution developed by Anthropic marks a significant shift for the company, which previously focused on developing its proprietary technology. At this point, it is possible to view this move as part of the general trend in the industry to turn to partnerships regarding AI development to enhance the competitiveness of products.

See also: Amazon strives to outpace Nvidia with cheaper, faster AI chips

Amazon strives to outpace Nvidia with cheaper, faster AI chips (29 July 2024)
https://www.artificialintelligence-news.com/news/amazon-strives-outpace-nvidia-cheaper-faster-ai-chips/

Amazon’s chip lab is churning out a constant stream of innovation in Austin, Texas. A new server design was put through its paces by a group of devoted engineers on July 26th.

During a visit to the facility in Austin, Amazon executive Rami Sinno shed light on the server’s use of Amazon’s AI chips. This development is a bold step toward competing with Nvidia, the current leader in the field.

The main reason Amazon is developing its own processors is straightforward: it wants to reduce its reliance on Nvidia and the need to buy the company’s expensive chips, which power a large part of the AI cloud business at Amazon Web Services, the company’s most significant growth engine. This so-called “Nvidia tax” has been pushing the company to look for a cheaper option.

Amazon’s chip development program has a dual purpose. Firstly, it is meant to give customers a more affordable option for complex calculations and the processing of large volumes of data. Secondly, it is designed to preserve Amazon’s competitiveness in the volatile cloud computing and AI industry. The move also mirrors the direction of tech giants such as Microsoft and Alphabet, which are developing custom chips to maintain their leadership in the market.

Rami Sinno, director of engineering for Amazon’s Annapurna Labs, a key element of the AWS ecosystem, emphasised that customer demand for more economical solutions to Nvidia’s products is growing. The acquisition of Annapurna Labs in 2015 was a savvy move by Amazon as it enabled the company to lay the groundwork to begin developing popular chips.

Although Amazon’s AI chips are in their early days, the company has been making and refining chips for other mainstream applications for nearly a decade, most notably its general-purpose Graviton chip, now in its fourth generation. Amazon says its latest and most powerful processors, Trainium and Inferentia, are purpose-built for AI workloads and remain in the early stages of their development.

The potential impact is significant. According to David Brown, vice president of compute and networking at AWS, Amazon’s in-house chips could deliver up to a 40-50% improvement in price-performance compared with Nvidia-based solutions. That improvement could translate into considerable savings for AWS customers deploying AI workloads.

AWS’ significance to Amazon’s overall business cannot be overstated. In the first quarter of this year, AWS made up a little under a fifth of Amazon’s total revenue, with sales soaring 17 per cent year over year to reach $25 billion. AWS currently holds about a third of the global cloud computing market, while Microsoft’s Azure covers roughly a quarter.

Amazon’s commitment to its custom chip strategy was demonstrated during the recent Prime Day, a two-day sales event at Amazon.com. To handle the highly elevated level of shopping as well as streaming video, music, and other content, Amazon deployed an impressive 250,000 Graviton chips and 80,000 of its custom AI chips across its platforms. Adobe Analytics announced record Prime Day results of $14.2 billion in sales.

As Amazon intensifies its work on AI chips, industry leader Nvidia is not standing still. Nvidia’s CEO, Jensen Huang, has presented the company’s latest Blackwell chips, which are scheduled for release later in the year. Their performance has increased significantly, and Huang promised that the new chips are twice as powerful for AI model training and five times faster for inference.

Nvidia’s dominant position in the AI chip market is underscored by its impressive client list, which includes tech giants like Amazon, Google, Microsoft, OpenAI, and Meta. The company’s focus on AI has propelled its market value to a staggering $2 trillion, making it the third most valuable company globally, behind only Microsoft and Apple.

As the AI chip race intensifies, Nvidia is also diversifying its offerings. The company has introduced new software tools to facilitate AI integration across various industries and is developing specialised chips for emerging applications such as in-car chatbots and humanoid robots.

(Image by Gerd Altmann)

See also: Nvidia: World’s most valuable company under French antitrust fire

Amazon will use computer vision to spot defects before dispatch (4 June 2024)
https://www.artificialintelligence-news.com/news/amazon-use-computer-vision-spot-defects-before-dispatch/

Amazon will harness computer vision and AI to ensure customers receive products in pristine condition and further its sustainability efforts. The initiative – dubbed “Project P.I.” (short for “private investigator”) – operates within Amazon fulfilment centres across North America, where it will scan millions of products daily for defects.

Project P.I. leverages generative AI and computer vision technologies to detect issues such as damaged products or incorrect colours and sizes before they reach customers. The AI model not only identifies defects but also helps uncover the root causes, enabling Amazon to implement preventative measures upstream. This system has proven highly effective in the sites where it has been deployed, accurately identifying product issues among the vast number of items processed each month.

Before any item is dispatched, it passes through an imaging tunnel where Project P.I. evaluates its condition. If a defect is detected, the item is isolated and further investigated to determine if similar products are affected.

Amazon associates review the flagged items and decide whether to resell them at a discount via Amazon’s Second Chance site, donate them, or find alternative uses. This technology aims to act as an extra pair of eyes, enhancing manual inspections at several North American fulfilment centres, with plans for expansion throughout 2024.
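
To make that flow concrete, here is a purely illustrative sketch of the dispatch-time decision the article describes. None of these names correspond to a public Amazon API; the functions and defect categories are hypothetical stand-ins for the imaging-tunnel scan, isolation, and associate-review steps.

```python
# Purely illustrative: hypothetical stand-ins for the scan/isolate/review flow
# described in the article. This is not an Amazon API.
from dataclasses import dataclass
from typing import Optional

@dataclass
class ScanResult:
    item_id: str
    defect_found: bool
    defect_type: Optional[str] = None  # e.g. "damaged", "wrong_colour", "wrong_size"

def investigate_similar_inventory(scan: ScanResult) -> None:
    """Hypothetical follow-up: check whether similar products share the defect."""
    print(f"Investigating batch for defect type: {scan.defect_type}")

def route_item(scan: ScanResult) -> str:
    """Decide what happens to an item after the imaging-tunnel evaluation."""
    if not scan.defect_found:
        return "dispatch"                  # item continues on to the customer
    investigate_similar_inventory(scan)    # isolate and look for a wider issue
    return "flag_for_associate_review"     # discounted resale, donation, or another use

print(route_item(ScanResult(item_id="B00EXAMPLE", defect_found=True, defect_type="damaged")))
```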

Dharmesh Mehta, Amazon’s VP of Worldwide Selling Partner Services, said: “We want to get the experience right for customers every time they shop in our store.

“By leveraging AI and product imaging within our operations facilities, we are able to efficiently detect potentially damaged products and address more of those issues before they ever reach a customer, which is a win for the customer, our selling partners, and the environment.”

Project P.I. also plays a crucial role in Amazon’s sustainability initiatives. By preventing damaged or defective items from reaching customers, the system helps reduce unwanted returns, wasted packaging, and unnecessary carbon emissions from additional transportation.

Kara Hurst, Amazon’s VP of Worldwide Sustainability, commented: “AI is helping Amazon ensure that we’re not just delighting customers with high-quality items, but we’re extending that customer obsession to our sustainability work by preventing less-than-perfect items from leaving our facilities, and helping us avoid unnecessary carbon emissions due to transportation, packaging, and other steps in the returns process.”

In parallel, Amazon is utilising a generative AI system equipped with a Multi-Modal LLM (MLLM) to investigate the root causes of negative customer experiences.

When defects reported by customers slip through initial checks, this system reviews customer feedback and analyses images from fulfilment centres to understand what went wrong. For example, if a customer receives the wrong size of a product, the system examines the product labels in fulfilment centre images to pinpoint the error.

This technology is also beneficial for Amazon’s selling partners, especially the small and medium-sized businesses that make up over 60% of Amazon’s sales. By making defect data more accessible, Amazon helps these sellers rectify issues quickly and reduce future errors.

(Photo by Andrew Stickelman)

See also: X now permits AI-generated adult content

Amazon trains 980M parameter LLM with ‘emergent abilities’ (15 February 2024)
https://www.artificialintelligence-news.com/news/amazon-trains-980m-parameter-llm-emergent-abilities/

Researchers at Amazon have trained a new large language model (LLM) for text-to-speech that they claim exhibits “emergent” abilities. 

The 980 million parameter model, called BASE TTS, is the largest text-to-speech model yet created. The researchers trained models of various sizes on up to 100,000 hours of public domain speech data to see if they would observe the same performance leaps that occur in natural language processing models once they grow past a certain scale. 

They found that their medium-sized 400 million parameter model – trained on 10,000 hours of audio – showed a marked improvement in versatility and robustness on tricky test sentences.

The test sentences contained complex lexical, syntactic, and paralinguistic features like compound nouns, emotions, foreign words, and punctuation that normally trip up text-to-speech systems. While BASE TTS did not handle them perfectly, it made significantly fewer errors in stress, intonation, and pronunciation than existing models.

“These sentences are designed to contain challenging tasks—none of which BASE TTS is explicitly trained to perform,” explained the researchers. 

The largest 980 million parameter version of the model – trained on 100,000 hours of audio – did not demonstrate further abilities beyond the 400 million parameter version.

While experimental, the creation of BASE TTS demonstrates that these models can reach new versatility thresholds as they scale, which is an encouraging sign for conversational AI. The researchers plan further work to identify the optimal model size for emergent abilities.

The model is also designed to be lightweight and streamable, packaging emotional and prosodic data separately. This could allow the natural-sounding spoken audio to be transmitted across low-bandwidth connections.

You can find the full BASE TTS paper on arXiv here.

(Photo by Nik on Unsplash)

See also: OpenAI rolls out ChatGPT memory to select users

Amazon is building a LLM to rival OpenAI and Google (8 November 2023)
https://www.artificialintelligence-news.com/news/amazon-is-building-llm-rival-openai-and-google/

Amazon is reportedly making substantial investments in the development of a large language model (LLM) named Olympus. 

According to Reuters, the tech giant is pouring millions into this project to create a model with a staggering two trillion parameters. OpenAI’s GPT-4, for comparison, is estimated to have around one trillion parameters.

This move puts Amazon in direct competition with OpenAI, Meta, Anthropic, Google, and others. The team behind Amazon’s initiative is led by Rohit Prasad, former head of Alexa, who now reports directly to CEO Andy Jassy.

Prasad, as the head scientist of artificial general intelligence (AGI) at Amazon, has unified AI efforts across the company. He brought in researchers from the Alexa AI team and Amazon’s science division to collaborate on training models, aligning Amazon’s resources towards this ambitious goal.

Amazon’s decision to invest in developing homegrown models stems from the belief that having their own LLMs could enhance the attractiveness of their offerings, particularly on Amazon Web Services (AWS).

Enterprises on AWS are constantly seeking top-performing models and Amazon’s move aims to cater to the growing demand for advanced AI technologies.

While Amazon has not provided a specific timeline for the release of the Olympus model, insiders suggest that the company’s focus on training larger AI models underscores its commitment to remaining at the forefront of AI research and development.

Training such massive AI models is a costly endeavour, primarily due to the significant computing power required.

Amazon’s decision to invest heavily in LLMs is part of its broader strategy, as revealed in an earnings call in April. During the call, Amazon executives announced increased investments in LLMs and generative AI while reducing expenditure on retail fulfilment and transportation.

Amazon’s move signals a new chapter in the race for AI supremacy, with major players vying to push the boundaries of the technology.

(Photo by ANIRUDH on Unsplash)

See also: OpenAI introduces GPT-4 Turbo, platform enhancements, and reduced pricing

Amazon invests $4B in Anthropic to boost AI capabilities (25 September 2023)
https://www.artificialintelligence-news.com/news/amazon-invests-4b-anthropic-boost-ai-capabilities/

Amazon has announced an investment of up to $4 billion into Anthropic, an emerging AI startup renowned for its innovative Claude chatbot.

Anthropic was founded by siblings Dario and Daniela Amodei, who were previously associated with OpenAI. This latest investment signifies Amazon’s strategic move to bolster its presence in the ever-intensifying AI arena.

“We are excited to use AWS’s Trainium chips to develop future foundation models,” said Dario Amodei, co-founder and CEO of Anthropic. “Since announcing our support of Amazon Bedrock in April, Claude has seen significant organic adoption from AWS customers.

“By significantly expanding our partnership, we can unlock new possibilities for organisations of all sizes, as they deploy Anthropic’s safe, state-of-the-art AI systems together with AWS’s leading cloud technology.”

While Amazon’s investment in Anthropic may seem overshadowed by Microsoft’s reported $13 billion commitment to OpenAI, it is a clear indication of Amazon’s ambition in the rapidly-evolving AI landscape. The collaboration between Amazon and Anthropic holds the promise of reshaping the AI sector with innovative developments.

“We have tremendous respect for Anthropic’s team and foundation models, and believe we can help improve many customer experiences, short and long-term, through our deeper collaboration,” said Andy Jassy, CEO of Amazon.

“Customers are quite excited about Amazon Bedrock, AWS’ new managed service that enables companies to use various foundation models to build generative AI applications on top of, as well as AWS Trainium, AWS’ AI training chip, and our collaboration with Anthropic should help customers get even more value from these two capabilities.”

Anthropic’s flagship product, the Claude AI model, distinguishes itself by claiming a higher level of safety compared to its competitors.

Claude and its advanced iteration, Claude 2, are large language model-based chatbots similar in functionality to OpenAI’s ChatGPT and Google’s Bard. They excel in tasks like text translation, code generation, and answering a variety of questions.

What sets Claude apart is its ability to autonomously revise responses, eliminating the need for human moderation. This unique feature positions Claude as a safer and more dependable AI tool, especially in contexts where precise, unbiased information is crucial.

Claude’s capacity to handle larger prompts also makes it particularly suitable for tasks involving extensive business or legal documents, offering a valuable edge in industries reliant on meticulous data analysis.

As part of this strategic investment, Amazon will acquire a minority ownership stake in Anthropic. Amazon is set to integrate Anthropic’s cutting-edge technology into a range of its products, including the Amazon Bedrock service, designed for building AI applications. 

In return, Anthropic will leverage Amazon’s custom-designed chips for the development, training, and deployment of its future AI foundation models. The partnership also solidifies Anthropic’s commitment to Amazon Web Services (AWS) as its primary cloud provider.

In the initial phase, Amazon has committed $1.25 billion to Anthropic, with an option to increase its investment by an additional $2.75 billion. If the full $4 billion investment materialises, it will become the largest publicly-known investment linked to AWS.

Anthropic’s partnership with Amazon comes alongside its existing collaboration with Google, where Google holds approximately a 10 percent stake following a $300 million investment earlier this year. Anthropic has affirmed its intent to maintain this relationship with Google and continue offering its technology through Google Cloud, showcasing its commitment to broadening its reach across the industry.

In a rapidly-advancing landscape, Amazon’s strategic investment in Anthropic underscores its determination to remain at the forefront of AI innovation and sets the stage for exciting future developments.

(Image Credit: Anthropic)

See also: OpenAI reveals DALL-E 3 text-to-image model

MIT launches cross-disciplinary program to boost AI hardware innovation (31 March 2022)
https://www.artificialintelligence-news.com/news/mit-launches-cross-disciplinary-program-boost-ai-hardware-innovation/

MIT has launched a new academia and industry partnership called the AI Hardware Program that aims to boost research and development.

“A sharp focus on AI hardware manufacturing, research, and design is critical to meet the demands of the world’s evolving devices, architectures, and systems,” says Anantha Chandrakasan, dean of the MIT School of Engineering, and Vannevar Bush Professor of Electrical Engineering and Computer Science. 

“Knowledge-sharing between industry and academia is imperative to the future of high-performance computing.”

There are five inaugural members of the program:

  • Amazon
  • Analog Devices
  • ASML
  • NTT Research
  • TSMC

As the diversity of the inaugural members shows, the program is intended to be a cross-disciplinary effort.

“As AI systems become more sophisticated, new solutions are sorely needed to enable more advanced applications and deliver greater performance,” commented Daniel Huttenlocher, dean of the MIT Schwarzman College of Computing and Henry Ellis Warren Professor of Electrical Engineering and Computer Science.

“Our aim is to devise real-world technological solutions and lead the development of technologies for AI in hardware and software.”

A key goal of the program is to help create more energy-efficient systems.

“We are all in awe at the seemingly superhuman capabilities of today’s AI systems. But this comes at a rapidly increasing and unsustainable energy cost,” explained Jesús del Alamo, the Donner Professor in MIT’s Department of Electrical Engineering and Computer Science.

“Continued progress in AI will require new and vastly more energy-efficient systems. This, in turn, will demand innovations across the entire abstraction stack, from materials and devices to systems and software. The program is in a unique position to contribute to this quest.”

Other key areas of exploration include:

  • Analog neural networks
  • New CMOS designs
  • Heterogeneous integration for AI systems
  • Monolithic-3D AI systems
  • Analog nonvolatile memory devices
  • Software-hardware co-design
  • Intelligence at the edge
  • Intelligent sensors
  • Energy-efficient AI
  • Intelligent Internet of Things (IIoT)
  • Neuromorphic computing
  • AI edge security
  • Quantum AI
  • Wireless technologies
  • Hybrid-cloud computing
  • High-performance computation

It’s an exhaustive list and an ambitious project. However, the AI Hardware Program is off to a great start with the inaugural members bringing significant talent and expertise in their respective fields to the table.

“We live in an era where paradigm-shifting discoveries in hardware, systems communications, and computing have become mandatory to find sustainable solutions—solutions that we are proud to give to the world and generations to come,” says Aude Oliva, Senior Research Scientist in the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL) and Director of Strategic Industry Engagement at the MIT Schwarzman College of Computing.

The program is being co-led by Jesús del Alamo and Aude Oliva. Anantha Chandrakasan will serve as its chair.

More information about the AI Hardware Program can be found here.

(Photo by Nejc Soklič on Unsplash)

Amazon will continue to ban police from using its facial recognition AI (24 May 2021)
https://www.artificialintelligence-news.com/news/amazon-continue-ban-police-using-facial-recognition-ai/

Amazon will extend a ban it enacted last year on the use of its facial recognition for law enforcement purposes.

The web giant’s Rekognition service is one of the most powerful facial recognition tools available. Last year, Amazon signed a one-year moratorium that banned its use by police departments following a string of cases where facial recognition services – from various providers – were found to be inaccurate and/or misused by law enforcement.

Amazon has now extended its ban indefinitely.

Facial recognition services have already led to wrongful arrests that disproportionately impacted marginalised communities.

Last year, the American Civil Liberties Union (ACLU) filed a complaint against the Detroit police after Robert Williams, a Black man, was arrested on his front lawn “as his wife Melissa looked on and as his daughters wept from the trauma” following a misidentification by a facial recognition system.

Williams was held in a “crowded and filthy” cell overnight without being given any reason before being released on a cold and rainy January night where he was forced to wait outside on the curb for approximately an hour while his wife scrambled to find childcare so that she could come and pick him up.

“Facial recognition is inherently dangerous and inherently oppressive. It cannot be reformed or regulated. It must be abolished,” said Evan Greer, Deputy Director of digital rights group Fight for the Future.

Clearview AI – a controversial facial recognition provider that scrapes data about people from across the web and is used by approximately 2,400 agencies across the US alone – boasted in January that police use of its system jumped 26 percent following the Capitol raid.

Last year, the UK and Australia launched a joint probe into Clearview AI’s practices. Clearview AI was also forced to suspend operations in Canada after the federal Office of the Privacy Commissioner of Canada opened an investigation into the company.

Many states, countries, and even some police departments are taking matters into their own hands and banning the use of facial recognition by law enforcement. Various rights groups continue to apply pressure and call for more to follow.

Human rights group Liberty won the first international case banning the use of facial recognition technology for policing in August last year. Liberty launched the case on behalf of Cardiff, Wales resident Ed Bridges who was scanned by the technology first on a busy high street in December 2017 and again when he was at a protest in March 2018.

Following the case, the Court of Appeal ruled that South Wales Police’s use of facial recognition technology breaches privacy rights, data protection laws, and equality laws. South Wales Police had used facial recognition technology around 70 times – with around 500,000 people estimated to have been scanned by May 2019 – but must now halt its use entirely.

Facial recognition tests in the UK so far have been nothing short of a complete failure. An initial trial at the 2016 Notting Hill Carnival led to not a single person being identified. A follow-up trial the following year led to no legitimate matches, but 35 false positives.

A 2019 independent report into the Met Police’s facial recognition trials concluded that it was only verifiably accurate in just 19 percent of cases.

(Photo by Bermix Studio on Unsplash)
