Amazon | Amazon AI Developments & News | AI News

Amazon Nova Act: A step towards smarter, web-native AI agents (1 April 2025)

Amazon has introduced Nova Act, an advanced AI model engineered for smarter agents that can execute tasks within web browsers.

While large language models popularised the concept of “agents” as tools that answer queries or retrieve information via methods such as Retrieval-Augmented Generation (RAG), Amazon envisions something more robust. The company defines agents not just as responders but as entities capable of performing tangible, multi-step tasks in diverse digital and physical environments.

“Our dream is for agents to perform wide-ranging, complex, multi-step tasks like organising a wedding or handling complex IT tasks to increase business productivity,” said Amazon.

Current market offerings often fall short, with many agents requiring continuous human supervision and their functionality dependent on comprehensive API integration—something not feasible for all tasks. Nova Act is Amazon’s answer to these limitations.

Alongside the model, Amazon is releasing a research preview of the Amazon Nova Act SDK. Using the SDK, developers can create agents capable of automating web tasks like submitting out-of-office notifications, scheduling calendar holds, or enabling automatic email replies.

The SDK aims to break down complex workflows into dependable “atomic commands” such as searching, checking out, or interacting with specific interface elements like dropdowns or popups. Detailed instructions can be added to refine these commands, allowing developers to, for instance, instruct an agent to bypass an insurance upsell during checkout.

To further enhance accuracy, the SDK supports browser manipulation via Playwright, API calls, Python integrations, and parallel threading to overcome web page load delays.
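
To make that workflow concrete, here is a minimal sketch in Python of what an agent built with the SDK might look like. The NovaAct class, its import path, constructor arguments, and the act() method are assumptions modelled on the atomic-command approach described above, not a copy of the official interface.

    from nova_act import NovaAct  # hypothetical import path; illustrative only

    # The class name and method signatures below are assumptions based on the
    # atomic-command approach Amazon describes, not Amazon's documented API.
    with NovaAct(starting_page="https://calendar.example.com") as agent:
        # Each act() call is intended as one dependable "atomic command".
        agent.act("log in with the saved work account")
        agent.act("create a calendar hold titled 'Out of office' for next Friday")
        # Detailed natural-language instructions can refine a command, e.g.:
        agent.act("set an automatic email reply for the same day, keeping the default template")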

Nova Act: Exceptional performance on benchmarks

Unlike other generative models that showcase middling accuracy on complex tasks, Nova Act prioritises reliability. Amazon highlights its model’s impressive scores of over 90% on internal evaluations for specific capabilities that typically challenge competitors. 

Nova Act achieved a near-perfect 0.939 on the ScreenSpot Web Text benchmark, which measures how well a model follows natural language instructions for text-based interactions, such as adjusting font sizes. Competing models such as Claude 3.7 Sonnet (0.900) and OpenAI's CUA (0.883) trail behind by significant margins.

Similarly, Nova Act scored 0.879 in the ScreenSpot Web Icon benchmark, which tests interactions with visual elements like rating stars or icons. While the GroundUI Web test, designed to assess an AI’s proficiency in navigating various user interface elements, showed Nova Act slightly trailing competitors, Amazon sees this as an area ripe for improvement as the model evolves.

Amazon stresses its focus on delivering practical reliability. Once an agent built using Nova Act functions as expected, developers can deploy it headlessly, integrate it as an API, or even schedule it to run tasks asynchronously. In one demonstrated use case, an agent automatically orders a salad for delivery every Tuesday evening without requiring ongoing user intervention.
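
As a rough sketch of that kind of hands-off scheduling, the snippet below wires a placeholder agent function into the third-party schedule package so it runs every Tuesday evening. The order_salad() body simply stands in for whatever previously validated, headless agent run a developer wants to repeat; it is not Amazon's example code.

    import time
    import schedule  # third-party package: pip install schedule

    def order_salad():
        # Placeholder for a previously tested, headless agent run, e.g. one that
        # walks through a delivery site's checkout without user intervention.
        print("Running the salad-ordering agent...")

    # Trigger the job every Tuesday evening without any ongoing user input.
    schedule.every().tuesday.at("18:00").do(order_salad)

    while True:
        schedule.run_pending()
        time.sleep(60)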

Amazon sets out its vision for scalable and smart AI agents

One of Nova Act’s standout features is its ability to transfer its user interface understanding to new environments with minimal additional training. Amazon shared an instance where Nova Act performed admirably in browser-based games, even though its training had not included video game experiences. This adaptability positions Nova Act as a versatile agent for diverse applications.

This capability is already being leveraged in Amazon’s own ecosystem. Within Alexa+, Nova Act enables self-directed web navigation to complete tasks for users, even when API access is not comprehensive enough. This represents a step towards smarter AI assistants that can function independently, harnessing their skills in more dynamic ways.

Amazon is clear that Nova Act represents the first stage in a broader mission to craft intelligent, reliable AI agents capable of handling increasingly complex, multi-step tasks. 

Expanding beyond simple instructions, Amazon’s focus is on training agents through reinforcement learning across varied, real-world scenarios rather than overly simplistic demonstrations. This foundational model serves as a checkpoint in a long-term training curriculum for Nova models, indicating the company’s ambition to reshape the AI agent landscape.

“The most valuable use cases for agents have yet to be built,” Amazon noted. “The best developers and designers will discover them. This research preview of our Nova Act SDK enables us to iterate alongside these builders through rapid prototyping and iterative feedback.”

Nova Act is a step towards making AI agents truly useful for complex, digital tasks. From rethinking benchmarks to emphasising reliability, its design philosophy is centred around empowering developers to move beyond what’s possible with current-generation tools. 

See also: Anthropic provides insights into the ‘AI biology’ of Claude

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

Big tech's $320B AI spend defies efficiency race (12 February 2025)

Tech giants are beginning an unprecedented $320 billion AI infrastructure spending spree in 2025, brushing aside concerns about more efficient AI models from challengers like DeepSeek. The massive investment push from Amazon, Microsoft, Google, and Meta signals the big players’ unwavering conviction that AI’s future demands bold infrastructure bets, despite (or perhaps because of) emerging efficiency breakthroughs.

The stakes are high, with collective capital expenditure jumping 30% from 2024's $246 billion. While investors may question the necessity of such aggressive spending, tech leaders are doubling down on their belief that AI represents a transformative opportunity worth every dollar.

Amazon stands at the forefront of this AI spending race, according to a report by Business Insider. The company is flexing its financial muscle with a planned $100 billion capital expenditure for 2025 – a dramatic leap from its $77 billion last year. Amazon CEO Andy Jassy isn't mincing words, calling AI a "once-in-a-lifetime business opportunity" that demands aggressive investment.

Microsoft’s Satya Nadella also has a bullish stance with his own hard numbers. Having earmarked $80 billion for AI infrastructure in 2025, Microsoft’s existing AI ventures are already delivering; Nadella has spoken of $13 billion annual revenue from AI and 175% year-over-year growth.

His perspective draws from economic wisdom: citing the Jevons paradox, he argues that making AI more efficient and accessible will spark an unprecedented surge in demand.

Not to be outdone, Google parent Alphabet is pushing all its chips to the centre of the table, with a $75 billion infrastructure investment in 2025, dwarfing analysts’ expectations of $58 billion. Despite market jitters about cloud growth and AI strategy, CEO Sundar Pichai maintains Google’s product innovation engine is firing on all cylinders.

Meta's approach is to pour $60-65 billion into capital spending in 2025 – up from $39 billion in 2024. The company is carving its own path by championing an "American standard" for open-source AI models, a strategy that has caught investor attention, particularly given Meta's proven track record in monetising AI through sophisticated ad targeting.

The emergence of DeepSeek’s efficient AI models has sparked some debate in investment circles. Investing.com’s Jesse Cohen voices growing demands for concrete returns on existing AI investments. Yet Wedbush’s Dan Ives dismisses such concerns, likening DeepSeek to “the Temu of AI” and insisting the revolution is just beginning.

The market’s response to these bold plans tells a mixed story. Meta’s strategy has won investor applause, while Amazon and Google face more sceptical reactions, with stock drops of 5% and 8% respectively following spending announcements in earnings calls. Yet tech leaders remain undeterred, viewing robust AI infrastructure as non-negotiable for future success.

The intensity of infrastructure investment suggests a reality: technological breakthroughs in AI efficiency aren’t slowing the race – they’re accelerating it. As big tech pours unprecedented resources into AI development, it’s betting that increased efficiency will expand rather than contract the market for AI services.

The high-stakes gamble on AI's future reveals a shift in how big tech views investment. Rather than waiting to see how efficiency improvements might reduce costs, the giants are scaling up aggressively, convinced that tomorrow's AI landscape will demand more infrastructure, not less. In this view, DeepSeek's breakthroughs aren't a threat to their strategy – they're validation of AI's expanding potential.

The message from Silicon Valley is that the AI revolution demands massive infrastructure investment, and the giants of tech are all in. The question isn’t whether to invest in AI infrastructure, but whether $320 billion will be enough to meet the coming surge in demand.

See also: DeepSeek ban? China data transfer boosts security concerns

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

Amazon stakes $4bn more in Anthropic – the next tech arms race? (17 December 2024)

Amazon has announced an additional $4 billion investment in Anthropic, bringing the company's total commitment to $8 billion as part of its expanding artificial intelligence strategy. The investment, announced on November 22, 2024, strengthens Amazon's position in the AI sector, building on its established AWS cloud computing business.

While maintaining Amazon’s minority stake in Anthropic, the investment represents a significant development in the company’s approach to AI technology and cloud infrastructure. The expanded collaboration goes beyond mere financial investment. Anthropic has now designated AWS as its “primary training partner” for AI model development, in addition to Amazon’s role as a primary cloud provider. 

Amazon's investment will see Anthropic utilising AWS Trainium and Inferentia chips to train and deploy its future foundation models, including any updates to the flagship Claude AI system.

AWS’s competitive edge

The continuing partnership provides Amazon with several strategic advantages in the competitive cloud computing and AI services market:

  1. Hardware innovation: The commitment to use AWS Trainium and Inferentia chips for Anthropic’s advanced AI models validates Amazon’s investment in custom AI chips and positions AWS as a serious competitor to NVIDIA in the AI infrastructure space.
  2. Cloud service enhancement: AWS customers will receive early access to fine-tuning capabilities for data processed by Anthropic models. This benefit alone could attract more enterprises to Amazon’s cloud platform.
  3. Model performance: Claude 3.5 Sonnet, Anthropic’s latest model available through Amazon Bedrock, has demonstrated exceptional performance in agentic coding tasks, according to Anthropic.

Amazon’s multi-faceted AI strategy

While the increased investment in Anthropic is impressive in monetary terms, it represents just one component of Amazon’s broader AI strategy. The company appears to be pursuing a multi-pronged approach:

  1. External partnerships: The Anthropic investment provides immediate access to cutting-edge AI capabilities from third-parties.
  2. Internal development: Amazon continues to develop its own AI models and capabilities.
  3. Infrastructure development: Ongoing investment in AI-specific hardware like Trainium chips demonstrates a commitment to building AI-focussed infrastructure.

The expanded partnership signals Amazon’s long-term commitment to AI development yet retains flexibility thanks to its minority stakeholding. This approach allows Amazon to benefit from Anthropic’s innovations while preserving the ability to pursue other partnerships with external AI companies and continue internal development initiatives.

The investment reinforces the growing trend where major tech companies seek strategic AI partnerships rather than relying solely on internal development. It also highlights the important role of cloud infrastructure in the AI industry’s growth. AWS has positioned itself as a suitable platform for AI model training and deployment.

Amazon Bedrock gains new AI models, tools, and features (5 December 2024)

Amazon Web Services (AWS) has announced improvements to bolster Bedrock, its fully managed generative AI service.

The updates include new foundational models from several AI pioneers, enhanced data processing capabilities, and features aimed at improving inference efficiency.

Dr Swami Sivasubramanian, VP of AI and Data at AWS, said: “Amazon Bedrock continues to see rapid growth as customers flock to the service for its broad selection of leading models, tools to easily customise with their data, built-in responsible AI features, and capabilities for developing sophisticated agents.

“With this new set of capabilities, we are empowering customers to develop more intelligent AI applications that will deliver greater value to their end-users.”

Amazon Bedrock expands its model diversity

AWS is set to become the first cloud provider to feature models from AI developers Luma AI and poolside, while also incorporating Stability AI’s latest release.

Through its new Amazon Bedrock Marketplace, customers will have access to over 100 emerging and specialised models from across industries, ensuring they can select the most appropriate tools for their unique needs.

  • Luma AI’s Ray 2 

Luma AI, known for advancing generative AI in video content creation, brings its next-generation Ray 2 model to Amazon Bedrock. This model generates high-quality, lifelike video outputs from text or image inputs and allows organisations to create detailed outputs in fields such as fashion, architecture, and graphic design. AWS’s presence as the first provider for this model ensures businesses can experiment with new camera angles, cinematographic styles, and consistent characters with a frictionless workflow.

  • poolside’s malibu and point

Designed to address challenges in modern software engineering, poolside’s models – malibu and point – specialise in code generation, testing, documentation, and real-time code completion. Importantly, developers can securely fine-tune these models using their private datasets. Accompanied by Assistant – an integration for development environments – poolside’s tools allow engineering teams to accelerate productivity, ship projects faster, and increase accuracy.

  • Stability AI’s Stable Diffusion 3.5 Large  

Amazon Bedrock customers will soon gain access to Stability AI’s text-to-image model Stable Diffusion 3.5 Large. This addition supports businesses in creating high-quality visual media for use cases in areas like gaming, advertising, and retail.  

Through the Bedrock Marketplace, AWS also enables access to over 100 specialised models. These include solutions tailored to fields such as biology (EvolutionaryScale’s ESM3 generative model), financial data (Writer’s Palmyra-Fin), and media (Camb.ai’s text-to-audio MARS6).

(Image: the expanded AI model catalogue in the Amazon Bedrock Marketplace)

Zendesk, a global customer service software firm, leverages Bedrock’s marketplace to personalise support across email and social channels using AI-driven localisation and sentiment analysis tools. For example, they use models like Widn.AI to tailor responses based on real-time sentiment in customers’ native languages.

Scaling inference with new Amazon Bedrock features

Large-scale generative AI applications require balancing the cost, latency, and accuracy of inference processes. AWS is addressing this challenge with two new Amazon Bedrock features:

  • Prompt Caching

The new caching capability reduces redundant processing of prompts by securely storing frequently used queries, saving on both time and costs. This feature can lead to up to a 90% reduction in costs and an 85% decrease in latency. For example, Adobe incorporated Prompt Caching into its Acrobat AI Assistant to summarise documents and answer questions, achieving a 72% reduction in response times during initial testing.  

  • Intelligent Prompt Routing

This feature dynamically directs prompts to the most suitable foundation model within a family, optimising results for both cost and quality. Customers such as Argo Labs, which builds conversational voice AI solutions for restaurants, have already benefited. While simpler queries (like booking tables) are handled by smaller models, more nuanced requests (e.g., dietary-specific menu questions) are intelligently routed to larger models. Argo Labs' use of Intelligent Prompt Routing has not only improved response quality but also reduced costs by up to 30%. A minimal sketch of how routing can be invoked follows this list.
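
As a rough illustration of the routing workflow, the sketch below calls the standard boto3 Converse API for Amazon Bedrock but passes a prompt-router identifier in place of a single model ID, letting Bedrock decide which foundation model answers. The ARN shown is a placeholder, and the exact resource naming is an assumption rather than something stated in this article.

    import boto3

    bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

    # Placeholder: a real prompt router would be created first and its ARN used here.
    PROMPT_ROUTER_ARN = "arn:aws:bedrock:us-east-1:123456789012:default-prompt-router/example"

    response = bedrock.converse(
        modelId=PROMPT_ROUTER_ARN,  # the router, not a fixed model, picks the responder
        messages=[{
            "role": "user",
            "content": [{"text": "Do you have any gluten-free options on the dinner menu?"}],
        }],
    )

    # Print the chosen model's reply.
    print(response["output"]["message"]["content"][0]["text"])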

Data utilisation: Knowledge bases and automation

A key attraction of generative AI lies in its ability to extract value from data. AWS is enhancing its Amazon Bedrock Knowledge Bases to ensure organisations can deploy their unique datasets for richer AI-powered user experiences.  

  • Using structured data 

AWS has introduced capabilities for structured data retrieval within Knowledge Bases. This enhancement allows customers to query data stored across Amazon services like SageMaker Lakehouse and Redshift using natural-language prompts, which the service translates into SQL queries behind the scenes (see the sketch after this list). Octus, a credit intelligence firm, plans to use this capability to provide clients with dynamic, natural-language reports on its structured financial data.

  • GraphRAG integration 

By incorporating automated graph modelling (powered by Amazon Neptune), customers can now generate and connect relational data for stronger AI applications. BMW Group, for instance, will use GraphRAG to augment its virtual assistant MAIA. This assistant taps into BMW’s wealth of internal data to deliver comprehensive responses and premium user experiences.
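
To make the natural-language-over-structured-data idea concrete, here is a minimal sketch using the boto3 bedrock-agent-runtime client's retrieve call. The knowledge base ID is hypothetical, the response handling assumes the standard retrievalResults shape, and a real setup would first connect the Redshift or SageMaker Lakehouse tables to the knowledge base.

    import boto3

    agent_runtime = boto3.client("bedrock-agent-runtime", region_name="us-east-1")

    # Hypothetical knowledge base ID backed by a structured data store.
    KNOWLEDGE_BASE_ID = "EXAMPLEKB123"

    response = agent_runtime.retrieve(
        knowledgeBaseId=KNOWLEDGE_BASE_ID,
        retrievalQuery={"text": "What was total revenue by region last quarter?"},
    )

    # The service handles the natural-language-to-SQL translation; the caller
    # only sees the retrieved rows or passages.
    for result in response.get("retrievalResults", []):
        print(result["content"]["text"])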

Separately, AWS has unveiled Amazon Bedrock Data Automation, a tool that transforms unstructured content (e.g., documents, video, and audio) into structured formats for analytics or retrieval-augmented generation (RAG). Companies like Symbeo (automated claims processing) and Tenovos (digital asset management) are already piloting the tool to improve operational efficiency and data reuse.

The expansion of Amazon Bedrock’s ecosystem reflects its growing popularity, with the service recording a 4.7x increase in its customer base over the last year. Industry leaders like Adobe, BMW, Zendesk, and Tenovos have all embraced AWS’s latest innovations to improve their generative AI capabilities.  

Most of the newly announced tools – such as inference management, Knowledge Bases with structured data retrieval, and GraphRAG – are currently in preview, while notable model releases from Luma AI, poolside, and Stability AI are expected soon.

See also: Alibaba Cloud overhauls AI partner initiative

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

Big tech's AI spending hits new heights (22 November 2024)

In 2024, Big Tech is all-in on artificial intelligence, with companies like Microsoft, Amazon, Alphabet, and Meta leading the way.

Their combined spending on AI is projected to exceed a jaw-dropping $240 billion. Why? Because AI isn’t just the future—it’s the present, and the demand for AI-powered tools and infrastructure has never been higher. The companies aren’t just keeping up; they’re setting the pace for the industry.

The scale of their investment is hard to ignore. In the first half of 2023, tech giants poured $74 billion into capital expenditure, and by the third quarter of 2023 that number had jumped to $109 billion. In mid-2024, spending reached $104 billion, a remarkable 47% rise over the same period a year earlier, and by the third quarter of 2024 the total hit $171 billion.

If this pattern continues, Q4 might add another $70 billion, bringing the total to a truly staggering $240 billion for the year.

Why so much spending?

AI’s potential is immense, and companies are making sure they’re positioned to reap the rewards.

  • A growing market: AI is projected to create $20 trillion in global economic impact by 2030. In countries like India, AI could contribute $500 billion to GDP by 2025. With stakes this high, big tech isn’t hesitating to invest heavily.
  • Infrastructure demands: Training and running AI models require massive investment in infrastructure, from data centres to high-performance GPUs. Alphabet increased its capital expenditures by 62% last quarter compared to the previous year, even as it cut its workforce by 9,000 employees to manage costs.
  • Revenue potential: AI is already proving its value. Microsoft’s AI products are expected to generate $10 billion annually—the fastest-growing segment in the company’s history. Alphabet, meanwhile, uses AI to write over 25% of its new code, streamlining operations.

Amazon is also ramping up, with plans to spend $75 billion on capital expenditure in 2024. Meta’s forecast is not far behind, with estimates between $38 and $40 billion. Across the board, organisations recognise that maintaining their edge in AI requires sustained and significant investment.

Supporting revenue streams

What keeps the massive investments coming is the strength of big tech's core businesses. Last quarter, Alphabet's digital advertising machine, powered by Google's search engine, generated $49.39 billion in ad revenue, a 12% year-over-year increase. This is a solid foundation that allows Alphabet to pour resources into building out its AI arsenal without destabilising the bottom line.

Microsoft’s diversified revenue streams are another example. While the company spent $20 billion on AI and cloud infrastructure last quarter, its productivity segment, which includes Office, grew by 12% to $28.3 billion, and its personal computing business, boosted by Xbox and the Activision Blizzard acquisition, grew 17% to $13.2 billion. These successes demonstrate how AI investments can support broader growth strategies.

The financial payoff

Big tech is already seeing the benefits of its heavy spending. Microsoft’s Azure platform has seen substantial growth, with its AI income approaching $6 billion. Amazon’s AI business is growing at triple-digit rates, and Alphabet reported a 34% jump in profits last quarter, with cloud revenue playing a major role.

Meta, while primarily focused on advertising, is leveraging AI to make its platforms more engaging. AI-driven tools, such as improved feeds and search features, keep users on its platforms longer, resulting in new revenue growth.

AI spending shows no signs of slowing down. Tech leaders at Microsoft and Alphabet view AI as a long-term investment critical to their future success. And the results speak for themselves: Alphabet’s cloud revenue is up 35%, while Microsoft’s cloud business grew 20% last quarter.

For the time being, the focus is on scaling up infrastructure and meeting demand. However, the real transformation will come when big tech unlocks AI’s full potential, transforming industries and redefining how we work and live.

By investing in high-quality, centralised data strategies, businesses can ensure trustworthy and accurate AI implementations, and unlock AI's full potential to drive innovation, improve decision-making, and gain a competitive edge. AI's revolutionary promise is within reach—but only for companies prepared to lay the groundwork for sustainable growth and long-term results.

(Photo by Unsplash)

See also: Microsoft tries to convert Google Chrome users

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

Walmart and Amazon drive retail transformation with AI (16 September 2024)

Walmart and Amazon are harnessing AI to drive retail transformation with new consumer experiences and enhanced operational efficiency.

According to analytics firm GlobalData, Walmart is focusing on augmented reality and AI-enhanced store management. Amazon, meanwhile, is leading advancements in customer personalisation and autonomous systems.

Kiran Raj, Practice Head of Disruptive Tech at GlobalData, notes: “Walmart and Amazon are no longer competing for market share alone. Their AI strategies are reshaping the entire retail ecosystem—from Walmart’s blend of digital and physical shopping experiences to Amazon’s operational automation.”

GlobalData’s Disruptor Intelligence Center, utilising its Technology Foresights tool, has identified the strategic focus of these retail titans based on their patent filings.

Walmart has filed over 3,000 AI-related patents, with 20% of these coming in the last three years, indicating a swift evolution in its AI capabilities. In contrast, Amazon boasts more than 9,000 patents, half of which were filed during the same timeframe, underpinning its leadership in AI-driven retail innovations.

AI-powered retail transformation

Walmart is deploying AI-driven solutions like in-store product recognition while making notable strides in AR applications, including virtual try-ons. The company’s progress in smart warehouses and image-based transactions denotes a shift towards fully automated retail, enhancing both speed and precision in customer service.

Amazon stands out with its extensive deployment of AI in customer personalisation and autonomous systems. By harnessing technologies such as Autonomous Network Virtualisation and Automated VNF Deployment, the company is advancing its operational infrastructure and aiming to set new standards in network efficiency and data management.

Walmart’s development of intelligent voice assistants and automated store surveillance emphasises its aim to provide a seamless and secure shopping experience. Concurrently, Amazon’s progress in AI for coding and surveillance is pushing the boundaries of enterprise AI applications and enhancing security capabilities.

“Walmart and Amazon’s aggressive innovation strategies not only strengthen their market positions but also set a blueprint for the future of the retail sector,” Raj explains.

“As these two giants continue to push the boundaries of retail AI, the broader industry can expect ripple effects in supply chain innovation, customer loyalty programmes, and operational scalability—setting the stage for a new era of consumer engagement.”

(Photo by Marques Thomas)

See also: Whitepaper dispels fears of AI-induced job losses

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

Amazon partners with Anthropic to enhance Alexa (2 September 2024)

Amazon is gearing up to roll out a revamped version of its Alexa voice assistant, which is expected to be available this October, right before the US shopping rush.

Internally referred to as “Remarkable,” the new technology will be powered by Anthropic’s Claude AI models. Sources close to the matter have indicated that this shift occurred due to the underperformance of Amazon’s in-house software.

The enhanced Alexa will operate using advanced generative AI to handle more complex queries. Amazon plans to offer the new Alexa as a subscription service, priced between $5 and $10 per month, while the classic version of Alexa will remain free. This approach marks a significant change for Amazon and suggests that the company aims to turn this voice assistant into a profitable venture after years of limited success in generating revenue through this platform.

Amazon's decision to adopt an external model, Claude, indicates a strategic shift. Amazon typically prefers to build everything in-house to minimise its dependence on third-party vendors and to retain control over customer data and business strategy. However, its traditional approach has apparently not delivered the AI capability Alexa needs, or Amazon has realised it needs more powerful models sooner than it can build them. It is also worth noting that rival AI developer OpenAI has struck similar arrangements with major technology companies such as Apple and Microsoft.

The launch of the “Remarkable” Alexa is anticipated during Amazon’s annual devices and services event in September, though the company has not confirmed the exact date. This event will also mark the first public appearance of Panos Panay, the new head of Amazon’s devices division, who has taken over from long-time executive David Limp.

The updated version of Alexa would be a more interactive and intuitive assistant, as the new functionality would stem from its conversational mode. The assistant is envisioned to do more than just recognise patterns in people’s speech; it would be able to hold conversations built on previous interactions. The most likely features include personalised shopping advice, news aggregation, and more advanced home automation. As for whether customers would pay for Alexa, this likely depends on the final set of available features. The issue might be particularly pressing for Amazon, given that customers already pay for Prime membership.

The future for Alexa is quite ambitious, but it also bears significant risks. For the new version to be successful, internal performance benchmarks must be met. While estimates for “Remarkable” Alexa suggest that even a small percentage of current users paying for the premium version could become a substantial income stream for Amazon, the likelihood of achieving the expected outcomes remains uncertain.

However, Amazon’s partnership with Anthropic is currently under regulatory review, largely due to an investigation by the UK’s antitrust regulator. The impending upgrade announcement and the regulator’s response could significantly influence the company’s future activities.

Amazon’s initiative to adopt an AI solution developed by Anthropic marks a significant shift for the company, which previously focused on developing its proprietary technology. At this point, it is possible to view this move as part of the general trend in the industry to turn to partnerships regarding AI development to enhance the competitiveness of products.

See also: Amazon strives to outpace Nvidia with cheaper, faster AI chips

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

Chinese firms use cloud loophole to access US AI tech (28 August 2024)

Chinese organisations are utilising cloud services from Amazon and its competitors to gain access to advanced US AI chips and capabilities that they cannot otherwise obtain, according to a Reuters report based on public tender documents.

In a comprehensive investigation, Reuters revealed how Chinese cloud access to US AI chips is facilitated through intermediaries. Over 50 tender documents posted in the past year revealed that at least 11 Chinese entities have sought access to restricted US technologies or cloud services. Four of these explicitly named Amazon Web Services (AWS) as a cloud service provider, though accessed through Chinese intermediaries rather than directly from AWS.

“AWS complies with all applicable US laws, including trade laws, regarding the provision of AWS services inside and outside of China,” an AWS spokesperson told Reuters.

The report highlights that while the US government has restricted the export of high-end AI chips to China, providing access to such chips or advanced AI models through the cloud is not a violation of US regulations. This loophole has raised concerns among US officials and lawmakers.

One example cited in the report involves Shenzhen University, which spent 200,000 yuan (£21,925) on an AWS account to access cloud servers powered by Nvidia A100 and H100 chips for an unspecified project. The university obtained this service via an intermediary, Yunda Technology Ltd Co. Neither Shenzhen University nor Yunda Technology responded to Reuters’ requests for comment.

The investigation also revealed that Zhejiang Lab, a research institute developing its own large language model called GeoGPT, stated in a tender document that it intended to spend 184,000 yuan to purchase AWS cloud computing services. The institute claimed that its AI model could not get enough computing power from homegrown Alibaba cloud services.

Michael McCaul, chair of the US House of Representatives Foreign Affairs Committee, told Reuters: “This loophole has been a concern of mine for years, and we are long overdue to address it.”

In response to these concerns, the US Commerce Department is tightening rules. A government spokeswoman told Reuters that they are “seeking additional resources to strengthen our existing controls that restrict PRC companies from accessing advanced AI chips through remote access to cloud computing capability.”

The Commerce Department has also proposed a rule that would require US cloud computing firms to verify large AI model users and notify authorities when they use US cloud computing services to train large AI models capable of “malicious cyber-enabled activity.”

The study also found that Chinese companies are seeking access to Microsoft’s cloud services. For example, Sichuan University stated in a tender filing that it was developing a generative AI platform and would purchase 40 million Microsoft Azure OpenAI tokens to help with project delivery.

Reuters’ report also indicated that Amazon has provided Chinese businesses with access to modern AI chips as well as advanced AI models such as Anthropic’s Claude, which they would not otherwise have had. This was demonstrated by public postings, tenders, and marketing materials evaluated by the news organisation.

Chu Ruisong, President of AWS Greater China, stated during a generative AI-themed conference in Shanghai in May that “Bedrock provides a selection of leading LLMs, including prominent closed-source models such as Anthropic’s Claude 3.”

Overall, the report emphasises the difficulty of regulating access to advanced computing resources in an increasingly interconnected global technology ecosystem, and the intricate relationship between US export laws, cloud service providers, and Chinese enterprises looking to improve their AI capabilities.

As the US government works to close this gap, the scenario raises concerns about the efficacy of present export controls and the potential need for more comprehensive laws that cover cloud-based access to banned technologies.

The findings are likely to feed ongoing discussions about technology transfer, national security, and the global AI race. As policymakers and industry leaders analyse them, they may spark fresh debate about how to balance technological cooperation with national security concerns in an era of rapid AI growth.

See also: GlobalData: China is ahead of global rivals for AI ‘unicorns’

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

Use of AI for business governance must improve at the board level (20 August 2024)

According to Carine Smith Ihenacho, chief governance and compliance officer of Norway’s $1.7 trillion sovereign wealth fund, boards need to be proficient with the use of AI and take control of its application in businesses to mitigate risks.

The Norges Bank Investment Fund, which holds considerable shares in almost 9,000 companies worldwide — accounting for 1.5% of all listed stocks — has become a trailblazer in environmental, social, and corporate governance issues. About a year ago, the fund also provided its invested companies with recommendations on integrating responsible AI to improve economic outcomes.

Several companies still have a lot of ground to cover. While Smith Ihenacho said that "Overall, a lot of competence building needs to be done at the board level," she clarified that this does not mean every board should have an AI specialist. Instead, boards need to collectively understand how AI matters in their business and have policies in place.

“They should know: ‘What’s our policy on AI? Are we high risk or low risk? Where does AI meet customers? Are we transparent around it?’ It’s a big-picture question they should be able to answer,” Smith Ihenacho added, highlighting the breadth of understanding required at the board level.

The fund has shared its perspective on AI with the boards of its 60 largest portfolio companies, as reported in its 2023 responsible investment report. It is particularly focused on AI use in the healthcare sector due to its substantial impact on consumers, and is closely monitoring Big Tech companies that develop AI-based products.

In its engagement with tech firms, the fund emphasises the importance of robust governance structures to manage AI-related risks. “We focus more on the governance structure,” Smith Ihenacho explained. “Is the board involved? Do you have a proper policy on AI?”

The fund’s emphasis on AI governance is particularly relevant, given that nine of the ten largest positions in its equity holdings are tech companies. Leading among them are names such as Microsoft, Apple, Amazon, and Meta Platforms. Investments in these companies contributed to a 12.5% growth in the fund’s stock portfolio in the first half of 2024. The overall exposure to the tech sector increased from 21% to 26% over the past year, now comprising a quarter of the stock portfolio. This underscores the significant role that technology and AI play in the world today.

Though the fund favours AI innovation for its potential to boost efficiency and productivity, Smith Ihenacho has emphasised the importance of responsible use. She is quoted as saying, “It is fantastic what AI may be able to do to support innovation, efficiency, and productivity… we support that.” However, she also stressed the need to be responsible in how we manage the risks.

The fund’s adoption of AI governance aligns with rising global concerns about the ethical implications and potential dangers of these technologies. AI is increasingly utilised across various sectors, from finance to healthcare, and the need for governance frameworks has never been greater. The Norwegian sovereign wealth fund maintains a standard that requires companies to develop comprehensive AI policies at the board level, fostering the adoption of responsible AI practices across its large portfolio.

This initiative by one of the world’s largest investors could have far-reaching implications for corporate governance practices globally. As companies seek to harness the power of AI while navigating its complexities, the guidance provided by influential investors like Norges Bank Investment Fund may serve as a blueprint for responsible AI implementation and governance in the corporate world.

See also: X agrees to halt use of certain EU data for AI chatbot training

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

Amazon strives to outpace Nvidia with cheaper, faster AI chips (29 July 2024)

Amazon’s chip lab is churning out a constant stream of innovation in Austin, Texas. A new server design was put through its paces by a group of devoted engineers on July 26th.

During a visit to the facility in Austin, Amazon executive Rami Sinno shed light on the server’s use of Amazon’s AI chips. This development is a bold step toward competing with Nvidia, the current leader in the field.

The main reason Amazon is developing its own processors is simple: it doesn't want to rely on buying Nvidia's chips. Expensive Nvidia hardware powers a big part of the AI cloud business at Amazon Web Services, the company's most significant growth engine, and the so-called "Nvidia tax" has been pushing Amazon to look for a cheaper option.

Amazon's chip development program has a dual purpose. Firstly, it is meant to give customers more affordable options for complex calculations and large-scale data processing. Secondly, it is intended to preserve Amazon's competitiveness in the volatile cloud computing and AI industry. The move mirrors those of tech giants such as Microsoft and Alphabet, which are developing custom-made chips to maintain their positions in the market.

Rami Sinno, director of engineering for Amazon’s Annapurna Labs, a key element of the AWS ecosystem, emphasised that customer demand for more economical solutions to Nvidia’s products is growing. The acquisition of Annapurna Labs in 2015 was a savvy move by Amazon as it enabled the company to lay the groundwork to begin developing popular chips.

Although Amazon's chips for AI are in their early days, the company has been making and refining chips for other mainstream applications for nearly a decade, most notably its general-purpose Graviton chip, which is now in its fourth generation. Trainium and Inferentia, Amazon's latest and most powerful chips, are processors designed specifically for AI workloads.

The potential impact is significant. According to David Brown, vice president of compute and networking at AWS, Amazon's in-house chips could deliver up to a 40-50% improvement in price-performance compared with Nvidia-based solutions. That kind of improvement could mean considerable savings for AWS customers running AI workloads.

AWS' significance to Amazon's overall business cannot be overstated. In the first quarter of this year, AWS made up a little under a fifth of Amazon's total revenue, as its sales soared by 17 per cent year over year to reach $25 billion. At the moment, AWS holds about a third of the global cloud computing market, while Microsoft's Azure covers about a quarter.

Amazon’s commitment to its custom chip strategy was demonstrated during the recent Prime Day, a two-day sales event at Amazon.com. To handle the highly elevated level of shopping as well as streaming video, music, and other content, Amazon deployed an impressive 250,000 Graviton chips and 80,000 of its custom AI chips across its platforms. Adobe Analytics announced record Prime Day results of $14.2 billion in sales.

As Amazon intensifies its AI chip development, industry leader Nvidia is not standing still. Nvidia CEO Jensen Huang has unveiled the company's latest Blackwell chips, scheduled for release later in the year, promising that they will be twice as powerful for AI model training and five times faster for inference.

Nvidia’s dominant position in the AI chip market is underscored by its impressive client list, which includes tech giants like Amazon, Google, Microsoft, OpenAI, and Meta. The company’s focus on AI has propelled its market value to a staggering $2 trillion, making it the third most valuable company globally, behind only Microsoft and Apple.

As the AI chip race intensifies, Nvidia is also diversifying its offerings. The company has introduced new software tools to facilitate AI integration across various industries and is developing specialised chips for emerging applications such as in-car chatbots and humanoid robots.

(Image by Gerd Altmann)

See also: Nvidia: World’s most valuable company under French antitrust fire

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The exponential expenses of AI development (29 July 2024)

The post The exponential expenses of AI development appeared first on AI News.

Tech giants like Microsoft, Alphabet, and Meta are riding high on a wave of revenue from AI-driven cloud services, yet simultaneously drowning in the substantial costs of pushing AI’s boundaries. Recent financial reports paint a picture of a double-edged sword: on one side, impressive gains; on the other, staggering expenses. 

This dichotomy has led Bloomberg to aptly dub AI development a “huge money pit,” highlighting the complex economic reality behind today’s AI revolution. At the heart of this financial problem lies a relentless push for bigger, more sophisticated AI models. The quest for artificial general intelligence (AGI) has led companies to develop increasingly complex systems, exemplified by large language models like GPT-4. These models require vast computational power, driving up hardware costs to unprecedented levels.

Compounding the problem, demand for specialised AI chips, mainly graphics processing units (GPUs), has skyrocketed. Nvidia, the leading manufacturer in this space, has seen its market value soar as tech companies scramble to secure these essential components. Its H100 graphics chip, the gold standard for training AI models, has sold for an estimated $30,000, with some resellers offering it for several times that amount.

The global chip shortage has only exacerbated the issue, with some firms waiting months to acquire the necessary hardware. Meta Chief Executive Officer Mark Zuckerberg previously said his company planned to acquire 350,000 H100 chips by the end of this year to support its AI research efforts. Even with a bulk-buying discount, that quickly adds up to billions of dollars.
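A rough back-of-envelope calculation makes the scale clear. The figures below are assumptions taken from the reporting above (roughly 350,000 chips at an estimated $30,000 list price each); actual contract pricing is not public:

```python
# Back-of-envelope estimate of Meta's reported H100 purchase.
# Both figures are assumptions from the article, not confirmed pricing:
# ~350,000 chips at an estimated $30,000 list price each.
chips = 350_000
unit_price_usd = 30_000  # estimated list price; bulk pricing would be lower

total_usd = chips * unit_price_usd
print(f"Estimated spend: ${total_usd / 1e9:.1f} billion")  # ~$10.5 billion
```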

At the same time, the push for more advanced AI has sparked an arms race in chip design. Companies like Google and Amazon are investing heavily in developing their own AI-specific processors, aiming to gain a competitive edge and reduce reliance on third-party suppliers. This trend towards custom silicon adds another layer of complexity and cost to the AI development process.

But the hardware challenge extends beyond procuring chips. The scale of modern AI models necessitates massive data centres, which come with their own technological hurdles. These facilities must be designed to handle extreme computational loads while managing heat dissipation and energy consumption efficiently. As models grow larger, so do their power requirements, significantly increasing operational costs and environmental impact.

In a podcast interview in early April, Dario Amodei, the chief executive officer of OpenAI-rival Anthropic, said the current crop of AI models on the market cost around $100 million to train. “The models that are in training now and that will come out at various times later this year or early next year are closer in cost to $1 billion,” he said. “And then I think in 2025 and 2026, we’ll get more towards $5 or $10 billion.”

Then there is data, the lifeblood of AI systems, which presents its own technological challenges. The need for vast, high-quality datasets has led companies to invest heavily in data collection, cleaning, and annotation technologies. Some firms are developing sophisticated synthetic data generation tools to supplement real-world data, further driving up research and development costs.
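As a toy illustration of the idea, rather than any particular vendor’s tooling, synthetic data generation can be as simple as fitting basic statistics to a small real sample and drawing new records from them; production systems use far more sophisticated generators such as GANs, diffusion models, or LLMs:

```python
import numpy as np

# Toy synthetic-data sketch: fit simple statistics to a small "real" sample
# and draw new records from the fitted distribution. The dataset and the
# normal-distribution assumption are illustrative only.
rng = np.random.default_rng(seed=0)

real_heights_cm = np.array([162.0, 175.5, 181.2, 158.4, 170.1])
mu, sigma = real_heights_cm.mean(), real_heights_cm.std()

synthetic_heights = rng.normal(mu, sigma, size=1_000)  # 1,000 synthetic samples
```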

The rapid pace of AI innovation also means that infrastructure and tools quickly become obsolete. Companies must continuously upgrade their systems and retrain their models to stay competitive, creating a constant cycle of investment and obsolescence.

“On April 25, Microsoft said it spent $14 billion on capital expenditures in the most recent quarter and expects those costs to ‘increase materially,’ driven partly by AI infrastructure investments. That was a 79% increase from the year-earlier quarter. Alphabet said it spent $12 billion during the quarter, a 91% increase from a year earlier, and expects the rest of the year to be ‘at or above’ that level as it focuses on AI opportunities,” the Bloomberg article reads.

Bloomberg also noted that Meta raised its investment estimates for the year and now expects capital expenditures of $35 billion to $40 billion, which would be a 42% increase at the high end of the range. “It cited aggressive investment in AI research and product development,” Bloomberg wrote.

Interestingly, Bloomberg’s article also points out that despite these enormous costs, tech giants are proving that AI can be a real revenue driver. Microsoft and Alphabet reported significant growth in their cloud businesses, mainly attributed to increased demand for AI services. This suggests that while the initial investment in AI technology is staggering, the potential returns are compelling enough to justify the expense.

However, the high costs of AI development raise concerns about market concentration. As noted in the article, the expenses associated with cutting-edge AI research may limit innovation to a handful of well-funded companies, potentially stifling competition and diversity in the field. Looking ahead, the industry is focusing on developing more efficient AI technologies to address these cost challenges. 

Research into techniques like few-shot learning, transfer learning, and more energy-efficient model architectures aims to reduce the computational resources required for AI development and deployment. Moreover, the push towards edge AI – running AI models on local devices rather than in the cloud – could help distribute computational loads and reduce the strain on centralised data centres. 
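To illustrate why transfer learning cuts training costs, here is a minimal PyTorch sketch, assuming torchvision 0.13+ and a hypothetical 10-class downstream task, that freezes a pretrained backbone and trains only a small new head:

```python
import torch
import torch.nn as nn
from torchvision import models

# Load a backbone pretrained on ImageNet and freeze its weights, so only
# the small classification head is updated on the new task.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for param in backbone.parameters():
    param.requires_grad = False

# Replace the final layer with a head for a hypothetical 10-class task.
backbone.fc = nn.Linear(backbone.fc.in_features, 10)

# Only the new head's parameters are optimised, cutting the trainable
# parameter count (and therefore compute) by orders of magnitude.
optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)
```

The same principle, reusing expensive pretraining and paying only for a small task-specific component, underlies much of the cost-reduction research described above.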

The shift towards edge AI, however, requires its own set of technological innovations in chip design and software optimisation. Overall, it is clear that the future of AI will be shaped not just by breakthroughs in algorithms and model design but also by our ability to overcome the immense technological and financial hurdles that come with scaling AI systems. Companies that can navigate these challenges effectively will likely emerge as the leaders in the next phase of the AI revolution.

(Image by Igor Omilaev)

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post The exponential expenses of AI development appeared first on AI News.

Amazon will use computer vision to spot defects before dispatch https://www.artificialintelligence-news.com/news/amazon-use-computer-vision-spot-defects-before-dispatch/ https://www.artificialintelligence-news.com/news/amazon-use-computer-vision-spot-defects-before-dispatch/#respond Tue, 04 Jun 2024 11:44:26 +0000 https://www.artificialintelligence-news.com/?p=14956 Amazon will harness computer vision and AI to ensure customers receive products in pristine condition and further its sustainability efforts. The initiative – dubbed “Project P.I.” (short for “private investigator”) – operates within Amazon fulfilment centres across North America, where it will scan millions of products daily for defects. Project P.I. leverages generative AI and […]

The post Amazon will use computer vision to spot defects before dispatch appeared first on AI News.

Amazon will harness computer vision and AI to ensure customers receive products in pristine condition and further its sustainability efforts. The initiative – dubbed “Project P.I.” (short for “private investigator”) – operates within Amazon fulfilment centres across North America, where it will scan millions of products daily for defects.

Project P.I. leverages generative AI and computer vision technologies to detect issues such as damaged products or incorrect colours and sizes before they reach customers. The AI model not only identifies defects but also helps uncover the root causes, enabling Amazon to implement preventative measures upstream. This system has proven highly effective in the sites where it has been deployed, accurately identifying product issues among the vast number of items processed each month.

Before any item is dispatched, it passes through an imaging tunnel where Project P.I. evaluates its condition. If a defect is detected, the item is isolated and further investigated to determine if similar products are affected.

Amazon associates review the flagged items and decide whether to resell them at a discount via Amazon’s Second Chance site, donate them, or find alternative uses. This technology aims to act as an extra pair of eyes, enhancing manual inspections at several North American fulfilment centres, with plans for expansion throughout 2024.
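Amazon has not published implementation details, but the workflow described above, scan, flag, isolate, then route to an associate, could be approximated by a simple triage step. In the hypothetical sketch below, `classify_image` stands in for whatever vision model runs inside the imaging tunnel, and the defect labels and threshold are illustrative:

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical sketch of the imaging-tunnel triage described above.
# `classify_image` is a placeholder for an unspecified defect-detection
# model; defect labels and the confidence threshold are illustrative.

@dataclass
class ScanResult:
    item_id: str
    defect: Optional[str]   # e.g. "damage", "wrong_colour", or None
    confidence: float

def classify_image(item_id: str, image_bytes: bytes) -> ScanResult:
    """Placeholder for the computer-vision model in the imaging tunnel."""
    raise NotImplementedError

def triage(result: ScanResult, threshold: float = 0.8) -> str:
    """Decide what happens to an item after it passes through the tunnel."""
    if result.defect is None:
        return "dispatch"            # no issue found, ship as normal
    if result.confidence >= threshold:
        return "isolate_for_review"  # associates decide: resell, donate, repurpose
    return "manual_inspection"       # low-confidence flags go straight to a human
```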

Dharmesh Mehta, Amazon’s VP of Worldwide Selling Partner Services, said: “We want to get the experience right for customers every time they shop in our store.

“By leveraging AI and product imaging within our operations facilities, we are able to efficiently detect potentially damaged products and address more of those issues before they ever reach a customer, which is a win for the customer, our selling partners, and the environment.”

Project P.I. also plays a crucial role in Amazon’s sustainability initiatives. By preventing damaged or defective items from reaching customers, the system helps reduce unwanted returns, wasted packaging, and unnecessary carbon emissions from additional transportation.

Kara Hurst, Amazon’s VP of Worldwide Sustainability, commented: “AI is helping Amazon ensure that we’re not just delighting customers with high-quality items, but we’re extending that customer obsession to our sustainability work by preventing less-than-perfect items from leaving our facilities, and helping us avoid unnecessary carbon emissions due to transportation, packaging, and other steps in the returns process.”

In parallel, Amazon is utilising a generative AI system equipped with a Multi-Modal LLM (MLLM) to investigate the root causes of negative customer experiences.

When defects reported by customers slip through initial checks, this system reviews customer feedback and analyses images from fulfilment centres to understand what went wrong. For example, if a customer receives the wrong size of a product, the system examines the product labels in fulfilment centre images to pinpoint the error.
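The article does not describe how this system is invoked, but conceptually it pairs a customer complaint with the corresponding fulfilment-centre images and asks a multimodal model where the process broke down. In the hypothetical sketch below, `query_mllm` is a stand-in rather than a real API, and the prompt wording is an assumption:

```python
# Hypothetical sketch of pairing customer feedback with fulfilment-centre
# images for root-cause analysis. `query_mllm` is a placeholder for whatever
# multimodal LLM Amazon uses; the prompt and return format are assumptions.

def query_mllm(prompt: str, images: list) -> str:
    """Placeholder for a call to a multimodal LLM that accepts images."""
    raise NotImplementedError

def diagnose_return(complaint: str, fc_images: list) -> str:
    prompt = (
        "A customer reported the following issue with their order:\n"
        f"{complaint}\n"
        "Compare the report against the attached fulfilment-centre images "
        "(product labels, packaging) and state the most likely point of failure."
    )
    return query_mllm(prompt, fc_images)

# Example: a wrong-size complaint checked against label photos captured
# before dispatch.
# diagnose_return("Ordered size M, received size S", label_photos)
```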

This technology is also beneficial for Amazon’s selling partners, especially the small and medium-sized businesses that make up over 60% of Amazon’s sales. By making defect data more accessible, Amazon helps these sellers rectify issues quickly and reduce future errors.

(Photo by Andrew Stickelman)

See also: X now permits AI-generated adult content

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post Amazon will use computer vision to spot defects before dispatch appeared first on AI News.
