Are AI chatbots really changing the world of work?
AI News – Fri, 02 May 2025

The post Are AI chatbots really changing the world of work? appeared first on AI News.

We’ve heard endless predictions about how AI chatbots will transform work, but data paints a much calmer picture—at least for now.

Despite huge and ongoing advancements in generative AI, the massive wave it was supposed to create in the world of work looks more like a ripple so far.

Researchers Anders Humlum (University of Chicago) and Emilie Vestergaard (University of Copenhagen) didn’t just rely on anecdotes. They dug deep, connecting responses from two big surveys (late 2023 and 2024) with official, detailed records about jobs and pay in Denmark.

The pair zoomed in on around 25,000 people working in 7,000 different places, covering 11 jobs thought to be right in the path of AI disruption.   

Everyone’s using AI chatbots for work, but where are the benefits?

What they found confirms what many of us see: AI chatbots are everywhere in Danish workplaces now. Most bosses are actually encouraging staff to use them, a real turnaround from the early days when companies were understandably nervous about things like data privacy.

Almost four out of ten employers have even rolled out their own in-house chatbots, and nearly a third of employees have had some formal training on these tools.   

When bosses gave the nod, the number of staff using chatbots practically doubled, jumping from 47% to 83%. It also helped level the playing field a bit. That gap between men and women using chatbots? It shrank noticeably when companies actively encouraged their use, especially when they threw in some training.

So, the tools are popular, companies are investing, people are getting trained… but the big economic shift? It seems to be missing in action.

Using statistical methods to compare people who used AI chatbots for work with those who didn’t, both before and after ChatGPT burst onto the scene, the researchers found… well, basically nothing.

“Precise zeros,” the researchers call their findings. No significant bump in pay, no change in recorded work hours, across all 11 job types they looked at. And they’re pretty confident about this – the numbers rule out any average effect bigger than just 1%.
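The comparison described above – users versus non-users, before versus after ChatGPT's arrival – is a difference-in-differences design. A minimal sketch of the mechanics, using simulated data rather than the study's Danish registry records:

```python
import random

random.seed(0)
n = 1_000

# Simulated data, not the study's: half the sample are chatbot users, and users
# may earn differently at baseline (which DiD is designed to net out).
users = [(i % 2 == 1) for i in range(n)]
wage_before = [40 + (5 if u else 0) + random.gauss(0, 2) for u in users]
# No treatment effect is built in: the "after" wage just adds independent noise.
wage_after = [w + random.gauss(0, 2) for w in wage_before]

def mean_change(flag):
    """Average before-to-after wage change for one group."""
    deltas = [a - b for a, b, u in zip(wage_after, wage_before, users) if u == flag]
    return sum(deltas) / len(deltas)

# Difference-in-differences: users' average wage change minus non-users'.
did = mean_change(True) - mean_change(False)
print(f"DiD estimate: {did:.2f}")  # near zero, mirroring the study's "precise zeros"
```

The real study layers controls and uses administrative pay records, but the logic is the same: any genuine chatbot effect should show up as a gap between the two groups' wage trajectories, and none did.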

This wasn’t just a blip, either. The lack of impact held true even for the keen beans who jumped on board early, those using chatbots daily, or folks working where the boss was actively pushing the tech.

Looking at whole workplaces didn’t change the story; places with lots of chatbot users didn’t see different trends in hiring, overall wages, or keeping staff compared to places using them less.

Productivity gains: More of a gentle nudge than a shove

Why the big disconnect? Why all the hype and investment if it’s not showing up in paychecks or job stats? The study flags two main culprits: the productivity boosts aren’t as huge as hoped in the real world, and what little gains there are aren’t really making their way into wages.

Sure, people using AI chatbots for work felt they were helpful. They mentioned better work quality and feeling more creative. But the number one benefit? Saving time.

However, when the researchers crunched the numbers, the average time saved was only about 2.8% of a user’s total work hours. That’s miles away from the huge 15%, 30%, even 50% productivity jumps seen in controlled lab-style experiments (RCTs) involving similar jobs.

Why the difference? A few things seem to be going on. Those experiments often focus on jobs or specific tasks where chatbots really shine (like coding help or basic customer service responses). This study looked at a wider range, including jobs like teaching where the benefits might be smaller.

The researchers stress the importance of what they call “complementary investments”. People whose companies encouraged chatbot use and provided training actually did report bigger benefits – saving more time, improving quality, and feeling more creative. This suggests that just having the tool isn’t enough; you need the right support and company environment to really unlock its potential.

And even those modest time savings weren’t padding wallets. The study reckons only a tiny fraction – maybe 3% to 7% – of the time saved actually showed up as higher earnings. It might be down to standard workplace inertia, or maybe it’s just harder to ask for a raise based on using a tool your boss hasn’t officially blessed, especially when many people started using them off their own bat.
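Putting those two numbers together gives a back-of-envelope sense of why the wage effect is invisible – this is illustrative arithmetic, not the study's own calculation:

```python
time_saved = 0.028            # average time saved: 2.8% of work hours
pass_through = (0.03, 0.07)   # 3-7% of saved time estimated to reach pay packets

low = time_saved * pass_through[0]
high = time_saved * pass_through[1]
print(f"implied wage gain: {low:.3%} to {high:.3%}")
# roughly 0.08% to 0.2% - comfortably inside the "no effect above 1%" bound
```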

Making new work, not less work

One fascinating twist is that AI chatbots aren’t just about doing old work tasks faster. They seem to be creating new tasks too. Around 17% of people using them said they had new workloads, mostly brand new types of tasks.

This phenomenon happened more often in workplaces that encouraged chatbot use. It even spilled over to people not using the tools – about 5% of non-users reported new tasks popping up because of AI, especially teachers having to adapt assignments or spot AI-written homework.   

What kind of new tasks? Things like figuring out how to weave AI into daily workflows, drafting content with AI help, and importantly, dealing with the ethical side and making sure everything’s above board. It hints that companies are still very much in the ‘figuring it out’ phase, spending time and effort adapting rather than just reaping instant rewards.

What’s the verdict on the work impact of AI chatbots?

The researchers are careful not to write off generative AI completely. They see pathways for it to become more influential over time, especially as companies get better at integrating it and maybe as those “new tasks” evolve.

But for now, their message is clear: the current reality doesn’t match the hype about a massive, immediate job market overhaul.

“Despite rapid adoption and substantial investments… our key finding is that AI chatbots have had minimal impact on productivity and labor market outcomes to date,” the researchers conclude.   

It brings to mind that old quote about the early computer age: seen everywhere, except in the productivity stats. Two years on from ChatGPT’s launch kicking off the fastest tech adoption we’ve ever seen, its actual mark on jobs and pay looks surprisingly light.

The revolution might still be coming, but it seems to be taking its time.   

See also: Claude Integrations: Anthropic adds AI to your favourite work tools

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

China’s MCP adoption: AI assistants that actually do things
AI News – Wed, 23 Apr 2025

The post China’s MCP adoption: AI assistants that actually do things appeared first on AI News.

China’s tech companies will drive adoption of the MCP (Model Context Protocol) standard that transforms AI assistants from simple chatbots into powerful digital helpers.

MCP works like a universal connector that lets AI assistants interact directly with favourite apps and services – enabling them to make payments, book appointments, check maps, and access information across different platforms on users’ behalf.

As reported by the South China Morning Post, companies like Ant Group, Alibaba Cloud, and Baidu are deploying MCP-based services and positioning AI agents as the next step after chatbots and large language models. But will China’s MCP adoption truly transform the AI landscape, or is it simply another step in the technology’s evolution?

Why China’s MCP adoption matters for AI’s evolution

The Model Context Protocol was initially introduced by Anthropic in November 2024, at the time described as a standard that connects AI agents “to the systems where data lives, including content repositories, business tools and development environments.”

MCP serves as what Ant Group calls a “USB-C port for AI applications” – a universal connector allowing AI agents to integrate with multiple systems.

The standardisation is particularly significant for AI agents like Butterfly Effect’s Manus, which are designed to autonomously perform tasks by creating plans consisting of specific subtasks using available resources.

Unlike traditional chatbots that just respond to queries, AI agents can actively interact with different systems, collect feedback, and incorporate that feedback into new actions.
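Under the hood, MCP is a JSON-RPC 2.0 protocol: clients discover what a server offers with `tools/list` and invoke a tool with `tools/call`. The sketch below shows those message shapes; the `create_payment` tool and its fields are hypothetical illustrations, not Alipay's actual interface:

```python
import json

# Client asks an MCP server what tools it exposes (JSON-RPC 2.0, per the MCP spec).
list_request = {"jsonrpc": "2.0", "id": 1, "method": "tools/list"}

# Server's (abbreviated) reply - "create_payment" is a hypothetical tool name.
list_response = {
    "jsonrpc": "2.0", "id": 1,
    "result": {"tools": [{
        "name": "create_payment",
        "description": "Create a payment on the user's behalf",
        "inputSchema": {"type": "object",
                        "properties": {"amount": {"type": "number"},
                                       "payee": {"type": "string"}}},
    }]},
}

# The AI agent then invokes the tool by name with structured arguments.
call_request = {"jsonrpc": "2.0", "id": 2, "method": "tools/call",
                "params": {"name": "create_payment",
                           "arguments": {"amount": 25.0, "payee": "coffee-shop"}}}

print(json.dumps(call_request, indent=2))
```

Because every server describes its tools in the same schema, an agent that speaks MCP can drive a payment platform, a mapping service, or a storage backend without bespoke integration code for each.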

Chinese tech giants lead the MCP movement

China’s MCP adoption by tech leaders highlights the importance placed on AI agents as the next evolution in artificial intelligence:

  • Ant Group, Alibaba’s fintech affiliate, has unveiled its “MCP server for payment services”, which lets AI agents connect with Alipay’s payment platform. The integration allows users to “easily make payments, check payment statuses and initiate refunds using simple natural language commands,” according to Ant Group’s statement.
  • Additionally, Ant Group’s AI agent development platform, Tbox, now supports deployment of more than 30 MCP services currently on the market, including those for Alipay, Amap Maps, Google MCP, and Amazon Web Services’ knowledge base retrieval server.
  • Alibaba Cloud launched an MCP marketplace through its AI model hosting platform ModelScope, offering more than 1,000 services connecting to mapping tools, office collaboration platforms, online storage services, and various Google services.
  • Baidu, China’s leading search and AI company, has indicated that its support for MCP would foster “abundant use cases for [AI] applications and solutions.”

Beyond chatbots: Why AI agents represent the next frontier

China’s MCP adoption signals a shift in focus from large language models and chatbots to more capable AI agents. As Red Xiao Hong, founder and CEO of Butterfly Effect, put it, an AI agent is “more like a human being” in how it performs compared with a chatbot.

The agents not only respond to questions but “interact with the environment, collect feedback and use the feedback as a new prompt.” Companies driving progress in AI regard this distinction as important.

While chatbots and LLMs can generate text and respond to queries, AI agents can take actions on multiple platforms and services. They represent an advance from the limited capabilities of conventional AI applications toward autonomous systems capable of completing more complex tasks with less human intervention.

The rapid embrace of MCP by Chinese tech companies suggests they view AI agents as a new avenue for innovation and commercial opportunity that goes beyond what’s possible with existing chatbots and language models.

China’s MCP adoption could position its tech companies at the forefront of practical AI implementation. By creating standardised ways for AI agents to interact with services, Chinese companies are building ecosystems where AI could deliver more comprehensive experiences.

Challenges and considerations of China’s MCP adoption

Despite the developments in China’s MCP adoption, several factors may influence the standard’s longer-term impact:

  1. International standards competition. While Chinese tech companies are racing to implement MCP, its global success depends on widespread adoption. Originally developed by Anthropic, the protocol faces potential competition from alternative standards that might emerge from other major AI players like OpenAI, Google, or Microsoft.
  2. Regulatory environments. As AI agents gain more autonomy in performing tasks, especially those involving payments and sensitive user data, regulatory scrutiny will inevitably increase. China’s regulatory landscape for AI is still evolving, and how authorities respond to these advancements will significantly impact MCP’s trajectory.
  3. Security and privacy. The integration of AI agents with multiple systems via MCP creates new potential vulnerabilities. Ensuring robust security measures across all connected platforms will be important for maintaining user trust.
  4. Technical integration challenges. While the concept of universal connectivity is appealing, achieving integration across diverse systems with varying architectures, data structures, and security protocols presents significant technical challenges.

The outlook for China’s AI ecosystem

China’s MCP adoption represents a strategic bet on AI agents as the next evolution in artificial intelligence. If successful, it could accelerate the practical implementation of AI in everyday applications, potentially transforming how users interact with digital services.

As Red Xiao Hong noted, AI agents are designed to interact with their environment in ways that more closely resemble human behaviour than traditional AI applications. The capacity for interaction and adaptation could be what finally bridges the gap between narrow AI tools and the more generalised assistants that tech companies have long promised.

See also: Manus AI agent: breakthrough in China’s agentic AI


AI memory demand propels SK Hynix to historic DRAM market leadership
AI News – Wed, 23 Apr 2025

The post AI memory demand propels SK Hynix to historic DRAM market leadership appeared first on AI News.

AI memory demand has catapulted SK Hynix to a top position in the global DRAM market, overtaking longtime leader Samsung for the first time.

According to Counterpoint Research data, SK Hynix captured 36% of the DRAM market in Q1 2025, compared to Samsung’s 34% share.

HBM chips drive market shift

The company’s achievement ends Samsung’s three-decade dominance in DRAM manufacturing and comes shortly after SK Hynix’s operating profit passed Samsung’s in Q4 2024.

The company’s strategic focus on high-bandwidth memory (HBM) chips, essential components for artificial intelligence applications, has proven to be the decisive factor in the market shift.

“This is a milestone for SK Hynix which is successfully delivering on DRAM to a market that continues to show unfettered demand for HBM memory,” said Jeongku Choi, senior analyst at Counterpoint Research.

“The manufacturing of specialised HBM DRAM chips has been notoriously tricky and those that got it right early on have reaped dividends.”

SK Hynix has taken the overall DRAM market lead and has established its dominance in the HBM sector, occupying 70% of this high-value market segment, according to Counterpoint Research.

HBM chips, which stack multiple DRAM dies to dramatically increase data processing capabilities, have become fundamental components for training AI models.
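The bandwidth advantage that stacking buys is easy to quantify. Using nominal spec figures purely as an illustration (HBM3's 1,024-bit interface at 6.4 Gb/s per pin, versus a 64-bit DDR5 channel at the same pin rate):

```python
def bandwidth_gbs(bus_width_bits: int, pin_rate_gbps: float) -> float:
    """Peak bandwidth in GB/s = bus width (bits) x per-pin rate (Gb/s) / 8."""
    return bus_width_bits * pin_rate_gbps / 8

hbm3_stack = bandwidth_gbs(1024, 6.4)  # ~819 GB/s per HBM3 stack
ddr5_chan = bandwidth_gbs(64, 6.4)     # ~51 GB/s per DDR5-6400 channel

# The wide stacked interface, not a faster pin, is where HBM's advantage comes from.
print(hbm3_stack, ddr5_chan, hbm3_stack / ddr5_chan)
```

That order-of-magnitude gap in bytes moved per second is what makes HBM the default memory for feeding data-hungry AI accelerators.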

“It’s another wake-up call for Samsung,” said MS Hwang, research director at Counterpoint Research in Seoul, as quoted by Bloomberg. Hwang noted that SK Hynix’s leadership in HBM chips likely comprised a larger portion of the company’s operating income.

Financial performance and industry outlook

The company is expected to report positive financial results on Thursday, with analysts projecting a 38% quarterly rise in sales and a 129% increase in operating profit for the March quarter, according to Bloomberg data.

The shift in market leadership reflects broader changes in the semiconductor industry as AI applications drive demand for specialised memory solutions.

While traditional DRAM remains essential for computing devices, HBM chips that can handle the enormous data requirements of generative AI systems are becoming increasingly valuable.

Market research firm TrendForce forecasts that SK Hynix will maintain its leadership position throughout 2025, coming to control over 50% of the HBM market in gigabit shipments.

Samsung’s share is expected to decline to under 30%, while Micron Technology is said to gain ground to take close to 20% of the market.

Counterpoint Research expects the overall DRAM market in Q2 2025 to maintain similar patterns across segment growth and vendor share, suggesting SK Hynix’s newfound leadership position may be sustainable in the near term.

Navigating potential AI memory demand headwinds

Despite the current AI memory demand boom, industry analysts identify several challenges on the horizon. “Right now the world is focused on the impact of tariffs, so the question is: what’s going to happen with HBM DRAM?” said MS Hwang.

“At least in the short term, the segment is less likely to be affected by any trade shock as AI demand should remain strong. More significantly, the end product for HBM is AI servers, which – by definition – can be borderless.”

However, longer-term risks remain significant. Counterpoint Research sees potential threats to HBM DRAM market growth “stemming from structural challenges brought on by trade shock that could trigger a recession or even a depression.”

Morgan Stanley analysts, led by Shawn Kim, expressed similar sentiment in a note to investors cited by Bloomberg: “The real tariff impact on memory resembles an iceberg, with most danger unseen below the surface and still approaching.”

The analysts cautioned that earnings reports might be overshadowed by these larger macroeconomic forces. Interestingly, despite SK Hynix’s current advantage, Morgan Stanley still favours Samsung as their top pick in the memory sector.

“It can better withstand a macro slowdown, is priced at trough multiples, has optionality of future growth via HBM, and is buying back shares every day,” analysts wrote.

Samsung is scheduled to provide its complete financial statement with net income and divisional breakdowns on April 30, after reporting preliminary operating profit of 6.6 trillion won ($6 billion) on revenue of 79 trillion won earlier this month.

The shift in competitive positioning between the two South Korean memory giants underscores how specialised AI components are reshaping the semiconductor industry.

SK Hynix’s early and aggressive investment in HBM technology has paid off, though Samsung’s considerable resources ensure the rivalry will continue.

For the broader technology ecosystem, the change in DRAM market leadership signals the growing importance of AI-specific hardware components.

As data centres worldwide continue expanding to support increasingly sophisticated AI models, AI memory demand should remain robust despite potential macroeconomic headwinds.

(Image credit: SK Hynix)

See also: Samsung aims to boost on-device AI with LPDDR5X DRAM


Google introduces AI reasoning control in Gemini 2.5 Flash
AI News – Wed, 23 Apr 2025

The post Google introduces AI reasoning control in Gemini 2.5 Flash appeared first on AI News.

Google has introduced an AI reasoning control mechanism for its Gemini 2.5 Flash model that allows developers to limit how much processing power the system expends on problem-solving.

Released on April 17, this “thinking budget” feature responds to a growing industry challenge: advanced AI models frequently overanalyse straightforward queries, consuming unnecessary computational resources and driving up operational and environmental costs.

While not revolutionary, the development represents a practical step toward addressing efficiency concerns that have emerged as reasoning capabilities become standard in commercial AI software.

The new mechanism enables precise calibration of processing resources before generating responses, potentially changing how organisations manage financial and environmental impacts of AI deployment.

“The model overthinks,” acknowledges Tulsee Doshi, Director of Product Management at Gemini. “For simple prompts, the model does think more than it needs to.”

The admission reveals the challenge facing advanced reasoning models – the equivalent of using industrial machinery to crack a walnut.

The shift toward reasoning capabilities has created unintended consequences. Where traditional large language models primarily matched patterns from training data, newer iterations attempt to work through problems logically, step by step. While this approach yields better results for complex tasks, it introduces significant inefficiency when handling simpler queries.

Balancing cost and performance

The financial implications of unchecked AI reasoning are substantial. According to Google’s technical documentation, when full reasoning is activated, generating outputs becomes approximately six times more expensive than standard processing. The cost multiplier creates a powerful incentive for fine-tuned control.
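At scale, that multiplier compounds quickly. The per-token price and volumes below are invented for illustration; only the roughly six-fold multiplier comes from Google's documentation as reported above:

```python
# Back-of-envelope monthly cost with and without full reasoning enabled.
price_per_1k_output = 0.0006   # hypothetical $ per 1k output tokens, reasoning off
reasoning_multiplier = 6.0     # approximate multiplier per Google's documentation
queries, tokens_per_reply = 1_000_000, 500

base = queries * tokens_per_reply / 1000 * price_per_1k_output
with_reasoning = base * reasoning_multiplier
print(f"${base:,.0f} vs ${with_reasoning:,.0f} per month")
```

A sixfold jump on every routine query is exactly the kind of line item that makes a per-request reasoning dial attractive.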

Nathan Habib, an engineer at Hugging Face who studies reasoning models, describes the problem as endemic across the industry. “In the rush to show off smarter AI, companies are reaching for reasoning models like hammers even where there’s no nail in sight,” he explained to MIT Technology Review.

The waste isn’t merely theoretical. Habib demonstrated how a leading reasoning model, when attempting to solve an organic chemistry problem, became trapped in a recursive loop, repeating “Wait, but…” hundreds of times – essentially experiencing a computational breakdown and consuming processing resources.

Kate Olszewska, who evaluates Gemini models at DeepMind, confirmed Google’s systems sometimes experience similar issues, getting stuck in loops that drain computing power without improving response quality.

Granular control mechanism

Google’s AI reasoning control gives developers fine-grained command over processing. The system offers a flexible spectrum ranging from zero (minimal reasoning) to 24,576 tokens of “thinking budget” – the computational units representing the model’s internal processing. The granular approach allows for customised deployment based on specific use cases.
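In practice the budget is set per request. The sketch below builds a `generateContent` request body; the field names (`generationConfig`, `thinkingConfig`, `thinkingBudget`) follow Gemini's REST API as publicly documented, but treat them as an assumption and check the current docs before relying on them:

```python
import json

def gemini_request(prompt: str, thinking_budget: int) -> dict:
    """Build a generateContent request body with an explicit thinking budget.

    Field names are assumptions based on Gemini's documented REST API,
    not a verified client library.
    """
    assert 0 <= thinking_budget <= 24576, "budget outside the documented range"
    return {
        "contents": [{"parts": [{"text": prompt}]}],
        "generationConfig": {"thinkingConfig": {"thinkingBudget": thinking_budget}},
    }

# Zero budget for a simple lookup; a generous budget for multi-step analysis.
cheap = gemini_request("What's the capital of France?", thinking_budget=0)
deep = gemini_request("Audit this contract for conflicting clauses.", 8192)
print(json.dumps(cheap["generationConfig"], indent=2))
```

A zero budget effectively makes the call behave like a fast non-reasoning model, while the upper bound matches the 24,576-token ceiling described above.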

Jack Rae, principal research scientist at DeepMind, says that defining optimal reasoning levels remains challenging: “It’s really hard to draw a boundary on, like, what’s the perfect task right now for thinking.”

Shifting development philosophy

The introduction of AI reasoning control potentially signals a change in how artificial intelligence evolves. Since 2019, companies have pursued improvements by building larger models with more parameters and training data. Google’s approach suggests an alternative path focusing on efficiency rather than scale.

“Scaling laws are being replaced,” says Habib, indicating that future advances may emerge from optimising reasoning processes rather than continuously expanding model size.

The environmental implications are equally significant. As reasoning models proliferate, their energy consumption grows proportionally. Research indicates that inferencing – generating AI responses – now contributes more to the technology’s carbon footprint than the initial training process. Google’s reasoning control mechanism offers a potential mitigating factor for this concerning trend.

Competitive dynamics

Google isn’t operating in isolation. The “open weight” DeepSeek R1 model, which emerged earlier this year, demonstrated powerful reasoning capabilities at potentially lower costs, triggering volatility that reportedly wiped nearly a trillion dollars off stock market values.

Unlike Google’s proprietary approach, DeepSeek makes its internal settings publicly available for developers to implement locally.

Despite the competition, Google DeepMind’s chief technical officer Koray Kavukcuoglu maintains that proprietary models will maintain advantages in specialised domains requiring exceptional precision: “Coding, math, and finance are cases where there’s high expectation from the model to be very accurate, to be very precise, and to be able to understand really complex situations.”

Industry maturation signs

The development of AI reasoning control reflects an industry now confronting practical limitations beyond technical benchmarks. While companies continue to push reasoning capabilities forward, Google’s approach acknowledges an important reality: efficiency matters as much as raw performance in commercial applications.

The feature also highlights tensions between technological advancement and sustainability concerns. Leaderboards tracking reasoning model performance show that single tasks can cost upwards of $200 to complete – raising questions about scaling such capabilities in production environments.

By allowing developers to dial reasoning up or down based on actual need, Google addresses both financial and environmental aspects of AI deployment.

“Reasoning is the key capability that builds up intelligence,” states Kavukcuoglu. “The moment the model starts thinking, the agency of the model has started.” The statement reveals both the promise and the challenge of reasoning models – their autonomy creates both opportunities and resource management challenges.

For organisations deploying AI solutions, the ability to fine-tune reasoning budgets could democratise access to advanced capabilities while maintaining operational discipline.

Google claims Gemini 2.5 Flash delivers “comparable metrics to other leading models for a fraction of the cost and size” – a value proposition strengthened by the ability to optimise reasoning resources for specific applications.

Practical implications

The AI reasoning control feature has immediate practical applications. Developers building commercial applications can now make informed trade-offs between processing depth and operational costs.

For simple applications like basic customer queries, minimal reasoning settings preserve resources while still using the model’s capabilities. For complex analysis requiring deep understanding, the full reasoning capacity remains available.

Google’s reasoning ‘dial’ provides a mechanism for establishing cost certainty while maintaining performance standards.

See also: Gemini 2.5: Google cooks up its ‘most intelligent’ AI model to date


Huawei’s AI hardware breakthrough challenges Nvidia’s dominance
AI News – Thu, 17 Apr 2025

The post Huawei’s AI hardware breakthrough challenges Nvidia’s dominance appeared first on AI News.

Chinese tech giant Huawei has made a bold move that could potentially change who leads the global AI chip race. The company has unveiled a powerful new computing system called the CloudMatrix 384 Supernode that, according to local media reports, performs better than similar technology from American chip leader Nvidia.

If the performance claims prove accurate, the AI hardware breakthrough might reshape the technology landscape at a time when AI development is continuing worldwide, and despite US efforts to limit China’s access to advanced technology.

300 petaflops: Challenging Nvidia’s hardware dominance

The CloudMatrix 384 Supernode is described as a “nuclear-level product,” according to reports from STAR Market Daily cited by the South China Morning Post (SCMP). The hardware achieves an impressive 300 petaflops of computing power, in excess of the 180 petaflops delivered by Nvidia’s NVL72 system.

The CloudMatrix 384 Supernode was specifically engineered to address the computing bottlenecks that have become increasingly problematic as artificial intelligence models continue to grow in size and complexity.

The system is designed to compete directly with Nvidia’s offerings, which have dominated the global market for AI accelerator hardware thus far. Huawei’s CloudMatrix infrastructure was first unveiled in September 2024, and was developed specifically to meet surging demand in China’s domestic market.

The 384 Supernode variant represents the most powerful implementation of the CloudMatrix architecture to date. Reports indicate it can achieve a throughput of 1,920 tokens per second while maintaining high accuracy, reportedly matching the performance of Nvidia’s H100 chips while using Chinese-made components.

Developing under sanctions: The technical achievement

What makes the AI hardware breakthrough particularly significant is that it has been achieved despite the severe technological restrictions Huawei has faced since being placed on the US Entity List.

Sanctions have limited the company’s access to advanced US semiconductor technology and design software, forcing Huawei to develop alternative approaches and rely on domestic supply chains.

The core technological advancement enabling the CloudMatrix 384’s performance appears to be Huawei’s answer to Nvidia’s NVLink – a high-speed interconnect technology that allows multiple GPUs to communicate efficiently.

Nvidia’s NVL72 system, released in March 2024, features a 72-GPU NVLink domain that functions as a single, powerful GPU, enabling real-time inference for trillion-parameter models at speeds 30 times faster than previous generations.

According to reporting from the SCMP, Huawei is collaborating with Chinese AI infrastructure startup SiliconFlow to implement the CloudMatrix 384 Supernode in supporting DeepSeek-R1, a reasoning model from Hangzhou-based DeepSeek.

Supernodes are AI infrastructure architectures equipped with more resources than standard systems – including enhanced central processing units, neural processing units, network bandwidth, storage, and memory.

The configuration allows them to function as relay servers, enhancing the overall computing performance of clusters and significantly accelerating the training of foundational AI models.

Beyond Huawei: China’s broader AI infrastructure push

The AI hardware breakthrough from Huawei doesn’t exist in isolation but rather represents part of a broader push by Chinese technology companies to build domestic AI computing infrastructure.

In February, e-commerce giant Alibaba Group announced a massive 380 billion yuan ($52.4 billion) investment in computing resources and AI infrastructure over three years – the largest-ever investment by a private Chinese company in a computing project.

For the global AI community, the emergence of viable alternatives to Nvidia’s hardware could eventually address the computing bottlenecks that have limited AI advancement. Competition in this space could potentially increase available computing capacity and provide developers with more options for training and deploying their models.

However, it’s worth noting that as of the report’s publication, Huawei had not yet responded to requests for comment on these claims.

As tensions between the US and China continue to intensify in the technology sector, Huawei’s CloudMatrix 384 Supernode represents a significant development in China’s pursuit of technological self-sufficiency.

If the performance claims are verified, this AI hardware breakthrough would mean Huawei has achieved computing independence in this niche, despite facing extensive sanctions.

The development also signals a broader trend in China’s technology sector, with multiple domestic companies intensifying their investments in AI infrastructure to capitalise on growing demand and promote the adoption of homegrown chips.

The collective effort suggests China is committed to developing domestic alternatives to American technology in this strategically important field.

See also: Manus AI agent: breakthrough in China’s agentic AI

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post Huawei’s AI hardware breakthrough challenges Nvidia’s dominance appeared first on AI News.

BCG: Analysing the geopolitics of generative AI https://www.artificialintelligence-news.com/news/bcg-analysing-the-geopolitics-of-generative-ai/ Fri, 11 Apr 2025 16:11:17 +0000

Generative AI is reshaping global competition and geopolitics, presenting challenges and opportunities for nations and businesses alike.

Senior figures from Boston Consulting Group (BCG) and its tech division, BCG X, discussed the intricate dynamics of the global AI race, the dominance of superpowers like the US and China, the role of emerging “middle powers,” and the implications for multinational corporations.

AI investments expose businesses to increasingly tense geopolitics

Sylvain Duranton, Global Leader at BCG X, noted the significant geopolitical risk companies face: “For large companies, close to half of them, 44%, have teams around the world, not just in one country where their headquarters are.”

Sylvain Duranton, Global Leader at BCG X

Many of these businesses operate across numerous countries, making them vulnerable to differing regulations and sovereignty issues. “They’ve built their AI teams and ecosystem far before there was such tension around the world.”

Duranton also pointed to the stark imbalance in the AI supply race, particularly in investment.

Comparing the market capitalisation of tech companies, the US dwarfs Europe by a factor of 20 and the Asia Pacific region by five. Investment figures paint a similar picture, showing a “completely disproportionate” imbalance compared to the relative sizes of the economies.

This AI race is fuelled by massive investments in compute power, frontier models, and the emergence of lighter, open-weight models changing the competitive dynamic.   

Benchmarking national AI capabilities

Nikolaus Lang, Global Leader at the BCG Henderson Institute – BCG’s think tank – detailed the extensive research undertaken to benchmark national GenAI capabilities objectively.

The team analysed the “upstream of GenAI,” focusing on large language model (LLM) development and its six key enablers: capital, computing power, intellectual property, talent, data, and energy.

Using hard data like AI researcher numbers, patents, data centre capacity, and VC investment, they created a comparative analysis. Unsurprisingly, it revealed the US and China as the clear AI frontrunners, each maintaining a commanding lead in the geopolitics of the technology.

Nikolaus Lang, Global Leader at the BCG Henderson Institute

The US boasts the largest pool of AI specialists (around half a million), immense capital power ($303bn in VC funding, $212bn in tech R&D), and leading compute power (45 GW).

Lang highlighted America’s historical dominance, noting, “the US has been the largest producer of notable AI models with 67%” since 1950, a lead reflected in today’s LLM landscape. This strength is reinforced by “outsized capital power” and strategic restrictions on advanced AI chip access through frameworks like the US AI Diffusion Framework.   

China, the second AI superpower, shows particular strength in data—ranking highly in e-governance and mobile broadband subscriptions, alongside significant data centre capacity (20 GW) and capital power. 

Despite restricted access to the latest chips, Chinese LLMs are rapidly closing the gap with US models. Lang cited the emergence of models like DeepSeek as evidence of this trend, achieved with smaller teams, fewer GPU hours, and previous-generation chips.

China’s progress is also fuelled by heavy investment in AI academic institutions (hosting 45 of the world’s top 100), a leading position in AI patent applications, and significant government-backed VC funding. Lang predicts “governments will play an important role in funding AI work going forward.”

The middle powers: Europe, Middle East, and Asia

Beyond the superpowers, several “middle powers” are carving out niches.

  • EU: While trailing the US and China, the EU holds the third spot with significant data centre capacity (8 GW) and the world’s second-largest AI talent pool (275,000 specialists) when capabilities are combined. Europe also leads in top AI publications. Lang stressed the need for bundled capacities, suggesting AI, defence, and renewables are key areas for future EU momentum.
  • Middle East (UAE & Saudi Arabia): These nations leverage strong capital power via sovereign wealth funds and competitively low electricity prices to attract talent and build compute power, aiming to become AI drivers “from scratch”. They show positive dynamics in attracting AI specialists and are climbing the ranks in AI publications.   
  • Asia (Japan & South Korea): Leveraging strong existing tech ecosystems in hardware and gaming, these countries invest heavily in R&D (around $207bn combined by top tech firms). Government support, particularly in Japan, fosters both supply and demand. Local LLMs and strategic investments by companies like Samsung and SoftBank demonstrate significant activity.   
  • Singapore: Singapore is boosting its AI ecosystem by focusing on talent upskilling programmes, supporting Southeast Asia’s first LLM, ensuring data centre capacity, and fostering adoption through initiatives like establishing AI centres of excellence.   

The geopolitics of generative AI: Strategy and sovereignty

The geopolitics of generative AI is being shaped by four clear dynamics: the US retains its lead, driven by an unrivalled tech ecosystem; China is rapidly closing the gap; middle powers face a strategic choice between building supply or accelerating adoption; and government funding is set to play a pivotal role, particularly as R&D costs climb and commoditisation sets in.

As geopolitical tensions mount, businesses are likely to diversify their GenAI supply chains to spread risk. The race ahead will be defined by how nations and companies navigate the intersection of innovation, policy, and resilience.

(Photo by Markus Krisetya)

See also: OpenAI counter-sues Elon Musk for attempts to ‘take down’ AI rival

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post BCG: Analysing the geopolitics of generative AI appeared first on AI News.

How can AI unlock human potential in the supply chain? https://www.artificialintelligence-news.com/news/how-can-ai-unlock-human-potential-in-the-supply-chain/ Wed, 09 Apr 2025 12:21:51 +0000

AI is driving a new revolution across a number of industries, and the supply chain is no exception. As the most transformative technology of the decade, AI has helped supply chains become more efficient, resilient, and responsive, freeing workforces to focus on more strategic work.

However, despite these benefits, many businesses have been slow to adopt the technology: recent statistics show only one in ten SMEs regularly use AI. Companies and their employees are therefore still not operating at full potential, missing out on opportunities for growth and optimisation.

Transforming the supply chain through AI

The potential of AI in the supply chain is undeniable, with some estimates suggesting AI can help businesses reduce logistics costs by 15%, cut inventory levels by 35%, and raise service levels by 65%. In contrast, failure to implement AI tools could set companies back, leave employees feeling unmotivated and unproductive, and result in a weak supply chain and poor staff retention.

Now, more than ever, it’s time for businesses to not just pay lip service to AI – they must start using it within their supply chains to truly enhance operations. Due to the evolving market dynamics, AI is not just a competitive advantage; it’s essential for business agility and profitability. Here are two ways in which organisations can use AI to improve their supply chains.

Automating the supply chain & harnessing the power of AI for resilience

AI allows businesses to tackle supply chain challenges head-on by automating time-consuming manual processes, such as data logging, while reducing errors. By taking over repetitive and potentially hazardous tasks, AI frees up employees to focus on strategic initiatives that drive business value. For example, a recent report highlighted that nearly three-quarters of warehouse staff surveyed are excited about the possibilities of generative AI and robotics improving their job roles.

Needless to say, a supply chain still can’t operate at its peak without resilience – which is the capacity of a supply chain to withstand and recover from disruptions – ensuring uninterrupted operations and minimal impact to businesses and customers.

As global markets continue to evolve and expand, businesses must adapt swiftly to unforeseen disruptions. AI enables real-time data analysis, providing unprecedented insight into the web of supply chain dynamics and acting as the eyes and ears of the operation, empowering every part of the chain to make informed decisions quickly. With visibility into every aspect of warehouse operations, real-time data permits precise monitoring, enhanced customer service, and reduced downtime, identifying potential issues before they become major problems.

Communication between all stakeholders sits at the heart of the supply chain. AI-driven real-time data enables seamless collaboration by providing a shared platform where suppliers, manufacturers, and distributors can exchange information instantaneously. Enhanced communication leads to quicker issue resolution, allowing the supply chain to adapt rapidly to changing circumstances. Together, robotics, AI, and real-time data provide end-to-end visibility of a good’s journey, and that visibility is what builds resilience.

Human expertise with robot precision

Building on the theme of resilience, over the next couple of years the industry will see AI-integrated robots become collaborative partners to their human co-workers. Particularly in environments requiring vast coverage and extensive data capture, robots equipped with groundbreaking sensor technologies will navigate, adapt, and work with greater autonomy alongside other machinery and people in busy environments. This will accelerate data acquisition and, most importantly, allow companies to make decisions based on actionable insights faster than ever before.

These advancements will transform robots into true cobots and take human-robot teamwork to an unprecedented level. Robots will also become better at understanding nuanced human gestures and intentions. This evolution in collaboration with technology will redefine what humans and machines can accomplish together.

What’s next for the industry?

In theory, implementing AI and advanced technology in the supply chain has the potential to bring significant benefits; in practice, substantial results will only appear once these innovations are widely adopted. By automating the supply chain and using data to fuel predictions, these technologies lay the foundations for a new industrial revolution that will shape industries for years to come. Those that delay starting their journeys risk being left behind.

Photo by Miltiadis Fragkidis on Unsplash

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post How can AI unlock human potential in the supply chain? appeared first on AI News.

DeepSeek’s AIs: What humans really want https://www.artificialintelligence-news.com/news/deepseeks-ai-breakthrough-teaching-machines-to-learn-what-humans-really-want/ Wed, 09 Apr 2025 07:44:08 +0000

Chinese AI startup DeepSeek has solved a problem that has frustrated AI researchers for several years. Its breakthrough in AI reward models could improve dramatically how AI systems reason and respond to questions.

In partnership with Tsinghua University researchers, DeepSeek has created a technique detailed in a research paper, titled “Inference-Time Scaling for Generalist Reward Modeling.” It outlines how a new approach outperforms existing methods and how the team “achieved competitive performance” compared to strong public reward models.

The innovation focuses on enhancing how AI systems learn from human preferences – an important aspect of creating more useful and aligned artificial intelligence.

What are AI reward models, and why do they matter?

AI reward models are important components in reinforcement learning for large language models. They provide feedback signals that help guide an AI’s behaviour toward preferred outcomes. In simpler terms, reward models are like digital teachers that help AI understand what humans want from their responses.

“Reward modeling is a process that guides an LLM towards human preferences,” the DeepSeek paper states. Reward modeling becomes important as AI systems get more sophisticated and are deployed in scenarios beyond simple question-answering tasks.

The innovation from DeepSeek addresses the challenge of obtaining accurate reward signals for LLMs in different domains. While current reward models work well for verifiable questions or artificial rules, they struggle in general domains where criteria are more diverse and complex.

The dual approach: How DeepSeek’s method works

DeepSeek’s approach combines two methods:

  1. Generative reward modeling (GRM): This approach enables flexibility in different input types and allows for scaling during inference time. Unlike previous scalar or semi-scalar approaches, GRM provides a richer representation of rewards through language.
  2. Self-principled critique tuning (SPCT): A learning method that fosters scalable reward-generation behaviours in GRMs through online reinforcement learning, one that generates principles adaptively.

One of the paper’s authors from Tsinghua University and DeepSeek-AI, Zijun Liu, explained that the combination of methods allows “principles to be generated based on the input query and responses, adaptively aligning reward generation process.”

The approach is particularly valuable for its potential for “inference-time scaling” – improving performance by increasing computational resources during inference rather than just during training.

The researchers found that their methods could achieve better results with increased sampling, letting models generate better rewards with more computing.
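
The general idea behind this kind of inference-time scaling can be sketched in a few lines. This is a simplified illustration of the principle, not DeepSeek's implementation: each sampled reward judgment is modelled as the true response quality plus noise, and averaging more samples yields a more reliable reward signal.

```python
import random
import statistics

def noisy_reward(response_quality: float, rng: random.Random) -> float:
    """Stand-in for one sampled reward-model judgment: the true quality
    of a response plus sampling noise."""
    return response_quality + rng.gauss(0.0, 1.0)

def scaled_reward(response_quality: float, n_samples: int, seed: int = 0) -> float:
    """Aggregate n independent judgments. The variance of the aggregate
    shrinks as n grows, which is the essence of spending more compute at
    inference time to obtain a better reward signal."""
    rng = random.Random(seed)
    samples = [noisy_reward(response_quality, rng) for _ in range(n_samples)]
    return statistics.mean(samples)
```

Under this toy model, doubling the number of sampled judgments buys a steadier estimate of response quality without retraining anything, which mirrors the paper's claim that inference-time compute can substitute for scale elsewhere.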

Implications for the AI Industry

DeepSeek’s innovation comes at an important time in AI development. The paper states “reinforcement learning (RL) has been widely adopted in post-training for large language models […] at scale,” leading to “remarkable improvements in human value alignment, long-term reasoning, and environment adaptation for LLMs.”

The new approach to reward modelling could have several implications:

  1. More accurate AI feedback: By creating better reward models, AI systems can receive more precise feedback about their outputs, leading to improved responses over time.
  2. Increased adaptability: The ability to scale model performance during inference means AI systems can adapt to different computational constraints and requirements.
  3. Broader application: Systems can perform better in a broader range of tasks by improving reward modelling for general domains.
  4. More efficient resource use: The research shows that inference-time scaling with DeepSeek’s method could outperform model size scaling in training time, potentially allowing smaller models to perform comparably to larger ones with appropriate inference-time resources.

DeepSeek’s growing influence

The latest development adds to DeepSeek’s rising profile in global AI. Founded in 2023 by entrepreneur Liang Wenfeng, the Hangzhou-based company has made waves with its V3 foundation and R1 reasoning models.

The company upgraded its V3 model (DeepSeek-V3-0324) recently, which the company said offered “enhanced reasoning capabilities, optimised front-end web development and upgraded Chinese writing proficiency.” DeepSeek has committed to open-source AI, releasing five code repositories in February that allow developers to review and contribute to development.

While speculation continues about the potential release of DeepSeek-R2 (the successor to R1) – Reuters has speculated on possible release dates – DeepSeek has not commented in its official channels.

What’s next for AI reward models?

According to the researchers, DeepSeek intends to make the GRM models open-source, although no specific timeline has been provided. Open-sourcing will accelerate progress in the field by allowing broader experimentation with reward models.

As reinforcement learning continues to play an important role in AI development, advances in reward modelling like those in DeepSeek and Tsinghua University’s work will likely have an impact on the abilities and behaviour of AI systems.

Work on AI reward models demonstrates that innovations in how and when models learn can be as important as increasing their size. By focusing on feedback quality and scalability, DeepSeek addresses one of the fundamental challenges of creating AI that better understands and aligns with human preferences.

See also: DeepSeek disruption: Chinese AI innovation narrows global technology divide

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post DeepSeek’s AIs: What humans really want appeared first on AI News.

UK forms AI Energy Council to align growth and sustainability goals https://www.artificialintelligence-news.com/news/uk-forms-ai-energy-council-align-growth-sustainability-goals/ Tue, 08 Apr 2025 14:10:49 +0000

The UK government has announced the first meeting of a new AI Energy Council aimed at ensuring the nation’s AI and clean energy goals work in tandem to drive economic growth.

The inaugural meeting of the council will see members agree on its core objectives, with a central focus on how the government’s mission to become a clean energy superpower can support its commitment to advancing AI and compute infrastructure.

Unveiled earlier this year as part of the government’s response to the AI Opportunities Action Plan, the council will serve as a crucial platform for bringing together expert insights on the significant energy demands associated with the AI sector.

Concerns surrounding the substantial energy requirements of AI data centres are a global challenge. The UK is proactively addressing this issue through initiatives like the establishment of new AI Growth Zones.

These zones are dedicated hubs for AI development that are strategically located in areas with access to at least 500MW of power—an amount equivalent to powering approximately two million homes. This approach is designed to attract private investment from companies looking to establish operations in Britain, ultimately generating local jobs and boosting the economy.

Peter Kyle, Secretary of State for Science, Innovation, and Technology, said: “The work of the AI Energy Council will ensure we aren’t just powering our AI needs to deliver new waves of opportunity in all parts of the country, but can do so in a way which is responsible and sustainable.

“This requires a broad range of expertise from industry and regulators as we fire up the UK’s economic engine to make it fit for the age of AI—meaning we can deliver the growth which is the beating heart of our Plan for Change.”

The Council is also expected to delve into the role of clean energy sources, including renewables and nuclear, in powering the AI revolution.

A key aspect of its work will involve advising on how to improve energy efficiency and sustainability within AI and data centre infrastructure, with specific considerations for resource usage such as water. Furthermore, the council will take proactive steps to ensure the secure adoption of AI across the UK’s critical energy network itself.

Ed Miliband, Secretary of State for Energy Security and Net Zero, commented: “We are making the UK a clean energy superpower, building the homegrown energy this country needs to protect consumers and businesses, and drive economic growth, as part of our Plan for Change.

“AI can play an important role in building a new era of clean electricity for our country and as we unlock AI’s potential, this Council will help secure a sustainable scale up to benefit businesses and communities across the UK.”

In a parallel effort to facilitate the growth of the AI sector, the UK government has been working closely with energy regulator Ofgem and the National Energy System Operator (NESO) to implement fundamental reforms to the UK’s connections process.

Subject to final sign-offs from Ofgem, these reforms could potentially unlock more than 400GW of capacity from the connection queue. This acceleration of projects is deemed vital for economic growth, particularly for the delivery of new large-scale AI data centres that require significant power infrastructure.

The newly-formed AI Energy Council comprises representatives from 14 key organisations across the energy and technology sectors, including regulators and leading companies. These members will contribute their expert insights to support the council’s work and ensure a collaborative approach to addressing the energy challenges and opportunities presented by AI.

Among the prominent organisations joining the council are EDF, Scottish Power, National Grid, technology giants Google, Microsoft, Amazon Web Services (AWS), and chip designer ARM, as well as infrastructure investment firm Brookfield.

This collaborative framework, uniting the energy and technology sectors, aims to ensure seamless coordination in speeding up the connection of energy projects to the national grid. This is particularly crucial given the increasing number of technology companies announcing plans to build data centres across the UK.

Alison Kay, VP for UK and Ireland at AWS, said: “At Amazon, we’re working to meet the future energy needs of our customers, while remaining committed to powering our operations in a more sustainable way, and progressing toward our Climate Pledge commitment to become net-zero carbon by 2040.

“As the world’s largest corporate purchaser of renewable energy for the fifth year in a row, we share the government’s goal to ensure the UK has sufficient access to carbon-free energy to support its AI ambitions and to help drive economic growth.”

Jonathan Brearley, CEO of Ofgem, added: “AI will play an increasingly important role in transforming our energy system to be cleaner, more efficient, and more cost-effective for consumers, but only if used in a fair, secure, sustainable, and safe way.

“Working alongside other members of this Council, Ofgem will ensure AI implementation puts consumer interests first – from customer service to infrastructure planning and operation – so that everyone feels the benefits of this technological innovation in energy.”

This initiative aligns with the government’s Clean Power Action Plan, which focuses on connecting more homegrown clean power to the grid by building essential infrastructure and prioritising projects needed for 2030. The aim is to clear the grid connection queue, enabling crucial infrastructure projects – from housing to gigafactories and data centres – to gain access to the grid, thereby unlocking billions in investment and fostering economic growth.

Furthermore, the government is streamlining planning approvals to significantly reduce the time it takes for infrastructure projects to get off the ground. This accelerated process will ensure that AI innovators can readily access cutting-edge infrastructure and the necessary power to drive forward the next wave of AI advancements.

(Photo by Vlad Hilitanu)

See also: Tony Blair Institute AI copyright report sparks backlash

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.
Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post UK forms AI Energy Council to align growth and sustainability goals appeared first on AI News.

AI streamlines budgeting, but human oversight essential https://www.artificialintelligence-news.com/news/ai-financial-planning-streamlines-budgeting-but-human-oversight-essential/ https://www.artificialintelligence-news.com/news/ai-financial-planning-streamlines-budgeting-but-human-oversight-essential/#respond Wed, 02 Apr 2025 12:23:09 +0000 https://www.artificialintelligence-news.com/?p=105145

The post AI streamlines budgeting, but human oversight essential appeared first on AI News.

Research conducted by Vlerick Business School has found that, in the area of AI financial planning, the technology consistently outperforms humans at allocating budgets when strategic guidelines are in place. Businesses that use AI for budgeting processes see substantial improvements in the accuracy and efficiency of budgeting plans compared with human decision-making.

The study set out to clarify AI’s role in corporate budgeting, examining how well the technology performs when making financial decisions. Ultimately, it investigated whether AI’s financial decisions align with a company’s long-term strategies and how they compare to those of human management.

The researchers, Kristof Stouthuysen, Professor of Management Accounting and Digital Finance at Vlerick Business School, and PhD researcher, Emma Willems, studied tactical and strategic budgeting approaches.

Tactical budgeting refers to quick, responsive, short-term financial decisions driven by data and aimed at improving immediate performance, such as adjusting spending based on market trends.

Strategic budgeting typically involves a more comprehensive approach that focuses on future planning, aligning various resources with a business’s vision.

According to the research, AI is superior at tactical budgeting processes like cost management and resource allocation. However, human insight remains essential for ensuring accurate and strategic financial planning over the long term.
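The study does not publish the researchers’ algorithm, but the flavour of a tactical, data-driven allocation rule can be sketched in a few lines. Everything below (the budget lines, KPI trends, and the proportional nudge) is an invented illustration, not the method used in the study:

```python
def reallocate(budget: dict[str, float], kpi_trend: dict[str, float]) -> dict[str, float]:
    """Toy tactical rule: shift spend toward lines with a positive recent
    KPI trend, while keeping the overall budget total constant."""
    total = sum(budget.values())
    # Nudge each line by its trend (e.g. +0.10 means a 10% upward signal).
    raw = {line: amount * (1 + kpi_trend.get(line, 0.0))
           for line, amount in budget.items()}
    scale = total / sum(raw.values())  # renormalise so the total is unchanged
    return {line: amount * scale for line, amount in raw.items()}

# Hypothetical budget lines for illustration only
budget = {"marketing": 100.0, "r_and_d": 200.0, "ops": 100.0}
trend = {"marketing": 0.10, "r_and_d": -0.05, "ops": 0.0}
new_budget = reallocate(budget, trend)
```

A rule like this captures the “tactical” side (fast, mechanical, data-driven); what it cannot do, as the study stresses, is decide whether the KPIs themselves reflect the company’s strategy.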

The controlled experiment took the form of a management simulation in which experienced managers were asked to allocate budgets for a hypothetical automotive parts company. Stouthuysen and Willems then compared these human-made decisions to those produced by an AI algorithm using the same financial data.

The results showed that AI was superior at optimising budgets when a company’s strategic financial planning was clearly defined. However, AI struggled to make budgeting decisions when key performance indicators (KPIs) did not align with the company’s financial goals.

Stouthuysen and Willems’ work on the study emphasised the importance of collaboration between humans and AI. “As AI continues to evolve, companies that use its strengths in tactical budgeting while maintaining human oversight in strategic planning will gain a competitive edge. The key is knowing where AI should lead and where human intuition remains indispensable.”

According to the study, AI can theoretically take over from humans when it comes to tactical budgeting, providing more precise and efficient outcomes. Stouthuysen and Willems believe companies need to define their strategic priorities clearly and implement AI for tactical budgeting decisions to maximise financial performance and achieve sustainable growth.

The findings challenge the widespread misconception that AI can completely substitute for humans in budgeting. Instead, the research emphasises the importance of a balanced approach, assigning tasks to AI or to humans according to their proven abilities.

(Image source: “Payday” by 401(K) 2013 is licensed under CC BY-SA 2.0.)

ServiceNow deploys AI agents to boost enterprise workflows https://www.artificialintelligence-news.com/news/servicenow-deploys-ai-agents-boost-enterprise-workflows/ https://www.artificialintelligence-news.com/news/servicenow-deploys-ai-agents-boost-enterprise-workflows/#respond Thu, 13 Mar 2025 16:40:58 +0000 https://www.artificialintelligence-news.com/?p=104777

The post ServiceNow deploys AI agents to boost enterprise workflows appeared first on AI News.

ServiceNow has launched its Yokohama platform, which introduces AI agents across various sectors to boost workflows and maximise end-to-end business impact.

The Yokohama platform release features teams of preconfigured AI agents designed to deliver immediate productivity gains. These agents operate on a single, unified platform, ensuring seamless integration and coordination across different business functions. The platform also includes capabilities to build, onboard, and manage the entire AI agent lifecycle, making it easier for enterprises to adopt and scale AI solutions.

Data is the lifeblood of AI, and ServiceNow recognises this by expanding its Knowledge Graph with advancements to its Common Service Data Model (CSDM). This expansion aims to break down barriers among data sources, enabling more connected and intelligent AI agents. By unifying data from various sources, ServiceNow’s platform ensures that AI agents can operate with a comprehensive view of the enterprise, driving more informed decisions and actions.

The growing need for ‘Guardian Agents’

According to Gartner, by 2028, 40% of CIOs will demand ‘Guardian Agents’ to autonomously track, oversee, or contain the results of AI agent actions. This underscores the growing need for a coordinated, enterprise-wide approach to AI deployment and management.

ServiceNow’s Yokohama release addresses this need by serving as the AI agent control tower for enterprises. The platform removes common roadblocks such as data fragmentation, governance gaps, and real-time performance challenges, ensuring seamless data connectivity with Workflow Data Fabric.

Unlike other AI providers that operate in silos or require complex integrations, ServiceNow AI Agents are built on a single, enterprise-wide platform. This ensures seamless data connectivity and provides a single view of all workflows, AI, and automation needs.

Amit Zavery, President, Chief Product Officer, and Chief Operating Officer at ServiceNow, commented: “Agentic AI is the new frontier. Enterprise leaders are no longer just experimenting with AI agents; they’re demanding AI solutions that can help them achieve productivity at scale.

“ServiceNow’s industry‑leading agentic AI framework meets this need by delivering predictability and efficiency from the start. With the combination of agentic AI, data fabric, and workflow automation all on one platform, we’re making it easier for organisations to embed connected AI where work happens and both measure and drive business outcomes faster, smarter, and at scale.”

New AI agents from ServiceNow aim to accelerate productivity

ServiceNow’s new AI Agents are now available to accelerate productivity at scale. These agents are designed to drive real outcomes for enterprise-wide use cases. For example:

  • Security Operations (SecOps) expert AI agents: These agents transform security operations by streamlining the entire incident lifecycle, eliminating repetitive tasks, and empowering SecOps teams to focus on stopping real threats quickly.
  • Autonomous change management AI agents: Acting like seasoned change managers, these agents generate custom implementation, test, and backout plans by analysing impact, historical data, and similar changes, ensuring seamless execution with minimal risk.
  • Proactive network test & repair AI agents: These AI-powered troubleshooters automatically detect, diagnose, and resolve network issues before they impact performance.

ServiceNow AI Agent Orchestrator and AI Agent Studio are now generally available with expanded capabilities to govern the complete AI agent lifecycle.

These tools help to streamline the setup process with guided instructions, making it easier to design and configure new AI agents using natural language descriptions. Their expanded performance management capabilities include an analytics dashboard for visualising AI agent usage, quality, and value—ensuring that AI agent performance and ROI can be easily tracked.

At the core of the ServiceNow Platform is Workflow Data Fabric, enabling AI-powered workflows that integrate with an organisation’s data, regardless of the system or source. This fabric allows businesses to gain deeper insights through AI-driven contextualisation and decision intelligence while automating manual work and creating process efficiencies.

The Yokohama release continues to expand ServiceNow’s Knowledge Graph data capabilities with enhancements to its Common Service Data Model (CSDM). CSDM provides a standardised framework for managing IT and business services to accelerate quick, safe, and compliant technology deployments.

Several customers and partners have already seen the benefits of ServiceNow’s AI solutions. CANCOM, Cognizant, Davies, and Sentara have all praised the platform’s ability to drive efficiency, cost savings, and productivity. These organisations have successfully integrated ServiceNow’s AI agents into their operations.

Jason Wojahn, Global Head of the ServiceNow Business Group at Cognizant, said: “At Cognizant, we are helping companies harness the next phase of AI with agentic AI workflows that could bring unparalleled efficiency. We were the first to bring ServiceNow’s Workflow Data Fabric to market and are working to help our clients to seamlessly connect their data with AI.

“With the Yokohama release and the integration of AI agents onto the Now Platform, clients can now operate their agents virtually effortlessly with connected data, driving productivity and ROI across their entire business.”

Darrell Burnell, Group Head of Technology at Davies, added: “Agility is essential for Davies, given our work with clients in heavily regulated markets. We’ve transformed our agent experience with ServiceNow’s generative AI, deploying Now Assist for ITSM in just six weeks to streamline information retrieval and accelerate resolution times.”

ServiceNow’s Yokohama platform release is a step forward in the evolution of AI for business transformation. By unleashing new AI agents and expanding data capabilities, ServiceNow aims to empower businesses to achieve faster and smarter workflows to maximise impact.

(Image by Thomas Fengler)

See also: Opera introduces browser-integrated AI agent

Endor Labs: AI transparency vs ‘open-washing’ https://www.artificialintelligence-news.com/news/endor-labs-ai-transparency-vs-open-washing/ https://www.artificialintelligence-news.com/news/endor-labs-ai-transparency-vs-open-washing/#respond Mon, 24 Feb 2025 18:15:45 +0000 https://www.artificialintelligence-news.com/?p=104605

The post Endor Labs: AI transparency vs ‘open-washing’ appeared first on AI News.

As the AI industry focuses on transparency and security, debates around the true meaning of “openness” are intensifying. Experts from open-source security firm Endor Labs weighed in on these pressing topics.

Andrew Stiefel, Senior Product Marketing Manager at Endor Labs, emphasised the importance of applying lessons learned from software security to AI systems.

“The US government’s 2021 Executive Order on Improving America’s Cybersecurity includes a provision requiring organisations to produce a software bill of materials (SBOM) for each product sold to federal government agencies.”

An SBOM is essentially an inventory detailing the open-source components within a product, helping detect vulnerabilities. Stiefel argued that “applying these same principles to AI systems is the logical next step.”  

“Providing better transparency for citizens and government employees not only improves security,” he explained, “but also gives visibility into a model’s datasets, training, weights, and other components.”
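As a rough illustration of what such an inventory might look like when extended to AI systems, here is a minimal, hypothetical AI bill of materials. The field names are loosely modelled on SBOM practice (formats like CycloneDX), not on any official AI-BOM schema, and the model, sources, and digest are invented:

```python
# Hypothetical AI-BOM: weights, training data, and training code are
# listed as components, each ideally pinned to a content hash.
ai_bom = {
    "model": {"name": "example-llm", "version": "1.0", "license": "apache-2.0"},
    "components": [
        {"type": "weights", "source": "hf://example-org/example-llm",
         "hash": "sha256:placeholder-digest"},
        {"type": "training-data", "source": "internal-corpus-v3"},
        {"type": "training-code", "source": "git://example-org/train"},
    ],
}

def missing_provenance(bom: dict) -> list[str]:
    """List component types that lack a recorded content hash,
    i.e. parts of the model whose provenance cannot be verified."""
    return [c["type"] for c in bom["components"] if "hash" not in c]
```

The point of the sketch is the audit question it enables: anything returned by `missing_provenance` is a gap in the model’s supply chain, exactly the kind of visibility Stiefel describes for datasets, training, and weights.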

What does it mean for an AI model to be “open”?  

Julien Sobrier, Senior Product Manager at Endor Labs, added crucial context to the ongoing discussion about AI transparency and “openness.” Sobrier broke down the complexity inherent in categorising AI systems as truly open.

“An AI model is made of many components: the training set, the weights, and programs to train and test the model, etc. It is important to make the whole chain available as open source to call the model ‘open’. It is a broad definition for now.”  

Sobrier noted the lack of consistency across major players, which has led to confusion about the term.

“Among the main players, the concerns about the definition of ‘open’ started with OpenAI, and Meta is in the news now for their LLAMA model even though that’s ‘more open’. We need a common understanding of what an open model means. We want to watch out for any ‘open-washing,’ as we saw it with free vs open-source software.”  

One potential pitfall, Sobrier highlighted, is the increasingly common practice of “open-washing,” where organisations claim transparency while imposing restrictions.

“With cloud providers offering a paid version of open-source projects (such as databases) without contributing back, we’ve seen a shift in many open-source projects: The source code is still open, but they added many commercial restrictions.”  

“Meta and other ‘open’ LLM providers might go this route to keep their competitive advantage: more openness about the models, but preventing competitors from using them,” Sobrier warned.

DeepSeek aims to increase AI transparency

DeepSeek, one of the rising — albeit controversial — players in the AI industry, has taken steps to address some of these concerns by making portions of its models and code open-source. The move has been praised for advancing transparency while providing security insights.  

“DeepSeek has already released the models and their weights as open-source,” said Andrew Stiefel. “This next move will provide greater transparency into their hosted services, and will give visibility into how they fine-tune and run these models in production.”

Such transparency has significant benefits, noted Stiefel. “This will make it easier for the community to audit their systems for security risks and also for individuals and organisations to run their own versions of DeepSeek in production.”  

Beyond security, DeepSeek also offers a roadmap on how to manage AI infrastructure at scale.

“From a transparency side, we’ll see how DeepSeek is running their hosted services. This will help address security concerns that emerged after it was discovered they left some of their Clickhouse databases unsecured.”

Stiefel highlighted that DeepSeek’s practices with tools like Docker, Kubernetes (K8s), and other infrastructure-as-code (IaC) configurations could empower startups and hobbyists to build similar hosted instances.  

Open-source AI is hot right now

DeepSeek’s transparency initiatives align with the broader trend toward open-source AI. A report by IDC reveals that 60% of organisations are opting for open-source AI models over commercial alternatives for their generative AI (GenAI) projects.  

Endor Labs research further indicates that organisations use, on average, between seven and twenty-one open-source models per application. The reasoning is clear: leveraging the best model for specific tasks and controlling API costs.

“As of February 7th, Endor Labs found that more than 3,500 additional models have been trained or distilled from the original DeepSeek R1 model,” said Stiefel. “This shows both the energy in the open-source AI model community, and why security teams need to understand both a model’s lineage and its potential risks.”  

For Sobrier, the growing adoption of open-source AI models reinforces the need to evaluate their dependencies.

“We need to look at AI models as major dependencies that our software depends on. Companies need to ensure they are legally allowed to use these models but also that they are safe to use in terms of operational risks and supply chain risks, just like open-source libraries.”

He emphasised that any risks can extend to training data: “They need to be confident that the datasets used for training the LLM were not poisoned or had sensitive private information.”  

Building a systematic approach to AI model risk  

As open-source AI adoption accelerates, managing risk becomes ever more critical. Stiefel outlined a systematic approach centred around three key steps:  

  1. Discovery: Detect the AI models your organisation currently uses.  
  2. Evaluation: Review these models for potential risks, including security and operational concerns.  
  3. Response: Set and enforce guardrails to ensure safe and secure model adoption.  
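The three steps above can be sketched as a toy pipeline. The `model:` naming convention, the risk rule, and the allow-list below are all invented for illustration; a real implementation would draw on dependency manifests and a proper risk engine:

```python
def discover(dependencies: list[str]) -> list[str]:
    """Step 1 (Discovery): find AI models among declared dependencies."""
    return [d for d in dependencies if d.startswith("model:")]

def evaluate(model: str, allow_list: set[str]) -> str:
    """Step 2 (Evaluation): a toy risk rating against an internal allow-list."""
    return "low" if model in allow_list else "needs-review"

def respond(models: list[str], allow_list: set[str]) -> dict[str, str]:
    """Step 3 (Response): enforce a guardrail, blocking unreviewed models."""
    ratings = {m: evaluate(m, allow_list) for m in models}
    return {m: ("allow" if r == "low" else "block") for m, r in ratings.items()}

# Hypothetical dependency list and allow-list
deps = ["requests", "model:deepseek-r1-distill", "model:in-house-classifier"]
allow = {"model:in-house-classifier"}
decisions = respond(discover(deps), allow)
```

Even a crude pipeline like this gives the security team the “line-of-sight” Stiefel mentions: an inventory of models in use, plus an enforceable decision for each.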

“The key is finding the right balance between enabling innovation and managing risk,” Stiefel said. “We need to give software engineering teams latitude to experiment but must do so with full visibility. The security team needs line-of-sight and the insight to act.”  

Sobrier further argued that the community must develop best practices for safely building and adopting AI models. A shared methodology is needed to evaluate AI models across parameters such as security, quality, operational risks, and openness.

Beyond transparency: Measures for a responsible AI future  

To ensure the responsible growth of AI, the industry must adopt controls that operate across several vectors:  

  • SaaS models: Safeguarding employee use of hosted models.
  • API integrations: Developers embedding third-party APIs like DeepSeek into applications; with OpenAI-compatible integrations, deployments can be switched with just two lines of code.
  • Open-source models: Developers leveraging community-built models or creating their own models from existing foundations maintained by companies like DeepSeek.
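The “two lines of code” point refers to OpenAI-compatible endpoints, where switching providers usually means changing only the base URL and API key. A minimal sketch follows; the endpoint table is an assumption for illustration, and no network call is made:

```python
# Assumed OpenAI-compatible endpoints (illustrative; check provider docs).
PROVIDERS = {
    "openai": "https://api.openai.com/v1",
    "deepseek": "https://api.deepseek.com",  # advertises OpenAI compatibility
}

def client_config(provider: str, api_key: str) -> dict[str, str]:
    """Build the two settings that typically differ between providers."""
    return {"base_url": PROVIDERS[provider], "api_key": api_key}

# e.g. with the official openai package (not imported here):
#   client = OpenAI(**client_config("deepseek", key))
cfg = client_config("deepseek", "sk-example")
```

That ease of switching is precisely why this vector needs controls: an application’s model provider can change without any other code changing around it.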

Sobrier warned of complacency in the face of rapid AI progress. “The community needs to build best practices to develop safe and open AI models,” he advised, “and a methodology to rate them along security, quality, operational risks, and openness.”  

As Stiefel succinctly summarised: “Think about security across multiple vectors and implement the appropriate controls for each.”

See also: AI in 2025: Purpose-driven models, human integration, and more
