Dashveenjit Kaur, Author at AI News

Alarming rise in AI-powered scams: Microsoft reveals $4B in thwarted fraud

AI-powered scams are evolving rapidly as cybercriminals use new technologies to target victims, according to Microsoft’s latest Cyber Signals report.

Over the past year, the tech giant says it has prevented $4 billion in fraud attempts, blocking approximately 1.6 million bot sign-up attempts every hour – showing the scale of this growing threat.

The ninth edition of Microsoft’s Cyber Signals report, titled “AI-powered deception: Emerging fraud threats and countermeasures,” reveals how artificial intelligence has lowered the technical barriers for cybercriminals, enabling even low-skilled actors to generate sophisticated scams with minimal effort.

What previously took scammers days or weeks to create can now be accomplished in minutes.

The democratisation of fraud capabilities represents a shift in the criminal landscape that affects consumers and businesses worldwide.

The evolution of AI-enhanced cyber scams

Microsoft’s report highlights how AI tools can now scan and scrape the web for company information, helping cybercriminals build detailed profiles of potential targets for highly-convincing social engineering attacks.

Bad actors can lure victims into complex fraud schemes using fake AI-enhanced product reviews and AI-generated storefronts, which come complete with fabricated business histories and customer testimonials.

According to Kelly Bissell, Corporate Vice President of Anti-Fraud and Product Abuse at Microsoft Security, the threat numbers continue to increase. “Cybercrime is a trillion-dollar problem, and it’s been going up every year for the past 30 years,” Bissell says in the report.

“I think we have an opportunity today to adopt AI faster so we can detect and close the gap of exposure quickly. Now we have AI that can make a difference at scale and help us build security and fraud protections into our products much faster.”

The Microsoft anti-fraud team reports that AI-powered fraud attacks happen globally, with significant activity originating from China and Europe – particularly Germany, due to its status as one of the largest e-commerce markets in the European Union.

The report notes that the larger a digital marketplace, the greater the proportional volume of attempted fraud it attracts.

E-commerce and employment scams leading

Two particularly concerning areas of AI-enhanced fraud are e-commerce and job recruitment scams. In the e-commerce space, fraudulent websites can now be created in minutes using AI tools and minimal technical knowledge.

Sites often mimic legitimate businesses, using AI-generated product descriptions, images, and customer reviews to fool consumers into believing they’re interacting with genuine merchants.

Adding another layer of deception, AI-powered customer service chatbots can interact convincingly with customers, delay chargebacks by stalling with scripted excuses, and manipulate complaints with AI-generated responses that make scam sites appear professional.

Job seekers are equally at risk. According to the report, generative AI has made it significantly easier for scammers to create fake listings on various employment platforms. Criminals generate fake profiles with stolen credentials, fake job postings with auto-generated descriptions, and AI-powered email campaigns to phish job seekers.

AI-powered interviews and automated emails enhance the credibility of these scams, making them harder to identify. “Fraudsters often ask for personal information, like resumes or even bank account details, under the guise of verifying the applicant’s information,” the report says.

Red flags include unsolicited job offers, requests for payment, and communication through informal platforms like text messages or WhatsApp.

Microsoft’s countermeasures to AI fraud

To combat emerging threats, Microsoft says it has implemented a multi-pronged approach across its products and services. Microsoft Defender for Cloud provides threat protection for Azure resources, while Microsoft Edge, like many browsers, features website typo protection and domain impersonation protection. The report notes that Edge uses deep learning technology to help users avoid fraudulent websites.

The company has also enhanced Windows Quick Assist with warning messages to alert users about possible tech support scams before they grant access to someone claiming to be from IT support. Microsoft now blocks an average of 4,415 suspicious Quick Assist connection attempts daily.

Microsoft has also introduced a new fraud prevention policy as part of its Secure Future Initiative (SFI). As of January 2025, Microsoft product teams must perform fraud prevention assessments and implement fraud controls as part of their design process, ensuring products are “fraud-resistant by design.”

As AI-powered scams continue to evolve, consumer awareness remains important. Microsoft advises users to be cautious of urgency tactics, verify website legitimacy before making purchases, and never provide personal or financial information to unverified sources.

For enterprises, implementing multi-factor authentication and deploying deepfake-detection algorithms can help mitigate risk.

See also: Wozniak warns AI will power next-gen scams

China’s MCP adoption: AI assistants that actually do things

China’s tech companies will drive adoption of the MCP (Model Context Protocol) standard that transforms AI assistants from simple chatbots into powerful digital helpers.

MCP works like a universal connector that lets AI assistants interact directly with users’ favourite apps and services – enabling them to make payments, book appointments, check maps, and access information on different platforms on their behalf.

As reported by the South China Morning Post, companies like Ant Group, Alibaba Cloud, and Baidu are deploying MCP-based services and positioning AI agents as the next step, after chatbots and large language models. But will China’s MCP adoption truly transform the AI landscape, or is it simply another step in the technology’s evolution?

Why China’s MCP adoption matters for AI’s evolution

The Model Context Protocol was initially introduced by Anthropic in November 2024, at the time described as a standard that connects AI agents “to the systems where data lives, including content repositories, business tools and development environments.”

MCP serves as what Ant Group calls a “USB-C port for AI applications” – a universal connector allowing AI agents to integrate with multiple systems.
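
To make the “USB-C port” analogy concrete: MCP is built on JSON-RPC 2.0, and a server exposes named tools that an agent invokes with a “tools/call” request. The sketch below shows roughly what such a request looks like; the tool name and arguments are hypothetical examples, not taken from Ant Group’s or any other vendor’s actual server.

```python
import json

# Roughly the shape of an MCP "tools/call" request an AI agent might send to a
# payment-oriented MCP server. The tool name and arguments are hypothetical;
# a real server publishes its own tool schemas via "tools/list".
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "create_payment",        # hypothetical tool exposed by the server
        "arguments": {
            "amount": "25.00",
            "currency": "CNY",
            "memo": "Lunch order",
        },
    },
}

print(json.dumps(request, indent=2))
```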

The standardisation is particularly significant for AI agents like Butterfly Effect’s Manus, which are designed to autonomously perform tasks by creating plans consisting of specific subtasks using available resources.

Unlike traditional chatbots that just respond to queries, AI agents can actively interact with different systems, collect feedback, and incorporate that feedback into new actions.

Chinese tech giants lead the MCP movement

China’s MCP adoption by tech leaders highlights the importance placed on AI agents as the next evolution in artificial intelligence:

  • Ant Group, Alibaba’s fintech affiliate, has unveiled its “MCP server for payment services,” which lets AI agents connect with Alipay’s payment platform. The integration allows users to “easily make payments, check payment statuses and initiate refunds using simple natural language commands,” according to Ant Group’s statement.
  • Additionally, Ant Group’s AI agent development platform, Tbox, now supports deployment of more than 30 MCP services currently on the market, including those for Alipay, Amap Maps, Google MCP, and Amazon Web Services’ knowledge base retrieval server.
  • Alibaba Cloud launched an MCP marketplace through its AI model hosting platform ModelScope, offering more than 1,000 services connecting to mapping tools, office collaboration platforms, online storage services, and various Google services.
  • Baidu, China’s leading search and AI company, has indicated that its support for MCP would foster “abundant use cases for [AI] applications and solutions.”

Beyond chatbots: Why AI agents represent the next frontier

China’s MCP adoption signals a shift in focus from large language models and chatbots to more capable AI agents. As Red Xiao Hong, founder and CEO of Butterfly Effect, described, an AI agent is “more like a human being” compared to how chatbots perform.

The agents not only respond to questions but “interact with the environment, collect feedback and use the feedback as a new prompt.” Companies driving progress in AI consider this distinction important.

While chatbots and LLMs can generate text and respond to queries, AI agents can take actions on multiple platforms and services. They represent an advance from the limited capabilities of conventional AI applications toward autonomous systems capable of completing more complex tasks with less human intervention.
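
Red Xiao Hong’s description translates into a simple control loop: the model proposes an action, a tool executes it, and the result is fed back in as new context. The sketch below is a generic illustration of that pattern only – call_llm and execute_tool are hypothetical stand-ins for a model API and an MCP client, not part of any vendor’s actual product.

```python
def run_agent(goal, call_llm, execute_tool, max_steps=10):
    """Generic agent loop: propose an action, execute it, feed the observation
    back to the model as new context. Purely illustrative; call_llm and
    execute_tool are hypothetical stand-ins for a model API and an MCP client.
    """
    context = [f"Goal: {goal}"]
    for _ in range(max_steps):
        action = call_llm(context)          # model returns the next tool call or a final answer
        if action.get("final_answer"):
            return action["final_answer"]
        observation = execute_tool(action)  # e.g. an MCP tools/call round-trip
        context.append(f"Action: {action}")
        context.append(f"Observation: {observation}")  # feedback becomes the new prompt
    return "Stopped after reaching the step limit."
```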

The rapid embrace of MCP by Chinese tech companies suggests they view AI agents as a new avenue for innovation and commercial opportunity, going beyond what’s possible with existing chatbots and language models.

China’s MCP adoption could position its tech companies at the forefront of practical AI implementation. By creating standardised ways for AI agents to interact with services, Chinese companies are building ecosystems where AI could deliver more comprehensive experiences.

Challenges and considerations of China’s MCP adoption

Despite the developments in China’s MCP adoption, several factors may influence the standard’s longer-term impact:

  1. International standards competition. While Chinese tech companies are racing to implement MCP, its global success depends on widespread adoption. Originally developed by Anthropic, the protocol faces potential competition from alternative standards that might emerge from other major AI players like OpenAI, Google, or Microsoft.
  2. Regulatory environments. As AI agents gain more autonomy in performing tasks, especially those involving payments and sensitive user data, regulatory scrutiny will inevitably increase. China’s regulatory landscape for AI is still evolving, and how authorities respond to these advancements will significantly impact MCP’s trajectory.
  3. Security and privacy. The integration of AI agents with multiple systems via MCP creates new potential vulnerabilities. Ensuring robust security measures across all connected platforms will be important for maintaining user trust.
  4. Technical integration challenges. While the concept of universal connectivity is appealing, achieving integration across diverse systems with varying architectures, data structures, and security protocols presents significant technical challenges.

The outlook for China’s AI ecosystem

China’s MCP adoption represents a strategic bet on AI agents as the next evolution in artificial intelligence. If successful, it could accelerate the practical implementation of AI in everyday applications, potentially transforming how users interact with digital services.

As Red Xiao Hong noted, AI agents are designed to interact with their environment in ways that more closely resemble human behaviour than traditional AI applications. The capacity for interaction and adaptation could be what finally bridges the gap between narrow AI tools and the more generalised assistants that tech companies have long promised.

See also: Manus AI agent: breakthrough in China’s agentic AI

AI memory demand propels SK Hynix to historic DRAM market leadership

AI memory demand has catapulted SK Hynix to a top position in the global DRAM market, overtaking longtime leader Samsung for the first time.

According to Counterpoint Research data, SK Hynix captured 36% of the DRAM market in Q1 2025, compared to Samsung’s 34% share.

HBM chips drive market shift

The company’s achievement ends Samsung’s three-decade dominance in DRAM manufacturing and comes shortly after SK Hynix’s operating profit passed Samsung’s in Q4 2024.

The company’s strategic focus on high-bandwidth memory (HBM) chips, essential components for artificial intelligence applications, has proven to be the decisive factor in the market shift.

“This is a milestone for SK Hynix which is successfully delivering on DRAM to a market that continues to show unfettered demand for HBM memory,” said Jeongku Choi, senior analyst at Counterpoint Research.

“The manufacturing of specialised HBM DRAM chips has been notoriously tricky and those that got it right early on have reaped dividends.”

SK Hynix has taken the overall DRAM market lead and has established its dominance in the HBM sector, occupying 70% of this high-value market segment, according to Counterpoint Research.

HBM chips, which stack multiple DRAM dies to dramatically increase data processing capabilities, have become fundamental components for training AI models.

“It’s another wake-up call for Samsung,” said MS Hwang, research director at Counterpoint Research in Seoul, as quoted by Bloomberg. Hwang noted that SK Hynix’s leadership in HBM chips likely comprised a larger portion of the company’s operating income.

Financial performance and industry outlook

The company is expected to report positive financial results on Thursday, with analysts projecting a 38% quarterly rise in sales and a 129% increase in operating profit for the March quarter, according to Bloomberg data.

The shift in market leadership reflects broader changes in the semiconductor industry as AI applications drive demand for specialised memory solutions.

While traditional DRAM remains essential for computing devices, HBM chips that can handle the enormous data requirements of generative AI systems are becoming increasingly valuable.

Market research firm TrendForce forecasts that SK Hynix will maintain its leadership position throughout 2025, coming to control over 50% of the HBM market in gigabit shipments.

Samsung’s share is expected to decline to under 30%, while Micron Technology is expected to gain ground, taking close to 20% of the market.

Counterpoint Research expects the overall DRAM market in Q2 2025 to maintain similar patterns across segment growth and vendor share, suggesting SK Hynix’s newfound leadership position may be sustainable in the near term.

Navigating potential AI memory demand headwinds

Despite the current AI memory demand boom, industry analysts identify several challenges on the horizon. “Right now the world is focused on the impact of tariffs, so the question is: what’s going to happen with HBM DRAM?” said MS Hwang.

“At least in the short term, the segment is less likely to be affected by any trade shock as AI demand should remain strong. More significantly, the end product for HBM is AI servers, which – by definition – can be borderless.”

However, longer-term risks remain significant. Counterpoint Research sees potential threats to HBM DRAM market growth “stemming from structural challenges brought on by trade shock that could trigger a recession or even a depression.”

Morgan Stanley analysts, led by Shawn Kim, expressed similar sentiment in a note to investors cited by Bloomberg: “The real tariff impact on memory resembles an iceberg, with most danger unseen below the surface and still approaching.”

The analysts cautioned that earnings reports might be overshadowed by these larger macroeconomic forces. Interestingly, despite SK Hynix’s current advantage, Morgan Stanley still favours Samsung as their top pick in the memory sector.

“It can better withstand a macro slowdown, is priced at trough multiples, has optionality of future growth via HBM, and is buying back shares every day,” analysts wrote.

Samsung is scheduled to provide its complete financial statement with net income and divisional breakdowns on April 30, after reporting preliminary operating profit of 6.6 trillion won ($6 billion) on revenue of 79 trillion won earlier this month.

The shift in competitive positioning between the two South Korean memory giants underscores how specialised AI components are reshaping the semiconductor industry.

SK Hynix’s early and aggressive investment in HBM technology has paid off, though Samsung’s considerable resources ensure the rivalry will continue.

For the broader technology ecosystem, the change in DRAM market leadership signals the growing importance of AI-specific hardware components.

As data centres worldwide continue expanding to support increasingly-sophisticated AI models, AI memory demand should remain robust despite potential macroeconomic headwinds.

(Image credit: SK Hynix)

See also: Samsung aims to boost on-device AI with LPDDR5X DRAM

Google introduces AI reasoning control in Gemini 2.5 Flash

Google has introduced an AI reasoning control mechanism for its Gemini 2.5 Flash model that allows developers to limit how much processing power the system expends on problem-solving.

Released on April 17, this “thinking budget” feature responds to a growing industry challenge: advanced AI models frequently overanalyse straightforward queries, consuming unnecessary computational resources and driving up operational and environmental costs.

While not revolutionary, the development represents a practical step toward addressing efficiency concerns that have emerged as reasoning capabilities become standard in commercial AI software.

The new mechanism enables precise calibration of processing resources before generating responses, potentially changing how organisations manage financial and environmental impacts of AI deployment.

“The model overthinks,” acknowledges Tulsee Doshi, Director of Product Management at Gemini. “For simple prompts, the model does think more than it needs to.”

The admission reveals the challenge facing advanced reasoning models – the equivalent of using industrial machinery to crack a walnut.

The shift toward reasoning capabilities has created unintended consequences. Where traditional large language models primarily matched patterns from training data, newer iterations attempt to work through problems logically, step by step. While this approach yields better results for complex tasks, it introduces significant inefficiency when handling simpler queries.

Balancing cost and performance

The financial implications of unchecked AI reasoning are substantial. According to Google’s technical documentation, when full reasoning is activated, generating outputs becomes approximately six times more expensive than standard processing. The cost multiplier creates a powerful incentive for fine-tuned control.
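
A back-of-the-envelope calculation shows why that multiplier matters at scale. The figures below are purely hypothetical and chosen only to illustrate the six-times relationship reported in Google’s documentation; they are not actual Gemini prices.

```python
# Hypothetical per-token pricing, used only to illustrate the reported 6x
# cost multiplier for fully enabled reasoning; real Gemini rates differ.
price_per_million_output_tokens = 0.60   # dollars, assumed
reasoning_multiplier = 6                 # per Google's technical documentation

queries = 1_000_000
avg_output_tokens = 300

standard_cost = queries * avg_output_tokens / 1_000_000 * price_per_million_output_tokens
full_reasoning_cost = standard_cost * reasoning_multiplier

print(f"standard: ${standard_cost:,.0f}, full reasoning: ${full_reasoning_cost:,.0f}")
# -> standard: $180, full reasoning: $1,080 for this hypothetical workload,
# which is why capping the budget on simple queries pays off quickly.
```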

Nathan Habib, an engineer at Hugging Face who studies reasoning models, describes the problem as endemic across the industry. “In the rush to show off smarter AI, companies are reaching for reasoning models like hammers even where there’s no nail in sight,” he explained to MIT Technology Review.

The waste isn’t merely theoretical. Habib demonstrated how a leading reasoning model, when attempting to solve an organic chemistry problem, became trapped in a recursive loop, repeating “Wait, but…” hundreds of times – essentially experiencing a computational breakdown and consuming processing resources.

Kate Olszewska, who evaluates Gemini models at DeepMind, confirmed Google’s systems sometimes experience similar issues, getting stuck in loops that drain computing power without improving response quality.

Granular control mechanism

Google’s AI reasoning control provides developers with a degree of precision. The system offers a flexible spectrum ranging from zero (minimal reasoning) to 24,576 tokens of “thinking budget” – the computational units representing the model’s internal processing. The granular approach allows for customised deployment based on specific use cases.
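
In practice, the budget is set through the model’s generation config. The snippet below assumes the google-genai Python SDK and its ThinkingConfig/thinking_budget parameters as described for Gemini 2.5 Flash; exact field and model names may differ between SDK versions, so treat it as a sketch rather than canonical usage.

```python
from google import genai
from google.genai import types

# Sketch of setting a "thinking budget" with the google-genai SDK. Field names
# (thinking_config, thinking_budget) and the preview model id reflect the
# feature as announced, but may change between SDK releases.
client = genai.Client(api_key="YOUR_API_KEY")

response = client.models.generate_content(
    model="gemini-2.5-flash-preview-04-17",  # preview model name at the time of writing
    contents="Summarise this support ticket in one sentence.",
    config=types.GenerateContentConfig(
        thinking_config=types.ThinkingConfig(
            thinking_budget=0  # 0 disables extra reasoning; the ceiling is 24,576 tokens
        )
    ),
)

print(response.text)
```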

Jack Rae, principal research scientist at DeepMind, says that defining optimal reasoning levels remains challenging: “It’s really hard to draw a boundary on, like, what’s the perfect task right now for thinking.”

Shifting development philosophy

The introduction of AI reasoning control potentially signals a change in how artificial intelligence evolves. Since 2019, companies have pursued improvements by building larger models with more parameters and training data. Google’s approach suggests an alternative path focusing on efficiency rather than scale.

“Scaling laws are being replaced,” says Habib, indicating that future advances may emerge from optimising reasoning processes rather than continuously expanding model size.

The environmental implications are equally significant. As reasoning models proliferate, their energy consumption grows proportionally. Research indicates that inferencing – generating AI responses – now contributes more to the technology’s carbon footprint than the initial training process. Google’s reasoning control mechanism offers a potential mitigating factor for this concerning trend.

Competitive dynamics

Google isn’t operating in isolation. The “open weight” DeepSeek R1 model, which emerged earlier this year, demonstrated powerful reasoning capabilities at potentially lower costs, triggering market volatility that reportedly caused nearly a trillion-dollar stock market fluctuation.

Unlike Google’s proprietary approach, DeepSeek makes its internal settings publicly available for developers to implement locally.

Despite the competition, Google DeepMind’s chief technical officer Koray Kavukcuoglu maintains that proprietary models will maintain advantages in specialised domains requiring exceptional precision: “Coding, math, and finance are cases where there’s high expectation from the model to be very accurate, to be very precise, and to be able to understand really complex situations.”

Industry maturation signs

The development of AI reasoning control reflects an industry now confronting practical limitations beyond technical benchmarks. While companies continue to push reasoning capabilities forward, Google’s approach acknowledges an important reality: efficiency matters as much as raw performance in commercial applications.

The feature also highlights tensions between technological advancement and sustainability concerns. Leaderboards tracking reasoning model performance show that single tasks can cost upwards of $200 to complete – raising questions about scaling such capabilities in production environments.

By allowing developers to dial reasoning up or down based on actual need, Google addresses both financial and environmental aspects of AI deployment.

“Reasoning is the key capability that builds up intelligence,” states Kavukcuoglu. “The moment the model starts thinking, the agency of the model has started.” The statement reveals both the promise and the challenge of reasoning models – their autonomy creates both opportunities and resource management challenges.

For organisations deploying AI solutions, the ability to fine-tune reasoning budgets could democratise access to advanced capabilities while maintaining operational discipline.

Google claims Gemini 2.5 Flash delivers “comparable metrics to other leading models for a fraction of the cost and size” – a value proposition strengthened by the ability to optimise reasoning resources for specific applications.

Practical implications

The AI reasoning control feature has immediate practical applications. Developers building commercial applications can now make informed trade-offs between processing depth and operational costs.

For simple applications like basic customer queries, minimal reasoning settings preserve resources while still using the model’s capabilities. For complex analysis requiring deep understanding, the full reasoning capacity remains available.

Google’s reasoning ‘dial’ provides a mechanism for establishing cost certainty while maintaining performance standards.

See also: Gemini 2.5: Google cooks up its ‘most intelligent’ AI model to date

Huawei’s AI hardware breakthrough challenges Nvidia’s dominance

Chinese tech giant Huawei has made a bold move that could potentially change who leads the global AI chip race. The company has unveiled a powerful new computing system called the CloudMatrix 384 Supernode that, according to local media reports, performs better than similar technology from American chip leader Nvidia.

If the performance claims prove accurate, the AI hardware breakthrough might reshape the technology landscape at a time when AI development is continuing worldwide, and despite US efforts to limit China’s access to advanced technology.

300 petaflops: Challenging Nvidia’s hardware dominance

The CloudMatrix 384 Supernode is described as a “nuclear-level product,” according to reports from STAR Market Daily cited by the South China Morning Post (SCMP). The hardware achieves an impressive 300 petaflops of computing power, in excess of the 180 petaflops delivered by Nvidia’s NVL72 system.

The CloudMatrix 384 Supernode was specifically engineered to address the computing bottlenecks that have become increasingly problematic as artificial intelligence models continue to grow in size and complexity.

The system is designed to compete directly with Nvidia’s offerings, which have dominated the global market for AI accelerator hardware thus far. Huawei’s CloudMatrix infrastructure was first unveiled in September 2024, and was developed specifically to meet surging demand in China’s domestic market.

The 384 Supernode variant represents the most powerful implementation of AI architecture to date, with reports indicating it can achieve a throughput of 1,920 tokens per second and maintain high levels of accuracy, reportedly matching the performance of Nvidia’s H100 chips, but using Chinese-made components instead.

Developing under sanctions: The technical achievement

What makes the AI hardware breakthrough particularly significant is that it has been achieved despite the severe technological restrictions Huawei has faced since being placed on the US Entity List.

Sanctions have limited the company’s access to advanced US semiconductor technology and design software, forcing Huawei to develop alternative approaches and rely on domestic supply chains.

The core technological advancement enabling the CloudMatrix 384’s performance appears to be Huawei’s answer to Nvidia’s NVLink – a high-speed interconnect technology that allows multiple GPUs to communicate efficiently.

Nvidia’s NVL72 system, released in March 2024, features a 72-GPU NVLink domain that functions as a single, powerful GPU, enabling real-time inference for trillion-parameter models at speeds 30 times faster than previous generations.

According to reporting from the SCMP, Huawei is collaborating with Chinese AI infrastructure startup SiliconFlow to implement the CloudMatrix 384 Supernode in supporting DeepSeek-R1, a reasoning model from Hangzhou-based DeepSeek.

Supernodes are AI infrastructure architectures equipped with more resources than standard systems – including enhanced central processing units, neural processing units, network bandwidth, storage, and memory.

The configuration allows them to function as relay servers, enhancing the overall computing performance of clusters and significantly accelerating the training of foundational AI models.

Beyond Huawei: China’s broader AI infrastructure push

The AI hardware breakthrough from Huawei doesn’t exist in isolation but rather represents part of a broader push by Chinese technology companies to build domestic AI computing infrastructure.

In February, e-commerce giant Alibaba Group announced a massive 380 billion yuan ($52.4 billion) investment in computing resources and AI infrastructure over three years – the largest-ever investment by a private Chinese company in a computing project.

For the global AI community, the emergence of viable alternatives to Nvidia’s hardware could eventually address the computing bottlenecks that have limited AI advancement. Competition in this space could potentially increase available computing capacity and provide developers with more options for training and deploying their models.

However, it’s worth noting that as of the report’s publication, Huawei had not yet responded to requests for comment on these claims.

As tensions between the US and China continue to intensify in the technology sector, Huawei’s CloudMatrix 384 Supernode represents a significant development in China’s pursuit of technological self-sufficiency.

If the performance claims are verified, this AI hardware breakthrough would mean Huawei has achieved computing independence in this niche, despite facing extensive sanctions.

The development also signals a broader trend in China’s technology sector, with multiple domestic companies intensifying their investments in AI infrastructure to capitalise on growing demand and promote the adoption of homegrown chips.

The collective effort suggests China is committed to developing domestic alternatives to American technology in this strategically important field.

See also: Manus AI agent: breakthrough in China’s agentic AI

DeepSeek’s AIs: What humans really want

Chinese AI startup DeepSeek has solved a problem that has frustrated AI researchers for several years. Its breakthrough in AI reward models could improve dramatically how AI systems reason and respond to questions.

In partnership with Tsinghua University researchers, DeepSeek has created a technique detailed in a research paper, titled “Inference-Time Scaling for Generalist Reward Modeling.” It outlines how a new approach outperforms existing methods and how the team “achieved competitive performance” compared to strong public reward models.

The innovation focuses on enhancing how AI systems learn from human preferences – an important aspect of creating more useful and aligned artificial intelligence.

What are AI reward models, and why do they matter?

AI reward models are important components in reinforcement learning for large language models. They provide feedback signals that help guide an AI’s behaviour toward preferred outcomes. In simpler terms, reward models are like digital teachers that help AI understand what humans want from their responses.

“Reward modeling is a process that guides an LLM towards human preferences,” the DeepSeek paper states. Reward modeling becomes important as AI systems get more sophisticated and are deployed in scenarios beyond simple question-answering tasks.

The innovation from DeepSeek addresses the challenge of obtaining accurate reward signals for LLMs in different domains. While current reward models work well for verifiable questions or artificial rules, they struggle in general domains where criteria are more diverse and complex.

The dual approach: How DeepSeek’s method works

DeepSeek’s approach combines two methods:

  1. Generative reward modeling (GRM): This approach enables flexibility in different input types and allows for scaling during inference time. Unlike previous scalar or semi-scalar approaches, GRM provides a richer representation of rewards through language.
  2. Self-principled critique tuning (SPCT): A learning method that fosters scalable reward-generation behaviours in GRMs through online reinforcement learning, one that generates principles adaptively.

One of the paper’s authors from Tsinghua University and DeepSeek-AI, Zijun Liu, explained that the combination of methods allows “principles to be generated based on the input query and responses, adaptively aligning reward generation process.”

The approach is particularly valuable for its potential for “inference-time scaling” – improving performance by increasing computational resources during inference rather than just during training.

The researchers found that their methods could achieve better results with increased sampling, letting models generate better rewards with more computing.
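
Conceptually, inference-time scaling of a generative reward model means sampling several independent critiques of the same answer and aggregating their scores, trading extra compute for a steadier reward signal. The sketch below illustrates that general idea only; it is not DeepSeek’s published SPCT/GRM implementation, and generate_critique is a hypothetical stand-in for a call to a generative reward model.

```python
import statistics

def scaled_reward(question, answer, generate_critique, n_samples=8):
    """Aggregate several sampled critiques into one reward estimate.

    Illustrative only: generate_critique is a hypothetical function that asks a
    generative reward model to write principles plus a critique and return a
    numeric score. More samples mean more inference compute but lower variance.
    """
    scores = []
    for _ in range(n_samples):
        _critique_text, score = generate_critique(question, answer)
        scores.append(score)
    return statistics.mean(scores)  # simple averaging; voting or a learned aggregator also works
```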

Implications for the AI Industry

DeepSeek’s innovation comes at an important time in AI development. The paper states “reinforcement learning (RL) has been widely adopted in post-training for large language models […] at scale,” leading to “remarkable improvements in human value alignment, long-term reasoning, and environment adaptation for LLMs.”

The new approach to reward modelling could have several implications:

  1. More accurate AI feedback: By creating better reward models, AI systems can receive more precise feedback about their outputs, leading to improved responses over time.
  2. Increased adaptability: The ability to scale model performance during inference means AI systems can adapt to different computational constraints and requirements.
  3. Broader application: Systems can perform better in a broader range of tasks by improving reward modelling for general domains.
  4. More efficient resource use: The research shows that inference-time scaling with DeepSeek’s method could outperform model size scaling in training time, potentially allowing smaller models to perform comparably to larger ones with appropriate inference-time resources.

DeepSeek’s growing influence

The latest development adds to DeepSeek’s rising profile in global AI. Founded in 2023 by entrepreneur Liang Wenfeng, the Hangzhou-based company has made waves with its V3 foundation and R1 reasoning models.

The company upgraded its V3 model (DeepSeek-V3-0324) recently, which the company said offered “enhanced reasoning capabilities, optimised front-end web development and upgraded Chinese writing proficiency.” DeepSeek has committed to open-source AI, releasing five code repositories in February that allow developers to review and contribute to development.

While speculation continues about the potential release of DeepSeek-R2 (the successor to R1) – Reuters has speculated on possible release dates – DeepSeek has not commented in its official channels.

What’s next for AI reward models?

According to the researchers, DeepSeek intends to make the GRM models open-source, although no specific timeline has been provided. Open-sourcing will accelerate progress in the field by allowing broader experimentation with reward models.

As reinforcement learning continues to play an important role in AI development, advances in reward modelling like those in DeepSeek and Tsinghua University’s work will likely have an impact on the abilities and behaviour of AI systems.

Work on AI reward models demonstrates that innovations in how and when models learn can be as important as increasing their size. By focusing on feedback quality and scalability, DeepSeek addresses one of the fundamental challenges in creating AI that better understands and aligns with human preferences.

See also: DeepSeek disruption: Chinese AI innovation narrows global technology divide

DeepSeek disruption: Chinese AI innovation narrows global technology divide

Chinese AI innovation is reshaping the global technology landscape, challenging assumptions about Western dominance in advanced computing.

Recent developments from companies like DeepSeek illustrate how quickly China has adapted to and overcome international restrictions through creative approaches to AI development.

According to Lee Kai-fu, CEO of Chinese startup 01.AI and former head of Google China, the gap between Chinese and American AI capabilities has narrowed dramatically.

“Previously, I think it was a six to nine-month gap and behind in everything. And now I think that’s probably three months behind in some of the core technologies, but ahead in some specific areas,” Lee told Reuters in a recent interview.

DeepSeek has emerged as the poster child for this new wave of Chinese AI innovation. On January 20, 2025, as Donald Trump was inaugurated as US President, DeepSeek quietly launched its R1 model.

The low-cost, open-source large language model reportedly rivals or surpasses OpenAI’s ChatGPT-4, yet was developed at a fraction of the cost.

Algorithmic efficiency over hardware superiority

What makes DeepSeek’s achievements particularly significant is how they’ve been accomplished despite restricted access to the latest silicon. Rather than being limited by US export controls, Chinese AI innovation has flourished by instead focusing on algorithmic efficiency and novel approaches to model architecture.

Different aspects of this innovative approach were demonstrated further when DeepSeek released an upgraded V3 model on March 25, 2025. The DeepSeek-V3-0324 features enhanced reasoning capabilities and improved performance in multiple benchmarks.

The model showed particular strength in mathematics, scoring 59.4 on the American Invitational Mathematics Examination (AIME) compared to its predecessor’s 39.6. It also improved by 10 points on LiveCodeBench to 49.2.

Häme University lecturer Kuittinen Petri noted on social media platform X that “DeepSeek is doing all this with just [roughly] 2% [of the] money resources of OpenAI.”

When he prompted the new model to create a responsive front page for an AI company, it produced a fully functional, mobile-friendly website with just 958 lines of code.

Market reactions and global impact

The financial markets have noticed the shift in the AI landscape. When DeepSeek launched its R1 model in January, America’s Nasdaq plunged 3.1%, while the S&P 500 fell 1.5% – an indication that investors recognise the potential impact of Chinese AI innovation on established Western tech companies.

The developments present opportunities and challenges for the broader global community. China’s focus on open-source, cost-effective models could democratise access to advanced AI capabilities for emerging economies.

Both China and the US are making massive investments in AI infrastructure. The Trump administration has unveiled its $500 billion Stargate Project, and China projects investments of more than 10 trillion yuan (US$1.4 trillion) in technology by 2030.

Supply chain complexities and environmental considerations

The evolving AI landscape creates new geopolitical complexities, and countries like South Korea illustrate the situation. As the world’s second-largest producer of semiconductors, South Korea became more dependent on China in 2023 for five of the six most important raw materials needed for chipmaking.

Companies like Toyota, SK Hynix, Samsung, and LG Chem remain vulnerable due to China’s supply chain dominance. As AI development accelerates, environmental implications also loom.

According to think tank the Institute for Progress, maintaining AI leadership will require the United States to build five gigawatt computing clusters in five years. By 2030, data centres could consume 10% of US electricity, more than double the 4% recorded in 2023.

Similarly, Greenpeace East Asia estimates that China’s digital infrastructure electricity consumption will surge by 289% by 2035.

The path forward in AI development

DeepSeek’s emergence has challenged assumptions about the effectiveness of technology restrictions. As Lee Kai-fu observed, Washington’s semiconductor sanctions were a “double-edged sword” that created short-term challenges but ultimately forced Chinese firms to innovate under constraints.

Jasper Zhang, a mathematics Olympiad gold medalist with a doctoral degree from the University of California, Berkeley, tested DeepSeek-V3-0324 with an AIME 2025 problem and reported that “it solved it smoothly.” Zhang expressed confidence that “open-source AI models will win in the end,” adding that his startup Hyperbolic now supports the new model on its cloud platform.

Industry experts are now speculating that DeepSeek may release its R2 model ahead of schedule. Li Bangzhu, founder of AIcpb.com, a website tracking the popularity of AI applications, noted that “the coding capabilities are much stronger, and the new version may pave the way for the launch of R2.” R2 is slated for an early May release, according to Reuters.

Both nations are pushing the boundaries of what’s possible. The implications extend beyond their borders to impact global economics, security, and environmental policy.

(Image credit: engin akyurt/Unsplash)

See also: US-China tech war escalates with new AI chips export controls


Manus AI agent: breakthrough in China’s agentic AI

Manus AI agent is China’s latest artificial intelligence breakthrough that’s turning heads in Silicon Valley and beyond. Manus was launched last week via an invitation-only preview, and represents China’s most ambitious entry into the emerging AI agent market.

Unlike anything seen to date, the Manus AI agent doesn’t just chat with users – it is allegedly capable of independently tackling complex multi-step tasks with minimal human guidance.

Developed by Chinese startup Butterfly Effect with financial backing from tech giant Tencent Holdings, Manus AI agent has captured global attention for its ability to bridge the gap between theoretical AI capabilities and practical, real-world applications. It uses an innovative multi-model architecture that combines the strengths of multiple leading language models.

Breakthrough autonomous task execution

In a post on X, Peak Ji Yichao, co-founder and chief scientist at Butterfly Effect, said that the agentic AI was built using existing large language models, including Anthropic’s Claude and fine-tuned versions of Alibaba’s open-source Qwen.

Its multi-model nature allows Manus to use different AI strengths according to what’s demanded of it, resulting in more sophisticated reasoning and execution capabilities.
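
The coverage does not spell out how Manus decides which underlying model handles which request, but the general multi-model pattern is simple to picture: classify the task, then dispatch it to the model best suited to that category. The sketch below is a hypothetical illustration of that routing idea, not Manus’s actual architecture; the model names are placeholders.

```python
# Hypothetical multi-model routing sketch; not Manus's actual architecture.
ROUTES = {
    "planning": "general-reasoning-model",   # e.g. a model strong at multi-step planning
    "coding": "code-tuned-model",            # e.g. a model fine-tuned for code
    "default": "general-purpose-model",
}

def route_task(task_type: str) -> str:
    """Return the backend model name for a task category, with a fallback."""
    return ROUTES.get(task_type, ROUTES["default"])

print(route_task("coding"))  # -> "code-tuned-model"
```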

“The Manus AI agent represents a fundamentally different approach to artificial intelligence,” CNN Business stated. According to coverage, Manus “can carry out complex, multi-step tasks like screening resumés and creating a website,” and “doesn’t only generate ideas but delivers tangible results, like producing a report recommending properties to buy based on specific criteria.”

Real-world performance assessment

In an extensive hands-on evaluation, MIT Technology Review tested the Manus AI agent in three distinct task categories: compiling comprehensive journalist lists, conducting real estate searches with complex parameters, and identifying candidates for its prestigious Innovators Under 35 program.

“Using Manus feels like collaborating with a highly intelligent and efficient intern,” wrote Caiwei Chen in the assessment. “While it occasionally lacks understanding of what it’s being asked to do, makes incorrect assumptions, or cuts corners to expedite tasks, it explains its reasoning clearly, is remarkably adaptable, and can improve substantially when provided with detailed instructions or feedback.”

The evaluation revealed one of the Manus AI agent’s most distinctive features – its “Manus’s Computer” interface, which provides unprecedented transparency into the AI’s decision-making process.

The application window lets users observe the agent’s actions in real time and intervene when necessary, creating a collaborative human-AI workflow that maintains user control while automating complex processes.

Technical implementation challenges

Despite impressive capabilities, the Manus AI agent faces significant technical hurdles in its current implementation. MIT Technology Review documented frequent system crashes and timeout errors during extended use.

The platform displayed error messages, citing “high service load,” suggesting that computational infrastructure remains a limitation.

The technical constraints have contributed to highly restricted access, with less than 1% of wait-listed users receiving invite codes – the official Manus Discord channel has already accumulated over 186,000 members.

According to reporting from Chinese technology publication 36Kr, the Manus AI agent’s operational costs remain relatively competitive at approximately $2 per task.

Strategic partnership with Alibaba Cloud

The creators of the Manus AI agent have announced a partnership with Alibaba’s cloud computing division. According to a South China Morning Post report dated March 11, “Manus will engage in strategic cooperation with Alibaba’s Qwen team to meet the needs of Chinese users.”

The partnership aims to make Manus available on “domestic models and computing platforms,” although implementation timelines remain unspecified.

Parallel advancements in foundation models

The Manus-Alibaba partnership coincides with Alibaba’s advances in AI foundation model technology. On March 6, the company published its QwQ-32B reasoning model, claiming performance that surpasses OpenAI’s o1-mini and rivals DeepSeek’s R1 model, despite a lower parameter count.

CNN Business reported, “Alibaba touted its new model, QwQ-32B, in an online statement as delivering exceptional performance, almost entirely surpassing OpenAI-o1-mini and rivalling the strongest open-source reasoning model, DeepSeek-R1.”

The claimed efficiency gains are particularly noteworthy – Alibaba says QwQ-32B achieves competitive performance with just 32 billion parameters, compared to the 671 billion parameters in DeepSeek’s R1 model. The reduced model size suggests substantially lower computational requirements for training and inference while retaining advanced reasoning capabilities.

China’s strategic AI investments

The Manus AI agent and Alibaba’s model advancements reflect China’s broader strategic emphasis on artificial intelligence development. The Chinese government has pledged explicit support for “emerging industries and industries of the future,” with artificial intelligence receiving particular focus alongside quantum computing and robotics.

Alibaba will invest 380 billion yuan (approximately $52.4 billion) in AI and cloud computing infrastructure in the next three years, a figure the company notes exceeds its total investments in these sectors during the previous decade.

As MIT Technology Review’s Caiwei Chen said, “Chinese AI companies are not just following in the footsteps of their Western counterparts. Rather than just innovating on base models, they are actively shaping the adoption of autonomous AI agents in their way.”

The Manus AI agent also exemplifies how China’s artificial intelligence ecosystem has evolved beyond merely replicating Western advances. Government policies promoting technological self-reliance, substantial funding initiatives, and a growing pipeline of specialised AI talent from Chinese universities have created conditions for original innovation.

Rather than a single approach to artificial intelligence, we are witnessing diverse implementation philosophies likely resulting in complementary systems optimised for different uses and cultural contexts.

The post Manus AI agent: breakthrough in China’s agentic AI appeared first on AI News.

]]>
https://www.artificialintelligence-news.com/news/manus-ai-agent-breakthrough-in-chinas-agentic-ai/feed/ 0
DeepSeek’s AI dominance expands from EVs to e-scooters in China https://www.artificialintelligence-news.com/news/deepseeks-ai-dominance-expands-from-evs-to-e-scooters-in-china/ https://www.artificialintelligence-news.com/news/deepseeks-ai-dominance-expands-from-evs-to-e-scooters-in-china/#respond Tue, 18 Feb 2025 14:18:18 +0000 https://www.artificialintelligence-news.com/?p=104548 DeepSeek mobility integration is spreading across China’s transport sector, with companies including automotive giants and e-scooter manufacturers incorporating AI into their products. The adoption wave began with primary electric vehicle (EV) manufacturers and has expanded recently to include the country’s leading electric two-wheeler brands. DeepSeek’s mobility integration transforms the auto industry According to the South […]

The post DeepSeek’s AI dominance expands from EVs to e-scooters in China appeared first on AI News.

]]>
DeepSeek mobility integration is spreading across China’s transport sector, with companies including automotive giants and e-scooter manufacturers incorporating AI into their products. The adoption wave began with primary electric vehicle (EV) manufacturers and has expanded recently to include the country’s leading electric two-wheeler brands.

DeepSeek’s mobility integration transforms the auto industry

According to the South China Morning Post, over the past two weeks, more than a dozen Chinese automakers have announced plans to integrate DeepSeek’s AI technology into their vehicles. The roster includes industry leader BYD, established manufacturers like Geely, Great Wall Motor, Chery Automobile, and SAIC Motor, and emerging players like Leapmotor.

BYD’s commitment to the technology is particularly noteworthy, with the company planning to integrate DeepSeek in its Xuanji vehicle software platform. The integration will let BYD offer preliminary self-driving capabilities on nearly all its models with no change to the sticker price, making autonomous driving accessible to more consumers.

The initiative covers around 20 models, including the highly-affordable Seagull hatchback, which is currently priced at 69,800 yuan (US$9,575).

E-scooter brands join the DeepSeek bandwagon

DeepSeek has hit China’s e-scooter sector most recently, as Xiaomi-backed Segway-Ninebot Group and Nasdaq-listed Niu Technologies work to incorporate AI into their electric two-wheelers.

Ninebot stated on Friday that it would “deeply integrate DeepSeek” into its products, promising enhanced features through its mobile app. The improvements are said to include AI-powered content creation, data analytics, personalised recommendations, and intelligent services to riders.

Niu Technologies claims to have integrated DeepSeek’s large language models (LLMs) as of February 9 this year. The company plans to use the technology for the following (a simplified sketch of this kind of integration appears after the list):

  • Driver assistance systems
  • Riding safety features
  • AI-powered travel companions
  • Voice interaction
  • Intelligent service recommendations
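
For a concrete sense of what such features could involve, here is a minimal, hypothetical sketch of a companion app asking an LLM for riding advice through an OpenAI-compatible chat endpoint (DeepSeek publishes one). The base URL, model name, prompt, and helper function are illustrative assumptions, not details of Niu’s actual implementation.

```python
# Hypothetical sketch: a scooter companion app asking an LLM for riding
# advice through an OpenAI-compatible chat endpoint. The base URL, model
# name, prompt, and function are illustrative assumptions only.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.deepseek.com",  # assumed OpenAI-compatible endpoint
    api_key="YOUR_API_KEY",               # placeholder credential
)

def suggest_ride_advice(battery_pct: int, distance_km: float) -> str:
    """Return a one-sentence, rider-friendly recommendation."""
    response = client.chat.completions.create(
        model="deepseek-chat",  # assumed model identifier
        messages=[
            {"role": "system",
             "content": "You are a concise riding assistant for an e-scooter app."},
            {"role": "user",
             "content": f"Battery at {battery_pct}%, planned trip of {distance_km} km. "
                        "Should the rider charge first? Answer in one sentence."},
        ],
    )
    return response.choices[0].message.content

print(suggest_ride_advice(battery_pct=35, distance_km=18.5))
```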

Yadea Group, the world’s largest electric two-wheeler manufacturer by sales, announced on Saturday that it plans to embed DeepSeek’s technology into its ecosystem.

The rapid adoption of DeepSeek in China’s mobility sector reflects what industry observers call “DeepSeek fever.” The technology’s appeal lies in its cost-effective, resource-efficient approach to AI integration.

The Hangzhou-based company’s open-source AI models, DeepSeek-V3 and DeepSeek-R1, operate at a fraction of the cost and computing power typically required for large language model projects.

“Cars without DeepSeek will either lose market share or be edged out of the market,” said Phate Zhang, founder of Shanghai-based EV data provider CnEVPost.

The expansion of DeepSeek mobility integration comes at a time when Chinese e-scooter brands are gaining traction in overseas markets. According to customs data, the value of electric two-wheeler exports rose 27.6% to US$5.82 billion in 2024, passing the previous peak of US$5.31 billion in 2022. Export volume increased by 47% to 22.13 million units.
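
Back-calculating from the customs figures quoted above gives a sense of the underlying trend; the 2023 baselines below are implied by the reported growth rates rather than reported directly.

```python
# Back-calculating the implied 2023 baselines from the 2024 customs figures
# quoted above. The 2023 numbers are derived here, not reported figures.
value_2024_usd = 5.82e9      # export value, up 27.6%
volume_2024_units = 22.13e6  # export volume, up 47%

value_2023_usd = value_2024_usd / 1.276
volume_2023_units = volume_2024_units / 1.47

print(f"Implied 2023 export value:  ~${value_2023_usd / 1e9:.2f}B")
print(f"Implied 2023 export volume: ~{volume_2023_units / 1e6:.2f}M units")
print(f"Average 2024 unit value:    ~${value_2024_usd / volume_2024_units:.0f}")
# Roughly $4.56B, 15.05M units, and about $263 per unit respectively -
# consistent with value growing more slowly than volume (falling unit prices).
```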

Research firm IDC notes that DeepSeek’s open-source model has fostered a collaborative innovation ecosystem via platforms like GitHub, letting developers participate in optimisation and security testing.

The collaborative approach is expected to improve companies’ ability to deploy, train, and utilise large language models.

The impact of DeepSeek mobility integration on China’s transport sector appears to be growing. Zhang Yongwei, general secretary of China EV100, projects that by 2025, approximately 15 million cars – representing two-thirds of national sales – will be equipped with preliminary autonomous driving systems, underscoring the transformative potential of the technology in reshaping China’s transport system.

(Photo by Kenny Leys)

See also: DeepSeek ban? China data transfer boosts security concerns

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post DeepSeek’s AI dominance expands from EVs to e-scooters in China appeared first on AI News.

]]>
https://www.artificialintelligence-news.com/news/deepseeks-ai-dominance-expands-from-evs-to-e-scooters-in-china/feed/ 0
Could Alibaba’s Qwen AI power the next generation of iPhones in China? https://www.artificialintelligence-news.com/news/could-alibabas-qwen-ai-power-the-next-generation-of-iphones-in-china/ https://www.artificialintelligence-news.com/news/could-alibabas-qwen-ai-power-the-next-generation-of-iphones-in-china/#respond Thu, 13 Feb 2025 14:34:12 +0000 https://www.artificialintelligence-news.com/?p=104418 Apple’s aim to integrate Qwen AI into Chinese iPhones has taken a significant step forward, with sources indicating a potential partnership between the Cupertino giant and Alibaba Group Holding. The development could reshape how AI features are implemented in one of the world’s most regulated tech markets. According to multiple sources familiar with the matter, […]

The post Could Alibaba’s Qwen AI power the next generation of iPhones in China? appeared first on AI News.

]]>
Apple’s aim to integrate Qwen AI into Chinese iPhones has taken a significant step forward, with sources indicating a potential partnership between the Cupertino giant and Alibaba Group Holding. The development could reshape how AI features are implemented in one of the world’s most regulated tech markets.

According to multiple sources familiar with the matter, Apple is in advanced talks to use Alibaba’s Qwen AI models for its iPhone lineup in mainland China. The move would depart from Apple’s global strategy of using OpenAI’s GPT models for its AI features, highlighting the company’s willingness to adapt to local market conditions.

The technical edge of Qwen AI

Qwen AI is attractive to Apple in China because of its proven capabilities in the open-source AI ecosystem. Recent benchmarks from Hugging Face, a leading collaborative machine-learning platform, position Qwen at the forefront of open-source large language models (LLMs).

The platform’s data shows Qwen-powered models dominating the top 10 positions in global performance rankings, demonstrating the technical maturity that Apple seeks for its AI integration.

“The selection of Qwen AI for iPhone integration would validate Alibaba’s AI capabilities,” explains Morningstar’s senior equity analyst Chelsey Lam. “This could be particularly important for Apple’s strategy to re-invigorate iPhone sales in China, where AI features have become increasingly important for smartphone users.”

Regulatory navigation and market impact

The potential partnership reflects an understanding of China’s AI regulatory landscape. While Apple’s global AI features remain unavailable in China due to regulatory requirements, partnering with Alibaba could provide a compliant pathway to introduce advanced AI capabilities.

Market reaction to the news has been notably positive:

  • Alibaba’s stock surged 7.6% on Monday, followed by an additional 1.3% gain on Tuesday
  • Apple shares responded with a 2.2% increase
  • The tech sector has shown renewed interest in China-focused AI integration strategies

Development timeline and expectations

The timing of the potential collaboration aligns with Apple’s upcoming China developer conference in Shanghai, scheduled for March 25. Industry observers speculate the event could serve as a platform on which to announce the integration of Qwen AI features into the iPhone ecosystem.

“The partnership could change how international tech companies approach AI localisation in China,” noted a senior AI researcher at a leading Chinese university, speaking anonymously. “It’s not just about technology integration; it’s about creating a sustainable model for AI development in China’s regulatory framework.”

Implications for developers and users

For Chinese iOS developers, the potential integration of Qwen AI presents an opportunity. The partnership could enable:

  • Creation of locally optimised AI applications
  • Enhanced natural language processing capabilities specific to Chinese users
  • Seamless integration with local services and platforms

Prospects and industry impact

The effects of the partnership extend beyond immediate market concerns. As global tech companies navigate the complexities of operating in China, the Apple-Alibaba collaboration could serve as a blueprint for future integrations.

For Alibaba, securing Apple as a flagship partner could catalyse more partnerships with global technology companies seeking AI solutions for China. The collaboration would demonstrate Qwen AI’s capability to meet the stringent requirements of one of the world’s most demanding tech companies.

Looking ahead

While both companies maintain official silence on the partnership, the tech community awaits announcements at the upcoming Shanghai developer conference. The development comes at a time when AI capabilities increasingly influence smartphone purchasing decisions. For Apple, success in China will shape its global growth trajectory, and integrating Qwen AI could provide the competitive edge it needs to maintain its premium market position against local manufacturers offering advanced AI features.

The potential partnership also underscores a broader trend in the tech industry: the growing importance of localised AI solutions in major markets.

See also: Has Huawei outsmarted Apple in the AI race?

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here

The post Could Alibaba’s Qwen AI power the next generation of iPhones in China? appeared first on AI News.

]]>
https://www.artificialintelligence-news.com/news/could-alibabas-qwen-ai-power-the-next-generation-of-iphones-in-china/feed/ 0
Big tech’s $320B AI spend defies efficiency race https://www.artificialintelligence-news.com/news/big-techs-320b-ai-spend-defies-efficiency-race/ https://www.artificialintelligence-news.com/news/big-techs-320b-ai-spend-defies-efficiency-race/#respond Wed, 12 Feb 2025 11:57:25 +0000 https://www.artificialintelligence-news.com/?p=104318 Tech giants are beginning an unprecedented $320 billion AI infrastructure spending spree in 2025, brushing aside concerns about more efficient AI models from challengers like DeepSeek. The massive investment push from Amazon, Microsoft, Google, and Meta signals the big players’ unwavering conviction that AI’s future demands bold infrastructure bets, despite (or perhaps because of) emerging […]

The post Big tech’s $320B AI spend defies efficiency race appeared first on AI News.

]]>
Tech giants are beginning an unprecedented $320 billion AI infrastructure spending spree in 2025, brushing aside concerns about more efficient AI models from challengers like DeepSeek. The massive investment push from Amazon, Microsoft, Google, and Meta signals the big players’ unwavering conviction that AI’s future demands bold infrastructure bets, despite (or perhaps because of) emerging efficiency breakthroughs.

The stakes are high, with collective capital expenditure jumping 30% from 2024’s $246 billion. While investors may question the necessity of such aggressive spending, tech leaders are doubling down on their belief that AI represents a transformative opportunity worth every dollar.

Amazon stands at the forefront of this AI spending race, according to a report by Business Insider. The company is flexing its financial muscle with a planned $100 billion capital expenditure for 2025 – a dramatic leap from its $77 billion last year. AWS chief Andy Jassy isn’t mincing words, calling AI a “once-in-a-lifetime business opportunity” that demands aggressive investment.

Microsoft’s Satya Nadella backs an equally bullish stance with hard numbers of his own. The company has earmarked $80 billion for AI infrastructure in 2025, and its existing AI ventures are already delivering: Nadella has spoken of $13 billion in annual AI revenue and 175% year-over-year growth.

His perspective draws from economic wisdom: citing the Jevons paradox, he argues that making AI more efficient and accessible will spark an unprecedented surge in demand.

Not to be outdone, Google parent Alphabet is pushing all its chips to the centre of the table, with a $75 billion infrastructure investment in 2025, dwarfing analysts’ expectations of $58 billion. Despite market jitters about cloud growth and AI strategy, CEO Sundar Pichai maintains Google’s product innovation engine is firing on all cylinders.

Meta’s approach is to pour $60-65 billion into capital spending in 2025 – up from $39 billion in 2024. The company is carving its own path by championing an “American standard” for open-source AI models, a strategy that has caught investor attention, particularly given Meta’s proven track record in monetising AI through sophisticated ad targeting.
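
Taken together, the company-level plans quoted above reconcile neatly with the $320 billion headline; the quick check below uses only the figures cited in this article.

```python
# Quick reconciliation of the $320B headline with the company-level plans
# quoted above (Meta's range is kept as a low/high pair).
plans_billion_usd = {
    "Amazon": (100, 100),
    "Microsoft": (80, 80),
    "Alphabet": (75, 75),
    "Meta": (60, 65),
}

low = sum(lo for lo, _ in plans_billion_usd.values())
high = sum(hi for _, hi in plans_billion_usd.values())
growth = 320 / 246 - 1  # versus 2024's collective $246B

print(f"Combined 2025 plans: ${low}B-${high}B")
print(f"Implied growth over 2024: {growth:.0%}")
# Combined plans of $315B-$320B and roughly 30% growth, matching the
# figures cited in the article.
```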

The emergence of DeepSeek’s efficient AI models has sparked some debate in investment circles. Investing.com’s Jesse Cohen voices growing demands for concrete returns on existing AI investments. Yet Wedbush’s Dan Ives dismisses such concerns, likening DeepSeek to “the Temu of AI” and insisting the revolution is just beginning.

The market’s response to these bold plans tells a mixed story. Meta’s strategy has won investor applause, while Amazon and Google face more sceptical reactions, with stock drops of 5% and 8% respectively following spending announcements in earnings calls. Yet tech leaders remain undeterred, viewing robust AI infrastructure as non-negotiable for future success.

The intensity of infrastructure investment suggests a reality: technological breakthroughs in AI efficiency aren’t slowing the race – they’re accelerating it. As big tech pours unprecedented resources into AI development, it’s betting that increased efficiency will expand rather than contract the market for AI services.

The high-stakes gamble on AI’s future reveals a shift in how big tech views investment. Rather than waiting to see how efficiency improvements might reduce costs, the industry is scaling up aggressively, convinced that tomorrow’s AI landscape will demand more infrastructure, not less. In this view, DeepSeek’s breakthroughs aren’t a threat to that strategy – they’re validation of AI’s expanding potential.

The message from Silicon Valley is that the AI revolution demands massive infrastructure investment, and the giants of tech are all in. The question isn’t whether to invest in AI infrastructure, but whether $320 billion will be enough to meet the coming surge in demand.

See also: DeepSeek ban? China data transfer boosts security concerns

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post Big tech’s $320B AI spend defies efficiency race appeared first on AI News.

]]>
https://www.artificialintelligence-news.com/news/big-techs-320b-ai-spend-defies-efficiency-race/feed/ 0
Zebra Technologies and enterprise AI in the APAC https://www.artificialintelligence-news.com/news/zebra-technologies-and-enterprise-ai-in-the-apac/ https://www.artificialintelligence-news.com/news/zebra-technologies-and-enterprise-ai-in-the-apac/#respond Tue, 04 Feb 2025 14:43:12 +0000 https://www.artificialintelligence-news.com/?p=104124 Enterprise AI transformation is reaching a tipping point. In the Asia Pacific, Zebra Technologies has unveiled ambitious plans to change frontline operations across the region. At a time when CISQ estimates poor software quality will cost US businesses $2.41 trillion in 2022, the push for practical, results-driven AI implementation is urgent. “Elements of our three-pillar […]

The post Zebra Technologies and enterprise AI in the APAC appeared first on AI News.

]]>
Enterprise AI transformation is reaching a tipping point. In the Asia Pacific, Zebra Technologies has unveiled ambitious plans to change frontline operations across the region. At a time when CISQ estimates that poor software quality cost US businesses $2.41 trillion in 2022, the push for practical, results-driven AI implementation is urgent.

“Elements of our three-pillar strategy have been around for quite some time, but what’s revolutionising the frontline today is intelligent automation,” Tom Bianculli, Chief Technology Officer at Zebra Technologies, told reporters at a briefing during Zebra’s 2025 Kickoff in Perth, Australia last week. “We’re not just digitising workflows – we’re connecting wearable technology with robotic workflows, enabling frontline workers to seamlessly interact with automation in ways that were impossible just five years ago.”

Practical applications driving change

The real-world impact of enterprise AI transformation is already evident in Zebra’s recent collaboration with a major North American retailer. The solution combines traditional AI with generative AI capabilities, enabling fast shelf analysis and automated task generation.

“You snap a picture of a shelf, [and] within one second, the traditional AI identifies all the products on the shelf, identifies where there’s missing product, maybe misplaced product… and then it makes that information available to a Gen AI agent that then decides what should you do,” Bianculli explains.

This level of automation has demonstrated significant operational improvements, reducing staffing requirements at the retailer by 25%. When it detects missing stock, the system automatically generates tasks for the right personnel, streamlining what was previously a multi-step manual process.
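
The workflow Bianculli describes is essentially a two-stage pipeline: a vision model turns a shelf photo into structured findings, and a generative agent turns those findings into routed tasks. The sketch below illustrates that shape only; every function and field name is a hypothetical stand-in, not part of Zebra’s actual software.

```python
# Illustrative two-stage pipeline: a vision step produces structured shelf
# findings, then an agent step turns them into routed tasks. All names here
# are hypothetical stand-ins, not Zebra's actual APIs.
from dataclasses import dataclass

@dataclass
class ShelfFinding:
    sku: str
    status: str    # e.g. "missing", "misplaced", "ok"
    location: str  # aisle/bay identifier

def detect_shelf_issues(image_bytes: bytes) -> list[ShelfFinding]:
    """Stand-in for the 'traditional AI' step (detection + planogram check)."""
    # A real system would run a vision model here; this stub returns examples.
    return [
        ShelfFinding("SKU-1042", "missing", "Aisle 7, Bay 3"),
        ShelfFinding("SKU-2210", "misplaced", "Aisle 7, Bay 1"),
    ]

def plan_tasks(findings: list[ShelfFinding]) -> list[str]:
    """Stand-in for the generative-AI agent that decides what to do next."""
    tasks = []
    for f in findings:
        if f.status == "missing":
            tasks.append(f"Restock {f.sku} at {f.location} (assign: stockroom)")
        elif f.status == "misplaced":
            tasks.append(f"Relocate {f.sku} at {f.location} (assign: floor staff)")
    return tasks

for task in plan_tasks(detect_shelf_issues(b"...photo bytes...")):
    print(task)
```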

APAC leading AI adoption

The Asia Pacific region is emerging as a frontrunner in enterprise AI transformation. IBM research presented at the briefing indicates that 54% of APAC enterprises now expect AI to deliver longer-term innovation and revenue generation benefits. The region’s AI investment priorities for 2025 are clearly defined:

  • 21% focused on enhancing customer experiences
  • 18% directed toward business process automation
  • 16% invested in sales automation and customer lifecycle management

Ryan Goh, Senior Vice President and General Manager of Asia Pacific at Zebra Technologies, points to practical implementations that are already driving results: “We have customers in e-commerce using ring scanners to scan packages, significantly improving their productivity compared to traditional scanning methods.”

Innovation at the edge

Zebra’s approach to AI deployment encompasses:

  • AI devices with native neural architecture for on-device processing
  • Multimodal experiences that mirror human cognitive capabilities
  • Gen AI agents optimising workload distribution between edge and cloud

The company is advancing its activities in edge computing, with Bianculli revealing plans for on-device language models. This innovation mainly targets environments where internet connectivity is restricted or prohibited, ensuring AI capabilities remain accessible regardless of network conditions.
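
By way of illustration, serving a compact open-weight model locally can be as simple as the sketch below, assuming the Hugging Face transformers library and a small instruction-tuned checkpoint (named here purely as an example; Zebra has not said which models or runtimes its on-device work will use). In a genuinely offline deployment, the weights would be bundled with the device rather than downloaded at first run.

```python
# Illustrative on-device inference with a small open-weight model using the
# Hugging Face transformers library. The checkpoint named below is an
# assumption for the sketch, not a model Zebra has announced; it runs on CPU
# once the weights are available locally.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="Qwen/Qwen2.5-0.5B-Instruct",  # assumed compact instruction-tuned model
)

prompt = "Summarise for a supervisor: pallet 14 scanned, 3 cartons damaged, 1 label unreadable."
result = generator(prompt, max_new_tokens=60, do_sample=False)
print(result[0]["generated_text"])
```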

Regional market dynamics

The enterprise AI transformation journey varies significantly across APAC markets. India’s landscape is particularly dynamic, with the country’s GDP projected to grow 6.6% and manufacturing expected to surge by 7% YOY. Its commitment to AI is evident, with 96% of organisations surveyed by WEF actively running AI programmes.

Japan presents a different scenario, with 1.2% projected GDP growth and some unique challenges to automation adoption. “We used to think that tablets are for retail, but the Bay Area proved us wrong,” Goh notes, highlighting unexpected applications in manufacturing and customer self-service solutions.

Future trajectory

Gartner’s projections indicate that by 2027, 25% of CIOs will implement augmented connected workforce initiatives that will halve the time required for competency development. Zebra is already moving in this direction with its Z word companion, which uses generative AI and large language models and is scheduled for pilot deployment with select customers in Q2 of this year.

With a global presence spanning 120+ offices in 55 countries and 10,000+ channel partners across 185 countries, Zebra is positioned to play a strong role in the enterprise AI transformation across APAC. As the region moves from AI experimentation to full-scale deployment, the focus remains on delivering practical innovations that drive measurable business outcomes and operational efficiency.


See also: Walmart and Amazon drive retail transformation with AI

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here

The post Zebra Technologies and enterprise AI in the APAC appeared first on AI News.

]]>
https://www.artificialintelligence-news.com/news/zebra-technologies-and-enterprise-ai-in-the-apac/feed/ 0