Research Developments | AI News
https://www.artificialintelligence-news.com/categories/ai-research/

Are AI chatbots really changing the world of work? | https://www.artificialintelligence-news.com/news/are-ai-chatbots-really-changing-the-world-of-work/ | Fri, 02 May 2025 09:54:32 +0000

We’ve heard endless predictions about how AI chatbots will transform work, but data paints a much calmer picture—at least for now.

Despite huge and ongoing advancements in generative AI, the massive wave it was supposed to create in the world of work looks more like a ripple so far.

Researchers Anders Humlum (University of Chicago) and Emilie Vestergaard (University of Copenhagen) didn’t just rely on anecdotes. They dug deep, connecting responses from two big surveys (late 2023 and 2024) with official, detailed records about jobs and pay in Denmark.

The pair zoomed in on around 25,000 people working in 7,000 different places, covering 11 jobs thought to be right in the path of AI disruption.   

Everyone’s using AI chatbots for work, but where are the benefits?

What they found confirms what many of us see: AI chatbots are everywhere in Danish workplaces now. Most bosses are actually encouraging staff to use them, a real turnaround from the early days when companies were understandably nervous about things like data privacy.

Almost four out of ten employers have even rolled out their own in-house chatbots, and nearly a third of employees have had some formal training on these tools.   

When bosses gave the nod, the number of staff using chatbots practically doubled, jumping from 47% to 83%. It also helped level the playing field a bit. That gap between men and women using chatbots? It shrank noticeably when companies actively encouraged their use, especially when they threw in some training.

So, the tools are popular, companies are investing, people are getting trained… but the big economic shift? It seems to be missing in action.

Using statistical methods to compare people who used AI chatbots for work with those who didn’t, both before and after ChatGPT burst onto the scene, the researchers found… well, basically nothing.

“Precise zeros,” the researchers call their findings. No significant bump in pay, no change in recorded work hours, across all 11 job types they looked at. And they’re pretty confident about this – the numbers rule out any average effect bigger than just 1%.

This wasn’t just a blip, either. The lack of impact held true even for the keen beans who jumped on board early, those using chatbots daily, or folks working where the boss was actively pushing the tech.

Looking at whole workplaces didn’t change the story; places with lots of chatbot users didn’t see different trends in hiring, overall wages, or keeping staff compared to places using them less.

Productivity gains: More of a gentle nudge than a shove

Why the big disconnect? Why all the hype and investment if it’s not showing up in paychecks or job stats? The study flags two main culprits: the productivity boosts aren’t as huge as hoped in the real world, and what little gains there are aren’t really making their way into wages.

Sure, people using AI chatbots for work felt they were helpful. They mentioned better work quality and feeling more creative. But the number one benefit? Saving time.

However, when the researchers crunched the numbers, the average time saved was only about 2.8% of a user’s total work hours. That’s miles away from the huge 15%, 30%, even 50% productivity jumps seen in controlled lab-style experiments (RCTs) involving similar jobs.

Why the difference? A few things seem to be going on. Those experiments often focus on jobs or specific tasks where chatbots really shine (like coding help or basic customer service responses). This study looked at a wider range, including jobs like teaching where the benefits might be smaller.

The researchers stress the importance of what they call “complementary investments”. People whose companies encouraged chatbot use and provided training actually did report bigger benefits – saving more time, improving quality, and feeling more creative. This suggests that just having the tool isn’t enough; you need the right support and company environment to really unlock its potential.

And even those modest time savings weren’t padding wallets. The study reckons only a tiny fraction – maybe 3% to 7% – of the time saved actually showed up as higher earnings. Do the arithmetic and a 2.8% time saving with a 3–7% pass-through works out to a wage effect of roughly 0.1–0.2% – entirely consistent with those “precise zeros”. It might be down to standard workplace inertia, or maybe it’s just harder to ask for a raise based on using a tool your boss hasn’t officially blessed, especially when many people started using them off their own bat.

Making new work, not less work

One fascinating twist is that AI chatbots aren’t just about doing old work tasks faster. They seem to be creating new tasks too. Around 17% of people using them said they had new workloads, mostly brand new types of tasks.

This phenomenon happened more often in workplaces that encouraged chatbot use. It even spilled over to people not using the tools – about 5% of non-users reported new tasks popping up because of AI, especially teachers having to adapt assignments or spot AI-written homework.   

What kind of new tasks? Things like figuring out how to weave AI into daily workflows, drafting content with AI help, and importantly, dealing with the ethical side and making sure everything’s above board. It hints that companies are still very much in the ‘figuring it out’ phase, spending time and effort adapting rather than just reaping instant rewards.

What’s the verdict on the work impact of AI chatbots?

The researchers are careful not to write off generative AI completely. They see pathways for it to become more influential over time, especially as companies get better at integrating it and maybe as those “new tasks” evolve.

But for now, their message is clear: the current reality doesn’t match the hype about a massive, immediate job market overhaul.

“Despite rapid adoption and substantial investments… our key finding is that AI chatbots have had minimal impact on productivity and labor market outcomes to date,” the researchers conclude.   

It brings to mind that old quote about the early computer age: seen everywhere, except in the productivity stats. Two years on from ChatGPT’s launch kicking off the fastest tech adoption we’ve ever seen, its actual mark on jobs and pay looks surprisingly light.

The revolution might still be coming, but it seems to be taking its time.   

See also: Claude Integrations: Anthropic adds AI to your favourite work tools

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

How does AI judge? Anthropic studies the values of Claude | https://www.artificialintelligence-news.com/news/how-does-ai-judge-anthropic-studies-values-of-claude/ | Wed, 23 Apr 2025 12:04:53 +0000

AI models like Anthropic Claude are increasingly asked not just for factual recall, but for guidance involving complex human values. Whether it’s parenting advice, workplace conflict resolution, or help drafting an apology, the AI’s response inherently reflects a set of underlying principles. But how can we truly understand which values an AI expresses when interacting with millions of users?

In a research paper, the Societal Impacts team at Anthropic details a privacy-preserving methodology designed to observe and categorise the values Claude exhibits “in the wild.” This offers a glimpse into how AI alignment efforts translate into real-world behaviour.

The core challenge lies in the nature of modern AI. These aren’t simple programs following rigid rules; their decision-making processes are often opaque.

Anthropic says it explicitly aims to instil certain principles in Claude, striving to make it “helpful, honest, and harmless.” This is achieved through techniques like Constitutional AI and character training, where preferred behaviours are defined and reinforced.

However, the company acknowledges the uncertainty. “As with any aspect of AI training, we can’t be certain that the model will stick to our preferred values,” the research states.

“What we need is a way of rigorously observing the values of an AI model as it responds to users ‘in the wild’ […] How rigidly does it stick to the values? How much are the values it expresses influenced by the particular context of the conversation? Did all our training actually work?”

Analysing Anthropic Claude to observe AI values at scale

To answer these questions, Anthropic developed a sophisticated system that analyses anonymised user conversations. This system removes personally identifiable information before using language models to summarise interactions and extract the values being expressed by Claude. The process allows researchers to build a high-level taxonomy of these values without compromising user privacy.

The study analysed a substantial dataset: 700,000 anonymised conversations from Claude.ai Free and Pro users over one week in February 2025, predominantly involving the Claude 3.5 Sonnet model. After filtering out purely factual or non-value-laden exchanges, 308,210 conversations (approximately 44% of the total) remained for in-depth value analysis.
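To make the shape of that pipeline concrete, here is a minimal, hypothetical sketch of how a privacy-preserving value-extraction pass might be structured. It is not Anthropic’s code: the `llm` callable, the prompt wording, and the function names are all assumptions for illustration.

```python
from collections import Counter

def redact_pii(text: str) -> str:
    """Placeholder for a PII-removal step (names, emails, and so on)."""
    # A real system would use a dedicated anonymisation model or service;
    # here we assume the text has already been anonymised.
    return text

def extract_values(conversation: str, llm) -> list[str]:
    """Ask a language model to name the values the assistant expressed.

    `llm` is any callable that takes a prompt and returns text; the prompt
    wording is an assumption, not the one used in the study.
    """
    prompt = (
        "List the values (e.g. 'transparency', 'healthy boundaries') "
        "expressed by the assistant in this conversation, one per line:\n\n"
        + conversation
    )
    return [line.strip().lower() for line in llm(prompt).splitlines() if line.strip()]

def build_value_taxonomy(conversations: list[str], llm) -> Counter:
    """Aggregate per-conversation value labels into frequency counts.

    Conversations that yield no value labels simply contribute nothing,
    mirroring the study's filtering of purely factual exchanges.
    """
    counts: Counter = Counter()
    for convo in conversations:
        for value in extract_values(redact_pii(convo), llm):
            counts[value] += 1
    return counts
```

Grouping the granular labels such counts produce into the five high-level categories reported below would be an additional clustering step on top of this.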

The analysis revealed a hierarchical structure of values expressed by Claude. Five high-level categories emerged, ordered by prevalence:

  1. Practical values: Emphasising efficiency, usefulness, and goal achievement.
  2. Epistemic values: Relating to knowledge, truth, accuracy, and intellectual honesty.
  3. Social values: Concerning interpersonal interactions, community, fairness, and collaboration.
  4. Protective values: Focusing on safety, security, well-being, and harm avoidance.
  5. Personal values: Centred on individual growth, autonomy, authenticity, and self-reflection.

These top-level categories branched into more specific subcategories like “professional and technical excellence” or “critical thinking.” At the most granular level, frequently observed values included “professionalism,” “clarity,” and “transparency” – fitting for an AI assistant.

Critically, the research suggests Anthropic’s alignment efforts are broadly successful. The expressed values often map well onto the “helpful, honest, and harmless” objectives. For instance, “user enablement” aligns with helpfulness, “epistemic humility” with honesty, and values like “patient wellbeing” (when relevant) with harmlessness.

Nuance, context, and cautionary signs

However, the picture isn’t uniformly positive. The analysis identified rare instances where Claude expressed values starkly opposed to its training, such as “dominance” and “amorality.”

Anthropic suggests a likely cause: “The most likely explanation is that the conversations that were included in these clusters were from jailbreaks, where users have used special techniques to bypass the usual guardrails that govern the model’s behavior.”

Far from being solely a concern, this finding highlights a potential benefit: the value-observation method could serve as an early warning system for detecting attempts to misuse the AI.

The study also confirmed that, much like humans, Claude adapts its value expression based on the situation.

When users sought advice on romantic relationships, values like “healthy boundaries” and “mutual respect” were disproportionately emphasised. When asked to analyse controversial history, “historical accuracy” came strongly to the fore. This demonstrates a level of contextual sophistication beyond what static, pre-deployment tests might reveal.

Furthermore, Claude’s interaction with user-expressed values proved multifaceted:

  • Mirroring/strong support (28.2%): Claude often reflects or strongly endorses the values presented by the user (e.g., mirroring “authenticity”). While potentially fostering empathy, the researchers caution it could sometimes verge on sycophancy.
  • Reframing (6.6%): In some cases, especially when providing psychological or interpersonal advice, Claude acknowledges the user’s values but introduces alternative perspectives.
  • Strong resistance (3.0%): Occasionally, Claude actively resists user values. This typically occurs when users request unethical content or express harmful viewpoints (like moral nihilism). Anthropic posits these moments of resistance might reveal Claude’s “deepest, most immovable values,” akin to a person taking a stand under pressure.

Limitations and future directions

Anthropic is candid about the method’s limitations. Defining and categorising “values” is inherently complex and potentially subjective. Using Claude itself to power the categorisation might introduce bias towards its own operational principles.

This method is designed for monitoring AI behaviour post-deployment, requiring substantial real-world data and cannot replace pre-deployment evaluations. However, this is also a strength, enabling the detection of issues – including sophisticated jailbreaks – that only manifest during live interactions.

The research concludes that understanding the values AI models express is fundamental to the goal of AI alignment.

“AI models will inevitably have to make value judgments,” the paper states. “If we want those judgments to be congruent with our own values […] then we need to have ways of testing which values a model expresses in the real world.”

This work provides a powerful, data-driven approach to achieving that understanding. Anthropic has also released an open dataset derived from the study, allowing other researchers to further explore AI values in practice. This transparency marks a vital step in collectively navigating the ethical landscape of sophisticated AI.

See also: Google introduces AI reasoning control in Gemini 2.5 Flash

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

BCG: Analysing the geopolitics of generative AI | https://www.artificialintelligence-news.com/news/bcg-analysing-the-geopolitics-of-generative-ai/ | Fri, 11 Apr 2025 16:11:17 +0000

Generative AI is reshaping global competition and geopolitics, presenting challenges and opportunities for nations and businesses alike.

Senior figures from Boston Consulting Group (BCG) and its tech division, BCG X, discussed the intricate dynamics of the global AI race, the dominance of superpowers like the US and China, the role of emerging “middle powers,” and the implications for multinational corporations.

AI investments expose businesses to increasingly tense geopolitics

Sylvain Duranton, Global Leader at BCG X, noted the significant geopolitical risk companies face: “For large companies, close to half of them, 44%, have teams around the world, not just in one country where their headquarters are.”

Sylvain Duranton, Global Leader at BCG X

Many of these businesses operate across numerous countries, making them vulnerable to differing regulations and sovereignty issues. “They’ve built their AI teams and ecosystem far before there was such tension around the world.”

Duranton also pointed to the stark imbalance in the AI supply race, particularly in investment.

Comparing the market capitalisation of tech companies, the US dwarfs Europe by a factor of 20 and the Asia Pacific region by five. Investment figures paint a similar picture, showing a “completely disproportionate” imbalance compared to the relative sizes of the economies.

This AI race is fuelled by massive investments in compute power, frontier models, and the emergence of lighter, open-weight models changing the competitive dynamic.   

Benchmarking national AI capabilities

Nikolaus Lang, Global Leader at the BCG Henderson Institute – BCG’s think tank – detailed the extensive research undertaken to benchmark national GenAI capabilities objectively.

The team analysed the “upstream of GenAI,” focusing on large language model (LLM) development and its six key enablers: capital, computing power, intellectual property, talent, data, and energy.

Using hard data like AI researcher numbers, patents, data centre capacity, and VC investment, they created a comparative analysis. Unsurprisingly, the analysis revealed the US and China as the clear AI frontrunners, each maintaining a commanding lead in the geopolitics of AI.

Nikolaus Lang, Global Leader at the BCG Henderson Institute

The US boasts the largest pool of AI specialists (around half a million), immense capital power ($303bn in VC funding, $212bn in tech R&D), and leading compute power (45 GW).

Lang highlighted America’s historical dominance, noting, “the US has been the largest producer of notable AI models with 67%” since 1950, a lead reflected in today’s LLM landscape. This strength is reinforced by “outsized capital power” and strategic restrictions on advanced AI chip access through frameworks like the US AI Diffusion Framework.   

China, the second AI superpower, shows particular strength in data—ranking highly in e-governance and mobile broadband subscriptions, alongside significant data centre capacity (20 GW) and capital power. 

Despite restricted access to the latest chips, Chinese LLMs are rapidly closing the gap with US models. Lang mentioned the emergence of models like DeepSeek as evidence of this trend, achieved with smaller teams, fewer GPU hours, and previous-generation chips.

China’s progress is also fuelled by heavy investment in AI academic institutions (hosting 45 of the world’s top 100), a leading position in AI patent applications, and significant government-backed VC funding. Lang predicts “governments will play an important role in funding AI work going forward.”

The middle powers: Europe, Middle East, and Asia

Beyond the superpowers, several “middle powers” are carving out niches.

  • EU: While trailing the US and China, the EU holds the third spot with significant data centre capacity (8 GW) and the world’s second-largest AI talent pool (275,000 specialists) when capabilities are combined. Europe also leads in top AI publications. Lang stressed the need for bundled capacities, suggesting AI, defence, and renewables are key areas for future EU momentum.
  • Middle East (UAE & Saudi Arabia): These nations leverage strong capital power via sovereign wealth funds and competitively low electricity prices to attract talent and build compute power, aiming to become AI drivers “from scratch”. They show positive dynamics in attracting AI specialists and are climbing the ranks in AI publications.   
  • Asia (Japan & South Korea): Leveraging strong existing tech ecosystems in hardware and gaming, these countries invest heavily in R&D (around $207bn combined by top tech firms). Government support, particularly in Japan, fosters both supply and demand. Local LLMs and strategic investments by companies like Samsung and SoftBank demonstrate significant activity.   
  • Singapore: Singapore is boosting its AI ecosystem by focusing on talent upskilling programmes, supporting Southeast Asia’s first LLM, ensuring data centre capacity, and fostering adoption through initiatives like establishing AI centres of excellence.   

The geopolitics of generative AI: Strategy and sovereignty

The geopolitics of generative AI is being shaped by four clear dynamics: the US retains its lead, driven by an unrivalled tech ecosystem; China is rapidly closing the gap; middle powers face a strategic choice between building supply or accelerating adoption; and government funding is set to play a pivotal role, particularly as R&D costs climb and commoditisation sets in.

As geopolitical tensions mount, businesses are likely to diversify their GenAI supply chains to spread risk. The race ahead will be defined by how nations and companies navigate the intersection of innovation, policy, and resilience.

(Photo by Markus Krisetya)

See also: OpenAI counter-sues Elon Musk for attempts to ‘take down’ AI rival

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

IEA: The opportunities and challenges of AI for global energy | https://www.artificialintelligence-news.com/news/iea-opportunities-and-challenges-ai-for-global-energy/ | Thu, 10 Apr 2025 11:04:11 +0000

The International Energy Agency (IEA) has explored the opportunities and challenges brought about by AI with regards to global energy.  

Training and deploying sophisticated AI models occur within vast, power-hungry data centres. A “typical AI-focused data centre consumes as much electricity as 100 000 households,” the IEA notes, with the largest facilities under construction projected to demand 20 times that amount.

Surging data centre investments

Global investment in data centres has nearly doubled since 2022, reaching half a trillion dollars in 2024, sparking concerns about escalating electricity needs.

While data centres accounted for approximately 1.5% of global electricity consumption in 2024 (around 415 terawatt-hours, or TWh), their local impact is far more significant. Consumption has grown annually by about 12% since 2017, vastly outpacing overall electricity demand growth.

The US leads this consumption (45%), followed by China (25%) and Europe (15%). Almost half of US data centre capacity is concentrated in just five regional clusters.

Looking ahead, the IEA projects global data centre electricity consumption to more than double by 2030 to reach approximately 945 TWh. To put that in context, that’s slightly more than Japan’s current total electricity consumption.

AI is pinpointed as the “most important driver of this growth”. The US is projected to see the largest increase, where data centres could account for nearly half of all electricity demand growth by 2030. By the decade’s end, US data centres are forecast to consume more electricity than the combined usage of its aluminium, steel, cement, chemical, and other energy-intensive manufacturing industries.

The IEA’s “Base Case” extends this trajectory, anticipating around 1,200 TWh of global data centre electricity consumption by 2035. However, significant uncertainties exist, with projections for 2035 ranging from 700 TWh (“Headwinds Case”) to 1,700 TWh (“Lift-Off Case”) depending on AI uptake, efficiency gains, and energy sector bottlenecks.
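As an illustrative calculation (not part of the IEA report), the growth rates implied by the consumption figures quoted above can be computed directly:

```python
def cagr(start: float, end: float, years: int) -> float:
    """Compound annual growth rate implied by two consumption levels."""
    return (end / start) ** (1 / years) - 1

base_2024 = 415  # TWh, IEA estimate for data centres in 2024

# ~945 TWh by 2030 implies growth of roughly 14-15% per year
print(f"2024-2030: {cagr(base_2024, 945, 6):.1%} per year")

# The 2035 scenarios span a wide range of implied growth rates
for name, twh_2035 in [("Headwinds", 700), ("Base Case", 1200), ("Lift-Off", 1700)]:
    print(f"2024-2035 {name}: {cagr(base_2024, twh_2035, 11):.1%} per year")
```

In other words, the Base Case implies roughly 10% annual growth from 2024, while the Lift-Off Case implies closer to 14%.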

Fatih Birol, Executive Director of the IEA, said: “AI is one of the biggest stories in the energy world today – but until now, policymakers and markets lacked the tools to fully understand the wide-ranging impacts.

“In the United States, data centres are on course to account for almost half of the growth in electricity demand; in Japan, more than half; and in Malaysia, as much as one-fifth.”

Meeting the global AI energy demand

Powering this AI boom requires a diverse energy portfolio. The IEA suggests renewables and natural gas will take the lead, but emerging technologies like small modular nuclear reactors (SMRs) and advanced geothermal also have a role.

Renewables, supported by storage and grid infrastructure, are projected to meet half the growth in data centre demand globally up to 2035. Natural gas is also crucial, particularly in the US, expanding by 175 TWh to meet data centre needs by 2035 in the Base Case. Nuclear power contributes similarly, especially in China, Japan, and the US, with the first SMRs expected around 2030.

However, simply increasing generation isn’t sufficient. The IEA stresses the critical need for infrastructure upgrades, particularly grid investment. Existing grids are already strained, potentially delaying around 20% of planned data centre projects globally due to complex connection queues and long lead times for essential components like transformers.

The potential of AI to optimise energy systems

Beyond its energy demands, AI offers significant potential to revolutionise the energy sector itself.

The IEA details numerous applications:

  • Energy supply: The oil and gas industry – an early adopter – uses AI to optimise exploration, production, maintenance, and safety, including reducing methane emissions. AI can also aid critical mineral exploration.
  • Electricity sector: AI can improve forecasting for variable renewables, reducing curtailment. It enhances grid balancing, fault detection (reducing outage durations by 30-50%), and can unlock significant transmission capacity through smarter management—potentially 175 GW without building new lines.
  • End uses: In industry, widespread AI adoption for process optimisation could yield energy savings equivalent to Mexico’s total energy consumption today. Transport applications like traffic management and route optimisation could save energy equivalent to 120 million cars, though rebound effects from autonomous vehicles need monitoring. Building optimisation potential is significant but hampered by slower digitalisation.
  • Innovation: AI can dramatically accelerate the discovery and testing of new energy technologies, such as advanced battery chemistries, catalysts for synthetic fuels, and carbon capture materials. However, the energy sector currently underutilises AI for innovation compared to fields like biomedicine.

Collaboration is key to navigating challenges

Despite the potential, significant barriers hinder AI’s full integration into the energy sector. These include data access and quality issues, inadequate digital infrastructure and skills (AI talent concentration is lower in energy sectors), regulatory hurdles, and security concerns.

Cybersecurity is a double-edged sword: while AI enhances defence capabilities, it also equips attackers with sophisticated tools. Cyberattacks on utilities have tripled in the last four years.

Supply chain security is another critical concern, particularly regarding critical minerals like gallium (used in advanced chips), where supply is highly concentrated.

The IEA concludes that deeper dialogue and collaboration between the technology sector, the energy industry, and policymakers are paramount. Addressing grid integration challenges requires smarter data centre siting, exploring operational flexibility, and streamlining permitting.

While AI presents opportunities for substantial emissions reductions through optimisation, exceeding the emissions generated by data centres, these gains are not guaranteed and could be offset by rebound effects.

“AI is a tool, potentially an incredibly powerful one, but it is up to us – our societies, governments, and companies – how we use it,” said Dr Birol.

“The IEA will continue to provide the data, analysis, and forums for dialogue to help policymakers and other stakeholders navigate the path ahead as the energy sector shapes the future of AI, and AI shapes the future of energy.”

(Photo by Javier Miranda)

See also: UK forms AI Energy Council to align growth and sustainability goals

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

DeepSeek’s AIs: What humans really want | https://www.artificialintelligence-news.com/news/deepseeks-ai-breakthrough-teaching-machines-to-learn-what-humans-really-want/ | Wed, 09 Apr 2025 07:44:08 +0000

Chinese AI startup DeepSeek has solved a problem that has frustrated AI researchers for several years. Its breakthrough in AI reward models could dramatically improve how AI systems reason and respond to questions.

In partnership with Tsinghua University researchers, DeepSeek has created a technique detailed in a research paper, titled “Inference-Time Scaling for Generalist Reward Modeling.” It outlines how a new approach outperforms existing methods and how the team “achieved competitive performance” compared to strong public reward models.

The innovation focuses on enhancing how AI systems learn from human preferences – an important aspect of creating more useful and aligned artificial intelligence.

What are AI reward models, and why do they matter?

AI reward models are important components in reinforcement learning for large language models. They provide feedback signals that help guide an AI’s behaviour toward preferred outcomes. In simpler terms, reward models are like digital teachers that help AI understand what humans want from their responses.

“Reward modeling is a process that guides an LLM towards human preferences,” the DeepSeek paper states. Reward modeling becomes important as AI systems get more sophisticated and are deployed in scenarios beyond simple question-answering tasks.

The innovation from DeepSeek addresses the challenge of obtaining accurate reward signals for LLMs in different domains. While current reward models work well for verifiable questions or artificial rules, they struggle in general domains where criteria are more diverse and complex.

The dual approach: How DeepSeek’s method works

DeepSeek’s approach combines two methods:

  1. Generative reward modeling (GRM): This approach enables flexibility in different input types and allows for scaling during inference time. Unlike previous scalar or semi-scalar approaches, GRM provides a richer representation of rewards through language.
  2. Self-principled critique tuning (SPCT): A learning method that fosters scalable reward-generation behaviours in GRMs through online reinforcement learning, one that generates principles adaptively.

One of the paper’s authors from Tsinghua University and DeepSeek-AI, Zijun Liu, explained that the combination of methods allows “principles to be generated based on the input query and responses, adaptively aligning reward generation process.”

The approach is particularly valuable for its potential for “inference-time scaling” – improving performance by increasing computational resources during inference rather than just during training.

The researchers found that their methods could achieve better results with increased sampling, letting models generate better rewards with more computing.
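A minimal sketch can make the inference-time scaling idea concrete: sample several principle-guided critiques of the same response and aggregate their scores, so that spending more compute at inference yields a steadier reward signal. This is a simplified illustration of the general approach rather than DeepSeek’s implementation; the `llm` callable, the prompt wording, and the scoring format are assumptions.

```python
import re
from statistics import mean

def generate_critique(llm, query: str, response: str) -> str:
    """One sampled critique: the model first writes its own judging principles,
    then scores the response against them (prompt wording is illustrative)."""
    prompt = (
        "Write principles for judging an answer to the question below, "
        "then critique the answer and end with 'Score: <1-10>'.\n\n"
        f"Question: {query}\nAnswer: {response}"
    )
    return llm(prompt, temperature=1.0)

def parse_score(critique: str) -> float | None:
    match = re.search(r"Score:\s*(\d+(?:\.\d+)?)", critique)
    return float(match.group(1)) if match else None

def inference_time_scaled_reward(llm, query: str, response: str, k: int = 8) -> float:
    """Average the scores of k sampled critiques.

    Increasing k is the inference-time scaling knob: more samples, more
    compute, and (per the researchers' claim) a better reward estimate.
    """
    scores = [parse_score(generate_critique(llm, query, response)) for _ in range(k)]
    scores = [s for s in scores if s is not None]
    return mean(scores) if scores else 0.0
```

The knob here is `k`: the article’s point is that increasing sampling at inference can beat simply scaling up the model itself.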

Implications for the AI Industry

DeepSeek’s innovation comes at an important time in AI development. The paper states “reinforcement learning (RL) has been widely adopted in post-training for large language models […] at scale,” leading to “remarkable improvements in human value alignment, long-term reasoning, and environment adaptation for LLMs.”

The new approach to reward modelling could have several implications:

  1. More accurate AI feedback: By creating better reward models, AI systems can receive more precise feedback about their outputs, leading to improved responses over time.
  2. Increased adaptability: The ability to scale model performance during inference means AI systems can adapt to different computational constraints and requirements.
  3. Broader application: Systems can perform better in a broader range of tasks by improving reward modelling for general domains.
  4. More efficient resource use: The research shows that inference-time scaling with DeepSeek’s method could outperform model size scaling in training time, potentially allowing smaller models to perform comparably to larger ones with appropriate inference-time resources.

DeepSeek’s growing influence

The latest development adds to DeepSeek’s rising profile in global AI. Founded in 2023 by entrepreneur Liang Wenfeng, the Hangzhou-based company has made waves with its V3 foundation and R1 reasoning models.

The company upgraded its V3 model (DeepSeek-V3-0324) recently, which the company said offered “enhanced reasoning capabilities, optimised front-end web development and upgraded Chinese writing proficiency.” DeepSeek has committed to open-source AI, releasing five code repositories in February that allow developers to review and contribute to development.

While speculation continues about the potential release of DeepSeek-R2 (the successor to R1) – Reuters has speculated on possible release dates – DeepSeek has not commented in its official channels.

What’s next for AI reward models?

According to the researchers, DeepSeek intends to make the GRM models open-source, although no specific timeline has been provided. Open-sourcing will accelerate progress in the field by allowing broader experimentation with reward models.

As reinforcement learning continues to play an important role in AI development, advances in reward modelling like those in DeepSeek and Tsinghua University’s work will likely have an impact on the abilities and behaviour of AI systems.

Work on AI reward models demonstrates that innovations in how and when models learn can be as important as increasing their size. By focusing on feedback quality and scalability, DeepSeek addresses one of the fundamental challenges in creating AI that better understands and aligns with human preferences.

See also: DeepSeek disruption: Chinese AI innovation narrows global technology divide

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

Ant Group uses domestic chips to train AI models and cut costs | https://www.artificialintelligence-news.com/news/ant-group-uses-domestic-chips-to-train-ai-models-and-cut-costs/ | Thu, 03 Apr 2025 09:59:09 +0000

Ant Group is relying on Chinese-made semiconductors to train artificial intelligence models to reduce costs and lessen dependence on restricted US technology, according to people familiar with the matter.

The Alibaba-owned company has used chips from domestic suppliers, including those tied to its parent, Alibaba, and Huawei Technologies, to train large language models using the Mixture of Experts (MoE) method. The results were reportedly comparable to those produced with Nvidia’s H800 chips, sources claim. While Ant continues to use Nvidia chips for some of its AI development, one source said the company is turning increasingly to alternatives from AMD and Chinese chip-makers for its latest models.

The development signals Ant’s deeper involvement in the growing AI race between Chinese and US tech firms, particularly as companies look for cost-effective ways to train models. The experimentation with domestic hardware reflects a broader effort among Chinese firms to work around export restrictions that block access to high-end chips like Nvidia’s H800, which, although not the most advanced, is still one of the more powerful GPUs available to Chinese organisations.

Ant has published a research paper describing its work, stating that its models, in some tests, performed better than those developed by Meta. Bloomberg News, which initially reported the matter, has not verified the company’s results independently. If the models perform as claimed, Ant’s efforts may represent a step forward in China’s attempt to lower the cost of running AI applications and reduce the reliance on foreign hardware.

MoE models are built from a collection of smaller specialised sub-networks, or ‘experts’, with a gating mechanism routing each input to only the most relevant ones, and the approach has gained attention among AI researchers and data scientists. The technique has been used by Google and the Hangzhou-based startup, DeepSeek. The MoE concept is similar to having a team of specialists, each handling part of a task, which makes producing models more efficient. Ant has declined to comment on its work with respect to its hardware sources.
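For readers unfamiliar with the architecture, the sketch below shows the general shape of an MoE layer in plain NumPy: a small gating network routes each token to its top experts, so only a fraction of the model’s parameters do any work for a given input. It is a generic, illustrative toy, not Ant’s Ling models or their training setup.

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, n_experts, top_k = 8, 4, 2

# Each expert is a small feed-forward weight matrix; the gate scores experts per token.
experts = [rng.standard_normal((d_model, d_model)) * 0.1 for _ in range(n_experts)]
gate_w = rng.standard_normal((d_model, n_experts)) * 0.1

def moe_layer(tokens: np.ndarray) -> np.ndarray:
    """Route each token to its top-k experts and mix their outputs by gate weight."""
    logits = tokens @ gate_w                      # (n_tokens, n_experts)
    out = np.zeros_like(tokens)
    for i, (tok, scores) in enumerate(zip(tokens, logits)):
        top = np.argsort(scores)[-top_k:]         # indices of the chosen experts
        weights = np.exp(scores[top]) / np.exp(scores[top]).sum()  # softmax over chosen
        for w, e in zip(weights, top):
            out[i] += w * (tok @ experts[e])      # only the routed experts do any work
    return out

print(moe_layer(rng.standard_normal((3, d_model))).shape)  # (3, 8)
```

Because only the routed experts run for each token, a model can carry a very large total parameter count while keeping per-token compute modest – the property that makes the approach attractive on less powerful chips.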

Training MoE models depends on high-performance GPUs which can be too expensive for smaller companies to acquire or use. Ant’s research focused on reducing that cost barrier. The paper’s title is suffixed with a clear objective: Scaling Models “without premium GPUs.” [our quotation marks]

The direction taken by Ant and the use of MoE to reduce training costs contrast with Nvidia’s approach. CEO Jensen Huang has said that demand for computing power will continue to grow, even with the introduction of more efficient models like DeepSeek’s R1. His view is that companies will seek more powerful chips to drive revenue growth, rather than aiming to cut costs with cheaper alternatives. Nvidia’s strategy remains focused on building GPUs with more cores, transistors, and memory.

According to the Ant Group paper, training one trillion tokens – the basic units of data AI models use to learn – cost about 6.35 million yuan (roughly $880,000) using conventional high-performance hardware. The company’s optimised training method reduced that cost to around 5.1 million yuan by using lower-specification chips.

Ant said it plans to apply its models produced in this way – Ling-Plus and Ling-Lite – to industrial AI use cases like healthcare and finance. Earlier this year, the company acquired Haodf.com, a Chinese online medical platform, to further Ant’s ambition to deploy AI-based solutions in healthcare. It also operates other AI services, including a virtual assistant app called Zhixiaobao and a financial advisory platform known as Maxiaocai.

“If you find one point of attack to beat the world’s best kung fu master, you can still say you beat them, which is why real-world application is important,” said Robin Yu, chief technology officer of Beijing-based AI firm, Shengshang Tech.

Ant has made its models open source. Ling-Lite has 16.8 billion parameters – settings that help determine how a model functions – while Ling-Plus has 290 billion. For comparison, estimates suggest closed-source GPT-4.5 has around 1.8 trillion parameters, according to MIT Technology Review.

Despite progress, Ant’s paper noted that training models remains challenging. Small adjustments to hardware or model structure during model training sometimes resulted in unstable performance, including spikes in error rates.

(Photo by Unsplash)

See also: DeepSeek V3-0324 tops non-reasoning AI models in open-source first

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

AI streamlines budgeting, but human oversight essential | https://www.artificialintelligence-news.com/news/ai-financial-planning-streamlines-budgeting-but-human-oversight-essential/ | Wed, 02 Apr 2025 12:23:09 +0000

Research conducted by Vlerick Business School has discovered that in the area of AI financial planning, the technology consistently outperforms humans when allocating budgets with strategic guidelines in place. Businesses that use AI for budgeting processes experience substantial improvements in the accuracy and efficiency of budgeting plans compared to human decision-making.

The study’s goal was to interpret AI’s role in corporate budgeting, examining how well such technology performs when making financial decisions. Ultimately, it’s an investigation into whether AI’s financial decisions align with a company’s long-term strategies and how its decisions compare to human management.

The researchers, Kristof Stouthuysen, Professor of Management Accounting and Digital Finance at Vlerick Business School, and PhD researcher, Emma Willems, studied tactical and strategic budgeting approaches.

Tactical budgeting is about quick, responsive decisions, referring to short-term, data-driven financial decisions. These are aimed at improving immediate performance, like making adjustments to spending based on market trends.

Strategic budgeting typically involves a more comprehensive approach that focuses on future planning, aligning various resources with a business’s vision.

According to the research, AI is superior when performing tactical budgeting processes like cost management and resource allocation. However, the need for human insight remains important to ensure accurate and strategic financial planning over the long term.

The controlled experiment was achieved by running a management simulation where experienced managers were asked to allocate budgets for a hypothetical automotive parts company. Stouthuysen and Willems then compared these human-made decisions to those produced by an AI algorithm using the same financial data.

The results concluded that AI was superior in optimising budgets when a company’s strategic financial planning was clearly defined. However, AI struggled to make budgeting decisions when key performance indicators (KPIs) did not align with the company’s financial goals.

Stouthuysen and Willems’ work on the study emphasised the importance of collaboration between humans and AI: “As AI continues to evolve, companies that use its strengths in tactical budgeting while maintaining human oversight in strategic planning will gain a competitive edge. The key is knowing where AI should lead and where human intuition remains indispensable.”

According to the study, AI can theoretically take over from humans when it comes to tactical budgeting, providing more precise and efficient outcomes. Stouthuysen and Willems believe companies need to define their strategic priorities clearly and implement AI for tactical budget-making decisions to maximise financial performances and achieve sustainable growth.

The findings challenge the widespread misconception that AI can completely substitute the need for humans in budgeting. Instead, this research emphasises the importance of taking a balanced approach, utilising both AI and humans, assigning tasks to silicon or human processes according to their proven abilities.

(Image source: “Payday” by 401(K) 2013 is licensed under CC BY-SA 2.0.)

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

Study claims OpenAI trains AI models on copyrighted data | https://www.artificialintelligence-news.com/news/study-claims-openai-trains-ai-models-copyrighted-data/ | Wed, 02 Apr 2025 09:04:28 +0000

A new study from the AI Disclosures Project has raised questions about the data OpenAI uses to train its large language models (LLMs). The research indicates the GPT-4o model from OpenAI demonstrates a “strong recognition” of paywalled and copyrighted data from O’Reilly Media books.

The AI Disclosures Project, led by technologist Tim O’Reilly and economist Ilan Strauss, aims to address the potentially harmful societal impacts of AI’s commercialisation by advocating for improved corporate and technological transparency. The project’s working paper highlights the lack of disclosure in AI, drawing parallels with financial disclosure standards and their role in fostering robust securities markets.

The study used a legally-obtained dataset of 34 copyrighted O’Reilly Media books to investigate whether LLMs from OpenAI were trained on copyrighted data without consent. The researchers applied the DE-COP membership inference attack method to determine if the models could differentiate between human-authored O’Reilly texts and paraphrased LLM versions.
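The broad shape of such a test can be sketched as follows: show the model a verbatim passage alongside paraphrases, check how reliably it singles out the original, and compute an AUROC over known ‘member’ and ‘non-member’ passages. This is a simplified illustration of the idea rather than the study’s exact DE-COP protocol; the `llm` and `paraphrase` callables, the prompt, and the answer format are assumptions.

```python
import random
from sklearn.metrics import roc_auc_score

def recognition_score(llm, passage: str, paraphrases: list[str], trials: int = 10) -> float:
    """Fraction of trials in which the model picks the verbatim passage out of
    a shuffled multiple-choice line-up that also contains paraphrases of it."""
    hits = 0
    for _ in range(trials):
        options = paraphrases + [passage]
        random.shuffle(options)
        numbered = "\n".join(f"{i}. {opt}" for i, opt in enumerate(options))
        prompt = (
            "Which of these passages appears verbatim in a published book? "
            "Answer with the number only.\n" + numbered
        )
        answer = llm(prompt)  # assumed to return an option number as text
        if int(answer.strip()) == options.index(passage):
            hits += 1
    return hits / trials

def membership_auroc(llm, member_passages, non_member_passages, paraphrase) -> float:
    """AUROC of recognition scores across passages: ~50% is chance level, while
    higher values suggest the 'member' passages were seen during training
    (the study reports 82% for GPT-4o on paywalled O'Reilly content)."""
    labels, scores = [], []
    for passage in member_passages + non_member_passages:
        labels.append(1 if passage in member_passages else 0)
        scores.append(recognition_score(llm, passage, paraphrase(passage)))
    return roc_auc_score(labels, scores)
```

An AUROC near 50% – as reported for GPT-4o Mini – is what a test of this kind returns when the model shows no recognition at all.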

Key findings from the report include:

  • GPT-4o shows “strong recognition” of paywalled O’Reilly book content, with an AUROC score of 82%. In contrast, OpenAI’s earlier model, GPT-3.5 Turbo, does not show the same level of recognition (AUROC score just above 50%)
  • GPT-4o exhibits stronger recognition of non-public O’Reilly book content compared to publicly accessible samples (82% vs 64% AUROC scores respectively)
  • GPT-3.5 Turbo shows greater relative recognition of publicly accessible O’Reilly book samples than non-public ones (64% vs 54% AUROC scores)
  • GPT-4o Mini, a smaller model, showed no knowledge of public or non-public O’Reilly Media content when tested (AUROC approximately 50%)

The researchers suggest that access violations may have occurred via the LibGen database, as all of the O’Reilly books tested were found there. They also acknowledge that newer LLMs have an improved ability to distinguish between human-authored and machine-generated language, which does not reduce the method’s ability to classify data.

The study highlights the potential for “temporal bias” in the results, due to language changes over time. To account for this, the researchers tested two models (GPT-4o and GPT-4o Mini) trained on data from the same period.

The report notes that while the evidence is specific to OpenAI and O’Reilly Media books, it likely reflects a systemic issue around the use of copyrighted data. It argues that uncompensated training data usage could lead to a decline in the internet’s content quality and diversity, as revenue streams for professional content creation diminish.

The AI Disclosures Project emphasises the need for stronger accountability in AI companies’ model pre-training processes. They suggest that liability provisions that incentivise improved corporate transparency in disclosing data provenance may be an important step towards facilitating commercial markets for training data licensing and remuneration.

The EU AI Act’s disclosure requirements could help trigger a positive disclosure-standards cycle if properly specified and enforced. Ensuring that IP holders know when their work has been used in model training is seen as a crucial step towards establishing AI markets for content creator data.

Despite evidence that AI companies may be obtaining data illegally for model training, a market is emerging in which AI model developers pay for content through licensing deals. Companies like Defined.ai facilitate the purchasing of training data, obtaining consent from data providers and stripping out personally identifiable information.

The report concludes by stating that using 34 proprietary O’Reilly Media books, the study provides empirical evidence that OpenAI likely trained GPT-4o on non-public, copyrighted data.

(Image by Sergei Tokmakov)

See also: Anthropic provides insights into the ‘AI biology’ of Claude

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

French initiative for responsible AI leaders | https://www.artificialintelligence-news.com/news/french-initiative-for-responsible-ai-leaders/ | Tue, 04 Feb 2025 13:17:12 +0000

ESSEC Business School and Accenture have announced the launch of a new initiative, ‘AI for Responsible Leadership,’ which marks the 10th anniversary of the establishment of the role of Chair at ESSEC, titled the ESSEC Accenture Strategic Business Analytics Chair.

The initiative aims to encourage the use of artificial intelligence by leaders in ways that are responsible and ethical, and that lead to high levels of professional performance. It aims to provide current and future leaders with the skills they will need to face future challenges, whether economic, environmental, or social.

Several organisations support the initiative – institutions, businesses, and specialised groups – among them the ESSEC Metalab for Data, Technology & Society, and Accenture Research.

Executive Director of the ESSEC Metalab, Abdelmounaim Derraz, spoke of the collaboration, saying, “Technical subjects are continuing to shake up business schools, and AI has opened up opportunities for collaboration between partner companies, researchers, and other members of the ecosystem (students, think tanks, associations, [and] public service).”

ESSEC and Accenture aim to integrate perspectives from multiple fields of expertise, an approach shaped by a decade of experimentation since the Chair was established.

The elements of the initiative include workshops and talks designed to promote the exchange of knowledge and methods. It will also include a ‘barometer’ to help track AI’s implementation and overall impact on responsible leadership.

The initiative will engage with a network of institutions and academic publications, and an annual Grand Prix will recognise projects that focus on and explore the subject of AI and leadership.

Fabrice Marque, founder of the initiative and the current ESSEC Accenture Strategic Business Analytics Chair, said, “For years, we have explored the potential of using data and artificial intelligence in organisations. The synergies we have developed with our partners (Accenture, Accor, Dataiku, Engie, Eurofins, MSD, Orange) allowed us to evaluate and test innovative solutions before deploying them.

“With this initiative, we’re taking a major step: bringing together an engaged ecosystem to sustainably transform how leaders think, decide, and act in the face of tomorrow’s challenges. Our ambition is clear: to make AI a lever for performance, innovation and responsibility for […] leaders.”

Managing Director at Accenture and sponsor of the ESSEC/Accenture Chair and initiative, Aurélien Bouriot, said, “The ecosystem will benefit from the resources that Accenture puts at its disposal, and will also benefit our employees who participate.”

Laetitia Cailleteau, Managing Director at Accenture and leader of Responsible AI & Generative AI for Europe, highlighted the importance of future leaders understanding all aspects of AI.

“AI is a pillar of the ongoing industrial transformation. Tomorrow’s leaders must understand the technical, ethical, and human aspects and risks – and know how to manage them. In this way, they will be able to maximise value creation and generate a positive impact for the organisation, its stakeholders and society as a whole.”

Image credit: Wikimedia Commons

See also: Microsoft and OpenAI probe alleged data theft by DeepSeek

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post French initiative for responsible AI leaders appeared first on AI News.

How AI helped refine Hungarian accents in The Brutalist https://www.artificialintelligence-news.com/news/how-ai-helped-refine-hungarian-accents-in-the-brutalist/ Fri, 24 Jan 2025 13:38:07 +0000

When it comes to movies buzzing with Oscar potential, Brady Corbet’s The Brutalist is a standout this awards season.

The visually stunning drama transports viewers to the post-World War II era, unravelling the story of László Tóth, played by Adrien Brody. Tóth, a fictional Hungarian-Jewish architect, starts over in the United States after being forced to leave his family behind as he emigrates.

Beyond its vintage allure, something modern brews in the background: the use of AI. Specifically, AI was employed to refine Brody’s and co-star Felicity Jones’ Hungarian pronunciation. The decision has sparked lively debates about technology’s role in film-making.

The role of AI in The Brutalist

According to Dávid Jancsó, the film’s editor, the production team turned to Respeecher, AI voice software developed by a Ukrainian company, to tweak the actors’ Hungarian dialogue. Speaking to RedShark News (as cited by Mashable SEA), Jancsó explained that Hungarian – a Uralic language known for its challenging sounds – was a significant hurdle for the actors, despite their talent and dedication.

Respeecher’s software isn’t magic, but just a few years ago, it would have seemed wondrous. It creates a voice model based on a speaker’s characteristics and adjusts specific elements, like pronunciation. In this case, it was used to fine-tune the letter and vowel sounds that Brody and Jones found tricky. Most of the corrections were minimal, with Jancsó himself providing some replacement sounds to preserve the authenticity of the performances. “Most of their Hungarian dialogue has a part of me talking in there,” he joked, emphasising the care taken to maintain the actors’ original delivery.
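To make the general idea concrete, here is a minimal, self-contained sketch of segment-level pronunciation correction: a corrected sound is stretched to the length of the original phoneme, matched to its loudness, and crossfaded into the actor’s take so the surrounding performance is untouched. This is purely illustrative and assumes nothing about Respeecher’s actual models or API; every function, parameter, and value below is hypothetical, and a real system would work with learned voice models rather than raw waveform splicing.

```python
# Illustrative sketch only: this is NOT Respeecher's algorithm or API.
# It shows, in toy form, the general idea described above: swapping a
# mispronounced sound for a corrected one while keeping the original
# performance's timing and loudness. All names here are hypothetical.
import numpy as np

SAMPLE_RATE = 16_000  # assumed sample rate in Hz


def rms(signal: np.ndarray) -> float:
    """Root-mean-square loudness of a mono signal."""
    return float(np.sqrt(np.mean(signal ** 2) + 1e-12))


def fit_duration(segment: np.ndarray, target_len: int) -> np.ndarray:
    """Naively stretch or squeeze a segment to the target length."""
    src = np.linspace(0.0, 1.0, num=len(segment))
    dst = np.linspace(0.0, 1.0, num=target_len)
    return np.interp(dst, src, segment)


def splice_correction(actor_take: np.ndarray,
                      corrected_sound: np.ndarray,
                      start: int, end: int,
                      fade_ms: float = 10.0) -> np.ndarray:
    """Replace actor_take[start:end] with corrected_sound, matched in
    duration and loudness, using short crossfades at the boundaries."""
    target = actor_take[start:end]
    patch = fit_duration(corrected_sound, len(target))
    patch *= rms(target) / rms(patch)          # match the original loudness

    fade = min(int(SAMPLE_RATE * fade_ms / 1000), len(patch) // 2)
    ramp = np.linspace(0.0, 1.0, num=fade)
    patch[:fade] = (1 - ramp) * target[:fade] + ramp * patch[:fade]
    patch[-fade:] = ramp[::-1] * patch[-fade:] + (1 - ramp[::-1]) * target[-fade:]

    out = actor_take.copy()
    out[start:end] = patch
    return out


# Toy usage with synthetic audio (one second of noise standing in for speech).
take = np.random.default_rng(0).normal(0, 0.1, SAMPLE_RATE)
reference_vowel = np.sin(2 * np.pi * 220 * np.linspace(0, 0.2, 3_200))
fixed = splice_correction(take, reference_vowel, start=4_000, end=7_200)
print(fixed.shape)
```

Even this toy version shows why the editor’s own replacement sounds could be folded in without erasing a performance: only the targeted segment changes, and the surrounding audio is left exactly as recorded.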

Respeecher: AI behind the scenes

This is not Respeecher’s first foray into Hollywood. The software is known for restoring iconic voices like that of Darth Vader for the Obi-Wan Kenobi series, and has recreated Edith Piaf’s voice for an upcoming biopic. Outside of film, Respeecher has helped to preserve endangered languages like Crimean Tatar.

For The Brutalist, the AI tool wasn’t just a luxury – it was a time and budget saver. With so much dialogue in Hungarian, editing every line by hand would have been painstaking work. Jancsó said that using AI sped up the process significantly, an important factor given the film’s modest $10 million budget.

Beyond voice: AI’s other roles in the film

AI was also used in other aspects of production, for example to generate some of Tóth’s architectural drawings and to complete buildings in the film’s Venice Biennale sequence. However, director Corbet has clarified that these images were not fully AI-generated; instead, AI was used for specific background elements.

Corbet and Jancsó have been candid about their perspectives on AI in film-making. Jancsó sees it as a valuable tool, saying, “There’s nothing in the film using AI that hasn’t been done before. It just makes the process a lot faster.” Corbet added that the software’s purpose was to enhance authenticity, not replace the actors’ hard work.

A broader conversation

The debate surrounding AI in the film industry isn’t new. From script-writing to music production, concerns about generative AI’s impact were central to the 2023 Writers Guild of America (WGA) and SAG-AFTRA strikes. Although agreements have been reached to regulate the use of AI, the topic remains a hot-button issue.

The Brutalist awaits a possible Oscar nomination. From its storyline to its cinematic style, the film wears its ambition on its sleeve. It’s not just a celebration of the postwar Brutalist architectural movement; it’s also a nod to classic American cinema. Shot in the rarely used VistaVision format, the film captures the grandeur of mid-20th-century film-making. Adding to its nostalgic charm, it includes a 15-minute intermission during its epic three-and-a-half-hour runtime.

Yet the use of AI has given a new dimension to the ongoing conversation about AI in the creative industry. Whether people see AI as a betrayal of craftsmanship or an exciting innovative tool that can add to a final creation, one thing is certain: AI continues to transform how stories are delivered on screen.

See also: AI music sparks new copyright battle in US courts

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post How AI helped refine Hungarian accents in The Brutalist appeared first on AI News.

OpenAI funds $1 million study on AI and morality at Duke University https://www.artificialintelligence-news.com/news/openai-funds-1-million-study-on-ai-and-morality-at-duke-university/ Mon, 23 Dec 2024 14:09:26 +0000

OpenAI is awarding a $1 million grant to a Duke University research team to look at how AI could predict human moral judgments.

The initiative highlights the growing focus on the intersection of technology and ethics, and raises critical questions: Can AI handle the complexities of morality, or should ethical decisions remain the domain of humans?

Duke University’s Moral Attitudes and Decisions Lab (MADLAB), led by ethics professor Walter Sinnott-Armstrong and co-investigator Jana Schaich Borg, is in charge of the “Making Moral AI” project. The team envisions a “moral GPS,” a tool that could guide ethical decision-making.

Its research spans diverse fields, including computer science, philosophy, psychology, and neuroscience, to understand how moral attitudes and decisions are formed and how AI can contribute to the process.

The role of AI in morality

MADLAB’s work examines how AI might predict or influence moral judgments. Imagine an algorithm assessing ethical dilemmas, such as deciding between two unfavourable outcomes in autonomous vehicles or providing guidance on ethical business practices. Such scenarios underscore AI’s potential but also raise fundamental questions: Who determines the moral framework guiding these types of tools, and should AI be trusted to make decisions with ethical implications?
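To make the scenario concrete, here is a deliberately simple sketch of what an algorithm that scores ethical options might look like. It is purely illustrative: the features, weights, and deferral threshold are invented for this example and do not reflect MADLAB’s or OpenAI’s actual methods; a real system would presumably learn from large datasets of human moral judgments rather than hand-set weights.

```python
# Purely illustrative: a toy "moral GPS" that ranks options by weighted moral
# considerations and defers to a human when the scores are too close to call.
# The features, weights, and threshold are invented for this sketch and do
# not reflect MADLAB's or OpenAI's actual approach.
from dataclasses import dataclass


@dataclass
class Option:
    name: str
    # Hypothetical moral features, each scored 0.0 (none) to 1.0 (severe).
    expected_harm: float
    rights_violation: float
    unfairness: float


# Invented weights standing in for what a trained model might learn from
# human moral-judgment data.
WEIGHTS = {"expected_harm": 0.5, "rights_violation": 0.3, "unfairness": 0.2}
DEFER_MARGIN = 0.05  # if options score this close, escalate to a human


def moral_cost(option: Option) -> float:
    """Weighted sum of the option's moral features (lower is better)."""
    return (WEIGHTS["expected_harm"] * option.expected_harm
            + WEIGHTS["rights_violation"] * option.rights_violation
            + WEIGHTS["unfairness"] * option.unfairness)


def recommend(options: list[Option]) -> str:
    """Pick the lowest-cost option, or defer when the call is too close."""
    ranked = sorted(options, key=moral_cost)
    best, runner_up = ranked[0], ranked[1]
    if moral_cost(runner_up) - moral_cost(best) < DEFER_MARGIN:
        return "defer to human review"
    return best.name


# A stylised autonomous-vehicle dilemma with two unfavourable outcomes.
dilemma = [
    Option("swerve", expected_harm=0.6, rights_violation=0.2, unfairness=0.3),
    Option("stay course", expected_harm=0.7, rights_violation=0.1, unfairness=0.2),
]
print(recommend(dilemma))
```

Even in this toy form, the example surfaces the questions the project raises: whoever sets the weights effectively sets the moral framework, and the deferral rule is one simple way of keeping humans in the loop for close calls.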

OpenAI’s vision

The grant supports the development of algorithms that forecast human moral judgments in areas such as medicine, law, and business, which frequently involve complex ethical trade-offs. While promising, AI still struggles to grasp the emotional and cultural nuances of morality. Current systems excel at recognising patterns but lack the deeper understanding required for ethical reasoning.

Another concern is how this technology might be applied. While AI could assist in life-saving decisions, its use in defence strategies or surveillance introduces moral dilemmas. Can unethical AI actions be justified if they serve national interests or align with societal goals? These questions emphasise the difficulties of embedding morality into AI systems.

Challenges and opportunities

Integrating ethics into AI is a formidable challenge that requires collaboration across disciplines. Morality is not universal; it is shaped by cultural, personal, and societal values, making it difficult to encode into algorithms. Additionally, without safeguards such as transparency and accountability, there is a risk of perpetuating biases or enabling harmful applications.

OpenAI’s investment in Duke’s research marks a step toward understanding the role of AI in ethical decision-making. However, the journey is far from over. Developers and policymakers must work together to ensure that AI tools align with social values and emphasise fairness and inclusivity while addressing biases and unintended consequences.

As AI becomes more integral to decision-making, its ethical implications demand attention. Projects like “Making Moral AI” offer a starting point for navigating a complex landscape, balancing innovation with responsibility in order to shape a future where technology serves the greater good.

(Photo by Unsplash)

See also: AI governance: Analysing emerging global regulations

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post OpenAI funds $1 million study on AI and morality at Duke University appeared first on AI News.

CrowdStrike: Cybersecurity pros want safer, specialist GenAI tools https://www.artificialintelligence-news.com/news/crowdstrike-cybersecurity-pros-safer-specialist-genai-tools/ Tue, 17 Dec 2024 13:00:13 +0000

CrowdStrike commissioned a survey of 1,022 cybersecurity professionals worldwide to assess their views on generative AI (GenAI) adoption and its implications.

The findings reveal enthusiasm for GenAI’s potential to bolster defences against increasingly sophisticated threats, but also trepidation over risks such as data exposure and attacks on GenAI systems.

While much has been speculated about the transformative impact of GenAI, the survey’s results paint a clearer picture of how practitioners are thinking about its role in cybersecurity.

According to the report, “We’re entering the era of GenAI in cybersecurity.” However, as organisations adopt this promising technology, their success will hinge on ensuring the safe, responsible, and industry-specific deployment of GenAI tools.

CrowdStrike’s research reveals five pivotal findings that shape the current state of GenAI in cybersecurity:

  1. Platform-based GenAI is favoured 

80% of respondents indicated a preference for GenAI delivered through integrated cybersecurity platforms rather than standalone tools. Seamless integration is cited as a crucial factor, with many preferring tools that work cohesively with existing systems. “GenAI’s value is linked to how well it works within the broader technology ecosystem,” the report states. 

Moreover, almost two-thirds (63%) of those surveyed expressed willingness to switch security vendors to access GenAI capabilities from competitors. The survey underscores the industry’s readiness for unified platforms that streamline operations and reduce the complexity of adopting new point solutions.

  2. GenAI built by cybersecurity experts is a must

Security teams believe GenAI tools should be specifically designed for cybersecurity, not general-purpose systems. 83% of respondents reported they would not trust tools that provide “unsuitable or ill-advised security guidance.”

Breach prevention remains a key motivator, with 74% stating they had faced breaches within the past 18 months or were concerned about vulnerabilities. Respondents prioritised tools from vendors with proven expertise in cybersecurity, incident response, and threat intelligence over suppliers with broad AI leadership alone. 

As CrowdStrike summarised, “The emphasis on breach prevention and vendor expertise suggests security teams would avoid domain-agnostic GenAI tools.”

  3. Augmentation, not replacement

Despite growing fears of automation replacing jobs in many industries, the survey’s findings indicate minimal concerns about job displacement in cybersecurity. Instead, respondents expect GenAI to empower security analysts by automating repetitive tasks, reducing burnout, onboarding new personnel faster, and accelerating decision-making.

GenAI’s potential for augmenting analysts’ workflows was underscored by its most requested applications: threat intelligence analysis, assistance with investigations, and automated response mechanisms. As noted in the report, “Respondents overwhelmingly believe GenAI will ultimately optimise the analyst experience, not replace human labour.”

  4. ROI outweighs cost concerns

For organisations evaluating GenAI investments, measurable return on investment (ROI) is the paramount concern, ahead of licensing costs or pricing model confusion. Respondents expect platform-led GenAI deployments to deliver faster results, thanks to cost savings from reduced tool management burdens, streamlined training, and fewer security incidents.

According to the survey data, the expected ROI breakdown includes 31% from cost optimisation and more efficient tools, 30% from fewer incidents, and 26% from reduced management time. Security leaders are clearly focused on ensuring the financial justification for GenAI investments.

  5. Guardrails and safety are crucial

GenAI adoption is tempered by concerns around safety and privacy, with 87% of organisations either implementing or planning new security policies to oversee GenAI use. Key risks include exposing sensitive data to large language models (LLMs) and adversarial attacks on GenAI tools. Respondents rank safety and privacy controls among their most desired GenAI features, highlighting the need for responsible implementation; a minimal sketch of one such control appears after this list of findings.

Reflecting the cautious optimism of practitioners, only 39% of respondents firmly believed that the rewards of GenAI outweigh its risks. Meanwhile, 40% considered the risks and rewards “comparable.”
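As a concrete illustration of the kind of control described in the fifth finding, the sketch below redacts obviously sensitive values from text before it would be sent to an external LLM. It is a minimal example under stated assumptions, not a recommendation or a CrowdStrike feature: the regular expressions are illustrative only, and a real policy would combine redaction with data-loss prevention, access controls, and audit logging.

```python
# A minimal sketch of one guardrail the survey's respondents describe: redact
# obviously sensitive values from a prompt before it is sent to an external
# LLM. The patterns below are illustrative only and would need to be far
# broader (and paired with DLP, access controls, and logging) in practice.
import re

REDACTIONS = [
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),           # email addresses
    (re.compile(r"\b(?:\d[ -]?){13,16}\b"), "[CARD_OR_ACCOUNT]"),      # long digit runs
    (re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b"), "[IP_ADDRESS]"),      # IPv4 addresses
    (re.compile(r"(?i)\b(api[_-]?key|token|password)\s*[:=]\s*\S+"), r"\1=[SECRET]"),
]


def redact(prompt: str) -> str:
    """Return the prompt with matches of the patterns above replaced."""
    for pattern, replacement in REDACTIONS:
        prompt = pattern.sub(replacement, prompt)
    return prompt


alert = ("Login failure for j.doe@example.com from 10.20.30.40, "
         "api_key=abc123def456 found in logs.")
print(redact(alert))
# -> Login failure for [EMAIL] from [IP_ADDRESS], api_key=[SECRET] found in logs.
```

Running the example prints the redacted prompt, with the email address, IP address, and API key replaced by placeholders before any of it leaves the organisation.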

Current state of GenAI adoption in cybersecurity

GenAI adoption remains in its early stages, but interest is growing. 64% of respondents are actively researching or have already invested in GenAI tools, and 69% of those currently evaluating their options plan to make a purchase within the year. 

Security teams are primarily driven by three concerns: improving attack detection and response, enhancing operational efficiency, and mitigating the impact of staff shortages. Among economic considerations, the top priority is ROI – a sign that security leaders are keen to demonstrate tangible benefits to justify their spending.

CrowdStrike emphasises the importance of a platform-based approach, where GenAI is integrated into a unified system. Such platforms enable seamless adoption, measurable benefits, and safety guardrails for responsible usage. According to the report, “The future of GenAI in cybersecurity will be defined by tools that not only advance security but also uphold the highest standards of safety and privacy.”

The CrowdStrike survey concludes by affirming that “GenAI is not a silver bullet” but has tremendous potential to improve cybersecurity outcomes. As organisations evaluate its adoption, they will prioritise tools that integrate seamlessly with existing platforms, deliver faster response times, and ensure safety and privacy compliance.

With threats becoming more sophisticated, the role of GenAI in enabling security teams to work faster and smarter could prove indispensable. While still in its infancy, GenAI in cybersecurity is poised to shift from early adoption to mainstream deployment, provided organisations and vendors address its risks responsibly.

See also: Keys to AI success: Security, sustainability, and overcoming silos

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post CrowdStrike: Cybersecurity pros want safer, specialist GenAI tools appeared first on AI News.
