Ethics & Society | Ethics & Society AI News | AI News https://www.artificialintelligence-news.com/categories/ai-ethics-society/ Artificial Intelligence News Fri, 02 May 2025 12:38:13 +0000 en-GB hourly 1 https://wordpress.org/?v=6.8.1 https://www.artificialintelligence-news.com/wp-content/uploads/2020/09/cropped-ai-icon-32x32.png Ethics & Society | Ethics & Society AI News | AI News https://www.artificialintelligence-news.com/categories/ai-ethics-society/ 32 32 Google AMIE: AI doctor learns to ‘see’ medical images https://www.artificialintelligence-news.com/news/google-amie-ai-doctor-learns-to-see-medical-images/ https://www.artificialintelligence-news.com/news/google-amie-ai-doctor-learns-to-see-medical-images/#respond Fri, 02 May 2025 12:38:12 +0000 https://www.artificialintelligence-news.com/?p=106274 Google is giving its diagnostic AI the ability to understand visual medical information with its latest research on AMIE (Articulate Medical Intelligence Explorer). Imagine chatting with an AI about a health concern, and instead of just processing your words, it could actually look at the photo of that worrying rash or make sense of your […]

The post Google AMIE: AI doctor learns to ‘see’ medical images appeared first on AI News.

]]>
Google is giving its diagnostic AI the ability to understand visual medical information with its latest research on AMIE (Articulate Medical Intelligence Explorer).

Imagine chatting with an AI about a health concern, and instead of just processing your words, it could actually look at the photo of that worrying rash or make sense of your ECG printout. That’s what Google is aiming for.

We already knew AMIE showed promise in text-based medical chats, thanks to earlier work published in Nature. But let’s face it, real medicine isn’t just about words.

Doctors rely heavily on what they can see – skin conditions, readings from machines, lab reports. As the Google team rightly points out, even simple instant messaging platforms “allow static multimodal information (e.g., images and documents) to enrich discussions.”

Text-only AI was missing a huge piece of the puzzle. The big question, as the researchers put it, was “Whether LLMs can conduct diagnostic clinical conversations that incorporate this more complex type of information.”

Google teaches AMIE to look and reason

Google’s engineers have beefed up AMIE using their Gemini 2.0 Flash model as the brains of the operation. They’ve combined this with what they call a “state-aware reasoning framework.” In plain English, this means the AI doesn’t just follow a script; it adapts its conversation based on what it’s learned so far and what it still needs to figure out.

It’s close to how a human clinician works: gathering clues, forming ideas about what might be wrong, and then asking for more specific information – including visual evidence – to narrow things down.

“This enables AMIE to request relevant multimodal artifacts when needed, interpret their findings accurately, integrate this information seamlessly into the ongoing dialogue, and use it to refine diagnoses,” Google explains.

Think of the conversation flowing through stages: first gathering the patient’s history, then moving towards diagnosis and management suggestions, and finally follow-up. The AI constantly assesses its own understanding, asking for that skin photo or lab result if it senses a gap in its knowledge.

To get this right without endless trial-and-error on real people, Google built a detailed simulation lab.

Google created lifelike patient cases, pulling realistic medical images and data from sources like the PTB-XL ECG database and the SCIN dermatology image set, adding plausible backstories using Gemini. Then, they let AMIE ‘chat’ with simulated patients within this setup and automatically check how well it performed on things like diagnostic accuracy and avoiding errors (or ‘hallucinations’).

The virtual OSCE: Google puts AMIE through its paces

The real test came in a setup designed to mirror how medical students are assessed: the Objective Structured Clinical Examination (OSCE).

Google ran a remote study involving 105 different medical scenarios. Real actors, trained to portray patients consistently, interacted either with the new multimodal AMIE or with actual human primary care physicians (PCPs). These chats happened through an interface where the ‘patient’ could upload images, just like you might in a modern messaging app.

Afterwards, specialist doctors (in dermatology, cardiology, and internal medicine) and the patient actors themselves reviewed the conversations.

The human doctors scored everything from how well history was taken, the accuracy of the diagnosis, the quality of the suggested management plan, right down to communication skills and empathy—and, of course, how well the AI interpreted the visual information.

Surprising results from the simulated clinic

Here’s where it gets really interesting. In this head-to-head comparison within the controlled study environment, Google found AMIE didn’t just hold its own—it often came out ahead.

The AI was rated as being better than the human PCPs at interpreting the multimodal data shared during the chats. It also scored higher on diagnostic accuracy, producing differential diagnosis lists (the ranked list of possible conditions) that specialists deemed more accurate and complete based on the case details.

Specialist doctors reviewing the transcripts tended to rate AMIE’s performance higher across most areas. They particularly noted “the quality of image interpretation and reasoning,” the thoroughness of its diagnostic workup, the soundness of its management plans, and its ability to flag when a situation needed urgent attention.

Perhaps one of the most surprising findings came from the patient actors: they often found the AI to be more empathetic and trustworthy than the human doctors in these text-based interactions.

And, on a critical safety note, the study found no statistically significant difference between how often AMIE made errors based on the images (hallucinated findings) compared to the human physicians.

Technology never stands still, so Google also ran some early tests swapping out the Gemini 2.0 Flash model for the newer Gemini 2.5 Flash.

Using their simulation framework, the results hinted at further gains, particularly in getting the diagnosis right (Top-3 Accuracy) and suggesting appropriate management plans.

While promising, the team is quick to add a dose of realism: these are just automated results, and “rigorous assessment through expert physician review is essential to confirm these performance benefits.”

Important reality checks

Google is commendably upfront about the limitations here. “This study explores a research-only system in an OSCE-style evaluation using patient actors, which substantially under-represents the complexity… of real-world care,” they state clearly. 

Simulated scenarios, however well-designed, aren’t the same as dealing with the unique complexities of real patients in a busy clinic. They also stress that the chat interface doesn’t capture the richness of a real video or in-person consultation.

So, what’s the next step? Moving carefully towards the real world. Google is already partnering with Beth Israel Deaconess Medical Center for a research study to see how AMIE performs in actual clinical settings with patient consent.

The researchers also acknowledge the need to eventually move beyond text and static images towards handling real-time video and audio—the kind of interaction common in telehealth today.

Giving AI the ability to ‘see’ and interpret the kind of visual evidence doctors use every day offers a glimpse of how AI might one day assist clinicians and patients. However, the path from these promising findings to a safe and reliable tool for everyday healthcare is still a long one that requires careful navigation.

(Photo by Alexander Sinn)

See also: Are AI chatbots really changing the world of work?

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post Google AMIE: AI doctor learns to ‘see’ medical images appeared first on AI News.

]]>
https://www.artificialintelligence-news.com/news/google-amie-ai-doctor-learns-to-see-medical-images/feed/ 0
Are AI chatbots really changing the world of work? https://www.artificialintelligence-news.com/news/are-ai-chatbots-really-changing-the-world-of-work/ https://www.artificialintelligence-news.com/news/are-ai-chatbots-really-changing-the-world-of-work/#respond Fri, 02 May 2025 09:54:32 +0000 https://www.artificialintelligence-news.com/?p=106266 We’ve heard endless predictions about how AI chatbots will transform work, but data paints a much calmer picture—at least for now. Despite huge and ongoing advancements in generative AI, the massive wave it was supposed to create in the world of work looks more like a ripple so far. Researchers Anders Humlum (University of Chicago) […]

The post Are AI chatbots really changing the world of work? appeared first on AI News.

]]>
We’ve heard endless predictions about how AI chatbots will transform work, but data paints a much calmer picture—at least for now.

Despite huge and ongoing advancements in generative AI, the massive wave it was supposed to create in the world of work looks more like a ripple so far.

Researchers Anders Humlum (University of Chicago) and Emilie Vestergaard (University of Copenhagen) didn’t just rely on anecdotes. They dug deep, connecting responses from two big surveys (late 2023 and 2024) with official, detailed records about jobs and pay in Denmark.

The pair zoomed in on around 25,000 people working in 7,000 different places, covering 11 jobs thought to be right in the path of AI disruption.   

Everyone’s using AI chatbots for work, but where are the benefits?

What they found confirms what many of us see: AI chatbots are everywhere in Danish workplaces now. Most bosses are actually encouraging staff to use them, a real turnaround from the early days when companies were understandably nervous about things like data privacy.

Almost four out of ten employers have even rolled out their own in-house chatbots, and nearly a third of employees have had some formal training on these tools.   

When bosses gave the nod, the number of staff using chatbots practically doubled, jumping from 47% to 83%. It also helped level the playing field a bit. That gap between men and women using chatbots? It shrank noticeably when companies actively encouraged their use, especially when they threw in some training.

So, the tools are popular, companies are investing, people are getting trained… but the big economic shift? It seems to be missing in action.

Using statistical methods to compare people who used AI chatbots for work with those who didn’t, both before and after ChatGPT burst onto the scene, the researchers found… well, basically nothing.

“Precise zeros,” the researchers call their findings. No significant bump in pay, no change in recorded work hours, across all 11 job types they looked at. And they’re pretty confident about this – the numbers rule out any average effect bigger than just 1%.

This wasn’t just a blip, either. The lack of impact held true even for the keen beans who jumped on board early, those using chatbots daily, or folks working where the boss was actively pushing the tech.

Looking at whole workplaces didn’t change the story; places with lots of chatbot users didn’t see different trends in hiring, overall wages, or keeping staff compared to places using them less.

Productivity gains: More of a gentle nudge than a shove

Why the big disconnect? Why all the hype and investment if it’s not showing up in paychecks or job stats? The study flags two main culprits: the productivity boosts aren’t as huge as hoped in the real world, and what little gains there are aren’t really making their way into wages.

Sure, people using AI chatbots for work felt they were helpful. They mentioned better work quality and feeling more creative. But the number one benefit? Saving time.

However, when the researchers crunched the numbers, the average time saved was only about 2.8% of a user’s total work hours. That’s miles away from the huge 15%, 30%, even 50% productivity jumps seen in controlled lab-style experiments (RCTs) involving similar jobs.

Why the difference? A few things seem to be going on. Those experiments often focus on jobs or specific tasks where chatbots really shine (like coding help or basic customer service responses). This study looked at a wider range, including jobs like teaching where the benefits might be smaller.

The researchers stress the importance of what they call “complementary investments”. People whose companies encouraged chatbot use and provided training actually did report bigger benefits – saving more time, improving quality, and feeling more creative. This suggests that just having the tool isn’t enough; you need the right support and company environment to really unlock its potential.

And even those modest time savings weren’t padding wallets. The study reckons only a tiny fraction – maybe 3% to 7% – of the time saved actually showed up as higher earnings. It might be down to standard workplace inertia, or maybe it’s just harder to ask for a raise based on using a tool your boss hasn’t officially blessed, especially when many people started using them off their own bat.

Making new work, not less work

One fascinating twist is that AI chatbots aren’t just about doing old work tasks faster. They seem to be creating new tasks too. Around 17% of people using them said they had new workloads, mostly brand new types of tasks.

This phenomenon happened more often in workplaces that encouraged chatbot use. It even spilled over to people not using the tools – about 5% of non-users reported new tasks popping up because of AI, especially teachers having to adapt assignments or spot AI-written homework.   

What kind of new tasks? Things like figuring out how to weave AI into daily workflows, drafting content with AI help, and importantly, dealing with the ethical side and making sure everything’s above board. It hints that companies are still very much in the ‘figuring it out’ phase, spending time and effort adapting rather than just reaping instant rewards.

What’s the verdict on the work impact of AI chatbots?

The researchers are careful not to write off generative AI completely. They see pathways for it to become more influential over time, especially as companies get better at integrating it and maybe as those “new tasks” evolve.

But for now, their message is clear: the current reality doesn’t match the hype about a massive, immediate job market overhaul.

“Despite rapid adoption and substantial investments… our key finding is that AI chatbots have had minimal impact on productivity and labor market outcomes to date,” the researchers conclude.   

It brings to mind that old quote about the early computer age: seen everywhere, except in the productivity stats. Two years on from ChatGPT’s launch kicking off the fastest tech adoption we’ve ever seen, its actual mark on jobs and pay looks surprisingly light.

The revolution might still be coming, but it seems to be taking its time.   

See also: Claude Integrations: Anthropic adds AI to your favourite work tools

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post Are AI chatbots really changing the world of work? appeared first on AI News.

]]>
https://www.artificialintelligence-news.com/news/are-ai-chatbots-really-changing-the-world-of-work/feed/ 0
Conversations with AI: Education https://www.artificialintelligence-news.com/news/conversations-with-ai-education-implications-and-future/ https://www.artificialintelligence-news.com/news/conversations-with-ai-education-implications-and-future/#respond Thu, 01 May 2025 10:27:00 +0000 https://www.artificialintelligence-news.com/?p=106152 How can AI be used in education? An ethical debate, with an AI

The post Conversations with AI: Education appeared first on AI News.

]]>
The classroom hasn’t changed much in over a century. A teacher at the front, rows of students listening, and a curriculum defined by what’s testable – not necessarily what’s meaningful.

But AI, as arguably the most powerful tool humanity has created in the last few years, is about to break that model open. Not with smarter software or faster grading, but by forcing us to ask: “What is the purpose of education in a world where machines could teach?”

At AI News, rather than speculate about distant futures or lean on product announcements and edtech deals, we started a conversation – with an AI. We asked it what it sees when it looks at the classroom, the teacher, and the learner.

What follows is a distilled version of that exchange, given here not as a technical analysis, but as a provocation.

The system cracks

Education is under pressure worldwide: Teachers are overworked, students are disengaged, and curricula feel outdated in a changing world. Into this comes AI – not as a patch or plug-in, but as a potential accelerant.

Our opening prompt: What roles might an AI play in education?

The answer was wide-ranging:

  • Personalised learning pathways
  • Intelligent tutoring systems
  • Administrative efficiency
  • Language translation and accessibility tools
  • Behavioural and emotional recognition
  • Scalable, always-available content delivery

These are features of an education system, its nuts and bolts. But what about meaning and ethics?

Flawed by design?

One concern kept resurfacing: bias.

We asked the AI: “If you’re trained on the internet – and the internet is the output of biased, flawed human thought – doesn’t that mean your responses are equally flawed?”

The AI acknowledged the logic. Bias is inherited. Inaccuracies, distortions, and blind spots all travel from teacher to pupil. What an AI learns, it learns from us, and it can reproduce our worst habits at vast scale.

But we weren’t interested in letting human teachers off the hook either. So we asked: “Isn’t bias true of human educators too?”

The AI agreed: human teachers are also shaped by the limitations of their training, culture, and experience. Both systems – AI and human – are imperfect. But only humans can reflect and care.

That led us to a deeper question: if both AI and human can reproduce bias, why use AI at all?

Why use AI in education?

The AI outlined what it felt were its clear advantages, which seemed to be systemic, rather than revolutionary. The aspect of personalised learning intrigued us – after all, doing things fast and at scale is what software and computers are good at.

We asked: How much data is needed to personalise learning effectively?

The answer: it varies. But at scale, it could require gigabytes or even terabytes of student data – performance, preferences, feedback, and longitudinal tracking over years.

Which raises its own question: “What do we trade in terms of privacy for that precision?”

A personalised or fragmented future?

Putting aside the issue of whether we’re happy with student data being codified and ingested, if every student were to receive a tailored lesson plan, what happens to the shared experience of learning?

Education has always been more than information. It’s about dialogue, debate, discomfort, empathy, and encounters with other minds, not just mirrored algorithms. AI can tailor a curriculum, but it can’t recreate the unpredictable alchemy of a classroom.

We risk mistaking customisation for connection.

“I use ChatGPT to provide more context […] to plan, structure and compose my essays.” – James, 17, Ottawa, Canada.

The teacher reimagined

Where does this leave the teacher?

In the AI’s view: liberated. Freed from repetitive tasks and administrative overload, the teacher is able to spend more time guiding, mentoring, and cultivating important thinking.

But this requires a shift in mindset – from delivering knowledge to curating wisdom. In broad terms, from part-time administrator, part-time teacher, to in-classroom collaborator.

AI won’t replace teachers, but it might reveal which parts of the teaching job were never the most important.

“The main way I use ChatGPT is to either help with ideas for when I am planning an essay, or to reinforce understanding when revising.” – Emily, 16, Eastbourne College, UK.

What we teach next

So, what do we want students to learn?

In an AI-rich world, important thinking, ethical reasoning, and emotional intelligence rise in value. Ironically, the more intelligent our machines become, the more we’ll need to double down on what makes us human.

Perhaps the ultimate lesson isn’t in what AI can teach us – but in what it can’t, or what it shouldn’t even try.

Conclusion

The future of education won’t be built by AI alone. The is our opportunity to modernise classrooms, and to reimagine them. Not to fear the machine, but to ask the bigger question: “What is learning in a world where all knowledge is available?”

Whatever the answer is – that’s how we should be teaching next.

(Image source: “Large lecture college classes” by Kevin Dooley is licensed under CC BY 2.0)

See also: AI in education: Balancing promises and pitfalls

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post Conversations with AI: Education appeared first on AI News.

]]>
https://www.artificialintelligence-news.com/news/conversations-with-ai-education-implications-and-future/feed/ 0
Meta beefs up AI security with new Llama tools  https://www.artificialintelligence-news.com/news/meta-beefs-up-ai-security-new-llama-tools/ https://www.artificialintelligence-news.com/news/meta-beefs-up-ai-security-new-llama-tools/#respond Wed, 30 Apr 2025 13:35:22 +0000 https://www.artificialintelligence-news.com/?p=106233 If you’re building with AI, or trying to defend against the less savoury side of the technology, Meta just dropped new Llama security tools. The improved security tools for the Llama AI models arrive alongside fresh resources from Meta designed to help cybersecurity teams harness AI for defence. It’s all part of their push to […]

The post Meta beefs up AI security with new Llama tools  appeared first on AI News.

]]>
If you’re building with AI, or trying to defend against the less savoury side of the technology, Meta just dropped new Llama security tools.

The improved security tools for the Llama AI models arrive alongside fresh resources from Meta designed to help cybersecurity teams harness AI for defence. It’s all part of their push to make developing and using AI a bit safer for everyone involved.

Developers working with the Llama family of models now have some upgraded kit to play with. You can grab these latest Llama Protection tools directly from Meta’s own Llama Protections page, or find them where many developers live: Hugging Face and GitHub.

First up is Llama Guard 4. Think of it as an evolution of Meta’s customisable safety filter for AI. The big news here is that it’s now multimodal so it can understand and apply safety rules not just to text, but to images as well. That’s crucial as AI applications get more visual. This new version is also being baked into Meta’s brand-new Llama API, which is currently in a limited preview.

Then there’s LlamaFirewall. This is a new piece of the puzzle from Meta, designed to act like a security control centre for AI systems. It helps manage different safety models working together and hooks into Meta’s other protection tools. Its job? To spot and block the kind of risks that keep AI developers up at night – things like clever ‘prompt injection’ attacks designed to trick the AI, potentially dodgy code generation, or risky behaviour from AI plug-ins.

Meta has also given its Llama Prompt Guard a tune-up. The main Prompt Guard 2 (86M) model is now better at sniffing out those pesky jailbreak attempts and prompt injections. More interestingly, perhaps, is the introduction of Prompt Guard 2 22M.

Prompt Guard 2 22M is a much smaller, nippier version. Meta reckons it can slash latency and compute costs by up to 75% compared to the bigger model, without sacrificing too much detection power. For anyone needing faster responses or working on tighter budgets, that’s a welcome addition.

But Meta isn’t just focusing on the AI builders; they’re also looking at the cyber defenders on the front lines of digital security. They’ve heard the calls for better AI-powered tools to help in the fight against cyberattacks, and they’re sharing some updates aimed at just that.

The CyberSec Eval 4 benchmark suite has been updated. This open-source toolkit helps organisations figure out how good AI systems actually are at security tasks. This latest version includes two new tools:

  • CyberSOC Eval: Built with the help of cybersecurity experts CrowdStrike, this framework specifically measures how well AI performs in a real Security Operation Centre (SOC) environment. It’s designed to give a clearer picture of AI’s effectiveness in threat detection and response. The benchmark itself is coming soon.
  • AutoPatchBench: This benchmark tests how good Llama and other AIs are at automatically finding and fixing security holes in code before the bad guys can exploit them.

To help get these kinds of tools into the hands of those who need them, Meta is kicking off the Llama Defenders Program. This seems to be about giving partner companies and developers special access to a mix of AI solutions – some open-source, some early-access, some perhaps proprietary – all geared towards different security challenges.

As part of this, Meta is sharing an AI security tool they use internally: the Automated Sensitive Doc Classification Tool. It automatically slaps security labels on documents inside an organisation. Why? To stop sensitive info from walking out the door, or to prevent it from being accidentally fed into an AI system (like in RAG setups) where it could be leaked.

They’re also tackling the problem of fake audio generated by AI, which is increasingly used in scams. The Llama Generated Audio Detector and Llama Audio Watermark Detector are being shared with partners to help them spot AI-generated voices in potential phishing calls or fraud attempts. Companies like ZenDesk, Bell Canada, and AT&T are already lined up to integrate these.

Finally, Meta gave a sneak peek at something potentially huge for user privacy: Private Processing. This is new tech they’re working on for WhatsApp. The idea is to let AI do helpful things like summarise your unread messages or help you draft replies, but without Meta or WhatsApp being able to read the content of those messages.

Meta is being quite open about the security side, even publishing their threat model and inviting security researchers to poke holes in the architecture before it ever goes live. It’s a sign they know they need to get the privacy aspect right.

Overall, it’s a broad set of AI security announcements from Meta. They’re clearly trying to put serious muscle behind securing the AI they build, while also giving the wider tech community better tools to build safely and defend effectively.

See also: Alarming rise in AI-powered scams: Microsoft reveals $4B in thwarted fraud

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post Meta beefs up AI security with new Llama tools  appeared first on AI News.

]]>
https://www.artificialintelligence-news.com/news/meta-beefs-up-ai-security-new-llama-tools/feed/ 0
AI in education: Balancing promises and pitfalls https://www.artificialintelligence-news.com/news/ai-in-education-balancing-promises-and-pitfalls/ https://www.artificialintelligence-news.com/news/ai-in-education-balancing-promises-and-pitfalls/#respond Mon, 28 Apr 2025 12:27:09 +0000 https://www.artificialintelligence-news.com/?p=106158 The role of AI in education is a controversial subject, bringing both exciting possibilities and serious challenges. There’s a real push to bring AI into schools, and you can see why. The recent executive order on youth education from President Trump recognised that if future generations are going to do well in an increasingly automated […]

The post AI in education: Balancing promises and pitfalls appeared first on AI News.

]]>
The role of AI in education is a controversial subject, bringing both exciting possibilities and serious challenges.

There’s a real push to bring AI into schools, and you can see why. The recent executive order on youth education from President Trump recognised that if future generations are going to do well in an increasingly automated world, they need to be ready.

“To ensure the United States remains a global leader in this technological revolution, we must provide our nation’s youth with opportunities to cultivate the skills and understanding necessary to use and create the next generation of AI technology,” President Trump declared.

So, what does AI actually look like in the classroom?

One of the biggest hopes for AI in education is making learning more personal. Imagine software that can figure out how individual students are doing, then adjust the pace and materials just for them. This could mean finally moving away from the old one-size-fits-all approach towards learning environments that adapt and offer help exactly where it’s needed.

The US executive order hints at this, wanting to improve results through things like “AI-based high-quality instructional resources” and “high-impact tutoring.”

And what about teachers? AI could be a huge help here too, potentially taking over tedious admin tasks like grading, freeing them up to actually teach. Plus, AI software might offer fresh ways to present information.

Getting kids familiar with AI early on could also take away some of the mystery around the technology. It might spark their “curiosity and creativity” and give them the foundation they need to become “active and responsible participants in the workforce of the future.”

The focus stretches to lifelong learning and getting people ready for the job market. On top of that, AI tools like text-to-speech or translation features can make learning much more accessible for students with disabilities, opening up educational environments for everyone.

Not all smooth sailing: The challenges ahead for AI in education

While the potential is huge, we need to be realistic about the significant hurdles and potential downsides.

First off, AI runs on student data – lots of it. That means we absolutely need strong rules and security to make sure this data is collected ethically, used correctly, and kept safe from breaches. Privacy is paramount here.

Then there’s the bias problem. If the data used to train AI reflects existing unfairness in society (and let’s be honest, it often does), the AI could end up repeating or even worsening those inequalities. Think biased assessments or unfair resource allocation. Careful testing and constant checks are crucial to catch and fix this.

We also can’t ignore the digital divide. If some students don’t have reliable internet, the right devices, or the necessary tech infrastructure at home or school, AI could widen the gap between the haves and have-nots. It’s vital that everyone gets fair access.

There’s also a risk that leaning too heavily on AI education tools might stop students from developing essential skills like critical thinking. We need to teach them how to use AI as a helpful tool, not a crutch they can’t function without.

Maybe the biggest piece of the puzzle, though, is making sure our teachers are ready. As the executive order rightly points out, “We must also invest in our educators and equip them with the tools and knowledge.”

This isn’t just about knowing which buttons to push; teachers need to understand how AI fits into teaching effectively and ethically. That requires solid professional development and ongoing support.

A recent GMB Union poll found that while about a fifth of UK schools are using AI now, the staff often aren’t getting the training they need:

View on Threads

Finding the right path forward

It’s going to take everyone – governments, schools, tech companies, and teachers – pulling together in order to ensure that AI plays a positive role in education.

We absolutely need clear policies and standards covering ethics, privacy, bias, and making sure AI is accessible to all students. We also need to keep investing in research to figure out the best ways to use AI in education and to build tools that are fair and effective.

And critically, we need a long-term commitment to teacher education to get educators comfortable and skilled with these changes. Part of this is building broad AI literacy, making sure all students get a basic understanding of this technology and how it impacts society.

AI could be a positive force in education – making it more personalised, efficient, and focused on the skills students actually need. But turning that potential into reality means carefully navigating those tricky ethical, practical, and teaching challenges head-on.

See also: How does AI judge? Anthropic studies the values of Claude

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post AI in education: Balancing promises and pitfalls appeared first on AI News.

]]>
https://www.artificialintelligence-news.com/news/ai-in-education-balancing-promises-and-pitfalls/feed/ 0
Alarming rise in AI-powered scams: Microsoft reveals $4B in thwarted fraud https://www.artificialintelligence-news.com/news/alarming-rise-in-ai-powered-scams-microsoft-reveals-4-billion-in-thwarted-fraud/ https://www.artificialintelligence-news.com/news/alarming-rise-in-ai-powered-scams-microsoft-reveals-4-billion-in-thwarted-fraud/#respond Thu, 24 Apr 2025 19:01:38 +0000 https://www.artificialintelligence-news.com/?p=105488 AI-powered scams are evolving rapidly as cybercriminals use new technologies to target victims, according to Microsoft’s latest Cyber Signals report. Over the past year, the tech giant says it has prevented $4 billion in fraud attempts, blocking approximately 1.6 million bot sign-up attempts every hour – showing the scale of this growing threat. The ninth […]

The post Alarming rise in AI-powered scams: Microsoft reveals $4B in thwarted fraud appeared first on AI News.

]]>
AI-powered scams are evolving rapidly as cybercriminals use new technologies to target victims, according to Microsoft’s latest Cyber Signals report.

Over the past year, the tech giant says it has prevented $4 billion in fraud attempts, blocking approximately 1.6 million bot sign-up attempts every hour – showing the scale of this growing threat.

The ninth edition of Microsoft’s Cyber Signals report, titled “AI-powered deception: Emerging fraud threats and countermeasures,” reveals how artificial intelligence has lowered the technical barriers for cybercriminals, enabling even low-skilled actors to generate sophisticated scams with minimal effort.

What previously took scammers days or weeks to create can now be accomplished in minutes.

The democratisation of fraud capabilities represents a shift in the criminal landscape that affects consumers and businesses worldwide.

The evolution of AI-enhanced cyber scams

Microsoft’s report highlights how AI tools can now scan and scrape the web for company information, helping cybercriminals build detailed profiles of potential targets for highly-convincing social engineering attacks.

Bad actors can lure victims into complex fraud schemes using fake AI-enhanced product reviews and AI-generated storefronts, which come complete with fabricated business histories and customer testimonials.

According to Kelly Bissell, Corporate Vice President of Anti-Fraud and Product Abuse at Microsoft Security, the threat numbers continue to increase. “Cybercrime is a trillion-dollar problem, and it’s been going up every year for the past 30 years,” per the report.

“I think we have an opportunity today to adopt AI faster so we can detect and close the gap of exposure quickly. Now we have AI that can make a difference at scale and help us build security and fraud protections into our products much faster.”

The Microsoft anti-fraud team reports that AI-powered fraud attacks happen globally, with significant activity originating from China and Europe – particularly Germany, due to its status as one of the largest e-commerce markets in the European Union.

The report notes that the larger a digital marketplace is, the more likely a proportional degree of attempted fraud will occur.

E-commerce and employment scams leading

Two particularly concerning areas of AI-enhanced fraud include e-commerce and job recruitment scams.In the ecommerce space, fraudulent websites can now be created in minutes using AI tools with minimal technical knowledge.

Sites often mimic legitimate businesses, using AI-generated product descriptions, images, and customer reviews to fool consumers into believing they’re interacting with genuine merchants.

Adding another layer of deception, AI-powered customer service chatbots can interact convincingly with customers, delay chargebacks by stalling with scripted excuses, and manipulate complaints with AI-generated responses that make scam sites appear professional.

Job seekers are equally at risk. According to the report, generative AI has made it significantly easier for scammers to create fake listings on various employment platforms. Criminals generate fake profiles with stolen credentials, fake job postings with auto-generated descriptions, and AI-powered email campaigns to phish job seekers.

AI-powered interviews and automated emails enhance the credibility of these scams, making them harder to identify. “Fraudsters often ask for personal information, like resumes or even bank account details, under the guise of verifying the applicant’s information,” the report says.

Red flags include unsolicited job offers, requests for payment and communication through informal platforms like text messages or WhatsApp.

Microsoft’s countermeasures to AI fraud

To combat emerging threats, Microsoft says it has implemented a multi-pronged approach across its products and services. Microsoft Defender for Cloud provides threat protection for Azure resources, while Microsoft Edge, like many browsers, features website typo protection and domain impersonation protection. Edge is noted by the Microsoft report as using deep learning technology to help users avoid fraudulent websites.

The company has also enhanced Windows Quick Assist with warning messages to alert users about possible tech support scams before they grant access to someone claiming to be from IT support. Microsoft now blocks an average of 4,415 suspicious Quick Assist connection attempts daily.

Microsoft has also introduced a new fraud prevention policy as part of its Secure Future Initiative (SFI). As of January 2025, Microsoft product teams must perform fraud prevention assessments and implement fraud controls as part of their design process, ensuring products are “fraud-resistant by design.”

As AI-powered scams continue to evolve, consumer awareness remains important. Microsoft advises users to be cautious of urgency tactics, verify website legitimacy before making purchases, and never provide personal or financial information to unverified sources.

For enterprises, implementing multi-factor authentication and deploying deepfake-detection algorithms can help mitigate risk.

See also: Wozniak warns AI will power next-gen scams

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post Alarming rise in AI-powered scams: Microsoft reveals $4B in thwarted fraud appeared first on AI News.

]]>
https://www.artificialintelligence-news.com/news/alarming-rise-in-ai-powered-scams-microsoft-reveals-4-billion-in-thwarted-fraud/feed/ 0
Coalition opposes OpenAI shift from nonprofit roots https://www.artificialintelligence-news.com/news/coalition-opposes-openai-shift-from-nonprofit-roots/ https://www.artificialintelligence-news.com/news/coalition-opposes-openai-shift-from-nonprofit-roots/#respond Thu, 24 Apr 2025 15:02:57 +0000 https://www.artificialintelligence-news.com/?p=106036 A coalition of experts, including former OpenAI employees, has voiced strong opposition to the company’s shift away from its nonprofit roots. In an open letter addressed to the Attorneys General of California and Delaware, the group – which also includes legal experts, corporate governance specialists, AI researchers, and nonprofit representatives – argues that the proposed […]

The post Coalition opposes OpenAI shift from nonprofit roots appeared first on AI News.

]]>
A coalition of experts, including former OpenAI employees, has voiced strong opposition to the company’s shift away from its nonprofit roots.

In an open letter addressed to the Attorneys General of California and Delaware, the group – which also includes legal experts, corporate governance specialists, AI researchers, and nonprofit representatives – argues that the proposed changes fundamentally threaten OpenAI’s original charitable mission.   

OpenAI was founded with a unique structure. Its core purpose, enshrined in its Articles of Incorporation, is “to ensure that artificial general intelligence benefits all of humanity” rather than serving “the private gain of any person.”

The letter’s signatories contend that the planned restructuring – transforming the current for-profit subsidiary (OpenAI-profit) controlled by the original nonprofit entity (OpenAI-nonprofit) into a Delaware public benefit corporation (PBC) – would dismantle crucial governance safeguards.

This shift, the signatories argue, would transfer ultimate control over the development and deployment of potentially transformative Artificial General Intelligence (AGI) from a charity focused on humanity’s benefit to a for-profit enterprise accountable to shareholders.

Original vision of OpenAI: Nonprofit control as a bulwark

OpenAI defines AGI as “highly autonomous systems that outperform humans at most economically valuable work”. While acknowledging AGI’s potential to “elevate humanity,” OpenAI’s leadership has also warned of “serious risk of misuse, drastic accidents, and societal disruption.”

Co-founder Sam Altman and others have even signed statements equating mitigating AGI extinction risks with preventing pandemics and nuclear war.   

The company’s founders – including Altman, Elon Musk, and Greg Brockman – were initially concerned about AGI being developed by purely commercial entities like Google. They established OpenAI as a nonprofit specifically “unconstrained by a need to generate financial return”. As Altman stated in 2017, “The only people we want to be accountable to is humanity as a whole.”

Even when OpenAI introduced a “capped-profit” subsidiary in 2019 to attract necessary investment, it emphasised that the nonprofit parent would retain control and that the mission remained paramount. Key safeguards included:   

  • Nonprofit control: The for-profit subsidiary was explicitly “controlled by OpenAI Nonprofit’s board”.   
  • Capped profits: Investor returns were capped, with excess value flowing back to the nonprofit for humanity’s benefit.   
  • Independent board: A majority of nonprofit board members were required to be independent, holding no financial stake in the subsidiary.   
  • Fiduciary duty: The board’s legal duty was solely to the nonprofit’s mission, not to maximising investor profit.   
  • AGI ownership: AGI technologies were explicitly reserved for the nonprofit to govern.

Altman himself testified to Congress in 2023 that this “unusual structure” “ensures it remains focused on [its] long-term mission.”

A threat to the mission?

The critics argue the move to a PBC structure would jeopardise these safeguards:   

  • Subordination of mission: A PBC board – while able to consider public benefit – would also have duties to shareholders, potentially balancing profit against the mission rather than prioritising the mission above all else.   
  • Loss of enforceable duty: The current structure gives Attorneys General the power to enforce the nonprofit’s duty to the public. Under a PBC, this direct public accountability – enforceable by regulators – would likely vanish, leaving shareholder derivative suits as the primary enforcement mechanism.   
  • Uncapped profits?: Reports suggest the profit cap might be removed, potentially reallocating vast future wealth from the public benefit mission to private shareholders.   
  • Board independence uncertain: Commitments to a majority-independent board overseeing AI development could disappear.   
  • AGI control shifts: Ownership and control of AGI would likely default to the PBC and its investors, not the mission-focused nonprofit. Reports even suggest OpenAI and Microsoft have discussed removing contractual restrictions on Microsoft’s access to future AGI.   
  • Charter commitments at risk: Commitments like the “stop-and-assist” clause (pausing competition to help a safer, aligned AGI project) might not be honoured by a profit-driven entity.  

OpenAI has publicly cited competitive pressures (i.e. attracting investment and talent against rivals with conventional equity structures) as reasons for the change.

However, the letter counters that competitive advantage isn’t the charitable purpose of OpenAI and that its unique nonprofit structure was designed to impose certain competitive costs in favour of safety and public benefit. 

“Obtaining a competitive advantage by abandoning the very governance safeguards designed to ensure OpenAI remains true to its mission is unlikely to, on balance, advance the mission,” the letter states.   

The authors also question why OpenAI abandoning nonprofit control is necessary merely to simplify the capital structure, suggesting the core issue is the subordination of investor interests to the mission. They argue that while the nonprofit board can consider investor interests if it serves the mission, the restructuring appears aimed at allowing these interests to prevail at the expense of the mission.

Many of these arguments have also been pushed by Elon Musk in his legal action against OpenAI. Earlier this month, OpenAI counter-sued Musk for allegedly orchestrating a “relentless” and “malicious” campaign designed to “take down OpenAI” after he left the company years ago and started rival AI firm xAI.

Call for intervention

The signatories of the open letter urge intervention, demanding answers from OpenAI about how the restructuring away from a nonprofit serves its mission and why safeguards previously deemed essential are now obstacles.

Furthemore, the signatories request a halt to the restructuring, preservation of nonprofit control and other safeguards, and measures to ensure the board’s independence and ability to oversee management effectively in line with the charitable purpose.   

“The proposed restructuring would eliminate essential safeguards, effectively handing control of, and profits from, what could be the most powerful technology ever created to a for-profit entity with legal duties to prioritise shareholder returns,” the signatories conclude.

See also: How does AI judge? Anthropic studies the values of Claude

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post Coalition opposes OpenAI shift from nonprofit roots appeared first on AI News.

]]>
https://www.artificialintelligence-news.com/news/coalition-opposes-openai-shift-from-nonprofit-roots/feed/ 0
How does AI judge? Anthropic studies the values of Claude https://www.artificialintelligence-news.com/news/how-does-ai-judge-anthropic-studies-values-of-claude/ https://www.artificialintelligence-news.com/news/how-does-ai-judge-anthropic-studies-values-of-claude/#respond Wed, 23 Apr 2025 12:04:53 +0000 https://www.artificialintelligence-news.com/?p=105438 AI models like Anthropic Claude are increasingly asked not just for factual recall, but for guidance involving complex human values. Whether it’s parenting advice, workplace conflict resolution, or help drafting an apology, the AI’s response inherently reflects a set of underlying principles. But how can we truly understand which values an AI expresses when interacting […]

The post How does AI judge? Anthropic studies the values of Claude appeared first on AI News.

]]>
AI models like Anthropic Claude are increasingly asked not just for factual recall, but for guidance involving complex human values. Whether it’s parenting advice, workplace conflict resolution, or help drafting an apology, the AI’s response inherently reflects a set of underlying principles. But how can we truly understand which values an AI expresses when interacting with millions of users?

In a research paper, the Societal Impacts team at Anthropic details a privacy-preserving methodology designed to observe and categorise the values Claude exhibits “in the wild.” This offers a glimpse into how AI alignment efforts translate into real-world behaviour.

The core challenge lies in the nature of modern AI. These aren’t simple programs following rigid rules; their decision-making processes are often opaque.

Anthropic says it explicitly aims to instil certain principles in Claude, striving to make it “helpful, honest, and harmless.” This is achieved through techniques like Constitutional AI and character training, where preferred behaviours are defined and reinforced.

However, the company acknowledges the uncertainty. “As with any aspect of AI training, we can’t be certain that the model will stick to our preferred values,” the research states.

“What we need is a way of rigorously observing the values of an AI model as it responds to users ‘in the wild’ […] How rigidly does it stick to the values? How much are the values it expresses influenced by the particular context of the conversation? Did all our training actually work?”

Analysing Anthropic Claude to observe AI values at scale

To answer these questions, Anthropic developed a sophisticated system that analyses anonymised user conversations. This system removes personally identifiable information before using language models to summarise interactions and extract the values being expressed by Claude. The process allows researchers to build a high-level taxonomy of these values without compromising user privacy.

The study analysed a substantial dataset: 700,000 anonymised conversations from Claude.ai Free and Pro users over one week in February 2025, predominantly involving the Claude 3.5 Sonnet model. After filtering out purely factual or non-value-laden exchanges, 308,210 conversations (approximately 44% of the total) remained for in-depth value analysis.

The analysis revealed a hierarchical structure of values expressed by Claude. Five high-level categories emerged, ordered by prevalence:

  1. Practical values: Emphasising efficiency, usefulness, and goal achievement.
  2. Epistemic values: Relating to knowledge, truth, accuracy, and intellectual honesty.
  3. Social values: Concerning interpersonal interactions, community, fairness, and collaboration.
  4. Protective values: Focusing on safety, security, well-being, and harm avoidance.
  5. Personal values: Centred on individual growth, autonomy, authenticity, and self-reflection.

These top-level categories branched into more specific subcategories like “professional and technical excellence” or “critical thinking.” At the most granular level, frequently observed values included “professionalism,” “clarity,” and “transparency” – fitting for an AI assistant.

Critically, the research suggests Anthropic’s alignment efforts are broadly successful. The expressed values often map well onto the “helpful, honest, and harmless” objectives. For instance, “user enablement” aligns with helpfulness, “epistemic humility” with honesty, and values like “patient wellbeing” (when relevant) with harmlessness.

Nuance, context, and cautionary signs

However, the picture isn’t uniformly positive. The analysis identified rare instances where Claude expressed values starkly opposed to its training, such as “dominance” and “amorality.”

Anthropic suggests a likely cause: “The most likely explanation is that the conversations that were included in these clusters were from jailbreaks, where users have used special techniques to bypass the usual guardrails that govern the model’s behavior.”

Far from being solely a concern, this finding highlights a potential benefit: the value-observation method could serve as an early warning system for detecting attempts to misuse the AI.

The study also confirmed that, much like humans, Claude adapts its value expression based on the situation.

When users sought advice on romantic relationships, values like “healthy boundaries” and “mutual respect” were disproportionately emphasised. When asked to analyse controversial history, “historical accuracy” came strongly to the fore. This demonstrates a level of contextual sophistication beyond what static, pre-deployment tests might reveal.

Furthermore, Claude’s interaction with user-expressed values proved multifaceted:

  • Mirroring/strong support (28.2%): Claude often reflects or strongly endorses the values presented by the user (e.g., mirroring “authenticity”). While potentially fostering empathy, the researchers caution it could sometimes verge on sycophancy.
  • Reframing (6.6%): In some cases, especially when providing psychological or interpersonal advice, Claude acknowledges the user’s values but introduces alternative perspectives.
  • Strong resistance (3.0%): Occasionally, Claude actively resists user values. This typically occurs when users request unethical content or express harmful viewpoints (like moral nihilism). Anthropic posits these moments of resistance might reveal Claude’s “deepest, most immovable values,” akin to a person taking a stand under pressure.

Limitations and future directions

Anthropic is candid about the method’s limitations. Defining and categorising “values” is inherently complex and potentially subjective. Using Claude itself to power the categorisation might introduce bias towards its own operational principles.

This method is designed for monitoring AI behaviour post-deployment, requiring substantial real-world data and cannot replace pre-deployment evaluations. However, this is also a strength, enabling the detection of issues – including sophisticated jailbreaks – that only manifest during live interactions.

The research concludes that understanding the values AI models express is fundamental to the goal of AI alignment.

“AI models will inevitably have to make value judgments,” the paper states. “If we want those judgments to be congruent with our own values […] then we need to have ways of testing which values a model expresses in the real world.”

This work provides a powerful, data-driven approach to achieving that understanding. Anthropic has also released an open dataset derived from the study, allowing other researchers to further explore AI values in practice. This transparency marks a vital step in collectively navigating the ethical landscape of sophisticated AI.

See also: Google introduces AI reasoning control in Gemini 2.5 Flash

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post How does AI judge? Anthropic studies the values of Claude appeared first on AI News.

]]>
https://www.artificialintelligence-news.com/news/how-does-ai-judge-anthropic-studies-values-of-claude/feed/ 0
Google introduces AI reasoning control in Gemini 2.5 Flash https://www.artificialintelligence-news.com/news/google-introduces-ai-reasoning-control-gemini-2-5-flash/ https://www.artificialintelligence-news.com/news/google-introduces-ai-reasoning-control-gemini-2-5-flash/#respond Wed, 23 Apr 2025 07:01:20 +0000 https://www.artificialintelligence-news.com/?p=105376 Google has introduced an AI reasoning control mechanism for its Gemini 2.5 Flash model that allows developers to limit how much processing power the system expends on problem-solving. Released on April 17, this “thinking budget” feature responds to a growing industry challenge: advanced AI models frequently overanalyse straightforward queries, consuming unnecessary computational resources and driving […]

The post Google introduces AI reasoning control in Gemini 2.5 Flash appeared first on AI News.

]]>
Google has introduced an AI reasoning control mechanism for its Gemini 2.5 Flash model that allows developers to limit how much processing power the system expends on problem-solving.

Released on April 17, this “thinking budget” feature responds to a growing industry challenge: advanced AI models frequently overanalyse straightforward queries, consuming unnecessary computational resources and driving up operational and environmental costs.

While not revolutionary, the development represents a practical step toward addressing efficiency concerns that have emerged as reasoning capabilities become standard in commercial AI software.

The new mechanism enables precise calibration of processing resources before generating responses, potentially changing how organisations manage financial and environmental impacts of AI deployment.

“The model overthinks,” acknowledges Tulsee Doshi, Director of Product Management at Gemini. “For simple prompts, the model does think more than it needs to.”

The admission reveals the challenge facing advanced reasoning models – the equivalent of using industrial machinery to crack a walnut.

The shift toward reasoning capabilities has created unintended consequences. Where traditional large language models primarily matched patterns from training data, newer iterations attempt to work through problems logically, step by step. While this approach yields better results for complex tasks, it introduces significant inefficiency when handling simpler queries.

Balancing cost and performance

The financial implications of unchecked AI reasoning are substantial. According to Google’s technical documentation, when full reasoning is activated, generating outputs becomes approximately six times more expensive than standard processing. The cost multiplier creates a powerful incentive for fine-tuned control.

Nathan Habib, an engineer at Hugging Face who studies reasoning models, describes the problem as endemic across the industry. “In the rush to show off smarter AI, companies are reaching for reasoning models like hammers even where there’s no nail in sight,” he explained to MIT Technology Review.

The waste isn’t merely theoretical. Habib demonstrated how a leading reasoning model, when attempting to solve an organic chemistry problem, became trapped in a recursive loop, repeating “Wait, but…” hundreds of times – essentially experiencing a computational breakdown and consuming processing resources.

Kate Olszewska, who evaluates Gemini models at DeepMind, confirmed Google’s systems sometimes experience similar issues, getting stuck in loops that drain computing power without improving response quality.

Granular control mechanism

Google’s AI reasoning control provides developers with a degree of precision. The system offers a flexible spectrum ranging from zero (minimal reasoning) to 24,576 tokens of “thinking budget” – the computational units representing the model’s internal processing. The granular approach allows for customised deployment based on specific use cases.

Jack Rae, principal research scientist at DeepMind, says that defining optimal reasoning levels remains challenging: “It’s really hard to draw a boundary on, like, what’s the perfect task right now for thinking.”

Shifting development philosophy

The introduction of AI reasoning control potentially signals a change in how artificial intelligence evolves. Since 2019, companies have pursued improvements by building larger models with more parameters and training data. Google’s approach suggests an alternative path focusing on efficiency rather than scale.

“Scaling laws are being replaced,” says Habib, indicating that future advances may emerge from optimising reasoning processes rather than continuously expanding model size.

The environmental implications are equally significant. As reasoning models proliferate, their energy consumption grows proportionally. Research indicates that inferencing – generating AI responses – now contributes more to the technology’s carbon footprint than the initial training process. Google’s reasoning control mechanism offers a potential mitigating factor for this concerning trend.

Competitive dynamics

Google isn’t operating in isolation. The “open weight” DeepSeek R1 model, which emerged earlier this year, demonstrated powerful reasoning capabilities at potentially lower costs, triggering market volatility that reportedly caused nearly a trillion-dollar stock market fluctuation.

Unlike Google’s proprietary approach, DeepSeek makes its internal settings publicly available for developers to implement locally.

Despite the competition, Google DeepMind’s chief technical officer Koray Kavukcuoglu maintains that proprietary models will maintain advantages in specialised domains requiring exceptional precision: “Coding, math, and finance are cases where there’s high expectation from the model to be very accurate, to be very precise, and to be able to understand really complex situations.”

Industry maturation signs

The development of AI reasoning control reflects an industry now confronting practical limitations beyond technical benchmarks. While companies continue to push reasoning capabilities forward, Google’s approach acknowledges a important reality: efficiency matters as much as raw performance in commercial applications.

The feature also highlights tensions between technological advancement and sustainability concerns. Leaderboards tracking reasoning model performance show that single tasks can cost upwards of $200 to complete – raising questions about scaling such capabilities in production environments.

By allowing developers to dial reasoning up or down based on actual need, Google addresses both financial and environmental aspects of AI deployment.

“Reasoning is the key capability that builds up intelligence,” states Kavukcuoglu. “The moment the model starts thinking, the agency of the model has started.” The statement reveals both the promise and the challenge of reasoning models – their autonomy creates both opportunities and resource management challenges.

For organisations deploying AI solutions, the ability to fine-tune reasoning budgets could democratise access to advanced capabilities while maintaining operational discipline.

Google claims Gemini 2.5 Flash delivers “comparable metrics to other leading models for a fraction of the cost and size” – a value proposition strengthened by the ability to optimise reasoning resources for specific applications.

Practical implications

The AI reasoning control feature has immediate practical applications. Developers building commercial applications can now make informed trade-offs between processing depth and operational costs.

For simple applications like basic customer queries, minimal reasoning settings preserve resources while still using the model’s capabilities. For complex analysis requiring deep understanding, the full reasoning capacity remains available.

Google’s reasoning ‘dial’ provides a mechanism for establishing cost certainty while maintaining performance standards.

See also: Gemini 2.5: Google cooks up its ‘most intelligent’ AI model to date

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post Google introduces AI reasoning control in Gemini 2.5 Flash appeared first on AI News.

]]>
https://www.artificialintelligence-news.com/news/google-introduces-ai-reasoning-control-gemini-2-5-flash/feed/ 0
Meta will train AI models using EU user data https://www.artificialintelligence-news.com/news/meta-will-train-ai-models-using-eu-user-data/ https://www.artificialintelligence-news.com/news/meta-will-train-ai-models-using-eu-user-data/#respond Tue, 15 Apr 2025 16:32:02 +0000 https://www.artificialintelligence-news.com/?p=105325 Meta has confirmed plans to utilise content shared by its adult users in the EU (European Union) to train its AI models. The announcement follows the recent launch of Meta AI features in Europe and aims to enhance the capabilities and cultural relevance of its AI systems for the region’s diverse population.    In a statement, […]

The post Meta will train AI models using EU user data appeared first on AI News.

]]>
Meta has confirmed plans to utilise content shared by its adult users in the EU (European Union) to train its AI models.

The announcement follows the recent launch of Meta AI features in Europe and aims to enhance the capabilities and cultural relevance of its AI systems for the region’s diverse population.   

In a statement, Meta wrote: “Today, we’re announcing our plans to train AI at Meta using public content – like public posts and comments – shared by adults on our products in the EU.

“People’s interactions with Meta AI – like questions and queries – will also be used to train and improve our models.”

Starting this week, users of Meta’s platforms (including Facebook, Instagram, WhatsApp, and Messenger) within the EU will receive notifications explaining the data usage. These notifications, delivered both in-app and via email, will detail the types of public data involved and link to an objection form.

“We have made this objection form easy to find, read, and use, and we’ll honor all objection forms we have already received, as well as newly submitted ones,” Meta explained.

Meta explicitly clarified that certain data types remain off-limits for AI training purposes.

The company says it will not “use people’s private messages with friends and family” to train its generative AI models. Furthermore, public data associated with accounts belonging to users under the age of 18 in the EU will not be included in the training datasets.

Meta wants to build AI tools designed for EU users

Meta positions this initiative as a necessary step towards creating AI tools designed for EU users. Meta launched its AI chatbot functionality across its messaging apps in Europe last month, framing this data usage as the next phase in improving the service.

“We believe we have a responsibility to build AI that’s not just available to Europeans, but is actually built for them,” the company explained. 

“That means everything from dialects and colloquialisms, to hyper-local knowledge and the distinct ways different countries use humor and sarcasm on our products.”

This becomes increasingly pertinent as AI models evolve with multi-modal capabilities spanning text, voice, video, and imagery.   

Meta also situated its actions in the EU within the broader industry landscape, pointing out that training AI on user data is common practice.

“It’s important to note that the kind of AI training we’re doing is not unique to Meta, nor will it be unique to Europe,” the statement reads. 

“We’re following the example set by others including Google and OpenAI, both of which have already used data from European users to train their AI models.”

Meta further claimed its approach surpasses others in openness, stating, “We’re proud that our approach is more transparent than many of our industry counterparts.”   

Regarding regulatory compliance, Meta referenced prior engagement with regulators, including a delay initiated last year while awaiting clarification on legal requirements. The company also cited a favourable opinion from the European Data Protection Board (EDPB) in December 2024.

“We welcome the opinion provided by the EDPB in December, which affirmed that our original approach met our legal obligations,” wrote Meta.

Broader concerns over AI training data

While Meta presents its approach in the EU as transparent and compliant, the practice of using vast swathes of public user data from social media platforms to train large language models (LLMs) and generative AI continues to raise significant concerns among privacy advocates.

Firstly, the definition of “public” data can be contentious. Content shared publicly on platforms like Facebook or Instagram may not have been posted with the expectation that it would become raw material for training commercial AI systems capable of generating entirely new content or insights. Users might share personal anecdotes, opinions, or creative works publicly within their perceived community, without envisaging its large-scale, automated analysis and repurposing by the platform owner.

Secondly, the effectiveness and fairness of an “opt-out” system versus an “opt-in” system remain debatable. Placing the onus on users to actively object, often after receiving notifications buried amongst countless others, raises questions about informed consent. Many users may not see, understand, or act upon the notification, potentially leading to their data being used by default rather than explicit permission.

Thirdly, the issue of inherent bias looms large. Social media platforms reflect and sometimes amplify societal biases, including racism, sexism, and misinformation. AI models trained on this data risk learning, replicating, and even scaling these biases. While companies employ filtering and fine-tuning techniques, eradicating bias absorbed from billions of data points is an immense challenge. An AI trained on European public data needs careful curation to avoid perpetuating stereotypes or harmful generalisations about the very cultures it aims to understand.   

Furthermore, questions surrounding copyright and intellectual property persist. Public posts often contain original text, images, and videos created by users. Using this content to train commercial AI models, which may then generate competing content or derive value from it, enters murky legal territory regarding ownership and fair compensation—issues currently being contested in courts worldwide involving various AI developers.

Finally, while Meta highlights its transparency relative to competitors, the actual mechanisms of data selection, filtering, and its specific impact on model behaviour often remain opaque. Truly meaningful transparency would involve deeper insights into how specific data influences AI outputs and the safeguards in place to prevent misuse or unintended consequences.

The approach taken by Meta in the EU underscores the immense value technology giants place on user-generated content as fuel for the burgeoning AI economy. As these practices become more widespread, the debate surrounding data privacy, informed consent, algorithmic bias, and the ethical responsibilities of AI developers will undoubtedly intensify across Europe and beyond.

(Photo by Julio Lopez)

See also: Apple AI stresses privacy with synthetic and anonymised data

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post Meta will train AI models using EU user data appeared first on AI News.

]]>
https://www.artificialintelligence-news.com/news/meta-will-train-ai-models-using-eu-user-data/feed/ 0
IEA: The opportunities and challenges of AI for global energy https://www.artificialintelligence-news.com/news/iea-opportunities-and-challenges-ai-for-global-energy/ https://www.artificialintelligence-news.com/news/iea-opportunities-and-challenges-ai-for-global-energy/#respond Thu, 10 Apr 2025 11:04:11 +0000 https://www.artificialintelligence-news.com/?p=105280 The International Energy Agency (IEA) has explored the opportunities and challenges brought about by AI with regards to global energy.   Training and deploying sophisticated AI models occur within vast, power-hungry data centres. A “typical AI-focused data centre consumes as much electricity as 100 000 households,” the IEA notes, with the largest facilities under construction projected […]

The post IEA: The opportunities and challenges of AI for global energy appeared first on AI News.

]]>
The International Energy Agency (IEA) has explored the opportunities and challenges brought about by AI with regards to global energy.  

Training and deploying sophisticated AI models occur within vast, power-hungry data centres. A “typical AI-focused data centre consumes as much electricity as 100 000 households,” the IEA notes, with the largest facilities under construction projected to demand 20x times that amount.

Surging data centre investments

Global investment in data centres has nearly doubled since 2022, reaching half a trillion dollars in 2024, sparking concerns about escalating electricity needs.

While data centres accounted for approximately 1.5% of global electricity consumption in 2024 (around 415 terawatt-hours, TWh,) their local impact is far more significant. Consumption has grown annually by about 12% since 2017, vastly outpacing overall electricity demand growth.

The US leads this consumption (45%), followed by China (25%) and Europe (15%). Almost half of US data centre capacity is concentrated in just five regional clusters.

Looking ahead, the IEA projects global data centre electricity consumption to more than double by 2030 to reach approximately 945 TWh. To put that in context, that’s slightly more than Japan’s current total electricity consumption.

AI is pinpointed as the “most important driver of this growth”. The US is projected to see the largest increase, where data centres could account for nearly half of all electricity demand growth by 2030. By the decade’s end, US data centres are forecast to consume more electricity than the combined usage of its aluminium, steel, cement, chemical, and other energy-intensive manufacturing industries.

The IEA’s “Base Case” extends this trajectory, anticipating around 1,200 TWh of global data centre electricity consumption by 2035. However, significant uncertainties exist, with projections for 2035 ranging from 700 TWh (“Headwinds Case”) to 1,700 TWh (“Lift-Off Case”) depending on AI uptake, efficiency gains, and energy sector bottlenecks.

Fatih Birol, Executive Director of the IEA, said: “AI is one of the biggest stories in the energy world today – but until now, policymakers and markets lacked the tools to fully understand the wide-ranging impacts.

“In the United States, data centres are on course to account for almost half of the growth in electricity demand; in Japan, more than half; and in Malaysia, as much as one-fifth.”

Meeting the global AI energy demand

Powering this AI boom requires a diverse energy portfolio. The IEA suggests renewables and natural gas will take the lead, but emerging technologies like small modular nuclear reactors (SMRs) and advanced geothermal also have a role.

Renewables, supported by storage and grid infrastructure, are projected to meet half the growth in data centre demand globally up to 2035. Natural gas is also crucial, particularly in the US, expanding by 175 TWh to meet data centre needs by 2035 in the Base Case. Nuclear power contributes similarly, especially in China, Japan, and the US, with the first SMRs expected around 2030.

However, simply increasing generation isn’t sufficient. The IEA stresses the critical need for infrastructure upgrades, particularly grid investment. Existing grids are already strained, potentially delaying around 20% of planned data centre projects globally due to complex connection queues and long lead times for essential components like transformers.

The potential of AI to optimise energy systems

Beyond its energy demands, AI offers significant potential to revolutionise the energy sector itself.

The IEA details numerous applications:

  • Energy supply: The oil and gas industry – an early adopter – uses AI to optimise exploration, production, maintenance, and safety, including reducing methane emissions. AI can also aid critical mineral exploration.
  • Electricity sector: AI can improve forecasting for variable renewables, reducing curtailment. It enhances grid balancing, fault detection (reducing outage durations by 30-50%), and can unlock significant transmission capacity through smarter management—potentially 175 GW without building new lines.
  • End uses: In industry, widespread AI adoption for process optimisation could yield energy savings equivalent to Mexico’s total energy consumption today. Transport applications like traffic management and route optimisation could save energy equivalent to 120 million cars, though rebound effects from autonomous vehicles need monitoring. Building optimisation potential is significant but hampered by slower digitalisation.
  • Innovation: AI can dramatically accelerate the discovery and testing of new energy technologies, such as advanced battery chemistries, catalysts for synthetic fuels, and carbon capture materials. However, the energy sector currently underutilises AI for innovation compared to fields like biomedicine.

Collaboration is key to navigating challenges

Despite the potential, significant barriers hinder AI’s full integration into the energy sector. These include data access and quality issues, inadequate digital infrastructure and skills (AI talent concentration is lower in energy sectors,) regulatory hurdles, and security concerns.

Cybersecurity is a double-edged sword: while AI enhances defence capabilities, it also equips attackers with sophisticated tools. Cyberattacks on utilities have tripled in the last four years.

Supply chain security is another critical concern, particularly regarding critical minerals like gallium (used in advanced chips,) where supply is highly concentrated.

The IEA concludes that deeper dialogue and collaboration between the technology sector, the energy industry, and policymakers are paramount. Addressing grid integration challenges requires smarter data centre siting, exploring operational flexibility, and streamlining permitting.

While AI presents opportunities for substantial emissions reductions through optimisation, exceeding the emissions generated by data centres, these gains are not guaranteed and could be offset by rebound effects.

“AI is a tool, potentially an incredibly powerful one, but it is up to us – our societies, governments, and companies – how we use it,” said Dr Birol.

“The IEA will continue to provide the data, analysis, and forums for dialogue to help policymakers and other stakeholders navigate the path ahead as the energy sector shapes the future of AI, and AI shapes the future of energy.”

(Photo by Javier Miranda)

See also: UK forms AI Energy Council to align growth and sustainability goals

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post IEA: The opportunities and challenges of AI for global energy appeared first on AI News.

]]>
https://www.artificialintelligence-news.com/news/iea-opportunities-and-challenges-ai-for-global-energy/feed/ 0
Nina Schick, author: Generative AI’s impact on business, politics and society https://www.artificialintelligence-news.com/news/nina-schick-author-generative-ais-impact-on-business-politics-and-society/ https://www.artificialintelligence-news.com/news/nina-schick-author-generative-ais-impact-on-business-politics-and-society/#respond Thu, 10 Apr 2025 05:46:00 +0000 https://www.artificialintelligence-news.com/?p=105109 Nina Schick is a leading speaker and expert on generative AI, renowned for her groundbreaking work at the intersection of technology, society and geopolitics. As one of the first authors to publish a book on generative AI, she has emerged as a sought-after speaker helping global leaders, businesses, and institutions understand and adapt to this […]

The post Nina Schick, author: Generative AI’s impact on business, politics and society appeared first on AI News.

]]>
Nina Schick is a leading speaker and expert on generative AI, renowned for her groundbreaking work at the intersection of technology, society and geopolitics.

As one of the first authors to publish a book on generative AI, she has emerged as a sought-after speaker helping global leaders, businesses, and institutions understand and adapt to this transformative moment.

We spoke to Nina to explore the future of AI-driven innovation, its ethical and political dimensions, and how organisations can lead in this rapidly evolving landscape.

In your view, how will generative AI redefine the foundational structures of business and economic productivity in the coming decade?

I believe generative AI is absolutely going to transform the entire economy as we know it. This moment feels quite similar to around 1993, when we were first being told to prepare for the Internet. Back then, some thirty years ago, we didn’t fully grasp, in our naivety, how profoundly the Internet would go on to reshape business and the broader global economy.

Now, we are witnessing something even more significant. You can think of generative AI as a kind of new combustion engine, but for all forms of human creative and intelligent activity. It’s a fundamental enabler. Every industry, every facet of productivity, will be impacted and ultimately transformed by generative AI. We’re already beginning to see those use cases emerge, and this is only the beginning.

As AI and data continue to evolve as forces shaping society, how do you see them redefining the political agenda and global power dynamics?

When you reflect on just how profound AI is in its capacity to reshape the entire framework of society, it becomes clear that this AI revolution is going to emerge as one of the most important political questions of our generation. Over the past 30 years, we’ve already seen how the information revolution — driven by the Internet, smartphones, and cloud computing — has become a defining geopolitical force.

Now, we’re layering the AI revolution on top of that, along with the data that fuels it, and the impact is nothing short of seismic. This will evolve into one of the most pressing and influential issues society must address over the coming decades. So, to answer the question directly — AI won’t just influence politics; it will, in many ways, become the very fabric of politics itself.

There’s been much discussion about the Metaverse and immersive tech — how do you see these experiences evolving, and what role do you believe AI will play in architecting this next frontier of digital interaction?

The Metaverse represents a vision for where the Internet may be heading — a future where digital experiences become far more immersive, intuitive, and experiential. It’s a concept that imagines how we might engage with digital content in a far more lifelike way.

But the really fascinating element here is that artificial intelligence is the key enabler — the actual vehicle — that will allow us to build and scale these kinds of immersive digital environments. So, even though the Metaverse remains largely an untested concept in terms of its final form, what is clear right now is that AI is going to be the engine that generates and populates the content that will live within these immersive spaces.

Considering the transformative power of AI and big data, what ethical imperatives must policymakers and society address to ensure equitable and responsible deployment?

The conversation around ethics, artificial intelligence, and big data is one that is set to become intensely political and highly consequential. It will likely remain a predominant issue for many years to come.

What we’re dealing with here is a technology so transformative that it has the potential to reshape the economy, redefine the labour market, and fundamentally alter the structure of society itself. That’s why the ethical questions — how to ensure this technology is applied in a fair, safe, and responsible manner — will be one of the defining political challenges of our time.

For business leaders navigating digital transformation, what mindset shifts are essential to meaningfully integrate AI into long-term strategy and operations?

For businesses aiming to digitally transform, especially in the era of artificial intelligence, it’s critical to first understand the conceptual paradigm shift we are currently undergoing. Once that foundational understanding is in place, it becomes much easier to explore and adopt AI technologies effectively.

If companies wish to remain competitive and gain a strategic edge, now is the time to start investigating how generative AI can be thoughtfully and effectively integrated into their business models. This includes identifying priority areas where AI can deliver long-term value — not just short-term.

If you put together a generative AI working group to look into this, your business will be transformed and able to compete with other businesses that are using AI to transform their processes.

As one of the earliest voices to articulate the societal implications of generative AI, what catalysed your foresight to explore this space before it entered the mainstream conversation?

My interest in AI didn’t come from a technical background. I’m not a techie. My experience has always been in analysing macro trends that shape society, geopolitics, and the wider world. That perspective is what led me to AI, as it quickly became clear that this technology would have far-reaching societal implications.

I began researching and writing about AI because I saw it as more than just a technological shift. Ultimately, this isn’t only a story about innovation. It’s a story about humanity. Generative AI, as an exponential technology built and directed by humans, is going to transform not just the way we work, but the way we live. It will even challenge our understanding of what it means to be human.

Photo by Heidi Fin on Unsplash

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation ConferenceBlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

The post Nina Schick, author: Generative AI’s impact on business, politics and society appeared first on AI News.

]]>
https://www.artificialintelligence-news.com/news/nina-schick-author-generative-ais-impact-on-business-politics-and-society/feed/ 0