ethics Archives - AI News
https://www.artificialintelligence-news.com/news/tag/ethics/

Google AMIE: AI doctor learns to ‘see’ medical images (2 May 2025)
https://www.artificialintelligence-news.com/news/google-amie-ai-doctor-learns-to-see-medical-images/

Google is giving its diagnostic AI the ability to understand visual medical information with its latest research on AMIE (Articulate Medical Intelligence Explorer).

Imagine chatting with an AI about a health concern, and instead of just processing your words, it could actually look at the photo of that worrying rash or make sense of your ECG printout. That’s what Google is aiming for.

We already knew AMIE showed promise in text-based medical chats, thanks to earlier work published in Nature. But let’s face it, real medicine isn’t just about words.

Doctors rely heavily on what they can see – skin conditions, readings from machines, lab reports. As the Google team rightly points out, even simple instant messaging platforms “allow static multimodal information (e.g., images and documents) to enrich discussions.”

Text-only AI was missing a huge piece of the puzzle. The big question, as the researchers put it, was “Whether LLMs can conduct diagnostic clinical conversations that incorporate this more complex type of information.”

Google teaches AMIE to look and reason

Google’s engineers have beefed up AMIE using their Gemini 2.0 Flash model as the brains of the operation. They’ve combined this with what they call a “state-aware reasoning framework.” In plain English, this means the AI doesn’t just follow a script; it adapts its conversation based on what it’s learned so far and what it still needs to figure out.

It’s close to how a human clinician works: gathering clues, forming ideas about what might be wrong, and then asking for more specific information – including visual evidence – to narrow things down.

“This enables AMIE to request relevant multimodal artifacts when needed, interpret their findings accurately, integrate this information seamlessly into the ongoing dialogue, and use it to refine diagnoses,” Google explains.

Think of the conversation flowing through stages: first gathering the patient’s history, then moving towards diagnosis and management suggestions, and finally follow-up. The AI constantly assesses its own understanding, asking for that skin photo or lab result if it senses a gap in its knowledge.
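To make that ‘state-aware’ idea a little more concrete, here is a rough, hypothetical sketch of such a loop in Python. It is not Google’s implementation – in AMIE the decision-making is driven by Gemini itself rather than hand-written rules – and every name below is invented for illustration: the system tracks which phase of the consultation it is in and requests an image or lab result whenever it spots a gap in what it knows.

    from dataclasses import dataclass, field

    @dataclass
    class DialogueState:
        phase: str = "history_taking"                      # then "diagnosis_management", then "follow_up"
        differential: list = field(default_factory=list)   # ranked list of candidate conditions
        open_gaps: list = field(default_factory=list)      # things the model still needs to know

    def next_action(state: DialogueState) -> dict:
        """Pick the next conversational move from the current state.

        Hypothetical stand-in: in AMIE this decision is made by the LLM itself,
        not by rules like these.
        """
        if state.open_gaps:
            gap = state.open_gaps[0]
            if gap.get("needs_artifact"):
                # e.g. ask the patient to upload a photo of the rash or an ECG trace
                return {"type": "request_artifact", "artifact": gap["artifact"]}
            return {"type": "ask_question", "question": gap["question"]}
        if state.phase == "history_taking":
            state.phase = "diagnosis_management"
            return {"type": "propose_differential", "candidates": state.differential}
        if state.phase == "diagnosis_management":
            state.phase = "follow_up"
            return {"type": "suggest_management_plan"}
        return {"type": "wrap_up"}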

To get this right without endless trial-and-error on real people, Google built a detailed simulation lab.

Google created lifelike patient cases, pulling realistic medical images and data from sources like the PTB-XL ECG database and the SCIN dermatology image set, adding plausible backstories using Gemini. Then, they let AMIE ‘chat’ with simulated patients within this setup and automatically check how well it performed on things like diagnostic accuracy and avoiding errors (or ‘hallucinations’).
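In outline, a harness like that might look something like the loop below. This is only a sketch under my own assumptions: the case objects, agent, and grader are hypothetical stand-ins for the Gemini-based components Google describes, not real APIs.

    def evaluate_on_simulated_patients(cases, agent, grader, k=3):
        """Run the dialogue agent against simulated cases and auto-score the results.

        Hypothetical sketch: each case pairs a generated backstory with real
        artifacts (e.g. a PTB-XL ECG or a SCIN skin image); `agent` and `grader`
        stand in for the dialogue model and the automatic evaluator.
        """
        results = []
        for case in cases:
            transcript = agent.run_consultation(case.simulated_patient)
            results.append({
                "case_id": case.case_id,
                # Was the true condition among the agent's top-k differential diagnoses?
                "top_k_accuracy": grader.diagnosis_in_top_k(transcript, case.ground_truth, k),
                # Did the agent report findings the artifacts don't actually support?
                "hallucinated_findings": grader.count_unsupported_findings(transcript, case.artifacts),
            })
        return results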

The virtual OSCE: Google puts AMIE through its paces

The real test came in a setup designed to mirror how medical students are assessed: the Objective Structured Clinical Examination (OSCE).

Google ran a remote study involving 105 different medical scenarios. Real actors, trained to portray patients consistently, interacted either with the new multimodal AMIE or with actual human primary care physicians (PCPs). These chats happened through an interface where the ‘patient’ could upload images, just like you might in a modern messaging app.

Afterwards, specialist doctors (in dermatology, cardiology, and internal medicine) and the patient actors themselves reviewed the conversations.

The human doctors scored everything from how well history was taken, the accuracy of the diagnosis, the quality of the suggested management plan, right down to communication skills and empathy—and, of course, how well the AI interpreted the visual information.

Surprising results from the simulated clinic

Here’s where it gets really interesting. In this head-to-head comparison within the controlled study environment, Google found AMIE didn’t just hold its own—it often came out ahead.

The AI was rated as being better than the human PCPs at interpreting the multimodal data shared during the chats. It also scored higher on diagnostic accuracy, producing differential diagnosis lists (the ranked list of possible conditions) that specialists deemed more accurate and complete based on the case details.

Specialist doctors reviewing the transcripts tended to rate AMIE’s performance higher across most areas. They particularly noted “the quality of image interpretation and reasoning,” the thoroughness of its diagnostic workup, the soundness of its management plans, and its ability to flag when a situation needed urgent attention.

Perhaps one of the most surprising findings came from the patient actors: they often found the AI to be more empathetic and trustworthy than the human doctors in these text-based interactions.

And, on a critical safety note, the study found no statistically significant difference between how often AMIE made errors based on the images (hallucinated findings) compared to the human physicians.

Technology never stands still, so Google also ran some early tests swapping out the Gemini 2.0 Flash model for the newer Gemini 2.5 Flash.

Using their simulation framework, the results hinted at further gains, particularly in getting the diagnosis right (Top-3 Accuracy) and suggesting appropriate management plans.

While promising, the team is quick to add a dose of realism: these are just automated results, and “rigorous assessment through expert physician review is essential to confirm these performance benefits.”

Important reality checks

Google is commendably upfront about the limitations here. “This study explores a research-only system in an OSCE-style evaluation using patient actors, which substantially under-represents the complexity… of real-world care,” they state clearly. 

Simulated scenarios, however well-designed, aren’t the same as dealing with the unique complexities of real patients in a busy clinic. They also stress that the chat interface doesn’t capture the richness of a real video or in-person consultation.

So, what’s the next step? Moving carefully towards the real world. Google is already partnering with Beth Israel Deaconess Medical Center for a research study to see how AMIE performs in actual clinical settings with patient consent.

The researchers also acknowledge the need to eventually move beyond text and static images towards handling real-time video and audio—the kind of interaction common in telehealth today.

Giving AI the ability to ‘see’ and interpret the kind of visual evidence doctors use every day offers a glimpse of how AI might one day assist clinicians and patients. However, the path from these promising findings to a safe and reliable tool for everyday healthcare is still a long one that requires careful navigation.

(Photo by Alexander Sinn)

See also: Are AI chatbots really changing the world of work?

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

Are AI chatbots really changing the world of work? (2 May 2025)
https://www.artificialintelligence-news.com/news/are-ai-chatbots-really-changing-the-world-of-work/

We’ve heard endless predictions about how AI chatbots will transform work, but data paints a much calmer picture—at least for now.

Despite huge and ongoing advancements in generative AI, the massive wave it was supposed to create in the world of work looks more like a ripple so far.

Researchers Anders Humlum (University of Chicago) and Emilie Vestergaard (University of Copenhagen) didn’t just rely on anecdotes. They dug deep, connecting responses from two big surveys (late 2023 and 2024) with official, detailed records about jobs and pay in Denmark.

The pair zoomed in on around 25,000 people working in 7,000 different places, covering 11 jobs thought to be right in the path of AI disruption.   

Everyone’s using AI chatbots for work, but where are the benefits?

What they found confirms what many of us see: AI chatbots are everywhere in Danish workplaces now. Most bosses are actually encouraging staff to use them, a real turnaround from the early days when companies were understandably nervous about things like data privacy.

Almost four out of ten employers have even rolled out their own in-house chatbots, and nearly a third of employees have had some formal training on these tools.   

When bosses gave the nod, the number of staff using chatbots practically doubled, jumping from 47% to 83%. It also helped level the playing field a bit. That gap between men and women using chatbots? It shrank noticeably when companies actively encouraged their use, especially when they threw in some training.

So, the tools are popular, companies are investing, people are getting trained… but the big economic shift? It seems to be missing in action.

Using statistical methods to compare people who used AI chatbots for work with those who didn’t, both before and after ChatGPT burst onto the scene, the researchers found… well, basically nothing.

“Precise zeros,” the researchers call their findings. No significant bump in pay, no change in recorded work hours, across all 11 job types they looked at. And they’re pretty confident about this – the numbers rule out any average effect bigger than just 1%.
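The design behind those ‘precise zeros’ is, in spirit, a difference-in-differences comparison: adopters versus non-adopters, before versus after chatbots arrived, with the interaction term as the estimated effect. Here is a toy version on synthetic data – my own illustration, not the study’s code or its Danish registry data:

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    n = 5000
    df = pd.DataFrame({
        "adopter": rng.integers(0, 2, n),   # 1 = uses AI chatbots at work
        "post": rng.integers(0, 2, n),      # 1 = observed after ChatGPT's launch
    })
    # Synthetic log wages with a true chatbot effect of zero
    df["log_wage"] = 10 + 0.05 * df["adopter"] + 0.02 * df["post"] + rng.normal(0, 0.1, n)

    # The coefficient on adopter:post is the difference-in-differences estimate
    fit = smf.ols("log_wage ~ adopter * post", data=df).fit()
    print(fit.params["adopter:post"], fit.conf_int().loc["adopter:post"])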

This wasn’t just a blip, either. The lack of impact held true even for the keen beans who jumped on board early, those using chatbots daily, or folks working where the boss was actively pushing the tech.

Looking at whole workplaces didn’t change the story; places with lots of chatbot users didn’t see different trends in hiring, overall wages, or keeping staff compared to places using them less.

Productivity gains: More of a gentle nudge than a shove

Why the big disconnect? Why all the hype and investment if it’s not showing up in paychecks or job stats? The study flags two main culprits: the productivity boosts aren’t as huge as hoped in the real world, and what little gains there are aren’t really making their way into wages.

Sure, people using AI chatbots for work felt they were helpful. They mentioned better work quality and feeling more creative. But the number one benefit? Saving time.

However, when the researchers crunched the numbers, the average time saved was only about 2.8% of a user’s total work hours. That’s miles away from the huge 15%, 30%, even 50% productivity jumps seen in controlled lab-style experiments (RCTs) involving similar jobs.

Why the difference? A few things seem to be going on. Those experiments often focus on jobs or specific tasks where chatbots really shine (like coding help or basic customer service responses). This study looked at a wider range, including jobs like teaching where the benefits might be smaller.

The researchers stress the importance of what they call “complementary investments”. People whose companies encouraged chatbot use and provided training actually did report bigger benefits – saving more time, improving quality, and feeling more creative. This suggests that just having the tool isn’t enough; you need the right support and company environment to really unlock its potential.

And even those modest time savings weren’t padding wallets. The study reckons only a tiny fraction – maybe 3% to 7% – of the time saved actually showed up as higher earnings. It might be down to standard workplace inertia, or maybe it’s just harder to ask for a raise based on using a tool your boss hasn’t officially blessed, especially when many people started using them off their own bat.
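A quick back-of-the-envelope calculation – using the paper’s headline figures, though the arithmetic is mine – shows why those two numbers combine into an effect too small to see in wage data:

    time_saved_share = 0.028            # ~2.8% of work hours saved, on average
    pass_through = (0.03, 0.07)         # share of saved time that shows up as higher pay

    implied_wage_gain = [time_saved_share * p for p in pass_through]
    print([f"{g:.2%}" for g in implied_wage_gain])   # roughly 0.08% to 0.20%

An implied wage bump of roughly 0.1% to 0.2% sits comfortably inside the 1% bound the researchers say their data can rule out.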

Making new work, not less work

One fascinating twist is that AI chatbots aren’t just about doing old work tasks faster. They seem to be creating new tasks too. Around 17% of people using them said they had new workloads, mostly brand new types of tasks.

This phenomenon happened more often in workplaces that encouraged chatbot use. It even spilled over to people not using the tools – about 5% of non-users reported new tasks popping up because of AI, especially teachers having to adapt assignments or spot AI-written homework.   

What kind of new tasks? Things like figuring out how to weave AI into daily workflows, drafting content with AI help, and importantly, dealing with the ethical side and making sure everything’s above board. It hints that companies are still very much in the ‘figuring it out’ phase, spending time and effort adapting rather than just reaping instant rewards.

What’s the verdict on the work impact of AI chatbots?

The researchers are careful not to write off generative AI completely. They see pathways for it to become more influential over time, especially as companies get better at integrating it and maybe as those “new tasks” evolve.

But for now, their message is clear: the current reality doesn’t match the hype about a massive, immediate job market overhaul.

“Despite rapid adoption and substantial investments… our key finding is that AI chatbots have had minimal impact on productivity and labor market outcomes to date,” the researchers conclude.   

It brings to mind that old quote about the early computer age: seen everywhere, except in the productivity stats. Two years on from ChatGPT’s launch kicking off the fastest tech adoption we’ve ever seen, its actual mark on jobs and pay looks surprisingly light.

The revolution might still be coming, but it seems to be taking its time.   

See also: Claude Integrations: Anthropic adds AI to your favourite work tools

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

AI in education: Balancing promises and pitfalls (28 April 2025)
https://www.artificialintelligence-news.com/news/ai-in-education-balancing-promises-and-pitfalls/

The role of AI in education is a controversial subject, bringing both exciting possibilities and serious challenges.

There’s a real push to bring AI into schools, and you can see why. The recent executive order on youth education from President Trump recognised that if future generations are going to do well in an increasingly automated world, they need to be ready.

“To ensure the United States remains a global leader in this technological revolution, we must provide our nation’s youth with opportunities to cultivate the skills and understanding necessary to use and create the next generation of AI technology,” President Trump declared.

So, what does AI actually look like in the classroom?

One of the biggest hopes for AI in education is making learning more personal. Imagine software that can figure out how individual students are doing, then adjust the pace and materials just for them. This could mean finally moving away from the old one-size-fits-all approach towards learning environments that adapt and offer help exactly where it’s needed.

The US executive order hints at this, wanting to improve results through things like “AI-based high-quality instructional resources” and “high-impact tutoring.”

And what about teachers? AI could be a huge help here too, potentially taking over tedious admin tasks like grading, freeing them up to actually teach. Plus, AI software might offer fresh ways to present information.

Getting kids familiar with AI early on could also take away some of the mystery around the technology. It might spark their “curiosity and creativity” and give them the foundation they need to become “active and responsible participants in the workforce of the future.”

The focus stretches to lifelong learning and getting people ready for the job market. On top of that, AI tools like text-to-speech or translation features can make learning much more accessible for students with disabilities, opening up educational environments for everyone.

Not all smooth sailing: The challenges ahead for AI in education

While the potential is huge, we need to be realistic about the significant hurdles and potential downsides.

First off, AI runs on student data – lots of it. That means we absolutely need strong rules and security to make sure this data is collected ethically, used correctly, and kept safe from breaches. Privacy is paramount here.

Then there’s the bias problem. If the data used to train AI reflects existing unfairness in society (and let’s be honest, it often does), the AI could end up repeating or even worsening those inequalities. Think biased assessments or unfair resource allocation. Careful testing and constant checks are crucial to catch and fix this.

We also can’t ignore the digital divide. If some students don’t have reliable internet, the right devices, or the necessary tech infrastructure at home or school, AI could widen the gap between the haves and have-nots. It’s vital that everyone gets fair access.

There’s also a risk that leaning too heavily on AI education tools might stop students from developing essential skills like critical thinking. We need to teach them how to use AI as a helpful tool, not a crutch they can’t function without.

Maybe the biggest piece of the puzzle, though, is making sure our teachers are ready. As the executive order rightly points out, “We must also invest in our educators and equip them with the tools and knowledge.”

This isn’t just about knowing which buttons to push; teachers need to understand how AI fits into teaching effectively and ethically. That requires solid professional development and ongoing support.

A recent GMB Union poll found that while about a fifth of UK schools are now using AI, staff often aren’t getting the training they need.


Finding the right path forward

It’s going to take everyone – governments, schools, tech companies, and teachers – pulling together to ensure that AI plays a positive role in education.

We absolutely need clear policies and standards covering ethics, privacy, bias, and making sure AI is accessible to all students. We also need to keep investing in research to figure out the best ways to use AI in education and to build tools that are fair and effective.

And critically, we need a long-term commitment to teacher education to get educators comfortable and skilled with these changes. Part of this is building broad AI literacy, making sure all students get a basic understanding of this technology and how it impacts society.

AI could be a positive force in education – making it more personalised, efficient, and focused on the skills students actually need. But turning that potential into reality means carefully navigating those tricky ethical, practical, and teaching challenges head-on.

See also: How does AI judge? Anthropic studies the values of Claude

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

Coalition opposes OpenAI shift from nonprofit roots (24 April 2025)
https://www.artificialintelligence-news.com/news/coalition-opposes-openai-shift-from-nonprofit-roots/

A coalition of experts, including former OpenAI employees, has voiced strong opposition to the company’s shift away from its nonprofit roots.

In an open letter addressed to the Attorneys General of California and Delaware, the group – which also includes legal experts, corporate governance specialists, AI researchers, and nonprofit representatives – argues that the proposed changes fundamentally threaten OpenAI’s original charitable mission.   

OpenAI was founded with a unique structure. Its core purpose, enshrined in its Articles of Incorporation, is “to ensure that artificial general intelligence benefits all of humanity” rather than serving “the private gain of any person.”

The letter’s signatories contend that the planned restructuring – transforming the current for-profit subsidiary (OpenAI-profit) controlled by the original nonprofit entity (OpenAI-nonprofit) into a Delaware public benefit corporation (PBC) – would dismantle crucial governance safeguards.

This shift, the signatories argue, would transfer ultimate control over the development and deployment of potentially transformative Artificial General Intelligence (AGI) from a charity focused on humanity’s benefit to a for-profit enterprise accountable to shareholders.

Original vision of OpenAI: Nonprofit control as a bulwark

OpenAI defines AGI as “highly autonomous systems that outperform humans at most economically valuable work”. While acknowledging AGI’s potential to “elevate humanity,” OpenAI’s leadership has also warned of “serious risk of misuse, drastic accidents, and societal disruption.”

Co-founder Sam Altman and others have even signed statements equating mitigating AGI extinction risks with preventing pandemics and nuclear war.   

The company’s founders – including Altman, Elon Musk, and Greg Brockman – were initially concerned about AGI being developed by purely commercial entities like Google. They established OpenAI as a nonprofit specifically “unconstrained by a need to generate financial return”. As Altman stated in 2017, “The only people we want to be accountable to is humanity as a whole.”

Even when OpenAI introduced a “capped-profit” subsidiary in 2019 to attract necessary investment, it emphasised that the nonprofit parent would retain control and that the mission remained paramount. Key safeguards included:   

  • Nonprofit control: The for-profit subsidiary was explicitly “controlled by OpenAI Nonprofit’s board”.   
  • Capped profits: Investor returns were capped, with excess value flowing back to the nonprofit for humanity’s benefit.   
  • Independent board: A majority of nonprofit board members were required to be independent, holding no financial stake in the subsidiary.   
  • Fiduciary duty: The board’s legal duty was solely to the nonprofit’s mission, not to maximising investor profit.   
  • AGI ownership: AGI technologies were explicitly reserved for the nonprofit to govern.

Altman himself testified to Congress in 2023 that this “unusual structure” “ensures it remains focused on [its] long-term mission.”

A threat to the mission?

The critics argue the move to a PBC structure would jeopardise these safeguards:   

  • Subordination of mission: A PBC board – while able to consider public benefit – would also have duties to shareholders, potentially balancing profit against the mission rather than prioritising the mission above all else.   
  • Loss of enforceable duty: The current structure gives Attorneys General the power to enforce the nonprofit’s duty to the public. Under a PBC, this direct public accountability – enforceable by regulators – would likely vanish, leaving shareholder derivative suits as the primary enforcement mechanism.   
  • Uncapped profits?: Reports suggest the profit cap might be removed, potentially reallocating vast future wealth from the public benefit mission to private shareholders.   
  • Board independence uncertain: Commitments to a majority-independent board overseeing AI development could disappear.   
  • AGI control shifts: Ownership and control of AGI would likely default to the PBC and its investors, not the mission-focused nonprofit. Reports even suggest OpenAI and Microsoft have discussed removing contractual restrictions on Microsoft’s access to future AGI.   
  • Charter commitments at risk: Commitments like the “stop-and-assist” clause (pausing competition to help a safer, aligned AGI project) might not be honoured by a profit-driven entity.  

OpenAI has publicly cited competitive pressures (i.e. attracting investment and talent against rivals with conventional equity structures) as reasons for the change.

However, the letter counters that competitive advantage isn’t the charitable purpose of OpenAI and that its unique nonprofit structure was designed to impose certain competitive costs in favour of safety and public benefit. 

“Obtaining a competitive advantage by abandoning the very governance safeguards designed to ensure OpenAI remains true to its mission is unlikely to, on balance, advance the mission,” the letter states.   

The authors also question why abandoning nonprofit control would be necessary merely to simplify the capital structure, suggesting the real sticking point is that investor interests are currently subordinated to the mission. They argue that while the nonprofit board can already consider investor interests where doing so serves the mission, the restructuring appears designed to let those interests prevail at the mission’s expense.

Many of these arguments have also been pushed by Elon Musk in his legal action against OpenAI. Earlier this month, OpenAI counter-sued Musk for allegedly orchestrating a “relentless” and “malicious” campaign designed to “take down OpenAI” after he left the company years ago and started rival AI firm xAI.

Call for intervention

The signatories of the open letter urge intervention, demanding answers from OpenAI about how the restructuring away from a nonprofit serves its mission and why safeguards previously deemed essential are now obstacles.

Furthermore, the signatories request a halt to the restructuring, preservation of nonprofit control and other safeguards, and measures to ensure the board’s independence and ability to oversee management effectively in line with the charitable purpose.

“The proposed restructuring would eliminate essential safeguards, effectively handing control of, and profits from, what could be the most powerful technology ever created to a for-profit entity with legal duties to prioritise shareholder returns,” the signatories conclude.

See also: How does AI judge? Anthropic studies the values of Claude

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

How does AI judge? Anthropic studies the values of Claude (23 April 2025)
https://www.artificialintelligence-news.com/news/how-does-ai-judge-anthropic-studies-values-of-claude/

AI models like Anthropic Claude are increasingly asked not just for factual recall, but for guidance involving complex human values. Whether it’s parenting advice, workplace conflict resolution, or help drafting an apology, the AI’s response inherently reflects a set of underlying principles. But how can we truly understand which values an AI expresses when interacting with millions of users?

In a research paper, the Societal Impacts team at Anthropic details a privacy-preserving methodology designed to observe and categorise the values Claude exhibits “in the wild.” This offers a glimpse into how AI alignment efforts translate into real-world behaviour.

The core challenge lies in the nature of modern AI. These aren’t simple programs following rigid rules; their decision-making processes are often opaque.

Anthropic says it explicitly aims to instil certain principles in Claude, striving to make it “helpful, honest, and harmless.” This is achieved through techniques like Constitutional AI and character training, where preferred behaviours are defined and reinforced.

However, the company acknowledges the uncertainty. “As with any aspect of AI training, we can’t be certain that the model will stick to our preferred values,” the research states.

“What we need is a way of rigorously observing the values of an AI model as it responds to users ‘in the wild’ […] How rigidly does it stick to the values? How much are the values it expresses influenced by the particular context of the conversation? Did all our training actually work?”

Analysing Anthropic Claude to observe AI values at scale

To answer these questions, Anthropic developed a sophisticated system that analyses anonymised user conversations. This system removes personally identifiable information before using language models to summarise interactions and extract the values being expressed by Claude. The process allows researchers to build a high-level taxonomy of these values without compromising user privacy.

The study analysed a substantial dataset: 700,000 anonymised conversations from Claude.ai Free and Pro users over one week in February 2025, predominantly involving the Claude 3.5 Sonnet model. After filtering out purely factual or non-value-laden exchanges, 308,210 conversations (approximately 44% of the total) remained for in-depth value analysis.
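In outline, that pipeline could be sketched as below. The function names are hypothetical placeholders rather than Anthropic’s actual tooling; the point is the order of operations the paper describes: strip identifying details first, filter out non-value-laden chats, then summarise and extract values before anything is aggregated.

    from collections import Counter

    def analyse_values(conversations, redact_pii, is_value_laden, summarise, extract_values):
        """Privacy-preserving aggregation sketch: researchers see summaries and
        value labels, never raw user transcripts. Each callable stands in for a
        component described in the paper."""
        value_counts = Counter()
        kept = 0
        for convo in conversations:
            clean = redact_pii(convo)              # remove personally identifiable information
            if not is_value_laden(clean):          # drop purely factual exchanges (~56% in the study)
                continue
            kept += 1
            summary = summarise(clean)
            for value in extract_values(summary):  # e.g. "clarity", "professionalism", "transparency"
                value_counts[value] += 1
        return kept, value_counts.most_common()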

The analysis revealed a hierarchical structure of values expressed by Claude. Five high-level categories emerged, ordered by prevalence:

  1. Practical values: Emphasising efficiency, usefulness, and goal achievement.
  2. Epistemic values: Relating to knowledge, truth, accuracy, and intellectual honesty.
  3. Social values: Concerning interpersonal interactions, community, fairness, and collaboration.
  4. Protective values: Focusing on safety, security, well-being, and harm avoidance.
  5. Personal values: Centred on individual growth, autonomy, authenticity, and self-reflection.

These top-level categories branched into more specific subcategories like “professional and technical excellence” or “critical thinking.” At the most granular level, frequently observed values included “professionalism,” “clarity,” and “transparency” – fitting for an AI assistant.

Critically, the research suggests Anthropic’s alignment efforts are broadly successful. The expressed values often map well onto the “helpful, honest, and harmless” objectives. For instance, “user enablement” aligns with helpfulness, “epistemic humility” with honesty, and values like “patient wellbeing” (when relevant) with harmlessness.

Nuance, context, and cautionary signs

However, the picture isn’t uniformly positive. The analysis identified rare instances where Claude expressed values starkly opposed to its training, such as “dominance” and “amorality.”

Anthropic suggests a likely cause: “The most likely explanation is that the conversations that were included in these clusters were from jailbreaks, where users have used special techniques to bypass the usual guardrails that govern the model’s behavior.”

Far from being solely a concern, this finding highlights a potential benefit: the value-observation method could serve as an early warning system for detecting attempts to misuse the AI.

The study also confirmed that, much like humans, Claude adapts its value expression based on the situation.

When users sought advice on romantic relationships, values like “healthy boundaries” and “mutual respect” were disproportionately emphasised. When asked to analyse controversial history, “historical accuracy” came strongly to the fore. This demonstrates a level of contextual sophistication beyond what static, pre-deployment tests might reveal.

Furthermore, Claude’s interaction with user-expressed values proved multifaceted:

  • Mirroring/strong support (28.2%): Claude often reflects or strongly endorses the values presented by the user (e.g., mirroring “authenticity”). While potentially fostering empathy, the researchers caution it could sometimes verge on sycophancy.
  • Reframing (6.6%): In some cases, especially when providing psychological or interpersonal advice, Claude acknowledges the user’s values but introduces alternative perspectives.
  • Strong resistance (3.0%): Occasionally, Claude actively resists user values. This typically occurs when users request unethical content or express harmful viewpoints (like moral nihilism). Anthropic posits these moments of resistance might reveal Claude’s “deepest, most immovable values,” akin to a person taking a stand under pressure.

Limitations and future directions

Anthropic is candid about the method’s limitations. Defining and categorising “values” is inherently complex and potentially subjective. Using Claude itself to power the categorisation might introduce bias towards its own operational principles.

This method is designed for monitoring AI behaviour post-deployment, requiring substantial real-world data and cannot replace pre-deployment evaluations. However, this is also a strength, enabling the detection of issues – including sophisticated jailbreaks – that only manifest during live interactions.

The research concludes that understanding the values AI models express is fundamental to the goal of AI alignment.

“AI models will inevitably have to make value judgments,” the paper states. “If we want those judgments to be congruent with our own values […] then we need to have ways of testing which values a model expresses in the real world.”

This work provides a powerful, data-driven approach to achieving that understanding. Anthropic has also released an open dataset derived from the study, allowing other researchers to further explore AI values in practice. This transparency marks a vital step in collectively navigating the ethical landscape of sophisticated AI.

See also: Google introduces AI reasoning control in Gemini 2.5 Flash

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

Meta will train AI models using EU user data (15 April 2025)
https://www.artificialintelligence-news.com/news/meta-will-train-ai-models-using-eu-user-data/

Meta has confirmed plans to utilise content shared by its adult users in the EU (European Union) to train its AI models.

The announcement follows the recent launch of Meta AI features in Europe and aims to enhance the capabilities and cultural relevance of its AI systems for the region’s diverse population.   

In a statement, Meta wrote: “Today, we’re announcing our plans to train AI at Meta using public content – like public posts and comments – shared by adults on our products in the EU.

“People’s interactions with Meta AI – like questions and queries – will also be used to train and improve our models.”

Starting this week, users of Meta’s platforms (including Facebook, Instagram, WhatsApp, and Messenger) within the EU will receive notifications explaining the data usage. These notifications, delivered both in-app and via email, will detail the types of public data involved and link to an objection form.

“We have made this objection form easy to find, read, and use, and we’ll honor all objection forms we have already received, as well as newly submitted ones,” Meta explained.

Meta explicitly clarified that certain data types remain off-limits for AI training purposes.

The company says it will not “use people’s private messages with friends and family” to train its generative AI models. Furthermore, public data associated with accounts belonging to users under the age of 18 in the EU will not be included in the training datasets.

Meta wants to build AI tools designed for EU users

Meta positions this initiative as a necessary step towards creating AI tools designed for EU users. Meta launched its AI chatbot functionality across its messaging apps in Europe last month, framing this data usage as the next phase in improving the service.

“We believe we have a responsibility to build AI that’s not just available to Europeans, but is actually built for them,” the company explained. 

“That means everything from dialects and colloquialisms, to hyper-local knowledge and the distinct ways different countries use humor and sarcasm on our products.”

This becomes increasingly pertinent as AI models evolve with multi-modal capabilities spanning text, voice, video, and imagery.   

Meta also situated its actions in the EU within the broader industry landscape, pointing out that training AI on user data is common practice.

“It’s important to note that the kind of AI training we’re doing is not unique to Meta, nor will it be unique to Europe,” the statement reads. 

“We’re following the example set by others including Google and OpenAI, both of which have already used data from European users to train their AI models.”

Meta further claimed its approach surpasses others in openness, stating, “We’re proud that our approach is more transparent than many of our industry counterparts.”   

Regarding regulatory compliance, Meta referenced prior engagement with regulators, including a delay initiated last year while awaiting clarification on legal requirements. The company also cited a favourable opinion from the European Data Protection Board (EDPB) in December 2024.

“We welcome the opinion provided by the EDPB in December, which affirmed that our original approach met our legal obligations,” wrote Meta.

Broader concerns over AI training data

While Meta presents its approach in the EU as transparent and compliant, the practice of using vast swathes of public user data from social media platforms to train large language models (LLMs) and generative AI continues to raise significant concerns among privacy advocates.

Firstly, the definition of “public” data can be contentious. Content shared publicly on platforms like Facebook or Instagram may not have been posted with the expectation that it would become raw material for training commercial AI systems capable of generating entirely new content or insights. Users might share personal anecdotes, opinions, or creative works publicly within their perceived community, without envisaging its large-scale, automated analysis and repurposing by the platform owner.

Secondly, the effectiveness and fairness of an “opt-out” system versus an “opt-in” system remain debatable. Placing the onus on users to actively object, often after receiving notifications buried amongst countless others, raises questions about informed consent. Many users may not see, understand, or act upon the notification, potentially leading to their data being used by default rather than explicit permission.

Thirdly, the issue of inherent bias looms large. Social media platforms reflect and sometimes amplify societal biases, including racism, sexism, and misinformation. AI models trained on this data risk learning, replicating, and even scaling these biases. While companies employ filtering and fine-tuning techniques, eradicating bias absorbed from billions of data points is an immense challenge. An AI trained on European public data needs careful curation to avoid perpetuating stereotypes or harmful generalisations about the very cultures it aims to understand.   

Furthermore, questions surrounding copyright and intellectual property persist. Public posts often contain original text, images, and videos created by users. Using this content to train commercial AI models, which may then generate competing content or derive value from it, enters murky legal territory regarding ownership and fair compensation—issues currently being contested in courts worldwide involving various AI developers.

Finally, while Meta highlights its transparency relative to competitors, the actual mechanisms of data selection, filtering, and its specific impact on model behaviour often remain opaque. Truly meaningful transparency would involve deeper insights into how specific data influences AI outputs and the safeguards in place to prevent misuse or unintended consequences.

The approach taken by Meta in the EU underscores the immense value technology giants place on user-generated content as fuel for the burgeoning AI economy. As these practices become more widespread, the debate surrounding data privacy, informed consent, algorithmic bias, and the ethical responsibilities of AI developers will undoubtedly intensify across Europe and beyond.

(Photo by Julio Lopez)

See also: Apple AI stresses privacy with synthetic and anonymised data

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

UK forms AI Energy Council to align growth and sustainability goals (8 April 2025)
https://www.artificialintelligence-news.com/news/uk-forms-ai-energy-council-align-growth-sustainability-goals/

The UK government has announced the first meeting of a new AI Energy Council aimed at ensuring the nation’s AI and clean energy goals work in tandem to drive economic growth.

The inaugural meeting of the council will see members agree on its core objectives, with a central focus on how the government’s mission to become a clean energy superpower can support its commitment to advancing AI and compute infrastructure.

Unveiled earlier this year as part of the government’s response to the AI Opportunities Action Plan, the council will serve as a crucial platform for bringing together expert insights on the significant energy demands associated with the AI sector.

Concerns surrounding the substantial energy requirements of AI data centres are a global challenge. The UK is proactively addressing this issue through initiatives like the establishment of new AI Growth Zones.

These zones are dedicated hubs for AI development that are strategically located in areas with access to at least 500MW of power—an amount equivalent to powering approximately two million homes. This approach is designed to attract private investment from companies looking to establish operations in Britain, ultimately generating local jobs and boosting the economy.

Peter Kyle, Secretary of State for Science, Innovation, and Technology, said: “The work of the AI Energy Council will ensure we aren’t just powering our AI needs to deliver new waves of opportunity in all parts of the country, but can do so in a way which is responsible and sustainable.

“This requires a broad range of expertise from industry and regulators as we fire up the UK’s economic engine to make it fit for the age of AI—meaning we can deliver the growth which is the beating heart of our Plan for Change.”

The Council is also expected to delve into the role of clean energy sources, including renewables and nuclear, in powering the AI revolution.

A key aspect of its work will involve advising on how to improve energy efficiency and sustainability within AI and data centre infrastructure, with specific considerations for resource usage such as water. Furthermore, the council will take proactive steps to ensure the secure adoption of AI across the UK’s critical energy network itself.

Ed Miliband, Secretary of State for Energy Security and Net Zero, commented: “We are making the UK a clean energy superpower, building the homegrown energy this country needs to protect consumers and businesses, and drive economic growth, as part of our Plan for Change.

“AI can play an important role in building a new era of clean electricity for our country and as we unlock AI’s potential, this Council will help secure a sustainable scale up to benefit businesses and communities across the UK.”

In a parallel effort to facilitate the growth of the AI sector, the UK government has been working closely with energy regulator Ofgem and the National Energy System Operator (NESO) to implement fundamental reforms to the UK’s connections process.

Subject to final sign-offs from Ofgem, these reforms could potentially unlock more than 400GW of capacity from the connection queue. This acceleration of projects is deemed vital for economic growth, particularly for the delivery of new large-scale AI data centres that require significant power infrastructure.

The newly-formed AI Energy Council comprises representatives from 14 key organisations across the energy and technology sectors, including regulators and leading companies. These members will contribute their expert insights to support the council’s work and ensure a collaborative approach to addressing the energy challenges and opportunities presented by AI.

Among the prominent organisations joining the council are EDF, Scottish Power, National Grid, technology giants Google, Microsoft, Amazon Web Services (AWS), and chip designer ARM, as well as infrastructure investment firm Brookfield.

This collaborative framework, uniting the energy and technology sectors, aims to ensure seamless coordination in speeding up the connection of energy projects to the national grid. This is particularly crucial given the increasing number of technology companies announcing plans to build data centres across the UK.

Alison Kay, VP for UK and Ireland at AWS, said: “At Amazon, we’re working to meet the future energy needs of our customers, while remaining committed to powering our operations in a more sustainable way, and progressing toward our Climate Pledge commitment to become net-zero carbon by 2040.

“As the world’s largest corporate purchaser of renewable energy for the fifth year in a row, we share the government’s goal to ensure the UK has sufficient access to carbon-free energy to support its AI ambitions and to help drive economic growth.”

Jonathan Brearley, CEO of Ofgem, added: “AI will play an increasingly important role in transforming our energy system to be cleaner, more efficient, and more cost-effective for consumers, but only if used in a fair, secure, sustainable, and safe way.

“Working alongside other members of this Council, Ofgem will ensure AI implementation puts consumer interests first – from customer service to infrastructure planning and operation – so that everyone feels the benefits of this technological innovation in energy.”

This initiative aligns with the government’s Clean Power Action Plan, which focuses on connecting more homegrown clean power to the grid by building essential infrastructure and prioritising projects needed for 2030. The aim is to clear the grid connection queue, enabling crucial infrastructure projects – from housing to gigafactories and data centres – to gain access to the grid, thereby unlocking billions in investment and fostering economic growth.

Furthermore, the government is streamlining planning approvals to significantly reduce the time it takes for infrastructure projects to get off the ground. This accelerated process will ensure that AI innovators can readily access cutting-edge infrastructure and the necessary power to drive forward the next wave of AI advancements.

(Photo by Vlad Hilitanu)

See also: Tony Blair Institute AI copyright report sparks backlash

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

Tony Blair Institute AI copyright report sparks backlash (2 April 2025)
https://www.artificialintelligence-news.com/news/tony-blair-institute-ai-copyright-report-sparks-backlash/

The Tony Blair Institute (TBI) has released a report calling for the UK to lead in navigating the complex intersection of arts and AI.

According to the report, titled ‘Rebooting Copyright: How the UK Can Be a Global Leader in the Arts and AI,’ the global race for cultural and technological leadership is still up for grabs, and the UK has a golden opportunity to take the lead.

The report emphasises that countries that “embrace change and harness the power of artificial intelligence in creative ways will set the technical, aesthetic, and regulatory standards for others to follow.”

Highlighting that we are in the midst of another revolution in media and communication, the report notes that AI is disrupting how textual, visual, and auditive content is created, distributed, and experienced, much like the printing press, gramophone, and camera did before it.

“AI will usher in a new era of interactive and bespoke works, as well as a counter-revolution that celebrates everything that AI can never be,” the report states.

However, far from signalling the end of human creativity, the TBI suggests AI will open up “new ways of being original.”

The AI revolution’s impact isn’t limited to the creative industries; it’s being felt across all areas of society. Scientists are using AI to accelerate discoveries, healthcare providers are employing it to analyse X-ray images, and emergency services utilise it to locate houses damaged by earthquakes.

The report stresses that these cross-industry advancements are just the beginning, with future AI systems set to become increasingly capable, fuelled by advancements in computing power, data, model architectures, and access to talent.

The UK government has expressed its ambition to be a global leader in AI through its AI Opportunities Action Plan, announced by Prime Minister Keir Starmer on 13 January 2025. For its part, the TBI welcomes the UK government’s ambition, stating that “if properly designed and deployed, AI can make human lives healthier, safer, and more prosperous.”

However, the rapid spread of AI across sectors raises urgent policy questions, particularly concerning the data used for AI training. The application of UK copyright law to the training of AI models is currently contested, with the debate often framed as a “zero-sum game” between AI developers and rights holders. The TBI argues that this framing “misrepresents the nature of the challenge and the opportunity before us.”

The report emphasises that “bold policy solutions are needed to provide all parties with legal clarity and unlock investments that spur innovation, job creation, and economic growth.”

According to the TBI, AI presents opportunities for creators—noting its use in various fields from podcasts to filmmaking. The report draws parallels with past technological innovations – such as the printing press and the internet – which were initially met with resistance, but ultimately led to societal adaptation and human ingenuity prevailing.

The TBI proposes that the solution lies not in clinging to outdated copyright laws but in allowing them to “co-evolve with technological change” to remain effective in the age of AI.

The UK government has proposed a text and data mining exception with an opt-out option for rights holders. While the TBI views this as a good starting point for balancing stakeholder interests, it acknowledges the “significant implementation and enforcement challenges” that come with it, spanning legal, technical, and geopolitical dimensions.
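One purely illustrative example of a technical opt-out signal that already exists is a robots.txt directive aimed at AI training crawlers, which a publisher or developer can check programmatically. The sketch below is an editorial aside rather than anything the TBI report or the UK proposal prescribes: 'GPTBot' is OpenAI's published crawler user-agent, and the site URLs are placeholders.

    # Illustrative only: check whether a site's robots.txt opts out of a known
    # AI training crawler. "GPTBot" is OpenAI's published crawler user-agent;
    # the URLs are placeholders.
    from urllib.robotparser import RobotFileParser

    rp = RobotFileParser("https://example.com/robots.txt")
    rp.read()  # fetch and parse the live robots.txt

    if rp.can_fetch("GPTBot", "https://example.com/some-article"):
        print("No opt-out signalled for this crawler")
    else:
        print("Site has opted out of crawling by this user-agent")

Signals like this only bind crawlers that choose to honour them, which is one reason enforcement sits among the challenges the report identifies.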

In the report, the Tony Blair Institute for Global Change “assesses the merits of the UK government’s proposal and outlines a holistic policy framework to make it work in practice.”

The report includes recommendations and examines novel forms of art that will emerge from AI. It also delves into the disagreement between rights holders and developers on copyright, the wider implications of copyright policy, and the serious hurdles the UK’s text and data mining proposal faces.

Furthermore, the Tony Blair Institute explores the challenges of governing an opt-out policy, the practical problems of implementing opt-outs, how to make opt-outs useful and accessible, and how to tackle the diffusion problem. It also addresses AI summaries and the identity problems they raise, defensive tools as a partial solution, and ways of solving licensing problems.

The report also seeks to clarify the standards on human creativity, address digital watermarking, and discuss the uncertainty around the impact of generative AI on the industry. It proposes establishing a Centre for AI and the Creative Industries and discusses the risk of judicial review, the benefits of a remuneration scheme, and the advantages of a targeted levy on ISPs to raise funding for the Centre.

However, the report has faced strong criticism. Ed Newton-Rex, CEO of Fairly Trained, raised several concerns on Bluesky. These concerns include:

  • The report repeats the “misleading claim” that existing UK copyright law is uncertain, which Newton-Rex asserts is not the case.
  • The suggestion that an opt-out scheme would give rights holders more control over how their works are used is misleading. Newton-Rex argues that licensing is currently required by law, so moving to an opt-out system would actually decrease control, as some rights holders will inevitably miss the opt-out.
  • The report likens machine learning (ML) training to human learning, a comparison that Newton-Rex finds shocking, given the vastly different scalability of the two.
  • The report’s claim that AI developers won’t make long-term profits from training on people’s work is questioned, with Newton-Rex pointing to the significant funding raised by companies like OpenAI.
  • Newton-Rex suggests the report uses strawman arguments, such as stating that generative AI may not replace all human paid activities.
  • A key criticism is that the report omits data showing how generative AI replaces demand for human creative labour.
  • Newton-Rex also criticises the report’s proposed solutions, specifically the suggestion to set up an academic centre, which he notes “no one has asked for.”
  • Furthermore, he highlights the proposal to tax every household in the UK to fund this academic centre, arguing that this would place the financial burden on consumers rather than the AI companies themselves, and the revenue wouldn’t even go to creators.

Adding to these criticisms, British novelist and author Jonathan Coe noted that “the five co-authors of this report on copyright, AI, and the arts are all from the science and technology sectors. Not one artist or creator among them.”

While the report from Tony Blair Institute for Global Change supports the government’s ambition to be an AI leader, it also raises critical policy questions—particularly around copyright law and AI training data.

(Photo by Jez Timms)

See also: Amazon Nova Act: A step towards smarter, web-native AI agents


Study claims OpenAI trains AI models on copyrighted data

A new study from the AI Disclosures Project has raised questions about the data OpenAI uses to train its large language models (LLMs). The research indicates the GPT-4o model from OpenAI demonstrates a “strong recognition” of paywalled and copyrighted data from O’Reilly Media books.

The AI Disclosures Project, led by technologist Tim O’Reilly and economist Ilan Strauss, aims to address the potentially harmful societal impacts of AI’s commercialisation by advocating for improved corporate and technological transparency. The project’s working paper highlights the lack of disclosure in AI, drawing parallels with financial disclosure standards and their role in fostering robust securities markets.

The study used a legally-obtained dataset of 34 copyrighted O’Reilly Media books to investigate whether LLMs from OpenAI were trained on copyrighted data without consent. The researchers applied the DE-COP membership inference attack method to determine if the models could differentiate between human-authored O’Reilly texts and paraphrased LLM versions.
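To make the AUROC figures reported below concrete, here is a minimal, purely illustrative sketch of how a membership-inference result is scored. The per-passage 'recognition scores' are invented, and DE-COP's actual quiz-based scoring (asking the model to pick the verbatim passage from among paraphrases) is not reproduced here.

    # Toy scoring example using scikit-learn; all numbers below are invented.
    from sklearn.metrics import roc_auc_score

    # 1 = passage from a non-public (paywalled) book, 0 = control passage
    labels = [1, 1, 1, 1, 0, 0, 0, 0]
    # Hypothetical per-passage recognition scores produced by a membership test
    scores = [0.91, 0.84, 0.77, 0.45, 0.55, 0.40, 0.33, 0.21]

    auroc = roc_auc_score(labels, scores)
    print(f"AUROC: {auroc:.2f}")  # ~0.50 is chance level; higher suggests recognition of member passages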

Key findings from the report include:

  • GPT-4o shows “strong recognition” of paywalled O’Reilly book content, with an AUROC score of 82%. In contrast, OpenAI’s earlier model, GPT-3.5 Turbo, does not show the same level of recognition (AUROC score just above 50%)
  • GPT-4o exhibits stronger recognition of non-public O’Reilly book content compared to publicly accessible samples (82% vs 64% AUROC scores respectively)
  • GPT-3.5 Turbo shows greater relative recognition of publicly accessible O’Reilly book samples than non-public ones (64% vs 54% AUROC scores)
  • GPT-4o Mini, a smaller model, showed no knowledge of public or non-public O’Reilly Media content when tested (AUROC approximately 50%)

The researchers suggest that access violations may have occurred via the LibGen database, as all of the O’Reilly books tested were found there. They also acknowledge that newer LLMs are better at distinguishing human-authored from machine-generated language, but argue that this does not reduce the method’s ability to classify data.

The study highlights the potential for “temporal bias” in the results, due to language changes over time. To account for this, the researchers tested two models (GPT-4o and GPT-4o Mini) trained on data from the same period.

The report notes that while the evidence is specific to OpenAI and O’Reilly Media books, it likely reflects a systemic issue around the use of copyrighted data. It argues that uncompensated training data usage could lead to a decline in the internet’s content quality and diversity, as revenue streams for professional content creation diminish.

The AI Disclosures Project emphasises the need for stronger accountability in AI companies’ model pre-training processes. They suggest that liability provisions that incentivise improved corporate transparency in disclosing data provenance may be an important step towards facilitating commercial markets for training data licensing and remuneration.

The EU AI Act’s disclosure requirements could help trigger a positive disclosure-standards cycle if properly specified and enforced. Ensuring that IP holders know when their work has been used in model training is seen as a crucial step towards establishing AI markets for content creator data.

Despite evidence that AI companies may be obtaining data illegally for model training, a market is emerging in which AI model developers pay for content through licensing deals. Companies like Defined.ai facilitate the purchasing of training data, obtaining consent from data providers and stripping out personally identifiable information.

The report concludes by stating that using 34 proprietary O’Reilly Media books, the study provides empirical evidence that OpenAI likely trained GPT-4o on non-public, copyrighted data.

(Image by Sergei Tokmakov)

See also: Anthropic provides insights into the ‘AI biology’ of Claude



Dame Wendy Hall, AI Council: Shaping AI with ethics, diversity and innovation

Dame Wendy Hall is a pioneering force in AI and computer science. As a renowned ethical AI speaker and one of the leading voices in technology, she has dedicated her career to shaping the ethical, technical and societal dimensions of emerging technologies. She is the co-founder of the Web Science Research Initiative, an AI Council Member and was named as one of the 100 Most Powerful Women in the UK by Woman’s Hour on BBC Radio 4.

A key advocate for responsible AI governance and diversity in tech, Wendy has played a crucial role in global discussions on the future of AI.

In our Q&A, we spoke to her about the gender imbalance in the AI industry, the ethical implications of emerging technologies, and how businesses can harness AI while ensuring it remains an asset to humanity.

The AI sector remains heavily male-dominated. Can you share your experience of breaking into the industry and the challenges women face in achieving greater representation in AI and technology?

It’s incredibly frustrating because I wrote my first paper about the lack of women in computing back in 1987, when we were just beginning to teach computer science degree courses at Southampton. That October, we arrived at the university and realised we had no women registered on the course — none at all.

So, those of us working in computing started discussing why that was the case. There were several reasons. One significant factor was the rise of the personal computer, which was marketed as a toy for boys, fundamentally changing the culture. Since then, in the West — though not as much in countries like India or Malaysia — computing has been seen as something nerdy, something that only ‘geeks’ do. Many young girls simply do not want to be associated with that stereotype. By the time they reach their GCSE choices, they often don’t see computing as an option, and that’s where the problem begins.

Despite many efforts, we haven’t managed to change this culture. Nearly 40 years later, the industry is still overwhelmingly male-dominated, even though women make up more than half of the global population. Women are largely absent from the design and development of computers and software. We apply them, we use them, but we are not part of the fundamental conversations shaping future technologies.

AI is even worse in this regard. If you want to work in machine learning, you need a degree in mathematics or computer science, which means we are funnelling an already male-dominated sector into an even more male-dominated pipeline.

But AI is about more than just machine learning and programming. It’s about application, ethics, values, opportunities, and mitigating potential risks. This requires a broad diversity of voices — not just in terms of gender, but also in age, ethnicity, culture, and accessibility. People with disabilities should be part of these discussions, ensuring technology is developed for everyone.

AI’s development needs input from many disciplines — law, philosophy, psychology, business, and history, to name just a few. We need all these different voices. That’s why I believe we must see AI as a socio-technical system to truly understand its impact. We need diversity in every sense of the word.

As businesses increasingly integrate AI into their operations, what steps should they take to ensure emerging technologies are developed and deployed ethically?

Take, for example, facial recognition. We still haven’t fully established the rules and regulations for when and how this technology should be applied. Did anyone ask you whether you wanted facial recognition on your phone? It was simply offered as a system update, and you could either enable it or not.

We know facial recognition is used extensively for surveillance in China, but it is creeping into use across Europe and the US as well. Security forces are adopting it, which raises concerns about privacy. At the same time, I appreciate the presence of CCTV cameras in car parks at night — they make me feel safer.

This duality applies to all emerging technologies, including AI tools we haven’t even developed yet. Every new technology has a good and a bad side — the yin and the yang, if you will. There are always benefits and risks.

The challenge is learning how to maximise the benefits for humanity, society and business while mitigating the risks. That’s what we must focus on — ensuring AI works in service of people rather than against them.

The rapid advancement of AI is transforming everyday life. How do you envision the future of AI, and what significant changes will it bring to society and the way we work?

I see a future where AI becomes part of the decision-making process, whether in legal cases, medical diagnoses, or education.

AI is already deeply embedded in our daily lives. If you use Google on your phone, you’re using AI. If you unlock your phone with facial recognition, that’s AI. Google Translate? AI. Speech processing, video analysis, image recognition, text generation, and natural language processing — these are all AI-driven technologies.

Right now, the buzz is around generative AI, particularly ChatGPT. It’s like how ‘Hoover’ became synonymous with vacuum cleaners — ChatGPT has become shorthand for AI. In reality, it’s just a clever interface created by OpenAI to allow public access to its generative AI model.

It feels like you’re having a conversation with the system, asking questions and receiving natural language responses. It works with images and videos too, making it seem incredibly advanced. But the truth is, it’s not actually intelligent. It’s not sentient. It’s simply predicting the next word in a sequence based on training data. That’s a crucial distinction.
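As an editorial aside, the 'predicting the next word' point can be made concrete with a few lines of code. The sketch below is purely illustrative: it uses the small open GPT-2 model via the Hugging Face transformers library, and says nothing about how ChatGPT itself is built or served.

    # Illustrative only: greedy next-token prediction with a small open model.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    inputs = tokenizer("The weather in London today is", return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits      # shape: (batch, sequence_length, vocab_size)

    next_token_id = logits[0, -1].argmax().item()  # single most likely next token
    print(tokenizer.decode(next_token_id))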

With generative AI becoming a powerful tool for businesses, what strategies should companies adopt to leverage its capabilities while maintaining human authenticity in their processes and decision-making?

Generative AI is nothing to be afraid of, and I believe we will all start using it more and more. Essentially, it’s software that can assist with writing, summarising, and analysing information.

I compare it to when calculators first appeared. People were outraged: ‘How can we allow calculators in schools? Can we trust the answers they provide?’ But over time, we adapted. The finance industry, for example, is now entirely run by computers, yet it employs more people than ever before. I expect we’ll see something similar with generative AI.

People will be relieved not to have to write endless essays. AI will enhance creativity and efficiency, but it must be viewed as a tool to augment human intelligence, not replace it, because it’s simply not advanced enough to take over.

Look at the legal industry. AI can summarise vast amounts of data, assess the viability of legal cases, and provide predictive analysis. In the medical field, AI could support diagnoses. In education, it could help assess struggling students.

I envision AI being integrated into decision-making teams. We will consult AI, ask it questions, and use its responses as a guide — but it’s crucial to remember that AI is not infallible.

Right now, AI models are trained on biased data. If they rely on information from the internet, much of that data is inaccurate. AI systems also ‘hallucinate’ by generating false information when they don’t have a definitive answer. That’s why we can’t fully trust AI yet.

Instead, we must treat it as a collaborative partner — one that helps us be more productive and creative while ensuring that humans remain in control. Perhaps AI will even pave the way for shorter workweeks, giving us more time for other pursuits.

Photo by Igor Omilaev on Unsplash and AI Speakers Agency.


The ethics of AI and how they affect you

Having worked with AI since 2018, I’m watching its slow but steady pick-up, alongside the unstructured bandwagon-jumping, with considerable interest. Now that the initial fear of a robotic takeover has subsided somewhat, discussion about the ethics that will surround the integration of AI into everyday business structures has taken its place.

A whole new range of roles will be required to handle ethics, governance and compliance, all of which are going to gain enormous value and importance to organisations.

Probably the most essential of these will be an AI Ethics Specialist, who will be required to ensure Agentic AI systems meet ethical standards like fairness and transparency. This role will involve using specialised tools and frameworks to address ethical concerns efficiently and avoid potential legal or reputational risks. Human oversight to ensure transparency and responsible ethics is essential to maintain the delicate balance between data-driven decisions, intelligence and intuition.

In addition, roles like Agentic AI Workflow Designer and AI Interaction and Integration Designer will ensure AI integrates seamlessly across ecosystems and prioritises transparency, ethical considerations, and adaptability. An AI Overseer will also be required to monitor the entire Agentic stack of agents and arbiters, the decision-making elements of AI.

For anyone embarking on the integration of AI into their organisation and wanting to ensure the technology is introduced and maintained responsibly, I can recommend consulting the United Nations’ principles. These 10 principles were created by the United Nations in 2022, in response to the ethical challenges raised by the increasing preponderance of AI.

So what are these ten principles, and how can we use them as a framework?

First, do no harm 

As befits technology with an autonomous element, the first principle focuses on deploying AI systems in ways that avoid any negative impact on social, cultural, economic, natural or political environments. An AI lifecycle should be designed to respect and protect human rights and freedoms, and systems should be monitored to ensure this remains the case and that no long-term damage is being done.

Avoid AI for AI’s sake

Ensure that the use of AI is justified, appropriate and not excessive. There is a distinct temptation to become over-zealous in applying this exciting technology; it needs to be balanced against human needs and aims, and should never be used at the expense of human dignity.

Safety and security

Safety and security risks should be identified, addressed and mitigated throughout the life cycle of the AI system and on an ongoing basis. Exactly the same robust health and safety frameworks should be applied to AI as to any other area of the business.

Equality

Similarly, AI should be deployed with the aim of ensuring the equal and just distribution of the benefits, risks and cost, and to prevent bias, deception, discrimination and stigma of any kind.

Sustainability

AI should be aimed at promoting environmental, economic and social sustainability. Continual assessment should be made to address negative impacts, including any on the generations to come. 

Data privacy, data protection and data governance

Adequate data protection frameworks and data governance mechanisms should be established or enhanced to ensure that the privacy and rights of individuals are maintained in line with legal guidelines around data integrity and personal data protection. No AI system should impinge on the privacy of any human being.

Human oversight

Human oversight should be guaranteed to ensure that the outcomes of using AI are fair and just. Human-centric design practices should be employed, and capacity should be given for a human to step in at any stage to decide how and when AI should be used, and to override any decision made by AI. Rather dramatically, but entirely reasonably, the UN suggests any decision affecting life or death should not be left to AI.

Transparency and Explainability

This, to my mind, forms part of the guidelines around equality. Everyone using AI should fully understand the systems they are using, the decision-making processes used by the system and its ramifications. Individuals should be told when a decision regarding their rights, freedoms or benefits has been made by artificial intelligence, and most importantly, the explanation should be made in a way that makes it comprehensible. 

Responsibility and Accountability

This is the whistleblower principle, which covers audit and due diligence, as well as protection for whistleblowers, to make sure that someone is responsible and accountable for the decisions made by, and the use of, AI. Governance should be put in place around the ethical and legal responsibility of humans for any AI-based decisions, and any such decisions that cause harm should be investigated and action taken.

Inclusivity and participation

Just as in any other area of business, when designing, deploying and using artificial intelligence systems, an inclusive, interdisciplinary and participatory approach should be taken, one that also encompasses gender equality. Stakeholders and any affected communities should be consulted and informed of both the benefits and the potential risks.

Building your AI strategy around these central pillars should help reassure you that your entry into AI integration rests on a solid and ethical foundation.

Photo by Immo Wegmann on Unsplash


Autoscience Carl: The first AI scientist writing peer-reviewed papers

The newly-formed Autoscience Institute has unveiled ‘Carl,’ the first AI system crafting academic research papers to pass a rigorous double-blind peer-review process.

Carl’s research papers were accepted in the Tiny Papers track at the International Conference on Learning Representations (ICLR). Critically, these submissions were generated with minimal human involvement, heralding a new era for AI-driven scientific discovery.

Meet Carl: The ‘automated research scientist’

Carl represents a leap forward in the role of AI as not just a tool, but an active participant in academic research. Described as “an automated research scientist,” Carl applies natural language models to ideate, hypothesise, and cite academic work accurately. 

Crucially, Carl can read and comprehend published papers in mere seconds. Unlike human researchers, it works continuously, thus accelerating research cycles and reducing experimental costs.

According to Autoscience, Carl successfully “ideated novel scientific hypotheses, designed and performed experiments, and wrote multiple academic papers that passed peer review at workshops.”

This underlines the potential of AI to not only complement human research but, in many ways, surpass it in speed and efficiency.

Carl is a meticulous worker, but human involvement is still vital

Carl’s ability to generate high-quality academic work is built on a three-step process:

  1. Ideation and hypothesis formation: Leveraging existing research, Carl identifies potential research directions and generates hypotheses. Its deep understanding of related literature allows it to formulate novel ideas in the field of AI.
  2. Experimentation: Carl writes code, tests hypotheses, and visualises the resulting data through detailed figures. Its tireless operation shortens iteration times and reduces redundant tasks.
  3. Presentation: Finally, Carl compiles its findings into polished academic papers—complete with data visualisations and clearly articulated conclusions.

Although Carl’s capabilities make it largely independent, there are points in its workflow where human involvement is still required to adhere to computational, formatting, and ethical standards:

  • Greenlighting research steps: To avoid wasting computational resources, human reviewers provide “continue” or “stop” signals during specific stages of Carl’s process. This guidance steers Carl through projects more efficiently but does not influence the specifics of the research itself.
  • Citations and formatting: The Autoscience team ensures all references are correctly cited and formatted to meet academic standards. This is currently a manual step but ensures the research aligns with the expectations of its publication venue. 
  • Assistance with pre-API models: Carl occasionally relies on newer OpenAI and Deep Research models that lack auto-accessible APIs. In such cases, manual interventions – such as copy-pasting outputs – bridge these gaps. Autoscience expects these tasks to be entirely automated in the future when APIs become available.
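Taken together, the three-step process and the human 'continue'/'stop' checkpoints suggest a simple orchestration pattern. The sketch below is purely illustrative: every function is a hypothetical stub, and it is not Autoscience's implementation, whose stages are driven by language models rather than placeholders.

    # Hypothetical orchestration loop for an automated research workflow with
    # human greenlighting between stages. All functions are illustrative stubs.
    def ideate(literature: list[str]) -> str:
        # A real system would prompt a model over the literature to propose a hypothesis.
        return f"Hypothesis derived from {len(literature)} papers"

    def run_experiments(hypothesis: str) -> dict:
        # Code generation, execution and figure plotting would happen here.
        return {"hypothesis": hypothesis, "result": "placeholder metrics"}

    def write_paper(results: dict) -> str:
        return f"Draft paper summarising: {results}"

    def human_approves(stage: str) -> bool:
        # Mirrors the "continue"/"stop" signals described above.
        return input(f"Continue past the {stage} stage? [y/n] ").strip().lower() == "y"

    def research_cycle(literature: list[str]) -> str | None:
        hypothesis = ideate(literature)
        if not human_approves("ideation"):
            return None
        results = run_experiments(hypothesis)
        if not human_approves("experimentation"):
            return None
        return write_paper(results)

    if __name__ == "__main__":
        print(research_cycle(["paper_a.pdf", "paper_b.pdf"]))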

For Carl’s debut paper, the human team also helped craft the “related works” section and refine the language. These tasks, however, were unnecessary following updates applied before subsequent submissions.

Stringent verification process for academic integrity

Before submitting any research, the Autoscience team undertook a rigorous verification process to ensure Carl’s work met the highest standards of academic integrity:

  • Reproducibility: Every line of Carl’s code was reviewed and experiments were rerun to confirm reproducibility. This ensured the findings were scientifically valid and not coincidental anomalies.
  • Originality checks: Autoscience conducted extensive novelty evaluations to ensure that Carl’s ideas were new contributions to the field and not rehashed versions of existing publications.
  • External validation: A hackathon involving researchers from prominent academic institutions – such as MIT, Stanford University, and U.C. Berkeley – independently verified Carl’s research. Further plagiarism and citation checks were performed to ensure compliance with academic norms.

Undeniable potential, but raises larger questions

Achieving acceptance at a venue as respected as ICLR is a significant milestone, but Autoscience recognises the wider conversation it may spark. Carl’s success raises larger philosophical and logistical questions about the role of AI in academic settings.

“We believe that legitimate results should be added to the public knowledge base, regardless of where they originated,” explained Autoscience. “If research meets the scientific standards set by the academic community, then who – or what – created it should not lead to automatic disqualification.”

“We also believe, however, that proper attribution is necessary for transparent science, and work purely generated by AI systems should be discernable from that produced by humans.”

Given the novelty of autonomous AI researchers like Carl, conference organisers may need time to establish new guidelines that account for this emerging paradigm, especially to ensure fair evaluation and intellectual attribution standards. To prevent unnecessary controversy at present, Autoscience has withdrawn Carl’s papers from ICLR workshops while these frameworks are being devised.

Moving forward, Autoscience aims to contribute to shaping these evolving standards. The company intends to propose a dedicated workshop at NeurIPS 2025 to formally accommodate research submissions from autonomous research systems. 

As the narrative surrounding AI-generated research unfolds, it’s clear that systems like Carl are not merely tools but collaborators in the pursuit of knowledge. But as these systems transcend typical boundaries, the academic community must adapt to fully embrace this new paradigm while safeguarding integrity, transparency, and proper attribution.

(Photo by Rohit Tandon)

See also: You.com ARI: Professional-grade AI research agent for businesses

