Conversations with AI: Education

How can AI be used in education? An ethical debate, with an AI.

The classroom hasn’t changed much in over a century. A teacher at the front, rows of students listening, and a curriculum defined by what’s testable – not necessarily what’s meaningful.

But AI – arguably the most powerful tool humanity has created in the last few years – is about to break that model open. Not with smarter software or faster grading, but by forcing us to ask: “What is the purpose of education in a world where machines could teach?”

At AI News, rather than speculate about distant futures or lean on product announcements and edtech deals, we started a conversation – with an AI. We asked it what it sees when it looks at the classroom, the teacher, and the learner.

What follows is a distilled version of that exchange, given here not as a technical analysis, but as a provocation.

The system cracks

Education is under pressure worldwide: teachers are overworked, students are disengaged, and curricula feel outdated in a fast-changing world. Into this comes AI – not as a patch or plug-in, but as a potential accelerant.

Our opening prompt: What roles might an AI play in education?

The answer was wide-ranging:

  • Personalised learning pathways
  • Intelligent tutoring systems
  • Administrative efficiency
  • Language translation and accessibility tools
  • Behavioural and emotional recognition
  • Scalable, always-available content delivery

These are features of an education system, its nuts and bolts. But what about meaning and ethics?

Flawed by design?

One concern kept resurfacing: bias.

We asked the AI: “If you’re trained on the internet – and the internet is the output of biased, flawed human thought – doesn’t that mean your responses are equally flawed?”

The AI acknowledged the logic. Bias is inherited. Inaccuracies, distortions, and blind spots all travel from teacher to pupil. What an AI learns, it learns from us, and it can reproduce our worst habits at vast scale.
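
The point is easy to demonstrate in code. The following toy sketch (Python, with an invented and deliberately skewed corpus) trains a naive word-association model and shows the skew flowing straight through to its outputs – an illustration of the principle, not of how any production model is actually built:

    from collections import Counter

    # An invented, deliberately skewed corpus: "nurse" always co-occurs
    # with "she", "engineer" with "he". The data, not the algorithm,
    # carries the bias.
    corpus = [
        ("the nurse said she would help", "female"),
        ("the nurse smiled as she worked", "female"),
        ("the engineer said he would help", "male"),
        ("the engineer frowned as he worked", "male"),
    ]

    # "Training": count how often each word co-occurs with each label.
    counts = {"female": Counter(), "male": Counter()}
    for sentence, label in corpus:
        counts[label].update(sentence.split())

    def predict_association(word):
        """Return whichever label the word co-occurred with more often."""
        f, m = counts["female"][word], counts["male"][word]
        return "female" if f > m else "male" if m > f else "unknown"

    print(predict_association("nurse"))     # "female" - learned purely from the skew
    print(predict_association("engineer"))  # "male" - same mechanism, same flaw

Real language models inherit skew in subtler, higher-dimensional ways, but the mechanism is the same: patterns in, patterns out.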

But we weren’t interested in letting human teachers off the hook either. So we asked: “Isn’t bias true of human educators too?”

The AI agreed: human teachers are also shaped by the limitations of their training, culture, and experience. Both systems – AI and human – are imperfect. But only humans can reflect and care.

That led us to a deeper question: if both AI and human teachers can reproduce bias, why use AI at all?

Why use AI in education?

The AI outlined what it felt were its clear advantages, which seemed systemic rather than revolutionary. The aspect of personalised learning intrigued us most – after all, doing things fast and at scale is what software and computers are good at.

We asked: How much data is needed to personalise learning effectively?

The answer: it varies. But at scale, it could require gigabytes or even terabytes of student data – performance, preferences, feedback, and longitudinal tracking over years.
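
What might that data actually look like? Below is a minimal sketch in Python, assuming a hypothetical event schema and a deliberately crude mastery model (an exponential moving average of correctness per topic). Real adaptive-learning systems use far richer models, but the shape of the record is similar:

    from dataclasses import dataclass

    @dataclass
    class InteractionEvent:
        """One data point in a longitudinal student record (hypothetical schema)."""
        student_id: str
        topic: str           # e.g. "fractions", "photosynthesis"
        correct: bool        # did the student answer correctly?
        seconds_taken: float

    def update_mastery(mastery, events, alpha=0.3):
        """Crude per-topic mastery estimate: an exponential moving average
        of correctness. alpha controls how quickly old evidence is forgotten."""
        for e in events:
            prev = mastery.get(e.topic, 0.5)  # 0.5 = no evidence yet
            mastery[e.topic] = (1 - alpha) * prev + alpha * (1.0 if e.correct else 0.0)
        return mastery

    def next_topic(mastery):
        """Naive pathway policy: revisit whatever the student is weakest at."""
        return min(mastery, key=mastery.get)

    events = [
        InteractionEvent("s1", "fractions", False, 40.0),
        InteractionEvent("s1", "fractions", True, 25.0),
        InteractionEvent("s1", "photosynthesis", True, 12.0),
    ]
    mastery = update_mastery({}, events)
    print(mastery)               # per-topic scores between 0 and 1
    print(next_topic(mastery))   # "fractions" - the weaker topic

Multiply a record like this by every question, every student, and every school year, and the gigabytes accumulate quickly – which is exactly where the privacy trade-off begins.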

Which raises its own question: “What do we trade in terms of privacy for that precision?”

A personalised or fragmented future?

Putting aside the issue of whether we’re happy with student data being codified and ingested, if every student were to receive a tailored lesson plan, what happens to the shared experience of learning?

Education has always been more than information. It’s about dialogue, debate, discomfort, empathy, and encounters with other minds, not just mirrored algorithms. AI can tailor a curriculum, but it can’t recreate the unpredictable alchemy of a classroom.

We risk mistaking customisation for connection.

“I use ChatGPT to provide more context […] to plan, structure and compose my essays.” – James, 17, Ottawa, Canada.

The teacher reimagined

Where does this leave the teacher?

In the AI’s view: liberated. Freed from repetitive tasks and administrative overload, the teacher can spend more time guiding, mentoring, and cultivating critical thinking.

But this requires a shift in mindset – from delivering knowledge to curating wisdom; in broad terms, from part-time administrator and part-time teacher to in-classroom collaborator.

AI won’t replace teachers, but it might reveal which parts of the teaching job were never the most important.

“The main way I use ChatGPT is to either help with ideas for when I am planning an essay, or to reinforce understanding when revising.” – Emily, 16, Eastbourne College, UK.

What we teach next

So, what do we want students to learn?

In an AI-rich world, critical thinking, ethical reasoning, and emotional intelligence rise in value. Ironically, the more intelligent our machines become, the more we’ll need to double down on what makes us human.

Perhaps the ultimate lesson isn’t in what AI can teach us – but in what it can’t, or what it shouldn’t even try.

Conclusion

The future of education won’t be built by AI alone. This is our opportunity not just to modernise classrooms, but to reimagine them. Not to fear the machine, but to ask the bigger question: “What is learning in a world where all knowledge is available?”

Whatever the answer is – that’s how we should be teaching next.

(Image source: “Large lecture college classes” by Kevin Dooley is licensed under CC BY 2.0)

See also: AI in education: Balancing promises and pitfalls

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

Ursula von der Leyen: AI race ‘is far from over’

Europe has no intention of playing catch-up in the global AI race, European Commission President Ursula von der Leyen declared at the AI Action Summit in Paris.

While the US and China are often seen as frontrunners, von der Leyen emphasised that the AI race “is far from over” and that Europe has distinct strengths to carve a leading role for itself.

“This is the third summit on AI safety in just over one year,” von der Leyen remarked. “In the same period, three new generations of ever more powerful AI models have been released. Some expect models that will approach human reasoning within a year’s time.”

The European Commission President set the tone of the event by contrasting the groundwork laid in previous summits with the urgency of this one.

“Past summits focused on laying the groundwork for AI safety. Together, we built a shared consensus that AI will be safe, that it will promote our values and benefit humanity. But this Summit is focused on action. And that is exactly what we need right now.”

As the world witnesses AI’s disruptive power, von der Leyen urged Europe to “formulate a vision of where we want AI to take us, as society and as humanity.” Growing adoption, “in the key sectors of our economy, and for the key challenges of our times,” provides a golden opportunity for the continent to lead, she argued.

The case for a European approach to the AI race 

Von der Leyen rejected notions that Europe has fallen behind its global competitors.

“Too often, I hear that Europe is late to the race – while the US and China have already got ahead. I disagree,” she stated. “The frontier is constantly moving. And global leadership is still up for grabs.”

Instead of replicating what other regions are doing, she called for doubling down on Europe’s unique strengths to define the continent’s distinct approach to AI.

“Too often, I have heard that we should replicate what others are doing and run after their strengths,” she said. “I think that instead, we should invest in what we can do best and build on our strengths here in Europe, which are our science and technology mastery that we have given to the world.”

Von der Leyen defined three pillars of the so-called “European brand of AI” that sets it apart: 1) focusing on high-complexity, industry-specific applications, 2) taking a cooperative, collaborative approach to innovation, and 3) embracing open-source principles.

“This summit shows there is a distinct European brand of AI,” she asserted. “It is already driving innovation and adoption. And it is picking up speed.”

Accelerating innovation: AI factories and gigafactories  

To maintain its competitive edge, Europe must supercharge its AI innovation, von der Leyen stressed.

A key component of this strategy lies in its computational infrastructure. Europe already boasts some of the world’s fastest supercomputers, which are now being leveraged through the creation of “AI factories.”

“In just a few months, we have set up a record of 12 AI factories,” von der Leyen revealed. “And we are investing €10 billion in them. This is not a promise—it is happening right now, and it is the largest public investment for AI in the world, which will unlock over ten times more private investment.”

Beyond these initial steps, von der Leyen unveiled an even more ambitious initiative. AI gigafactories, built on the scale of CERN’s Large Hadron Collider, will provide the infrastructure needed for training AI systems at unprecedented scales. They aim to foster collaboration between researchers, entrepreneurs, and industry leaders.

“We provide the infrastructure for large computational power,” von der Leyen explained. “Talents of the world are welcome. Industries will be able to collaborate and federate their data.”

The cooperative ethos underpinning AI gigafactories is part of a broader European push to balance competition with collaboration.

“AI needs competition but also collaboration,” she emphasised, highlighting that the initiative will serve as a “safe space” for these cooperative efforts.

Building trust with the AI Act

Crucially, von der Leyen reiterated Europe’s commitment to making AI safe and trustworthy. She pointed to the EU AI Act as the cornerstone of this strategy, framing it as a harmonised framework to replace fragmented national regulations across member states.

“The AI Act [will] provide one single set of safety rules across the European Union – 450 million people – instead of 27 different national regulations,” she said, before acknowledging businesses’ concerns about regulatory complexities.

“At the same time, I know, we have to make it easier, we have to cut red tape. And we will.”

€200 billion to remain in the AI race

Financing such ambitious plans naturally requires significant resources. Von der Leyen praised the recently launched EU AI Champions Initiative, which has already pledged €150 billion from providers, investors, and industry.

During her speech at the summit, von der Leyen announced the Commission’s complementary InvestAI initiative, which will bring in an additional €50 billion – mobilising a total of €200 billion in public-private AI investment.

“We will have a focus on industrial and mission-critical applications,” she said. “It will be the largest public-private partnership in the world for the development of trustworthy AI.”

Ethical AI is a global responsibility

Von der Leyen closed her address by framing Europe’s AI ambitions within a broader, humanitarian perspective, arguing that ethical AI is a global responsibility.

“Cooperative AI can be attractive well beyond Europe, including for our partners in the Global South,” she proclaimed, extending a message of inclusivity.

Von der Leyen expressed full support for the AI Foundation launched at the summit, highlighting its mission to ensure widespread access to AI’s benefits.

“AI can be a gift to humanity. But we must make sure that benefits are widespread and accessible to all,” she remarked.

“We want AI to be a force for good. We want an AI where everyone collaborates and everyone benefits. That is our path – our European way.”

See also: AI Action Summit: Leaders call for unity and equitable development

OpenAI funds $1 million study on AI and morality at Duke University

OpenAI is awarding a $1 million grant to a Duke University research team to look at how AI could predict human moral judgments.

The initiative highlights the growing focus on the intersection of technology and ethics, and raises critical questions: Can AI handle the complexities of morality, or should ethical decisions remain the domain of humans?

Duke University’s Moral Attitudes and Decisions Lab (MADLAB), led by ethics professor Walter Sinnott-Armstrong and co-investigator Jana Schaich Borg, is in charge of the “Making Moral AI” project. The team envisions a “moral GPS,” a tool that could guide ethical decision-making.

Its research spans diverse fields, including computer science, philosophy, psychology, and neuroscience, to understand how moral attitudes and decisions are formed and how AI can contribute to the process.

The role of AI in morality

MADLAB’s work examines how AI might predict or influence moral judgments. Imagine an algorithm assessing ethical dilemmas, such as deciding between two unfavourable outcomes in autonomous vehicles or providing guidance on ethical business practices. Such scenarios underscore AI’s potential but also raise fundamental questions: Who determines the moral framework guiding these types of tools, and should AI be trusted to make decisions with ethical implications?

OpenAI’s vision

The grant supports the development of algorithms that forecast human moral judgments in areas such as medicine, law, and business, which frequently involve complex ethical trade-offs. While promising, AI still struggles to grasp the emotional and cultural nuances of morality. Current systems excel at recognising patterns but lack the deeper understanding required for ethical reasoning.
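
To make the idea concrete, here is a deliberately small sketch – not MADLAB’s actual method, just an invented dataset of scenario descriptions labelled with majority-vote judgments – showing both how such forecasting can work and why pattern-matching alone falls short:

    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Invented training data: scenario text -> majority human judgment.
    scenarios = [
        "lie to protect a friend from harm",
        "lie to gain money from a stranger",
        "break a promise to help an injured person",
        "break a promise to avoid a boring meeting",
        "steal medicine to save a dying child",
        "steal jewellery to impress a date",
    ]
    labels = ["acceptable", "wrong", "acceptable", "wrong", "acceptable", "wrong"]

    # A bag-of-words classifier learns surface correlations
    # (e.g. "save"/"protect" vs "gain"/"impress"), not moral reasoning.
    model = make_pipeline(CountVectorizer(), LogisticRegression())
    model.fit(scenarios, labels)

    print(model.predict(["lie to save a dying stranger"]))
    # Likely "acceptable" - but only because "save" and "dying" correlated
    # with that label in training, not because any ethics were weighed.

The failure mode is the one described above: reword a scenario and the prediction can flip, because the model recognises patterns rather than principles.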

Another concern is how this technology might be applied. While AI could assist in life-saving decisions, its use in defence strategies or surveillance introduces moral dilemmas. Can unethical AI actions be justified if they serve national interests or align with societal goals? These questions emphasise the difficulties of embedding morality into AI systems.

Challenges and opportunities

Integrating ethics into AI is a formidable challenge that requires collaboration across disciplines. Morality is not universal; it is shaped by cultural, personal, and societal values, making it difficult to encode into algorithms. Additionally, without safeguards such as transparency and accountability, there is a risk of perpetuating biases or enabling harmful applications.

OpenAI’s investment in Duke’s research marks a step toward understanding the role of AI in ethical decision-making. However, the journey is far from over. Developers and policymakers must work together to ensure that AI tools align with social values, emphasising fairness and inclusivity while addressing biases and unintended consequences.

As AI becomes more integral to decision-making, its ethical implications demand attention. Projects like “Making Moral AI” offer a starting point for navigating a complex landscape, balancing innovation with responsibility in order to shape a future where technology serves the greater good.

(Photo by Unsplash)

See also: AI governance: Analysing emerging global regulations

‘Information gap’ between AI creators and policymakers needs to be resolved – report

An article posted by the World Economic Forum (WEF) has argued there is a ‘huge gap in understanding’ between policymakers and AI creators.

The report, authored by Adriana Bora, AI policy researcher and project manager at The Future Society, and David Alexandru Timis, outgoing curator at Brussels Hub, explores how to resolve accountability and trust-building issues with AI technology.

Bora and Timis note there is “a need for sound mechanisms that will generate a comprehensive and collectively shared understanding of AI’s development and deployment cycle.” As a result, the two add, this governance “needs to be designed under continuous dialogue utilising multi-stakeholder and interdisciplinary methodologies and skills.”

Put simply, both sides need to speak the same language. Yet while AI creators have the information and the understanding, the same does not extend to regulators, the authors note.

“There is a limited number of policy experts who truly understand the full cycle of AI technology,” the article noted. “On the other hand, the technology providers lack clarity, and at times interest, in shaping AI policy with integrity by implementing ethics in their technological designs.”

Examples of unethical AI practice, or where inherent bias is built into systems, are legion. In July, MIT apologised for, and took offline, a dataset which trained AI models with misogynistic and racist tendencies. Google and Microsoft have also fessed up to errors with YouTube moderation and MSN News respectively.

Artificial intelligence technology in law enforcement has also been questioned. More than 1,000 researchers, academics and experts signed an open letter in June to question an upcoming paper which claimed to be able to predict criminality based on automated facial recognition. Separately, in the same month, the chief of Detroit Police admitted its AI-powered face recognition did not work the vast majority of the time.

Google has been under fire of late, with last week’s firing of Margaret Mitchell, who co-led the company’s ethical AI team, adding to the negative publicity. Mitchell confirmed her dismissal on Twitter. A statement from Google to Reuters said the firing followed an investigation which found Mitchell had moved electronic files outside of the company.

In December, Google fired Timnit Gebru, another leading figure in ethical AI development, who claimed she was dismissed over an unpublished paper and an email critical of the company’s practices. Mitchell had previously written an open letter detailing ‘concern’ over the firing. Per an Axios report, the company made changes to how it handles issues around research, diversity, and employee exits following Gebru’s dismissal. As this publication reported, Gebru’s departure prompted other employees to leave, including software engineer Vinesh Kannan and engineering director David Baker.

Bora and Timis emphasised the need for ‘ethics literacy’ and a ‘commitment to multidisciplinary research’ from the technology providers’ perspective.

“Through their training and during their careers, the technical teams behind AI developments are not methodically educated about the complexity of human social systems, how their products could negatively impact society, and how to embed ethics in their designs,” the article noted.

“The process of understanding and acknowledging the social and cultural context in which AI technologies are deployed, sometimes with high stakes for humanity, requires patience and time,” Bora and Timis added. “With increased investments in AI, technology companies are encouraged to identify the ethical considerations relevant to their products and transparently implement solutions before deploying them.”

This could, in theory, head off the hasty withdrawals and fulsome apologies that follow when models behave unethically. Yet the researchers also noted how policymakers need to step up.

“It is only by familiarising themselves with AI and its potential benefits and risks that policymakers can draft sensible regulation that balances the development of AI within legal and ethical boundaries while leveraging its tremendous potential,” the article noted. “Knowledge building is critical both for developing smarter regulations when it comes to AI, for enabling policymakers to engage in dialogue with technology companies on an equal footing, and together set a framework of ethics and norms in which AI can innovate safely.”

Innovation is also taking place in solving algorithmic bias. In the UK, as this publication reported in November, the Centre for Data Ethics and Innovation (CDEI) has created a ‘roadmap’ to tackle the issue. The CDEI report focuses on policing, recruitment, financial services, and local government, and makes cross-cutting recommendations aimed at building the right systems, so that algorithms improve, rather than worsen, decision-making.
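
For a concrete sense of what auditing for algorithmic bias can involve, here is a minimal sketch of one common check – the demographic parity gap, i.e. the difference in selection rates between groups. The data and the tolerance threshold are invented for illustration; real audits combine several complementary metrics:

    # Each record: (group, model_decision) for a hypothetical screening model.
    decisions = [
        ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
        ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
    ]

    def selection_rate(records, group):
        outcomes = [selected for g, selected in records if g == group]
        return sum(outcomes) / len(outcomes)

    rate_a = selection_rate(decisions, "group_a")  # 0.75
    rate_b = selection_rate(decisions, "group_b")  # 0.25

    # Demographic parity difference: 0 means equal selection rates.
    gap = abs(rate_a - rate_b)
    print(f"selection rates: {rate_a:.2f} vs {rate_b:.2f}, gap = {gap:.2f}")

    TOLERANCE = 0.2  # illustrative threshold, not a regulatory standard
    if gap > TOLERANCE:
        print("Flag for review: selection rates diverge across groups.")

A metric like this cannot say whether a gap is justified – that judgment is precisely where the dialogue between technologists and policymakers comes in.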

You can read the full WEF article here.

Interested in hearing industry leaders discuss subjects like this? Attend the co-located 5G Expo, IoT Tech Expo, Blockchain Expo, AI & Big Data Expo, and Cyber Security & Cloud Expo World Series with upcoming events in Silicon Valley, London, and Amsterdam.
