ai risks Archives - AI News

Conversations with AI: Education (Thu, 01 May 2025)

How can AI be used in education? An ethical debate, with an AI.

The classroom hasn’t changed much in over a century. A teacher at the front, rows of students listening, and a curriculum defined by what’s testable – not necessarily what’s meaningful.

But AI, as arguably the most powerful tool humanity has created in the last few years, is about to break that model open. Not with smarter software or faster grading, but by forcing us to ask: “What is the purpose of education in a world where machines could teach?”

At AI News, rather than speculate about distant futures or lean on product announcements and edtech deals, we started a conversation – with an AI. We asked it what it sees when it looks at the classroom, the teacher, and the learner.

What follows is a distilled version of that exchange, given here not as a technical analysis, but as a provocation.

The system cracks

Education is under pressure worldwide: teachers are overworked, students are disengaged, and curricula feel outdated in a changing world. Into this comes AI – not as a patch or plug-in, but as a potential accelerant.

Our opening prompt: What roles might an AI play in education?

The answer was wide-ranging:

  • Personalised learning pathways
  • Intelligent tutoring systems
  • Administrative efficiency
  • Language translation and accessibility tools
  • Behavioural and emotional recognition
  • Scalable, always-available content delivery

These are features of an education system, its nuts and bolts. But what about meaning and ethics?

Flawed by design?

One concern kept resurfacing: bias.

We asked the AI: “If you’re trained on the internet – and the internet is the output of biased, flawed human thought – doesn’t that mean your responses are equally flawed?”

The AI acknowledged the logic. Bias is inherited. Inaccuracies, distortions, and blind spots all travel from teacher to pupil. What an AI learns, it learns from us, and it can reproduce our worst habits at vast scale.

But we weren’t interested in letting human teachers off the hook either. So we asked: “Isn’t bias true of human educators too?”

The AI agreed: human teachers are also shaped by the limitations of their training, culture, and experience. Both systems – AI and human – are imperfect. But only humans can reflect and care.

That led us to a deeper question: if both AI and humans can reproduce bias, why use AI at all?

Why use AI in education?

The AI outlined what it felt were its clear advantages, which seemed systemic rather than revolutionary. Personalised learning intrigued us most – after all, doing things fast and at scale is what software and computers are good at.

We asked: How much data is needed to personalise learning effectively?

The answer: it varies. But at scale, it could require gigabytes or even terabytes of student data – performance, preferences, feedback, and longitudinal tracking over years.
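To put that into rough numbers, here is a minimal back-of-envelope sketch in Python. It is purely illustrative and not drawn from the AI's answer: every parameter below (event size, events per day, school days, tracking window, cohort size) is an assumed figure.

# Back-of-envelope estimate of personalised-learning data volume.
# All constants are illustrative assumptions, not reported figures.
BYTES_PER_EVENT = 1_000           # assumed: one logged interaction (answer, score, timestamp) ~ 1 KB
EVENTS_PER_STUDENT_PER_DAY = 200  # assumed: clicks, answers and feedback items per school day
SCHOOL_DAYS_PER_YEAR = 190        # assumed: a typical academic year
YEARS_TRACKED = 7                 # assumed: longitudinal tracking across secondary education
STUDENTS = 1_000_000              # assumed: a national-scale deployment

per_student_bytes = (BYTES_PER_EVENT * EVENTS_PER_STUDENT_PER_DAY
                     * SCHOOL_DAYS_PER_YEAR * YEARS_TRACKED)
total_bytes = per_student_bytes * STUDENTS

print(f"Per student over {YEARS_TRACKED} years: {per_student_bytes / 1e6:.0f} MB")
print(f"Across {STUDENTS:,} students: {total_bytes / 1e12:.0f} TB")

Under those assumptions a single learner generates a few hundred megabytes over their school career, and a national cohort runs to hundreds of terabytes – the scale the AI was gesturing at, and exactly why the privacy trade-off raised next is hard to dismiss.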

Which raises its own question: “What do we trade in terms of privacy for that precision?”

A personalised or fragmented future?

Putting aside the issue of whether we’re happy with student data being codified and ingested, if every student were to receive a tailored lesson plan, what happens to the shared experience of learning?

Education has always been more than information. It’s about dialogue, debate, discomfort, empathy, and encounters with other minds, not just mirrored algorithms. AI can tailor a curriculum, but it can’t recreate the unpredictable alchemy of a classroom.

We risk mistaking customisation for connection.

“I use ChatGPT to provide more context […] to plan, structure and compose my essays.” – James, 17, Ottawa, Canada.

The teacher reimagined

Where does this leave the teacher?

In the AI’s view: liberated. Freed from repetitive tasks and administrative overload, the teacher can spend more time guiding, mentoring, and cultivating critical thinking.

But this requires a shift in mindset – from delivering knowledge to curating wisdom; in broad terms, from part-time administrator and part-time teacher to in-classroom collaborator.

AI won’t replace teachers, but it might reveal which parts of the teaching job were never the most important.

“The main way I use ChatGPT is to either help with ideas for when I am planning an essay, or to reinforce understanding when revising.” – Emily, 16, Eastbourne College, UK.

What we teach next

So, what do we want students to learn?

In an AI-rich world, critical thinking, ethical reasoning, and emotional intelligence rise in value. Ironically, the more intelligent our machines become, the more we’ll need to double down on what makes us human.

Perhaps the ultimate lesson isn’t in what AI can teach us – but in what it can’t, or what it shouldn’t even try.

Conclusion

The future of education won’t be built by AI alone. This is our opportunity not just to modernise classrooms, but to reimagine them. Not to fear the machine, but to ask the bigger question: “What is learning in a world where all knowledge is available?”

Whatever the answer is – that’s how we should be teaching next.

(Image source: “Large lecture college classes” by Kevin Dooley is licensed under CC BY 2.0)

See also: AI in education: Balancing promises and pitfalls

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

UK will host global AI summit to address potential risks (Thu, 08 Jun 2023)

The UK has announced that it will host a global summit this autumn to address the most significant risks associated with AI.

The decision comes after meetings between Prime Minister Rishi Sunak, US President Joe Biden, Congress, and business leaders.

“AI has an incredible potential to transform our lives for the better. But we need to make sure it is developed and used in a way that is safe and secure,” explained Sunak.

“No one country can do this alone. This is going to take a global effort. But with our vast expertise and commitment to an open, democratic international system, the UK will stand together with our allies to lead the way.”

The UK government believes the country is the natural place to lead discussions because it hosts Europe’s largest AI industry, behind only the US and China on the world stage.

The AI industry in the UK employs over 50,000 people and contributes more than £3.7 billion to the country’s economy. US tech giant Palantir announced today it will make the UK its new European HQ for AI development.

“We are proud to extend our partnership with the United Kingdom, where we employ nearly a quarter of our global workforce,” said Alexander C. Karp, CEO of Palantir.

“London is a magnet for the best software engineering talent in the world, and it is the natural choice as the hub for our European efforts to develop the most effective and ethical artificial intelligence software solutions available.”

The urgency to evaluate AI risks stems from increasing concerns about the potential existential threats posed by this technology. Earlier this week, an AI task force adviser to the UK prime minister issued a stark warning: AI will threaten humans in two years.

McKinsey, a global consulting firm, predicts that between 2016 and 2030, AI-related advancements could impact approximately 15 percent of the global workforce, potentially displacing 400 million workers worldwide. In response, global regulators are racing to establish new rules and regulations to mitigate these risks.

“The Global Summit on AI Safety will play a critical role in bringing together government, industry, academia and civil society, and we’re looking forward to working closely with the UK Government to help make these efforts a success,” said Demis Hassabis, CEO of UK-headquartered Google DeepMind.

The attendees of the upcoming summit have not been announced yet, but the UK government plans to bring together key countries, leading tech companies, and researchers to establish safety measures for AI.

Prime Minister Sunak aims to ensure that AI is developed and utilised in a manner that is safe and secure while maximising its potential to benefit humanity.

Sridhar Iyengar, MD of Zoho Europe, commented:

“Earlier this year, the whitepaper released in the UK highlighted the numerous advantages of artificial intelligence, emphasising its potential as a valuable tool for enhancing business operations.

With the government’s ongoing ambition to position the UK as a science and technology superpower by 2030, and coupled with Chancellor Jeremy Hunt reiterating his vision of making the UK the ‘next Silicon Valley’, the UK’s leading input here could be extremely helpful in achieving these goals.”

Iyengar emphasised the advantages of AI and its potential to enhance various aspects of business operations, from customer service to fraud detection, ultimately improving business efficiencies.

However, Iyengar stressed the need for a global regulatory framework supported by public trust to fully harness the power of AI and achieve optimal outcomes for all stakeholders.

The European Union is already working on an Artificial Intelligence Act but it could take up to two-and-a-half years to come into effect. China, meanwhile, has also started drafting AI regulations, including proposals to require companies to notify users when an AI algorithm is being used.

These ongoing efforts highlight the global recognition of the need for comprehensive regulations and guidelines to manage AI’s impact effectively.

“To fully harness the power of AI and ensure optimal outcomes for all stakeholders, a global regulatory framework supported by public trust is essential,” added Iyengar.

“As AI becomes increasingly integrated into our daily lives, adopting a unified approach to regulations becomes crucial.”

The UK’s decision to host a global AI safety summit demonstrates its commitment to proactively addressing the risks associated with AI. As the world grapples with the challenges posed by AI, global cooperation and unified regulatory approaches will be vital to shaping the future of this transformative technology.

(Image Credit: No 10 Downing Street)

Related: AI leaders warn about ‘risk of extinction’ in open letter

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The event is co-located with Digital Transformation Week.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

AI leaders warn about ‘risk of extinction’ in open letter (Wed, 31 May 2023)

The Center for AI Safety (CAIS) recently issued a statement signed by prominent figures in AI warning about the potential risks posed by the technology to humanity.

“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” reads the statement.

Signatories of the statement include renowned researchers and Turing Award winners like Geoffrey Hinton and Yoshua Bengio, as well as executives from OpenAI and DeepMind, such as Sam Altman, Ilya Sutskever, and Demis Hassabis.

The CAIS letter aims to spark discussions about the various urgent risks associated with AI and has attracted both support and criticism across the wider industry. It follows another open letter signed by Elon Musk, Steve Wozniak, and over 1,000 other experts who called for a halt to “out-of-control” AI development.

Given its brevity, the latest statement does not provide specific details about the definition of AI or offer concrete strategies for mitigating the risks. However, CAIS clarified in a press release that its goal is to establish safeguards and institutions to ensure that AI risks are effectively managed.

OpenAI CEO Sam Altman has been actively engaging with global leaders and advocating for AI regulations. During a recent Senate appearance, Altman repeatedly called on lawmakers to heavily regulate the industry. The CAIS statement aligns with his efforts to raise awareness about the dangers of AI.

While the open letter has garnered attention, some experts in AI ethics have criticised the trend of issuing such statements.

Dr Sasha Luccioni, a machine-learning research scientist, suggests that mentioning hypothetical risks of AI alongside tangible risks like pandemics and climate change lends those hypothetical risks credibility while diverting attention from immediate issues like bias, legal challenges, and consent.

Daniel Jeffries, a writer and futurist, argues that discussing AI risks has become a status game in which individuals jump on the bandwagon without incurring any real costs.

Critics believe that signing open letters about future threats allows those responsible for current AI harms to alleviate their guilt while neglecting the ethical problems associated with AI technologies already in use.

However, CAIS – a San Francisco-based nonprofit – remains focused on reducing societal-scale risks from AI through technical research and advocacy. The organisation was co-founded by experts with backgrounds in computer science and a keen interest in AI safety.

While some researchers fear the emergence of a superintelligent AI that could surpass human capabilities and pose an existential threat, others argue that signing open letters about hypothetical doomsday scenarios distracts from the existing ethical dilemmas surrounding AI. They emphasise the need to address the real problems AI poses today, such as surveillance, biased algorithms, and the infringement of human rights.

Balancing the advancement of AI with responsible implementation and regulation remains a crucial task for researchers, policymakers, and industry leaders alike.

(Photo by Apolo Photographer on Unsplash)

Related: OpenAI CEO: AI regulation ‘is essential’

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The event is co-located with Digital Transformation Week.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

UN calls for ‘urgent’ action over AI’s risk to human rights (Fri, 17 Sep 2021)

The United Nations’ (UN) head of human rights has called for all member states to put a moratorium on the sale and use of artificial intelligence systems.

UN high commissioner for human rights Michelle Bachelet acknowledged that AI can be a “force for good” but that it could also have “negative, even catastrophic, effects” if the risks it poses are not addressed.

Bachelet’s comments come alongside a new report from the Office of the High Commissioner for Human Rights (OHCHR).

The report analyses how AI affects people’s rights to privacy, health, education, and freedom of movement, amongst other things.

“Artificial intelligence now reaches into almost every corner of our physical and mental lives and even emotional states. AI systems are used to determine who gets public services, decide who has a chance to be recruited for a job, and of course they affect what information people see and can share online,” Bachelet said.

Both the report and Bachelet’s comments follow the July revelations surrounding Pegasus spyware, which the UN rights chief described as part of the “unprecedented level of surveillance” being seen across the globe currently.

Bachelet insisted this situation is “incompatible” with human rights.

Now, in a similar vein, the OHCHR has turned its attention to AI.

According to the report, states and organisations often fail to carry out due diligence when rushing to build AI applications, leading to unjust treatment of individuals as a result of AI decision-making.

What’s more, the data used to inform and guide AI systems can be faulty or discriminatory and, when stored for long periods of time, could someday be exploited through as-yet-unknown means.

“Given the rapid and continuous growth of AI, filling the immense accountability gap in how data is collected, stored, shared and used is one of the most urgent human rights questions we face,” Bachelet noted.

“The power of AI to serve people is undeniable, but so is AI’s ability to feed human rights violations at an enormous scale with virtually no visibility. Action is needed now to put human rights guardrails on the use of AI, for the good of all of us,” she stressed.

Find out more about Digital Transformation Week North America, taking place on 9-10 November 2021, a virtual event and conference exploring advanced DTX strategies for a ‘digital everything’ world.
