Education – AI News

Conversations with AI: Education

How can AI be used in education? An ethical debate, with an AI.

The classroom hasn’t changed much in over a century. A teacher at the front, rows of students listening, and a curriculum defined by what’s testable – not necessarily what’s meaningful.

But AI – arguably the most powerful tool humanity has created in recent years – is about to break that model open. Not with smarter software or faster grading, but by forcing us to ask: “What is the purpose of education in a world where machines could teach?”

At AI News, rather than speculate about distant futures or lean on product announcements and edtech deals, we started a conversation – with an AI. We asked it what it sees when it looks at the classroom, the teacher, and the learner.

What follows is a distilled version of that exchange, given here not as a technical analysis, but as a provocation.

The system cracks

Education is under pressure worldwide: teachers are overworked, students are disengaged, and curricula feel outdated in a changing world. Into this comes AI – not as a patch or plug-in, but as a potential accelerant.

Our opening prompt: What roles might an AI play in education?

The answer was wide-ranging:

  • Personalised learning pathways
  • Intelligent tutoring systems
  • Administrative efficiency
  • Language translation and accessibility tools
  • Behavioural and emotional recognition
  • Scalable, always-available content delivery

These are features of an education system, its nuts and bolts. But what about meaning and ethics?

Flawed by design?

One concern kept resurfacing: bias.

We asked the AI: “If you’re trained on the internet – and the internet is the output of biased, flawed human thought – doesn’t that mean your responses are equally flawed?”

The AI acknowledged the logic. Bias is inherited. Inaccuracies, distortions, and blind spots all travel from teacher to pupil. What an AI learns, it learns from us, and it can reproduce our worst habits at vast scale.

But we weren’t interested in letting human teachers off the hook either. So we asked: “Isn’t bias true of human educators too?”

The AI agreed: human teachers are also shaped by the limitations of their training, culture, and experience. Both systems – AI and human – are imperfect. But only humans can reflect and care.

That led us to a deeper question: if both AI and humans can reproduce bias, why use AI at all?

Why use AI in education?

The AI outlined what it felt were its clear advantages, which seemed systemic rather than revolutionary. The aspect of personalised learning intrigued us – after all, doing things fast and at scale is what software and computers are good at.

We asked: “How much data is needed to personalise learning effectively?”

The answer: it varies. But at scale, it could require gigabytes or even terabytes of student data – performance, preferences, feedback, and longitudinal tracking over years.

Which raises its own question: “What do we trade in terms of privacy for that precision?”

A personalised or fragmented future?

Putting aside the issue of whether we’re happy with student data being codified and ingested, if every student were to receive a tailored lesson plan, what happens to the shared experience of learning?

Education has always been more than information. It’s about dialogue, debate, discomfort, empathy, and encounters with other minds, not just mirrored algorithms. AI can tailor a curriculum, but it can’t recreate the unpredictable alchemy of a classroom.

We risk mistaking customisation for connection.

“I use ChatGPT to provide more context […] to plan, structure and compose my essays.” – James, 17, Ottawa, Canada.

The teacher reimagined

Where does this leave the teacher?

In the AI’s view: liberated. Freed from repetitive tasks and administrative overload, the teacher is able to spend more time guiding, mentoring, and cultivating critical thinking.

But this requires a shift in mindset – from delivering knowledge to curating wisdom. In broad terms, the role shifts from part-time administrator and part-time teacher to in-classroom collaborator.

AI won’t replace teachers, but it might reveal which parts of the teaching job were never the most important.

“The main way I use ChatGPT is to either help with ideas for when I am planning an essay, or to reinforce understanding when revising.” – Emily, 16, Eastbourne College, UK.

What we teach next

So, what do we want students to learn?

In an AI-rich world, critical thinking, ethical reasoning, and emotional intelligence rise in value. Ironically, the more intelligent our machines become, the more we’ll need to double down on what makes us human.

Perhaps the ultimate lesson isn’t in what AI can teach us – but in what it can’t, or what it shouldn’t even try.

Conclusion

The future of education won’t be built by AI alone. This is our opportunity not just to modernise classrooms, but to reimagine them. Not to fear the machine, but to ask the bigger question: “What is learning in a world where all knowledge is available?”

Whatever the answer is – that’s how we should be teaching next.

(Image source: “Large lecture college classes” by Kevin Dooley is licensed under CC BY 2.0)

See also: AI in education: Balancing promises and pitfalls

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

AI in education: Balancing promises and pitfalls

The role of AI in education is a controversial subject, bringing both exciting possibilities and serious challenges.

There’s a real push to bring AI into schools, and you can see why. The recent executive order on youth education from President Trump recognised that if future generations are going to do well in an increasingly automated world, they need to be ready.

“To ensure the United States remains a global leader in this technological revolution, we must provide our nation’s youth with opportunities to cultivate the skills and understanding necessary to use and create the next generation of AI technology,” President Trump declared.

So, what does AI actually look like in the classroom?

One of the biggest hopes for AI in education is making learning more personal. Imagine software that can figure out how individual students are doing, then adjust the pace and materials just for them. This could mean finally moving away from the old one-size-fits-all approach towards learning environments that adapt and offer help exactly where it’s needed.

The US executive order hints at this, wanting to improve results through things like “AI-based high-quality instructional resources” and “high-impact tutoring.”

And what about teachers? AI could be a huge help here too, potentially taking over tedious admin tasks like grading, freeing them up to actually teach. Plus, AI software might offer fresh ways to present information.

Getting kids familiar with AI early on could also take away some of the mystery around the technology. It might spark their “curiosity and creativity” and give them the foundation they need to become “active and responsible participants in the workforce of the future.”

The focus stretches to lifelong learning and getting people ready for the job market. On top of that, AI tools like text-to-speech or translation features can make learning much more accessible for students with disabilities, opening up educational environments for everyone.

Not all smooth sailing: The challenges ahead for AI in education

While the potential is huge, we need to be realistic about the significant hurdles and potential downsides.

First off, AI runs on student data – lots of it. That means we absolutely need strong rules and security to make sure this data is collected ethically, used correctly, and kept safe from breaches. Privacy is paramount here.

Then there’s the bias problem. If the data used to train AI reflects existing unfairness in society (and let’s be honest, it often does), the AI could end up repeating or even worsening those inequalities. Think biased assessments or unfair resource allocation. Careful testing and constant checks are crucial to catch and fix this.

We also can’t ignore the digital divide. If some students don’t have reliable internet, the right devices, or the necessary tech infrastructure at home or school, AI could widen the gap between the haves and have-nots. It’s vital that everyone gets fair access.

There’s also a risk that leaning too heavily on AI education tools might stop students from developing essential skills like critical thinking. We need to teach them how to use AI as a helpful tool, not a crutch they can’t function without.

Maybe the biggest piece of the puzzle, though, is making sure our teachers are ready. As the executive order rightly points out, “We must also invest in our educators and equip them with the tools and knowledge.”

This isn’t just about knowing which buttons to push; teachers need to understand how AI fits into teaching effectively and ethically. That requires solid professional development and ongoing support.

A recent GMB Union poll found that while about a fifth of UK schools are using AI now, the staff often aren’t getting the training they need.

Finding the right path forward

It’s going to take everyone – governments, schools, tech companies, and teachers – pulling together in order to ensure that AI plays a positive role in education.

We absolutely need clear policies and standards covering ethics, privacy, bias, and making sure AI is accessible to all students. We also need to keep investing in research to figure out the best ways to use AI in education and to build tools that are fair and effective.

And critically, we need a long-term commitment to teacher education to get educators comfortable and skilled with these changes. Part of this is building broad AI literacy, making sure all students get a basic understanding of this technology and how it impacts society.

AI could be a positive force in education – making it more personalised, efficient, and focused on the skills students actually need. But turning that potential into reality means carefully navigating those tricky ethical, practical, and teaching challenges head-on.

See also: How does AI judge? Anthropic studies the values of Claude

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

Red Hat on open, small language models for responsible, practical AI

As geopolitical events shape the world, it’s no surprise that they affect technology too – specifically, in the ways that the current AI market is changing, alongside its accepted methodology, how it’s developed, and the ways it’s put to use in the enterprise.

Expectations of what AI can deliver are currently balanced against real-world realities. And there remains a good deal of suspicion about the technology, counterweighted by those who are embracing it even at this nascent stage. The closed, proprietary nature of the well-known LLMs is being challenged by models such as Llama, DeepSeek, and Baidu’s recently-released Ernie X1.

In contrast, open source development provides transparency and the ability to contribute back, which is more in tune with the desire for “responsible AI”: a phrase that encompasses the environmental impact of large models, how AIs are used, what comprises their learning corpora, and issues around data sovereignty, language, and politics. 

As the company that’s demonstrated the viability of an economically-sustainable open source development model for its business, Red Hat wants to extend its open, collaborative, and community-driven approach to AI. We spoke recently to Julio Guijarro, the CTO for EMEA at Red Hat, about the organisation’s efforts to unlock the undoubted power of generative AI models in ways that bring value to the enterprise, in a manner that’s responsible, sustainable, and as transparent as possible. 

Julio underlined how much education is still needed in order for us to more fully understand AI, stating, “Given the significant unknowns about AI’s inner workings, which are rooted in complex science and mathematics, it remains a ‘black box’ for many. This lack of transparency is compounded where it has been developed in largely inaccessible, closed environments.”

There are also issues with language (European and Middle-Eastern languages are very much under-served), data sovereignty, and fundamentally, trust. “Data is an organisation’s most valuable asset, and businesses need to make sure they are aware of the risks of exposing sensitive data to public platforms with varying privacy policies.” 

The Red Hat response 

Red Hat’s response to global demand for AI has been to pursue what it feels will bring most benefit to end-users, and remove many of the doubts and caveats that are quickly becoming apparent when the de facto AI services are deployed. 

One answer, Julio said, is small language models, running locally or in hybrid clouds, on non-specialist hardware, and accessing local business information. SLMs are compact, efficient alternatives to LLMs, designed to deliver strong performance for specific tasks while requiring significantly fewer computational resources. There are smaller cloud providers that can be utilised to offload some compute, but the key is having the flexibility and freedom to choose to keep business-critical information in-house, close to the model, if desired. That’s important, because information in an organisation changes rapidly. “One challenge with large language models is they can get obsolete quickly because the data generation is not happening in the big clouds. The data is happening next to you and your business processes,” he said. 

There’s also the cost. “Your customer service querying an LLM can present a significant hidden cost – before AI, you knew that when you made a data query, it had a limited and predictable scope. Therefore, you could calculate how much that transaction could cost you. In the case of LLMs, they work on an iterative model. So the more you use it, the better its answer can get, and the more you like it, the more questions you may ask. And every interaction is costing you money. So the same query that before was a single transaction can now become a hundred, depending on who is using the model, and how. When you are running a model on-premise, you can have greater control, because the scope is limited by the cost of your own infrastructure, not by the cost of each query.”
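
To put rough numbers on that dynamic, the sketch below compares metered API spend with a fixed on-premise budget. Every figure in it is an invented placeholder rather than a Red Hat or vendor price; the point is simply that metered costs scale with usage, while on-premise costs are bounded by your own infrastructure.

```python
# Illustrative only: compares metered API spend against fixed on-prem cost.
# All numbers are hypothetical placeholders, not vendor pricing.

API_COST_PER_1K_TOKENS = 0.01   # assumed blended $/1K tokens for a hosted LLM
TOKENS_PER_INTERACTION = 1_500  # prompt + response; grows as conversations iterate
ONPREM_MONTHLY_COST = 9_000     # assumed amortised hardware + ops for a local SLM

def api_monthly_cost(interactions_per_month: int) -> float:
    """Metered cost: every follow-up question is billed again."""
    tokens = interactions_per_month * TOKENS_PER_INTERACTION
    return tokens / 1_000 * API_COST_PER_1K_TOKENS

for volume in (10_000, 100_000, 1_000_000):
    print(f"{volume:>9,} interactions/month: "
          f"API ~ ${api_monthly_cost(volume):>9,.0f}  vs  "
          f"on-prem ~ ${ONPREM_MONTHLY_COST:,} (fixed)")
```

The crossover point depends entirely on the assumed figures, but the shape of the comparison is the one Julio describes: one line grows with every query, the other does not.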

Organisations needn’t brace themselves for a procurement round that involves writing a huge cheque for GPUs, however. Part of Red Hat’s current work is optimising models (in the open, of course) to run on more standard hardware. It’s possible because the specialist models that many businesses will use don’t need the huge, general-purpose data corpus that has to be processed at high cost with every query. 

“A lot of the work that is happening right now is people looking into large models and removing everything that is not needed for a particular use case. If we want to make AI ubiquitous, it has to be through smaller language models. We are also focused on supporting and improving vLLM (the inference engine project) to make sure people can interact with all these models in an efficient and standardised way wherever they want: locally, at the edge or in the cloud,” Julio said. 
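
For a flavour of what that standardised interaction looks like, here is a minimal offline-inference sketch using the open-source vLLM library. The model name is an assumption chosen for illustration – any small open model hosted on Hugging Face could be substituted.

```python
# Minimal vLLM offline-inference sketch (the model choice is an assumption).
from vllm import LLM, SamplingParams

# A small open model that can run on modest, non-specialist hardware.
llm = LLM(model="Qwen/Qwen2.5-0.5B-Instruct")

params = SamplingParams(temperature=0.7, max_tokens=128)
prompts = ["Summarise our returns policy for a customer in two sentences."]

# generate() returns one result per prompt; each holds the generated text.
for output in llm.generate(prompts, params):
    print(output.outputs[0].text)
```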

Keeping it small 

Using and referencing local data pertinent to the user means that the outcomes can be crafted according to need. Julio cited projects in the Arab- and Portuguese-speaking worlds that wouldn’t be viable using the English-centric household name LLMs. 

There are a couple of other issues, too, that early adopter organisations have found in practical, day-to-day use of LLMs. The first is latency – which can be problematic in time-sensitive or customer-facing contexts. Having the focused resources and relevantly-tailored results just a network hop or two away makes sense. 

Secondly, there is the trust issue: an integral part of responsible AI. Red Hat advocates for open platforms, tools, and models so we can move towards greater transparency, understanding, and the ability for as many people as possible to contribute. “It is going to be critical for everybody,” Julio said. “We are building capabilities to democratise AI, and that’s not only publishing a model, it’s giving users the tools to be able to replicate them, tune them, and serve them.” 

Red Hat recently acquired Neural Magic to help enterprises more easily scale AI, to improve performance of inference, and to provide even greater choice and accessibility of how enterprises build and deploy AI workloads with the vLLM project for open model serving. Red Hat, together with IBM Research, also released InstructLab to open the door to would-be AI builders who aren’t data scientists but who have the right business knowledge. 

There’s a great deal of speculation about whether – or when – the AI bubble might burst, but such conversations tend to gravitate to the economic reality that the big LLM providers will soon have to face. Red Hat believes that AI has a future in a use case-specific and inherently open source form, a technology that will make business sense and that will be available to all. To quote Julio’s boss, Matt Hicks (CEO of Red Hat), “The future of AI is open.” 

Supporting Assets: 

Tech Journey: Adopt and scale AI

Web3 tech helps instil confidence and trust in AI

The promise of AI is that it’ll make all of our lives easier. And with great convenience comes the potential for serious profit. The United Nations thinks AI could be a $4.8 trillion global market by 2033 – about as big as the German economy.

But forget about 2033: in the here and now, AI is already fuelling transformation in industries as diverse as financial services, manufacturing, healthcare, marketing, agriculture, and e-commerce. Whether it’s autonomous algorithmic ‘agents’ managing your investment portfolio or AI diagnostics systems detecting diseases early, AI is fundamentally changing how we live and work.

But cynicism is snowballing around AI – we’ve seen Terminator 2 enough times to be extremely wary. The question worth asking, then, is how do we ensure trust as AI integrates deeper into our everyday lives?

The stakes are high: A recent report by Camunda highlights an inconvenient truth: most organisations (84%) attribute regulatory compliance issues to a lack of transparency in AI applications. If companies can’t view algorithms – or worse, if the algorithms are hiding something – users are left completely in the dark. Add the factors of systemic bias, untested systems, and a patchwork of regulations and you have a recipe for mistrust on a large scale.

Transparency: Opening the AI black box

For all their impressive capabilities, AI algorithms are often opaque, leaving users ignorant of how decisions are reached. Is that AI-powered loan request being denied because of your credit score – or due to an undisclosed company bias? Without transparency, AI can pursue its own goals, or those of its owner, while the user remains unaware, still believing it’s doing their bidding.

One promising solution would be to put the processes on the blockchain, making algorithms verifiable and auditable by anyone. This is where Web3 tech comes in. We’re already seeing startups explore the possibilities. Space and Time (SxT), an outfit backed by Microsoft, offers tamper-proof data feeds consisting of a verifiable compute layer, so SxT can ensure that the information on which AI relies is real, accurate, and untainted by a single entity.

Space and Time’s novel Proof of SQL prover guarantees queries are computed accurately against untampered data, proving computations in blockchain histories and being able to do so much faster than state-of-the-art zkVMs and coprocessors. In essence, SxT helps establish trust in AI’s inputs without dependence on a centralised power.
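
The general shape of such verifiability is easy to sketch. The snippet below is a generic illustration of the audit idea – hashing a model’s inputs and outputs so the digest can later be anchored on a ledger and checked by anyone. It is emphatically not how SxT’s Proof of SQL works, and the on-chain anchoring step is left as a placeholder.

```python
# Generic sketch of an auditable AI record: hash the model's input and output
# so the digest can be anchored on a public ledger and verified later.
# Illustrates the auditability idea only; NOT Space and Time's actual mechanism.
import hashlib
import json
import time

def audit_record(model_id: str, prompt: str, response: str) -> dict:
    payload = {
        "model": model_id,
        "prompt": prompt,
        "response": response,
        "timestamp": int(time.time()),
    }
    digest = hashlib.sha256(
        json.dumps(payload, sort_keys=True).encode("utf-8")
    ).hexdigest()
    return {"payload": payload, "sha256": digest}

record = audit_record("loan-scorer-v1", "Applicant 42 profile...", "DENIED: ...")
print(record["sha256"])  # this digest is what would be written on-chain
```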

Proving AI can be trusted

Trust isn’t a one-and-done deal; it’s earned over time, analogous to a restaurant maintaining standards to retain its Michelin star. AI systems must be assessed continually for performance and safety, especially in high-stakes domains like healthcare or autonomous driving. A second-rate AI prescribing the wrong medicines or hitting a pedestrian is more than a glitch – it’s a catastrophe.

This is the beauty of open-source models and on-chain verification using immutable ledgers, with built-in privacy protections assured by cryptography such as Zero-Knowledge Proofs (ZKPs). Trust isn’t the only consideration, however: users must know what AI can and can’t do, to set their expectations realistically. If a user believes AI is infallible, they’re more likely to trust flawed output.

To date, the AI education narrative has centred on its dangers. From now on, we should try to improve users’ knowledge of AI’s capabilities and limitations, the better to ensure users are empowered, not exploited.

Compliance and accountability

As with cryptocurrency, the word compliance comes up often when discussing AI. AI doesn’t get a pass under the law and various regulations. How should a faceless algorithm be held accountable? The answer may lie in the modular blockchain protocol Cartesi, which ensures AI inference happens on-chain.

Cartesi’s virtual machine lets developers run standard AI libraries – like TensorFlow, PyTorch, and Llama.cpp – in a decentralised execution environment, making it suitable for on-chain AI development. In other words, a blend of blockchain transparency and computational AI.

Trust through decentralisation

The UN’s recent Technology and Innovation Report shows that while AI promises prosperity and innovation, its development risks “deepening global divides.” Decentralisation could be the answer, one that helps AI scale and instils trust in what’s under the hood.

(Image source: Unsplash)

Navigating the EU AI Act: Implications for UK businesses

The EU AI Act, which came into effect on August 1, 2024, marks a turning point in the regulation of artificial intelligence. Aimed at governing the use and development of AI, it imposes rigorous standards for organisations operating within the EU or providing AI-driven products and services to its member states. Understanding and complying with the Act is essential for UK businesses seeking to compete in the European market.

The scope and impact of the EU AI Act

The EU AI Act introduces a risk-based framework that classifies AI systems into four categories: minimal, limited, high, and unacceptable risk. High-risk systems, which include AI used in healthcare diagnostics, autonomous vehicles, and financial decision-making, face stringent regulations. This risk-based approach ensures that the level of oversight corresponds to the potential impact of the technology on individuals and society.

For UK businesses, non-compliance with these rules is not an option. Organisations must ensure their AI systems align with the Act’s requirements or risk hefty fines, reputational damage, and exclusion from the lucrative EU market. The first step is to evaluate how their AI systems are classified and adapt operations accordingly. For instance, a company using AI to automate credit scoring must ensure its system meets transparency, fairness, and data privacy standards.
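
As a first pass at that evaluation, the sketch below maps a handful of example use cases onto the Act’s four tiers. The mapping is a simplified illustration drawn from the categories described above, not legal advice; real classification requires analysis of the Act itself.

```python
# Simplified illustration of the EU AI Act's four risk tiers.
# The example mappings are indicative only, not legal advice.
from enum import Enum

class RiskTier(Enum):
    MINIMAL = "minimal"
    LIMITED = "limited"
    HIGH = "high"
    UNACCEPTABLE = "unacceptable"

EXAMPLE_CLASSIFICATION = {
    "spam_filter": RiskTier.MINIMAL,
    "customer_chatbot": RiskTier.LIMITED,        # transparency duties apply
    "credit_scoring": RiskTier.HIGH,             # stringent obligations
    "healthcare_diagnostics": RiskTier.HIGH,
    "social_scoring_by_government": RiskTier.UNACCEPTABLE,  # prohibited
}

def obligations(use_case: str) -> str:
    tier = EXAMPLE_CLASSIFICATION[use_case]
    if tier is RiskTier.UNACCEPTABLE:
        return "prohibited"
    if tier is RiskTier.HIGH:
        return "transparency, fairness and data-privacy requirements, among others"
    return "lighter-touch obligations"

print(obligations("credit_scoring"))  # -> stringent high-risk obligations
```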

Preparing for the UK’s next steps

While the EU AI Act directly affects UK businesses trading with the EU, the UK is also likely to implement its own AI regulations. The recent King’s Speech highlighted the government’s commitment to AI governance, focusing on ethical AI and data protection. Future UK legislation will likely mirror aspects of the EU framework, making it essential for businesses to proactively prepare for compliance in multiple jurisdictions.

The role of ISO 42001 in ensuring compliance

International standards like ISO 42001 provide a practical solution for businesses navigating this evolving regulatory landscape. As the global benchmark for AI management systems, ISO 42001 offers a structured framework to manage the development and deployment of AI responsibly.

Adopting ISO 42001 enables businesses to demonstrate compliance with EU requirements while fostering trust among customers, partners, and regulators. Its focus on continuous improvement ensures that organisations can adapt to future regulatory changes, whether from the EU, UK, or other regions. Moreover, the standard promotes transparency, safety, and ethical practices, which are essential for building AI systems that are not only compliant but also aligned with societal values.

Using AI as a catalyst for growth

Compliance with the EU AI Act and ISO 42001 isn’t just about avoiding penalties; it’s an opportunity to use AI as a sustainable growth and innovation driver. Businesses prioritising ethical AI practices can gain a competitive edge by enhancing customer trust and delivering high-value solutions.

For example, AI can revolutionise patient care in the healthcare sector by enabling faster diagnostics and personalised treatments. By aligning these technologies with ISO 42001, organisations can ensure their tools meet the highest safety and privacy standards. Similarly, financial firms can harness AI to optimise decision-making processes while maintaining transparency and fairness in customer interactions.

The risks of non-compliance

Recent incidents, such as AI-driven fraud schemes and cases of algorithmic bias, highlight the risks of neglecting proper governance. The EU AI Act directly addresses these challenges by enforcing strict guidelines on data usage, transparency, and accountability. Failure to comply risks significant fines and undermines stakeholder confidence, with long-lasting consequences for an organisation’s reputation.

The MOVEit and Capita breaches serve as stark reminders of the vulnerabilities associated with technology when governance and security measures are lacking. For UK businesses, robust compliance strategies are essential to mitigate such risks and ensure resilience in an increasingly regulated environment.

How UK businesses can adapt

1. Understand the risk level of AI systems: Conduct a comprehensive review of how AI is used within the organisation to determine risk levels. This assessment should consider the impact of the technology on users, stakeholders, and society.

2. Update compliance programs: Align data collection, system monitoring, and auditing practices with the requirements of the EU AI Act.

3. Adopt ISO 42001: Implementing the standard provides a scalable framework to manage AI responsibly, ensuring compliance while fostering innovation.

4. Invest in employee education: Equip teams with the knowledge to manage AI responsibly and adapt to evolving regulations.

5. Leverage advanced technologies: Use AI itself to monitor compliance, identify risks, and improve operational efficiency.

The future of AI regulation

As AI becomes an integral part of business operations, regulatory frameworks will continue to evolve. The EU AI Act will likely inspire similar legislation worldwide, creating a more complex compliance landscape. Businesses that act now to adopt international standards and align with best practices will be better positioned to navigate these changes.

The EU AI Act is a wake-up call for UK businesses to prioritise ethical AI practices and proactive compliance. By implementing tools like ISO 42001 and preparing for future regulations, organisations can turn compliance into an opportunity for growth, innovation, and resilience.

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

Beyond acceleration: the rise of Agentic AI

We already find ourselves at an inflection point with AI. According to a recent study by McKinsey, we’ve reached the turning point where ‘businesses must look beyond automation and towards AI-driven reinvention’ to stay ahead of the competition. While the era of AI-driven acceleration isn’t over, a new phase has already begun – one that goes beyond making existing workflows more efficient and moves toward replacing existing workflows and/or creating new ones.

This is the age of Agentic AI.

Truly autonomous AI agents are capable of reshaping operations entirely. Systems can act autonomously, make decisions, and adapt dynamically. These agents will go beyond conversational interfaces that merely respond to user input, proactively managing tasks, navigating complex IT environments, and orchestrating business processes.

However, this shift isn’t just about technology — it also comes with a few considerations. Companies will need to address regulatory challenges, build AI literacy, and focus on applied use cases with clear ROI if the evolution is to succeed.

Moving from acceleration to transformation

So far, companies have primarily used AI to accelerate existing processes, whether through chatbots improving customer interactions or AI-driven analytics optimising workflows. In the end, these implementations make businesses more efficient.

But acceleration alone is no longer enough to stay ahead in the game. The real opportunity lies in replacing outdated workflows entirely and creating new, previously impossible capabilities.

For example, AI plays a vital role in automating troubleshooting and enhancing security within the network industry. But what if AI could autonomously anticipate and predict failures, reconfigure networks proactively to avoid service level degradations in real time, and optimise performance without human intervention? As AI becomes more autonomous, its ability to not just assist but act independently will be key to unlocking new levels of productivity and innovation.

That’s what Agentic AI is about.

Navigating the AI regulatory landscape

However, as AI becomes more autonomous, the regulatory landscape governing its deployment will evolve in parallel. The introduction of the EU AI Act, alongside global regulatory frameworks, means companies must already navigate new compliance requirements related to AI transparency, bias mitigation, and ethical deployment.

That means AI governance can no longer be an afterthought.

AI-powered systems must be designed with built-in compliance mechanisms, data privacy protections, and explainability features to build trust among users and regulators alike. Zero-trust security models will also be crucial in mitigating risks, enforcing strict access controls, and ensuring that AI decisions remain auditable and secure.

The importance of AI literacy

As stated, the success of the Agentic AI era will depend on more than just technical capabilities – it will require alignment between leadership, developers, and end-users. As AI becomes more advanced, AI literacy becomes a key differentiator, and companies must invest in upskilling their workforce to understand AI’s capabilities, limitations, and ethical considerations. A recent report by the ICT Workforce Consortium found that 92% of information and communication technology jobs are expected to undergo significant transformation due to advancements in AI. So, without proper AI education, businesses risk misalignment between AI implementers and those who use the technology.

This can lead to a lack of trust, slow adoption, and ineffective deployment, which can impact the bottom line. So, to unlock the full potential of Agentic AI, it’s essential to build AI literacy across all levels of the organisation.

As this new era of AI blooms, companies must learn from the current era of AI adoption: focus on applied use cases with tangible ROI. The days of experimenting with AI for innovation’s sake are ending – the next generation of AI deployments must prove their worth.

In networking, it could be projects such as AI-powered autonomous network optimisation. These systems do more than automate tasks; they continuously monitor network traffic, predict congestion points, and autonomously adjust configurations to ensure optimal performance. By providing proactive insights and real-time adjustments, these AI-driven solutions help companies prevent issues and outages before they occur.

This level of AI autonomy reduces human intervention and enhances overall security and operational efficiency.
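
The control loop behind such a system can be sketched at a high level. In the snippet below, the telemetry reader, congestion predictor, and reconfiguration call are all hypothetical placeholders standing in for vendor-specific components.

```python
# High-level sketch of an agentic monitor -> predict -> act loop for networks.
# Every component here is a hypothetical placeholder, not a real vendor API.
import random
import time

def read_telemetry() -> dict:
    """Placeholder: a real system would pull link stats from live monitoring."""
    return {"link_utilisation": random.uniform(0.2, 0.95)}

def predict_congestion(telemetry: dict) -> float:
    """Placeholder: a real agent would use a trained predictive model."""
    return telemetry["link_utilisation"]  # naive proxy for congestion risk

def reconfigure_network(risk: float) -> None:
    """Placeholder action: e.g. reroute traffic before degradation occurs."""
    print(f"risk={risk:.2f} -> rerouting traffic proactively")

def agent_loop(threshold: float = 0.8, cycles: int = 3) -> None:
    for _ in range(cycles):
        risk = predict_congestion(read_telemetry())
        if risk > threshold:   # act autonomously, without a human in the loop
            reconfigure_network(risk)
        time.sleep(1)          # real systems run continuously

agent_loop()
```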

Identifying and implementing high-value, high-impact Agentic AI use cases such as these will be vital.

Trust as the adoption hurdle

While we’re entering a new era, trust plays a key role in widespread AI adoption. Users must feel confident that AI decisions are accurate, fair, and explainable. Even the most advanced AI models will face challenges gaining acceptance without transparency.

This is particularly relevant as AI transitions from assisting users to making autonomous decisions. Whether AI agents manage IT infrastructure or drive customer interactions, organisations must ensure that AI decisions are auditable, unbiased, and aligned with business objectives.

Without transparency and accountability, companies may face resistance from both employees and customers.

The future of AI

Looking ahead, 2025 holds exciting potential for AI. As it reaches a new level of maturity, its success will depend on how well organisations, governments, and individuals adapt to its growing presence in everyday life. Moving beyond efficiency and automation, AI has the opportunity to become a powerful driver of intelligent decision-making, problem-solving, and innovation.

Organisations that harness Agentic AI effectively – balancing autonomy with oversight – will see the greatest benefits. However, success will require a commitment to transparency, education, and ethical deployment to build trust and ensure AI is a true enabler of progress.

Because AI is no longer just an accelerant, it is a transformative force reshaping how we work, communicate, and interact with technology.

Photo by Ryan De Hamer on Unsplash

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

GITEX GLOBAL in Asia: The largest tech show in the world

23-25 April 2025 | Marina Bay Sands, Singapore

GITEX ASIA 2025 will bring together 700+ tech companies, featuring 400+ startups and digital promotion agencies, and 250+ global investors & VCs from 60+ countries.

The event will serve as a bridge between the Eastern and Western technology ecosystems and feature 180+ hours of expert insights from 220 global thought leaders.

GITEX ASIA 2025 is set to foster cross-border collaboration, investment, and innovation, connecting global tech enterprises, unicorn founders, policymakers, SMEs, and academia to shape the future of digital transformation in Asia.

GITEX ASIA 2025 will comprise five co-located events:

  • AI EVERYTHING SINGAPORE – the AI showcase
  • NORTHSTAR ASIA – for startups and investors
  • GITEX CYBER VALLEY ASIA – helping create a defence ecosystem for governments and businesses
  • GITEX QUANTUM EXPO ASIA – Asia’s quantum frontier
  • GITEX DIGI HEALTH & BIOTECH SINGAPORE – the healthcare revolution

GITEX ASIA 2025 will host a lineup of conferences and summits, exploring a range of transformative trends in technology and investment. Key themes will include AI, cloud & connectivity, cybersecurity, quantum, health tech & biotech, green tech & smart cities, startups & investors, and SMEs.

Sessions will include Asia Digital AI Economy, AI Everything: AI Adoption & Commercialisation, Cybersecurity: AI-Enabled Cybersecurity & Critical Infrastructure, Digital Health, and the Supernova Pitch Competition.

The event will bring together leading voices and ideas from different industries, including public services, retail, finance, education, health, and manufacturing.

Be part of the action at GITEX ASIA 2025 and witness the future of technology unfold in Singapore. For more information and updates on GITEX ASIA, visit www.gitexasia.com

Social media links: LinkedIn | X | Facebook | Instagram | YouTube

Dame Wendy Hall, AI Council: Shaping AI with ethics, diversity and innovation

Dame Wendy Hall is a pioneering force in AI and computer science. As a renowned ethical AI speaker and one of the leading voices in technology, she has dedicated her career to shaping the ethical, technical and societal dimensions of emerging technologies. She is the co-founder of the Web Science Research Initiative, an AI Council Member and was named as one of the 100 Most Powerful Women in the UK by Woman’s Hour on BBC Radio 4.

A key advocate for responsible AI governance and diversity in tech, Wendy has played a crucial role in global discussions on the future of AI.

In our Q&A, we spoke to her about the gender imbalance in the AI industry, the ethical implications of emerging technologies, and how businesses can harness AI while ensuring it remains an asset to humanity.

The AI sector remains heavily male-dominated. Can you share your experience of breaking into the industry and the challenges women face in achieving greater representation in AI and technology?

It’s incredibly frustrating because I wrote my first paper about the lack of women in computing back in 1987, when we were just beginning to teach computer science degree courses at Southampton. That October, we arrived at the university and realised we had no women registered on the course — none at all.

So, those of us working in computing started discussing why that was the case. There were several reasons. One significant factor was the rise of the personal computer, which was marketed as a toy for boys, fundamentally changing the culture. Since then, in the West — though not as much in countries like India or Malaysia — computing has been seen as something nerdy, something that only ‘geeks’ do. Many young girls simply do not want to be associated with that stereotype. By the time they reach their GCSE choices, they often don’t see computing as an option, and that’s where the problem begins.

Despite many efforts, we haven’t managed to change this culture. Nearly 40 years later, the industry is still overwhelmingly male-dominated, even though women make up more than half of the global population. Women are largely absent from the design and development of computers and software. We apply them, we use them, but we are not part of the fundamental conversations shaping future technologies.

AI is even worse in this regard. If you want to work in machine learning, you need a degree in mathematics or computer science, which means we are funnelling an already male-dominated sector into an even more male-dominated pipeline.

But AI is about more than just machine learning and programming. It’s about application, ethics, values, opportunities, and mitigating potential risks. This requires a broad diversity of voices — not just in terms of gender, but also in age, ethnicity, culture, and accessibility. People with disabilities should be part of these discussions, ensuring technology is developed for everyone.

AI’s development needs input from many disciplines — law, philosophy, psychology, business, and history, to name just a few. We need all these different voices. That’s why I believe we must see AI as a socio-technical system to truly understand its impact. We need diversity in every sense of the word.

As businesses increasingly integrate AI into their operations, what steps should they take to ensure emerging technologies are developed and deployed ethically?

Take, for example, facial recognition. We still haven’t fully established the rules and regulations for when and how this technology should be applied. Did anyone ask you whether you wanted facial recognition on your phone? It was simply offered as a system update, and you could either enable it or not.

We know facial recognition is used extensively for surveillance in China, but it is creeping into use across Europe and the US as well. Security forces are adopting it, which raises concerns about privacy. At the same time, I appreciate the presence of CCTV cameras in car parks at night — they make me feel safer.

This duality applies to all emerging technologies, including AI tools we haven’t even developed yet. Every new technology has a good and a bad side — the yin and the yang, if you will. There are always benefits and risks.

The challenge is learning how to maximise the benefits for humanity, society and business while mitigating the risks. That’s what we must focus on — ensuring AI works in service of people rather than against them.

The rapid advancement of AI is transforming everyday life. How do you envision the future of AI, and what significant changes will it bring to society and the way we work?

I see a future where AI becomes part of the decision-making process, whether in legal cases, medical diagnoses, or education.

AI is already deeply embedded in our daily lives. If you use Google on your phone, you’re using AI. If you unlock your phone with facial recognition, that’s AI. Google Translate? AI. Speech processing, video analysis, image recognition, text generation, and natural language processing — these are all AI-driven technologies.

Right now, the buzz is around generative AI, particularly ChatGPT. It’s like how ‘Hoover’ became synonymous with vacuum cleaners — ChatGPT has become shorthand for AI. In reality, it’s just a clever interface created by OpenAI to allow public access to its generative AI model.

It feels like you’re having a conversation with the system, asking questions and receiving natural language responses. It works with images and videos too, making it seem incredibly advanced. But the truth is, it’s not actually intelligent. It’s not sentient. It’s simply predicting the next word in a sequence based on training data. That’s a crucial distinction.
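
That next-word mechanic can be illustrated with a toy example. The probability table below is invented for the demonstration – a real model computes scores with billions of learned parameters rather than a lookup table – but the principle of scoring candidate continuations and picking the likeliest is the same.

```python
# Toy illustration of next-word prediction: invented probabilities, real mechanic.
# A real LLM derives these scores from learned parameters, not a lookup table.
NEXT_WORD_PROBS = {
    ("the", "cat"): {"sat": 0.6, "ran": 0.3, "sang": 0.1},
    ("cat", "sat"): {"on": 0.8, "quietly": 0.2},
    ("sat", "on"): {"the": 0.9, "a": 0.1},
}

def next_word(context: tuple) -> str:
    candidates = NEXT_WORD_PROBS[context]
    return max(candidates, key=candidates.get)  # greedy: pick the likeliest word

words = ["the", "cat"]
for _ in range(3):
    words.append(next_word((words[-2], words[-1])))
print(" ".join(words))  # -> "the cat sat on the"
```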

With generative AI becoming a powerful tool for businesses, what strategies should companies adopt to leverage its capabilities while maintaining human authenticity in their processes and decision-making?

Generative AI is nothing to be afraid of, and I believe we will all start using it more and more. Essentially, it’s software that can assist with writing, summarising, and analysing information.

I compare it to when calculators first appeared. People were outraged: ‘How can we allow calculators in schools? Can we trust the answers they provide?’ But over time, we adapted. The finance industry, for example, is now entirely run by computers, yet it employs more people than ever before. I expect we’ll see something similar with generative AI.

People will be relieved not to have to write endless essays. AI will enhance creativity and efficiency, but it must be viewed as a tool to augment human intelligence, not replace it, because it’s simply not advanced enough to take over.

Look at the legal industry. AI can summarise vast amounts of data, assess the viability of legal cases, and provide predictive analysis. In the medical field, AI could support diagnoses. In education, it could help assess struggling students.

I envision AI being integrated into decision-making teams. We will consult AI, ask it questions, and use its responses as a guide — but it’s crucial to remember that AI is not infallible.

Right now, AI models are trained on biased data. If they rely on information from the internet, much of that data is inaccurate. AI systems also ‘hallucinate’ by generating false information when they don’t have a definitive answer. That’s why we can’t fully trust AI yet.

Instead, we must treat it as a collaborative partner — one that helps us be more productive and creative while ensuring that humans remain in control. Perhaps AI will even pave the way for shorter workweeks, giving us more time for other pursuits.

Photo by Igor Omilaev on Unsplash and AI Speakers Agency.

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

Hugging Face calls for open-source focus in the AI Action Plan

Hugging Face has called on the US government to prioritise open-source development in its forthcoming AI Action Plan.

In a statement to the Office of Science and Technology Policy (OSTP), Hugging Face emphasised that “thoughtful policy can support innovation while ensuring that AI development remains competitive, and aligned with American values.”

Hugging Face, which hosts over 1.5 million public models across various sectors and serves seven million users, proposes an AI Action Plan centred on three interconnected pillars:

  1. Hugging Face stresses the importance of strengthening open-source AI ecosystems. The company argues that technical innovation stems from diverse actors across institutions and that support for infrastructure – such as the National AI Research Resource (NAIRR) – and investment in open science and data allow these contributions to have an additive effect and accelerate robust innovation.
  2. The company prioritises efficient and reliable adoption of AI. Hugging Face believes that spreading the benefits of the technology by facilitating its adoption along the value chain requires actors across sectors of activity to shape its development. It states that more efficient, modular, and robust AI models require research and infrastructural investments to enable the broadest possible participation and innovation—enabling diffusion of technology across the US economy.
  3. Hugging Face also highlights the need to promote security and standards. The company suggests that decades of practices in open-source software cybersecurity, information security, and standards can inform safer AI technology. It advocates for promoting traceability, disclosure, and interoperability standards to foster a more resilient and robust technology ecosystem.

Open-source is key for AI advancement in the US (and beyond)

Hugging Face underlines that modern AI is built on decades of open research, with commercial giants relying heavily on open-source contributions. Recent breakthroughs – such as OLMO-2 and Olympic-Coder – demonstrate that open research remains a promising path to developing systems that match the performance of commercial models, and can often surpass them, especially in terms of efficiency and performance in specific domains.

“Perhaps most striking is the rapid compression of development timelines,” notes the company, “what once required over 100B parameter models just two years ago can now be accomplished with 2B parameter models, suggesting an accelerating path to parity.”

This trend towards more accessible, efficient, and collaborative AI development indicates that open approaches to AI development have a critical role to play in enabling a successful AI strategy that maintains technical leadership and supports more widespread and secure adoption of the technology.

Hugging Face argues that open models, infrastructure, and scientific practices constitute the foundation of AI innovation, allowing a diverse ecosystem of researchers, companies, and developers to build upon shared knowledge.

The company’s platform hosts AI models and datasets from both small actors (e.g., startups, universities) and large organisations (e.g., Microsoft, Google, OpenAI, Meta), demonstrating how open approaches accelerate progress and democratise access to AI capabilities.

“The United States must lead in open-source AI and open science, which can enhance American competitiveness by fostering a robust ecosystem of innovation and ensuring a healthy balance of competition and shared innovation,” states Hugging Face.

Research has shown that open technical systems act as force multipliers for economic impact, with an estimated 2000x multiplier effect. This means that $4 billion invested in open systems could potentially generate $8 trillion in value for companies using them.
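
As a back-of-envelope check, here is a minimal sketch in Python of the cited arithmetic; the 2000x figure is the estimate from the research above, and the variable names and the rest of the scaling are illustrative, not part of the cited study:

  investment_usd = 4e9   # $4 billion invested in open systems
  multiplier = 2_000     # estimated force-multiplier effect from the cited research
  value_usd = investment_usd * multiplier
  print(f"Implied value created: ${value_usd / 1e12:.0f} trillion")  # -> $8 trillion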

These economic benefits extend to national economies as well. Without any open-source software contributions, the average country would lose 2.2% of its GDP. Open-source drove between €65 billion and €95 billion of European GDP in 2018 alone, a finding so significant that the European Commission cited it when establishing new rules to streamline the process for open-sourcing government software.

This demonstrates how open-source impact translates directly into policy action and economic advantage at the national level, underlining the importance of open-source as a public good.

Practical factors driving commercial adoption of open-source AI

Hugging Face identifies several practical factors driving the commercial adoption of open models:

  • Cost efficiency is a major driver: developing AI models from scratch requires significant investment, so leveraging open foundations reduces R&D expenses.
  • Customisation is crucial, as organisations can adapt and deploy models specifically tailored to their use cases rather than relying on one-size-fits-all solutions.
  • Open models reduce vendor lock-in, giving companies greater control over their technology stack and independence from single providers.
  • Open models have caught up to and, in certain cases, surpassed the capabilities of closed, proprietary systems.

These factors are particularly valuable for startups and mid-sized companies, which can access cutting-edge technology without massive infrastructure investments. Banks, pharmaceutical companies, and other industries have been adapting open models to specific market needs—demonstrating how open-source foundations support a vibrant commercial ecosystem across the value chain.

Hugging Face’s policy recommendations to support open-source AI in the US

To support the development and adoption of open AI systems, Hugging Face offers several policy recommendations:

  • Enhance research infrastructure: Fully implement and expand the National AI Research Resource (NAIRR) pilot. Hugging Face’s active participation in the NAIRR pilot has demonstrated the value of providing researchers with access to computing resources, datasets, and collaborative tools.
  • Allocate public computing resources for open-source: The public should have ways to participate via public AI infrastructure. One way to do this would be to dedicate a portion of publicly-funded computing infrastructure to support open-source AI projects, reducing barriers to innovation for smaller research teams and companies that cannot afford proprietary systems.
  • Enable access to data for developing open systems: Create sustainable data ecosystems through targeted policies that address the shrinking data commons. Publishers are increasingly signing data licensing deals with proprietary AI model developers, meaning that the cost of acquiring quality data now approaches, or even surpasses, the computational expense of training frontier models, threatening to lock small open developers out of quality data. Support organisations that contribute to public data repositories and streamline compliance pathways that reduce legal barriers to responsible data sharing.
  • Develop open datasets: Invest in the creation, curation, and maintenance of robust, representative datasets that can support the next generation of AI research and applications. Expand initiatives like the IBM AI Alliance Trusted Data Catalog and support projects like IDI’s AI-driven Digitization of the public collections in the Boston Public Library.
  • Strengthen rights-respecting data access frameworks: Establish clear guidelines for data usage, including standardised protocols for anonymisation, consent management, and usage tracking. Support public-private partnerships to create specialised data trusts for high-value domains like healthcare and climate science, ensuring that individuals and organisations maintain appropriate control over their data while enabling innovation.
  • Invest in stakeholder-driven innovation: Create and support programmes that enable organisations across diverse sectors (healthcare, manufacturing, education) to develop customised AI systems for their specific needs, rather than relying exclusively on general-purpose systems from major providers. This enables broader participation in the AI ecosystem and ensures that the benefits of AI extend throughout the economy.
  • Strengthen centres of excellence: Expand NIST’s role as a convener for AI experts across academia, industry, and government to share lessons and develop best practices. In particular, the AI Risk Management Framework has played a significant role in identifying stages of AI development and research questions that are critical to ensuring more robust and secure technology deployment for all. The tools developed at Hugging Face, from model documentation to evaluation libraries, are directly shaped by these questions.
  • Support high-quality data for performance and reliability evaluation: AI development depends heavily on data, both to train models and to reliably evaluate their progress, strengths, risks, and limitations. Fostering greater access to public data in a safe and secure way, and ensuring that the evaluation data used to characterise models is sound and evidence-based, will accelerate progress in both the performance and the reliability of the technology.

Prioritising efficient and reliable AI adoption

Hugging Face highlights that smaller companies and startups face significant barriers to AI adoption due to high costs and limited resources. According to IDC, global AI spending will reach $632 billion in 2028, yet such costs remain prohibitive for many smaller organisations.

For organisations that do adopt open-source AI tools, the financial returns are tangible: 51% of surveyed companies currently using open-source AI tools report positive ROI, compared with just 41% of those not using open-source.

However, energy scarcity presents a growing concern, with the International Energy Agency projecting that data centres’ electricity consumption could double from 2022 levels to 1,000 TWh by 2026, equivalent to Japan’s entire electricity demand. While training AI models is energy-intensive, inference, due to its scale and frequency, can ultimately exceed training energy consumption.
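
To see why inference can come to dominate, consider a rough Python sketch; every figure below is an illustrative assumption for a large deployed service, not a number from the IEA projection:

  TRAINING_ENERGY_MWH = 1_300      # assumed one-off cost of training a large model
  ENERGY_PER_QUERY_WH = 0.3        # assumed energy per inference request
  QUERIES_PER_DAY = 100_000_000    # assumed daily request volume at scale

  daily_inference_mwh = QUERIES_PER_DAY * ENERGY_PER_QUERY_WH / 1_000_000
  breakeven_days = TRAINING_ENERGY_MWH / daily_inference_mwh
  print(f"Inference matches training energy after ~{breakeven_days:.0f} days")
  # With these assumptions: 30 MWh of inference per day, break-even in ~43 days.

Under these assumptions, a busy service spends as much energy on inference in roughly six weeks as it did on training once, which is why deployment-side efficiency matters as much as training-side efficiency.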

Ensuring broad AI accessibility requires both hardware optimisations and scalable software frameworks. A range of organisations are developing models tailored to their specific needs, and US leadership in efficiency-focused AI development presents a strategic advantage. The DOE’s AI for Energy initiative further supports research into energy-efficient AI, facilitating wider adoption without excessive computational demands.

With its letter to the OSTP, Hugging Face advocates for an AI Action Plan centred on open-source principles. By taking decisive action, the US can secure its leadership, drive innovation, enhance security, and ensure the widespread benefits of AI are realised across society and the economy.

See also: UK minister in US to pitch Britain as global AI investment hub

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

UK must act to secure its semiconductor industry leadership
https://www.artificialintelligence-news.com/news/uk-must-act-secure-its-semiconductor-industry-leadership/
Mon, 17 Feb 2025 11:47:01 +0000
The UK semiconductor industry is at a critical juncture, with techUK urging the government to act to maintain its global competitiveness.

Laura Foster, Associate Director of Technology and Innovation at techUK, said: “The UK has a unique opportunity to lead in the global semiconductor landscape, but success will require bold action and sustained commitment.

“By accelerating the implementation of the National Semiconductor Strategy, we can unlock investment, foster innovation, and strengthen our position in this critical industry.”

Semiconductors are the backbone of modern technology, powering everything from consumer electronics to AI data centres. With the global semiconductor market projected to reach $1 trillion by 2030, the UK must act to secure its historic leadership in this lucrative and strategically vital industry.

“We must act at pace to secure the UK’s semiconductor future and as such our technological and economic resilience,” explains Foster.

UK semiconductor industry strengths and challenges

The UK has long been a leader in semiconductor design and intellectual property (IP), with Cambridge in particular serving as a global hub for innovation.

Companies like Arm, which designs chips used in 99% of the world’s smartphones, exemplify the UK’s strengths in this area. However, a techUK report warns that these strengths are under threat due to insufficient investment, skills shortages, and a lack of tailored support for the sector.

“The UK is not starting from zero,” the report states. “We have globally competitive capabilities in design and IP, but we must double down on these strengths to compete internationally.”

The UK’s semiconductor industry contributed £12 billion in turnover in 2021, with 90% of companies expecting growth in the coming years. However, the sector faces significant challenges, including high costs, limited access to private capital, and a reliance on international talent.

The report highlights that only 5% of funding for UK semiconductor startups originates domestically, with many companies struggling to find qualified investors.

A fundamental need for strategic investment and innovation

The report makes 27 recommendations across six key areas, including design and IP, R&D, manufacturing, skills, and global partnerships.

Some of the key proposals include:

  • Turn current strengths into leadership: The UK must leverage its existing capabilities in design, IP, and compound semiconductors. This includes supporting regional clusters like Cambridge and South Wales, which have proven track records of innovation.
  • Establish a National Semiconductor Centre: This would act as a central hub for the industry, providing support for businesses, coordinating R&D efforts, and fostering collaboration between academia and industry.
  • Expand R&D tax credits: The report calls for the inclusion of capital expenditure in R&D tax credits to incentivise investment in new facilities and equipment.
  • Create a Design Competence Centre: This would provide shared facilities for chip designers, reducing the financial risk of innovation and supporting the development of advanced designs.
  • Nurture skills: The UK must address the skills shortage in the semiconductor sector by upskilling workers, attracting international talent, and promoting STEM education.
  • Capitalise on global partnerships: The UK must strengthen its position in the global semiconductor supply chain by forming strategic partnerships with allied countries. This includes collaborating on R&D, securing access to critical materials, and navigating export controls.

Urgent action is required to secure the UK semiconductor industry

The report warns that the UK risks falling behind other nations if it does not act quickly. Countries like the US, China, and the EU have already announced significant investments in their domestic semiconductor industries.

The European Chips Act, for example, has committed €43 billion to support semiconductor infrastructure, skills, and startups.

“Governments across the world are acting quickly to attract semiconductor companies while also building domestic capability,” the report states. “The UK must use its existing resources tactically, playing to its globally recognised strengths within the semiconductor value chain.”

The UK’s semiconductor industry has the potential to be a global leader, but this will require sustained investment, strategic planning, and collaboration between government, industry, and academia.

“The UK Government should look to its semiconductor ambitions as an essential part of delivering the wider Industrial Strategy and securing not just the fastest growth in the G7, but also secure and resilient economic growth,” the report concludes.

(Photo by Rocco Dipoppa)

See also: AI in 2025: Purpose-driven models, human integration, and more

AI Action Summit: Leaders call for unity and equitable development
https://www.artificialintelligence-news.com/news/ai-action-summit-leaders-call-for-unity-equitable-development/
Mon, 10 Feb 2025 13:07:09 +0000
As the 2025 AI Action Summit kicks off in Paris, global leaders, industry experts, and academics are converging to address the challenges and opportunities presented by AI.

Against the backdrop of rapid technological advancements and growing societal concerns, the summit aims to build on the progress made since the 2024 Seoul Safety Summit and establish a cohesive global framework for AI governance.  

AI Action Summit is ‘a wake-up call’

French President Emmanuel Macron has described the summit as “a wake-up call for Europe,” emphasising the need for collective action in the face of AI’s transformative potential. This comes as the US has committed $500 billion to AI infrastructure.

The UK, meanwhile, has unveiled its AI Opportunities Action Plan ahead of the full implementation of the UK AI Act. Before the summit, UK tech minister Peter Kyle told The Guardian the AI race must be led by “western, liberal, democratic” countries.

These developments signal a renewed global dedication to harnessing AI’s capabilities while addressing its risks.  

Matt Cloke, CTO at Endava, highlighted the importance of bridging the gap between AI’s potential and its practical implementation.


“Much of the conversation is set to focus on understanding the risks involved with using AI while helping to guide decision-making in an ever-evolving landscape,” he said.  

Cloke also stressed the role of organisations in ensuring AI adoption goes beyond regulatory frameworks.

“Modernising core systems enables organisations to better harness AI while ensuring regulatory compliance,” he explained.

“With improved data management, automation, and integration capabilities, these systems make it easier for organisations to stay agile and quickly adapt to impending regulatory changes.”  

Governance and workforce among critical AI Action Summit topics

Kit Cox, CTO and Founder of Enate, outlined three critical areas for the summit’s agenda.


“First, AI governance needs urgent clarity,” he said. “We must establish global guidelines to ensure AI is safe, ethical, and aligned across nations. A disconnected approach won’t work; we need unity to build trust and drive long-term progress.”

Cox also emphasised the need for a future-ready workforce.

“Employers and governments must invest in upskilling the workforce for an AI-driven world,” he said. “This isn’t just about automation replacing jobs; it’s about creating opportunities through education and training that genuinely prepare people for the future of work.”  

Finally, Cox called for democratising AI’s benefits.

“AI must be fair and democratic both now and in the future,” he said. “The benefits can’t be limited to a select few. We must ensure that AI’s power reaches beyond Silicon Valley to all corners of the globe, creating opportunities for everyone to thrive.”  

Developing AI in the public interest

Professor Gina Neff, Professor of Responsible AI at Queen Mary University of London and Executive Director at Cambridge University’s Minderoo Centre for Technology & Democracy, stressed the importance of making AI relatable to everyday life.


“For us in civil society, it’s essential that we bring imaginaries about AI into the everyday,” she said. “From the barista who makes your morning latte to the mechanic fixing your car, they all have to understand how AI impacts them and, crucially, why AI is a human issue.”  

Neff also pushed back against big tech’s dominance in AI development.

“I’ll be taking this spirit of public interest into the Summit and pushing back against big tech’s push for hyperscaling. Thinking about AI as something we’re building together – like we do our cities and local communities – puts us all in a better place.”

Addressing bias and building equitable AI

Professor David Leslie, Professor of Ethics, Technology, and Society at Queen Mary University of London, highlighted the unresolved challenges of bias and diversity in AI systems.

“Over a year after the first AI Safety Summit at Bletchley Park, only incremental progress has been made to address the many problems of cultural bias and toxic and imbalanced training data that have characterised the development and use of Silicon Valley-led frontier AI systems,” he said.


Leslie called for a renewed focus on public interest AI.

“The French AI Action Summit promises to refocus the conversation on AI governance to tackle these and other areas of immediate risk and harm,” he explained. “A main focus will be to think about how to advance public interest AI for all through mission-driven and society-led funding.”  

He proposed the creation of a public interest AI foundation, supported by governments, companies, and philanthropic organisations.

“This type of initiative will have to address issues of algorithmic and data biases head on, at concrete and practice-based levels,” he said. “Only then can it stay true to the goal of making AI technologies – and the infrastructures upon which they depend – accessible global public goods.”  

Systematic evaluation  

Professor Maria Liakata, Professor of Natural Language Processing at Queen Mary University of London, emphasised the need for rigorous evaluation of AI systems.


“AI has the potential to make public service more efficient and accessible,” she said. “But at the moment, we are not evaluating AI systems properly. Regulators are currently on the back foot with evaluation, and developers have no systematic way of offering the evidence regulators need.”  

Liakata called for a flexible and systematic approach to AI evaluation.

“We must remain agile and listen to the voices of all stakeholders,” she said. “This would give us the evidence we need to develop AI regulation and help us get there faster. It would also help us get better at anticipating the risks posed by AI.”  

AI in healthcare: Balancing innovation and ethics

Dr Vivek Singh, Lecturer in Digital Pathology at Barts Cancer Institute, Queen Mary University of London, highlighted the ethical implications of AI in healthcare.


“The Paris AI Action Summit represents a critical opportunity for global collaboration on AI governance and innovation,” he said. “I hope to see actionable commitments that balance ethical considerations with the rapid advancement of AI technologies, ensuring they benefit society as a whole.”  

Singh called for clear frameworks for international cooperation.

“A key outcome would be the establishment of clear frameworks for international cooperation, fostering trust and accountability in AI development and deployment,” he said.  

AI Action Summit: A pivotal moment

The 2025 AI Action Summit in Paris represents a pivotal moment for global AI governance. With calls for unity, equity, and public interest at the forefront, the summit aims to address the challenges of bias, regulation, and workforce readiness while ensuring AI’s benefits are shared equitably.

As world leaders and industry experts converge, the hope is that actionable commitments will pave the way for a more inclusive and ethical AI future.

(Photo by Jorge Gascón)

See also: EU AI Act: What businesses need to know as regulations go live

NEPC: AI sprint risks environmental catastrophe
https://www.artificialintelligence-news.com/news/nepc-ai-sprint-risks-environmental-catastrophe/
Fri, 07 Feb 2025 12:32:41 +0000
The government is urged to mandate stricter reporting for data centres to mitigate environmental risks associated with the AI sprint.

A report published today by the National Engineering Policy Centre (NEPC) highlights the urgent need for data centres to adopt greener practices, particularly as the government’s AI Opportunities Action Plan gains traction.

The report, Engineering Responsible AI: Foundations for Environmentally Sustainable AI, was developed in collaboration with the Royal Academy of Engineering, the Institution of Engineering and Technology, and BCS, the Chartered Institute of IT.

While stressing that data centres enabling AI systems can be built to consume fewer resources like energy and water, the report highlights that infrastructure and regulatory conditions must align for these efficiencies to materialise.

Unlocking the potential of AI while minimising environmental risks  

AI is heralded as capable of driving economic growth, creating jobs, and improving livelihoods. Launched as a central pillar of the UK’s tech strategy, the AI Opportunities Action Plan is intended to “boost economic growth, provide jobs for the future and improve people’s everyday lives.”  

Use cases for AI that are already generating public benefits include accelerating drug discovery, forecasting weather events, optimising energy systems, and even aiding climate science and improving sustainability efforts. However, this growing reliance on AI also poses environmental risks from the infrastructure required to power these systems.  

Data centres, which serve as the foundation of AI technologies, consume vast amounts of energy and water. Increasing demand has raised concerns about global competition for limited resources, such as sustainable energy and drinking water. Google and Microsoft, for instance, have recorded rising water usage by their data centres each year since 2020. Much of this water comes from drinking sources, sparking fears about resource depletion.  

With plans already in place to reform the UK’s planning system to facilitate the construction of data centres, the report calls for urgent policies to manage their environmental impact. Accurate and transparent data on resource consumption is currently lacking, which hampers policymakers’ ability to assess the true scale of these impacts and act accordingly.

Five steps to sustainable AI  

The NEPC is urging the government to spearhead change by prioritising sustainable AI development. The report outlines five key steps policymakers can act upon immediately to position the UK as a leader in resource-efficient AI:  

  1. Expand environmental reporting mandates
  2. Communicate the sector’s environmental impacts
  3. Set sustainability requirements for data centres
  4. Reconsider data collection, storage, and management practices
  5. Lead by example with government investment

Mandatory environmental reporting forms a cornerstone of the recommendations. This involves measuring data centres’ energy sources, water consumption, carbon emissions, and e-waste recycling practices to provide the resource use data necessary for policymaking.  

Raising public awareness is also vital. Communicating the environmental costs of AI can encourage developers to optimise AI tools, use smaller datasets, and adopt more efficient approaches. Notably, the report recommends embedding environmental design and sustainability topics into computer science and AI education at both school and university levels.  

Smarter, greener data centres  

One of the most urgent calls to action involves redesigning data centres to reduce their environmental footprint. The report advocates for innovations like waste heat recovery systems, zero drinking water use for cooling, and the exclusive use of 100% carbon-free energy certificates.  

Efforts like those at Queen Mary University of London, where residual heat from a campus data centre is repurposed to provide heating and hot water, offer a glimpse into the possibilities of greener tech infrastructure.  

In addition, the report suggests revising legislation on mandatory data retention to reduce the unnecessary environmental costs of storing vast amounts of data long-term. Proposals for a National Data Library could drive best practices by centralising and streamlining data storage.  

Professor Tom Rodden, Pro-Vice-Chancellor at the University of Nottingham and Chair of the working group behind the report, urged swift action:  

“In recent years, advances in AI systems and services have largely been driven by a race for size and scale, demanding increasing amounts of computational power. As a result, AI systems and services are growing at a rate unparalleled by other high-energy systems—generally without much regard for resource efficiency.  

“This is a dangerous trend, and we face a real risk that our development, deployment, and use of AI could do irreparable damage to the environment.”  

Rodden added that reliable data on these impacts is critical. “To build systems and services that effectively use resources, we first need to effectively monitor their environmental cost. Once we have access to trustworthy data… we can begin to effectively target efficiency in development, deployment, and use – and plan a sustainable AI future for the UK.”

Dame Dawn Childs, CEO of Pure Data Centres Group, underscored the role of engineering in improving efficiency. “Some of this will come from improvements to AI models and hardware, making them less energy-intensive. But we must also ensure that the data centres housing AI’s computing power and storage are as sustainable as possible.  

“That means prioritising renewable energy, minimising water use, and reducing carbon emissions – both directly and indirectly. Using low-carbon building materials is also essential.”  

Childs emphasised the importance of a coordinated approach from the start of projects. “As the UK government accelerates AI adoption – through AI Growth Zones and streamlined planning for data centres – sustainability must be a priority at every step.”  

For Alex Bardell, Chair of BCS’ Green IT Specialist Group, the focus is on optimising AI processes. “Our report has discussed optimising models for efficiency. Previous attempts to limit the drive toward increased computational power and larger models have faced significant resistance, with concerns that the UK may fall behind in the AI arena; this may not necessarily be true.  

“It is crucial to reevaluate our approach to developing sustainable AI in the future.”  

Time for transparency around AI environmental risks

Public awareness of AI’s environmental toll remains low. Recent research by the Institution of Engineering and Technology (IET) found that fewer than one in six UK residents are aware of the significant environmental costs associated with AI systems.  

“AI providers must be transparent about these effects,” said Professor Sarvapali Ramchurn, CEO of Responsible AI UK and a Fellow of the IET. “If we cannot measure it, we cannot manage it, nor ensure benefits for all. This report’s recommendations will aid national discussions on the sustainability of AI systems and the trade-offs involved.”  

As the UK pushes forward with ambitious plans to lead in AI development, ensuring environmental sustainability must take centre stage. By adopting policies and practices outlined in the NEPC report, the government can support AI growth while safeguarding finite resources for future generations.

(Photo by Braden Collum)

See also: Sustainability is key in 2025 for businesses to advance AI efforts
