europe Archives - AI News
https://www.artificialintelligence-news.com/news/tag/europe/

UK opens Europe’s first E-Beam semiconductor chip lab
https://www.artificialintelligence-news.com/news/uk-opens-europe-first-e-beam-semiconductor-chip-lab/
Wed, 30 Apr 2025 11:35:03 +0000

The post UK opens Europe’s first E-Beam semiconductor chip lab appeared first on AI News.

The UK has cut the ribbon on a pioneering electron beam (E-Beam) lithography facility to build the semiconductor chips of the future. What makes this special? It’s the first of its kind in Europe, and only the second facility like it on the planet—the other being in Japan.

So, what’s the big deal about E-Beam lithography? Imagine trying to draw incredibly complex patterns, but thousands of times smaller than a human hair. That’s essentially what this technology does, using a focused beam of tiny electrons.

Such precision is vital for designing the microscopic components inside the chips that run everything from our smartphones and gaming consoles to life-saving medical scanners and advanced defence systems.

Semiconductors are already big business for the UK, adding around £10 billion to its economy each year. And that figure is only expected to climb, potentially hitting £17 billion by the end of the decade.

Nurturing this sector is a major opportunity for the UK—not just for bragging rights in advanced manufacturing, but for creating high-value jobs and driving real economic growth.

Speaking at the launch of the facility in Southampton, Science Minister Lord Patrick Vallance said: “Britain is home to some of the most exciting semiconductor research anywhere in the world—and Southampton’s new E-Beam facility is a major boost to our national capabilities.

“By investing in both infrastructure and talent, we’re giving our researchers and innovators the support they need to develop next-generation chips right here in the UK.”

Lord Vallance’s visit wasn’t just a photo opportunity, though. It came alongside some sobering news: fresh research published today highlights that one of the biggest hurdles facing the UK’s growing chip industry is finding enough people with the right skills.

We’re talking about a serious talent crunch. When you consider that a single person working in semiconductors contributes an average of £460,000 to the economy each year, you can see why plugging this skills gap is so critical.
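Taken together with the £10 billion annual contribution mentioned earlier, that per-head figure implies a strikingly small workforce. A back-of-envelope sketch using only the article’s numbers (the variable names are illustrative):

```python
# Figures quoted in the article.
sector_value_gbp = 10e9        # sector adds ~£10 billion to the economy per year
value_per_worker_gbp = 460e3   # ~£460,000 contributed per person per year

implied_workforce = sector_value_gbp / value_per_worker_gbp
print(f"Implied workforce: ~{implied_workforce:,.0f} people")  # ~21,739
```

Tens of thousands of people underpinning a £10 billion sector is exactly why a shortfall of even a few hundred graduates a year registers as a serious talent crunch.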

So, what’s the plan? The government isn’t just acknowledging the problem; they’re putting money where their mouth is with a £4.75 million semiconductor skills package. The idea is to build up that talent pipeline, making sure universities like Southampton – already powerhouses of chip innovation – have resources like the E-Beam lab and the students they need.

“Our £4.75 million skills package will support our Plan for Change by helping more young people into high-value semiconductors careers, closing skills gaps and backing growth in this critical sector,” Lord Vallance explained.

Here’s where that cash is going:

  • Getting students hooked (£3 million): Fancy £5,000 towards your degree? 300 students starting Electronics and Electrical Engineering courses this year will get just that, along with specific learning modules to show them what a career in semiconductors actually involves, particularly in chip design and manufacturing.
  • Practical chip skills (£1.2 million): It’s one thing learning the theory, another designing a real chip. This pot will fund new hands-on chip design courses for students (undergrad and postgrad) and even train up lecturers. They’re also looking into creating conversion courses to tempt talented people from other fields into the chip world.
  • Inspiring the next generation (Nearly £550,000): To really build a long-term pipeline, you need to capture interest early. This funding aims to give 7,000 teenagers (15-18) and 450 teachers some real, hands-on experience with semiconductors, working with local companies in existing UK chip hotspots like Newport, Cambridge, and Glasgow. The goal is to show young people the cool career paths available right on their doorstep.
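As a quick consistency check, the three strands above do add up to the headline figure. A minimal sketch using the article’s own numbers (variable names are illustrative):

```python
# Funding strands quoted in the article, in £ millions.
student_incentives = 3.0   # bursaries plus semiconductor learning modules
practical_skills = 1.2     # hands-on chip design courses and lecturer training
outreach = 0.55            # "nearly £550,000" for teenagers and teachers

package_total = student_incentives + practical_skills + outreach
print(f"Total package: £{package_total:.2f}m")  # matches the £4.75m announced

# The bursary element alone: 300 students receiving £5,000 each.
bursaries = 300 * 5_000
print(f"Bursary subtotal: £{bursaries:,}")  # £1,500,000 of the £3m strand
```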

Ultimately, the hope is that this targeted support will give the UK semiconductor scene the skilled workforce it needs to thrive. It’s about encouraging more students to jump into these valuable careers, helping companies find the people they desperately need, and making sure the UK stays at the forefront of the technologies that will shape tomorrow’s economy.

Professor Graham Reed, who heads up the Optoelectronics Research Centre (ORC) at Southampton University, commented: “The introduction of the new E-Beam facility will reinforce our position of hosting the most advanced cleanroom in UK academia.

“It facilitates a vast array of innovative and industrially relevant research, and much needed semiconductor skills training.”

Putting world-class tools in the hands of researchers while simultaneously investing in the people who will use them will help to cement the UK’s leadership in semiconductors.

See also: AI in education: Balancing promises and pitfalls

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

Meta will train AI models using EU user data
https://www.artificialintelligence-news.com/news/meta-will-train-ai-models-using-eu-user-data/
Tue, 15 Apr 2025 16:32:02 +0000

Meta has confirmed plans to utilise content shared by its adult users in the European Union (EU) to train its AI models.

The announcement follows the recent launch of Meta AI features in Europe and aims to enhance the capabilities and cultural relevance of its AI systems for the region’s diverse population.   

In a statement, Meta wrote: “Today, we’re announcing our plans to train AI at Meta using public content – like public posts and comments – shared by adults on our products in the EU.

“People’s interactions with Meta AI – like questions and queries – will also be used to train and improve our models.”

Starting this week, users of Meta’s platforms (including Facebook, Instagram, WhatsApp, and Messenger) within the EU will receive notifications explaining the data usage. These notifications, delivered both in-app and via email, will detail the types of public data involved and link to an objection form.

“We have made this objection form easy to find, read, and use, and we’ll honor all objection forms we have already received, as well as newly submitted ones,” Meta explained.

Meta explicitly clarified that certain data types remain off-limits for AI training purposes.

The company says it will not “use people’s private messages with friends and family” to train its generative AI models. Furthermore, public data associated with accounts belonging to users under the age of 18 in the EU will not be included in the training datasets.

Meta wants to build AI tools designed for EU users

Meta positions this initiative as a necessary step towards creating AI tools designed for EU users. Meta launched its AI chatbot functionality across its messaging apps in Europe last month, framing this data usage as the next phase in improving the service.

“We believe we have a responsibility to build AI that’s not just available to Europeans, but is actually built for them,” the company explained. 

“That means everything from dialects and colloquialisms, to hyper-local knowledge and the distinct ways different countries use humor and sarcasm on our products.”

This becomes increasingly pertinent as AI models evolve with multi-modal capabilities spanning text, voice, video, and imagery.   

Meta also situated its actions in the EU within the broader industry landscape, pointing out that training AI on user data is common practice.

“It’s important to note that the kind of AI training we’re doing is not unique to Meta, nor will it be unique to Europe,” the statement reads. 

“We’re following the example set by others including Google and OpenAI, both of which have already used data from European users to train their AI models.”

Meta further claimed its approach surpasses others in openness, stating, “We’re proud that our approach is more transparent than many of our industry counterparts.”   

Regarding regulatory compliance, Meta referenced prior engagement with regulators, including a delay initiated last year while awaiting clarification on legal requirements. The company also cited a favourable opinion from the European Data Protection Board (EDPB) in December 2024.

“We welcome the opinion provided by the EDPB in December, which affirmed that our original approach met our legal obligations,” wrote Meta.

Broader concerns over AI training data

While Meta presents its approach in the EU as transparent and compliant, the practice of using vast swathes of public user data from social media platforms to train large language models (LLMs) and generative AI continues to raise significant concerns among privacy advocates.

Firstly, the definition of “public” data can be contentious. Content shared publicly on platforms like Facebook or Instagram may not have been posted with the expectation that it would become raw material for training commercial AI systems capable of generating entirely new content or insights. Users might share personal anecdotes, opinions, or creative works publicly within their perceived community, without envisaging its large-scale, automated analysis and repurposing by the platform owner.

Secondly, the effectiveness and fairness of an “opt-out” system versus an “opt-in” system remain debatable. Placing the onus on users to actively object, often after receiving notifications buried amongst countless others, raises questions about informed consent. Many users may not see, understand, or act upon the notification, potentially leading to their data being used by default rather than explicit permission.
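The arithmetic behind that concern is easy to sketch. Assuming purely illustrative rates for how many users notice a notification and then act on it (both figures below are assumptions for the sketch, not Meta data), the default setting dominates the outcome:

```python
# Illustrative rates only -- assumptions, not measured figures.
p_notice = 0.4   # fraction of users who actually see the notification
p_act = 0.25     # fraction of those who go on to use the objection form

p_responds = p_notice * p_act  # users who actively exercise the choice

# Under opt-out, everyone who does nothing is included by default.
share_included_opt_out = 1 - p_responds
# Under opt-in, only active responders are included.
share_included_opt_in = p_responds

print(f"Opt-out: {share_included_opt_out:.0%} of users' data used")  # 90%
print(f"Opt-in:  {share_included_opt_in:.0%} of users' data used")   # 10%
```

Even with generous assumptions about notice visibility, inclusion flips from a small minority to an overwhelming majority purely on the default, which is the crux of the informed-consent objection.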

Thirdly, the issue of inherent bias looms large. Social media platforms reflect and sometimes amplify societal biases, including racism, sexism, and misinformation. AI models trained on this data risk learning, replicating, and even scaling these biases. While companies employ filtering and fine-tuning techniques, eradicating bias absorbed from billions of data points is an immense challenge. An AI trained on European public data needs careful curation to avoid perpetuating stereotypes or harmful generalisations about the very cultures it aims to understand.   

Furthermore, questions surrounding copyright and intellectual property persist. Public posts often contain original text, images, and videos created by users. Using this content to train commercial AI models, which may then generate competing content or derive value from it, enters murky legal territory regarding ownership and fair compensation—issues currently being contested in courts worldwide involving various AI developers.

Finally, while Meta highlights its transparency relative to competitors, the actual mechanisms of data selection, filtering, and its specific impact on model behaviour often remain opaque. Truly meaningful transparency would involve deeper insights into how specific data influences AI outputs and the safeguards in place to prevent misuse or unintended consequences.

The approach taken by Meta in the EU underscores the immense value technology giants place on user-generated content as fuel for the burgeoning AI economy. As these practices become more widespread, the debate surrounding data privacy, informed consent, algorithmic bias, and the ethical responsibilities of AI developers will undoubtedly intensify across Europe and beyond.

(Photo by Julio Lopez)

See also: Apple AI stresses privacy with synthetic and anonymised data

BCG: Analysing the geopolitics of generative AI
https://www.artificialintelligence-news.com/news/bcg-analysing-the-geopolitics-of-generative-ai/
Fri, 11 Apr 2025 16:11:17 +0000

Generative AI is reshaping global competition and geopolitics, presenting challenges and opportunities for nations and businesses alike.

Senior figures from Boston Consulting Group (BCG) and its tech division, BCG X, discussed the intricate dynamics of the global AI race, the dominance of superpowers like the US and China, the role of emerging “middle powers,” and the implications for multinational corporations.

AI investments expose businesses to increasingly tense geopolitics

Sylvain Duranton, Global Leader at BCG X, noted the significant geopolitical risk companies face: “For large companies, close to half of them, 44%, have teams around the world, not just in one country where their headquarters are.”

Many of these businesses operate across numerous countries, making them vulnerable to differing regulations and sovereignty issues. “They’ve built their AI teams and ecosystem far before there was such tension around the world.”

Duranton also pointed to the stark imbalance in the AI supply race, particularly in investment.

In the market capitalisation of tech companies, the US dwarfs Europe by a factor of 20 and the Asia Pacific region by a factor of five. Investment figures paint a similar picture, showing a “completely disproportionate” imbalance compared to the relative sizes of the economies.

This AI race is fuelled by massive investments in compute power, frontier models, and the emergence of lighter, open-weight models changing the competitive dynamic.   

Benchmarking national AI capabilities

Nikolaus Lang, Global Leader at the BCG Henderson Institute – BCG’s think tank – detailed the extensive research undertaken to benchmark national GenAI capabilities objectively.

The team analysed the “upstream of GenAI,” focusing on large language model (LLM) development and its six key enablers: capital, computing power, intellectual property, talent, data, and energy.

Using hard data like AI researcher numbers, patents, data centre capacity, and VC investment, they created a comparative analysis. Unsurprisingly, the analysis revealed the US and China as the clear AI frontrunners, both maintaining their leads in the geopolitics of AI.
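A benchmarking exercise of this kind typically normalises each enabler so countries can be compared on a common scale. As a minimal sketch (not BCG’s actual methodology), min-max normalising the data-centre capacities quoted later in the article looks like this:

```python
# Data-centre capacity in GW, as quoted in the article.
capacity_gw = {"US": 45, "China": 20, "EU": 8}

def min_max_normalise(values: dict[str, float]) -> dict[str, float]:
    """Rescale values to [0, 1]; the same step would apply to each enabler."""
    lo, hi = min(values.values()), max(values.values())
    return {k: (v - lo) / (hi - lo) for k, v in values.items()}

scores = min_max_normalise(capacity_gw)
for country, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{country}: {score:.2f}")  # US 1.00, China 0.32, EU 0.00
```

Averaging such per-enabler scores across capital, compute, IP, talent, data, and energy would yield a composite ranking; the weights and exact metrics are the judgment calls any such index has to defend.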

The US boasts the largest pool of AI specialists (around half a million), immense capital power ($303bn in VC funding, $212bn in tech R&D), and leading compute power (45 GW).

Lang highlighted America’s historical dominance, noting, “the US has been the largest producer of notable AI models with 67%” since 1950, a lead reflected in today’s LLM landscape. This strength is reinforced by “outsized capital power” and strategic restrictions on advanced AI chip access through frameworks like the US AI Diffusion Framework.   

China, the second AI superpower, shows particular strength in data—ranking highly in e-governance and mobile broadband subscriptions, alongside significant data centre capacity (20 GW) and capital power. 

Despite restricted access to the latest chips, Chinese LLMs are rapidly closing the gap with US models. Lang mentioned the emergence of models like DeepSeek as evidence of this trend, achieved with smaller teams, fewer GPU hours, and previous-generation chips.

China’s progress is also fuelled by heavy investment in AI academic institutions (hosting 45 of the world’s top 100), a leading position in AI patent applications, and significant government-backed VC funding. Lang predicts “governments will play an important role in funding AI work going forward.”

The middle powers: Europe, Middle East, and Asia

Beyond the superpowers, several “middle powers” are carving out niches.

  • EU: While trailing the US and China, the EU holds the third spot with significant data centre capacity (8 GW) and the world’s second-largest AI talent pool (275,000 specialists) when capabilities are combined. Europe also leads in top AI publications. Lang stressed the need for bundled capacities, suggesting AI, defence, and renewables are key areas for future EU momentum.
  • Middle East (UAE & Saudi Arabia): These nations leverage strong capital power via sovereign wealth funds and competitively low electricity prices to attract talent and build compute power, aiming to become AI drivers “from scratch”. They show positive dynamics in attracting AI specialists and are climbing the ranks in AI publications.   
  • Asia (Japan & South Korea): Leveraging strong existing tech ecosystems in hardware and gaming, these countries invest heavily in R&D (around $207bn combined by top tech firms). Government support, particularly in Japan, fosters both supply and demand. Local LLMs and strategic investments by companies like Samsung and SoftBank demonstrate significant activity.   
  • Singapore: Singapore is boosting its AI ecosystem by focusing on talent upskilling programmes, supporting Southeast Asia’s first LLM, ensuring data centre capacity, and fostering adoption through initiatives like establishing AI centres of excellence.   

The geopolitics of generative AI: Strategy and sovereignty

The geopolitics of generative AI is being shaped by four clear dynamics: the US retains its lead, driven by an unrivalled tech ecosystem; China is rapidly closing the gap; middle powers face a strategic choice between building supply or accelerating adoption; and government funding is set to play a pivotal role, particularly as R&D costs climb and commoditisation sets in.

As geopolitical tensions mount, businesses are likely to diversify their GenAI supply chains to spread risk. The race ahead will be defined by how nations and companies navigate the intersection of innovation, policy, and resilience.

(Photo by Markus Krisetya)

See also: OpenAI counter-sues Elon Musk for attempts to ‘take down’ AI rival

UK forms AI Energy Council to align growth and sustainability goals
https://www.artificialintelligence-news.com/news/uk-forms-ai-energy-council-align-growth-sustainability-goals/
Tue, 08 Apr 2025 14:10:49 +0000

The UK government has announced the first meeting of a new AI Energy Council aimed at ensuring the nation’s AI and clean energy goals work in tandem to drive economic growth.

The inaugural meeting of the council will see members agree on its core objectives, with a central focus on how the government’s mission to become a clean energy superpower can support its commitment to advancing AI and compute infrastructure.

Unveiled earlier this year as part of the government’s response to the AI Opportunities Action Plan, the council will serve as a crucial platform for bringing together expert insights on the significant energy demands associated with the AI sector.

Concerns surrounding the substantial energy requirements of AI data centres are a global challenge. The UK is proactively addressing this issue through initiatives like the establishment of new AI Growth Zones.

These zones are dedicated hubs for AI development that are strategically located in areas with access to at least 500MW of power—an amount equivalent to powering approximately two million homes. This approach is designed to attract private investment from companies looking to establish operations in Britain, ultimately generating local jobs and boosting the economy.
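The 500MW figure can be sanity-checked against the two-million-homes comparison. A back-of-envelope sketch (the implied per-home draw is an average, not peak demand):

```python
# Figures quoted in the article.
zone_capacity_w = 500e6   # at least 500 MW per AI Growth Zone
homes = 2_000_000         # "approximately two million homes"

avg_draw_per_home_w = zone_capacity_w / homes
print(f"Implied average draw: {avg_draw_per_home_w:.0f} W per home")  # 250 W
```

An average household draw in the low hundreds of watts is consistent with typical UK domestic electricity consumption, so the comparison holds up as an average rather than a peak figure.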

Peter Kyle, Secretary of State for Science, Innovation, and Technology, said: “The work of the AI Energy Council will ensure we aren’t just powering our AI needs to deliver new waves of opportunity in all parts of the country, but can do so in a way which is responsible and sustainable.

“This requires a broad range of expertise from industry and regulators as we fire up the UK’s economic engine to make it fit for the age of AI—meaning we can deliver the growth which is the beating heart of our Plan for Change.”

The Council is also expected to delve into the role of clean energy sources, including renewables and nuclear, in powering the AI revolution.

A key aspect of its work will involve advising on how to improve energy efficiency and sustainability within AI and data centre infrastructure, with specific considerations for resource usage such as water. Furthermore, the council will take proactive steps to ensure the secure adoption of AI across the UK’s critical energy network itself.

Ed Miliband, Secretary of State for Energy Security and Net Zero, commented: “We are making the UK a clean energy superpower, building the homegrown energy this country needs to protect consumers and businesses, and drive economic growth, as part of our Plan for Change.

“AI can play an important role in building a new era of clean electricity for our country and as we unlock AI’s potential, this Council will help secure a sustainable scale up to benefit businesses and communities across the UK.”

In a parallel effort to facilitate the growth of the AI sector, the UK government has been working closely with energy regulator Ofgem and the National Energy System Operator (NESO) to implement fundamental reforms to the UK’s connections process.

Subject to final sign-offs from Ofgem, these reforms could potentially unlock more than 400GW of capacity from the connection queue. This acceleration of projects is deemed vital for economic growth, particularly for the delivery of new large-scale AI data centres that require significant power infrastructure.

The newly-formed AI Energy Council comprises representatives from 14 key organisations across the energy and technology sectors, including regulators and leading companies. These members will contribute their expert insights to support the council’s work and ensure a collaborative approach to addressing the energy challenges and opportunities presented by AI.

Among the prominent organisations joining the council are EDF, Scottish Power, National Grid, technology giants Google, Microsoft, Amazon Web Services (AWS), and chip designer ARM, as well as infrastructure investment firm Brookfield.

This collaborative framework, uniting the energy and technology sectors, aims to ensure seamless coordination in speeding up the connection of energy projects to the national grid. This is particularly crucial given the increasing number of technology companies announcing plans to build data centres across the UK.

Alison Kay, VP for UK and Ireland at AWS, said: “At Amazon, we’re working to meet the future energy needs of our customers, while remaining committed to powering our operations in a more sustainable way, and progressing toward our Climate Pledge commitment to become net-zero carbon by 2040.

“As the world’s largest corporate purchaser of renewable energy for the fifth year in a row, we share the government’s goal to ensure the UK has sufficient access to carbon-free energy to support its AI ambitions and to help drive economic growth.”

Jonathan Brearley, CEO of Ofgem, added: “AI will play an increasingly important role in transforming our energy system to be cleaner, more efficient, and more cost-effective for consumers, but only if used in a fair, secure, sustainable, and safe way.

“Working alongside other members of this Council, Ofgem will ensure AI implementation puts consumer interests first – from customer service to infrastructure planning and operation – so that everyone feels the benefits of this technological innovation in energy.”

This initiative aligns with the government’s Clean Power Action Plan, which focuses on connecting more homegrown clean power to the grid by building essential infrastructure and prioritising projects needed for 2030. The aim is to clear the grid connection queue, enabling crucial infrastructure projects – from housing to gigafactories and data centres – to gain access to the grid, thereby unlocking billions in investment and fostering economic growth.

Furthermore, the government is streamlining planning approvals to significantly reduce the time it takes for infrastructure projects to get off the ground. This accelerated process will ensure that AI innovators can readily access cutting-edge infrastructure and the necessary power to drive forward the next wave of AI advancements.

(Photo by Vlad Hilitanu)

See also: Tony Blair Institute AI copyright report sparks backlash

Tony Blair Institute AI copyright report sparks backlash
https://www.artificialintelligence-news.com/news/tony-blair-institute-ai-copyright-report-sparks-backlash/
Wed, 02 Apr 2025 11:04:11 +0000

The Tony Blair Institute (TBI) has released a report calling for the UK to lead in navigating the complex intersection of arts and AI.

According to the report, titled ‘Rebooting Copyright: How the UK Can Be a Global Leader in the Arts and AI,’ the global race for cultural and technological leadership is still up for grabs, and the UK has a golden opportunity to take the lead.

The report emphasises that countries that “embrace change and harness the power of artificial intelligence in creative ways will set the technical, aesthetic, and regulatory standards for others to follow.”

Highlighting that we are in the midst of another revolution in media and communication, the report notes that AI is disrupting how textual, visual, and audio content is created, distributed, and experienced, much like the printing press, gramophone, and camera did before it.

“AI will usher in a new era of interactive and bespoke works, as well as a counter-revolution that celebrates everything that AI can never be,” the report states.

However, far from signalling the end of human creativity, the TBI suggests AI will open up “new ways of being original.”

The AI revolution’s impact isn’t limited to the creative industries; it’s being felt across all areas of society. Scientists are using AI to accelerate discoveries, healthcare providers are employing it to analyse X-ray images, and emergency services utilise it to locate houses damaged by earthquakes.

The report stresses that these cross-industry advancements are just the beginning, with future AI systems set to become increasingly capable, fuelled by advancements in computing power, data, model architectures, and access to talent.

The UK government has expressed its ambition to be a global leader in AI through its AI Opportunities Action Plan, announced by Prime Minister Keir Starmer on 13 January 2025. For its part, the TBI welcomes the UK government’s ambition, stating that “if properly designed and deployed, AI can make human lives healthier, safer, and more prosperous.”

However, the rapid spread of AI across sectors raises urgent policy questions, particularly concerning the data used for AI training. The application of UK copyright law to the training of AI models is currently contested, with the debate often framed as a “zero-sum game” between AI developers and rights holders. The TBI argues that this framing “misrepresents the nature of the challenge and the opportunity before us.”

The report emphasises that “bold policy solutions are needed to provide all parties with legal clarity and unlock investments that spur innovation, job creation, and economic growth.”

According to the TBI, AI presents opportunities for creators; the report notes its use in fields from podcasts to filmmaking. It draws parallels with past technological innovations, such as the printing press and the internet, which were initially met with resistance but ultimately saw society adapt and human ingenuity prevail.

The TBI proposes that the solution lies not in clinging to outdated copyright laws but in allowing them to “co-evolve with technological change” to remain effective in the age of AI.

The UK government has proposed a text and data mining exception with an opt-out option for rights holders. While the TBI views this as a good starting point for balancing stakeholder interests, it acknowledges the “significant implementation and enforcement challenges” that come with it, spanning legal, technical, and geopolitical dimensions.

In the report, the Tony Blair Institute for Global Change “assesses the merits of the UK government’s proposal and outlines a holistic policy framework to make it work in practice.”

The report includes recommendations and examines novel forms of art that will emerge from AI. It also delves into the disagreement between rights holders and developers on copyright, the wider implications of copyright policy, and the serious hurdles the UK’s text and data mining proposal faces.

Furthermore, the Tony Blair Institute explores the governance and implementation challenges of an opt-out policy, how to make opt-outs useful and accessible, and how to tackle the diffusion problem. It also addresses AI summaries and the identity problems they present, along with defensive tools as a partial solution and approaches to solving licensing problems.

The report also seeks to clarify the standards on human creativity, address digital watermarking, and discuss the uncertainty around the impact of generative AI on the industry. It proposes establishing a Centre for AI and the Creative Industries and discusses the risk of judicial review, the benefits of a remuneration scheme, and the advantages of a targeted levy on ISPs to raise funding for the Centre.

However, the report has faced strong criticism. Ed Newton-Rex, CEO of Fairly Trained, raised several concerns on Bluesky. These concerns include:

  • The report repeats the “misleading claim” that existing UK copyright law is uncertain, which Newton-Rex asserts is not the case.
  • The suggestion that an opt-out scheme would give rights holders more control over how their works are used is misleading. Newton-Rex argues that licensing is currently required by law, so moving to an opt-out system would actually decrease control, as some rights holders will inevitably miss the opt-out.
  • The report likens machine learning (ML) training to human learning, a comparison that Newton-Rex finds shocking, given the vastly different scalability of the two.
  • The report’s claim that AI developers won’t make long-term profits from training on people’s work is questioned, with Newton-Rex pointing to the significant funding raised by companies like OpenAI.
  • Newton-Rex suggests the report uses strawman arguments, such as stating that generative AI may not replace all human paid activities.
  • A key criticism is that the report omits data showing how generative AI replaces demand for human creative labour.
  • Newton-Rex also criticises the report’s proposed solutions, specifically the suggestion to set up an academic centre, which he notes “no one has asked for.”
  • Furthermore, he highlights the proposal to tax every household in the UK to fund this academic centre, arguing that this would place the financial burden on consumers rather than the AI companies themselves, and the revenue wouldn’t even go to creators.

Adding to these criticisms, British novelist and author Jonathan Coe noted that “the five co-authors of this report on copyright, AI, and the arts are all from the science and technology sectors. Not one artist or creator among them.”

While the report from the Tony Blair Institute for Global Change supports the government’s ambition to be an AI leader, it also raises critical policy questions—particularly around copyright law and AI training data.

(Photo by Jez Timms)

See also: Amazon Nova Act: A step towards smarter, web-native AI agents

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post Tony Blair Institute AI copyright report sparks backlash appeared first on AI News.

]]>
https://www.artificialintelligence-news.com/news/tony-blair-institute-ai-copyright-report-sparks-backlash/feed/ 0
UK minister in US to pitch Britain as global AI investment hub https://www.artificialintelligence-news.com/news/uk-minister-in-us-pitch-britain-global-ai-investment-hub/ https://www.artificialintelligence-news.com/news/uk-minister-in-us-pitch-britain-global-ai-investment-hub/#respond Thu, 20 Mar 2025 13:18:04 +0000 https://www.artificialintelligence-news.com/?p=104940 The UK aims to secure its position as a global leader with additional AI investment, with Technology Secretary Peter Kyle currently in the US to champion Britain’s credentials. As the UK government prioritises AI within its “Plan for Change,” Kyle’s visit aims to strengthen the special relationship between the UK and the US that has […]

The post UK minister in US to pitch Britain as global AI investment hub appeared first on AI News.

]]>
The UK aims to secure its position as a global leader with additional AI investment, with Technology Secretary Peter Kyle currently in the US to champion Britain’s credentials.

As the UK government prioritises AI within its “Plan for Change,” Kyle’s visit aims to strengthen the special relationship between the UK and the US that has been under particular strain in recent years.

Speaking at NVIDIA’s annual conference in San Jose on 20th March, Kyle outlined the government’s strategy to “rewire” the British economy around AI. This initiative seeks to distribute the benefits of AI-driven wealth creation beyond traditional hubs like Silicon Valley and London, empowering communities across the UK to embrace its opportunities.

Addressing an audience of business leaders, developers, and innovators, the Technology Secretary articulated his vision for leveraging AI and advanced technologies to tackle complex global challenges, positioning Britain as a beacon of innovation.

The UK is actively deploying AI to enhance public services and stimulate economic growth, a cornerstone of the government’s “Plan for Change.”

Kyle is now highlighting the significant potential of the UK’s AI sector, currently valued at over $92 billion and projected to exceed $1 trillion by 2035. This growth trajectory, according to the government, will position Britain as the second-leading AI nation in the democratic world—presenting a wealth of investment opportunities for US companies and financial institutions.
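As a rough check on the scale of that projection, the implied compound annual growth rate can be computed. The sketch below assumes a 2025 base year for the $92 billion valuation; the article does not state one, so the ten-year horizon to 2035 is an illustrative assumption:

```python
# Back-of-the-envelope estimate of the compound annual growth rate (CAGR)
# implied by the article's figures: a $92bn sector growing to $1tn by 2035.
# The 2025 base year is an assumption for illustration.

current_value_bn = 92    # UK AI sector value today, $bn (from the article)
target_value_bn = 1000   # projected value by 2035, $bn
years = 10               # assumed horizon: 2025 -> 2035

# CAGR formula: (end / start) ** (1 / years) - 1
cagr = (target_value_bn / current_value_bn) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")
```

On these assumptions the sector would need to grow at roughly 27% per year, which gives a sense of how ambitious the government's projection is.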

A central theme of Kyle’s message is the readiness of the UK to embrace AI investment, with a particular emphasis on transforming “the relics of economic eras past into the UK’s innovative AI Growth Zones.”

These “AI Growth Zones” are a key element of the government’s AI Opportunities Action Plan. They are strategically designated areas designed to rapidly attract large-scale AI investment through streamlined regulations and dedicated infrastructure.

AI Growth Zones, as the name suggests, are envisioned as vibrant hubs for AI development with a pipeline of new opportunities for companies to scale up and innovate. The Technology Secretary is actively encouraging investors to participate in this new form of partnership.

During his speech at the NVIDIA conference, Kyle is expected to detail how these Growth Zones – benefiting from access to substantial power connections and a planning system designed to expedite construction – will facilitate the development of a compute infrastructure on a scale that the UK “has never seen before.”

The government has already received numerous proposals from local leaders and industry stakeholders across the nation, demonstrating Britain’s eagerness to utilise AI to revitalise communities and drive economic growth throughout the country.

This initiative is expected to contribute to higher living standards across the UK, a key priority for the government over the next four years. The AI Growth Zones are intended to deliver the jobs, investment, and a thriving business environment necessary to improve the financial well-being of citizens and deliver on the “Plan for Change.”

At the NVIDIA conference, Kyle is expected to say: “In empty factories and abandoned mines, in derelict sites and unused power supplies, I see the places where we can begin to build a new economic model. A model completely rewired around the immense power of artificial intelligence.

“Where, faced with that power, the state is neither a blocker nor a shirker—but an agile, proactive partner. In Britain, we want to turn the relics of economic eras past into AI Growth Zones.”

As part of his visit to the US, Peter Kyle will also engage with prominent companies in the tech sector, including OpenAI, Anthropic, NVIDIA, and Vantage. His aim is to encourage more of these companies to establish a presence in the UK, positioning it as their “Silicon Valley home from home.”

Furthermore, the Technology Secretary is expected to state: “There is a real hunger for investment in Britain, and people who are optimistic about the future, and hopeful for the opportunities which AI will bring for them and their families. States owe it to their citizens to support it. Not through diktat or directive, but through partnership.”

The UK Prime Minister and the President of the US have placed AI at the forefront of the transatlantic relationship. During a visit to the White House last month, the Prime Minister confirmed that both nations are collaborating on a new economic deal with advanced technologies at its core.

Since unveiling its new AI strategy at the beginning of the year and assigning the technology a central role in delivering the government’s ‘Plan for Change,’ the UK has already witnessed significant investment from US companies seeking to establish AI bases in Britain.

Notable recent investments include a substantial £12 billion commitment from Vantage Data Centers to significantly expand Britain’s data infrastructure, which is projected to create approximately 11,500 jobs. Additionally, last month saw the UK Government formalise a partnership with Anthropic to enhance collaboration on leveraging AI to improve public services nationwide.

By strengthening these partnerships with leading US tech firms and investors, the UK’s AI sector is well-positioned for sustained growth as the government aims to continue to remove innovation barriers.

(Photo by Billy Joachim)

See also: OpenAI and Google call for US government action to secure AI lead

The post UK minister in US to pitch Britain as global AI investment hub appeared first on AI News.

]]>
https://www.artificialintelligence-news.com/news/uk-minister-in-us-pitch-britain-global-ai-investment-hub/feed/ 0
CERTAIN drives ethical AI compliance in Europe  https://www.artificialintelligence-news.com/news/certain-drives-ethical-ai-compliance-in-europe/ https://www.artificialintelligence-news.com/news/certain-drives-ethical-ai-compliance-in-europe/#respond Wed, 26 Feb 2025 17:27:42 +0000 https://www.artificialintelligence-news.com/?p=104623 EU-funded initiative CERTAIN aims to drive ethical AI compliance in Europe amid increasing regulations like the EU AI Act. CERTAIN — short for “Certification for Ethical and Regulatory Transparency in Artificial Intelligence” — will focus on the development of tools and frameworks that promote transparency, compliance, and sustainability in AI technologies. The project is led […]

The post CERTAIN drives ethical AI compliance in Europe  appeared first on AI News.

]]>
EU-funded initiative CERTAIN aims to drive ethical AI compliance in Europe amid increasing regulations like the EU AI Act.

CERTAIN — short for “Certification for Ethical and Regulatory Transparency in Artificial Intelligence” — will focus on the development of tools and frameworks that promote transparency, compliance, and sustainability in AI technologies.

The project is led by Idemia Identity & Security France in collaboration with 19 partners across ten European countries, including the St. Pölten University of Applied Sciences (UAS) in Austria. With its official launch in January 2025, CERTAIN could serve as a blueprint for global AI governance.

Driving ethical AI practices in Europe

According to Sebastian Neumaier, Senior Researcher at the St. Pölten UAS’ Institute of IT Security Research and project manager for CERTAIN, the goal is to address crucial regulatory and ethical challenges.  

“In CERTAIN, we want to develop tools that make AI systems transparent and verifiable in accordance with the requirements of the EU’s AI Act. Our goal is to develop practically feasible solutions that help companies to efficiently fulfil regulatory requirements and sustainably strengthen confidence in AI technologies,” emphasised Neumaier.  

To achieve this, CERTAIN aims to create user-friendly tools and guidelines that simplify even the most complex AI regulations—helping organisations both in the public and private sectors navigate and implement these rules effectively. The overall intent is to provide a bridge between regulation and innovation, empowering businesses to leverage AI responsibly while fostering public trust.

Harmonising standards and improving sustainability  

One of CERTAIN’s primary objectives is to establish consistent standards for data sharing and AI development across Europe. By setting industry-wide norms for interoperability, the project seeks to improve collaboration and efficiency in the use of AI-driven technologies.

The effort to harmonise data practices isn’t just limited to compliance; it also aims to unlock new opportunities for innovation. CERTAIN’s solutions will create open and trustworthy European data spaces—essential components for driving sustainable economic growth.  

In line with the EU’s Green Deal, CERTAIN places a strong focus on sustainability. AI technologies, while transformative, come with significant environmental challenges—such as high energy consumption and resource-intensive data processing.  

CERTAIN will address these issues by promoting energy-efficient AI systems and advocating for eco-friendly methods of data management. This dual approach not only aligns with EU sustainability goals but also ensures that AI development is carried out with the health of the planet in mind.

A collaborative framework to unlock AI innovation

A unique aspect of CERTAIN is its approach to fostering collaboration and dialogue among stakeholders. The project team at St. Pölten UAS is actively engaging with researchers, tech companies, policymakers, and end-users to co-develop, test, and refine ideas, tools, and standards.  

This practice-oriented exchange extends beyond product development. CERTAIN also serves as a central authority for informing stakeholders about legal, ethical, and technical matters related to AI and certification. By maintaining open channels of communication, CERTAIN ensures that its outcomes are not only practical but also widely adopted.   

CERTAIN is part of the EU’s Horizon Europe programme, specifically under Cluster 4: Digital, Industry, and Space.

The project’s multidisciplinary and international consortium includes leading academic institutions, industrial giants, and research organisations, making it a powerful collective effort to shape the future of AI in Europe.  

In January 2025, representatives from all 20 consortium members met in Osny, France, to kick off their collaborative mission. The two-day meeting set the tone for the project’s ambitious agenda, with partners devising strategies for tackling the regulatory, technical, and ethical hurdles of AI.  

Ensuring compliance with ethical AI regulations in Europe 

As the EU’s AI Act edges closer to implementation, guidelines and tools like those developed under CERTAIN will be pivotal.

The Act will impose strict requirements on AI systems, particularly those deemed “high-risk,” such as applications in healthcare, transportation, and law enforcement.

While these regulations aim to ensure safety and accountability, they also pose challenges for organisations seeking to comply.  

CERTAIN seeks to alleviate these challenges by providing actionable solutions that align with Europe’s legal framework while encouraging innovation. By doing so, the project will play a critical role in positioning Europe as a global leader in ethical AI development.  

See also: Endor Labs: AI transparency vs ‘open-washing’

The post CERTAIN drives ethical AI compliance in Europe  appeared first on AI News.

]]>
https://www.artificialintelligence-news.com/news/certain-drives-ethical-ai-compliance-in-europe/feed/ 0
UK must act to secure its semiconductor industry leadership https://www.artificialintelligence-news.com/news/uk-must-act-secure-its-semiconductor-industry-leadership/ https://www.artificialintelligence-news.com/news/uk-must-act-secure-its-semiconductor-industry-leadership/#respond Mon, 17 Feb 2025 11:47:01 +0000 https://www.artificialintelligence-news.com/?p=104518 The UK semiconductor industry is at a critical juncture, with techUK urging the government to act to maintain its global competitiveness. Laura Foster, Associate Director of Technology and Innovation at techUK, said: “The UK has a unique opportunity to lead in the global semiconductor landscape, but success will require bold action and sustained commitment. “By […]

The post UK must act to secure its semiconductor industry leadership appeared first on AI News.

]]>
The UK semiconductor industry is at a critical juncture, with techUK urging the government to act to maintain its global competitiveness.

Laura Foster, Associate Director of Technology and Innovation at techUK, said: “The UK has a unique opportunity to lead in the global semiconductor landscape, but success will require bold action and sustained commitment.

“By accelerating the implementation of the National Semiconductor Strategy, we can unlock investment, foster innovation, and strengthen our position in this critical industry.”

Semiconductors are the backbone of modern technology, powering everything from consumer electronics to AI data centres. With the global semiconductor market projected to reach $1 trillion by 2030, the UK must act to secure its historic leadership in this lucrative and strategically vital industry.

“We must act at pace to secure the UK’s semiconductor future and as such our technological and economic resilience,” explains Foster.

UK semiconductor industry strengths and challenges

The UK has long been a leader in semiconductor design and intellectual property (IP), with Cambridge in particular serving as a global hub for innovation.

Companies like Arm, which designs chips used in 99% of the world’s smartphones, exemplify the UK’s strengths in this area. However, a techUK report warns that these strengths are under threat due to insufficient investment, skills shortages, and a lack of tailored support for the sector.

“The UK is not starting from zero,” the report states. “We have globally competitive capabilities in design and IP, but we must double down on these strengths to compete internationally.”

The UK’s semiconductor industry generated £12 billion in turnover in 2021, with 90% of companies expecting growth in the coming years. However, the sector faces significant challenges, including high costs, limited access to private capital, and a reliance on international talent.

The report highlights that only 5% of funding for UK semiconductor startups originates domestically, with many companies struggling to find qualified investors.

A fundamental need for strategic investment and innovation

The report makes 27 recommendations across six key areas, including design and IP, R&D, manufacturing, skills, and global partnerships.

Some of the key proposals include:

  • Turning current strengths into leadership: The UK must leverage its existing capabilities in design, IP, and compound semiconductors. This includes supporting regional clusters like Cambridge and South Wales, which have proven track records of innovation.
  • Establishing a National Semiconductor Centre: This would act as a central hub for the industry, providing support for businesses, coordinating R&D efforts, and fostering collaboration between academia and industry.
  • Expanding R&D tax credits: The report calls for the inclusion of capital expenditure in R&D tax credits to incentivise investment in new facilities and equipment.
  • Creating a Design Competence Centre: This would provide shared facilities for chip designers, reducing the financial risk of innovation and supporting the development of advanced designs.
  • Nurturing skills: The UK must address the skills shortage in the semiconductor sector by upskilling workers, attracting international talent, and promoting STEM education.
  • Capitalising on global partnerships: The UK must strengthen its position in the global semiconductor supply chain by forming strategic partnerships with allied countries. This includes collaborating on R&D, securing access to critical materials, and navigating export controls.

Urgent action is required to secure the UK semiconductor industry

The report warns that the UK risks falling behind other nations if it does not act quickly. Countries like the US, China, and the EU have already announced significant investments in their domestic semiconductor industries.

The European Chips Act, for example, has committed €43 billion to support semiconductor infrastructure, skills, and startups.

“Governments across the world are acting quickly to attract semiconductor companies while also building domestic capability,” the report states. “The UK must use its existing resources tactically, playing to its globally recognised strengths within the semiconductor value chain.”

The UK’s semiconductor industry has the potential to be a global leader, but this will require sustained investment, strategic planning, and collaboration between government, industry, and academia.

“The UK Government should look to its semiconductor ambitions as an essential part of delivering the wider Industrial Strategy and securing not just the fastest growth in the G7, but also secure and resilient economic growth,” the report concludes.

(Photo by Rocco Dipoppa)

See also: AI in 2025: Purpose-driven models, human integration, and more

The post UK must act to secure its semiconductor industry leadership appeared first on AI News.

]]>
https://www.artificialintelligence-news.com/news/uk-must-act-secure-its-semiconductor-industry-leadership/feed/ 0
Ursula von der Leyen: AI race ‘is far from over’ https://www.artificialintelligence-news.com/news/ursula-von-der-leyen-ai-race-is-far-from-over/ https://www.artificialintelligence-news.com/news/ursula-von-der-leyen-ai-race-is-far-from-over/#respond Tue, 11 Feb 2025 16:51:29 +0000 https://www.artificialintelligence-news.com/?p=104314 Europe has no intention of playing catch-up in the global AI race, European Commission President Ursula von der Leyen declared at the AI Action Summit in Paris. While the US and China are often seen as frontrunners, von der Leyen emphasised that the AI race “is far from over” and that Europe has distinct strengths […]

The post Ursula von der Leyen: AI race ‘is far from over’ appeared first on AI News.

]]>
Europe has no intention of playing catch-up in the global AI race, European Commission President Ursula von der Leyen declared at the AI Action Summit in Paris.

While the US and China are often seen as frontrunners, von der Leyen emphasised that the AI race “is far from over” and that Europe has distinct strengths to carve a leading role for itself.

“This is the third summit on AI safety in just over one year,” von der Leyen remarked. “In the same period, three new generations of ever more powerful AI models have been released. Some expect models that will approach human reasoning within a year’s time.”

The European Commission President set the tone of the event by contrasting the groundwork laid in previous summits with the urgency of this one.

“Past summits focused on laying the groundwork for AI safety. Together, we built a shared consensus that AI will be safe, that it will promote our values and benefit humanity. But this Summit is focused on action. And that is exactly what we need right now.”

As the world witnesses AI’s disruptive power, von der Leyen urged Europe to “formulate a vision of where we want AI to take us, as society and as humanity.” Growing adoption, “in the key sectors of our economy, and for the key challenges of our times,” provides a golden opportunity for the continent to lead, she argued.

The case for a European approach to the AI race 

Von der Leyen rejected notions that Europe has fallen behind its global competitors.

“Too often, I hear that Europe is late to the race – while the US and China have already got ahead. I disagree,” she stated. “The frontier is constantly moving. And global leadership is still up for grabs.”

Instead of replicating what other regions are doing, she called for doubling down on Europe’s unique strengths to define the continent’s distinct approach to AI.

“Too often, I have heard that we should replicate what others are doing and run after their strengths,” she said. “I think that instead, we should invest in what we can do best and build on our strengths here in Europe, which are our science and technology mastery that we have given to the world.”

Von der Leyen defined three pillars of the so-called “European brand of AI” that sets it apart: 1) focusing on high-complexity, industry-specific applications, 2) taking a cooperative, collaborative approach to innovation, and 3) embracing open-source principles.

“This summit shows there is a distinct European brand of AI,” she asserted. “It is already driving innovation and adoption. And it is picking up speed.”

Accelerating innovation: AI factories and gigafactories  

To maintain its competitive edge, Europe must supercharge its AI innovation, von der Leyen stressed.

A key component of this strategy lies in its computational infrastructure. Europe already boasts some of the world’s fastest supercomputers, which are now being leveraged through the creation of “AI factories.”

“In just a few months, we have set up a record of 12 AI factories,” von der Leyen revealed. “And we are investing €10 billion in them. This is not a promise—it is happening right now, and it is the largest public investment for AI in the world, which will unlock over ten times more private investment.”

Beyond these initial steps, von der Leyen unveiled an even more ambitious initiative. AI gigafactories, built on the scale of CERN’s Large Hadron Collider, will provide the infrastructure needed for training AI systems at unprecedented scales. They aim to foster collaboration between researchers, entrepreneurs, and industry leaders.

“We provide the infrastructure for large computational power,” von der Leyen explained. “Talents of the world are welcome. Industries will be able to collaborate and federate their data.”

The cooperative ethos underpinning AI gigafactories is part of a broader European push to balance competition with collaboration.

“AI needs competition but also collaboration,” she emphasised, highlighting that the initiative will serve as a “safe space” for these cooperative efforts.

Building trust with the AI Act

Crucially, von der Leyen reiterated Europe’s commitment to making AI safe and trustworthy. She pointed to the EU AI Act as the cornerstone of this strategy, framing it as a harmonised framework to replace fragmented national regulations across member states.

“The AI Act [will] provide one single set of safety rules across the European Union – 450 million people – instead of 27 different national regulations,” she said, before acknowledging businesses’ concerns about regulatory complexities.

“At the same time, I know, we have to make it easier, we have to cut red tape. And we will.”

€200 billion to remain in the AI race

Financing such ambitious plans naturally requires significant resources. Von der Leyen praised the recently launched EU AI Champions Initiative, which has already pledged €150 billion from providers, investors, and industry.

During her speech at the summit, von der Leyen announced the Commission’s complementary InvestAI initiative, which will bring in an additional €50 billion. Altogether, this mobilises €200 billion in public-private AI investments.

“We will have a focus on industrial and mission-critical applications,” she said. “It will be the largest public-private partnership in the world for the development of trustworthy AI.”

Ethical AI is a global responsibility

Von der Leyen closed her address by framing Europe’s AI ambitions within a broader, humanitarian perspective, arguing that ethical AI is a global responsibility.

“Cooperative AI can be attractive well beyond Europe, including for our partners in the Global South,” she proclaimed, extending a message of inclusivity.

Von der Leyen expressed full support for the AI Foundation launched at the summit, highlighting its mission to ensure widespread access to AI’s benefits.

“AI can be a gift to humanity. But we must make sure that benefits are widespread and accessible to all,” she remarked.

“We want AI to be a force for good. We want an AI where everyone collaborates and everyone benefits. That is our path – our European way.”

See also: AI Action Summit: Leaders call for unity and equitable development

The post Ursula von der Leyen: AI race ‘is far from over’ appeared first on AI News.

NEPC: AI sprint risks environmental catastrophe https://www.artificialintelligence-news.com/news/nepc-ai-sprint-risks-environmental-catastrophe/ Fri, 07 Feb 2025 12:32:41 +0000

The government is urged to mandate stricter reporting for data centres to mitigate environmental risks associated with the AI sprint.

A report published today by the National Engineering Policy Centre (NEPC) highlights the urgent need for data centres to adopt greener practices, particularly as the government’s AI Opportunities Action Plan gains traction.

The report, Engineering Responsible AI: Foundations for Environmentally Sustainable AI, was developed in collaboration with the Royal Academy of Engineering, the Institution of Engineering and Technology, and BCS, the Chartered Institute of IT.

While stressing that data centres enabling AI systems can be built to consume fewer resources like energy and water, the report highlights that infrastructure and regulatory conditions must align for these efficiencies to materialise.

Unlocking the potential of AI while minimising environmental risks  

AI is heralded as capable of driving economic growth, creating jobs, and improving livelihoods. Launched as a central pillar of the UK’s tech strategy, the AI Opportunities Action Plan is intended to “boost economic growth, provide jobs for the future and improve people’s everyday lives.”  

Use cases for AI that are already generating public benefits include accelerating drug discovery, forecasting weather events, optimising energy systems, and even aiding climate science and improving sustainability efforts. However, this growing reliance on AI also poses environmental risks from the infrastructure required to power these systems.  

Data centres, which serve as the foundation of AI technologies, consume vast amounts of energy and water. Increasing demand has raised concerns about global competition for limited resources, such as sustainable energy and drinking water. Google and Microsoft, for instance, have recorded rising water usage by their data centres each year since 2020. Much of this water comes from drinking sources, sparking fears about resource depletion.  

With plans already in place to reform the UK’s planning system to facilitate the construction of data centres, the report calls for urgent policies to manage their environmental impact. Accurate and transparent data on resource consumption is currently lacking, which hampers policymakers’ ability to assess the true scale of these impacts and act accordingly.

Five steps to sustainable AI  

The NEPC is urging the government to spearhead change by prioritising sustainable AI development. The report outlines five key steps policymakers can act upon immediately to position the UK as a leader in resource-efficient AI:  

  1. Expand environmental reporting mandates
  2. Communicate the sector’s environmental impacts
  3. Set sustainability requirements for data centres
  4. Reconsider data collection, storage, and management practices
  5. Lead by example with government investment

Mandatory environmental reporting forms a cornerstone of the recommendations. This involves measuring data centres’ energy sources, water consumption, carbon emissions, and e-waste recycling practices to provide the resource use data necessary for policymaking.  
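The raw figures described above lend themselves to simple derived indicators. As an illustrative sketch only (the report does not prescribe formulas), the widely used power usage effectiveness (PUE) and water usage effectiveness (WUE) ratios can be computed from the numbers a data centre would report:

```python
def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """Power Usage Effectiveness: total site energy over IT energy.
    1.0 is the theoretical ideal; typical modern facilities run 1.1-1.6."""
    if it_equipment_kwh <= 0:
        raise ValueError("IT equipment energy must be positive")
    return total_facility_kwh / it_equipment_kwh

def wue(annual_water_litres: float, it_equipment_kwh: float) -> float:
    """Water Usage Effectiveness: litres of water per kWh of IT load."""
    if it_equipment_kwh <= 0:
        raise ValueError("IT equipment energy must be positive")
    return annual_water_litres / it_equipment_kwh

# Illustrative figures: a facility drawing 12 GWh in total for 10 GWh
# of IT load, and consuming 18 million litres of water per year.
print(pue(12_000_000, 10_000_000))  # 1.2
print(wue(18_000_000, 10_000_000))  # 1.8 litres per kWh
```

Ratios like these only become comparable across operators if the underlying measurements are mandated and standardised, which is precisely the gap the reporting recommendation targets.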

Raising public awareness is also vital. Communicating the environmental costs of AI can encourage developers to optimise AI tools, use smaller datasets, and adopt more efficient approaches. Notably, the report recommends embedding environmental design and sustainability topics into computer science and AI education at both school and university levels.  

Smarter, greener data centres  

One of the most urgent calls to action involves redesigning data centres to reduce their environmental footprint. The report advocates for innovations like waste heat recovery systems, zero drinking water use for cooling, and the exclusive use of 100% carbon-free energy certificates.  

Efforts like those at Queen Mary University of London, where residual heat from a campus data centre is repurposed to provide heating and hot water, offer a glimpse into the possibilities of greener tech infrastructure.  

In addition, the report suggests revising legislation on mandatory data retention to reduce the unnecessary environmental costs of storing vast amounts of data long-term. Proposals for a National Data Library could drive best practices by centralising and streamlining data storage.  

Professor Tom Rodden, Pro-Vice-Chancellor at the University of Nottingham and Chair of the working group behind the report, urged swift action:  

“In recent years, advances in AI systems and services have largely been driven by a race for size and scale, demanding increasing amounts of computational power. As a result, AI systems and services are growing at a rate unparalleled by other high-energy systems—generally without much regard for resource efficiency.  

“This is a dangerous trend, and we face a real risk that our development, deployment, and use of AI could do irreparable damage to the environment.”  

Rodden added that reliable data on these impacts is critical. “To build systems and services that effectively use resources, we first need to effectively monitor their environmental cost. Once we have access to trustworthy data… we can begin to effectively target efficiency in development, deployment, and use – and plan a sustainable AI future for the UK.”

Dame Dawn Childs, CEO of Pure Data Centres Group, underscored the role of engineering in improving efficiency. “Some of this will come from improvements to AI models and hardware, making them less energy-intensive. But we must also ensure that the data centres housing AI’s computing power and storage are as sustainable as possible.  

“That means prioritising renewable energy, minimising water use, and reducing carbon emissions – both directly and indirectly. Using low-carbon building materials is also essential.”  

Childs emphasised the importance of a coordinated approach from the start of projects. “As the UK government accelerates AI adoption – through AI Growth Zones and streamlined planning for data centres – sustainability must be a priority at every step.”  

For Alex Bardell, Chair of BCS’ Green IT Specialist Group, the focus is on optimising AI processes. “Our report has discussed optimising models for efficiency. Previous attempts to limit the drive toward increased computational power and larger models have faced significant resistance, with concerns that the UK may fall behind in the AI arena; this may not necessarily be true.  

“It is crucial to reevaluate our approach to developing sustainable AI in the future.”  

Time for transparency around AI environmental risks

Public awareness of AI’s environmental toll remains low. Recent research by the Institution of Engineering and Technology (IET) found that fewer than one in six UK residents are aware of the significant environmental costs associated with AI systems.  

“AI providers must be transparent about these effects,” said Professor Sarvapali Ramchurn, CEO of Responsible AI UK and a Fellow of the IET. “If we cannot measure it, we cannot manage it, nor ensure benefits for all. This report’s recommendations will aid national discussions on the sustainability of AI systems and the trade-offs involved.”  

As the UK pushes forward with ambitious plans to lead in AI development, ensuring environmental sustainability must take centre stage. By adopting policies and practices outlined in the NEPC report, the government can support AI growth while safeguarding finite resources for future generations.

(Photo by Braden Collum)

See also: Sustainability is key in 2025 for businesses to advance AI efforts


The post NEPC: AI sprint risks environmental catastrophe appeared first on AI News.

EU AI Act: What businesses need to know as regulations go live https://www.artificialintelligence-news.com/news/eu-ai-act-what-businesses-need-know-regulations-go-live/ Fri, 31 Jan 2025 12:52:49 +0000

Next week marks the beginning of a new era for AI regulations as the first obligations of the EU AI Act take effect.

While the full compliance requirements won’t come into force until mid-2025, the initial phase of the EU AI Act begins February 2nd and includes significant prohibitions on specific AI applications. Businesses across the globe that operate in the EU must now navigate a regulatory landscape with strict rules and high stakes.

The new regulations prohibit the deployment or use of several high-risk AI systems. These include applications such as social scoring, emotion recognition, real-time remote biometric identification in public spaces, and other scenarios deemed unacceptable under the Act.

Companies found in violation of the rules could face penalties of up to 7% of their global annual turnover, making it imperative for organisations to understand and comply with the restrictions.  
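For orientation, the Act frames the ceiling for prohibited-practice violations as €35 million or 7% of worldwide annual turnover, whichever is higher. A minimal sketch of that calculation, with illustrative turnover figures:

```python
def max_fine_eur(worldwide_annual_turnover_eur: float) -> float:
    """Ceiling for prohibited-practice violations under the EU AI Act:
    EUR 35 million or 7% of worldwide annual turnover, whichever is higher."""
    return max(35_000_000.0, 0.07 * worldwide_annual_turnover_eur)

# A company turning over EUR 2 billion faces a ceiling of about EUR 140 million;
# a smaller firm at EUR 100 million still faces the EUR 35 million floor.
print(max_fine_eur(2_000_000_000))
print(max_fine_eur(100_000_000))
```

The fixed floor means the exposure is material even for organisations whose turnover would make the percentage-based figure small.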

Early compliance challenges  

“It’s finally here,” says Levent Ergin, Chief Strategist for Climate, Sustainability, and AI at Informatica. “While we’re still in a phased approach, businesses’ hard-earned preparations for the EU AI Act will now face the ultimate test.”


Ergin highlights that even though most compliance requirements will not take effect until mid-2025, the early prohibitions set a decisive tone.

“For businesses, the pressure in 2025 is twofold. They must demonstrate tangible ROI from AI investments while navigating challenges around data quality and regulatory uncertainty. It’s already the perfect storm, with 89% of large businesses in the EU reporting conflicting expectations for their generative AI initiatives. At the same time, 48% say technology limitations are a major barrier to moving AI pilots into production,” he remarks.

Ergin believes the key to compliance and success lies in data governance.

“Without robust data foundations, organisations risk stagnation, limiting their ability to unlock AI’s full potential. After all, isn’t ensuring strong data governance a core principle that the EU AI Act is built upon?”

To adapt, companies must prioritise strengthening their approach to data quality.

“Strengthening data quality and governance is no longer optional, it’s critical. To ensure both compliance and prove the value of AI, businesses must invest in making sure data is accurate, holistic, integrated, up-to-date and well-governed,” says Ergin.

“This isn’t just about meeting regulatory demands; it’s about enabling AI to deliver real business outcomes. As 82% of EU companies plan to increase their GenAI investments in 2025, ensuring their data is AI-ready will be the difference between those who succeed and those who remain in the starting blocks.”

EU AI Act has no borders

The extraterritorial scope of the EU AI Act means non-EU organisations are not off the hook. As Marcus Evans, a partner at Norton Rose Fulbright, explains, the Act applies far beyond the EU’s borders.


“The AI Act will have a truly global application,” says Evans. “That’s because it applies not only to organisations in the EU using AI or those providing, importing, or distributing AI to the EU market, but also AI provision and use where the output is used in the EU. So, for instance, a company using AI for recruitment in the EU – even if it is based elsewhere – would still be captured by these new rules.”  

Evans advises businesses to start by auditing their AI use. “At this stage, businesses must first understand where AI is being used in their organisation so that they can then assess whether any use cases may trigger the prohibitions. Building on that initial inventory, a wider governance process can then be introduced to ensure AI use is assessed, remains outside the prohibitions, and complies with the AI Act.”  

While organisations work to align their AI practices with the new regulations, additional challenges remain. Compliance requires addressing other legal complexities such as data protection, intellectual property (IP), and discrimination risks.  

Evans emphasises that raising AI literacy within organisations is also a critical step.

“Any organisations in scope must also take measures to ensure their staff – and anyone else dealing with the operation and use of their AI systems on their behalf – have a sufficient level of AI literacy,” he states.

“AI literacy will play a vital role in AI Act compliance, as those involved in governing and using AI must understand the risks they are managing.”

Encouraging responsible innovation  

The EU AI Act is being hailed as a milestone for responsible AI development. By prohibiting harmful practices and requiring transparency and accountability, the regulation seeks to balance innovation with ethical considerations.


“This framework is a pivotal step towards building a more responsible and sustainable future for artificial intelligence,” says Beatriz Sanz Sáiz, AI Sector Leader at EY Global.

Sanz Sáiz believes the legislation fosters trust while providing a foundation for transformative technological progress.

“It has the potential to foster further trust, accountability, and innovation in AI development, as well as strengthen the foundations upon which the technology continues to be built,” Sanz Sáiz asserts.

“It is critical that we focus on eliminating bias and prioritising fundamental rights like fairness, equity, and privacy. Responsible AI development is a crucial step in the quest to further accelerate innovation.”

What’s prohibited under the EU AI Act?  

To ensure compliance, businesses need to be crystal-clear on which activities fall under the EU AI Act’s strict prohibitions. The current list of prohibited activities includes:  

  • Harmful subliminal, manipulative, and deceptive techniques  
  • Harmful exploitation of vulnerabilities  
  • Unacceptable social scoring  
  • Individual crime risk assessment and prediction (with some exceptions)  
  • Untargeted scraping of internet or CCTV material to develop or expand facial recognition databases  
  • Emotion recognition in areas such as the workplace and education (with some exceptions)  
  • Biometric categorisation to infer sensitive categories (with some exceptions)  
  • Real-time remote biometric identification (RBI) in publicly accessible spaces for law enforcement purposes (with some exceptions)  

The Commission’s forthcoming guidance on which “AI systems” fall under these categories will be critical for businesses seeking to ensure compliance and reduce legal risks. Additionally, companies should anticipate further clarification and resources at the national and EU levels, such as the upcoming webinar hosted by the AI Office.

A new landscape for AI regulations

The early implementation of the EU AI Act represents just the beginning of what is a remarkably complex and ambitious regulatory endeavour. As AI continues to play an increasingly pivotal role in business strategy, organisations must learn to navigate new rules and continuously adapt to future changes.  

For now, businesses should focus on understanding the scope of their AI use, enhancing data governance, educating staff to build AI literacy, and adopting a proactive approach to compliance. By doing so, they can position themselves as leaders in a fast-evolving AI landscape and unlock the technology’s full potential while upholding ethical and legal standards.

(Photo by Guillaume Périgois)

See also: ChatGPT Gov aims to modernise US government agencies


The post EU AI Act: What businesses need to know as regulations go live appeared first on AI News.

Rodolphe Malaguti, Conga: Poor data hinders AI in public services https://www.artificialintelligence-news.com/news/rodolphe-malaguti-conga-poor-data-ai-potential-in-public-services/ Tue, 21 Jan 2025 11:15:19 +0000

According to Rodolphe Malaguti, Product Strategy and Transformation at Conga, poor data structures and legacy systems are hindering the potential of AI in transforming public services.

Taxpayer-funded services in the UK, from the NHS to local councils, are losing out on potential productivity savings of £45 billion per year due to an overwhelming reliance on outdated technology—a figure equivalent to the total cost of running every primary school in the country for a year.   

A report published this week highlights how nearly half of public services are still not accessible online. This forces British citizens to engage in time-consuming and frustrating processes such as applying for support in person, enduring long wait times on hold, or travelling across towns to council offices. Public sector workers are similarly hindered by inefficiencies, such as sifting through mountains of physical letters, which slows down response times and leaves citizens to bear the brunt of government red tape.


“As this report has shown, there is clearly a gap between what the government and public bodies intend to achieve with their digital projects and what they actually deliver,” explained Malaguti. “The public sector still relies heavily upon legacy systems and has clearly struggled to tackle existing poor data structures and inefficiencies across key departments. No doubt this has had a clear impact on decision-making and hindered vital services for vulnerable citizens.”

The struggles persist even in deeply personal and critical scenarios. For example, the current process for registering a death still demands a physical presence, requiring grieving individuals to manage cumbersome bureaucracy while mourning the loss of a loved one. Other outdated processes unnecessarily burden small businesses—one striking example being the need to publish notices in local newspapers simply to purchase a lorry licence, creating further delays and hindering economic growth.

A lack of coordination between departments amplifies these challenges. In some cases, government bodies are using over 500 paper-based processes, leaving systems fragmented and inefficient. Vulnerable individuals suffer disproportionately under this disjointed framework. For instance, patients with long-term health conditions can be forced into interactions with up to 40 different services, repeating the same information as departments repeatedly fail to share data.

“The challenge is that government leaders have previously focused on technology and online interactions, adding layers to services whilst still relying on old data and legacy systems—this has ultimately led to inefficiencies across departments,” added Malaguti.

“Put simply, they have failed to address existing issues or streamline their day-to-day operations. It is critical that data is more readily available and easily shared between departments, particularly if leaders are hoping to employ new technology like AI to analyse this data and drive better outcomes or make strategic decisions for the public sector as a whole.”

Ageing Infrastructure: High costs and security risks

The report underscores that ageing infrastructure comes at a steep financial and operational cost. More than one in four digital systems used across the UK’s central government are outdated, a figure that rises to 70 percent in some departments. Maintaining legacy systems costs three to four times more than keeping technology up to date.  

Furthermore, a growing number of these outdated systems are now classified as “red-rated” for reliability and cybersecurity risk. Alarmingly, NHS England experienced 123 critical service outages last year alone. These outages often meant missed appointments and forced healthcare workers to resort to paper-based systems, making it harder for patients to access care when they needed it most.

Malaguti stresses that addressing such challenges goes beyond merely upgrading technology.

“The focus should be on improving data structure, quality, and timeliness. All systems, data, and workflows must be properly structured and fully optimised prior to implementation for these technologies to be effective. Public sector leaders should look to establish clear measurable objectives, as they continue to improve service delivery and core mission impacts.”

Transforming public services

In response to these challenges, Technology Secretary Peter Kyle is announcing an ambitious overhaul of public sector technology to usher in a more modern, efficient, and accessible system. The overhaul emphasises the use of AI, digital tools, and “common sense” to reform how public services are designed and delivered, streamlining operations across local government, the NHS, and other critical departments.

A package of tools known as ‘Humphrey’ – named after the fictional Whitehall official in the popular BBC drama ‘Yes, Minister’ – is set to be made available to all civil servants soon, with some available today.

Humphrey includes:

  • Consult: Analyses the thousands of responses received during government consultations within hours, presenting policymakers and experts with interactive dashboards to directly explore public feedback.
  • Parlex: A tool that enables policymakers to search and analyse decades of parliamentary debate, helping them refine their thinking and manage bills more effectively through both the Commons and the Lords.
  • Minute: A secure AI transcription service that creates customisable meeting summaries in the formats needed by public servants. It is currently being used by multiple central departments in meetings with ministers and is undergoing trials with local councils.
  • Redbox: A generative AI tool tailored to assist civil servants with everyday tasks, such as summarising policies and preparing briefings.
  • Lex: A tool designed to support officials in researching the law by providing analysis and summaries of relevant legislation for specific, complex issues.

The new tools and changes will help to tackle the inefficiencies highlighted in the report while delivering long-term cost savings. By reducing the burden of administrative tasks, the reforms aim to enable public servants, such as doctors and nurses, to spend more time helping the people they serve. For businesses, this could mean faster approvals for essential licences and permits, boosting economic growth and innovation.

“The government’s upcoming reforms and policy updates, where it is expected to deliver on its ‘AI Opportunities Action Plan,’ [will no doubt aim] to speed up processes,” said Malaguti. “Public sector leaders need to be more strategic with their investments and approach these projects with a level head, rolling out a programme in a phased manner, considering each phase of their operations.”

This sweeping transformation will also benefit from an expanded role for the Government Digital Service (GDS). Planned measures include using the GDS to identify cybersecurity vulnerabilities in public sector systems that could be exploited by hackers, enabling services to be made more robust and secure. Such reforms are critical to protect citizens, particularly as the reliance on digital solutions increases.

The broader aim of these reforms is to modernise the UK’s public services to reflect the convenience and efficiencies demanded in a digital-first world. By using technologies like AI, the government hopes to make interactions with public services faster and more intuitive while saving billions for taxpayers in the long run.

As technology reshapes the future of how services are delivered, leaders must ensure they are comprehensively addressing the root causes of inefficiency—primarily old data infrastructure and fragmented workflows. Only then can technological solutions, whether AI or otherwise, achieve their full potential in helping services deliver for the public.

(Photo by Claudio Schwarz)

See also: Biden’s executive order targets energy needs for AI data centres


The post Rodolphe Malaguti, Conga: Poor data hinders AI in public services appeared first on AI News.
