UK opens Europe’s first E-Beam semiconductor chip lab

The UK has cut the ribbon on a pioneering electron beam (E-Beam) lithography facility to build the semiconductor chips of the future. What makes this special? It’s the first of its kind in Europe, and only the second facility like it on the planet—the other being in Japan.

So, what’s the big deal about E-Beam lithography? Imagine trying to draw incredibly complex patterns, but thousands of times smaller than a human hair. That’s essentially what this technology does, using a focused beam of tiny electrons.

Such precision is vital for designing the microscopic components inside the chips that run everything from our smartphones and gaming consoles to life-saving medical scanners and advanced defence systems.

Semiconductors are already big business for the UK, adding around £10 billion to its economy each year. And that figure is only expected to climb, potentially hitting £17 billion by the end of the decade.

Nurturing this sector is a major opportunity for the UK—not just for bragging rights in advanced manufacturing, but for creating high-value jobs and driving real economic growth.

Speaking at the launch of the facility in Southampton, Science Minister Lord Patrick Vallance said: “Britain is home to some of the most exciting semiconductor research anywhere in the world—and Southampton’s new E-Beam facility is a major boost to our national capabilities.

“By investing in both infrastructure and talent, we’re giving our researchers and innovators the support they need to develop next-generation chips right here in the UK.”

Lord Vallance’s visit wasn’t just a photo opportunity, though. It came alongside some sobering news: fresh research published today highlights that one of the biggest hurdles facing the UK’s growing chip industry is finding enough people with the right skills.

We’re talking about a serious talent crunch. When you consider that a single person working in semiconductors contributes an average of £460,000 to the economy each year, you can see why plugging this skills gap is so critical.

So, what’s the plan? The government isn’t just acknowledging the problem; they’re putting money where their mouth is with a £4.75 million semiconductor skills package. The idea is to build up that talent pipeline, making sure universities like Southampton – already powerhouses of chip innovation – have resources like the E-Beam lab and the students they need.

“Our £4.75 million skills package will support our Plan for Change by helping more young people into high-value semiconductors careers, closing skills gaps and backing growth in this critical sector,” Lord Vallance explained.

Here’s where that cash is going:

  • Getting students hooked (£3 million): Fancy £5,000 towards your degree? 300 students starting Electronics and Electrical Engineering courses this year will get just that, along with specific learning modules to show them what a career in semiconductors actually involves, particularly in chip design and manufacturing.
  • Practical chip skills (£1.2 million): It’s one thing learning the theory, another designing a real chip. This pot will fund new hands-on chip design courses for students (undergrad and postgrad) and even train up lecturers. They’re also looking into creating conversion courses to tempt talented people from other fields into the chip world.
  • Inspiring the next generation (Nearly £550,000): To really build a long-term pipeline, you need to capture interest early. This funding aims to give 7,000 teenagers (15-18) and 450 teachers some real, hands-on experience with semiconductors, working with local companies in existing UK chip hotspots like Newport, Cambridge, and Glasgow. The goal is to show young people the cool career paths available right on their doorstep.

Ultimately, the hope is that this targeted support will give the UK semiconductor scene the skilled workforce it needs to thrive. It’s about encouraging more students to jump into these valuable careers, helping companies find the people they desperately need, and making sure the UK stays at the forefront of the technologies that will shape tomorrow’s economy.

Professor Graham Reed, who heads up the Optoelectronics Research Centre (ORC) at Southampton University, commented: “The introduction of the new E-Beam facility will reinforce our position of hosting the most advanced cleanroom in UK academia.

“It facilitates a vast array of innovative and industrially relevant research, and much needed semiconductor skills training.”

Putting world-class tools in the hands of researchers while simultaneously investing in the people who will use them will help to cement the UK’s leadership in semiconductors.

See also: AI in education: Balancing promises and pitfalls

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

Meta will train AI models using EU user data

Meta has confirmed plans to utilise content shared by its adult users in the EU (European Union) to train its AI models.

The announcement follows the recent launch of Meta AI features in Europe and aims to enhance the capabilities and cultural relevance of its AI systems for the region’s diverse population.   

In a statement, Meta wrote: “Today, we’re announcing our plans to train AI at Meta using public content – like public posts and comments – shared by adults on our products in the EU.

“People’s interactions with Meta AI – like questions and queries – will also be used to train and improve our models.”

Starting this week, users of Meta’s platforms (including Facebook, Instagram, WhatsApp, and Messenger) within the EU will receive notifications explaining the data usage. These notifications, delivered both in-app and via email, will detail the types of public data involved and link to an objection form.

“We have made this objection form easy to find, read, and use, and we’ll honor all objection forms we have already received, as well as newly submitted ones,” Meta explained.

Meta explicitly clarified that certain data types remain off-limits for AI training purposes.

The company says it will not “use people’s private messages with friends and family” to train its generative AI models. Furthermore, public data associated with accounts belonging to users under the age of 18 in the EU will not be included in the training datasets.

Meta wants to build AI tools designed for EU users

Meta positions this initiative as a necessary step towards creating AI tools designed for EU users. Meta launched its AI chatbot functionality across its messaging apps in Europe last month, framing this data usage as the next phase in improving the service.

“We believe we have a responsibility to build AI that’s not just available to Europeans, but is actually built for them,” the company explained. 

“That means everything from dialects and colloquialisms, to hyper-local knowledge and the distinct ways different countries use humor and sarcasm on our products.”

This becomes increasingly pertinent as AI models evolve with multi-modal capabilities spanning text, voice, video, and imagery.   

Meta also situated its actions in the EU within the broader industry landscape, pointing out that training AI on user data is common practice.

“It’s important to note that the kind of AI training we’re doing is not unique to Meta, nor will it be unique to Europe,” the statement reads. 

“We’re following the example set by others including Google and OpenAI, both of which have already used data from European users to train their AI models.”

Meta further claimed its approach surpasses others in openness, stating, “We’re proud that our approach is more transparent than many of our industry counterparts.”   

Regarding regulatory compliance, Meta referenced prior engagement with regulators, including a delay initiated last year while awaiting clarification on legal requirements. The company also cited a favourable opinion from the European Data Protection Board (EDPB) in December 2024.

“We welcome the opinion provided by the EDPB in December, which affirmed that our original approach met our legal obligations,” wrote Meta.

Broader concerns over AI training data

While Meta presents its approach in the EU as transparent and compliant, the practice of using vast swathes of public user data from social media platforms to train large language models (LLMs) and generative AI continues to raise significant concerns among privacy advocates.

Firstly, the definition of “public” data can be contentious. Content shared publicly on platforms like Facebook or Instagram may not have been posted with the expectation that it would become raw material for training commercial AI systems capable of generating entirely new content or insights. Users might share personal anecdotes, opinions, or creative works publicly within their perceived community, without envisaging its large-scale, automated analysis and repurposing by the platform owner.

Secondly, the effectiveness and fairness of an “opt-out” system versus an “opt-in” system remain debatable. Placing the onus on users to actively object, often after receiving notifications buried amongst countless others, raises questions about informed consent. Many users may not see, understand, or act upon the notification, potentially leading to their data being used by default rather than explicit permission.
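
To make the distinction concrete, here is a minimal sketch in Python contrasting the two defaults. It is purely illustrative: the function, field names, and logic are invented for this example and do not describe Meta’s actual systems.

```python
# Illustrative only: a hypothetical consent check contrasting opt-out and
# opt-in defaults. Field names and logic are invented for this example
# and do not describe Meta's actual systems.

def may_use_for_training(user: dict, regime: str) -> bool:
    """Return whether a user's public content may enter a training set."""
    if regime == "opt-out":
        # Included unless the user has actively objected.
        return not user.get("objected", False)
    if regime == "opt-in":
        # Excluded unless the user has actively consented.
        return user.get("consented", False)
    raise ValueError(f"unknown consent regime: {regime}")

# A user who never saw or acted on the notification:
inactive_user = {}
print(may_use_for_training(inactive_user, "opt-out"))  # True  -> used by default
print(may_use_for_training(inactive_user, "opt-in"))   # False -> unused by default
```

Under opt-out, inaction means inclusion; under opt-in, inaction means exclusion, which is precisely why the placement of the default matters.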

Thirdly, the issue of inherent bias looms large. Social media platforms reflect and sometimes amplify societal biases, including racism, sexism, and misinformation. AI models trained on this data risk learning, replicating, and even scaling these biases. While companies employ filtering and fine-tuning techniques, eradicating bias absorbed from billions of data points is an immense challenge. An AI trained on European public data needs careful curation to avoid perpetuating stereotypes or harmful generalisations about the very cultures it aims to understand.   

Furthermore, questions surrounding copyright and intellectual property persist. Public posts often contain original text, images, and videos created by users. Using this content to train commercial AI models, which may then generate competing content or derive value from it, enters murky legal territory regarding ownership and fair compensation—issues currently being contested in courts worldwide involving various AI developers.

Finally, while Meta highlights its transparency relative to competitors, the actual mechanisms of data selection, filtering, and its specific impact on model behaviour often remain opaque. Truly meaningful transparency would involve deeper insights into how specific data influences AI outputs and the safeguards in place to prevent misuse or unintended consequences.

The approach taken by Meta in the EU underscores the immense value technology giants place on user-generated content as fuel for the burgeoning AI economy. As these practices become more widespread, the debate surrounding data privacy, informed consent, algorithmic bias, and the ethical responsibilities of AI developers will undoubtedly intensify across Europe and beyond.

(Photo by Julio Lopez)

See also: Apple AI stresses privacy with synthetic and anonymised data

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

BCG: Analysing the geopolitics of generative AI

Generative AI is reshaping global competition and geopolitics, presenting challenges and opportunities for nations and businesses alike.

Senior figures from Boston Consulting Group (BCG) and its tech division, BCG X, discussed the intricate dynamics of the global AI race, the dominance of superpowers like the US and China, the role of emerging “middle powers,” and the implications for multinational corporations.

AI investments expose businesses to increasingly tense geopolitics

Sylvain Duranton, Global Leader at BCG X, noted the significant geopolitical risk companies face: “For large companies, close to half of them, 44%, have teams around the world, not just in one country where their headquarters are.”

Many of these businesses operate across numerous countries, making them vulnerable to differing regulations and sovereignty issues. “They’ve built their AI teams and ecosystem far before there was such tension around the world.”

Duranton also pointed to the stark imbalance in the AI supply race, particularly in investment.

Comparing the market capitalisation of tech companies, the US dwarfs Europe by a factor of 20 and the Asia Pacific region by five. Investment figures paint a similar picture, showing a “completely disproportionate” imbalance compared to the relative sizes of the economies.

This AI race is fuelled by massive investments in compute power, frontier models, and the emergence of lighter, open-weight models changing the competitive dynamic.   

Benchmarking national AI capabilities

Nikolaus Lang, Global Leader at the BCG Henderson Institute – BCG’s think tank – detailed the extensive research undertaken to benchmark national GenAI capabilities objectively.

The team analysed the “upstream of GenAI,” focusing on large language model (LLM) development and its six key enablers: capital, computing power, intellectual property, talent, data, and energy.

Using hard data like AI researcher numbers, patents, data centre capacity, and VC investment, they created a comparative analysis. Unsurprisingly, the analysis revealed the US and China as the clear AI frontrunners, each maintaining a lead in the geopolitics of AI.
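
BCG has not published its exact formula, but a benchmark of this kind is typically a weighted composite of normalised enabler scores. The sketch below illustrates the idea only; the weights and scores are invented and are not figures from the report.

```python
# Hypothetical sketch of a composite benchmark across the six enablers.
# Weights and scores below are invented for illustration; they are not BCG's.

ENABLERS = ["capital", "compute", "ip", "talent", "data", "energy"]

def composite_score(scores, weights=None):
    """Weighted average of enabler scores normalised to a 0-100 scale."""
    if weights is None:  # default to equal weighting across enablers
        weights = {e: 1 / len(ENABLERS) for e in ENABLERS}
    return sum(scores[e] * weights[e] for e in ENABLERS)

us = {"capital": 95, "compute": 92, "ip": 85, "talent": 90, "data": 80, "energy": 70}
china = {"capital": 80, "compute": 75, "ip": 90, "talent": 85, "data": 95, "energy": 85}

print(f"US:    {composite_score(us):.1f}")     # 85.3
print(f"China: {composite_score(china):.1f}")  # 85.0
```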

The US boasts the largest pool of AI specialists (around half a million), immense capital power ($303bn in VC funding, $212bn in tech R&D), and leading compute power (45 GW).

Lang highlighted America’s historical dominance, noting, “the US has been the largest producer of notable AI models with 67%” since 1950, a lead reflected in today’s LLM landscape. This strength is reinforced by “outsized capital power” and strategic restrictions on advanced AI chip access through frameworks like the US AI Diffusion Framework.   

China, the second AI superpower, shows particular strength in data—ranking highly in e-governance and mobile broadband subscriptions, alongside significant data centre capacity (20 GW) and capital power. 

Despite restricted access to the latest chips, Chinese LLMs are rapidly closing the gap with US models. Lang mentioned the emergence of models like DeepSeek as evidence of this trend, achieved with smaller teams, fewer GPU hours, and previous-generation chips.

China’s progress is also fuelled by heavy investment in AI academic institutions (hosting 45 of the world’s top 100), a leading position in AI patent applications, and significant government-backed VC funding. Lang predicts “governments will play an important role in funding AI work going forward.”

The middle powers: Europe, Middle East, and Asia

Beyond the superpowers, several “middle powers” are carving out niches.

  • EU: While trailing the US and China, the EU holds the third spot, with significant data centre capacity (8 GW) and the world’s second-largest AI talent pool (275,000 specialists) when member states’ capabilities are combined. Europe also leads in top AI publications. Lang stressed the need for bundled capacities, suggesting AI, defence, and renewables are key areas for future EU momentum.
  • Middle East (UAE & Saudi Arabia): These nations leverage strong capital power via sovereign wealth funds and competitively low electricity prices to attract talent and build compute power, aiming to become AI drivers “from scratch”. They show positive dynamics in attracting AI specialists and are climbing the ranks in AI publications.   
  • Asia (Japan & South Korea): Leveraging strong existing tech ecosystems in hardware and gaming, these countries invest heavily in R&D (around $207bn combined by top tech firms). Government support, particularly in Japan, fosters both supply and demand. Local LLMs and strategic investments by companies like Samsung and SoftBank demonstrate significant activity.   
  • Singapore: Singapore is boosting its AI ecosystem by focusing on talent upskilling programmes, supporting Southeast Asia’s first LLM, ensuring data centre capacity, and fostering adoption through initiatives like establishing AI centres of excellence.   

The geopolitics of generative AI: Strategy and sovereignty

The geopolitics of generative AI is being shaped by four clear dynamics: the US retains its lead, driven by an unrivalled tech ecosystem; China is rapidly closing the gap; middle powers face a strategic choice between building supply or accelerating adoption; and government funding is set to play a pivotal role, particularly as R&D costs climb and commoditisation sets in.

As geopolitical tensions mount, businesses are likely to diversify their GenAI supply chains to spread risk. The race ahead will be defined by how nations and companies navigate the intersection of innovation, policy, and resilience.

(Photo by Markus Krisetya)

See also: OpenAI counter-sues Elon Musk for attempts to ‘take down’ AI rival

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

UK forms AI Energy Council to align growth and sustainability goals

The UK government has announced the first meeting of a new AI Energy Council aimed at ensuring the nation’s AI and clean energy goals work in tandem to drive economic growth.

The inaugural meeting of the council will see members agree on its core objectives, with a central focus on how the government’s mission to become a clean energy superpower can support its commitment to advancing AI and compute infrastructure.

Unveiled earlier this year as part of the government’s response to the AI Opportunities Action Plan, the council will serve as a crucial platform for bringing together expert insights on the significant energy demands associated with the AI sector.

The substantial energy requirements of AI data centres are a global concern. The UK is proactively addressing the issue through initiatives like the establishment of new AI Growth Zones.

These zones are dedicated hubs for AI development that are strategically located in areas with access to at least 500MW of power—an amount equivalent to powering approximately two million homes. This approach is designed to attract private investment from companies looking to establish operations in Britain, ultimately generating local jobs and boosting the economy.

Peter Kyle, Secretary of State for Science, Innovation, and Technology, said: “The work of the AI Energy Council will ensure we aren’t just powering our AI needs to deliver new waves of opportunity in all parts of the country, but can do so in a way which is responsible and sustainable.

“This requires a broad range of expertise from industry and regulators as we fire up the UK’s economic engine to make it fit for the age of AI—meaning we can deliver the growth which is the beating heart of our Plan for Change.”

The Council is also expected to delve into the role of clean energy sources, including renewables and nuclear, in powering the AI revolution.

A key aspect of its work will involve advising on how to improve energy efficiency and sustainability within AI and data centre infrastructure, with specific considerations for resource usage such as water. Furthermore, the council will take proactive steps to ensure the secure adoption of AI across the UK’s critical energy network itself.

Ed Miliband, Secretary of State for Energy Security and Net Zero, commented: “We are making the UK a clean energy superpower, building the homegrown energy this country needs to protect consumers and businesses, and drive economic growth, as part of our Plan for Change.

“AI can play an important role in building a new era of clean electricity for our country and as we unlock AI’s potential, this Council will help secure a sustainable scale up to benefit businesses and communities across the UK.”

In a parallel effort to facilitate the growth of the AI sector, the UK government has been working closely with energy regulator Ofgem and the National Energy System Operator (NESO) to implement fundamental reforms to the UK’s connections process.

Subject to final sign-offs from Ofgem, these reforms could potentially unlock more than 400GW of capacity from the connection queue. This acceleration of projects is deemed vital for economic growth, particularly for the delivery of new large-scale AI data centres that require significant power infrastructure.

The newly-formed AI Energy Council comprises representatives from 14 key organisations across the energy and technology sectors, including regulators and leading companies. These members will contribute their expert insights to support the council’s work and ensure a collaborative approach to addressing the energy challenges and opportunities presented by AI.

Among the prominent organisations joining the council are EDF, Scottish Power, National Grid, technology giants Google, Microsoft, Amazon Web Services (AWS), and chip designer ARM, as well as infrastructure investment firm Brookfield.

This collaborative framework, uniting the energy and technology sectors, aims to ensure seamless coordination in speeding up the connection of energy projects to the national grid. This is particularly crucial given the increasing number of technology companies announcing plans to build data centres across the UK.

Alison Kay, VP for UK and Ireland at AWS, said: “At Amazon, we’re working to meet the future energy needs of our customers, while remaining committed to powering our operations in a more sustainable way, and progressing toward our Climate Pledge commitment to become net-zero carbon by 2040.

“As the world’s largest corporate purchaser of renewable energy for the fifth year in a row, we share the government’s goal to ensure the UK has sufficient access to carbon-free energy to support its AI ambitions and to help drive economic growth.”

Jonathan Brearley, CEO of Ofgem, added: “AI will play an increasingly important role in transforming our energy system to be cleaner, more efficient, and more cost-effective for consumers, but only if used in a fair, secure, sustainable, and safe way.

“Working alongside other members of this Council, Ofgem will ensure AI implementation puts consumer interests first – from customer service to infrastructure planning and operation – so that everyone feels the benefits of this technological innovation in energy.”

This initiative aligns with the government’s Clean Power Action Plan, which focuses on connecting more homegrown clean power to the grid by building essential infrastructure and prioritising projects needed for 2030. The aim is to clear the grid connection queue, enabling crucial infrastructure projects – from housing to gigafactories and data centres – to gain access to the grid, thereby unlocking billions in investment and fostering economic growth.

Furthermore, the government is streamlining planning approvals to significantly reduce the time it takes for infrastructure projects to get off the ground. This accelerated process will ensure that AI innovators can readily access cutting-edge infrastructure and the necessary power to drive forward the next wave of AI advancements.

(Photo by Vlad Hilitanu)

See also: Tony Blair Institute AI copyright report sparks backlash

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

Navigating the EU AI Act: Implications for UK businesses

The EU AI Act, which came into effect on August 1, 2024, marks a turning point in the regulation of artificial intelligence. Aimed at governing the use and development of AI, it imposes rigorous standards for organisations operating within the EU or providing AI-driven products and services to its member states. Understanding and complying with the Act is essential for UK businesses seeking to compete in the European market.

The scope and impact of the EU AI Act

The EU AI Act introduces a risk-based framework that classifies AI systems into four categories: minimal, limited, high, and unacceptable risk. High-risk systems, which include AI used in healthcare diagnostics, autonomous vehicles, and financial decision-making, face stringent regulations. This risk-based approach ensures that the level of oversight corresponds to the potential impact of the technology on individuals and society.

For UK businesses, non-compliance with these rules is not an option. Organisations must ensure their AI systems align with the Act’s requirements or risk hefty fines, reputational damage, and exclusion from the lucrative EU market. The first step is to evaluate how their AI systems are classified and adapt operations accordingly. For instance, a company using AI to automate credit scoring must ensure its system meets transparency, fairness, and data privacy standards.
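
As a first step towards that evaluation, a compliance team might triage each AI use case against the Act’s four tiers before any legal review. The sketch below is a simplified illustration of such a triage, not a reading of the Act itself; the category examples are the abbreviated ones cited in this article.

```python
# Simplified, illustrative triage of AI use cases against the EU AI Act's
# four risk tiers. The keyword set is an abbreviated example drawn from
# this article, not an authoritative reading of the Act.

from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited outright"
    HIGH = "stringent obligations: transparency, fairness, data privacy"
    LIMITED = "lighter transparency obligations"
    MINIMAL = "no additional obligations"

HIGH_RISK_USES = {
    "healthcare diagnostics",
    "autonomous vehicles",
    "financial decision-making",
    "credit scoring",
}

def triage(use_case: str) -> RiskTier:
    """Crude first-pass classification; real compliance needs legal review."""
    if use_case in HIGH_RISK_USES:
        return RiskTier.HIGH
    return RiskTier.MINIMAL  # placeholder default for this sketch

print(triage("credit scoring").value)
# -> stringent obligations: transparency, fairness, data privacy
```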

Preparing for the UK’s next steps

While the EU AI Act directly affects UK businesses trading with the EU, the UK is also likely to implement its own AI regulations. The recent King’s Speech highlighted the government’s commitment to AI governance, focusing on ethical AI and data protection. Future UK legislation will likely mirror aspects of the EU framework, making it essential for businesses to proactively prepare for compliance in multiple jurisdictions.

The role of ISO 42001 in ensuring compliance

International standards like ISO 42001 provide a practical solution for businesses navigating this evolving regulatory landscape. As the global benchmark for AI management systems, ISO 42001 offers a structured framework to manage the development and deployment of AI responsibly.

Adopting ISO 42001 enables businesses to demonstrate compliance with EU requirements while fostering trust among customers, partners, and regulators. Its focus on continuous improvement ensures that organisations can adapt to future regulatory changes, whether from the EU, UK, or other regions. Moreover, the standard promotes transparency, safety, and ethical practices, which are essential for building AI systems that are not only compliant but also aligned with societal values.

Using AI as a catalyst for growth

Compliance with the EU AI Act and ISO 42001 isn’t just about avoiding penalties; it’s an opportunity to use AI as a sustainable growth and innovation driver. Businesses prioritising ethical AI practices can gain a competitive edge by enhancing customer trust and delivering high-value solutions.

For example, AI can revolutionise patient care in the healthcare sector by enabling faster diagnostics and personalised treatments. By aligning these technologies with ISO 42001, organisations can ensure their tools meet the highest safety and privacy standards. Similarly, financial firms can harness AI to optimise decision-making processes while maintaining transparency and fairness in customer interactions.

The risks of non-compliance

Recent incidents, such as AI-driven fraud schemes and cases of algorithmic bias, highlight the risks of neglecting proper governance. The EU AI Act directly addresses these challenges by enforcing strict guidelines on data usage, transparency, and accountability. Failure to comply risks significant fines and undermines stakeholder confidence, with long-lasting consequences for an organisation’s reputation.

The MOVEit and Capita breaches serve as stark reminders of the vulnerabilities associated with technology when governance and security measures are lacking. For UK businesses, robust compliance strategies are essential to mitigate such risks and ensure resilience in an increasingly regulated environment.

How UK businesses can adapt

1. Understand the risk level of AI systems: Conduct a comprehensive review of how AI is used within the organisation to determine risk levels. This assessment should consider the impact of the technology on users, stakeholders, and society.

2. Update compliance programs: Align data collection, system monitoring, and auditing practices with the requirements of the EU AI Act.

3. Adopt ISO 42001: Implementing the standard provides a scalable framework to manage AI responsibly, ensuring compliance while fostering innovation.

4. Invest in employee education: Equip teams with the knowledge to manage AI responsibly and adapt to evolving regulations.

5. Leverage advanced technologies: Use AI itself to monitor compliance, identify risks, and improve operational efficiency.

The future of AI regulation

As AI becomes an integral part of business operations, regulatory frameworks will continue to evolve. The EU AI Act will likely inspire similar legislation worldwide, creating a more complex compliance landscape. Businesses that act now to adopt international standards and align with best practices will be better positioned to navigate these changes.

The EU AI Act is a wake-up call for UK businesses to prioritise ethical AI practices and proactive compliance. By implementing tools like ISO 42001 and preparing for future regulations, organisations can turn compliance into an opportunity for growth, innovation, and resilience.

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

Tony Blair Institute AI copyright report sparks backlash

The Tony Blair Institute (TBI) has released a report calling for the UK to lead in navigating the complex intersection of arts and AI.

According to the report, titled ‘Rebooting Copyright: How the UK Can Be a Global Leader in the Arts and AI,’ the global race for cultural and technological leadership is still up for grabs, and the UK has a golden opportunity to take the lead.

The report emphasises that countries that “embrace change and harness the power of artificial intelligence in creative ways will set the technical, aesthetic, and regulatory standards for others to follow.”

Highlighting that we are in the midst of another revolution in media and communication, the report notes that AI is disrupting how textual, visual, and audio content is created, distributed, and experienced, much like the printing press, gramophone, and camera did before it.

“AI will usher in a new era of interactive and bespoke works, as well as a counter-revolution that celebrates everything that AI can never be,” the report states.

However, far from signalling the end of human creativity, the TBI suggests AI will open up “new ways of being original.”

The AI revolution’s impact isn’t limited to the creative industries; it’s being felt across all areas of society. Scientists are using AI to accelerate discoveries, healthcare providers are employing it to analyse X-ray images, and emergency services utilise it to locate houses damaged by earthquakes.

The report stresses that these cross-industry advancements are just the beginning, with future AI systems set to become increasingly capable, fuelled by advancements in computing power, data, model architectures, and access to talent.

The UK government has expressed its ambition to be a global leader in AI through its AI Opportunities Action Plan, announced by Prime Minister Keir Starmer on 13 January 2025. For its part, the TBI welcomes the UK government’s ambition, stating that “if properly designed and deployed, AI can make human lives healthier, safer, and more prosperous.”

However, the rapid spread of AI across sectors raises urgent policy questions, particularly concerning the data used for AI training. The application of UK copyright law to the training of AI models is currently contested, with the debate often framed as a “zero-sum game” between AI developers and rights holders. The TBI argues that this framing “misrepresents the nature of the challenge and the opportunity before us.”

The report emphasises that “bold policy solutions are needed to provide all parties with legal clarity and unlock investments that spur innovation, job creation, and economic growth.”

According to the TBI, AI presents opportunities for creators—noting its use in various fields from podcasts to filmmaking. The report draws parallels with past technological innovations – such as the printing press and the internet – which were initially met with resistance, but ultimately led to societal adaptation and human ingenuity prevailing.

The TBI proposes that the solution lies not in clinging to outdated copyright laws but in allowing them to “co-evolve with technological change” to remain effective in the age of AI.

The UK government has proposed a text and data mining exception with an opt-out option for rights holders. While the TBI views this as a good starting point for balancing stakeholder interests, it acknowledges the “significant implementation and enforcement challenges” that come with it, spanning legal, technical, and geopolitical dimensions.

In the report, the Tony Blair Institute for Global Change “assesses the merits of the UK government’s proposal and outlines a holistic policy framework to make it work in practice.”

The report includes recommendations and examines novel forms of art that will emerge from AI. It also delves into the disagreement between rights holders and developers on copyright, the wider implications of copyright policy, and the serious hurdles the UK’s text and data mining proposal faces.

Furthermore, the Tony Blair Institute explores the challenges of governing an opt-out policy, implementation problems with opt-outs, making opt-outs useful and accessible, and tackling the diffusion problem. AI summaries and the problems they present regarding identity are also addressed, along with defensive tools as a partial solution and solving licensing problems.

The report also seeks to clarify the standards on human creativity, address digital watermarking, and discuss the uncertainty around the impact of generative AI on the industry. It proposes establishing a Centre for AI and the Creative Industries and discusses the risk of judicial review, the benefits of a remuneration scheme, and the advantages of a targeted levy on ISPs to raise funding for the Centre.

However, the report has faced strong criticism. Ed Newton-Rex, CEO of Fairly Trained, raised several concerns on Bluesky. These concerns include:

  • The report repeats the “misleading claim” that existing UK copyright law is uncertain, which Newton-Rex asserts is not the case.
  • The suggestion that an opt-out scheme would give rights holders more control over how their works are used is misleading. Newton-Rex argues that licensing is currently required by law, so moving to an opt-out system would actually decrease control, as some rights holders will inevitably miss the opt-out.
  • The report likens machine learning (ML) training to human learning, a comparison that Newton-Rex finds shocking, given the vastly different scalability of the two.
  • The report’s claim that AI developers won’t make long-term profits from training on people’s work is questioned, with Newton-Rex pointing to the significant funding raised by companies like OpenAI.
  • Newton-Rex suggests the report uses strawman arguments, such as stating that generative AI may not replace all human paid activities.
  • A key criticism is that the report omits data showing how generative AI replaces demand for human creative labour.
  • Newton-Rex also criticises the report’s proposed solutions, specifically the suggestion to set up an academic centre, which he notes “no one has asked for.”
  • Furthermore, he highlights the proposal to tax every household in the UK to fund this academic centre, arguing that this would place the financial burden on consumers rather than the AI companies themselves, and the revenue wouldn’t even go to creators.

Adding to these criticisms, British novelist and author Jonathan Coe noted that “the five co-authors of this report on copyright, AI, and the arts are all from the science and technology sectors. Not one artist or creator among them.”

While the report from Tony Blair Institute for Global Change supports the government’s ambition to be an AI leader, it also raises critical policy questions—particularly around copyright law and AI training data.

(Photo by Jez Timms)

See also: Amazon Nova Act: A step towards smarter, web-native AI agents

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

Is America falling behind in the AI race?

Several major US artificial intelligence companies have expressed concern that America’s edge in AI development is eroding.

In recent submissions to the US government, the companies warned that Chinese models, such as DeepSeek R1, are becoming more sophisticated and competitive. The submissions, filed in March 2025 in response to a request for input on an AI Action Plan, highlight the growing challenge from China in technological capability and price.

China’s growing AI presence

Chinese state-supported AI model DeepSeek R1 has piqued the interest of US developers. According to OpenAI, DeepSeek demonstrates that the technological gap between the US and China is narrowing. The company described DeepSeek as “state-subsidised, state-controlled, and freely available,” a characterisation that raises concerns about the model’s ability to influence global AI development.

OpenAI compared DeepSeek to Chinese telecommunications company Huawei, warning that Chinese regulations could allow the government to compel DeepSeek to compromise sensitive US systems or infrastructure. Concerns about data privacy were also raised, with OpenAI pointing out that Chinese rules could force DeepSeek to disclose user data to the government, and enhance China’s ability to develop more advanced AI systems.

The competition from China also includes Ernie X1 and Ernie 4.5, released by Baidu, which are designed to compete with Western systems.

According to Baidu, Ernie X1 “delivers performance on par with DeepSeek R1 at only half the price.” Meanwhile, Ernie 4.5 is priced at just 1% of OpenAI’s GPT-4.5 while outperforming it in multiple benchmarks.

DeepSeek’s aggressive pricing strategy is also raising concerns among the US companies. According to Bernstein Research, DeepSeek’s V3 and R1 models are priced “anywhere from 20-40x cheaper” than equivalent models from OpenAI. The pricing pressure could force US developers to adjust their business models to remain competitive.
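
For a sense of scale, here is a quick arithmetic sketch of those relative price claims. The baseline figure is an invented placeholder rather than a real OpenAI rate; only the ratios come from the statements quoted above.

```python
# Illustrative arithmetic only: the baseline is a made-up placeholder.
# Only the ratios (20-40x, half-price, 1%) come from the article.

baseline = 10.00              # assumed OpenAI price per 1M tokens, USD

r1_high = baseline / 20       # "20-40x cheaper" than OpenAI equivalents
r1_low = baseline / 40
ernie_x1 = r1_high / 2        # "on par with DeepSeek R1 at only half the price"
ernie_45 = baseline * 0.01    # "priced at just 1% of OpenAI's GPT-4.5"

print(f"DeepSeek R1: ${r1_low:.2f}-${r1_high:.2f}")  # $0.25-$0.50
print(f"Ernie X1:    ${ernie_x1:.2f}")               # $0.25
print(f"Ernie 4.5:   ${ernie_45:.2f}")               # $0.10
```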

Baidu’s strategy of open-sourcing its models is also gaining traction. “One thing we learned from DeepSeek is that open-sourcing the best models can greatly help adoption,” Baidu CEO Robin Li said in February. Baidu plans to open-source the Ernie 4.5 series starting June 30, which could accelerate adoption and further increase competitive pressure on US firms.

Cost aside, early user feedback on Baidu’s models has been positive. “[I’ve] been playing around with it for hours, impressive performance,” Alvin Foo, a venture partner at Zero2Launch, said in a post on social media, suggesting China’s AI models are becoming more affordable and effective.

US AI security and economic risks

The submissions also highlight what the US companies perceive as risks to security and the economy.

OpenAI warned that Chinese regulations could allow the government to compel DeepSeek to manipulate its models to compromise infrastructure or sensitive applications, creating vulnerabilities in important systems.

Anthropic’s concerns centred on biosecurity. It disclosed that its own Claude 3.7 Sonnet model demonstrated capabilities in biological weapon development, highlighting the dual-use nature of AI systems.

Anthropic also raised issues with US export controls on AI chips. While Nvidia’s H20 chips meet US export restrictions, they nonetheless perform well in text generation – an important feature for reinforcement learning. Anthropic called on the government to tighten controls to prevent China from gaining a technological edge using the chips.

Google took a more cautious approach, acknowledging security risks while warning against over-regulation. The company argues that strict AI export rules could harm US competitiveness by limiting business opportunities for domestic cloud providers. Google recommended targeted export controls to protect national security but without disruption to its business operations.

Maintaining US AI competitiveness

All three US companies emphasised the need for better government oversight and infrastructure investment to maintain US AI leadership.

Anthropic warned that by 2027, training a single advanced AI model could require up to five gigawatts of power – enough to power a small city. The company proposed a national target to build 50 additional gigawatts of AI-dedicated power capacity by 2027 and to streamline regulations around power transmission infrastructure.
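
The scale of that proposal is easier to see with the two figures side by side (a minimal arithmetic sketch using only the numbers in this article):

```python
# Rough arithmetic on the figures cited above.
power_per_model_gw = 5    # Anthropic's 2027 estimate for one advanced model
national_target_gw = 50   # proposed additional AI-dedicated capacity by 2027

runs = national_target_gw / power_per_model_gw
print(f"Capacity for ~{runs:.0f} frontier-scale training runs at once")  # ~10
```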

OpenAI positioned the competition between US and Chinese AI as a contest between democratic and authoritarian AI models. The company argued that promoting a free-market approach would drive better outcomes and maintain America’s technological edge.

Google focused on urging practical measures, including increased federal funding for AI research, improved access to government contracts, and streamlined export controls. The company also recommended more flexible procurement rules to accelerate AI adoption by federal agencies.

Regulatory strategies for US AI

The US companies called for a unified federal approach to AI regulation.

OpenAI proposed a regulatory framework managed by the Department of Commerce, warning that fragmented state-level regulations could drive AI development overseas. The company supported a tiered export control framework, allowing broader access to US-developed AI in democratic countries while restricting it in authoritarian states.

Anthropic called for stricter export controls on AI hardware and training data, warning that even minor improvements in model performance could give China a strategic advantage.

Google focused on copyright and intellectual property rights, stressing that its interpretation of ‘fair use’ is important for AI development. The company warned that overly restrictive copyright rules could disadvantage US AI firms compared to their Chinese competitors.

All three companies stressed the need for faster government adoption of AI. OpenAI recommended removing some existing testing and procurement barriers, while Anthropic supported streamlined procurement processes. Google emphasised the need for improved interoperability in government cloud infrastructure.

See also: The best AI prompt generator: Create perfect AI prompts

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

Hugging Face calls for open-source focus in the AI Action Plan

Hugging Face has called on the US government to prioritise open-source development in its forthcoming AI Action Plan.

In a statement to the Office of Science and Technology Policy (OSTP), Hugging Face emphasised that “thoughtful policy can support innovation while ensuring that AI development remains competitive, and aligned with American values.”

Hugging Face, which hosts over 1.5 million public models across various sectors and serves seven million users, proposes an AI Action Plan centred on three interconnected pillars:

  1. Hugging Face stresses the importance of strengthening open-source AI ecosystems. The company argues that technical innovation stems from diverse actors across institutions and that support for infrastructure – such as the National AI Research Resource (NAIRR), and investment in open science and data – allows these contributions to have an additive effect and accelerate robust innovation.
  2. The company prioritises efficient and reliable adoption of AI. Hugging Face believes that spreading the benefits of the technology by facilitating its adoption along the value chain requires actors across sectors of activity to shape its development. It states that more efficient, modular, and robust AI models require research and infrastructural investments to enable the broadest possible participation and innovation—enabling diffusion of technology across the US economy.
  3. Hugging Face also highlights the need to promote security and standards. The company suggests that decades of practices in open-source software cybersecurity, information security, and standards can inform safer AI technology. It advocates for promoting traceability, disclosure, and interoperability standards to foster a more resilient and robust technology ecosystem.

Open-source is key for AI advancement in the US (and beyond)

Hugging Face underlines that modern AI is built on decades of open research, with commercial giants relying heavily on open-source contributions. Recent breakthroughs – such as OLMO-2 and Olympic-Coder – demonstrate that open research remains a promising path to developing systems that match the performance of commercial models, and can often surpass them, especially in terms of efficiency and performance in specific domains.

“Perhaps most striking is the rapid compression of development timelines,” notes the company, “what once required over 100B parameter models just two years ago can now be accomplished with 2B parameter models, suggesting an accelerating path to parity.”

This trend towards more accessible, efficient, and collaborative AI development indicates that open approaches to AI development have a critical role to play in enabling a successful AI strategy that maintains technical leadership and supports more widespread and secure adoption of the technology.

Hugging Face argues that open models, infrastructure, and scientific practices constitute the foundation of AI innovation, allowing a diverse ecosystem of researchers, companies, and developers to build upon shared knowledge.

The company’s platform hosts AI models and datasets from both small actors (e.g., startups, universities) and large organisations (e.g., Microsoft, Google, OpenAI, Meta), demonstrating how open approaches accelerate progress and democratise access to AI capabilities.

“The United States must lead in open-source AI and open science, which can enhance American competitiveness by fostering a robust ecosystem of innovation and ensuring a healthy balance of competition and shared innovation,” states Hugging Face.

Research has shown that open technical systems act as force multipliers for economic impact, with an estimated 2000x multiplier effect. This means that $4 billion invested in open systems could potentially generate $8 trillion in value for companies using them.
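
The arithmetic behind that claim is straightforward (a minimal check using only the figures cited above):

```python
# Quick check of the multiplier claim above.
investment = 4e9      # $4 billion invested in open systems
multiplier = 2000     # estimated economic force-multiplier effect

value = investment * multiplier
print(f"${value / 1e12:.0f} trillion in value")  # $8 trillion
```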

These economic benefits extend to national economies as well. Without any open-source software contributions, the average country would lose 2.2% of its GDP. Open-source drove between €65 billion and €95 billion of European GDP in 2018 alone, a finding so significant that the European Commission cited it when establishing new rules to streamline the process for open-sourcing government software.

This demonstrates how open-source impact translates directly into policy action and economic advantage at the national level, underlining the importance of open-source as a public good.

Practical factors driving commercial adoption of open-source AI

Hugging Face identifies several practical factors driving the commercial adoption of open models:

  • Cost efficiency is a major driver, as developing AI models from scratch requires significant investment, so leveraging open foundations reduces R&D expenses.
  • Customisation is crucial, as organisations can adapt and deploy models specifically tailored to their use cases rather than relying on one-size-fits-all solutions.
  • Open models reduce vendor lock-in, giving companies greater control over their technology stack and independence from single providers.
  • Open models have caught up to and, in certain cases, surpassed the capabilities of closed, proprietary systems.

These factors are particularly valuable for startups and mid-sized companies, which can access cutting-edge technology without massive infrastructure investments. Banks, pharmaceutical companies, and other industries have been adapting open models to specific market needs—demonstrating how open-source foundations support a vibrant commercial ecosystem across the value chain.

Hugging Face’s policy recommendations to support open-source AI in the US

To support the development and adoption of open AI systems, Hugging Face offers several policy recommendations:

  • Enhance research infrastructure: Fully implement and expand the National AI Research Resource (NAIRR) pilot. Hugging Face’s active participation in the NAIRR pilot has demonstrated the value of providing researchers with access to computing resources, datasets, and collaborative tools.
  • Allocate public computing resources for open-source: The public should have ways to participate via public AI infrastructure. One way to do this would be to dedicate a portion of publicly-funded computing infrastructure to support open-source AI projects, reducing barriers to innovation for smaller research teams and companies that cannot afford proprietary systems.
  • Enable access to data for developing open systems: Create sustainable data ecosystems through targeted policies that address the shrinking data commons. Publishers are increasingly signing licensing deals with proprietary AI developers, meaning that the cost of acquiring quality data is approaching – or even surpassing – the computational cost of training frontier models, threatening to lock small open developers out of quality data. Support organisations that contribute to public data repositories and streamline compliance pathways that reduce legal barriers to responsible data sharing.
  • Develop open datasets: Invest in the creation, curation, and maintenance of robust, representative datasets that can support the next generation of AI research and applications. Expand initiatives like the IBM AI Alliance Trusted Data Catalog and support projects like IDI’s AI-driven digitisation of public collections at the Boston Public Library.
  • Strengthen rights-respecting data access frameworks: Establish clear guidelines for data usage, including standardised protocols for anonymisation, consent management, and usage tracking. Support public-private partnerships to create specialised data trusts for high-value domains like healthcare and climate science, ensuring that individuals and organisations maintain appropriate control over their data while enabling innovation.
  • Invest in stakeholder-driven innovation: Create and support programmes that enable organisations across diverse sectors (healthcare, manufacturing, education) to develop customised AI systems for their specific needs, rather than relying exclusively on general-purpose systems from major providers. This enables broader participation in the AI ecosystem and ensures that the benefits of AI extend throughout the economy.
  • Strengthen centres of excellence: Expand NIST’s role as a convener for AI experts across academia, industry, and government to share lessons and develop best practices. In particular, the AI Risk Management Framework has played a significant role in identifying stages of AI development and research questions that are critical to ensuring more robust and secure technology deployment for all. The tools developed at Hugging Face, from model documentation to evaluation libraries, are directly shaped by these questions.
  • Support high-quality data for performance and reliability evaluation: AI development depends heavily on data, both to train models and to reliably evaluate their progress, strengths, risks, and limitations. Fostering greater access to public data in a safe and secure way and ensuring that the evaluation data used to characterise models is sound and evidence-based will accelerate progress in both performance and reliability of the technology.

Prioritising efficient and reliable AI adoption

Hugging Face highlights that smaller companies and startups face significant barriers to AI adoption due to high costs and limited resources. According to IDC, global AI spending will reach $632 billion in 2028, but these costs remain prohibitive for many small organisations.

For organisations that do adopt open-source AI tools, the financial returns are tangible: 51% of surveyed companies currently using open-source AI tools report positive ROI, compared with just 41% of those not using them.

However, energy scarcity presents a growing concern, with the International Energy Agency projecting that data centres’ electricity consumption could double from 2022 levels to 1,000 TWh by 2026, equivalent to Japan’s entire electricity demand. While training AI models is energy-intensive, inference, due to its scale and frequency, can ultimately exceed training energy consumption.
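
To see why inference can come to dominate, consider a toy calculation; every number below is an illustrative assumption rather than a figure from the article:

```python
# Toy comparison of one-off training energy vs cumulative inference energy.
# All values are illustrative assumptions chosen only to show the arithmetic.
training_energy_mwh = 1_300        # assumed one-off training cost (MWh)
energy_per_query_wh = 0.3          # assumed energy per inference query (Wh)
queries_per_day = 100_000_000      # assumed daily query volume

daily_inference_mwh = energy_per_query_wh * queries_per_day / 1e6
days_to_match = training_energy_mwh / daily_inference_mwh
print(f"Inference energy overtakes training after ~{days_to_match:.0f} days")
```

At these assumed volumes, cumulative inference energy passes the one-off training cost in under two months, which is why deployment-scale efficiency matters as much as training efficiency.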

Ensuring broad AI accessibility requires both hardware optimisations and scalable software frameworks. A range of organisations are developing models tailored to their specific needs, and US leadership in efficiency-focused AI development presents a strategic advantage. The DOE’s AI for Energy initiative further supports research into energy-efficient AI, facilitating wider adoption without excessive computational demands.

With its letter to the OSTP, Hugging Face advocates for an AI Action Plan centred on open-source principles. By taking decisive action, the US can secure its leadership, drive innovation, enhance security, and ensure the widespread benefits of AI are realised across society and the economy.

See also: UK minister in US to pitch Britain as global AI investment hub

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post Hugging Face calls for open-source focus in the AI Action Plan appeared first on AI News.

UK minister in US to pitch Britain as global AI investment hub https://www.artificialintelligence-news.com/news/uk-minister-in-us-pitch-britain-global-ai-investment-hub/ Thu, 20 Mar 2025 13:18:04 +0000

The UK aims to secure its position as a global leader with additional AI investment, with Technology Secretary Peter Kyle currently in the US to champion Britain’s credentials.

As the UK government prioritises AI within its “Plan for Change,” Kyle’s visit aims to strengthen the special relationship between the UK and the US that has been under particular strain in recent years.

Speaking at NVIDIA’s annual conference in San Jose on 20th March, Kyle outlined the government’s strategy to “rewire” the British economy around AI. This initiative seeks to distribute the benefits of AI-driven wealth creation beyond traditional hubs like Silicon Valley and London, empowering communities across the UK to embrace its opportunities.

Addressing an audience of business leaders, developers, and innovators, the Technology Secretary articulated his vision for leveraging AI and advanced technologies to tackle complex global challenges, positioning Britain as a beacon of innovation.

The UK is actively deploying AI to enhance public services and stimulate economic growth, a cornerstone of the government’s “Plan for Change.”

Kyle is now highlighting the significant potential of the UK’s AI sector, currently valued at over $92 billion and projected to exceed $1 trillion by 2035. This growth trajectory, according to the government, will position Britain as the second-leading AI nation in the democratic world—presenting a wealth of investment opportunities for US companies and financial institutions.

A central theme of Kyle’s message is the readiness of the UK to embrace AI investment, with a particular emphasis on transforming “the relics of economic eras past into the UK’s innovative AI Growth Zones.”

These “AI Growth Zones” are a key element of the government’s AI Opportunities Action Plan. They are strategically designated areas designed to rapidly attract large-scale AI investment through streamlined regulations and dedicated infrastructure.

AI Growth Zones, as the name suggests, are envisioned as vibrant hubs for AI development with a pipeline of new opportunities for companies to scale up and innovate. The Technology Secretary is actively encouraging investors to participate in this new form of partnership.

During his speech at the NVIDIA conference, Kyle detailed how these Growth Zones – benefiting from access to substantial power connections and a planning system designed to expedite construction – will facilitate the development of compute infrastructure on a scale that the UK “has never seen before.”

The government has already received numerous proposals from local leaders and industry stakeholders across the nation, demonstrating Britain’s eagerness to utilise AI to revitalise communities and drive economic growth throughout the country.

This initiative is expected to contribute to higher living standards across the UK, a key priority for the government over the next four years. The AI Growth Zones are intended to deliver the jobs, investment, and a thriving business environment necessary to improve the financial well-being of citizens and deliver on the “Plan for Change.”

At the NVIDIA conference, Kyle said: “In empty factories and abandoned mines, in derelict sites and unused power supplies, I see the places where we can begin to build a new economic model. A model completely rewired around the immense power of artificial intelligence.

“Where, faced with that power, the state is neither a blocker nor a shirker—but an agile, proactive partner. In Britain, we want to turn the relics of economic eras past into AI Growth Zones.”

As part of his visit to the US, Peter Kyle will also engage with prominent companies in the tech sector, including OpenAI, Anthropic, NVIDIA, and Vantage. His aim is to encourage more of these companies to establish a presence in the UK, positioning it as their “Silicon Valley home from home.”

Furthermore, the Technology Secretary stated: “There is a real hunger for investment in Britain, and people who are optimistic about the future, and hopeful for the opportunities which AI will bring for them and their families. States owe it to their citizens to support it. Not through diktat or directive, but through partnership.”

The UK Prime Minister and the President of the US have placed AI at the forefront of the transatlantic relationship. During a visit to the White House last month, the Prime Minister confirmed that both nations are collaborating on a new economic deal with advanced technologies at its core.

Since unveiling its new AI strategy at the beginning of the year and assigning the technology a central role in delivering the government’s ‘Plan for Change,’ the UK has already witnessed significant investment from US companies seeking to establish AI bases in Britain.

Notable recent investments include a substantial £12 billion commitment from Vantage Data Centers to significantly expand Britain’s data infrastructure, which is projected to create approximately 11,500 jobs. Additionally, last month saw the UK Government formalise a partnership with Anthropic to enhance collaboration on leveraging AI to improve public services nationwide.

By strengthening these partnerships with leading US tech firms and investors, the UK’s AI sector is well-positioned for sustained growth as the government aims to continue to remove innovation barriers.

(Photo by Billy Joachim)

See also: OpenAI and Google call for US government action to secure AI lead

The post UK minister in US to pitch Britain as global AI investment hub appeared first on AI News.

OpenAI and Google call for US government action to secure AI lead https://www.artificialintelligence-news.com/news/openai-and-google-call-us-government-action-secure-ai-lead/ Fri, 14 Mar 2025 16:12:54 +0000

OpenAI and Google are each urging the US government to take decisive action to secure the nation’s AI leadership.

“As America’s world-leading AI sector approaches AGI, with a Chinese Communist Party (CCP) determined to overtake us by 2030, the Trump Administration’s new AI Action Plan can ensure that American-led AI built on democratic principles continues to prevail over CCP-built autocratic, authoritarian AI,” wrote OpenAI, in a letter to the Office of Science and Technology Policy.

In a separate letter, Google echoed this sentiment by stating, “While America currently leads the world in AI – and is home to the most capable and widely adopted AI models and tools – our lead is not assured.”    

A plan for the AI Action Plan

OpenAI highlighted AI’s potential to “scale human ingenuity,” driving productivity, prosperity, and freedom.  The company likened the current advancements in AI to historical leaps in innovation, such as the domestication of the horse, the invention of the printing press, and the advent of the computer.

We are at “the doorstep of the next leap in prosperity,” according to OpenAI CEO Sam Altman. The company stresses the importance of “freedom of intelligence,” advocating for open access to AGI while safeguarding against autocratic control and bureaucratic barriers.

OpenAI also outlined three scaling principles:

  1. The intelligence of an AI model roughly equals the log of the resources used to train and run it.
  2. The cost to use a given level of AI capability falls by about 10x every 12 months (illustrated in the sketch after this list).
  3. The amount of calendar time it takes to improve an AI model keeps decreasing.
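
Taken at face value, the second principle describes a simple exponential decay in the cost of a fixed capability level. A minimal sketch of that arithmetic (the 10x-per-year rate is OpenAI’s stated figure; the starting price is an illustrative assumption):

```python
# Cost of a fixed level of AI capability under OpenAI's stated trend of
# roughly a 10x price drop every 12 months.
def capability_cost(initial_cost: float, months: float) -> float:
    """Projected cost after `months`, assuming a 10x decline per 12 months."""
    return initial_cost * 0.1 ** (months / 12)

# Illustrative assumption: a capability priced at $100 per million tokens today.
for months in (0, 12, 24, 36):
    print(f"month {months:>2}: ${capability_cost(100.0, months):,.4f} per 1M tokens")
```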

Google also has a three-point plan for the US to focus on:

  1. Invest in AI: Google called for coordinated action to address the surging energy needs of AI infrastructure, balanced export controls, continued funding for R&D, and pro-innovation federal policy frameworks.
  2. Accelerate and modernise government AI adoption: Google urged the federal government to lead by example through AI adoption and deployment, including implementing multi-vendor, interoperable AI solutions and streamlining procurement processes.
  3. Promote pro-innovation approaches internationally: Google advocated for an active international economic policy to support AI innovation, championing market-driven technical standards, working with aligned countries to address national security risks, and combating restrictive foreign AI barriers.

AI policy recommendations for the US government

Both companies provided detailed policy recommendations to the US government.

OpenAI’s proposals include:

  • A regulatory strategy that ensures the freedom to innovate through voluntary partnership between the federal government and the private sector.    
  • An export control strategy that promotes the global adoption of American AI systems while protecting America’s AI lead.    
  • A copyright strategy that protects the rights of content creators while preserving American AI models’ ability to learn from copyrighted material.    
  • An infrastructure opportunity strategy to drive growth, including policies to support a thriving AI-ready workforce and ecosystems of labs, start-ups, and larger companies.    
  • An ambitious government adoption strategy to ensure the US government itself sets an example of using AI to benefit its citizens.    

Google’s recommendations include:

  • Advancing energy policies to power domestic data centres, including transmission and permitting reform.    
  • Adopting balanced export control policies that support market access while targeting pertinent risks.    
  • Accelerating AI R&D, streamlining access to computational resources, and incentivising public-private partnerships.    
  • Crafting a pro-innovation federal framework for AI, including federal legislation that prevents a patchwork of state laws, ensuring industry has access to data that enables fair learning, emphasising sector-specific and risk-based AI governance, and supporting workforce initiatives to develop AI skills.    

Both OpenAI and Google emphasise the need for swift and decisive action. OpenAI warned that America’s lead in AI is narrowing, while Google stressed that policy decisions will determine the outcome of the global AI competition.

“We are in a global AI competition, and policy decisions will determine the outcome,” Google explained. “A pro-innovation approach that protects national security and ensures that everyone benefits from AI is essential to realising AI’s transformative potential and ensuring that America’s lead endures.”

(Photo by Nils Huenerfuerst)

See also: Gemma 3: Google launches its latest open AI models

The post OpenAI and Google call for US government action to secure AI lead appeared first on AI News.

CERTAIN drives ethical AI compliance in Europe https://www.artificialintelligence-news.com/news/certain-drives-ethical-ai-compliance-in-europe/ Wed, 26 Feb 2025 17:27:42 +0000

EU-funded initiative CERTAIN aims to drive ethical AI compliance in Europe amid increasing regulations like the EU AI Act.

CERTAIN — short for “Certification for Ethical and Regulatory Transparency in Artificial Intelligence” — will focus on the development of tools and frameworks that promote transparency, compliance, and sustainability in AI technologies.

The project is led by Idemia Identity & Security France in collaboration with 19 partners across ten European countries, including the St. Pölten University of Applied Sciences (UAS) in Austria. With its official launch in January 2025, CERTAIN could serve as a blueprint for global AI governance.

Driving ethical AI practices in Europe

According to Sebastian Neumaier, Senior Researcher at the St. Pölten UAS’ Institute of IT Security Research and project manager for CERTAIN, the goal is to address crucial regulatory and ethical challenges.  

“In CERTAIN, we want to develop tools that make AI systems transparent and verifiable in accordance with the requirements of the EU’s AI Act. Our goal is to develop practically feasible solutions that help companies to efficiently fulfil regulatory requirements and sustainably strengthen confidence in AI technologies,” emphasised Neumaier.  

To achieve this, CERTAIN aims to create user-friendly tools and guidelines that simplify even the most complex AI regulations, helping organisations in both the public and private sectors navigate and implement these rules effectively. The overall intent is to provide a bridge between regulation and innovation, empowering businesses to leverage AI responsibly while fostering public trust.

Harmonising standards and improving sustainability  

One of CERTAIN’s primary objectives is to establish consistent standards for data sharing and AI development across Europe. By setting industry-wide norms for interoperability, the project seeks to improve collaboration and efficiency in the use of AI-driven technologies.

The effort to harmonise data practices isn’t just limited to compliance; it also aims to unlock new opportunities for innovation. CERTAIN’s solutions will create open and trustworthy European data spaces—essential components for driving sustainable economic growth.  

In line with the EU’s Green Deal, CERTAIN places a strong focus on sustainability. AI technologies, while transformative, come with significant environmental challenges—such as high energy consumption and resource-intensive data processing.  

CERTAIN will address these issues by promoting energy-efficient AI systems and advocating for eco-friendly methods of data management. This dual approach not only aligns with EU sustainability goals but also ensures that AI development is carried out with the health of the planet in mind.

A collaborative framework to unlock AI innovation

A unique aspect of CERTAIN is its approach to fostering collaboration and dialogue among stakeholders. The project team at St. Pölten UAS is actively engaging with researchers, tech companies, policymakers, and end-users to co-develop, test, and refine ideas, tools, and standards.  

This practice-oriented exchange extends beyond product development. CERTAIN also serves as a central authority for informing stakeholders about legal, ethical, and technical matters related to AI and certification. By maintaining open channels of communication, CERTAIN ensures that its outcomes are not only practical but also widely adopted.   

CERTAIN is part of the EU’s Horizon Europe programme, specifically under Cluster 4: Digital, Industry, and Space.

The project’s multidisciplinary and international consortium includes leading academic institutions, industrial giants, and research organisations, making it a powerful collective effort to shape the future of AI in Europe.  

In January 2025, representatives from all 20 consortium members met in Osny, France, to kick off their collaborative mission. The two-day meeting set the tone for the project’s ambitious agenda, with partners devising strategies for tackling the regulatory, technical, and ethical hurdles of AI.  

Ensuring compliance with ethical AI regulations in Europe 

As the EU’s AI Act edges closer to implementation, guidelines and tools like those developed under CERTAIN will be pivotal.

The Act will impose strict requirements on AI systems, particularly those deemed “high-risk,” such as applications in healthcare, transportation, and law enforcement.

While these regulations aim to ensure safety and accountability, they also pose challenges for organisations seeking to comply.  

CERTAIN seeks to alleviate these challenges by providing actionable solutions that align with Europe’s legal framework while encouraging innovation. By doing so, the project will play a critical role in positioning Europe as a global leader in ethical AI development.  

See also: Endor Labs: AI transparency vs ‘open-washing’

The post CERTAIN drives ethical AI compliance in Europe  appeared first on AI News.

Endor Labs: AI transparency vs ‘open-washing’ https://www.artificialintelligence-news.com/news/endor-labs-ai-transparency-vs-open-washing/ Mon, 24 Feb 2025 18:15:45 +0000

As the AI industry focuses on transparency and security, debates around the true meaning of “openness” are intensifying. Experts from open-source security firm Endor Labs weighed in on these pressing topics.

Andrew Stiefel, Senior Product Marketing Manager at Endor Labs, emphasised the importance of applying lessons learned from software security to AI systems.

“The US government’s 2021 Executive Order on Improving America’s Cybersecurity includes a provision requiring organisations to produce a software bill of materials (SBOM) for each product sold to federal government agencies.”

An SBOM is essentially an inventory detailing the open-source components within a product, helping detect vulnerabilities. Stiefel argued that “applying these same principles to AI systems is the logical next step.”  
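
To make the idea concrete, the sketch below builds an SBOM-style inventory and checks it against an advisory list. The component names and CVE identifier are hypothetical placeholders; production SBOMs would follow a standard format such as CycloneDX or SPDX:

```python
# Minimal SBOM-style inventory check. Component names and the CVE entry
# are hypothetical placeholders; real SBOMs use CycloneDX or SPDX formats.
sbom = {
    "product": "example-service",
    "components": [
        {"name": "libfoo", "version": "1.4.2"},
        {"name": "libbar", "version": "0.9.1"},
    ],
}

# Hypothetical advisory feed mapping (name, version) to an advisory ID.
known_vulnerabilities = {("libbar", "0.9.1"): "CVE-0000-00000"}

for comp in sbom["components"]:
    advisory = known_vulnerabilities.get((comp["name"], comp["version"]))
    if advisory:
        print(f"{comp['name']} {comp['version']} is affected by {advisory}")
```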

“Providing better transparency for citizens and government employees not only improves security,” he explained, “but also gives visibility into a model’s datasets, training, weights, and other components.”

What does it mean for an AI model to be “open”?  

Julien Sobrier, Senior Product Manager at Endor Labs, added crucial context to the ongoing discussion about AI transparency and “openness.” Sobrier broke down the complexity inherent in categorising AI systems as truly open.

“An AI model is made of many components: the training set, the weights, and programs to train and test the model, etc. It is important to make the whole chain available as open source to call the model ‘open’. It is a broad definition for now.”  

Sobrier noted the lack of consistency across major players, which has led to confusion about the term.

“Among the main players, the concerns about the definition of ‘open’ started with OpenAI, and Meta is in the news now for their LLAMA model even though that’s ‘more open’. We need a common understanding of what an open model means. We want to watch out for any ‘open-washing,’ as we saw it with free vs open-source software.”  

One potential pitfall, Sobrier highlighted, is the increasingly common practice of “open-washing,” where organisations claim transparency while imposing restrictions.

“With cloud providers offering a paid version of open-source projects (such as databases) without contributing back, we’ve seen a shift in many open-source projects: The source code is still open, but they added many commercial restrictions.”  

“Meta and other ‘open’ LLM providers might go this route to keep their competitive advantage: more openness about the models, but preventing competitors from using them,” Sobrier warned.

DeepSeek aims to increase AI transparency

DeepSeek, one of the rising — albeit controversial — players in the AI industry, has taken steps to address some of these concerns by making portions of its models and code open-source. The move has been praised for advancing transparency while providing security insights.  

“DeepSeek has already released the models and their weights as open-source,” said Andrew Stiefel. “This next move will provide greater transparency into their hosted services, and will give visibility into how they fine-tune and run these models in production.”

Such transparency has significant benefits, noted Stiefel. “This will make it easier for the community to audit their systems for security risks and also for individuals and organisations to run their own versions of DeepSeek in production.”  

Beyond security, DeepSeek also offers a roadmap on how to manage AI infrastructure at scale.

“From a transparency side, we’ll see how DeepSeek is running their hosted services. This will help address security concerns that emerged after it was discovered they left some of their ClickHouse databases unsecured.”

Stiefel highlighted that DeepSeek’s practices with tools like Docker, Kubernetes (K8s), and other infrastructure-as-code (IaC) configurations could empower startups and hobbyists to build similar hosted instances.  

Open-source AI is hot right now

DeepSeek’s transparency initiatives align with the broader trend toward open-source AI. A report by IDC reveals that 60% of organisations are opting for open-source AI models over commercial alternatives for their generative AI (GenAI) projects.  

Endor Labs research further indicates that organisations use, on average, between seven and twenty-one open-source models per application. The reasoning is clear: leveraging the best model for specific tasks and controlling API costs.

“As of February 7th, Endor Labs found that more than 3,500 additional models have been trained or distilled from the original DeepSeek R1 model,” said Stiefel. “This shows both the energy in the open-source AI model community, and why security teams need to understand both a model’s lineage and its potential risks.”  

For Sobrier, the growing adoption of open-source AI models reinforces the need to evaluate their dependencies.

“We need to look at AI models as major dependencies that our software depends on. Companies need to ensure they are legally allowed to use these models but also that they are safe to use in terms of operational risks and supply chain risks, just like open-source libraries.”

He emphasised that any risks can extend to training data: “They need to be confident that the datasets used for training the LLM were not poisoned or had sensitive private information.”  

Building a systematic approach to AI model risk  

As open-source AI adoption accelerates, managing risk becomes ever more critical. Stiefel outlined a systematic approach centred around three key steps:  

  1. Discovery: Detect the AI models your organisation currently uses.
  2. Evaluation: Review these models for potential risks, including security and operational concerns.
  3. Response: Set and enforce guardrails to ensure safe and secure model adoption (a minimal sketch of this loop follows below).
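
One way to picture that loop is as a policy check over a model inventory. The sketch below is a hypothetical illustration; the model names, risk scores, and threshold are invented for the example and do not reflect Endor Labs’ actual tooling:

```python
# Hypothetical discovery -> evaluation -> response loop for AI model risk.
# Model names, risk scores, and the threshold are illustrative assumptions.

def discover_models() -> list[dict]:
    """Step 1: inventory the AI models the organisation currently uses."""
    return [
        {"name": "org/summariser-7b", "risk_score": 0.2},
        {"name": "org/coder-distilled", "risk_score": 0.8},
    ]

def is_too_risky(model: dict, threshold: float = 0.5) -> bool:
    """Step 2: flag models whose assessed risk exceeds the policy threshold."""
    return model["risk_score"] > threshold

def enforce_guardrail(model: dict) -> None:
    """Step 3: respond -- here, simply report the blocked model."""
    print(f"blocked: {model['name']} (risk {model['risk_score']})")

for model in discover_models():
    if is_too_risky(model):
        enforce_guardrail(model)
```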

“The key is finding the right balance between enabling innovation and managing risk,” Stiefel said. “We need to give software engineering teams latitude to experiment but must do so with full visibility. The security team needs line-of-sight and the insight to act.”  

Sobrier further argued that the community must develop best practices for safely building and adopting AI models. A shared methodology is needed to evaluate AI models across parameters such as security, quality, operational risks, and openness.

Beyond transparency: Measures for a responsible AI future  

To ensure the responsible growth of AI, the industry must adopt controls that operate across several vectors:  

  • SaaS models: Safeguarding employee use of hosted models.
  • API integrations: Developers embedding third-party APIs such as DeepSeek into applications; with OpenAI-compatible interfaces, switching deployment can take just two lines of code (see the sketch after this list).
  • Open-source models: Developers leveraging community-built models or creating their own models from existing foundations maintained by companies like DeepSeek.
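
The “two lines of code” point refers to OpenAI-compatible endpoints: switching providers typically means changing only the base URL and the model name. A minimal sketch, assuming the alternative provider exposes an OpenAI-compatible API (the endpoint URL and model name below are illustrative placeholders):

```python
from openai import OpenAI

# Switching to another OpenAI-compatible provider usually means changing
# only these two values; the URL and model name here are placeholders.
client = OpenAI(base_url="https://api.example-provider.com/v1", api_key="...")

response = client.chat.completions.create(
    model="example-chat-model",
    messages=[{"role": "user", "content": "Hello"}],
)
print(response.choices[0].message.content)
```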

Sobrier warned of complacency in the face of rapid AI progress. “The community needs to build best practices to develop safe and open AI models,” he advised, “and a methodology to rate them along security, quality, operational risks, and openness.”  

As Stiefel succinctly summarised: “Think about security across multiple vectors and implement the appropriate controls for each.”

See also: AI in 2025: Purpose-driven models, human integration, and more

The post Endor Labs: AI transparency vs ‘open-washing’ appeared first on AI News.
