Navigating the EU AI Act: Implications for UK businesses

The EU AI Act, which came into effect on August 1, 2024, marks a turning point in the regulation of artificial intelligence. Aimed at governing the use and development of AI, it imposes rigorous standards for organisations operating within the EU or providing AI-driven products and services to its member states. Understanding and complying with the Act is essential for UK businesses seeking to compete in the European market.

The scope and impact of the EU AI Act

The EU AI Act introduces a risk-based framework that classifies AI systems into four categories: minimal, limited, high, and unacceptable risk. High-risk systems, which include AI used in healthcare diagnostics, autonomous vehicles, and financial decision-making, face stringent regulations. This risk-based approach ensures that the level of oversight corresponds to the potential impact of the technology on individuals and society.
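
To make the tiering concrete, here is a minimal sketch (illustrative only, not legal advice) of how an organisation might map example use cases onto the Act's four risk categories. "Healthcare diagnostics" and "credit scoring" are drawn from this article; the remaining tier assignments are assumptions, and the Act's annexes remain the authoritative source:

```python
from enum import Enum

class RiskTier(Enum):
    MINIMAL = "minimal"
    LIMITED = "limited"
    HIGH = "high"
    UNACCEPTABLE = "unacceptable"

# Example mapping; assignments other than those named in the article
# are illustrative assumptions.
USE_CASE_TIERS = {
    "spam filtering": RiskTier.MINIMAL,
    "customer-service chatbot": RiskTier.LIMITED,
    "healthcare diagnostics": RiskTier.HIGH,   # cited as high-risk above
    "credit scoring": RiskTier.HIGH,           # cited later in this article
    "social scoring": RiskTier.UNACCEPTABLE,   # prohibited outright
}

def classify(use_case: str) -> RiskTier | None:
    """Return the assumed risk tier, or None if the system still needs assessment."""
    return USE_CASE_TIERS.get(use_case)

print(classify("credit scoring"))  # RiskTier.HIGH
```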

For UK businesses, non-compliance with these rules is not an option. Organisations must ensure their AI systems align with the Act’s requirements or risk hefty fines, reputational damage, and exclusion from the lucrative EU market. The first step is to evaluate how their AI systems are classified and adapt operations accordingly. For instance, a company using AI to automate credit scoring must ensure its system meets transparency, fairness, and data privacy standards.

Preparing for the UK’s next steps

While the EU AI Act directly affects UK businesses trading with the EU, the UK is also likely to implement its own AI regulations. The recent King’s Speech highlighted the government’s commitment to AI governance, focusing on ethical AI and data protection. Future UK legislation will likely mirror aspects of the EU framework, making it essential for businesses to proactively prepare for compliance in multiple jurisdictions.

The role of ISO 42001 in ensuring compliance

International standards like ISO 42001 provide a practical solution for businesses navigating this evolving regulatory landscape. As the global benchmark for AI management systems, ISO 42001 offers a structured framework to manage the development and deployment of AI responsibly.

Adopting ISO 42001 enables businesses to demonstrate compliance with EU requirements while fostering trust among customers, partners, and regulators. Its focus on continuous improvement ensures that organisations can adapt to future regulatory changes, whether from the EU, UK, or other regions. Moreover, the standard promotes transparency, safety, and ethical practices, which are essential for building AI systems that are not only compliant but also aligned with societal values.

Using AI as a catalyst for growth

Compliance with the EU AI Act and ISO 42001 isn’t just about avoiding penalties; it’s an opportunity to use AI as a sustainable growth and innovation driver. Businesses prioritising ethical AI practices can gain a competitive edge by enhancing customer trust and delivering high-value solutions.

For example, AI can revolutionise patient care in the healthcare sector by enabling faster diagnostics and personalised treatments. By aligning these technologies with ISO 42001, organisations can ensure their tools meet the highest safety and privacy standards. Similarly, financial firms can harness AI to optimise decision-making processes while maintaining transparency and fairness in customer interactions.

The risks of non-compliance

Recent incidents, such as AI-driven fraud schemes and cases of algorithmic bias, highlight the risks of neglecting proper governance. The EU AI Act directly addresses these challenges by enforcing strict guidelines on data usage, transparency, and accountability. Failure to comply risks significant fines and undermines stakeholder confidence, with long-lasting consequences for an organisation’s reputation.

The MOVEit and Capita breaches serve as stark reminders of the vulnerabilities associated with technology when governance and security measures are lacking. For UK businesses, robust compliance strategies are essential to mitigate such risks and ensure resilience in an increasingly regulated environment.

How UK businesses can adapt

1. Understand the risk level of AI systems: Conduct a comprehensive review of how AI is used within the organisation to determine risk levels. This assessment should consider the impact of the technology on users, stakeholders, and society.

2. Update compliance programs: Align data collection, system monitoring, and auditing practices with the requirements of the EU AI Act.

3. Adopt ISO 42001: Implementing the standard provides a scalable framework to manage AI responsibly, ensuring compliance while fostering innovation.

4. Invest in employee education: Equip teams with the knowledge to manage AI responsibly and adapt to evolving regulations.

5. Leverage advanced technologies: Use AI itself to monitor compliance, identify risks, and improve operational efficiency (a minimal sketch of such an automated check follows this list).
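
Picking up step 5: the following is a minimal sketch, with hypothetical field and check names, of an AI-system inventory record that also supports the risk review in step 1. Nothing here is mandated by the Act itself; the required-check list is an assumption based on the standards this article names (transparency, fairness, data privacy):

```python
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    name: str
    purpose: str
    risk_tier: str  # "minimal" | "limited" | "high" | "unacceptable"
    checks_passed: dict = field(default_factory=dict)

# Assumed check list for high-risk systems.
REQUIRED_CHECKS = {
    "high": ["transparency", "fairness", "data_privacy"],
    "unacceptable": [],  # prohibited outright; no checks make it compliant
}

def compliance_gaps(record: AISystemRecord) -> list[str]:
    """List required checks the system has not yet passed."""
    required = REQUIRED_CHECKS.get(record.risk_tier, [])
    return [c for c in required if not record.checks_passed.get(c, False)]

# The credit-scoring example from earlier in the article:
scorer = AISystemRecord(
    name="credit-scorer",
    purpose="automated credit scoring",
    risk_tier="high",
    checks_passed={"transparency": True, "fairness": False},
)
print(compliance_gaps(scorer))  # -> ['fairness', 'data_privacy']
```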

The future of AI regulation

As AI becomes an integral part of business operations, regulatory frameworks will continue to evolve. The EU AI Act will likely inspire similar legislation worldwide, creating a more complex compliance landscape. Businesses that act now to adopt international standards and align with best practices will be better positioned to navigate these changes.

The EU AI Act is a wake-up call for UK businesses to prioritise ethical AI practices and proactive compliance. By implementing tools like ISO 42001 and preparing for future regulations, organisations can turn compliance into an opportunity for growth, innovation, and resilience.

OpenAI and Google call for US government action to secure AI lead

OpenAI and Google are each urging the US government to take decisive action to secure the nation’s AI leadership.

“As America’s world-leading AI sector approaches AGI, with a Chinese Communist Party (CCP) determined to overtake us by 2030, the Trump Administration’s new AI Action Plan can ensure that American-led AI built on democratic principles continues to prevail over CCP-built autocratic, authoritarian AI,” wrote OpenAI, in a letter to the Office of Science and Technology Policy.

In a separate letter, Google echoed this sentiment by stating, “While America currently leads the world in AI – and is home to the most capable and widely adopted AI models and tools – our lead is not assured.”    

A plan for the AI Action Plan

OpenAI highlighted AI’s potential to “scale human ingenuity,” driving productivity, prosperity, and freedom.  The company likened the current advancements in AI to historical leaps in innovation, such as the domestication of the horse, the invention of the printing press, and the advent of the computer.

We are at “the doorstep of the next leap in prosperity,” according to OpenAI CEO Sam Altman. The company stresses the importance of “freedom of intelligence,” advocating for open access to AGI while safeguarding against autocratic control and bureaucratic barriers.

OpenAI also outlined three scaling principles:

  1. The intelligence of an AI model roughly equals the log of the resources used to train and run it.
  2. The cost to use a given level of AI capability falls by about 10x every 12 months (see the worked sketch after this list).
  3. The amount of calendar time it takes to improve an AI model keeps decreasing.
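
As a worked illustration of the second principle: if the cost of a fixed capability level falls 10x every 12 months (OpenAI's claim; the dollar figures below are hypothetical), the cost after t months is today's cost multiplied by 0.1^(t/12):

```python
def projected_cost(cost_today: float, months: float) -> float:
    """Cost of a fixed AI capability level after `months`,
    assuming OpenAI's claimed 10x reduction every 12 months."""
    return cost_today * 0.1 ** (months / 12)

# Hypothetical capability costing $1.00 per million tokens today:
for m in (0, 6, 12, 24):
    print(f"after {m:>2} months: ${projected_cost(1.00, m):.3f}")
# after  0 months: $1.000
# after  6 months: $0.316
# after 12 months: $0.100
# after 24 months: $0.010
```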

Google also has a three-point plan for the US to focus on:

  1. Invest in AI: Google called for coordinated action to address the surging energy needs of AI infrastructure, balanced export controls, continued funding for R&D, and pro-innovation federal policy frameworks.
  2. Accelerate and modernise government AI adoption: Google urged the federal government to lead by example through AI adoption and deployment, including implementing multi-vendor, interoperable AI solutions and streamlining procurement processes.
  3. Promote pro-innovation approaches internationally: Google advocated for an active international economic policy to support AI innovation, championing market-driven technical standards, working with aligned countries to address national security risks, and combating restrictive foreign AI barriers.

AI policy recommendations for the US government

Both companies provided detailed policy recommendations to the US government.

OpenAI’s proposals include:

  • A regulatory strategy that ensures the freedom to innovate through voluntary partnership between the federal government and the private sector.    
  • An export control strategy that promotes the global adoption of American AI systems while protecting America’s AI lead.    
  • A copyright strategy that protects the rights of content creators while preserving American AI models’ ability to learn from copyrighted material.    
  • An infrastructure opportunity strategy to drive growth, including policies to support a thriving AI-ready workforce and ecosystems of labs, start-ups, and larger companies.    
  • An ambitious government adoption strategy to ensure the US government itself sets an example of using AI to benefit its citizens.    

Google’s recommendations include:

  • Advancing energy policies to power domestic data centres, including transmission and permitting reform.    
  • Adopting balanced export control policies that support market access while targeting pertinent risks.    
  • Accelerating AI R&D, streamlining access to computational resources, and incentivising public-private partnerships.    
  • Crafting a pro-innovation federal framework for AI, including federal legislation that prevents a patchwork of state laws, ensuring industry has access to data that enables fair learning, emphasising sector-specific and risk-based AI governance, and supporting workforce initiatives to develop AI skills.    

Both OpenAI and Google emphasise the need for swift and decisive action. OpenAI warned that America’s lead in AI is narrowing, while Google stressed that policy decisions will determine the outcome of the global AI competition.

“We are in a global AI competition, and policy decisions will determine the outcome,” Google explained. “A pro-innovation approach that protects national security and ensures that everyone benefits from AI is essential to realising AI’s transformative potential and ensuring that America’s lead endures.”

(Photo by Nils Huenerfuerst)

See also: Gemma 3: Google launches its latest open AI models

CERTAIN drives ethical AI compliance in Europe

EU-funded initiative CERTAIN aims to drive ethical AI compliance in Europe amid increasing regulations like the EU AI Act.

CERTAIN — short for “Certification for Ethical and Regulatory Transparency in Artificial Intelligence” — will focus on the development of tools and frameworks that promote transparency, compliance, and sustainability in AI technologies.

The project is led by Idemia Identity & Security France in collaboration with 19 partners across ten European countries, including the St. Pölten University of Applied Sciences (UAS) in Austria. With its official launch in January 2025, CERTAIN could serve as a blueprint for global AI governance.

Driving ethical AI practices in Europe

According to Sebastian Neumaier, Senior Researcher at the St. Pölten UAS’ Institute of IT Security Research and project manager for CERTAIN, the goal is to address crucial regulatory and ethical challenges.  

“In CERTAIN, we want to develop tools that make AI systems transparent and verifiable in accordance with the requirements of the EU’s AI Act. Our goal is to develop practically feasible solutions that help companies to efficiently fulfil regulatory requirements and sustainably strengthen confidence in AI technologies,” emphasised Neumaier.  

To achieve this, CERTAIN aims to create user-friendly tools and guidelines that simplify even the most complex AI regulations—helping organisations both in the public and private sectors navigate and implement these rules effectively. The overall intent is to provide a bridge between regulation and innovation, empowering businesses to leverage AI responsibly while fostering public trust.

Harmonising standards and improving sustainability  

One of CERTAIN’s primary objectives is to establish consistent standards for data sharing and AI development across Europe. By setting industry-wide norms for interoperability, the project seeks to improve collaboration and efficiency in the use of AI-driven technologies.

The effort to harmonise data practices isn’t just limited to compliance; it also aims to unlock new opportunities for innovation. CERTAIN’s solutions will create open and trustworthy European data spaces—essential components for driving sustainable economic growth.  

In line with the EU’s Green Deal, CERTAIN places a strong focus on sustainability. AI technologies, while transformative, come with significant environmental challenges—such as high energy consumption and resource-intensive data processing.  

CERTAIN will address these issues by promoting energy-efficient AI systems and advocating for eco-friendly methods of data management. This dual approach not only aligns with EU sustainability goals but also ensures that AI development is carried out with the health of the planet in mind.

A collaborative framework to unlock AI innovation

A unique aspect of CERTAIN is its approach to fostering collaboration and dialogue among stakeholders. The project team at St. Pölten UAS is actively engaging with researchers, tech companies, policymakers, and end-users to co-develop, test, and refine ideas, tools, and standards.  

This practice-oriented exchange extends beyond product development. CERTAIN also serves as a central authority for informing stakeholders about legal, ethical, and technical matters related to AI and certification. By maintaining open channels of communication, CERTAIN ensures that its outcomes are not only practical but also widely adopted.   

CERTAIN is part of the EU’s Horizon Europe programme, specifically under Cluster 4: Digital, Industry, and Space.

The project’s multidisciplinary and international consortium includes leading academic institutions, industrial giants, and research organisations, making it a powerful collective effort to shape the future of AI in Europe.  

In January 2025, representatives from all 20 consortium members met in Osny, France, to kick off their collaborative mission. The two-day meeting set the tone for the project’s ambitious agenda, with partners devising strategies for tackling the regulatory, technical, and ethical hurdles of AI.  

Ensuring compliance with ethical AI regulations in Europe 

As the EU’s AI Act edges closer to implementation, guidelines and tools like those developed under CERTAIN will be pivotal.

The Act will impose strict requirements on AI systems, particularly those deemed “high-risk,” such as applications in healthcare, transportation, and law enforcement.

While these regulations aim to ensure safety and accountability, they also pose challenges for organisations seeking to comply.  

CERTAIN seeks to alleviate these challenges by providing actionable solutions that align with Europe’s legal framework while encouraging innovation. By doing so, the project will play a critical role in positioning Europe as a global leader in ethical AI development.  

See also: Endor Labs: AI transparency vs ‘open-washing’

Eric Schmidt: AI misuse poses an ‘extreme risk’

Eric Schmidt, former CEO of Google, has warned that AI misuse poses an “extreme risk” and could do catastrophic harm.

Speaking to BBC Radio 4’s Today programme, Schmidt cautioned that AI could be weaponised by extremists and “rogue states” such as North Korea, Iran, and Russia to “harm innocent people.”

Schmidt expressed concern that rapid AI advancements could be exploited to create weapons, including biological attacks. Highlighting the dangers, he said: “The real fears that I have are not the ones that most people talk about AI, I talk about extreme risk.”

Using a chilling analogy, Schmidt referenced the al-Qaeda leader responsible for the 9/11 attacks: “I’m always worried about the Osama bin Laden scenario, where you have some truly evil person who takes over some aspect of our modern life and uses it to harm innocent people.”

He emphasised the pace of AI development and its potential to be co-opted by nations or groups with malevolent intent.

“Think about North Korea, or Iran, or even Russia, who have some evil goal … they could misuse it and do real harm,” Schmidt warned.

Oversight without stifling innovation

Schmidt urged governments to closely monitor private tech companies pioneering AI research. While noting that tech leaders are generally aware of AI’s societal implications, they may make decisions based on different values from those of public officials.

“My experience with the tech leaders is that they do have an understanding of the impact they’re having, but they might make a different values judgement than the government would make.”

Schmidt also endorsed the export controls introduced under former US President Joe Biden last year to restrict the sale of advanced microchips. The measure is aimed at slowing the progress of geopolitical adversaries in AI research.  

Global divisions around preventing AI misuse

The tech veteran was in Paris when he made his remarks, attending the AI Action Summit, a two-day event that wrapped up on Tuesday.

The summit, attended by 57 countries, saw the announcement of an agreement on “inclusive” AI development. Signatories included major players like China, India, the EU, and the African Union.  

However, the UK and the US declined to sign the communique. The UK government said the agreement lacked “practical clarity” and failed to address critical “harder questions” surrounding national security. 

Schmidt cautioned against excessive regulation that might hinder progress in this transformative field. This was echoed by US Vice-President JD Vance who warned that heavy-handed regulation “would kill a transformative industry just as it’s taking off”.  

This reluctance to endorse sweeping international accords reflects diverging approaches to AI governance. The EU has championed a more restrictive framework for AI, prioritising consumer protections, while countries like the US and UK are opting for more agile and innovation-driven strategies. 

Schmidt pointed to the consequences of Europe’s tight regulatory stance, predicting that the region would miss out on pioneering roles in AI.

“The AI revolution, which is the most important revolution in my opinion since electricity, is not going to be invented in Europe,” he remarked.

Prioritising national and global safety

Schmidt’s comments come against a backdrop of increasing scrutiny over AI’s dual-use potential—its ability to be used for both beneficial and harmful purposes.

From deepfakes to autonomous weapons, AI poses a bevy of risks if left without measures to guard against misuse. Leaders and experts, including Schmidt, are advocating for a balanced approach that fosters innovation while addressing these dangers head-on.

While international cooperation remains a complex and contentious issue, the overarching consensus is clear: without safeguards, AI’s evolution could have unintended – and potentially catastrophic – consequences.

(Photo by Guillaume Paumier under CC BY 3.0 license. Cropped to landscape from original version.)

See also: NEPC: AI sprint risks environmental catastrophe

EU AI Act: What businesses need to know as regulations go live

Next week marks the beginning of a new era for AI regulations as the first obligations of the EU AI Act take effect.

While the full compliance requirements won’t come into force until mid-2025, the initial phase of the EU AI Act begins February 2nd and includes significant prohibitions on specific AI applications. Businesses across the globe that operate in the EU must now navigate a regulatory landscape with strict rules and high stakes.

The new regulations prohibit the deployment or use of several high-risk AI systems. These include applications such as social scoring, emotion recognition, real-time remote biometric identification in public spaces, and other scenarios deemed unacceptable under the Act.

Companies found in violation of the rules could face penalties of up to 7% of their global annual turnover, making it imperative for organisations to understand and comply with the restrictions.  
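
For a sense of scale, here is a back-of-envelope calculation of that headline exposure. The 7% rate is the ceiling stated above; the turnover figure is hypothetical:

```python
def max_fine(global_annual_turnover: float, ceiling: float = 0.07) -> float:
    """Maximum penalty under the EU AI Act's 7%-of-turnover ceiling."""
    return global_annual_turnover * ceiling

# A firm with €500m global annual turnover risks fines of up to €35,000,000.
print(f"€{max_fine(500_000_000):,.0f}")
```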

Early compliance challenges  

“It’s finally here,” says Levent Ergin, Chief Strategist for Climate, Sustainability, and AI at Informatica. “While we’re still in a phased approach, businesses’ hard-earned preparations for the EU AI Act will now face the ultimate test.”

Ergin highlights that even though most compliance requirements will not take effect until mid-2025, the early prohibitions set a decisive tone.

“For businesses, the pressure in 2025 is twofold. They must demonstrate tangible ROI from AI investments while navigating challenges around data quality and regulatory uncertainty. It’s already the perfect storm, with 89% of large businesses in the EU reporting conflicting expectations for their generative AI initiatives. At the same time, 48% say technology limitations are a major barrier to moving AI pilots into production,” he remarks.

Ergin believes the key to compliance and success lies in data governance.

“Without robust data foundations, organisations risk stagnation, limiting their ability to unlock AI’s full potential. After all, isn’t ensuring strong data governance a core principle that the EU AI Act is built upon?”

To adapt, companies must prioritise strengthening their approach to data quality.

“Strengthening data quality and governance is no longer optional, it’s critical. To ensure both compliance and prove the value of AI, businesses must invest in making sure data is accurate, holistic, integrated, up-to-date and well-governed,” says Ergin.

“This isn’t just about meeting regulatory demands; it’s about enabling AI to deliver real business outcomes. As 82% of EU companies plan to increase their GenAI investments in 2025, ensuring their data is AI-ready will be the difference between those who succeed and those who remain in the starting blocks.”

EU AI Act has no borders

The extraterritorial scope of the EU AI Act means non-EU organisations are assuredly not off the hook. As Marcus Evans, a partner at Norton Rose Fulbright, explains, the Act applies far beyond the EU’s borders.

“The AI Act will have a truly global application,” says Evans. “That’s because it applies not only to organisations in the EU using AI or those providing, importing, or distributing AI to the EU market, but also AI provision and use where the output is used in the EU. So, for instance, a company using AI for recruitment in the EU – even if it is based elsewhere – would still be captured by these new rules.”  
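
A minimal sketch of that scope test, following Evans’ description above (simplified to three boolean conditions; illustrative only, not legal advice):

```python
def in_scope(org_in_eu: bool, provides_to_eu_market: bool,
             output_used_in_eu: bool) -> bool:
    """Simplified EU AI Act applicability test per the description above."""
    return org_in_eu or provides_to_eu_market or output_used_in_eu

# Evans' recruitment example: a company based elsewhere whose AI output
# is used for hiring in the EU is still captured.
print(in_scope(org_in_eu=False, provides_to_eu_market=False,
               output_used_in_eu=True))  # -> True
```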

Evans advises businesses to start by auditing their AI use. “At this stage, businesses must first understand where AI is being used in their organisation so that they can then assess whether any use cases may trigger the prohibitions. Building on that initial inventory, a wider governance process can then be introduced to ensure AI use is assessed, remains outside the prohibitions, and complies with the AI Act.”  

While organisations work to align their AI practices with the new regulations, additional challenges remain. Compliance requires addressing other legal complexities such as data protection, intellectual property (IP), and discrimination risks.  

Evans emphasises that raising AI literacy within organisations is also a critical step.

“Any organisations in scope must also take measures to ensure their staff – and anyone else dealing with the operation and use of their AI systems on their behalf – have a sufficient level of AI literacy,” he states.

“AI literacy will play a vital role in AI Act compliance, as those involved in governing and using AI must understand the risks they are managing.”

Encouraging responsible innovation  

The EU AI Act is being hailed as a milestone for responsible AI development. By prohibiting harmful practices and requiring transparency and accountability, the regulation seeks to balance innovation with ethical considerations.

“This framework is a pivotal step towards building a more responsible and sustainable future for artificial intelligence,” says Beatriz Sanz Sáiz, AI Sector Leader at EY Global.

Sanz Sáiz believes the legislation fosters trust while providing a foundation for transformative technological progress.

“It has the potential to foster further trust, accountability, and innovation in AI development, as well as strengthen the foundations upon which the technology continues to be built,” Sanz Sáiz asserts.

“It is critical that we focus on eliminating bias and prioritising fundamental rights like fairness, equity, and privacy. Responsible AI development is a crucial step in the quest to further accelerate innovation.”

What’s prohibited under the EU AI Act?  

To ensure compliance, businesses need to be crystal-clear on which activities fall under the EU AI Act’s strict prohibitions. The current list of prohibited activities includes:  

  • Harmful subliminal, manipulative, and deceptive techniques  
  • Harmful exploitation of vulnerabilities  
  • Unacceptable social scoring  
  • Individual crime risk assessment and prediction (with some exceptions)  
  • Untargeted scraping of internet or CCTV material to develop or expand facial recognition databases  
  • Emotion recognition in areas such as the workplace and education (with some exceptions)  
  • Biometric categorisation to infer sensitive categories (with some exceptions)  
  • Real-time remote biometric identification (RBI) in publicly accessible spaces for law enforcement purposes (with some exceptions)  

The Commission’s forthcoming guidance on which “AI systems” fall under these categories will be critical for businesses seeking to ensure compliance and reduce legal risks. Additionally, companies should anticipate further clarification and resources at the national and EU levels, such as the upcoming webinar hosted by the AI Office.
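
A minimal screening sketch over the list above: check an internal AI-use inventory against the prohibited categories. The category strings and the example inventory are illustrative assumptions; the Commission's forthcoming guidance remains the authoritative reference:

```python
# Simplified labels for the prohibited categories listed above.
PROHIBITED = {
    "subliminal or manipulative techniques",
    "exploitation of vulnerabilities",
    "social scoring",
    "individual crime prediction",
    "untargeted facial-recognition scraping",
    "emotion recognition in workplace/education",
    "biometric categorisation of sensitive traits",
    "real-time remote biometric identification",
}

def flag_prohibited(inventory: dict[str, str]) -> list[str]:
    """Return system names whose declared category is on the prohibited list."""
    return [name for name, category in inventory.items() if category in PROHIBITED]

inventory = {
    "hr-screening-bot": "emotion recognition in workplace/education",  # flagged
    "support-chatbot": "customer service",                             # fine
}
print(flag_prohibited(inventory))  # -> ['hr-screening-bot']
```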

A new landscape for AI regulations

The early implementation of the EU AI Act represents just the beginning of what is a remarkably complex and ambitious regulatory endeavour. As AI continues to play an increasingly pivotal role in business strategy, organisations must learn to navigate new rules and continuously adapt to future changes.  

For now, businesses should focus on understanding the scope of their AI use, enhancing data governance, educating staff to build AI literacy, and adopting a proactive approach to compliance. By doing so, they can position themselves as leaders in a fast-evolving AI landscape and unlock the technology’s full potential while upholding ethical and legal standards.

(Photo by Guillaume Périgois)

See also: ChatGPT Gov aims to modernise US government agencies

Meta accused of using pirated data for AI development

Plaintiffs in the case of Kadrey et al. vs. Meta have filed a motion alleging the firm knowingly used copyrighted works in the development of its AI models.

The plaintiffs, which include author Richard Kadrey, filed their “Reply in Support of Plaintiffs’ Motion for Leave to File Third Amended Consolidated Complaint” in the United States District Court in the Northern District of California.

The filing accuses Meta of systematically torrenting and stripping copyright management information (CMI) from pirated datasets, including works from the notorious shadow library LibGen.

According to documents recently submitted to the court, evidence reveals highly incriminating practices involving Meta’s senior leaders. Plaintiffs allege that Meta CEO Mark Zuckerberg gave explicit approval for the use of the LibGen dataset, despite internal concerns raised by the company’s AI executives.

A December 2024 memo from internal Meta discussions acknowledged LibGen as “a dataset we know to be pirated,” with debates arising about the ethical and legal ramifications of using such materials. Documents also revealed that top engineers hesitated to torrent the datasets, citing concerns about using corporate laptops for potentially unlawful activities.

Additionally, internal communications suggest that after acquiring the LibGen dataset, Meta stripped CMI from the copyrighted works contained within—a practice that plaintiffs highlight as central to claims of copyright infringement.

According to the deposition of Michael Clark – a corporate representative for Meta – the company implemented scripts designed to remove any information identifying these works as copyrighted, including keywords like “copyright,” “acknowledgements,” or lines commonly used in such texts. Clark attested that this practice was done intentionally to prepare the dataset for training Meta’s Llama AI models.  
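
To make the alleged mechanism concrete, here is an illustrative sketch, emphatically not Meta's actual script, of the inverse operation: scanning text for the CMI keywords the filing says were targeted ("copyright", "acknowledgements"); the third keyword is an assumed example:

```python
CMI_KEYWORDS = ("copyright", "acknowledgements", "all rights reserved")

def lines_with_cmi(text: str) -> list[str]:
    """Return lines that appear to carry copyright management information."""
    return [
        line for line in text.splitlines()
        if any(keyword in line.lower() for keyword in CMI_KEYWORDS)
    ]

sample = "Chapter 1\nCopyright (c) 2020 Example Author. All rights reserved."
print(lines_with_cmi(sample))
# -> ['Copyright (c) 2020 Example Author. All rights reserved.']
```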

“Doesn’t feel right”

The allegations against Meta paint a portrait of a company knowingly partaking in a widespread piracy scheme facilitated through torrenting.

According to a string of emails included as exhibits, Meta engineers expressed concerns about the optics of torrenting pirated datasets from within corporate spaces. One engineer noted that “torrenting from a [Meta-owned] corporate laptop doesn’t feel right,” but despite hesitation, the rapid downloading and distribution – or “seeding” – of pirated data took place.

Legal counsel for the plaintiffs has stated that as late as January 2024, Meta had “already torrented (both downloaded and distributed) data from LibGen.” Moreover, records show that hundreds of related documents were initially obtained by Meta months prior but were withheld during early discovery processes. Plaintiffs argue this delayed disclosure amounts to bad-faith attempts by Meta to obstruct access to vital evidence.

During a deposition on 17 December 2024, Zuckerberg himself reportedly admitted that such activities would raise “lots of red flags” and stated it “seems like a bad thing,” though he provided limited direct responses regarding Meta’s broader AI training practices.

This case originally began as an intellectual property infringement action on behalf of authors and publishers claiming violations relating to AI use of their materials. However, the plaintiffs are now seeking to add two major claims to their suit: a violation of the Digital Millennium Copyright Act (DMCA) and a breach of the California Comprehensive Data Access and Fraud Act (CDAFA).  

Under the DMCA, the plaintiffs assert that Meta knowingly removed copyright protections to conceal unauthorised uses of copyrighted texts in its Llama models.

As cited in the complaint, Meta allegedly stripped CMI “to reduce the chance that the models will memorise this data” and that this removal of rights management indicators made discovering the infringement more difficult for copyright holders. 

The CDAFA allegations involve Meta’s methods for obtaining the LibGen dataset, including allegedly engaging in torrenting to acquire copyrighted datasets without permission. Internal documentation shows Meta engineers openly discussed concerns that seeding and torrenting might prove to be “legally not ok.” 

Meta case may impact emerging legislation around AI development

At the heart of this expanding legal battle lies growing concern over the intersection of copyright law and AI.

Plaintiffs argue the stripping of copyright protections from textual datasets denies rightful compensation to copyright owners and allows Meta to build AI systems like Llama on the financial ruins of authors’ and publishers’ creative efforts.

The timing of these allegations arises amidst heightened global scrutiny surrounding “generative AI” technologies. Companies like OpenAI, Google, and Meta have all come under fire regarding the use of copyrighted data to train their models. Courts across jurisdictions are currently grappling with the long-term impact of AI on rights management, with potentially landmark cases being decided in both the US and the UK.  

In this particular case, US courts have shown increasing willingness to hear complaints about AI’s potential harm to long-established copyright law precedents. Plaintiffs, in their motion, referred to The Intercept Media v. OpenAI, a recent decision from New York in which a similar DMCA claim was allowed to proceed.

Meta continues to deny all allegations in the case and has yet to publicly respond to Zuckerberg’s reported deposition statements.

Whether or not plaintiffs succeed in these amendments, authors across the world face growing anxieties about how their creative works are handled within the context of AI. With copyright law struggling to keep pace with technological advances, this case underscores the need for clearer guidance at an international level to protect both creators and innovators.

For Meta, these claims also represent a reputational risk. As AI becomes the central focus of its future strategy, the allegations of reliance on pirated libraries are unlikely to help its ambitions of maintaining leadership in the field.  

The unfolding case of Kadrey et al. vs. Meta could have far-reaching ramifications for the development of AI models moving forward, potentially setting legal precedents in the US and beyond.

(Photo by Amy Syiek)

See also: UK wants to prove AI can modernise public services responsibly

AI governance: Analysing emerging global regulations

Governments are scrambling to establish regulations to govern AI, citing numerous concerns over data privacy, bias, safety, and more.

AI News caught up with Nerijus Šveistys, Senior Legal Counsel at Oxylabs, to understand the state of play when it comes to AI regulation and its potential implications for industries, businesses, and innovation.

“The boom of the last few years appears to have sparked a push to establish regulatory frameworks for AI governance,” explains Šveistys.

“This is a natural development, as the rise of AI seems to pose issues in data privacy and protection, bias and discrimination, safety, intellectual property, and other legal areas, as well as ethics that need to be addressed.”

Regions diverge in regulatory strategy

The European Union’s AI Act has, unsurprisingly, positioned the region with a strict, centralised approach. The regulation, which came into force this year, is set to be fully effective by 2026.

Šveistys pointed out that the EU has acted relatively swiftly compared to other jurisdictions: “The main difference we can see is the comparative quickness with which the EU has released a uniform regulation to govern the use of all types of AI.”

Meanwhile, other regions have opted for more piecemeal approaches. China, for instance, has been implementing regulations specific to certain AI technologies in a phased manner. According to Šveistys, China began regulating AI models as early as 2021.

“In 2021, they introduced regulation on recommendation algorithms, which [had] increased their capabilities in digital advertising. It was followed by regulations on deep synthesis models or, in common terms, deepfakes and content generation in 2022,” he said.

“Then, in 2023, regulation on generative AI models was introduced as these models were making a splash in commercial usage.”

The US, in contrast, remains relatively uncoordinated in its approach. Federal-level regulations are yet to be enacted, with efforts mostly emerging at the state level.

“There are proposed regulations at the state level, such as the so-called California AI Act, but even if they come into power, it may still take some time before they do,” Šveistys noted.

This delay in implementing unified AI regulations in the US has raised questions about the extent to which business pushback may be contributing to the slow rollout. Šveistys said that while lobbyist pressure is a known factor, it’s not the only potential reason.

“There was pushback to the EU AI Act, too, which was nevertheless introduced. Thus, it is not clear whether the delay in the US is only due to lobbyism or other obstacles in the legislation enactment process,” explains Šveistys.

“It might also be because some still see AI as a futuristic concern, not fully appreciating the extent to which it is already a legal issue of today.”

Balancing innovation and safety

Differentiated regulatory approaches could affect the pace of innovation and business competitiveness across regions.

Europe’s regulatory framework, though more stringent, aims to ensure consumer protection and ethical adherence—something that less-regulated environments may lack.

“More rigid regulatory frameworks may impose compliance costs for businesses in the AI field and stifle competitiveness and innovation. On the other hand, they bring the benefits of protecting consumers and adhering to certain ethical norms,” comments Šveistys.

This trade-off is especially pronounced in AI-related sectors such as targeted advertising, where algorithmic bias is increasingly scrutinised.

AI governance often extends beyond laws that specifically target AI, incorporating related legal areas like those governing data collection and privacy. For example, the EU AI Act also regulates the use of AI in physical devices, such as elevators.

“Additionally, all businesses that collect data for advertisement are potentially affected as AI regulation can also cover algorithmic bias in targeted advertising,” emphasises Šveistys.

Impact on related industries

One industry that is deeply intertwined with AI developments is web scraping. Typically used for collecting publicly available data, web scraping is undergoing an AI-driven evolution.

“From data collection, validation, analysis, or overcoming anti-scraping measures, there is a lot of potential for AI to massively improve the efficiency, accuracy, and adaptability of web scraping operations,” said Šveistys. 

However, as AI regulation and related laws tighten, web scraping companies will face greater scrutiny.

“AI regulations may also bring the spotlight on certain areas of law that were always very relevant to the web scraping industry, such as privacy or copyright laws,” Šveistys added.

“At the end of the day, scraping content protected by such laws without proper authorisation could always lead to legal issues, and now so can using AI this way.”

Copyright battles and legal precedents

The implications of AI regulation are also playing out on a broader legal stage, particularly in cases involving generative AI tools.

High-profile lawsuits have been launched against AI giants like OpenAI and its primary backer, Microsoft, by authors, artists, and musicians who claim their copyrighted materials were used to train AI systems without proper permission.

“These cases are pivotal in determining the legal boundaries of using copyrighted material for AI development and establishing legal precedents for protecting intellectual property in the digital age,” said Šveistys.

While these lawsuits could take years to resolve, their outcomes may fundamentally shape the future of AI development. So, what can businesses do now as the regulatory and legal landscape continues to evolve?

“Speaking about the specific cases of using copyrighted material for AI training, businesses should approach this the same way as any web-scraping activity – that is, evaluate the specific data they wish to collect with the help of a legal expert in the field,” recommends Šveistys.

“It is important to recognise that the AI legal landscape is very new and rapidly evolving, with not many precedents in place to refer to as of yet. Hence, continuous monitoring and adaptation of your AI usage are crucial.”

Just this week, the UK Government made headlines with its announcement of a consultation on the use of copyrighted material for training AI models. Under the proposals, tech firms could be permitted to use copyrighted material unless owners have specifically opted out.

Despite the diversity of approaches globally, the AI regulatory push marks a significant moment for technological governance. Whether through the EU’s comprehensive model, China’s step-by-step strategy, or narrower, state-level initiatives like in the US, businesses worldwide must navigate a complex, evolving framework.

The challenge ahead will be striking the right balance between fostering innovation and mitigating risks, ensuring that AI remains a force for good while avoiding potential harms.

(Photo by Nathan Bingle)

See also: Anthropic urges AI regulation to avoid catastrophes

MHRA pilots ‘AI Airlock’ to accelerate healthcare adoption

The Medicines and Healthcare products Regulatory Agency (MHRA) has announced the selection of five healthcare technologies for its ‘AI Airlock’ scheme.

AI Airlock aims to refine the process of regulating AI-driven medical devices and help fast-track their safe introduction to the UK’s National Health Service (NHS) and patients in need.

The technologies chosen for this scheme include solutions targeting cancer and chronic respiratory diseases, as well as advancements in radiology diagnostics. These AI systems promise to revolutionise the accuracy and efficiency of healthcare, potentially driving better diagnostic tools and patient care.

The AI Airlock, as described by the MHRA, is a “sandbox” environment—an experimental framework designed to help manufacturers determine how best to collect real-world evidence to support the regulatory approval of their devices.

Unlike traditional medical devices, AI models continue to evolve through learning, making the establishment of safety and efficacy evidence more complex. The Airlock enables this exploration within a monitored virtual setting, giving developers insight into the practical challenges of regulation while supporting the NHS’s broader adoption of transformative AI technologies.

Safely enabling AI healthcare innovation  

Laura Squire, the lead figure in MedTech regulatory reform and Chief Officer at the MHRA, said: “New AI medical devices have the potential to increase the accuracy of healthcare decisions, save time, and improve efficiency—leading to better outcomes for the NHS and patients across all healthcare settings. 

“But we need to be confident that AI-powered medical devices introduced into the NHS are safe, stay safe, and perform as intended through their lifetime of use.”

Squire emphasised that the AI Airlock pilot allows collaboration “in partnership with technology specialists, developers and the NHS,” facilitating the exploration of best practices and accelerating safe patient access to innovative solutions.

Government representatives have praised the initiative for its forward-thinking framework.

Karin Smyth, Minister of State for Health, commented: “As part of our 10-Year Health Plan, we’re shifting NHS care from analogue to digital, and this project will help bring the most promising technology to patients.

“AI has the power to revolutionise care by supporting doctors to diagnose diseases, automating time-consuming admin tasks, and reducing hospital admissions by predicting future ill health.”

Science Minister Lord Vallance lauded the AI Airlock pilot as “a great example of government working with businesses to enable them to turn ideas into products that improve lives.” He added, “This shows how good regulation can facilitate emerging technologies for the benefit of the UK and our economy.”

Selected technologies  

To be selected for the AI Airlock, AI-powered medical devices had to meet stringent criteria covering innovation, patient benefit, and readiness for regulatory challenge. The five technologies chosen for this inaugural pilot offer vital insights into healthcare’s future:

  1. Lenus Stratify

Patients with Chronic Obstructive Pulmonary Disease (COPD) are among those who stand to benefit significantly from AI innovation. Lenus Stratify, developed by Lenus Health, analyses patient data to predict severe lung disease outcomes, reducing unscheduled hospital admissions. The system empowers care providers to adopt earlier interventions, affording patients an improved quality of life while alleviating NHS resource strain.  

  2. Philips Radiology Reporting Enhancer

Philips has integrated AI into existing radiology workflows to enhance the efficiency and accuracy of critical radiology reports. This system uses AI to prepare the “Impression” section of reports, summarising essential diagnostic information for healthcare providers. By automating this process, Philips aims to minimise workload struggles, human errors, and miscommunication, creating a more seamless diagnostic experience.  

  3. Federated AI Monitoring Service (FAMOS)

One recurring AI challenge is the concept of “drift,” when changing real-world conditions impair system performance over time. Newton’s Tree has developed FAMOS to monitor AI models in real time, flagging degradation and enabling rapid corrections. Hospitals, regulators, and software developers can use this tool to ensure algorithms remain high-performing, adapting to evolving circumstances while prioritising patient safety. (A sketch of this kind of drift check appears after this list.)

  4. OncoFlow Personalised Cancer Management

Targeting the pressing healthcare challenge of reducing waiting times for cancer treatment, OncoFlow speeds up clinical workflows through its intelligent care pathway platform. Initially applied to breast cancer protocols, the system later aims to expand across other oncology domains. With quicker access to tailored therapies, patients gain increased survival rates amidst mounting NHS pressures.  

  5. SmartGuideline

Developed to simplify complex clinical decision-making, SmartGuideline uses a large language model trained on official NICE medical guidelines. The tool lets clinicians ask routine questions and receive verified, precise answers, removing the ambiguity associated with general-purpose AI language models. Patients, in turn, benefit from treatments grounded in up-to-date medical knowledge.
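
Again as illustration rather than a description of the actual product: tools of this kind typically ground answers by first retrieving the most relevant guideline passage and then generating an answer constrained to it. The retrieval step might look like the sketch below, with invented stand-in snippets rather than real NICE text.

```python
# Retrieval step only, with invented stand-in snippets rather than real
# NICE text; not a description of SmartGuideline's actual implementation.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

passages = [
    "Offer an inhaled corticosteroid to adults whose asthma is uncontrolled...",
    "Refer urgently any patient with suspected sepsis and signs of shock...",
    "Consider statin therapy for primary prevention when risk exceeds 10%...",
]

vectoriser = TfidfVectorizer().fit(passages)
index = vectoriser.transform(passages)

def best_passage(question: str) -> str:
    """Return the guideline passage most similar to the question."""
    scores = cosine_similarity(vectoriser.transform([question]), index)[0]
    return passages[int(scores.argmax())]

print(best_passage("When should I start a statin for prevention?"))
```

A generation step would then produce the answer from, and cite, the retrieved passage, which is what keeps responses verifiable.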

Broader implications  

The influence of the AI Airlock extends beyond its current applications. The MHRA expects pilot findings, due in 2025, to inform future medical device regulations and create a clearer path for manufacturers developing AI-enabled technologies. 

The evidence derived will help shape post-Brexit UKCA marking processes, giving manufacturers a more transparent route to compliance. By improving its regulatory frameworks, the UK could position itself as a global hub for med-tech innovation while ensuring faster access to life-saving tools.

The urgency of these developments was underscored earlier this year in Lord Darzi’s review of health and care. The report described the “critical state” of the NHS and identified AI interventions as a promising pathway to sustainability. The MHRA’s work on AI Airlock addresses one of the report’s major recommendations: enabling regulatory solutions and “unlocking the AI revolution” for healthcare.

While selection for the AI Airlock pilot does not constitute regulatory approval, the chosen technologies represent a potential leap forward in applying AI to some of healthcare’s most pressing challenges. The coming years will test how these solutions hold up under regulatory scrutiny.

If successful, the initiative from the MHRA could redefine how pioneering technologies like AI are adopted in healthcare, balancing the need for speed, safety, and efficiency. With the NHS under immense pressure from growing demand, AI’s ability to augment clinicians, predict illnesses, and streamline workflows may well be the game-changer the system urgently needs.

(Photo by National Cancer Institute)

See also: AI’s role in helping to prevent skin cancer through behaviour change

EU introduces draft regulatory guidance for AI models

The release of the “First Draft General-Purpose AI Code of Practice” marks the EU’s effort to create comprehensive regulatory guidance for general-purpose AI models.

The development of this draft has been a collaborative effort, involving input from diverse sectors including industry, academia, and civil society. The initiative was led by four specialised Working Groups, each addressing specific aspects of AI governance and risk mitigation:

  • Working Group 1: Transparency and copyright-related rules
  • Working Group 2: Risk identification and assessment for systemic risk
  • Working Group 3: Technical risk mitigation for systemic risk
  • Working Group 4: Governance risk mitigation for systemic risk

The draft is aligned with existing laws such as the Charter of Fundamental Rights of the European Union. It takes into account international approaches, strives for proportionality to risk, and aims to be future-proof by anticipating rapid technological change.

Key objectives outlined in the draft include:

  • Clarifying compliance methods for providers of general-purpose AI models
  • Facilitating understanding across the AI value chain, ensuring seamless integration of AI models into downstream products
  • Ensuring compliance with Union law on copyrights, especially concerning the use of copyrighted material for model training
  • Continuously assessing and mitigating systemic risks associated with AI models

Recognising and mitigating systemic risks

A core feature of the draft is its taxonomy of systemic risks, which includes types, natures, and sources of such risks. The document outlines various threats such as cyber offences, biological risks, loss of control over autonomous AI models, and large-scale disinformation. By acknowledging the continuously evolving nature of AI technology, the draft recognises that this taxonomy will need updates to remain relevant.

As AI models with systemic risks become more common, the draft emphasises the need for robust safety and security frameworks (SSFs). It proposes a hierarchy of measures, sub-measures, and key performance indicators (KPIs) to ensure appropriate risk identification, analysis, and mitigation throughout a model’s lifecycle.
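
The draft’s exact schema is not reproduced here, but the hierarchy it describes maps naturally onto a simple data model: a measure holds only if every KPI beneath it is met. A hypothetical sketch:

```python
# Hypothetical data model for a safety and security framework: a measure
# holds only if every KPI beneath it is met. Names are ours, not the draft's.
from dataclasses import dataclass, field

@dataclass
class KPI:
    name: str
    target: float
    measured: float

    def met(self) -> bool:
        return self.measured >= self.target

@dataclass
class SubMeasure:
    description: str
    kpis: list[KPI] = field(default_factory=list)

@dataclass
class Measure:
    objective: str  # e.g. "systemic risk identification"
    sub_measures: list[SubMeasure] = field(default_factory=list)

    def satisfied(self) -> bool:
        return all(kpi.met() for sm in self.sub_measures for kpi in sm.kpis)
```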

The draft suggests that providers establish processes to identify and report serious incidents associated with their AI models, offering detailed assessments and corrections as needed. It also encourages collaboration with independent experts for risk assessment, especially for models posing significant systemic risks.

Taking a proactive stance to AI regulatory guidance

The EU AI Act, which came into force on 1 August 2024, mandates that the final version of this Code be ready by 1 May 2025. This initiative underscores the EU’s proactive stance towards AI regulation, emphasising the need for AI safety, transparency, and accountability.

As the draft continues to evolve, the working groups invite stakeholders to participate actively in refining the document. Their collaborative input will shape a regulatory framework aimed at safeguarding innovation while protecting society from the potential pitfalls of AI technology.

While still in draft form, the EU’s Code of Practice for general-purpose AI models could set a benchmark for responsible AI development and deployment globally. By addressing key issues such as transparency, risk management, and copyright compliance, the Code aims to create a regulatory environment that fosters innovation, upholds fundamental rights, and ensures a high level of consumer protection.

This draft is open for written feedback until 28 November 2024. 

See also: Anthropic urges AI regulation to avoid catastrophes

Anthropic urges AI regulation to avoid catastrophes

Anthropic has flagged the potential risks of AI systems and is calling for well-structured regulation to avoid potential catastrophes. The organisation argues that targeted regulation is essential to harness AI’s benefits while mitigating its dangers.

As AI systems evolve in capabilities such as mathematics, reasoning, and coding, their potential misuse in areas like cybersecurity or even biological and chemical disciplines significantly increases.

Anthropic warns the next 18 months are critical for policymakers to act, as the window for proactive prevention is narrowing. Notably, Anthropic’s Frontier Red Team highlights how current models can already contribute to various cyber offence-related tasks and expects future models to be even more effective.

Of particular concern is the potential for AI systems to exacerbate chemical, biological, radiological, and nuclear (CBRN) misuse. The UK AI Safety Institute found that several AI models can now match PhD-level human expertise in providing responses to science-related inquiries.

In addressing these risks, Anthropic points to its Responsible Scaling Policy (RSP), released in September 2023, as a robust countermeasure. The RSP mandates that safety and security measures increase in step with the sophistication of AI capabilities.

The RSP framework is designed to be adaptive and iterative, with regular assessments of AI models allowing for timely refinement of safety protocols. Anthropic says its commitment to maintaining and enhancing safety spans various team expansions, particularly in security, interpretability, and trust, ensuring readiness for the rigorous safety standards set by its RSP.
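
The published RSP is considerably more detailed, but its core mechanism can be shown in a few lines: the safeguards required grow with a model’s assessed capability, and deployment is blocked until they are all in place. The levels and safeguard names below are placeholders, not Anthropic’s actual definitions.

```python
# The levels and safeguard names are placeholders, not Anthropic's actual
# AI Safety Level definitions; only the gating principle is illustrated.
REQUIRED_SAFEGUARDS = {
    1: {"pre-deployment red-teaming"},
    2: {"pre-deployment red-teaming", "usage monitoring"},
    3: {"pre-deployment red-teaming", "usage monitoring",
        "hardened weights security", "independent expert review"},
}

def deployment_allowed(capability_level: int, in_place: set[str]) -> bool:
    """A model may only ship once every safeguard required at its assessed
    capability level is in place."""
    return REQUIRED_SAFEGUARDS[capability_level].issubset(in_place)

assert deployment_allowed(1, {"pre-deployment red-teaming"})
assert not deployment_allowed(3, {"pre-deployment red-teaming"})
```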

Anthropic believes the widespread adoption of RSPs across the AI industry, while primarily voluntary, is essential for addressing AI risks.

Transparent, effective regulation is crucial to reassure society of AI companies’ adherence to promises of safety. Regulatory frameworks, however, must be strategic, incentivising sound safety practices without imposing unnecessary burdens.

Anthropic envisions regulations that are clear, focused, and adaptive to evolving technological landscapes, arguing that these are vital in achieving a balance between risk mitigation and fostering innovation.

In the US, Anthropic suggests that federal legislation could be the ultimate answer to AI risk regulation—though state-driven initiatives might need to step in if federal action lags. Legislative frameworks developed by countries worldwide should allow for standardisation and mutual recognition to support a global AI safety agenda, minimising the cost of regulatory adherence across different regions.

Furthermore, Anthropic addresses scepticism towards imposing regulations—highlighting that overly broad use-case-focused regulations would be inefficient for general AI systems, which have diverse applications. Instead, regulations should target fundamental properties and safety measures of AI models. 

While covering broad risks, Anthropic acknowledges that some immediate threats – like deepfakes – aren’t the focus of their current proposals since other initiatives are tackling these nearer-term issues.

Ultimately, Anthropic stresses the importance of instituting regulations that spur innovation rather than stifle it. The initial compliance burden, though inevitable, can be minimised through flexible, carefully designed safety tests. Proper regulation can even help safeguard both national interests and private sector innovation by securing intellectual property against internal and external threats.

By focusing on empirically measured risks, Anthropic envisions a regulatory landscape that neither biases against nor favours open or closed-source models. The objective remains clear: to manage the significant risks of frontier AI models with rigorous but adaptable regulation.

(Image Credit: Anthropic)

See also: President Biden issues first National Security Memorandum on AI

EU AI Act: Early prep could give businesses competitive edge

The EU AI Act is set to fully take effect in August 2026, but some provisions are coming into force even earlier.

The legislation establishes a first-of-its-kind regulatory framework for AI systems, employing a risk-based approach that categorises AI applications based on their potential impact on safety, human rights, and societal wellbeing.

“Some systems are banned entirely, while systems deemed ‘high-risk’ are subject to stricter requirements and assessments before deployment,” explains the DPO Centre, a data protection consultancy.
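
The Act defines four tiers (unacceptable, high, limited, and minimal risk), each carrying different obligations. The sketch below captures the shape of that scheme; the triage function is grossly simplified, since the real classification turns on the Act’s detailed use-case annexes.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited outright (e.g. social scoring)"
    HIGH = "conformity assessment and ongoing obligations before deployment"
    LIMITED = "transparency obligations (e.g. disclosing chatbot use)"
    MINIMAL = "no additional obligations"

# Grossly simplified triage: the Act's real rules turn on detailed use-case
# annexes, not a keyword lookup like this one.
HIGH_RISK_DOMAINS = {"medical devices", "credit scoring", "recruitment",
                     "critical infrastructure"}

def triage(domain: str) -> RiskTier:
    return RiskTier.HIGH if domain in HIGH_RISK_DOMAINS else RiskTier.MINIMAL

print(triage("credit scoring").value)
```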

Similar to GDPR, the Act’s extra-territorial reach means it applies to any organisation marketing, deploying, or using AI systems within the EU, regardless of where the system is developed. Businesses will be classified primarily as either ‘Providers’ or ‘Deployers,’ with additional categories for ‘Distributors,’ ‘Importers,’ ‘Product Manufacturers,’ and ‘Authorised Representatives.’

For organisations developing or deploying AI systems, particularly those classified as high-risk, compliance preparation promises to be complex. However, experts suggest viewing this as an opportunity rather than a burden.

“By embracing compliance as a catalyst for more transparent AI usage, businesses can turn regulatory demands into a competitive advantage,” notes the DPO Centre.

Key preparation strategies include comprehensive staff training, establishing robust corporate governance, and implementing strong cybersecurity measures. The legislation’s requirements often overlap with existing GDPR frameworks, particularly regarding transparency and accountability.

Organisations must also adhere to ethical AI principles and maintain clear documentation of their systems’ functionality, limitations, and intended use. The EU is currently developing specific codes of practice and templates to assist with compliance obligations.
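
What such documentation might record, sketched as a simple structure; the fields are illustrative rather than an official EU template, and the system described is hypothetical.

```python
# Illustrative record only; the field names are ours, not an EU template.
system_documentation = {
    "system": "invoice-fraud-screening",  # hypothetical system
    "intended_use": "flag supplier invoices for human review",
    "functionality": "gradient-boosted classifier over invoice metadata",
    "limitations": [
        "not validated on invoices from outside the EU",
        "accuracy degrades for suppliers with few prior invoices",
    ],
    "human_oversight": "all flags reviewed by the accounts-payable team",
    "last_reviewed": "2024-10-01",
}
```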

For businesses uncertain about their obligations, experts recommend seeking professional guidance early. Tools like the EU AI Act Compliance Checker can help organisations verify their systems’ alignment with regulatory requirements.

Rather than treating compliance as merely a regulatory burden, forward-thinking organisations can approach the EU AI Act as an opportunity to demonstrate their commitment to responsible AI development and to build greater trust with customers.

See also: AI governance gap: 95% of firms haven’t implemented frameworks

Tech industry giants urge EU to streamline AI regulations

Meta has spearheaded an open letter calling for urgent reform of AI regulations in the EU. The letter, which garnered support from over 50 prominent companies – including Ericsson, SAP, and Spotify – was published as an advert in the Financial Times.

The collective voice of these industry leaders highlights a pressing issue: Europe’s bureaucratic approach to AI regulation may be stifling innovation and causing the region to lag behind its global counterparts.

“Europe has become less competitive and less innovative compared to other regions and it now risks falling further behind in the AI era due to inconsistent regulatory decision making,” the letter states, painting a stark picture of the continent’s current position in the AI race.

The signatories emphasise two key areas of concern. Firstly, they point to the development of ‘open’ models, which are freely available for use, modification, and further development. These models are lauded for their potential to “multiply the benefits and spread social and economic opportunity” while simultaneously bolstering sovereignty and control.

Secondly, the letter underscores the importance of ‘multimodal’ models, which integrate text, images, and speech capabilities. The signatories argue that the leap from text-only to multimodal models is akin to “the difference between having only one sense and having all five of them”. They assert that these advanced models could significantly boost productivity, drive scientific research, and inject hundreds of billions of euros into the European economy.

However, the crux of the matter lies in the regulatory landscape. The letter expresses frustration with the uncertainty surrounding data usage for AI model training, stemming from interventions by European Data Protection Authorities. This ambiguity, they argue, could result in Large Language Models (LLMs) lacking crucial Europe-specific training data.

To address these challenges, the signatories call for “harmonised, consistent, quick and clear decisions under EU data regulations that enable European data to be used in AI training for the benefit of Europeans”. They stress the need for “decisive action” to unlock Europe’s potential for creativity, ingenuity, and entrepreneurship, which they believe is essential for the region’s prosperity and technological leadership.

While the letter acknowledges the importance of consumer protection, it also highlights the delicate balance regulators must strike to avoid hindering commercial progress. The European Commission’s approach to regulation has often been criticised for its perceived heavy-handedness, and this latest appeal from industry leaders adds weight to growing concerns about the region’s global competitiveness in the AI sector.

The pressure is rapidly mounting on European policymakers to create a regulatory environment that fosters innovation while maintaining appropriate safeguards. The coming months will likely see intensified dialogue between industry stakeholders and regulators as they grapple with these complex issues that will shape the future of AI development in Europe.

(Photo by Sara Kurfeß)

See also: SolarWinds: IT professionals want stronger AI regulation
