legal Archives - AI News

OpenAI counter-sues Elon Musk for attempts to ‘take down’ AI rival (10 April 2025)

OpenAI has launched a legal counteroffensive against one of its co-founders, Elon Musk, and his competing AI venture, xAI.

In court documents filed yesterday, OpenAI accuses Musk of orchestrating a “relentless” and “malicious” campaign designed to “take down OpenAI” after he left the organisation years ago.

The court filing, submitted to the US District Court for the Northern District of California, alleges Musk could not tolerate OpenAI’s success after he had “abandoned and declared [it] doomed.”

OpenAI is now seeking legal remedies, including an injunction to stop Musk’s alleged “unlawful and unfair action” and compensation for damages already caused.   

Origin story of OpenAI and the departure of Elon Musk

The legal documents recount OpenAI’s origins in 2015, stemming from an idea discussed by current CEO Sam Altman and President Greg Brockman to create an AI lab focused on developing artificial general intelligence (AGI) – AI capable of outperforming humans – for the “benefit of all humanity.”

Musk was involved in the launch, serving on the initial non-profit board and pledging $1 billion in donations.   

However, the relationship fractured. OpenAI claims that between 2017 and 2018, Musk’s demands for “absolute control” of the enterprise – or its potential absorption into Tesla – were rebuffed by Altman, Brockman, and then-Chief Scientist Ilya Sutskever. The filing quotes Sutskever warning Musk against creating an “AGI dictatorship.”

Following this disagreement, OpenAI alleges Elon Musk quit in February 2018, declaring the venture would fail without him and that he would pursue AGI development at Tesla instead. Critically, OpenAI contends the pledged $1 billion “was never satisfied—not even close”.   

Restructuring, success, and Musk’s alleged ‘malicious’ campaign

Facing escalating costs for computing power and talent retention, OpenAI restructured and created a “capped-profit” entity in 2019 to attract investment while remaining controlled by the non-profit board and bound by its mission. This structure, OpenAI states, was announced publicly; Musk was offered equity in the new entity but declined and raised no objection at the time.

OpenAI highlights that its subsequent breakthroughs – including GPT-3, ChatGPT, and GPT-4 – achieved massive public adoption and critical acclaim. These successes, OpenAI emphasises, came after the departure of Elon Musk and allegedly spurred his antagonism.

The filing details a chronology of alleged actions by Elon Musk aimed at harming OpenAI:   

  • Founding xAI: Musk “quietly created” his competitor, xAI, in March 2023.   
  • Moratorium call: Days later, Musk supported a call for a development moratorium on AI more advanced than GPT-4, a move OpenAI claims was intended “to stall OpenAI while all others, most notably Musk, caught up”.   
  • Records demand: Musk allegedly made a “pretextual demand” for confidential OpenAI documents, feigning concern while secretly building xAI.   
  • Public attacks: Using his social media platform X (formerly Twitter), Musk allegedly broadcast “press attacks” and “malicious campaigns” to his vast following, labelling OpenAI a “lie,” “evil,” and a “total scam”.   
  • Legal actions: Musk filed lawsuits, first in state court (later withdrawn) and then the current federal action, based on what OpenAI dismisses as meritless claims of a “Founding Agreement” breach.   
  • Regulatory pressure: Musk allegedly urged state Attorneys General to investigate OpenAI and force an asset auction.   
  • “Sham bid”: In February 2025, a Musk-led consortium made a purported $97.375 billion offer for OpenAI, Inc.’s assets. OpenAI derides this as a “sham bid” and a “stunt” lacking evidence of financing and designed purely to disrupt OpenAI’s operations, potential restructuring, fundraising, and relationships with investors and employees, particularly as OpenAI considers evolving its capped-profit arm into a Public Benefit Corporation (PBC). One investor involved allegedly admitted the bid’s aim was to gain “discovery”.   

Based on these allegations, OpenAI asserts two primary counterclaims against both Elon Musk and xAI:

  • Unfair competition: Alleging the “sham bid” constitutes an unfair and fraudulent business practice under California law, intended to disrupt OpenAI and gain an unfair advantage for xAI.   
  • Tortious interference with prospective economic advantage: Claiming the sham bid intentionally disrupted OpenAI’s existing and potential relationships with investors, employees, and customers. 

OpenAI argues Musk’s actions have forced it to divert resources and expend funds, causing harm. The company claims his campaign threatens “irreparable harm” to its mission, governance, and crucial business relationships. The filing also touches upon concerns regarding xAI’s own safety record, citing reports of its Grok chatbot generating harmful content and misinformation.

The counterclaims mark a dramatic escalation in the legal battle between the AI pioneer and its departed co-founder. While Elon Musk initially sued OpenAI alleging a betrayal of its founding non-profit, open-source principles, OpenAI now contends Musk’s actions are a self-serving attempt to undermine a competitor he couldn’t control.

With billions at stake and the future direction of AGI in the balance, this dispute is far from over.

See also: Deep Cogito open LLMs use IDA to outperform same size models

Tony Blair Institute AI copyright report sparks backlash (2 April 2025)

The Tony Blair Institute (TBI) has released a report calling for the UK to lead in navigating the complex intersection of arts and AI.

According to the report, titled ‘Rebooting Copyright: How the UK Can Be a Global Leader in the Arts and AI,’ the global race for cultural and technological leadership is still up for grabs, and the UK has a golden opportunity to take the lead.

The report emphasises that countries that “embrace change and harness the power of artificial intelligence in creative ways will set the technical, aesthetic, and regulatory standards for others to follow.”

Highlighting that we are in the midst of another revolution in media and communication, the report notes that AI is disrupting how textual, visual, and audio content is created, distributed, and experienced, much like the printing press, gramophone, and camera did before it.

“AI will usher in a new era of interactive and bespoke works, as well as a counter-revolution that celebrates everything that AI can never be,” the report states.

However, far from signalling the end of human creativity, the TBI suggests AI will open up “new ways of being original.”

The AI revolution’s impact isn’t limited to the creative industries; it’s being felt across all areas of society. Scientists are using AI to accelerate discoveries, healthcare providers are employing it to analyse X-ray images, and emergency services utilise it to locate houses damaged by earthquakes.

The report stresses that these cross-industry advancements are just the beginning, with future AI systems set to become increasingly capable, fuelled by advancements in computing power, data, model architectures, and access to talent.

The UK government has expressed its ambition to be a global leader in AI through its AI Opportunities Action Plan, announced by Prime Minister Keir Starmer on 13 January 2025. For its part, the TBI welcomes the UK government’s ambition, stating that “if properly designed and deployed, AI can make human lives healthier, safer, and more prosperous.”

However, the rapid spread of AI across sectors raises urgent policy questions, particularly concerning the data used for AI training. The application of UK copyright law to the training of AI models is currently contested, with the debate often framed as a “zero-sum game” between AI developers and rights holders. The TBI argues that this framing “misrepresents the nature of the challenge and the opportunity before us.”

The report emphasises that “bold policy solutions are needed to provide all parties with legal clarity and unlock investments that spur innovation, job creation, and economic growth.”

According to the TBI, AI presents opportunities for creators, with the report noting its use in fields ranging from podcasts to filmmaking. The report draws parallels with past technological innovations – such as the printing press and the internet – which were initially met with resistance but ultimately saw society adapt and human ingenuity prevail.

The TBI proposes that the solution lies not in clinging to outdated copyright laws but in allowing them to “co-evolve with technological change” to remain effective in the age of AI.

The UK government has proposed a text and data mining exception with an opt-out option for rights holders. While the TBI views this as a good starting point for balancing stakeholder interests, it acknowledges the “significant implementation and enforcement challenges” that come with it, spanning legal, technical, and geopolitical dimensions.

In the report, the Tony Blair Institute for Global Change “assesses the merits of the UK government’s proposal and outlines a holistic policy framework to make it work in practice.”

The report includes recommendations and examines novel forms of art that will emerge from AI. It also delves into the disagreement between rights holders and developers on copyright, the wider implications of copyright policy, and the serious hurdles the UK’s text and data mining proposal faces.

Furthermore, the Tony Blair Institute explores the challenges of governing an opt-out policy, implementation problems with opt-outs, making opt-outs useful and accessible, and tackling the diffusion problem. AI summaries and the problems they present regarding identity are also addressed, along with defensive tools as a partial solution and solving licensing problems.

The report also seeks to clarify the standards on human creativity, address digital watermarking, and discuss the uncertainty around the impact of generative AI on the industry. It proposes establishing a Centre for AI and the Creative Industries and discusses the risk of judicial review, the benefits of a remuneration scheme, and the advantages of a targeted levy on ISPs to raise funding for the Centre.

However, the report has faced strong criticism. Ed Newton-Rex, CEO of Fairly Trained, raised several concerns on Bluesky. These concerns include:

  • The report repeats the “misleading claim” that existing UK copyright law is uncertain, which Newton-Rex asserts is not the case.
  • The suggestion that an opt-out scheme would give rights holders more control over how their works are used is misleading. Newton-Rex argues that licensing is currently required by law, so moving to an opt-out system would actually decrease control, as some rights holders will inevitably miss the opt-out.
  • The report likens machine learning (ML) training to human learning, a comparison that Newton-Rex finds shocking, given the vastly different scalability of the two.
  • The report’s claim that AI developers won’t make long-term profits from training on people’s work is questioned, with Newton-Rex pointing to the significant funding raised by companies like OpenAI.
  • Newton-Rex suggests the report uses strawman arguments, such as stating that generative AI may not replace all human paid activities.
  • A key criticism is that the report omits data showing how generative AI replaces demand for human creative labour.
  • Newton-Rex also criticises the report’s proposed solutions, specifically the suggestion to set up an academic centre, which he notes “no one has asked for.”
  • Furthermore, he highlights the proposal to tax every household in the UK to fund this academic centre, arguing that this would place the financial burden on consumers rather than the AI companies themselves, and the revenue wouldn’t even go to creators.

Adding to these criticisms, British novelist and author Jonathan Coe noted that “the five co-authors of this report on copyright, AI, and the arts are all from the science and technology sectors. Not one artist or creator among them.”

While the report from Tony Blair Institute for Global Change supports the government’s ambition to be an AI leader, it also raises critical policy questions—particularly around copyright law and AI training data.

(Photo by Jez Timms)

See also: Amazon Nova Act: A step towards smarter, web-native AI agents

Eric Schmidt: AI misuse poses an ‘extreme risk’ (13 February 2025)

Eric Schmidt, former CEO of Google, has warned that AI misuse poses an “extreme risk” and could do catastrophic harm.

Speaking to BBC Radio 4’s Today programme, Schmidt cautioned that AI could be weaponised by extremists and “rogue states” such as North Korea, Iran, and Russia to “harm innocent people.”

Schmidt expressed concern that rapid AI advancements could be exploited to create weapons, including biological attacks. Highlighting the dangers, he said: “The real fears that I have are not the ones that most people talk about AI, I talk about extreme risk.”

Using a chilling analogy, Schmidt referenced the al-Qaeda leader responsible for the 9/11 attacks: “I’m always worried about the Osama bin Laden scenario, where you have some truly evil person who takes over some aspect of our modern life and uses it to harm innocent people.”

He emphasised the pace of AI development and its potential to be co-opted by nations or groups with malevolent intent.

“Think about North Korea, or Iran, or even Russia, who have some evil goal … they could misuse it and do real harm,” Schmidt warned.

Oversight without stifling innovation

Schmidt urged governments to closely monitor private tech companies pioneering AI research. He noted that while tech leaders are generally aware of AI’s societal implications, they may make decisions based on values that differ from those of public officials.

“My experience with the tech leaders is that they do have an understanding of the impact they’re having, but they might make a different values judgement than the government would make.”

Schmidt also endorsed the export controls introduced under former US President Joe Biden last year to restrict the sale of advanced microchips. The measure is aimed at slowing the progress of geopolitical adversaries in AI research.  

Global divisions around preventing AI misuse

The tech veteran was in Paris when he made his remarks, attending the AI Action Summit, a two-day event that wrapped up on Tuesday.

The summit, attended by 57 countries, saw the announcement of an agreement on “inclusive” AI development. Signatories included major players like China, India, the EU, and the African Union.  

However, the UK and the US declined to sign the communique. The UK government said the agreement lacked “practical clarity” and failed to address critical “harder questions” surrounding national security. 

Schmidt cautioned against excessive regulation that might hinder progress in this transformative field. This was echoed by US Vice-President JD Vance who warned that heavy-handed regulation “would kill a transformative industry just as it’s taking off”.  

This reluctance to endorse sweeping international accords reflects diverging approaches to AI governance. The EU has championed a more restrictive framework for AI, prioritising consumer protections, while countries like the US and UK are opting for more agile and innovation-driven strategies. 

Schmidt pointed to the consequences of Europe’s tight regulatory stance, predicting that the region would miss out on pioneering roles in AI.

“The AI revolution, which is the most important revolution in my opinion since electricity, is not going to be invented in Europe,” he remarked.

Prioritising national and global safety

Schmidt’s comments come against a backdrop of increasing scrutiny over AI’s dual-use potential—its ability to be used for both beneficial and harmful purposes.

From deepfakes to autonomous weapons, AI poses a bevy of risks if left without measures to guard against misuse. Leaders and experts, including Schmidt, are advocating for a balanced approach that fosters innovation while addressing these dangers head-on.

While international cooperation remains a complex and contentious issue, the overarching consensus is clear: without safeguards, AI’s evolution could have unintended – and potentially catastrophic – consequences.

(Photo by Guillaume Paumier under CC BY 3.0 license. Cropped to landscape from original version.)

See also: NEPC: AI sprint risks environmental catastrophe

DeepSeek ban? China data transfer boosts security concerns (7 February 2025)

US lawmakers are pushing for a DeepSeek ban after security researchers found the app transferring user data to a banned state-owned company.

DeepSeek, practically unknown just weeks ago, took the tech world by storm—gaining global acclaim for its cutting-edge performance while sparking debates reminiscent of the TikTok saga.

Its rise has been fuelled in part by its business model: unlike many of its American counterparts, including OpenAI and Google, DeepSeek offered its advanced capabilities for free.

However, concerns have been raised about DeepSeek’s extensive data collection practices, and Microsoft and OpenAI have launched a probe into a breach of the latter’s system by a group allegedly linked to the Chinese AI startup.

A threat to US AI dominance

DeepSeek’s astonishing capabilities have, within a matter of weeks, positioned it as a major competitor to American AI stalwarts like OpenAI’s ChatGPT and Google Gemini. But, alongside the app’s prowess, concerns have emerged over alleged ties to the Chinese Communist Party (CCP).  

According to security researchers, hidden code within DeepSeek’s AI has been found transmitting user data to China Mobile—a state-owned telecoms company banned in the US. DeepSeek’s own privacy policy permits the collection of data such as IP addresses, device information, and, most alarmingly, even keystroke patterns.

Such findings have led to bipartisan efforts in the US Congress to curtail DeepSeek’s influence, with lawmakers scrambling to protect sensitive data from potential CCP oversight.

Reps. Darin LaHood (R-IL) and Josh Gottheimer (D-NJ) are spearheading efforts to introduce legislation that would prohibit DeepSeek from being installed on all government-issued devices. 

Several federal agencies, among them NASA and the US Navy, have already preemptively issued a ban on DeepSeek. Similarly, the state of Texas has also introduced restrictions.

Potential ban of DeepSeek a TikTok redux?

The controversy surrounding DeepSeek bears similarities to debates over TikTok, the social video app owned by Chinese company ByteDance. TikTok remains under fire over accusations that user data is accessible to the CCP, though definitive proof has yet to materialise.

In contrast, DeepSeek’s case involves clear evidence, as revealed by cybersecurity investigators who identified the app’s unauthorised data transmissions. While some might say DeepSeek echoes the TikTok controversy, security experts argue that it represents a much starker and better-documented threat.

Lawmakers around the world are taking note. In addition to the US proposals, DeepSeek has already faced bans from government systems in countries including Australia, South Korea, and Italy.  

AI becomes a geopolitical battleground

The concerns over DeepSeek exemplify how AI has now become a geopolitical flashpoint between global superpowers—especially between the US and China.

American AI firms like OpenAI have enjoyed a dominant position in recent years, but Chinese companies have poured resources into catching up and, in some cases, surpassing their US competitors.  

DeepSeek’s lightning-quick growth has unsettled that balance, not only because of its AI models but also due to its pricing strategy, which undercuts competitors by offering the app free of charge. That raises the question of whether it’s truly “free” or whether the cost is paid in lost privacy and security.

China Mobile’s involvement raises further eyebrows, given the state-owned telecom company’s prior sanctions and prohibition from the US market. Critics worry that data collected through platforms like DeepSeek could fill gaps in Chinese surveillance activities or even enable economic manipulation.

A nationwide DeepSeek ban is on the cards

If the proposed US legislation is passed, it could represent the first step toward nationwide restrictions or an outright ban on DeepSeek. Geopolitical tension between China and the West continues to shape policies in advanced technologies, and AI appears to be the latest arena for this ongoing chess match.  

In the meantime, calls to regulate applications like DeepSeek are likely to grow louder. Conversations about data privacy, national security, and ethical boundaries in AI development are becoming ever more urgent as individuals and organisations across the globe navigate the promises and pitfalls of next-generation tools.  

DeepSeek’s rise may have, indeed, rattled the AI hierarchy, but whether it can maintain its momentum in the face of increasing global pushback remains to be seen.

(Photo by Solen Feyissa)

See also: AVAXAI brings DeepSeek to Web3 with decentralised AI agents

NEPC: AI sprint risks environmental catastrophe (7 February 2025)

The government is urged to mandate stricter reporting for data centres to mitigate environmental risks associated with the AI sprint.

A report published today by the National Engineering Policy Centre (NEPC) highlights the urgent need for data centres to adopt greener practices, particularly as the government’s AI Opportunities Action Plan gains traction.

The report, Engineering Responsible AI: Foundations for Environmentally Sustainable AI, was developed in collaboration with the Royal Academy of Engineering, the Institution of Engineering and Technology, and BCS, the Chartered Institute of IT.

While stressing that data centres enabling AI systems can be built to consume fewer resources like energy and water, the report highlights that infrastructure and regulatory conditions must align for these efficiencies to materialise.

Unlocking the potential of AI while minimising environmental risks  

AI is heralded as capable of driving economic growth, creating jobs, and improving livelihoods. Launched as a central pillar of the UK’s tech strategy, the AI Opportunities Action Plan is intended to “boost economic growth, provide jobs for the future and improve people’s everyday lives.”  

Use cases for AI that are already generating public benefits include accelerating drug discovery, forecasting weather events, optimising energy systems, and even aiding climate science and improving sustainability efforts. However, this growing reliance on AI also poses environmental risks from the infrastructure required to power these systems.  

Data centres, which serve as the foundation of AI technologies, consume vast amounts of energy and water. Increasing demand has raised concerns about global competition for limited resources, such as sustainable energy and drinking water. Google and Microsoft, for instance, have recorded rising water usage by their data centres each year since 2020. Much of this water comes from drinking sources, sparking fears about resource depletion.  

With plans already in place to reform the UK’s planning system to facilitate the construction of data centres, the report calls for urgent policies to manage their environmental impact. Accurate and transparent data on resource consumption is currently lacking, which hampers policymakers’ ability to assess the true scale of these impacts and act accordingly.

Five steps to sustainable AI  

The NEPC is urging the government to spearhead change by prioritising sustainable AI development. The report outlines five key steps policymakers can act upon immediately to position the UK as a leader in resource-efficient AI:  

  1. Expand environmental reporting mandates
  2. Communicate the sector’s environmental impacts
  3. Set sustainability requirements for data centres
  4. Reconsider data collection, storage, and management practices
  5. Lead by example with government investment

Mandatory environmental reporting forms a cornerstone of the recommendations. This involves measuring data centres’ energy sources, water consumption, carbon emissions, and e-waste recycling practices to provide the resource use data necessary for policymaking.  

Raising public awareness is also vital. Communicating the environmental costs of AI can encourage developers to optimise AI tools, use smaller datasets, and adopt more efficient approaches. Notably, the report recommends embedding environmental design and sustainability topics into computer science and AI education at both school and university levels.  

Smarter, greener data centres  

One of the most urgent calls to action involves redesigning data centres to reduce their environmental footprint. The report advocates for innovations like waste heat recovery systems, zero drinking water use for cooling, and the exclusive use of 100% carbon-free energy certificates.  

Efforts like those at Queen Mary University of London, where residual heat from a campus data centre is repurposed to provide heating and hot water, offer a glimpse into the possibilities of greener tech infrastructure.  

In addition, the report suggests revising legislation on mandatory data retention to reduce the unnecessary environmental costs of storing vast amounts of data long-term. Proposals for a National Data Library could drive best practices by centralising and streamlining data storage.  

Professor Tom Rodden, Pro-Vice-Chancellor at the University of Nottingham and Chair of the working group behind the report, urged swift action:  

“In recent years, advances in AI systems and services have largely been driven by a race for size and scale, demanding increasing amounts of computational power. As a result, AI systems and services are growing at a rate unparalleled by other high-energy systems—generally without much regard for resource efficiency.  

“This is a dangerous trend, and we face a real risk that our development, deployment, and use of AI could do irreparable damage to the environment.”  

Rodden added that reliable data on these impacts is critical. “To build systems and services that effectively use resources, we first need to effectively monitor their environmental cost. Once we have access to trustworthy data… we can begin to effectively target efficiency in development, deployment, and use – and plan a sustainable AI future for the UK.”

Dame Dawn Childs, CEO of Pure Data Centres Group, underscored the role of engineering in improving efficiency. “Some of this will come from improvements to AI models and hardware, making them less energy-intensive. But we must also ensure that the data centres housing AI’s computing power and storage are as sustainable as possible.  

“That means prioritising renewable energy, minimising water use, and reducing carbon emissions – both directly and indirectly. Using low-carbon building materials is also essential.”  

Childs emphasised the importance of a coordinated approach from the start of projects. “As the UK government accelerates AI adoption – through AI Growth Zones and streamlined planning for data centres – sustainability must be a priority at every step.”  

For Alex Bardell, Chair of BCS’ Green IT Specialist Group, the focus is on optimising AI processes. “Our report has discussed optimising models for efficiency. Previous attempts to limit the drive toward increased computational power and larger models have faced significant resistance, with concerns that the UK may fall behind in the AI arena; this may not necessarily be true.  

“It is crucial to reevaluate our approach to developing sustainable AI in the future.”  

Time for transparency around AI environmental risks

Public awareness of AI’s environmental toll remains low. Recent research by the Institution of Engineering and Technology (IET) found that fewer than one in six UK residents are aware of the significant environmental costs associated with AI systems.  

“AI providers must be transparent about these effects,” said Professor Sarvapali Ramchurn, CEO of Responsible AI UK and a Fellow of the IET. “If we cannot measure it, we cannot manage it, nor ensure benefits for all. This report’s recommendations will aid national discussions on the sustainability of AI systems and the trade-offs involved.”  

As the UK pushes forward with ambitious plans to lead in AI development, ensuring environmental sustainability must take centre stage. By adopting policies and practices outlined in the NEPC report, the government can support AI growth while safeguarding finite resources for future generations.

(Photo by Braden Collum)

See also: Sustainability is key in 2025 for businesses to advance AI efforts

EU AI Act: What businesses need to know as regulations go live (31 January 2025)

Next week marks the beginning of a new era for AI regulations as the first obligations of the EU AI Act take effect.

While the full compliance requirements won’t come into force until mid-2025, the initial phase of the EU AI Act begins on 2 February and includes significant prohibitions on specific AI applications. Businesses across the globe that operate in the EU must now navigate a regulatory landscape with strict rules and high stakes.

The new regulations prohibit the deployment or use of several high-risk AI systems. These include applications such as social scoring, emotion recognition, real-time remote biometric identification in public spaces, and other scenarios deemed unacceptable under the Act.

Companies found in violation of the rules could face penalties of up to 7% of their global annual turnover, making it imperative for organisations to understand and comply with the restrictions.  

Early compliance challenges  

“It’s finally here,” says Levent Ergin, Chief Strategist for Climate, Sustainability, and AI at Informatica. “While we’re still in a phased approach, businesses’ hard-earned preparations for the EU AI Act will now face the ultimate test.”

Ergin highlights that even though most compliance requirements will not take effect until mid-2025, the early prohibitions set a decisive tone.

“For businesses, the pressure in 2025 is twofold. They must demonstrate tangible ROI from AI investments while navigating challenges around data quality and regulatory uncertainty. It’s already the perfect storm, with 89% of large businesses in the EU reporting conflicting expectations for their generative AI initiatives. At the same time, 48% say technology limitations are a major barrier to moving AI pilots into production,” he remarks.

Ergin believes the key to compliance and success lies in data governance.

“Without robust data foundations, organisations risk stagnation, limiting their ability to unlock AI’s full potential. After all, isn’t ensuring strong data governance a core principle that the EU AI Act is built upon?”

To adapt, companies must prioritise strengthening their approach to data quality.

“Strengthening data quality and governance is no longer optional, it’s critical. To ensure both compliance and prove the value of AI, businesses must invest in making sure data is accurate, holistic, integrated, up-to-date and well-governed,” says Ergin.

“This isn’t just about meeting regulatory demands; it’s about enabling AI to deliver real business outcomes. As 82% of EU companies plan to increase their GenAI investments in 2025, ensuring their data is AI-ready will be the difference between those who succeed and those who remain in the starting blocks.”

EU AI Act has no borders

The extraterritorial scope of the EU AI Act means non-EU organisations are assuredly not off the hook. As Marcus Evans, a partner at Norton Rose Fulbright, explains, the Act applies far beyond the EU’s borders.

“The AI Act will have a truly global application,” says Evans. “That’s because it applies not only to organisations in the EU using AI or those providing, importing, or distributing AI to the EU market, but also AI provision and use where the output is used in the EU. So, for instance, a company using AI for recruitment in the EU – even if it is based elsewhere – would still be captured by these new rules.”  

Evans advises businesses to start by auditing their AI use. “At this stage, businesses must first understand where AI is being used in their organisation so that they can then assess whether any use cases may trigger the prohibitions. Building on that initial inventory, a wider governance process can then be introduced to ensure AI use is assessed, remains outside the prohibitions, and complies with the AI Act.”  

While organisations work to align their AI practices with the new regulations, additional challenges remain. Compliance requires addressing other legal complexities such as data protection, intellectual property (IP), and discrimination risks.  

Evans emphasises that raising AI literacy within organisations is also a critical step.

“Any organisations in scope must also take measures to ensure their staff – and anyone else dealing with the operation and use of their AI systems on their behalf – have a sufficient level of AI literacy,” he states.

“AI literacy will play a vital role in AI Act compliance, as those involved in governing and using AI must understand the risks they are managing.”

Encouraging responsible innovation  

The EU AI Act is being hailed as a milestone for responsible AI development. By prohibiting harmful practices and requiring transparency and accountability, the regulation seeks to balance innovation with ethical considerations.

“This framework is a pivotal step towards building a more responsible and sustainable future for artificial intelligence,” says Beatriz Sanz Sáiz, AI Sector Leader at EY Global.

Sanz Sáiz believes the legislation fosters trust while providing a foundation for transformative technological progress.

“It has the potential to foster further trust, accountability, and innovation in AI development, as well as strengthen the foundations upon which the technology continues to be built,” Sanz Sáiz asserts.

“It is critical that we focus on eliminating bias and prioritising fundamental rights like fairness, equity, and privacy. Responsible AI development is a crucial step in the quest to further accelerate innovation.”

What’s prohibited under the EU AI Act?  

To ensure compliance, businesses need to be crystal-clear on which activities fall under the EU AI Act’s strict prohibitions. The current list of prohibited activities includes:  

  • Harmful subliminal, manipulative, and deceptive techniques  
  • Harmful exploitation of vulnerabilities  
  • Unacceptable social scoring  
  • Individual crime risk assessment and prediction (with some exceptions)  
  • Untargeted scraping of internet or CCTV material to develop or expand facial recognition databases  
  • Emotion recognition in areas such as the workplace and education (with some exceptions)  
  • Biometric categorisation to infer sensitive categories (with some exceptions)  
  • Real-time remote biometric identification (RBI) in publicly accessible spaces for law enforcement purposes (with some exceptions)  

The Commission’s forthcoming guidance on which “AI systems” fall under these categories will be critical for businesses seeking to ensure compliance and reduce legal risks. Additionally, companies should anticipate further clarification and resources at the national and EU levels, such as the upcoming webinar hosted by the AI Office.

A new landscape for AI regulations

The early implementation of the EU AI Act represents just the beginning of what is a remarkably complex and ambitious regulatory endeavour. As AI continues to play an increasingly pivotal role in business strategy, organisations must learn to navigate new rules and continuously adapt to future changes.  

For now, businesses should focus on understanding the scope of their AI use, enhancing data governance, educating staff to build AI literacy, and adopting a proactive approach to compliance. By doing so, they can position themselves as leaders in a fast-evolving AI landscape and unlock the technology’s full potential while upholding ethical and legal standards.

(Photo by Guillaume Périgois)

See also: ChatGPT Gov aims to modernise US government agencies

Meta accused of using pirated data for AI development (10 January 2025)

Plaintiffs in the case of Kadrey et al. vs. Meta have filed a motion alleging the firm knowingly used copyrighted works in the development of its AI models.

The plaintiffs, which include author Richard Kadrey, filed their “Reply in Support of Plaintiffs’ Motion for Leave to File Third Amended Consolidated Complaint” in the United States District Court for the Northern District of California.

The filing accuses Meta of systematically torrenting and stripping copyright management information (CMI) from pirated datasets, including works from the notorious shadow library LibGen.

According to documents recently submitted to the court, evidence reveals highly incriminating practices involving Meta’s senior leaders. Plaintiffs allege that Meta CEO Mark Zuckerberg gave explicit approval for the use of the LibGen dataset, despite internal concerns raised by the company’s AI executives.

A December 2024 memo from internal Meta discussions acknowledged LibGen as “a dataset we know to be pirated,” with debates arising about the ethical and legal ramifications of using such materials. Documents also revealed that top engineers hesitated to torrent the datasets, citing concerns about using corporate laptops for potentially unlawful activities.

Additionally, internal communications suggest that after acquiring the LibGen dataset, Meta stripped CMI from the copyrighted works contained within—a practice that plaintiffs highlight as central to claims of copyright infringement.

According to the deposition of Michael Clark – a corporate representative for Meta – the company implemented scripts designed to remove any information identifying these works as copyrighted, including keywords like “copyright,” “acknowledgements,” or lines commonly used in such texts. Clark attested that this practice was done intentionally to prepare the dataset for training Meta’s Llama AI models.  

“Doesn’t feel right”

The allegations against Meta paint a portrait of a company knowingly partaking in a widespread piracy scheme facilitated through torrenting.

According to a string of emails included as exhibits, Meta engineers expressed concerns about the optics of torrenting pirated datasets from within corporate spaces. One engineer noted that “torrenting from a [Meta-owned] corporate laptop doesn’t feel right,” but despite hesitation, the rapid downloading and distribution – or “seeding” – of pirated data took place.

Legal counsel for the plaintiffs has stated that as late as January 2024, Meta had “already torrented (both downloaded and distributed) data from LibGen.” Moreover, records show that hundreds of related documents were initially obtained by Meta months prior but were withheld during early discovery processes. Plaintiffs argue this delayed disclosure amounts to bad-faith attempts by Meta to obstruct access to vital evidence.

During a deposition on 17 December 2024, Zuckerberg himself reportedly admitted that such activities would raise “lots of red flags” and stated it “seems like a bad thing,” though he provided limited direct responses regarding Meta’s broader AI training practices.

This case originally began as an intellectual property infringement action on behalf of authors and publishers claiming violations relating to AI use of their materials. However, the plaintiffs are now seeking to add two major claims to their suit: a violation of the Digital Millennium Copyright Act (DMCA) and a breach of the California Comprehensive Data Access and Fraud Act (CDAFA).  

Under the DMCA, the plaintiffs assert that Meta knowingly removed copyright protections to conceal unauthorised uses of copyrighted texts in its Llama models.

As cited in the complaint, Meta allegedly stripped CMI “to reduce the chance that the models will memorise this data,” and this removal of rights management indicators made discovering the infringement more difficult for copyright holders.

The CDAFA allegations involve Meta’s methods for obtaining the LibGen dataset, including allegedly engaging in torrenting to acquire copyrighted datasets without permission. Internal documentation shows Meta engineers openly discussed concerns that seeding and torrenting might prove to be “legally not ok.” 

Meta case may impact emerging legislation around AI development

At the heart of this expanding legal battle lies growing concern over the intersection of copyright law and AI.

Plaintiffs argue the stripping of copyright protections from textual datasets denies rightful compensation to copyright owners and allows Meta to build AI systems like Llama on the financial ruins of authors’ and publishers’ creative efforts.

These allegations come amid heightened global scrutiny of “generative AI” technologies. Companies like OpenAI, Google, and Meta have all come under fire regarding the use of copyrighted data to train their models. Courts across jurisdictions are currently grappling with the long-term impact of AI on rights management, with potentially landmark cases being decided in both the US and the UK.

In this particular case, US courts have shown increasing willingness to hear complaints about AI’s potential harm to long-established copyright law precedents. Plaintiffs, in their motion, referred to The Intercept Media v. OpenAI, a recent decision from New York in which a similar DMCA claim was allowed to proceed.

Meta continues to deny all allegations in the case and has yet to publicly respond to Zuckerberg’s reported deposition statements.

Whether or not plaintiffs succeed in these amendments, authors across the world face growing anxieties about how their creative works are handled within the context of AI. With copyright law struggling to keep pace with technological advances, this case underscores the need for clearer guidance at an international level to protect both creators and innovators.

For Meta, these claims also represent a reputational risk. As AI becomes the central focus of its future strategy, the allegations of reliance on pirated libraries are unlikely to help its ambitions of maintaining leadership in the field.  

The unfolding case of Kadrey et al. vs. Meta could have far-reaching ramifications for the development of AI models moving forward, potentially setting legal precedents in the US and beyond.

(Photo by Amy Syiek)

See also: UK wants to prove AI can modernise public services responsibly

AI governance: Analysing emerging global regulations (19 December 2024)

Governments are scrambling to establish regulations to govern AI, citing numerous concerns over data privacy, bias, safety, and more.

AI News caught up with Nerijus Šveistys, Senior Legal Counsel at Oxylabs, to understand the state of play when it comes to AI regulation and its potential implications for industries, businesses, and innovation.

“The boom of the last few years appears to have sparked a push to establish regulatory frameworks for AI governance,” explains Šveistys.

“This is a natural development, as the rise of AI seems to pose issues in data privacy and protection, bias and discrimination, safety, intellectual property, and other legal areas, as well as ethics that need to be addressed.”

Regions diverge in regulatory strategy

The European Union’s AI Act has, unsurprisingly, positioned the region with a strict, centralised approach. The regulation, which came into force this year, is set to be fully effective by 2026.

Šveistys pointed out that the EU has acted relatively swiftly compared to other jurisdictions: “The main difference we can see is the comparative quickness with which the EU has released a uniform regulation to govern the use of all types of AI.”

Meanwhile, other regions have opted for more piecemeal approaches. China, for instance, has been implementing regulations specific to certain AI technologies in a phased manner. According to Šveistys, China began regulating AI models as early as 2021.

“In 2021, they introduced regulation on recommendation algorithms, which [had] increased their capabilities in digital advertising. It was followed by regulations on deep synthesis models or, in common terms, deepfakes and content generation in 2022,” he said.

“Then, in 2023, regulation on generative AI models was introduced as these models were making a splash in commercial usage.”

The US, in contrast, remains relatively uncoordinated in its approach. Federal-level regulations are yet to be enacted, with efforts mostly emerging at the state level.

“There are proposed regulations at the state level, such as the so-called California AI Act, but even if they come into power, it may still take some time before they do,” Šveistys noted.

This delay in implementing unified AI regulations in the US has raised questions about the extent to which business pushback may be contributing to the slow rollout. Šveistys said that while lobbyist pressure is a known factor, it’s not the only potential reason.

“There was pushback to the EU AI Act, too, which was nevertheless introduced. Thus, it is not clear whether the delay in the US is only due to lobbyism or other obstacles in the legislation enactment process,” explains Šveistys.

“It might also be because some still see AI as a futuristic concern, not fully appreciating the extent to which it is already a legal issue of today.”

Balancing innovation and safety

Differentiated regulatory approaches could affect the pace of innovation and business competitiveness across regions.

Europe’s regulatory framework, though more stringent, aims to ensure consumer protection and ethical adherence—something that less-regulated environments may lack.

“More rigid regulatory frameworks may impose compliance costs for businesses in the AI field and stifle competitiveness and innovation. On the other hand, they bring the benefits of protecting consumers and adhering to certain ethical norms,” comments Šveistys.

This trade-off is especially pronounced in AI-related sectors such as targeted advertising, where algorithmic bias is increasingly scrutinised.

AI governance often extends beyond laws that specifically target AI, incorporating related legal areas like those governing data collection and privacy. For example, the EU AI Act also regulates the use of AI in physical devices, such as elevators.

“Additionally, all businesses that collect data for advertisement are potentially affected as AI regulation can also cover algorithmic bias in targeted advertising,” emphasises Šveistys.

Impact on related industries

One industry that is deeply intertwined with AI developments is web scraping. Typically used for collecting publicly available data, web scraping is undergoing an AI-driven evolution.

“From data collection, validation, analysis, or overcoming anti-scraping measures, there is a lot of potential for AI to massively improve the efficiency, accuracy, and adaptability of web scraping operations,” said Šveistys. 
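
To make that concrete, the sketch below shows one way an AI check might be slotted into a scraping pipeline for data validation: cheap rule-based checks run first, and only ambiguous records are routed to a model or a human reviewer. The CSS selectors, field names, and the llm_validate helper are illustrative assumptions, not any particular vendor's API.

```python
# Minimal sketch: rule-based validation of scraped records, with ambiguous
# cases routed to a (hypothetical) AI plausibility check.
import requests
from bs4 import BeautifulSoup

def text_of(item, selector: str) -> str:
    node = item.select_one(selector)
    return node.get_text(strip=True) if node else ""

def scrape_products(url: str) -> list[dict]:
    resp = requests.get(url, timeout=10)
    resp.raise_for_status()
    soup = BeautifulSoup(resp.text, "html.parser")
    # "div.product", ".name" and ".price" are assumed selectors for the target page
    return [{"name": text_of(item, ".name"), "price": text_of(item, ".price")}
            for item in soup.select("div.product")]

def rule_check(record: dict) -> bool:
    """Cheap deterministic checks: non-empty name and a price that parses."""
    price = record["price"].replace("£", "").replace(",", "").strip()
    try:
        return bool(record["name"]) and float(price) >= 0
    except ValueError:
        return False

def llm_validate(record: dict) -> bool:
    """Placeholder for an AI plausibility check ('does this look like a real
    product listing?'); wire in whichever model provider you actually use."""
    raise NotImplementedError

def validate(records: list[dict]) -> tuple[list[dict], list[dict]]:
    clean, needs_review = [], []
    for record in records:
        (clean if rule_check(record) else needs_review).append(record)
    # Only the ambiguous minority is routed to the more expensive AI check
    # (or to a human), keeping cost and latency manageable.
    return clean, needs_review
```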

However, as AI regulation and related laws tighten, web scraping companies will face greater scrutiny.

“AI regulations may also bring the spotlight on certain areas of law that were always very relevant to the web scraping industry, such as privacy or copyright laws,” Šveistys added.

“At the end of the day, scraping content protected by such laws without proper authorisation could always lead to legal issues, and now so can using AI this way.”

Copyright battles and legal precedents

The implications of AI regulation are also playing out on a broader legal stage, particularly in cases involving generative AI tools.

High-profile lawsuits have been launched against AI giants like OpenAI and its primary backer, Microsoft, by authors, artists, and musicians who claim their copyrighted materials were used to train AI systems without proper permission.

“These cases are pivotal in determining the legal boundaries of using copyrighted material for AI development and establishing legal precedents for protecting intellectual property in the digital age,” said Šveistys.

While these lawsuits could take years to resolve, their outcomes may fundamentally shape the future of AI development. So, what can businesses do now as the regulatory and legal landscape continues to evolve?

“Speaking about the specific cases of using copyrighted material for AI training, businesses should approach this the same way as any web-scraping activity – that is, evaluate the specific data they wish to collect with the help of a legal expert in the field,” recommends Šveistys.

“It is important to recognise that the AI legal landscape is very new and rapidly evolving, with not many precedents in place to refer to as of yet. Hence, continuous monitoring and adaptation of your AI usage are crucial.”

Just this week, the UK Government made headlines with its announcement of a consultation on the use of copyrighted material for training AI models. Under the proposals, tech firms could be permitted to use copyrighted material unless owners have specifically opted out.
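
What honouring such an opt-out might look like in practice is sketched below: before a page is added to a training corpus, the crawler checks robots.txt, an X-Robots-Tag header, and a "noai" meta directive. These signals are assumptions for illustration; the consultation itself does not prescribe a specific technical mechanism.

```python
# Minimal sketch of honouring a machine-readable opt-out before adding a page
# to a training corpus. The "noai" directive checked here is an illustrative
# convention, not a mechanism defined by the UK proposals or current law.
from urllib import robotparser
from urllib.parse import urljoin, urlparse
import requests
from bs4 import BeautifulSoup

AGENT = "example-training-crawler"   # hypothetical user agent string

def robots_allows(url: str) -> bool:
    root = "{0.scheme}://{0.netloc}".format(urlparse(url))
    rp = robotparser.RobotFileParser()
    rp.set_url(urljoin(root, "/robots.txt"))
    rp.read()
    return rp.can_fetch(AGENT, url)

def page_opts_out(html: str) -> bool:
    soup = BeautifulSoup(html, "html.parser")
    for meta in soup.find_all("meta", attrs={"name": ["robots", "ai"]}):
        content = (meta.get("content") or "").lower()
        if "noai" in content:
            return True
    return False

def eligible_for_training(url: str) -> bool:
    if not robots_allows(url):
        return False
    resp = requests.get(url, headers={"User-Agent": AGENT}, timeout=10)
    resp.raise_for_status()
    if "noai" in resp.headers.get("X-Robots-Tag", "").lower():
        return False
    return not page_opts_out(resp.text)
```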

Despite the diversity of approaches globally, the AI regulatory push marks a significant moment for technological governance. Whether through the EU’s comprehensive model, China’s step-by-step strategy, or narrower, state-level initiatives like in the US, businesses worldwide must navigate a complex, evolving framework.

The challenge ahead will be striking the right balance between fostering innovation and mitigating risks, ensuring that AI remains a force for good while avoiding potential harms.

(Photo by Nathan Bingle)

See also: Anthropic urges AI regulation to avoid catastrophes

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post AI governance: Analysing emerging global regulations appeared first on AI News.

EU introduces draft regulatory guidance for AI models https://www.artificialintelligence-news.com/news/eu-introduces-draft-regulatory-guidance-for-ai-models/ Fri, 15 Nov 2024 14:52:05 +0000

The release of the “First Draft General-Purpose AI Code of Practice” marks the EU’s effort to create comprehensive regulatory guidance for general-purpose AI models.

The development of this draft has been a collaborative effort, involving input from diverse sectors including industry, academia, and civil society. The initiative was led by four specialised Working Groups, each addressing specific aspects of AI governance and risk mitigation:

  • Working Group 1: Transparency and copyright-related rules
  • Working Group 2: Risk identification and assessment for systemic risk
  • Working Group 3: Technical risk mitigation for systemic risk
  • Working Group 4: Governance risk mitigation for systemic risk

The draft is aligned with existing laws such as the Charter of Fundamental Rights of the European Union. It takes into account international approaches, striving for proportionality to risks, and aims to be future-proof by contemplating rapid technological changes.

Key objectives outlined in the draft include:

  • Clarifying compliance methods for providers of general-purpose AI models
  • Facilitating understanding across the AI value chain, ensuring seamless integration of AI models into downstream products
  • Ensuring compliance with Union law on copyrights, especially concerning the use of copyrighted material for model training
  • Continuously assessing and mitigating systemic risks associated with AI models

Recognising and mitigating systemic risks

A core feature of the draft is its taxonomy of systemic risks, which includes types, natures, and sources of such risks. The document outlines various threats such as cyber offences, biological risks, loss of control over autonomous AI models, and large-scale disinformation. By acknowledging the continuously evolving nature of AI technology, the draft recognises that this taxonomy will need updates to remain relevant.

As AI models with systemic risks become more common, the draft emphasises the need for robust safety and security frameworks (SSFs). It proposes a hierarchy of measures, sub-measures, and key performance indicators (KPIs) to ensure appropriate risk identification, analysis, and mitigation throughout a model’s lifecycle.
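
One way such a hierarchy could be represented inside a provider's own tooling is sketched below. The risk areas, measures, and KPI thresholds are invented for illustration and are not a schema defined by the draft Code.

```python
# Illustrative data model for a safety and security framework (SSF): risk
# areas map to measures, sub-measures, and the KPIs used to track them.
from dataclasses import dataclass, field

@dataclass
class KPI:
    name: str
    target: float
    current: float = 0.0

    def met(self) -> bool:
        return self.current >= self.target

@dataclass
class Measure:
    description: str
    sub_measures: list[str] = field(default_factory=list)
    kpis: list[KPI] = field(default_factory=list)

ssf = {
    "cyber_offence": Measure(
        description="Limit model assistance with intrusion tooling",
        sub_measures=["pre-deployment red-teaming", "misuse monitoring"],
        kpis=[KPI("red_team_refusal_rate", target=0.99)],
    ),
    "large_scale_disinformation": Measure(
        description="Detect and label synthetic political content",
        sub_measures=["provenance watermarking", "abuse reporting channel"],
        kpis=[KPI("watermark_detection_recall", target=0.95)],
    ),
}

unmet = {area: [k.name for k in m.kpis if not k.met()] for area, m in ssf.items()}
print(unmet)  # both KPIs show as unmet until 'current' values are recorded
```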

The draft suggests that providers establish processes to identify and report serious incidents associated with their AI models, offering detailed assessments and corrections as needed. It also encourages collaboration with independent experts for risk assessment, especially for models posing significant systemic risks.

Taking a proactive stance on AI regulatory guidance

The EU AI Act, which came into force on 1 August 2024, mandates that the final version of this Code be ready by 1 May 2025. This initiative underscores the EU’s proactive stance towards AI regulation, emphasising the need for AI safety, transparency, and accountability.

As the draft continues to evolve, the working groups invite stakeholders to participate actively in refining the document. Their collaborative input will shape a regulatory framework aimed at safeguarding innovation while protecting society from the potential pitfalls of AI technology.

While still in draft form, the EU’s Code of Practice for general-purpose AI models could set a benchmark for responsible AI development and deployment globally. By addressing key issues such as transparency, risk management, and copyright compliance, the Code aims to create a regulatory environment that fosters innovation, upholds fundamental rights, and ensures a high level of consumer protection.

This draft is open for written feedback until 28 November 2024. 

See also: Anthropic urges AI regulation to avoid catastrophes

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post EU introduces draft regulatory guidance for AI models appeared first on AI News.

Anthropic urges AI regulation to avoid catastrophes https://www.artificialintelligence-news.com/news/anthropic-urges-ai-regulation-avoid-catastrophes/ Fri, 01 Nov 2024 16:46:42 +0000

Anthropic has flagged the risks posed by increasingly capable AI systems and is calling for well-structured regulation to avoid potential catastrophes. The organisation argues that targeted regulation is essential to harness AI’s benefits while mitigating its dangers.

As AI systems evolve in capabilities such as mathematics, reasoning, and coding, their potential misuse in areas like cybersecurity or even biological and chemical disciplines significantly increases.

Anthropic warns the next 18 months are critical for policymakers to act, as the window for proactive prevention is narrowing. Notably, Anthropic’s Frontier Red Team highlights how current models can already contribute to various cyber offence-related tasks and expects future models to be even more effective.

Of particular concern is the potential for AI systems to exacerbate chemical, biological, radiological, and nuclear (CBRN) misuse. The UK AI Safety Institute found that several AI models can now match PhD-level human expertise in providing responses to science-related inquiries.

In addressing these risks, Anthropic has pointed to its Responsible Scaling Policy (RSP), released in September 2023, as a robust countermeasure. The RSP mandates an increase in safety and security measures corresponding to the sophistication of AI capabilities.

The RSP framework is designed to be adaptive and iterative, with regular assessments of AI models allowing for timely refinement of safety protocols. Anthropic says its commitment to maintaining and enhancing safety spans various team expansions, particularly in security, interpretability, and trust, ensuring readiness for the rigorous safety standards set by its RSP.
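
Anthropic has not published the RSP as code, but the sketch below illustrates the underlying idea of gating deployment on capability evaluations: the more dangerous the measured capabilities, the stronger the safeguards that must already be in place. The evaluation names, thresholds, and safeguard lists are invented for illustration and are not Anthropic's actual criteria.

```python
# Illustrative only: a capability-gated deployment check in the spirit of a
# responsible scaling policy. Names and thresholds are hypothetical.
REQUIRED_SAFEGUARDS = {
    1: {"acceptable-use policy"},
    2: {"acceptable-use policy", "misuse monitoring"},
    3: {"acceptable-use policy", "misuse monitoring",
        "weights access controls", "independent pre-deployment evaluation"},
}

def safety_level(eval_scores: dict[str, float]) -> int:
    """Map (hypothetical) dangerous-capability eval scores to a safety level."""
    if eval_scores.get("autonomous_replication", 0) > 0.5 or \
       eval_scores.get("cbrn_uplift", 0) > 0.5:
        return 3
    if eval_scores.get("cyber_uplift", 0) > 0.3:
        return 2
    return 1

def may_deploy(eval_scores: dict[str, float], safeguards_in_place: set[str]) -> bool:
    # Deployment is allowed only if every safeguard required at this level exists.
    return REQUIRED_SAFEGUARDS[safety_level(eval_scores)] <= safeguards_in_place

print(may_deploy({"cyber_uplift": 0.4},
                 {"acceptable-use policy", "misuse monitoring"}))  # True
```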

Anthropic believes the widespread adoption of RSPs across the AI industry, while primarily voluntary, is essential for addressing AI risks.

Transparent, effective regulation is crucial to reassure society of AI companies’ adherence to promises of safety. Regulatory frameworks, however, must be strategic, incentivising sound safety practices without imposing unnecessary burdens.

Anthropic envisions regulations that are clear, focused, and adaptive to evolving technological landscapes, arguing that these are vital in achieving a balance between risk mitigation and fostering innovation.

In the US, Anthropic suggests that federal legislation could be the ultimate answer to AI risk regulation—though state-driven initiatives might need to step in if federal action lags. Legislative frameworks developed by countries worldwide should allow for standardisation and mutual recognition to support a global AI safety agenda, minimising the cost of regulatory adherence across different regions.

Furthermore, Anthropic addresses scepticism towards imposing regulations—highlighting that overly broad use-case-focused regulations would be inefficient for general AI systems, which have diverse applications. Instead, regulations should target fundamental properties and safety measures of AI models. 

While covering broad risks, Anthropic acknowledges that some immediate threats – like deepfakes – aren’t the focus of their current proposals since other initiatives are tackling these nearer-term issues.

Ultimately, Anthropic stresses the importance of instituting regulations that spur innovation rather than stifle it. The initial compliance burden, though inevitable, can be minimised through flexible and carefully designed safety tests. Proper regulation can even help safeguard both national interests and private sector innovation by securing intellectual property against internal and external threats.

By focusing on empirically measured risks, Anthropic plans for a regulatory landscape that neither biases against nor favours open or closed-source models. The objective remains clear: to manage the significant risks of frontier AI models with rigorous but adaptable regulation.

(Image Credit: Anthropic)

See also: President Biden issues first National Security Memorandum on AI

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post Anthropic urges AI regulation to avoid catastrophes appeared first on AI News.

EU AI Act: Early prep could give businesses competitive edge https://www.artificialintelligence-news.com/news/eu-ai-act-early-prep-could-give-businesses-competitive-edge/ Tue, 22 Oct 2024 13:21:32 +0000

The EU AI Act is set to fully take effect in August 2026, but some provisions are coming into force even earlier.

The legislation establishes a first-of-its-kind regulatory framework for AI systems, employing a risk-based approach that categorises AI applications based on their potential impact on safety, human rights, and societal wellbeing.

“Some systems are banned entirely, while systems deemed ‘high-risk’ are subject to stricter requirements and assessments before deployment,” explains the DPO Centre, a data protection consultancy.

Similar to GDPR, the Act’s extra-territorial reach means it applies to any organisation marketing, deploying, or using AI systems within the EU, regardless of where the system is developed. Businesses will be classified primarily as either ‘Providers’ or ‘Deployers,’ with additional categories for ‘Distributors,’ ‘Importers,’ ‘Product Manufacturers,’ and ‘Authorised Representatives.’
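
As a rough illustration of how the risk-based tiers fit together, the sketch below maps an intended use to a tier with simple keyword matching. Real classification depends on the Act's annexes and case-by-case legal analysis; the categories and keywords here are simplifications for illustration only.

```python
# Coarse illustration of the EU AI Act's risk-based tiers. Not a legal test.
PROHIBITED_USES = {"social scoring by public authorities", "subliminal manipulation"}
HIGH_RISK_DOMAINS = {"recruitment", "credit scoring", "medical device",
                     "law enforcement", "critical infrastructure", "education"}

def risk_tier(intended_use: str) -> str:
    use = intended_use.lower()
    if any(p in use for p in PROHIBITED_USES):
        return "prohibited"
    if any(d in use for d in HIGH_RISK_DOMAINS):
        return "high-risk (conformity assessment before deployment)"
    if "chatbot" in use or "generated content" in use:
        return "limited risk (transparency obligations)"
    return "minimal risk"

print(risk_tier("CV screening for recruitment"))   # high-risk (...)
print(risk_tier("customer-support chatbot"))       # limited risk (...)
```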

For organisations developing or deploying AI systems, particularly those classified as high-risk, compliance preparation promises to be complex. However, experts suggest viewing this as an opportunity rather than a burden.

“By embracing compliance as a catalyst for more transparent AI usage, businesses can turn regulatory demands into a competitive advantage,” notes the DPO Centre.

Key preparation strategies include comprehensive staff training, establishing robust corporate governance, and implementing strong cybersecurity measures. The legislation’s requirements often overlap with existing GDPR frameworks, particularly regarding transparency and accountability.

Organisations must also adhere to ethical AI principles and maintain clear documentation of their systems’ functionality, limitations, and intended use. The EU is currently developing specific codes of practice and templates to assist with compliance obligations.
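
Because the official templates are still being developed, the record below is only an assumption about the kind of fields such documentation might capture for a single system; it is a minimal sketch rather than a compliance artefact.

```python
# Illustrative documentation record covering functionality, limitations, and
# intended use. Field names and values are hypothetical.
from dataclasses import dataclass, asdict
import json

@dataclass
class AISystemRecord:
    name: str
    role: str                      # "provider" or "deployer" under the Act
    intended_use: str
    functionality: str
    known_limitations: list[str]
    human_oversight: str
    training_data_summary: str

record = AISystemRecord(
    name="invoice-triage-model",
    role="deployer",
    intended_use="Routing supplier invoices to the correct approval queue",
    functionality="Text classifier over OCR'd invoice fields",
    known_limitations=["Degrades on handwritten invoices", "English-only"],
    human_oversight="All low-confidence routings reviewed by finance staff",
    training_data_summary="Internal invoices 2019-2023, no special-category data",
)

print(json.dumps(asdict(record), indent=2))
```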

For businesses uncertain about their obligations, experts recommend seeking professional guidance early. Tools like the EU AI Act Compliance Checker can help organisations verify their systems’ alignment with regulatory requirements.

Rather than treating compliance as merely a regulatory burden, forward-thinking organisations should view the EU’s AI Act as an opportunity to demonstrate their commitment to responsible AI development and build greater trust with their customers.

See also: AI governance gap: 95% of firms haven’t implemented frameworks

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post EU AI Act: Early prep could give businesses competitive edge appeared first on AI News.

AI governance gap: 95% of firms haven’t implemented frameworks https://www.artificialintelligence-news.com/news/ai-governance-gap-95-of-firms-havent-frameworks/ Thu, 17 Oct 2024 16:38:58 +0000

Robust governance is essential to mitigate AI risks and maintain responsible systems, but the majority of firms are yet to implement a framework.

Commissioned by Prove AI and conducted by Zogby Analytics, the report polled over 600 CEOs, CIOs, and CTOs from large companies across the US, UK, and Germany. The findings show that 96% of organisations are already utilising AI to support business operations, with the same percentage planning to increase their AI budgets in the coming year.

The primary motivations for AI investment include increasing productivity (82%), improving operational efficiency (73%), enhancing decision-making (65%), and achieving cost savings (60%). The most common AI use cases reported were customer service and support, predictive analytics, and marketing and ad optimisation.

Despite the surge in AI investments, business leaders are acutely aware of the additional risk exposure that AI brings to their organisations. Data integrity and security emerged as the biggest deterrents to implementing new AI solutions.

Executives also reported encountering various AI performance issues, including:

  • Data quality issues (e.g., inconsistencies or inaccuracies): 41%
  • Bias detection and mitigation challenges in AI algorithms, leading to unfair or discriminatory outcomes: 37% (a minimal detection sketch follows this list)
  • Difficulty in quantifying and measuring the return on investment (ROI) of AI initiatives: 28%
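
The bias-detection challenge can be made concrete with a single metric: the gap in positive-outcome rates between groups, often called the demographic parity difference. The column names and the toy data in the sketch below are assumptions; a real audit would examine several metrics and protected attributes.

```python
# Minimal bias-detection sketch: demographic parity difference, i.e. the gap
# in positive-outcome rates between groups.
def selection_rates(rows: list[dict], group_col: str, outcome_col: str) -> dict[str, float]:
    totals, positives = {}, {}
    for row in rows:
        g = row[group_col]
        totals[g] = totals.get(g, 0) + 1
        positives[g] = positives.get(g, 0) + int(row[outcome_col])
    return {g: positives[g] / totals[g] for g in totals}

def demographic_parity_difference(rows, group_col="group", outcome_col="approved"):
    rates = selection_rates(rows, group_col, outcome_col)
    return max(rates.values()) - min(rates.values())

data = [
    {"group": "A", "approved": 1}, {"group": "A", "approved": 1},
    {"group": "A", "approved": 0}, {"group": "B", "approved": 1},
    {"group": "B", "approved": 0}, {"group": "B", "approved": 0},
]
print(demographic_parity_difference(data))  # ~0.33, a gap worth investigating
```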

While 95% of respondents expressed confidence in their organisation’s current AI risk management practices, the report revealed a significant gap in AI governance implementation.

Only 5% of executives reported that their organisation has implemented any AI governance framework. However, 82% stated that implementing AI governance solutions is a somewhat or extremely pressing priority, with 85% planning to implement such solutions by summer 2025.

The report also found that 82% of participants support an AI governance executive order to provide stronger oversight. Additionally, 65% expressed concern about IP infringement and data security.

Mrinal Manohar, CEO of Prove AI, commented: “Executives are making themselves clear: AI’s long-term efficacy, including providing a meaningful return on the massive investments organisations are currently making, is contingent on their ability to develop and refine comprehensive AI governance strategies.

“The wave of AI-focused legislation going into effect around the world is only increasing the urgency; for the current wave of innovation to continue responsibly, we need to implement clearer guardrails to manage and monitor the data informing AI systems.”

As global regulations like the EU AI Act loom on the horizon, the report underscores the importance of de-risking AI and the work that still needs to be done. Implementing and optimising dedicated AI governance strategies has emerged as a top priority for businesses looking to harness the power of AI while mitigating associated risks.

The findings of this report serve as a wake-up call for organisations to prioritise AI governance as they continue to invest in and deploy AI technologies. Responsible implementation and robust governance frameworks will be key to unlocking the full potential of AI while maintaining trust and compliance.

(Photo by Rob Thompson)

See also: Scoring AI models: Endor Labs unveils evaluation tool

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post AI governance gap: 95% of firms haven’t implemented frameworks appeared first on AI News.
