court Archives - AI News https://www.artificialintelligence-news.com/news/tag/court/ Fri, 25 Apr 2025 14:07:48 +0000

Meta accused of using pirated data for AI development https://www.artificialintelligence-news.com/news/meta-accused-using-pirated-data-for-ai-development/ Fri, 10 Jan 2025 12:16:52 +0000

The post Meta accused of using pirated data for AI development appeared first on AI News.

Plaintiffs in the case of Kadrey et al. vs. Meta have filed a motion alleging the firm knowingly used copyrighted works in the development of its AI models.

The plaintiffs, which include author Richard Kadrey, filed their “Reply in Support of Plaintiffs’ Motion for Leave to File Third Amended Consolidated Complaint” in the United States District Court in the Northern District of California.

The filing accuses Meta of systematically torrenting and stripping copyright management information (CMI) from pirated datasets, including works from the notorious shadow library LibGen.

Documents recently submitted to the court allegedly reveal incriminating practices involving Meta's senior leadership. Plaintiffs allege that Meta CEO Mark Zuckerberg gave explicit approval for the use of the LibGen dataset, despite internal concerns raised by the company's AI executives.

A December 2024 memo from internal Meta discussions acknowledged LibGen as “a dataset we know to be pirated,” with debates arising about the ethical and legal ramifications of using such materials. Documents also revealed that top engineers hesitated to torrent the datasets, citing concerns about using corporate laptops for potentially unlawful activities.

Additionally, internal communications suggest that after acquiring the LibGen dataset, Meta stripped CMI from the copyrighted works contained within—a practice that plaintiffs highlight as central to claims of copyright infringement.

According to the deposition of Michael Clark – a corporate representative for Meta – the company implemented scripts designed to remove any information identifying these works as copyrighted, including keywords like “copyright,” “acknowledgements,” or lines commonly used in such texts. Clark attested that this practice was done intentionally to prepare the dataset for training Meta’s Llama AI models.  
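The scripts described in the deposition amount to keyword-based filters. As a rough sketch of how such a filter might work (the actual scripts have not been made public; the marker list and function below are illustrative assumptions, not Meta's code):

```python
# Illustrative sketch only: the real scripts described in the deposition have
# not been published. The marker list and logic here are assumptions.
CMI_MARKERS = ("copyright", "acknowledgements", "all rights reserved")

def strip_cmi_lines(text: str) -> str:
    """Drop any line containing a known copyright-management keyword."""
    kept = [
        line
        for line in text.splitlines()
        if not any(marker in line.lower() for marker in CMI_MARKERS)
    ]
    return "\n".join(kept)

sample = "Chapter One\nCopyright 2020 Example Press\nIt was a quiet morning."
print(strip_cmi_lines(sample))  # the copyright notice line is removed
```

A filter this crude would also delete ordinary prose that happens to mention the word "copyright", which is consistent with the complaint's claim that the goal was wholesale removal rather than careful curation.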

“Doesn’t feel right”

The allegations against Meta paint a portrait of a company knowingly partaking in a widespread piracy scheme facilitated through torrenting.

According to a string of emails included as exhibits, Meta engineers expressed concerns about the optics of torrenting pirated datasets from within corporate spaces. One engineer noted that “torrenting from a [Meta-owned] corporate laptop doesn’t feel right,” but despite hesitation, the rapid downloading and distribution – or “seeding” – of pirated data took place.

Legal counsel for the plaintiffs has stated that as late as January 2024, Meta had "already torrented (both downloaded and distributed) data from LibGen." Moreover, records show that hundreds of related documents were obtained by Meta months prior but were withheld during early discovery. Plaintiffs argue this delayed disclosure amounts to a bad-faith attempt by Meta to obstruct access to vital evidence.

During a deposition on 17 December 2024, Zuckerberg himself reportedly admitted that such activities would raise “lots of red flags” and stated it “seems like a bad thing,” though he provided limited direct responses regarding Meta’s broader AI training practices.

This case began as an intellectual property infringement action on behalf of authors and publishers claiming violations relating to AI use of their materials. However, the plaintiffs are now seeking to add two major claims to their suit: a violation of the Digital Millennium Copyright Act (DMCA) and a breach of the California Comprehensive Computer Data Access and Fraud Act (CDAFA).

Under the DMCA, the plaintiffs assert that Meta knowingly removed copyright protections to conceal unauthorised uses of copyrighted texts in its Llama models.

As cited in the complaint, Meta allegedly stripped CMI "to reduce the chance that the models will memorise this data," and plaintiffs say this removal of rights-management indicators made discovering the infringement more difficult for copyright holders.

The CDAFA allegations involve Meta’s methods for obtaining the LibGen dataset, including allegedly engaging in torrenting to acquire copyrighted datasets without permission. Internal documentation shows Meta engineers openly discussed concerns that seeding and torrenting might prove to be “legally not ok.” 

Meta case may impact emerging legislation around AI development

At the heart of this expanding legal battle lies growing concern over the intersection of copyright law and AI.

Plaintiffs argue the stripping of copyright protections from textual datasets denies rightful compensation to copyright owners and allows Meta to build AI systems like Llama on the financial ruins of authors’ and publishers’ creative efforts.

These allegations arrive amid heightened global scrutiny of generative AI technologies. Companies like OpenAI, Google, and Meta have all come under fire over the use of copyrighted data to train their models. Courts across jurisdictions are currently grappling with the long-term impact of AI on rights management, with potentially landmark cases being decided in both the US and the UK.

In this particular case, US courts have shown increasing willingness to hear complaints about AI’s potential harm to long-established copyright law precedents. Plaintiffs, in their motion, referred to The Intercept Media v. OpenAI, a recent decision from New York in which a similar DMCA claim was allowed to proceed.

Meta continues to deny all allegations in the case and has yet to publicly respond to Zuckerberg’s reported deposition statements.

Whether or not plaintiffs succeed in these amendments, authors across the world face growing anxieties about how their creative works are handled within the context of AI. With copyright law struggling to keep pace with technological advances, this case underscores the need for clearer guidance at an international level to protect both creators and innovators.

For Meta, these claims also represent a reputational risk. As AI becomes the central focus of its future strategy, the allegations of reliance on pirated libraries are unlikely to help its ambitions of maintaining leadership in the field.  

The unfolding case of Kadrey et al. vs. Meta could have far-reaching ramifications for the development of AI models moving forward, potentially setting legal precedents in the US and beyond.

(Photo by Amy Syiek)

See also: UK wants to prove AI can modernise public services responsibly

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

OpenAI calls Elon Musk’s lawsuit claims ‘incoherent’ https://www.artificialintelligence-news.com/news/openai-calls-elon-musk-lawsuit-claims-incoherent/ Tue, 12 Mar 2024 16:36:27 +0000

The post OpenAI calls Elon Musk’s lawsuit claims ‘incoherent’ appeared first on AI News.

OpenAI has hit back at Elon Musk’s lawsuit, saying his claims rest on “convoluted — often incoherent — factual premises.”

Musk’s lawsuit accuses OpenAI of breaching its non-profit status and reneging on a founding agreement to keep the organisation non-profit and release its AI technology publicly. In its court filings, however, OpenAI refutes these allegations, stating that no such agreement with Musk exists and branding the supposed pact a mere “fiction.”

The organisation further alleges that Musk had actually supported the idea of transitioning OpenAI into a for-profit entity under his control. It claims Musk sought full control of the company as CEO and majority equity ownership, and even suggested tethering it to Tesla for financial backing. However, negotiations between Musk and OpenAI did not culminate in an agreement, leading to Musk’s withdrawal from the project.

OpenAI’s rebuttal highlights purported emails exchanged between Musk and the organisation, indicating his prior knowledge and support for its transition to a for-profit model. The company suggests that Musk’s lawsuit is driven by his desire to claim credit for OpenAI’s successes after he disengaged from the project.

In response to Musk’s legal action, OpenAI has portrayed his motives as self-serving rather than altruistic, asserting that his lawsuit is a bid to further his own commercial interests under the guise of championing humanity’s cause.

Meanwhile, Musk’s own foray into the realm of artificial intelligence with his company xAI has drawn attention.

Musk announced xAI’s intention to open source its Grok chatbot shortly after OpenAI published emails purportedly demonstrating Musk’s prior awareness of OpenAI’s plans to move away from open source. While this move could be interpreted as a retaliatory gesture against OpenAI, it also presents an opportunity for xAI to garner feedback from developers and enhance its technology.

The legal clash between Musk and OpenAI underscores the complexities surrounding the development and governance of AI technologies, as well as the competing interests within the tech industry.

(Photo by Tim Mossholder on Unsplash)

See also: OpenAI announces new board lineup and governance structure


Elon Musk sues OpenAI over alleged breach of nonprofit agreement https://www.artificialintelligence-news.com/news/elon-musk-sues-openai-alleged-breach-nonprofit-agreement/ Fri, 01 Mar 2024 13:09:25 +0000

The post Elon Musk sues OpenAI over alleged breach of nonprofit agreement appeared first on AI News.

Elon Musk has filed a lawsuit against OpenAI and its CEO, Sam Altman, citing a violation of their nonprofit agreement.

The legal battle, unfolding in the Superior Court of California for the County of San Francisco, revolves around OpenAI’s departure from its foundational mission of advancing open-source artificial general intelligence (AGI) for the betterment of humanity.

Musk was a co-founder and early backer of OpenAI. According to Musk, Altman and Greg Brockman (another co-founder and current president of OpenAI) convinced him to bankroll the startup in 2015 on promises that it would remain a nonprofit.

In his legal challenge, Musk accuses OpenAI of straying from its principles through a collaboration with Microsoft—alleging that the partnership prioritises proprietary technology over the original ethos of open-source advancement.

Musk’s grievances include claims of contract breach, violation of fiduciary duty, and unfair business practices. He calls upon OpenAI to realign with its nonprofit objectives and seeks an injunction to halt the commercial exploitation of AGI technology.

At the heart of the dispute is OpenAI’s recent launch of GPT-4 in March 2023. Musk contends that unlike its predecessors, GPT-4 represents a shift towards closed-source models—a move he believes favours Microsoft’s financial interests at the expense of OpenAI’s altruistic mission.

Founded in 2015 as a nonprofit AI research lab, OpenAI transitioned into a commercial entity in 2020. OpenAI has now adopted a profit-driven approach, with revenues reportedly surpassing $2 billion annually.

Musk, who has long voiced concerns about the risks posed by AI, has called for robust government regulation and responsible AI development. He questions the technical expertise of OpenAI’s current board and highlights the removal and subsequent reinstatement of Altman in November 2023 as evidence of a profit-oriented agenda aligned with Microsoft’s interests.

See also: Mistral AI unveils LLM rivalling major players


US appeals court decides scraping public web data is fine https://www.artificialintelligence-news.com/news/us-appeals-court-scraping-public-web-data-fine/ Tue, 19 Apr 2022 12:35:56 +0000

The post US appeals court decides scraping public web data is fine appeared first on AI News.

The US Ninth Circuit Court of Appeals has decided that scraping data from a public website doesn’t violate the Computer Fraud and Abuse Act (CFAA).

In 2017, employment analytics firm HiQ filed a lawsuit against LinkedIn’s efforts to block it from scraping data from users’ profiles.

The court barred LinkedIn from stopping HiQ from scraping the data after deciding that the CFAA – which criminalises accessing a protected computer without authorisation – doesn’t apply because the information is public.

LinkedIn appealed the case and in 2019 the Ninth Circuit Court sided with HiQ and upheld the original decision.

In March 2020, LinkedIn once again appealed the decision, arguing that implementing technical barriers and sending a cease-and-desist letter amounts to revoking authorisation, meaning any subsequent attempts to scrape data are unauthorised and therefore break the CFAA.

“At issue was whether, once hiQ received LinkedIn’s cease-and-desist letter, any further scraping and use of LinkedIn’s data was ‘without authorization’ within the meaning of the CFAA,” reads the filing (PDF).

“The panel concluded that hiQ raised a serious question as to whether the CFAA ‘without authorization’ concept is inapplicable where, as here, prior authorization is not generally required but a particular person—or bot—is refused access.”

The filing highlights several of LinkedIn’s technical measures to protect against data-scraping:

  • Prohibiting search engine crawlers and bots – aside from certain allowed entities, like Google – from accessing LinkedIn’s servers via the website’s standard ‘robots.txt’ file.
  • ‘Quicksand’ system that detects non-human activity indicative of scraping.
  • ‘Sentinel’ system that slows (or blocks) activity from suspicious IP addresses.
  • ‘Org Block’ system that generates a list of known malicious IP addresses linked to large-scale scraping.
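The first of these measures relies on the standard robots.txt convention. As a rough illustration of how a “block everyone except named crawlers” policy of the kind the filing describes is interpreted, here is Python’s built-in parser applied to a hypothetical rule set (the rules and paths below are assumptions, not LinkedIn’s actual file):

```python
# Hypothetical robots.txt rules of the kind described in the filing; the
# paths and user agents are illustrative, not LinkedIn's real configuration.
import urllib.robotparser

ROBOTS_TXT = """\
User-agent: Googlebot
Disallow:

User-agent: *
Disallow: /
"""

parser = urllib.robotparser.RobotFileParser()
parser.parse(ROBOTS_TXT.splitlines())

# An allowed crawler may fetch any path; everyone else is refused.
print(parser.can_fetch("Googlebot", "/in/some-profile"))  # True
print(parser.can_fetch("hiQ-bot", "/in/some-profile"))    # False
```

Notably, robots.txt is a voluntary convention: it expresses refusal of access but does not enforce it, which is why LinkedIn paired it with active detection systems like Quicksand and Sentinel.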

Overall, LinkedIn claims to block approximately 95 million automated attempts to scrape data every day.

The appeals court once again ruled in favour of HiQ, upholding the conclusion that “the balance of hardships tips sharply in hiQ’s favor” and the company’s existence would be threatened without having access to LinkedIn’s public data.

“hiQ’s entire business depends on being able to access public LinkedIn member profiles,” hiQ’s CEO argued. “There is no current viable alternative to LinkedIn’s member database to obtain data for hiQ’s Keeper and Skill Mapper services.” 

However, LinkedIn’s petition (PDF) counters that the ruling has wider implications.

“Under the Ninth Circuit’s rule, every company with a public portion of its website that is integral to the operation of its business – from online retailers like Ticketmaster and Amazon to social networking platforms like Twitter – will be exposed to invasive bots deployed by free-riders unless they place those websites entirely behind password barricades,” wrote the company’s attorneys.

“But if that happens, those websites will no longer be indexable by search engines, which will make information less available to discovery by the primary means by which people obtain information on the Internet.”

AI companies that often rely on mass data-scraping will undoubtedly be pleased with the court’s decision.

Clearview AI, for example, has regularly been targeted by authorities and privacy campaigners for scraping billions of images from public websites to power its facial recognition system.

“Common law has never recognised a right to privacy for your face,” Clearview AI lawyer Tor Ekeland once argued.

Clearview AI recently made headlines for offering its services to Ukraine to help the country identify both Ukrainian defenders and Russian assailants who’ve lost their lives in the brutal conflict.

Mass data scraping will remain a controversial subject. Supporters will back the appeal court’s ruling while opponents will join LinkedIn’s attorneys in their concerns about normalising the practice.

(Photo by ThisisEngineering RAEng on Unsplash)


Aussie court rules AIs can be credited as inventors under patent law https://www.artificialintelligence-news.com/news/aussie-court-rules-ais-can-be-credited-as-inventors-under-patent-law/ Tue, 03 Aug 2021 16:10:43 +0000

The post Aussie court rules AIs can be credited as inventors under patent law appeared first on AI News.

A federal court in Australia has ruled that AI systems can be credited as inventors under patent law in a case that could set a global precedent.

Ryan Abbott, a professor at the University of Surrey, has launched over a dozen patent applications around the world – including in the UK, US, New Zealand, and Australia – on behalf of US-based Dr Stephen Thaler.

The twist is that it’s not Thaler whom Abbott is attempting to credit as an inventor, but rather Thaler’s AI device, known as DABUS.

“In my view, an inventor as recognised under the act can be an artificial intelligence system or device,” said justice Jonathan Beach, overturning Australia’s original verdict. “We are both created and create. Why cannot our own creations also create?”

DABUS consists of neural networks and was used to invent an emergency warning light, a food container that improves grip and heat transfer, and more.

Until this ruling, all of the patent applications had been rejected, including in Australia. Each country determined that a human must be the credited inventor.

Whether AIs should be afforded certain “rights” similar to humans is a key debate, and one that is increasingly in need of answers. This patent case could be the first step towards establishing when machines – with increasing forms of sentience – should be treated like humans.

DABUS was awarded its first patent – for “a food container based on fractal geometry” – by South Africa’s Companies and Intellectual Property Commission on June 24.

Following the patent award, Professor Adrian Hilton, Director of the Institute for People-Centred AI at the University of Surrey, commented:

“This is a truly historic case that recognises the need to change how we attribute invention. We are moving from an age in which invention was the preserve of people to an era where machines are capable of realising the inventive step, unleashing the potential of AI-generated inventions for the benefit of society.

The School of Law at the University of Surrey has taken a leading role in asking important philosophical questions such as whether innovation can only be a human phenomenon, and what happens legally when AI behaves like a person.”

AI News reached out to the patent experts at ACT | The App Association, which represents more than 5,000 app makers and connected device companies around the world, for their perspective.

Brian Scarpelli, Senior Global Policy Counsel at ACT | The App Association, commented:

“The App Association, in alignment with the plain language of patent laws across key jurisdictions (including Australia’s 1990 Patents Act), is opposed to the proposal that a patent may be granted for an invention devised by a machine, rather than by a natural person.

Today’s patent laws can, for certain kinds of AI inventions, appropriately support inventorship. Patent offices can use the existing requirements for software patentability as a starting point to identify necessary elements of patentable AI inventions and applications – for example for AI technology that is used to improve machine capability, where it can be delineated, declared, and evaluated in a way equivalent to software inventions.

But more generally, determinations regarding when and by whom inventorship and authorship can be autonomously created by AI could represent a drastic shift in law and policy. This would have direct implications for policy questions about whether allowing patents on inventions made by machines furthers public policy goals, and could even reach into broader definitions of AI personhood.

Continued study, both by national/regional patent offices and multilateral fora like the World Intellectual Property Office, is going to be critical and needs to continue to inform a comprehensive debate by policymakers.”

Feel free to let us know in the comments whether you believe AI systems should have similar legal protections and obligations to humans.

(Photo by Trollinho on Unsplash)

Find out more about Digital Transformation Week North America, taking place on November 9-10 2021, a virtual event and conference exploring advanced DTX strategies for a ‘digital everything’ world.
