Meta | Meta AI Developments & News | AI News
https://www.artificialintelligence-news.com/categories/ai-companies/meta-facebook/
Artificial Intelligence News – Wed, 30 Apr 2025

Meta beefs up AI security with new Llama tools
https://www.artificialintelligence-news.com/news/meta-beefs-up-ai-security-new-llama-tools/
Wed, 30 Apr 2025

The post Meta beefs up AI security with new Llama tools  appeared first on AI News.

If you’re building with AI, or trying to defend against the less savoury side of the technology, Meta just dropped new Llama security tools.

The improved security tools for the Llama AI models arrive alongside fresh resources from Meta designed to help cybersecurity teams harness AI for defence. It’s all part of their push to make developing and using AI a bit safer for everyone involved.

Developers working with the Llama family of models now have some upgraded kit to play with. You can grab these latest Llama Protection tools directly from Meta’s own Llama Protections page, or find them where many developers live: Hugging Face and GitHub.

First up is Llama Guard 4. Think of it as an evolution of Meta’s customisable safety filter for AI. The big news here is that it’s now multimodal, so it can understand and apply safety rules not just to text but to images as well. That’s crucial as AI applications get more visual. This new version is also being baked into Meta’s brand-new Llama API, which is currently in a limited preview.

Then there’s LlamaFirewall. This is a new piece of the puzzle from Meta, designed to act like a security control centre for AI systems. It helps manage different safety models working together and hooks into Meta’s other protection tools. Its job? To spot and block the kind of risks that keep AI developers up at night – things like clever ‘prompt injection’ attacks designed to trick the AI, potentially dodgy code generation, or risky behaviour from AI plug-ins.
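Meta hasn’t published LlamaFirewall’s internals here, but the “control centre” idea (several safety models run in sequence, with the first failure blocking the request) can be sketched in a few lines. Everything below, including the function names, keyword rules, and verdict structure, is an illustrative stand-in rather than Meta’s actual API:

```python
from dataclasses import dataclass

@dataclass
class Verdict:
    allowed: bool
    reason: str

def injection_check(prompt: str) -> Verdict:
    # Stand-in for a real classifier such as Prompt Guard: flag phrasing
    # commonly seen in prompt-injection attempts.
    for flag in ("ignore previous instructions", "disregard your system prompt"):
        if flag in prompt.lower():
            return Verdict(False, f"possible prompt injection: {flag!r}")
    return Verdict(True, "ok")

def code_risk_check(prompt: str) -> Verdict:
    # Stand-in for a code-safety model: block an obviously destructive request.
    if "rm -rf /" in prompt:
        return Verdict(False, "risky code pattern")
    return Verdict(True, "ok")

def firewall(prompt: str, checks=(injection_check, code_risk_check)) -> Verdict:
    # Run every configured safety model; the first failure blocks the request.
    for check in checks:
        verdict = check(prompt)
        if not verdict.allowed:
            return verdict
    return Verdict(True, "ok")
```

A production firewall would swap the keyword rules for model calls, but the orchestration pattern is the same: independent checks, uniform verdicts, fail-closed.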

Meta has also given its Llama Prompt Guard a tune-up. The main Prompt Guard 2 (86M) model is now better at sniffing out those pesky jailbreak attempts and prompt injections. More interesting, perhaps, is the introduction of Prompt Guard 2 22M.

Prompt Guard 2 22M is a much smaller, nippier version. Meta reckons it can slash latency and compute costs by up to 75% compared to the bigger model, without sacrificing too much detection power. For anyone needing faster responses or working on tighter budgets, that’s a welcome addition.
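One common way that kind of saving gets realised in practice is a cascade: the small model settles clear-cut prompts on its own, and only ambiguous scores pay for the larger model. The scores and thresholds below are invented for illustration, and neither Prompt Guard model is actually called; the stubs just stand in for the two model sizes:

```python
def small_model_score(prompt: str) -> float:
    # Stub for the smaller 22M model: a cheap, fast attack-probability estimate.
    p = prompt.lower()
    if "jailbreak" in p:
        return 0.9
    if "ignore" in p:
        return 0.5   # ambiguous: this is where the cascade escalates
    return 0.05

def large_model_score(prompt: str) -> float:
    # Stub for the larger 86M model: slower but more discriminating.
    return 0.95 if "instructions" in prompt.lower() else 0.1

def is_attack(prompt: str, low: float = 0.2, high: float = 0.8) -> bool:
    # Clear-cut prompts are settled by the small model alone; only scores
    # in the grey zone between `low` and `high` invoke the larger model.
    score = small_model_score(prompt)
    if score <= low:
        return False
    if score >= high:
        return True
    return large_model_score(prompt) > 0.5
```

If most traffic is clear-cut, the expensive model runs rarely, which is where latency and compute savings of the kind Meta quotes come from.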

But Meta isn’t just focusing on the AI builders; they’re also looking at the cyber defenders on the front lines of digital security. They’ve heard the calls for better AI-powered tools to help in the fight against cyberattacks, and they’re sharing some updates aimed at just that.

The CyberSec Eval 4 benchmark suite has been updated. This open-source toolkit helps organisations figure out how good AI systems actually are at security tasks. This latest version includes two new tools:

  • CyberSOC Eval: Built with the help of cybersecurity experts CrowdStrike, this framework specifically measures how well AI performs in a real Security Operations Centre (SOC) environment. It’s designed to give a clearer picture of AI’s effectiveness in threat detection and response. The benchmark itself is coming soon.
  • AutoPatchBench: This benchmark tests how good Llama and other AIs are at automatically finding and fixing security holes in code before the bad guys can exploit them.

To help get these kinds of tools into the hands of those who need them, Meta is kicking off the Llama Defenders Program. This seems to be about giving partner companies and developers special access to a mix of AI solutions – some open-source, some early-access, some perhaps proprietary – all geared towards different security challenges.

As part of this, Meta is sharing an AI security tool they use internally: the Automated Sensitive Doc Classification Tool. It automatically slaps security labels on documents inside an organisation. Why? To stop sensitive info from walking out the door, or to prevent it from being accidentally fed into an AI system (like in RAG setups) where it could be leaked.
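The article doesn’t describe the tool’s interface, but the gating idea is straightforward: label every document, then refuse to index anything above a chosen sensitivity ceiling into the retrieval corpus. The labels and keyword rules below are illustrative only; Meta’s tool presumably uses a trained classifier rather than string matching:

```python
SENSITIVITY = {"public": 0, "internal": 1, "restricted": 2}

def label_document(text: str) -> str:
    # Stand-in for the automated classifier: a real system would use a
    # trained model, not keyword rules.
    t = text.lower()
    if "salary" in t or "passport" in t:
        return "restricted"
    if "internal use only" in t:
        return "internal"
    return "public"

def build_rag_index(docs: list, max_label: str = "internal") -> list:
    # Admit only documents at or below the sensitivity ceiling, so the
    # retrieval step can never surface anything above it.
    ceiling = SENSITIVITY[max_label]
    return [d for d in docs if SENSITIVITY[label_document(d)] <= ceiling]
```

Filtering at ingestion time, rather than at answer time, is what stops sensitive material from ever entering the corpus an AI system can retrieve from.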

They’re also tackling the problem of fake audio generated by AI, which is increasingly used in scams. The Llama Generated Audio Detector and Llama Audio Watermark Detector are being shared with partners to help them spot AI-generated voices in potential phishing calls or fraud attempts. Companies like Zendesk, Bell Canada, and AT&T are already lined up to integrate these.

Finally, Meta gave a sneak peek at something potentially huge for user privacy: Private Processing. This is new tech they’re working on for WhatsApp. The idea is to let AI do helpful things like summarise your unread messages or help you draft replies, but without Meta or WhatsApp being able to read the content of those messages.

Meta is being quite open about the security side, even publishing their threat model and inviting security researchers to poke holes in the architecture before it ever goes live. It’s a sign they know they need to get the privacy aspect right.

Overall, it’s a broad set of AI security announcements from Meta. They’re clearly trying to put serious muscle behind securing the AI they build, while also giving the wider tech community better tools to build safely and defend effectively.

See also: Alarming rise in AI-powered scams: Microsoft reveals $4B in thwarted fraud

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

Meta FAIR advances human-like AI with five major releases
https://www.artificialintelligence-news.com/news/meta-fair-advances-human-like-ai-five-major-releases/
Thu, 17 Apr 2025

The post Meta FAIR advances human-like AI with five major releases appeared first on AI News.

The Fundamental AI Research (FAIR) team at Meta has announced five projects advancing the company’s pursuit of advanced machine intelligence (AMI).

The latest releases from Meta focus heavily on enhancing AI perception – the ability for machines to process and interpret sensory information – alongside advancements in language modelling, robotics, and collaborative AI agents.

Meta stated its goal involves creating machines “that are able to acquire, process, and interpret sensory information about the world around us and are able to use this information to make decisions with human-like intelligence and speed.”

The five new releases represent diverse but interconnected efforts towards achieving this ambitious goal.

Perception Encoder: Meta sharpens the ‘vision’ of AI

Central to the new releases is the Perception Encoder, described as a large-scale vision encoder designed to excel across various image and video tasks.

Vision encoders function as the “eyes” for AI systems, allowing them to understand visual data.

Meta highlights the increasing challenge of building encoders that meet the demands of advanced AI, requiring capabilities that bridge vision and language, handle both images and videos effectively, and remain robust under challenging conditions, including potential adversarial attacks.

The ideal encoder, according to Meta, should recognise a wide array of concepts while distinguishing subtle details—citing examples like spotting “a stingray burrowed under the sea floor, identifying a tiny goldfinch in the background of an image, or catching a scampering agouti on a night vision wildlife camera.”

Meta claims the Perception Encoder achieves “exceptional performance on image and video zero-shot classification and retrieval, surpassing all existing open source and proprietary models for such tasks.”

Furthermore, its perceptual strengths reportedly translate well to language tasks. 

When aligned with a large language model (LLM), the encoder is said to outperform other vision encoders in areas like visual question answering (VQA), captioning, document understanding, and grounding (linking text to specific image regions). It also reportedly boosts performance on tasks traditionally difficult for LLMs, such as understanding spatial relationships (e.g., “if one object is behind another”) or camera movement relative to an object.
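Zero-shot classification with a vision encoder generally works by embedding the image and each candidate label’s text into one shared vector space and picking the nearest label, with no per-label training at all. A toy version with hand-made three-dimensional embeddings (real encoder outputs have hundreds or thousands of dimensions, and the vectors below are invented):

```python
from math import sqrt

def cosine(u, v):
    # Cosine similarity between two equal-length vectors.
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v)))

def zero_shot_classify(image_emb, label_embs: dict) -> str:
    # No classifier is trained for these labels; the shared embedding
    # space does all the work, which is what "zero-shot" means here.
    return max(label_embs, key=lambda name: cosine(image_emb, label_embs[name]))

# Hand-made 3-d embeddings standing in for real encoder outputs.
labels = {
    "stingray": (0.9, 0.1, 0.0),
    "goldfinch": (0.0, 0.9, 0.1),
}
print(zero_shot_classify((0.85, 0.2, 0.05), labels))   # stingray
```

Retrieval works the same way in reverse: rank a gallery of image embeddings by similarity to a text query’s embedding.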

“As Perception Encoder begins to be integrated into new applications, we’re excited to see how its advanced vision capabilities will enable even more capable AI systems,” Meta said.

Perception Language Model (PLM): Open research in vision-language

Complementing the encoder is the Perception Language Model (PLM), an open and reproducible vision-language model aimed at complex visual recognition tasks. 

PLM was trained using large-scale synthetic data combined with open vision-language datasets, explicitly without distilling knowledge from external proprietary models.

Recognising gaps in existing video understanding data, the FAIR team collected 2.5 million new, human-labelled samples focused on fine-grained video question answering and spatio-temporal captioning. Meta claims this forms the “largest dataset of its kind to date.”

PLM is offered in 1, 3, and 8 billion parameter versions, catering to academic research needs requiring transparency.

Alongside the models, Meta is releasing PLM-VideoBench, a new benchmark specifically designed to test capabilities often missed by existing benchmarks, namely “fine-grained activity understanding and spatiotemporally grounded reasoning.”

Meta hopes the combination of open models, the large dataset, and the challenging benchmark will empower the open-source community.

Meta Locate 3D: Giving robots situational awareness

Bridging the gap between language commands and physical action is Meta Locate 3D. This end-to-end model aims to allow robots to accurately localise objects in a 3D environment based on open-vocabulary natural language queries.

Meta Locate 3D processes 3D point clouds directly from RGB-D sensors (like those found on some robots or depth-sensing cameras). Given a textual prompt, such as “flower vase near TV console,” the system considers spatial relationships and context to pinpoint the correct object instance, distinguishing it from, say, a “vase on the table.”

The system comprises three main parts: a preprocessing step converting 2D features to 3D featurised point clouds; the 3D-JEPA encoder (a pretrained model creating a contextualised 3D world representation); and the Locate 3D decoder, which takes the 3D representation and the language query to output bounding boxes and masks for the specified objects.
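The input/output contract of those three stages can be sketched as a chain of functions. Every body below is a toy stand-in (the real stages are learned models, and a real decoder resolves spatial relations like “near TV console” rather than matching labels), but the data flowing between stages has the shape described above:

```python
def lift_to_3d(rgbd_frames):
    # Stage 1 stand-in: fuse 2D features with depth into a featurised point cloud.
    return [{"xyz": f["xyz"], "feature": f["colour"]} for f in rgbd_frames]

def encode_scene(points):
    # Stage 2 stand-in for the 3D-JEPA encoder: in reality this produces a
    # learned, contextualised representation of the whole scene.
    return {"points": points}

def locate(scene_objects, query: str):
    # Stage 3 stand-in for the Locate 3D decoder: language in, a 3D box out.
    q = query.lower()
    for obj in scene_objects:
        if obj["label"] in q:
            return obj["box"]   # (centre xyz, size xyz) in metres
    return None

scene = [
    {"label": "flower vase", "box": ((1.2, 0.4, 0.8), (0.2, 0.2, 0.3))},
    {"label": "tv console", "box": ((1.0, 0.0, 0.5), (1.5, 0.5, 0.4))},
]
print(locate(scene, "flower vase near the TV console"))   # box for the flower vase
```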

Alongside the model, Meta is releasing a substantial new dataset for object localisation based on referring expressions. It includes 130,000 language annotations across 1,346 scenes from the ARKitScenes, ScanNet, and ScanNet++ datasets, effectively doubling existing annotated data in this area.

Meta sees this technology as crucial for developing more capable robotic systems, including its own PARTNR robot project, enabling more natural human-robot interaction and collaboration.

Dynamic Byte Latent Transformer: Efficient and robust language modelling

Following research published in late 2024, Meta is now releasing the model weights for its 8-billion parameter Dynamic Byte Latent Transformer.

This architecture represents a shift away from traditional tokenisation-based language models, operating instead at the byte level. Meta claims this approach achieves comparable performance at scale while offering significant improvements in inference efficiency and robustness.

Traditional LLMs break text into ‘tokens’, which can struggle with misspellings, novel words, or adversarial inputs. Byte-level models process raw bytes, potentially offering greater resilience.
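The difference is easy to demonstrate: a word-level vocabulary collapses anything unseen into an unknown token and loses the spelling, while a byte-level representation maps every string losslessly to values 0-255. (The tokenisers below are toys for illustration; real LLMs use subword schemes such as BPE, and the byte model’s latent patching is not shown.)

```python
def word_tokenise(text: str, vocab: set) -> list:
    # Toy word-level tokeniser: anything outside the vocabulary collapses
    # to an <unk> token, and its spelling is lost.
    return [w if w in vocab else "<unk>" for w in text.split()]

def byte_tokenise(text: str) -> list:
    # Byte-level: every string maps losslessly to values 0-255, so
    # misspellings and novel words never become unknowns.
    return list(text.encode("utf-8"))

vocab = {"the", "cat", "sat"}
print(word_tokenise("the ckat sat", vocab))   # ['the', '<unk>', 'sat']
print(byte_tokenise("ckat"))                  # [99, 107, 97, 116]
```

Because the byte sequence is invertible, a byte-level model still sees that “ckat” is one letter away from “cat”, which is the robustness property Meta is pointing to.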

Meta reports that the Dynamic Byte Latent Transformer “outperforms tokeniser-based models across various tasks, with an average robustness advantage of +7 points (on perturbed HellaSwag), and reaching as high as +55 points on tasks from the CUTE token-understanding benchmark.”

By releasing the weights alongside the previously shared codebase, Meta encourages the research community to explore this alternative approach to language modelling.

Collaborative Reasoner: Meta advances socially-intelligent AI agents

The final release, Collaborative Reasoner, tackles the complex challenge of creating AI agents that can effectively collaborate with humans or other AIs.

Meta notes that human collaboration often yields superior results, and aims to imbue AI with similar capabilities for tasks like helping with homework or job interview preparation.

Such collaboration requires not just problem-solving but also social skills like communication, empathy, providing feedback, and understanding others’ mental states (theory-of-mind), often unfolding over multiple conversational turns.

Current LLM training and evaluation methods often neglect these social and collaborative aspects. Furthermore, collecting relevant conversational data is expensive and difficult.

Collaborative Reasoner provides a framework to evaluate and enhance these skills. It includes goal-oriented tasks requiring multi-step reasoning achieved through conversation between two agents. The framework tests abilities like disagreeing constructively, persuading a partner, and reaching a shared best solution.
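In outline, the self-collaboration setup Meta describes has one model alternating between roles across conversational turns until the agents converge. A skeletal version of that loop, with a hard-coded stub in place of the LLM (the real framework would prompt the same model differently for each role and condition it on the history):

```python
def stub_llm(role: str, history: list) -> str:
    # Stand-in for a single LLM prompted to play a named role; replies are
    # canned here so the control flow is visible.
    replies = {
        "solver": "I think the answer is 42.",
        "critic": "I checked your reasoning and agree: 42.",
    }
    return replies[role]

def collaborate(task: str, max_turns: int = 4) -> list:
    # Alternate two roles of one model until they agree or turns run out.
    history = [("task", task)]
    for turn in range(max_turns):
        role = "solver" if turn % 2 == 0 else "critic"
        message = stub_llm(role, history)
        history.append((role, message))
        if "agree" in message.lower():
            break
    return history
```

Transcripts generated this way, with a real model in place of the stub, are the synthetic interaction data the self-improvement technique trains on.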

Meta’s evaluations revealed that current models struggle to consistently leverage collaboration for better outcomes. To address this, they propose a self-improvement technique using synthetic interaction data where an LLM agent collaborates with itself.

Generating this data at scale is enabled by a new high-performance model serving engine called Matrix. Using this approach on mathematical, scientific, and social reasoning tasks reportedly yielded improvements of up to 29.4% compared to the standard ‘chain-of-thought’ performance of a single LLM.

By open-sourcing the data generation and modelling pipeline, Meta aims to foster further research into creating truly “social agents that can partner with humans and other agents.”

These five releases collectively underscore Meta’s continued heavy investment in fundamental AI research, particularly focusing on building blocks for machines that can perceive, understand, and interact with the world in more human-like ways. 

See also: Meta will train AI models using EU user data

Meta will train AI models using EU user data
https://www.artificialintelligence-news.com/news/meta-will-train-ai-models-using-eu-user-data/
Tue, 15 Apr 2025

The post Meta will train AI models using EU user data appeared first on AI News.

Meta has confirmed plans to utilise content shared by its adult users in the EU (European Union) to train its AI models.

The announcement follows the recent launch of Meta AI features in Europe and aims to enhance the capabilities and cultural relevance of its AI systems for the region’s diverse population.   

In a statement, Meta wrote: “Today, we’re announcing our plans to train AI at Meta using public content – like public posts and comments – shared by adults on our products in the EU.

“People’s interactions with Meta AI – like questions and queries – will also be used to train and improve our models.”

Starting this week, users of Meta’s platforms (including Facebook, Instagram, WhatsApp, and Messenger) within the EU will receive notifications explaining the data usage. These notifications, delivered both in-app and via email, will detail the types of public data involved and link to an objection form.

“We have made this objection form easy to find, read, and use, and we’ll honor all objection forms we have already received, as well as newly submitted ones,” Meta explained.

Meta explicitly clarified that certain data types remain off-limits for AI training purposes.

The company says it will not “use people’s private messages with friends and family” to train its generative AI models. Furthermore, public data associated with accounts belonging to users under the age of 18 in the EU will not be included in the training datasets.

Meta wants to build AI tools designed for EU users

Meta positions this initiative as a necessary step towards creating AI tools designed for EU users. Meta launched its AI chatbot functionality across its messaging apps in Europe last month, framing this data usage as the next phase in improving the service.

“We believe we have a responsibility to build AI that’s not just available to Europeans, but is actually built for them,” the company explained. 

“That means everything from dialects and colloquialisms, to hyper-local knowledge and the distinct ways different countries use humor and sarcasm on our products.”

This becomes increasingly pertinent as AI models evolve with multi-modal capabilities spanning text, voice, video, and imagery.   

Meta also situated its actions in the EU within the broader industry landscape, pointing out that training AI on user data is common practice.

“It’s important to note that the kind of AI training we’re doing is not unique to Meta, nor will it be unique to Europe,” the statement reads. 

“We’re following the example set by others including Google and OpenAI, both of which have already used data from European users to train their AI models.”

Meta further claimed its approach surpasses others in openness, stating, “We’re proud that our approach is more transparent than many of our industry counterparts.”   

Regarding regulatory compliance, Meta referenced prior engagement with regulators, including a delay initiated last year while awaiting clarification on legal requirements. The company also cited a favourable opinion from the European Data Protection Board (EDPB) in December 2024.

“We welcome the opinion provided by the EDPB in December, which affirmed that our original approach met our legal obligations,” wrote Meta.

Broader concerns over AI training data

While Meta presents its approach in the EU as transparent and compliant, the practice of using vast swathes of public user data from social media platforms to train large language models (LLMs) and generative AI continues to raise significant concerns among privacy advocates.

Firstly, the definition of “public” data can be contentious. Content shared publicly on platforms like Facebook or Instagram may not have been posted with the expectation that it would become raw material for training commercial AI systems capable of generating entirely new content or insights. Users might share personal anecdotes, opinions, or creative works publicly within their perceived community, without envisaging its large-scale, automated analysis and repurposing by the platform owner.

Secondly, the effectiveness and fairness of an “opt-out” system versus an “opt-in” system remain debatable. Placing the onus on users to actively object, often after receiving notifications buried amongst countless others, raises questions about informed consent. Many users may not see, understand, or act upon the notification, potentially leading to their data being used by default rather than explicit permission.

Thirdly, the issue of inherent bias looms large. Social media platforms reflect and sometimes amplify societal biases, including racism, sexism, and misinformation. AI models trained on this data risk learning, replicating, and even scaling these biases. While companies employ filtering and fine-tuning techniques, eradicating bias absorbed from billions of data points is an immense challenge. An AI trained on European public data needs careful curation to avoid perpetuating stereotypes or harmful generalisations about the very cultures it aims to understand.   

Furthermore, questions surrounding copyright and intellectual property persist. Public posts often contain original text, images, and videos created by users. Using this content to train commercial AI models, which may then generate competing content or derive value from it, enters murky legal territory regarding ownership and fair compensation—issues currently being contested in courts worldwide involving various AI developers.

Finally, while Meta highlights its transparency relative to competitors, the actual mechanisms of data selection, filtering, and its specific impact on model behaviour often remain opaque. Truly meaningful transparency would involve deeper insights into how specific data influences AI outputs and the safeguards in place to prevent misuse or unintended consequences.

The approach taken by Meta in the EU underscores the immense value technology giants place on user-generated content as fuel for the burgeoning AI economy. As these practices become more widespread, the debate surrounding data privacy, informed consent, algorithmic bias, and the ethical responsibilities of AI developers will undoubtedly intensify across Europe and beyond.

(Photo by Julio Lopez)

See also: Apple AI stresses privacy with synthetic and anonymised data

Big tech’s $320B AI spend defies efficiency race
https://www.artificialintelligence-news.com/news/big-techs-320b-ai-spend-defies-efficiency-race/
Wed, 12 Feb 2025

The post Big tech’s $320B AI spend defies efficiency race appeared first on AI News.

Tech giants are beginning an unprecedented $320 billion AI infrastructure spending spree in 2025, brushing aside concerns about more efficient AI models from challengers like DeepSeek. The massive investment push from Amazon, Microsoft, Google, and Meta signals the big players’ unwavering conviction that AI’s future demands bold infrastructure bets, despite (or perhaps because of) emerging efficiency breakthroughs.

The stakes are high, with collective capital expenditure jumping 30% from 2024’s $246 billion. While investors may question the necessity of such aggressive spending, tech leaders are doubling down on their belief that AI represents a transformative opportunity worth every dollar.

Amazon stands at the forefront of this AI arms race, according to a report by Business Insider. Amazon is flexing its financial muscle with a planned $100 billion capital expenditure for 2025 – a dramatic leap from its $77 billion last year. AWS chief Andy Jassy isn’t mincing words, calling AI a “once-in-a-lifetime business opportunity” that demands aggressive investment.

Microsoft’s Satya Nadella also has a bullish stance with his own hard numbers. Having earmarked $80 billion for AI infrastructure in 2025, Microsoft’s existing AI ventures are already delivering; Nadella has spoken of $13 billion annual revenue from AI and 175% year-over-year growth.

His perspective draws from economic wisdom: citing the Jevons paradox, he argues that making AI more efficient and accessible will spark an unprecedented surge in demand.
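The Jevons-paradox argument is simple arithmetic: if demand for AI usage is elastic enough, a fall in per-query cost raises total spending rather than lowering it. With made-up numbers (the efficiency factor and elasticity below are purely illustrative, not Microsoft’s figures):

```python
def total_spend(base_cost: float, efficiency_gain: float, elasticity: float) -> float:
    # Per-query cost falls by the efficiency factor; usage responds with
    # constant elasticity. Elasticity above 1 means total spend goes UP.
    new_cost = base_cost / efficiency_gain
    new_queries = efficiency_gain ** elasticity   # relative to a baseline of 1
    return new_cost * new_queries

# A 4x efficiency gain with demand elasticity 1.5: total spend doubles
# rather than falling, which is Nadella's point in a nutshell.
print(total_spend(1.0, 4.0, 1.5))   # 2.0
```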

Not to be outdone, Google parent Alphabet is pushing all its chips to the centre of the table, with a $75 billion infrastructure investment in 2025, dwarfing analysts’ expectations of $58 billion. Despite market jitters about cloud growth and AI strategy, CEO Sundar Pichai maintains Google’s product innovation engine is firing on all cylinders.

Meta’s approach is to pour $60-65 billion into capital spending in 2025 – up from $39 billion in 2024. The company is carving its own path by championing an “American standard” for open-source AI models, a strategy that has caught investor attention, particularly given Meta’s proven track record in monetising AI through sophisticated ad targeting.

The emergence of DeepSeek’s efficient AI models has sparked some debate in investment circles. Investing.com’s Jesse Cohen voices growing demands for concrete returns on existing AI investments. Yet Wedbush’s Dan Ives dismisses such concerns, likening DeepSeek to “the Temu of AI” and insisting the revolution is just beginning.

The market’s response to these bold plans tells a mixed story. Meta’s strategy has won investor applause, while Amazon and Google face more sceptical reactions, with stock drops of 5% and 8% respectively following spending announcements in earnings calls. Yet tech leaders remain undeterred, viewing robust AI infrastructure as non-negotiable for future success.

The intensity of infrastructure investment suggests a reality: technological breakthroughs in AI efficiency aren’t slowing the race – they’re accelerating it. As big tech pours unprecedented resources into AI development, it’s betting that increased efficiency will expand rather than contract the market for AI services.

The high-stakes gamble on AI’s future reveals a shift in how big tech views investment. Rather than waiting to see how efficiency improvements might reduce costs, the giants are scaling up aggressively, convinced that tomorrow’s AI landscape will demand more infrastructure, not less. In this view, DeepSeek’s breakthroughs aren’t a threat to their strategy – they’re validation of AI’s expanding potential.

The message from Silicon Valley is that the AI revolution demands massive infrastructure investment, and the giants of tech are all in. The question isn’t whether to invest in AI infrastructure, but whether $320 billion will be enough to meet the coming surge in demand.

See also: DeepSeek ban? China data transfer boosts security concerns

Meta accused of using pirated data for AI development
https://www.artificialintelligence-news.com/news/meta-accused-using-pirated-data-for-ai-development/
Fri, 10 Jan 2025

The post Meta accused of using pirated data for AI development appeared first on AI News.

Plaintiffs in the case of Kadrey et al. vs. Meta have filed a motion alleging the firm knowingly used copyrighted works in the development of its AI models.

The plaintiffs, which include author Richard Kadrey, filed their “Reply in Support of Plaintiffs’ Motion for Leave to File Third Amended Consolidated Complaint” in the United States District Court in the Northern District of California.

The filing accuses Meta of systematically torrenting and stripping copyright management information (CMI) from pirated datasets, including works from the notorious shadow library LibGen.

According to documents recently submitted to the court, evidence reveals highly incriminating practices involving Meta’s senior leaders. Plaintiffs allege that Meta CEO Mark Zuckerberg gave explicit approval for the use of the LibGen dataset, despite internal concerns raised by the company’s AI executives.

A December 2024 memo from internal Meta discussions acknowledged LibGen as “a dataset we know to be pirated,” with debates arising about the ethical and legal ramifications of using such materials. Documents also revealed that top engineers hesitated to torrent the datasets, citing concerns about using corporate laptops for potentially unlawful activities.

Additionally, internal communications suggest that after acquiring the LibGen dataset, Meta stripped CMI from the copyrighted works contained within—a practice that plaintiffs highlight as central to claims of copyright infringement.

According to the deposition of Michael Clark – a corporate representative for Meta – the company implemented scripts designed to remove any information identifying these works as copyrighted, including keywords like “copyright,” “acknowledgements,” or lines commonly used in such texts. Clark attested that this practice was done intentionally to prepare the dataset for training Meta’s Llama AI models.  
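The deposition does not include the scripts themselves, but a keyword-based cleaning pass of the kind Clark describes can be sketched in a few lines of Python. This is purely illustrative: the function name, keyword list, and logic are assumptions for explanation, not Meta's actual code.

```python
# Illustrative sketch of a keyword-based CMI-stripping pass of the kind
# described in the deposition. All names and the keyword list are hypothetical.
CMI_KEYWORDS = ("copyright", "acknowledgements", "all rights reserved")

def strip_cmi_lines(text: str) -> str:
    """Drop any line containing a copyright-management keyword."""
    kept = [
        line for line in text.splitlines()
        if not any(kw in line.lower() for kw in CMI_KEYWORDS)
    ]
    return "\n".join(kept)

sample = "Title Page\nCopyright 2020 Example House\nChapter One begins here."
print(strip_cmi_lines(sample))  # the copyright line is dropped
```

A pass like this would explain why plaintiffs focus on CMI removal: once such lines are gone, matching model outputs back to a specific copyrighted source becomes far harder.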

“Doesn’t feel right”

The allegations against Meta paint a portrait of a company knowingly partaking in a widespread piracy scheme facilitated through torrenting.

According to a string of emails included as exhibits, Meta engineers expressed concerns about the optics of torrenting pirated datasets from within corporate spaces. One engineer noted that “torrenting from a [Meta-owned] corporate laptop doesn’t feel right,” but despite hesitation, the rapid downloading and distribution – or “seeding” – of pirated data took place.

Legal counsel for the plaintiffs has stated that as late as January 2024, Meta had “already torrented (both downloaded and distributed) data from LibGen.” Moreover, records show that hundreds of related documents were initially obtained by Meta months prior but were withheld during early discovery processes. Plaintiffs argue this delayed disclosure amounts to bad-faith attempts by Meta to obstruct access to vital evidence.

During a deposition on 17 December 2024, Zuckerberg himself reportedly admitted that such activities would raise “lots of red flags” and stated it “seems like a bad thing,” though he provided limited direct responses regarding Meta’s broader AI training practices.

This case originally began as an intellectual property infringement action on behalf of authors and publishers claiming violations relating to AI use of their materials. However, the plaintiffs are now seeking to add two major claims to their suit: a violation of the Digital Millennium Copyright Act (DMCA) and a breach of the California Comprehensive Data Access and Fraud Act (CDAFA).  

Under the DMCA, the plaintiffs assert that Meta knowingly removed copyright protections to conceal unauthorised uses of copyrighted texts in its Llama models.

As cited in the complaint, Meta allegedly stripped CMI "to reduce the chance that the models will memorise this data"; plaintiffs argue that this removal of rights management indicators made discovering the infringement more difficult for copyright holders.

The CDAFA allegations involve Meta’s methods for obtaining the LibGen dataset, including allegedly engaging in torrenting to acquire copyrighted datasets without permission. Internal documentation shows Meta engineers openly discussed concerns that seeding and torrenting might prove to be “legally not ok.” 

Meta case may impact emerging legislation around AI development

At the heart of this expanding legal battle lies growing concern over the intersection of copyright law and AI.

Plaintiffs argue the stripping of copyright protections from textual datasets denies rightful compensation to copyright owners and allows Meta to build AI systems like Llama on the financial ruins of authors’ and publishers’ creative efforts.

The allegations come amid heightened global scrutiny of generative AI technologies. Companies like OpenAI, Google, and Meta have all come under fire over the use of copyrighted data to train their models. Courts across jurisdictions are currently grappling with the long-term impact of AI on rights management, with potentially landmark cases being decided in both the US and the UK.

In this particular case, US courts have shown increasing willingness to hear complaints about AI’s potential harm to long-established copyright law precedents. Plaintiffs, in their motion, referred to The Intercept Media v. OpenAI, a recent decision from New York in which a similar DMCA claim was allowed to proceed.

Meta continues to deny all allegations in the case and has yet to publicly respond to Zuckerberg’s reported deposition statements.

Whether or not plaintiffs succeed in these amendments, authors across the world face growing anxieties about how their creative works are handled within the context of AI. With copyright law struggling to keep pace with technological advances, this case underscores the need for clearer guidance at an international level to protect both creators and innovators.

For Meta, these claims also represent a reputational risk. As AI becomes the central focus of its future strategy, the allegations of reliance on pirated libraries are unlikely to help its ambitions of maintaining leadership in the field.  

The unfolding case of Kadrey et al. vs. Meta could have far-reaching ramifications for the development of AI models moving forward, potentially setting legal precedents in the US and beyond.

(Photo by Amy Syiek)

See also: UK wants to prove AI can modernise public services responsibly

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post Meta accused of using pirated data for AI development appeared first on AI News.

]]>
https://www.artificialintelligence-news.com/news/meta-accused-using-pirated-data-for-ai-development/feed/ 0
The risks behind the generative AI craze: Why caution is growing https://www.artificialintelligence-news.com/news/the-risks-behind-the-generative-ai-craze-why-caution-is-growing/ https://www.artificialintelligence-news.com/news/the-risks-behind-the-generative-ai-craze-why-caution-is-growing/#respond Wed, 09 Oct 2024 09:55:20 +0000 https://www.artificialintelligence-news.com/?p=16260 In the near future, Silicon Valley might look back at recent events as the point where the generative AI craze went too far. This past summer, investors questioned whether top AI stocks could sustain their sky-high valuations, given the lack of returns on massive AI spending. As Autumn approaches, major AI sectors—such as chips, LLMs, […]

The post The risks behind the generative AI craze: Why caution is growing appeared first on AI News.

]]>
In the near future, Silicon Valley might look back at recent events as the point where the generative AI craze went too far.

This past summer, investors questioned whether top AI stocks could sustain their sky-high valuations, given the lack of returns on massive AI spending. As autumn approaches, major AI sectors—such as chips, LLMs, and AI devices—have received renewed confidence. Nonetheless, the reasons for caution are mounting.

Cerebras: A chip contender with a major risk

Chip startup Cerebras is challenging Nvidia’s dominance by developing processors designed to power smarter LLMs. Nvidia, a major player in the AI boom, has seen its market cap skyrocket from $364 billion at the start of 2023 to over $3 trillion.

Cerebras, however, relies heavily on a single customer: the Abu Dhabi-based AI firm G42. In 2023, G42 accounted for 83% of Cerebras' revenue, and in the first half of 2024 that figure rose to 87%. While G42 is backed by major players like Microsoft and Silver Lake, Cerebras' dependence on a single customer poses a risk. Even though Cerebras has signed a deal with Saudi Aramco, its reliance on one client may concern investors as it seeks a $7-8 billion valuation for its IPO.

OpenAI’s record-breaking funding – but with strings attached

OpenAI made the news when it raised $6.6 billion at a $157 billion valuation, the largest investment round in Silicon Valley history. However, the company has urged its investors not to back competitors such as Anthropic and Elon Musk's xAI—an unusual request in the world of venture capital, where spreading bets across rival firms is common. Critics, including Gary Marcus, have described this approach as "running scared."

OpenAI’s backers also include “bubble chasers” such as SoftBank and Tiger Global, firms known for investing in companies at their peak, which frequently results in huge losses. With top executives such as CTO Mira Murati departing and predicted losses of $5 billion this year despite rising revenues, OpenAI faces significant challenges.

Meta’s big bet on AI wearables

Meta raised the stakes in the AI race by unveiling Orion, its augmented reality glasses. The wearables promise to integrate AI into daily life, and Nvidia CEO Jensen Huang has endorsed the product. However, at a production cost of $10,000 per unit, the price is a major obstacle.

Meta will need to reduce costs and overcome consumer hesitation, as previous attempts at AI-powered wearables—such as Snap's Spectacles, Google Glass, and the Humane AI Pin—have struggled to gain traction.

The road ahead

What’s next for AI? OpenAI must prove it can justify a $157 billion valuation while operating at a loss. Cerebras needs to reassure investors that relying on one client isn’t a dealbreaker. And Meta must convince consumers to adopt a completely new way of interacting with AI.

If these companies succeed, this moment could mark a turning point in the AI revolution. However, as tech history shows, high-stakes markets are rarely easy to win.

(Photo by Growtika)

See also: Ethical, trust and skill barriers hold back generative AI progress in EMEA


The post The risks behind the generative AI craze: Why caution is growing appeared first on AI News.

]]>
https://www.artificialintelligence-news.com/news/the-risks-behind-the-generative-ai-craze-why-caution-is-growing/feed/ 0
Tech industry giants urge EU to streamline AI regulations https://www.artificialintelligence-news.com/news/tech-industry-giants-urge-eu-streamline-ai-regulations/ https://www.artificialintelligence-news.com/news/tech-industry-giants-urge-eu-streamline-ai-regulations/#respond Thu, 19 Sep 2024 15:20:55 +0000 https://www.artificialintelligence-news.com/?p=16117 Meta has spearheaded an open letter calling for urgent reform of AI regulations in the EU. The letter, which garnered support from over 50 prominent companies – including Ericsson, SAP, and Spotify – was published as an advert in the Financial Times. The collective voice of these industry leaders highlights a pressing issue: Europe’s bureaucratic […]

The post Tech industry giants urge EU to streamline AI regulations appeared first on AI News.

]]>
Meta has spearheaded an open letter calling for urgent reform of AI regulations in the EU. The letter, which garnered support from over 50 prominent companies – including Ericsson, SAP, and Spotify – was published as an advert in the Financial Times.

The collective voice of these industry leaders highlights a pressing issue: Europe’s bureaucratic approach to AI regulation may be stifling innovation and causing the region to lag behind its global counterparts.

“Europe has become less competitive and less innovative compared to other regions and it now risks falling further behind in the AI era due to inconsistent regulatory decision making,” the letter states, painting a stark picture of the continent’s current position in the AI race.

The signatories emphasise two key areas of concern. Firstly, they point to the development of ‘open’ models, which are freely available for use, modification, and further development. These models are lauded for their potential to “multiply the benefits and spread social and economic opportunity” while simultaneously bolstering sovereignty and control.

Secondly, the letter underscores the importance of ‘multimodal’ models, which integrate text, images, and speech capabilities. The signatories argue that the leap from text-only to multimodal models is akin to “the difference between having only one sense and having all five of them”. They assert that these advanced models could significantly boost productivity, drive scientific research, and inject hundreds of billions of euros into the European economy.

However, the crux of the matter lies in the regulatory landscape. The letter expresses frustration with the uncertainty surrounding data usage for AI model training, stemming from interventions by European Data Protection Authorities. This ambiguity, they argue, could result in Large Language Models (LLMs) lacking crucial Europe-specific training data.

To address these challenges, the signatories call for “harmonised, consistent, quick and clear decisions under EU data regulations that enable European data to be used in AI training for the benefit of Europeans”. They stress the need for “decisive action” to unlock Europe’s potential for creativity, ingenuity, and entrepreneurship, which they believe is essential for the region’s prosperity and technological leadership.

While the letter acknowledges the importance of consumer protection, it also highlights the delicate balance regulators must strike to avoid hindering commercial progress. The European Commission’s approach to regulation has often been criticised for its perceived heavy-handedness, and this latest appeal from industry leaders adds weight to growing concerns about the region’s global competitiveness in the AI sector.

The pressure is rapidly mounting on European policymakers to create a regulatory environment that fosters innovation while maintaining appropriate safeguards. The coming months will likely see intensified dialogue between industry stakeholders and regulators as they grapple with these complex issues that will shape the future of AI development in Europe.

(Photo by Sara Kurfeß)

See also: SolarWinds: IT professionals want stronger AI regulation


The post Tech industry giants urge EU to streamline AI regulations appeared first on AI News.

]]>
https://www.artificialintelligence-news.com/news/tech-industry-giants-urge-eu-streamline-ai-regulations/feed/ 0
Use of AI for business governance must improve at the board level https://www.artificialintelligence-news.com/news/use-ai-business-governance-must-improve-at-board-level/ https://www.artificialintelligence-news.com/news/use-ai-business-governance-must-improve-at-board-level/#respond Tue, 20 Aug 2024 16:58:43 +0000 https://www.artificialintelligence-news.com/?p=15781 According to Carine Smith Ihenacho, chief governance and compliance officer of Norway’s $1.7 trillion sovereign wealth fund, boards need to be proficient with the use of AI and take control of its application in businesses to mitigate risks. The Norges Bank Investment Fund, which holds considerable shares in almost 9,000 companies worldwide — accounting for […]

The post Use of AI for business governance must improve at the board level appeared first on AI News.

]]>
According to Carine Smith Ihenacho, chief governance and compliance officer of Norway’s $1.7 trillion sovereign wealth fund, boards need to be proficient with the use of AI and take control of its application in businesses to mitigate risks.

The Norges Bank Investment Fund, which holds considerable shares in almost 9,000 companies worldwide — accounting for 1.5% of all listed stocks — has become a trailblazer in environmental, social, and corporate governance issues. About a year ago, the fund also provided its invested companies with recommendations on integrating responsible AI to improve economic outcomes.

Several companies still have a lot of ground to cover. While stating that "Overall, a lot of competence building needs to be done at the board level," Smith Ihenacho clarified that this does not mean every board should have an AI specialist. Instead, boards need to collectively understand how AI matters in their business and have policies in place.

“They should know: ‘What’s our policy on AI? Are we high risk or low risk? Where does AI meet customers? Are we transparent around it?’ It’s a big-picture question they should be able to answer,” Smith Ihenacho added, highlighting the breadth of understanding required at the board level.

The fund has shared its perspective on AI with the boards of its 60 largest portfolio companies, as reported in its 2023 responsible investment report. It is particularly focused on AI use in the healthcare sector due to its substantial impact on consumers, and is closely monitoring Big Tech companies that develop AI-based products.

In its engagement with tech firms, the fund emphasises the importance of robust governance structures to manage AI-related risks. “We focus more on the governance structure,” Smith Ihenacho explained. “Is the board involved? Do you have a proper policy on AI?”

The fund’s emphasis on AI governance is particularly relevant given that nine of the ten largest positions in its equity holdings are tech companies, led by names such as Microsoft, Apple, Amazon, and Meta Platforms. Investments in these companies contributed to a 12.5% growth in the fund’s stock portfolio in the first half of 2024, and its overall exposure to the tech sector increased from 21% to 26% over the past year, now comprising a quarter of the stock portfolio. This underscores the significant role that technology and AI play in the fund’s holdings.

Though the fund favours AI innovation for its potential to boost efficiency and productivity, Smith Ihenacho has emphasised the importance of responsible use. "It is fantastic what AI may be able to do to support innovation, efficiency, and productivity… we support that," she said, while stressing the need to manage the associated risks responsibly.

The fund’s adoption of AI governance aligns with rising global concerns about the ethical implications and potential dangers of these technologies. AI is increasingly utilised across various sectors, from finance to healthcare, and the need for governance frameworks has never been greater. The Norwegian sovereign wealth fund maintains a standard that requires companies to develop comprehensive AI policies at the board level, fostering the adoption of responsible AI practices across its large portfolio.

This initiative by one of the world’s largest investors could have far-reaching implications for corporate governance practices globally. As companies seek to harness the power of AI while navigating its complexities, the guidance provided by influential investors like Norges Bank Investment Fund may serve as a blueprint for responsible AI implementation and governance in the corporate world.

See also: X agrees to halt use of certain EU data for AI chatbot training


The post Use of AI for business governance must improve at the board level appeared first on AI News.

]]>
https://www.artificialintelligence-news.com/news/use-ai-business-governance-must-improve-at-board-level/feed/ 0
Meta’s AI strategy: Building for tomorrow, not immediate profits https://www.artificialintelligence-news.com/news/metas-ai-strategy-building-for-tomorrow-not-immediate-profits/ https://www.artificialintelligence-news.com/news/metas-ai-strategy-building-for-tomorrow-not-immediate-profits/#respond Thu, 01 Aug 2024 15:49:28 +0000 https://www.artificialintelligence-news.com/?p=15599 Meta has signalled a long-term AI strategy that prioritises substantial investments over immediate revenue generation. During the company’s Q2 earnings call, CEO and founder Mark Zuckerberg outlined Meta’s vision for the future and emphasised the need for extensive computational resources to support their AI initiatives. Zuckerberg revealed that Meta is “planning for the compute clusters […]

The post Meta’s AI strategy: Building for tomorrow, not immediate profits appeared first on AI News.

]]>
Meta has signalled a long-term AI strategy that prioritises substantial investments over immediate revenue generation. During the company’s Q2 earnings call, CEO and founder Mark Zuckerberg outlined Meta’s vision for the future and emphasised the need for extensive computational resources to support their AI initiatives.

Zuckerberg revealed that Meta is “planning for the compute clusters and data we’ll need for the next several years,” with a particular focus on their next AI model, Llama 4.

The company anticipates that training Llama 4 will require “almost 10x more” computing power than its predecessor, Llama 3, which is believed to have used 16,000 GPUs. Zuckerberg expressed his goal for Llama 4 “to be the most advanced [model] in the industry next year.”
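Taken at face value, those two figures imply the rough scale of the Llama 4 training cluster. The arithmetic below is a back-of-envelope estimate only; actual requirements depend on GPU generation, training duration, and efficiency gains.

```python
# Back-of-envelope: implied GPU count for Llama 4 training,
# using only the figures reported in the article.
llama3_gpus = 16_000        # Llama 3 is believed to have used 16,000 GPUs
scale_factor = 10           # "almost 10x more" compute for Llama 4
implied_llama4_gpus = llama3_gpus * scale_factor
print(implied_llama4_gpus)  # 160000
```

A cluster on the order of 160,000 GPUs helps explain the capital expenditure figures Meta discusses below.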

Meta’s financial commitment to AI development is substantial, with the company projecting capital expenditures between $37 and $40 billion for the full year, an increase of $2 billion from previous estimates. Investors were cautioned to expect “significant” increases in capital expenditures next year as well.

Despite these massive investments, Meta CFO Susan Li acknowledged that the company does not expect to generate revenue from generative AI this year.

Li emphasised the company’s strategy of building AI infrastructure with flexibility in mind, allowing for capacity adjustments based on optimal use cases. She explained that the hardware used for AI model training can also be utilised for inferencing and, with modifications, for ranking and recommendations.

Meta’s current AI efforts, dubbed “Core AI,” are already showing positive results in improving user engagement on Facebook and Instagram. Zuckerberg highlighted the success of a recently implemented unified video recommendation tool for Facebook, which has “already increased engagement on Facebook Reels more than our initial move from CPUs to GPUs did.”

Looking ahead, Zuckerberg envisions AI playing a crucial role in revolutionising Meta’s advertising business. He predicted that in the coming years, AI would take over ad copy creation and personalisation, potentially allowing advertisers to simply provide a business objective and budget, with Meta’s AI handling the rest.

While Meta’s AI investments are substantial, the company remains in a strong financial position. Q2 results showed revenue of $39 billion and net income of $13.5 billion, representing year-over-year increases of $7 billion and $5.7 billion, respectively. Meta’s user base continues to grow, with over 3.2 billion people using a Meta app daily, and its X competitor Threads is now approaching 200 million monthly active users.

As Meta charts its course in the AI landscape, the company’s strategy reflects a long-term vision that prioritises technological advancement and infrastructure development over immediate financial returns.

(Photo by Joshua Earle)

See also: NVIDIA and Meta CEOs: Every business will ‘have an AI’


The post Meta’s AI strategy: Building for tomorrow, not immediate profits appeared first on AI News.

]]>
https://www.artificialintelligence-news.com/news/metas-ai-strategy-building-for-tomorrow-not-immediate-profits/feed/ 0
NVIDIA and Meta CEOs: Every business will ‘have an AI’ https://www.artificialintelligence-news.com/news/nvidia-and-meta-ceo-every-business-will-have-an-ai/ https://www.artificialintelligence-news.com/news/nvidia-and-meta-ceo-every-business-will-have-an-ai/#respond Tue, 30 Jul 2024 15:30:43 +0000 https://www.artificialintelligence-news.com/?p=15557 In a fireside chat at SIGGRAPH 2024, NVIDIA founder and CEO Jensen Huang and Meta founder and CEO Mark Zuckerberg shared their insights on the potential of open source AI and virtual assistants. The conversation began with Zuckerberg announcing the launch of AI Studio, a new platform designed to democratise AI creation. This tool allows […]

The post NVIDIA and Meta CEOs: Every business will ‘have an AI’ appeared first on AI News.

]]>
In a fireside chat at SIGGRAPH 2024, NVIDIA founder and CEO Jensen Huang and Meta founder and CEO Mark Zuckerberg shared their insights on the potential of open source AI and virtual assistants.

The conversation began with Zuckerberg announcing the launch of AI Studio, a new platform designed to democratise AI creation. This tool allows users to create, share, and discover AI characters, potentially opening up AI development to millions of creators and small businesses.

Huang emphasised the ubiquity of AI in the future, stating, “Every single restaurant, every single website will probably, in the future, have these AIs …”

Zuckerberg concurred, adding, “…just like every business has an email address and a website and a social media account, I think, in the future, every business is going to have an AI.”

This vision aligns with NVIDIA’s recent developments showcased at SIGGRAPH. The company previewed “James,” an interactive digital human based on the NVIDIA ACE (Avatar Cloud Engine) reference design. James – a virtual assistant capable of providing contextually accurate responses – demonstrates the potential for businesses to create custom, hyperrealistic avatars for customer interactions.

The discussion highlighted Meta’s significant contributions to AI development. Huang praised Meta’s work, saying, “You guys have done amazing AI work,” and cited advancements in computer vision, language models, and real-time translation. He also acknowledged the widespread use of PyTorch, an open-source machine learning framework developed by Meta.

Both CEOs stressed the importance of open source in advancing AI. Meta has positioned itself as a leader in this field, implementing AI across its platforms and releasing open-source models like Llama 3.1. This latest model, with 405 billion parameters, required training on over 16,000 NVIDIA H100 GPUs, representing a substantial investment in resources.

Zuckerberg shared his vision for more integrated AI models, saying, “I kind of dream of one day like you can almost imagine all of Facebook or Instagram being like a single AI model that has unified all these different content types and systems together.” He believes that collaboration is crucial for further advancements in AI.

The conversation touched on the potential of AI to enhance human productivity. Huang described a future where AI could generate images in real-time as users type, allowing for fluid collaboration between humans and AI assistants. This concept is reflected in NVIDIA’s latest advancements to the NVIDIA Maxine AI platform, including Maxine 3D and Audio2Face-2D, which aim to create immersive telepresence experiences.

Looking ahead, Zuckerberg expressed enthusiasm about combining AI with augmented reality eyewear, mentioning Meta’s collaboration with eyewear maker Luxottica. He envisions this technology transforming education, entertainment, and work.

Huang discussed the evolution of AI interactions, moving beyond turn-based conversations to more complex, multi-option simulations. “Today’s AI is kind of turn-based. You say something, it says something back to you,” Huang explained. “In the future, AI could contemplate multiple options, or come up with a tree of options and simulate outcomes, making it much more powerful.”

The importance of this evolution is evident in the adoption of NVIDIA’s technologies by companies across industries. HTC, Looking Glass, Reply, and UneeQ are among the latest firms using NVIDIA ACE and Maxine for applications ranging from customer service agents to telepresence experiences in entertainment, retail, and hospitality.

As AI continues to evolve and integrate into various aspects of our lives, the insights shared by these industry leaders provide a glimpse into a future where AI assistants are as commonplace as websites and social media accounts.

The developments showcased at SIGGRAPH 2024 by both NVIDIA and other companies demonstrate that this future is rapidly approaching, with digital humans becoming increasingly sophisticated and capable of natural, engaging interactions.

See also: Amazon strives to outpace Nvidia with cheaper, faster AI chips


The post NVIDIA and Meta CEOs: Every business will ‘have an AI’ appeared first on AI News.

]]>
https://www.artificialintelligence-news.com/news/nvidia-and-meta-ceo-every-business-will-have-an-ai/feed/ 0
The exponential expenses of AI development https://www.artificialintelligence-news.com/news/the-exponential-expenses-of-ai-development/ https://www.artificialintelligence-news.com/news/the-exponential-expenses-of-ai-development/#respond Mon, 29 Jul 2024 12:40:13 +0000 https://www.artificialintelligence-news.com/?p=15538 Tech giants like Microsoft, Alphabet, and Meta are riding high on a wave of revenue from AI-driven cloud services, yet simultaneously drowning in the substantial costs of pushing AI’s boundaries. Recent financial reports paint a picture of a double-edged sword: on one side, impressive gains; on the other, staggering expenses.  This dichotomy has led Bloomberg to aptly […]

The post The exponential expenses of AI development appeared first on AI News.

]]>
Tech giants like Microsoft, Alphabet, and Meta are riding high on a wave of revenue from AI-driven cloud services, yet simultaneously drowning in the substantial costs of pushing AI’s boundaries. Recent financial reports paint a picture of a double-edged sword: on one side, impressive gains; on the other, staggering expenses. 

This dichotomy has led Bloomberg to aptly dub AI development a “huge money pit,” highlighting the complex economic reality behind today’s AI revolution. At the heart of this financial problem lies a relentless push for bigger, more sophisticated AI models. The quest for artificial general intelligence (AGI) has led companies to develop increasingly complex systems, exemplified by large language models like GPT-4. These models require vast computational power, driving up hardware costs to unprecedented levels.

To top it off, the demand for specialised AI chips, mainly graphics processing units (GPUs), has skyrocketed. Nvidia, the leading manufacturer in this space, has seen its market value soar as tech companies scramble to secure these essential components. Its H100 graphics chip, the gold standard for training AI models, has sold for an estimated $30,000 — with some resellers offering them for multiple times that amount. 

The global chip shortage has only exacerbated this issue, with some firms waiting months to acquire the necessary hardware. Meta CEO Mark Zuckerberg previously said that his company planned to acquire 350,000 H100 chips by the end of this year to support its AI research efforts. Even with a bulk-buying discount, that quickly adds up to billions of dollars.
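To put that figure in perspective, a back-of-envelope calculation using the estimated $30,000 unit price quoted above (Meta's actual per-unit price is not public, so the total is only indicative):

```python
# Rough cost of Meta's planned H100 purchase, using the
# estimated $30,000 street price per chip quoted above.
chips = 350_000
unit_price_usd = 30_000  # estimate; Meta's real price is not public

total_usd = chips * unit_price_usd
print(f"${total_usd / 1e9:.1f} billion")  # → $10.5 billion
```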

At the same time, the push for more advanced AI has sparked an arms race in chip design. Companies like Google and Amazon are investing heavily in developing their own AI-specific processors, aiming to gain a competitive edge and reduce reliance on third-party suppliers. This trend towards custom silicon adds another layer of complexity and cost to the AI development process.

But the hardware challenge extends beyond just procuring chips. The scale of modern AI models necessitates massive data centres, which come with their own technological hurdles. These facilities must be designed to handle extreme computational loads while managing heat dissipation and energy consumption efficiently. As models grow larger, so do the power requirements, significantly increasing operational costs and environmental impact.

In a podcast interview in early April, Dario Amodei, the chief executive officer of OpenAI-rival Anthropic, said the current crop of AI models on the market cost around $100 million to train. “The models that are in training now and that will come out at various times later this year or early next year are closer in cost to $1 billion,” he said. “And then I think in 2025 and 2026, we’ll get more towards $5 or $10 billion.”
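Amodei's estimates imply roughly an order-of-magnitude jump with each model generation; a quick calculation makes the trajectory explicit (taking $10 billion as the upper end of his 2025-2026 range):

```python
# Training-cost trajectory implied by Amodei's estimates (USD).
costs = [
    ("current models", 100e6),
    ("late 2024 / early 2025", 1e9),
    ("2025-2026, upper end", 10e9),
]

# Each generation is roughly 10x the cost of the previous one.
for (label_a, a), (label_b, b) in zip(costs, costs[1:]):
    print(f"{label_a} -> {label_b}: {b / a:.0f}x")
```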

Then there is data, the lifeblood of AI systems, which presents its own technological challenges. The need for vast, high-quality datasets has led companies to invest heavily in data collection, cleaning, and annotation technologies. Some firms are developing sophisticated synthetic data generation tools to supplement real-world data, further driving up research and development costs.

The rapid pace of AI innovation also means that infrastructure and tools quickly become obsolete. Companies must continuously upgrade their systems and retrain their models to stay competitive, creating a constant cycle of investment and obsolescence.

“On April 25, Microsoft said it spent $14 billion on capital expenditures in the most recent quarter and expects those costs to ‘increase materially’, driven partly by AI infrastructure investments. That was a 79% increase from the year-earlier quarter. Alphabet said it spent $12 billion during the quarter, a 91% increase from a year earlier, and expects the rest of the year to be ‘at or above’ that level as it focuses on AI opportunities,” the article by Bloomberg reads.

Bloomberg also noted that Meta, meanwhile, raised its estimates for investments for the year and now believes capital expenditures will be $35 billion to $40 billion, which would be a 42% increase at the high end of the range. “It cited aggressive investment in AI research and product development,” Bloomberg wrote.

Interestingly, Bloomberg’s article also points out that despite these enormous costs, tech giants are proving that AI can be a real revenue driver. Microsoft and Alphabet reported significant growth in their cloud businesses, mainly attributed to increased demand for AI services. This suggests that while the initial investment in AI technology is staggering, the potential returns are compelling enough to justify the expense.

However, the high costs of AI development raise concerns about market concentration. As noted in the article, the expenses associated with cutting-edge AI research may limit innovation to a handful of well-funded companies, potentially stifling competition and diversity in the field. Looking ahead, the industry is focusing on developing more efficient AI technologies to address these cost challenges. 

Research into techniques like few-shot learning, transfer learning, and more energy-efficient model architectures aims to reduce the computational resources required for AI development and deployment. Moreover, the push towards edge AI – running AI models on local devices rather than in the cloud – could help distribute computational loads and reduce the strain on centralised data centres. 

This shift, however, requires its own set of technological innovations in chip design and software optimisation.

Overall, it is clear that the future of AI will be shaped not just by breakthroughs in algorithms and model design but also by our ability to overcome the immense technological and financial hurdles that come with scaling AI systems. Companies that can navigate these challenges effectively will likely emerge as the leaders in the next phase of the AI revolution.
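As a toy illustration of why the transfer learning mentioned above cuts training costs, compare the number of trainable parameters when fine-tuning a whole network versus training only a small task-specific head on a frozen backbone (all figures here are made up for illustration, not drawn from any real model):

```python
# Toy parameter-count comparison: full fine-tuning updates every
# weight, while transfer learning freezes the pretrained backbone
# and trains only a small head. Figures are illustrative.
backbone_params = 7_000_000_000  # frozen pretrained backbone
head_params = 4_000_000          # small task-specific head

full_finetune = backbone_params + head_params  # everything trainable
transfer = head_params                         # backbone frozen

print(f"full fine-tune : {full_finetune:,} trainable parameters")
print(f"transfer       : {transfer:,} trainable parameters")
print(f"reduction      : {full_finetune // transfer}x")
```

The point is not the exact ratio but the shape of the saving: the optimiser only has to compute and store gradients for the head, which is why such techniques reduce both compute and memory requirements.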

(Image by Igor Omilaev)

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post The exponential expenses of AI development appeared first on AI News.

]]>
https://www.artificialintelligence-news.com/news/the-exponential-expenses-of-ai-development/feed/ 0
Meta advances open source AI with ‘frontier-level’ Llama 3.1 https://www.artificialintelligence-news.com/news/meta-advances-open-source-ai-frontier-level-llama-3-1/ https://www.artificialintelligence-news.com/news/meta-advances-open-source-ai-frontier-level-llama-3-1/#respond Wed, 24 Jul 2024 12:39:45 +0000 https://www.artificialintelligence-news.com/?p=15518 Meta has unveiled Llama 3.1, marking a significant milestone in the company’s commitment to open source AI. This release, which Meta CEO Mark Zuckerberg calls “the first frontier-level open source AI model,” aims to challenge the dominance of closed AI systems and democratise access to advanced AI technology. The Llama 3.1 release includes three models: […]

The post Meta advances open source AI with ‘frontier-level’ Llama 3.1 appeared first on AI News.

]]>
Meta has unveiled Llama 3.1, marking a significant milestone in the company’s commitment to open source AI. This release, which Meta CEO Mark Zuckerberg calls “the first frontier-level open source AI model,” aims to challenge the dominance of closed AI systems and democratise access to advanced AI technology.

The Llama 3.1 release includes three models: 405B, 70B, and 8B. Zuckerberg asserts that the 405B model competes with the most advanced closed models while offering better cost-efficiency.

“Starting next year, we expect future Llama models to become the most advanced in the industry,” Zuckerberg predicts.

Zuckerberg draws parallels between the evolution of AI and the historical shift from closed Unix systems to open source Linux. He argues that open source AI will follow a similar trajectory, eventually becoming the industry standard due to its adaptability, cost-effectiveness, and broad ecosystem support.

Zuckerberg emphasises several key advantages of open source AI:

  • Customisation: Organisations can train and fine-tune models with their specific data.
  • Independence: Avoids lock-in to closed vendors or specific cloud providers.
  • Data security: Allows for local model deployment, enhancing data protection.
  • Cost-efficiency: Llama 3.1 405B can be run at roughly half the cost of closed models like GPT-4.
  • Ecosystem growth: Encourages innovation and collaboration across the industry.

Addressing safety concerns, Zuckerberg argues that open source AI is inherently safer due to increased transparency and scrutiny. He states, “Open source should be significantly safer since the systems are more transparent and can be widely scrutinised.”

To support the open source AI ecosystem, Meta is partnering with major tech companies like Amazon, Databricks, and NVIDIA to provide development services. The models will be available across major cloud platforms, with companies such as Scale.AI, Dell, and Deloitte ready to assist in enterprise adoption.

“Open source will ensure that more people around the world have access to the benefits and opportunities of AI, that power isn’t concentrated in the hands of a small number of companies, and that the technology can be deployed more evenly and safely across society,” Zuckerberg claims.

The CEO views this release as a turning point, predicting that most developers will shift towards primarily using open source AI models. He invites the tech community to join Meta in “this journey to bring the benefits of AI to everyone in the world.”

The Llama 3.1 models are now accessible at llama.meta.com.

(Photo by Dima Solomin)

See also: Meta joins Apple in withholding AI models from EU users

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post Meta advances open source AI with ‘frontier-level’ Llama 3.1 appeared first on AI News.

]]>
https://www.artificialintelligence-news.com/news/meta-advances-open-source-ai-frontier-level-llama-3-1/feed/ 0