meta Archives - AI News
https://www.artificialintelligence-news.com/news/tag/meta/
Artificial Intelligence News, Thu, 24 Apr 2025

Meta FAIR advances human-like AI with five major releases
https://www.artificialintelligence-news.com/news/meta-fair-advances-human-like-ai-five-major-releases/
Thu, 17 Apr 2025

The post Meta FAIR advances human-like AI with five major releases appeared first on AI News.

The Fundamental AI Research (FAIR) team at Meta has announced five projects advancing the company’s pursuit of advanced machine intelligence (AMI).

The latest releases from Meta focus heavily on enhancing AI perception – the ability for machines to process and interpret sensory information – alongside advancements in language modelling, robotics, and collaborative AI agents.

Meta stated its goal involves creating machines “that are able to acquire, process, and interpret sensory information about the world around us and are able to use this information to make decisions with human-like intelligence and speed.”

The five new releases represent diverse but interconnected efforts towards achieving this ambitious goal.

Perception Encoder: Meta sharpens the ‘vision’ of AI

Central to the new releases is the Perception Encoder, described as a large-scale vision encoder designed to excel across various image and video tasks.

Vision encoders function as the “eyes” for AI systems, allowing them to understand visual data.

Meta highlights the increasing challenge of building encoders that meet the demands of advanced AI, requiring capabilities that bridge vision and language, handle both images and videos effectively, and remain robust under challenging conditions, including potential adversarial attacks.

The ideal encoder, according to Meta, should recognise a wide array of concepts while distinguishing subtle details. The company cites examples like spotting “a stingray burrowed under the sea floor, identifying a tiny goldfinch in the background of an image, or catching a scampering agouti on a night vision wildlife camera.”

Meta claims the Perception Encoder achieves “exceptional performance on image and video zero-shot classification and retrieval, surpassing all existing open source and proprietary models for such tasks.”
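Zero-shot classification with an encoder of this kind typically works by embedding the image and each candidate text label into a shared space, then picking the label whose embedding is most similar to the image's. Below is a minimal sketch of that idea using toy hand-written vectors in place of real encoder outputs; none of the names or values come from Meta's released API.

```python
import math

def cosine_similarity(a, b):
    # Cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def zero_shot_classify(image_embedding, label_embeddings):
    # Pick the label whose text embedding lies closest to the image embedding.
    scores = {label: cosine_similarity(image_embedding, emb)
              for label, emb in label_embeddings.items()}
    return max(scores, key=scores.get)

# Toy embeddings standing in for encoder outputs.
image_emb = [0.9, 0.1, 0.2]
labels = {
    "goldfinch": [0.8, 0.2, 0.1],
    "stingray": [0.1, 0.9, 0.3],
    "agouti": [0.2, 0.1, 0.9],
}
print(zero_shot_classify(image_emb, labels))  # → goldfinch
```

Because no label-specific training is needed, new categories can be added at inference time simply by embedding new text, which is what makes "zero-shot" classification possible.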

Furthermore, its perceptual strengths reportedly translate well to language tasks. 

When aligned with a large language model (LLM), the encoder is said to outperform other vision encoders in areas like visual question answering (VQA), captioning, document understanding, and grounding (linking text to specific image regions). It also reportedly boosts performance on tasks traditionally difficult for LLMs, such as understanding spatial relationships (e.g., “if one object is behind another”) or camera movement relative to an object.

“As Perception Encoder begins to be integrated into new applications, we’re excited to see how its advanced vision capabilities will enable even more capable AI systems,” Meta said.

Perception Language Model (PLM): Open research in vision-language

Complementing the encoder is the Perception Language Model (PLM), an open and reproducible vision-language model aimed at complex visual recognition tasks. 

PLM was trained using large-scale synthetic data combined with open vision-language datasets, explicitly without distilling knowledge from external proprietary models.

Recognising gaps in existing video understanding data, the FAIR team collected 2.5 million new, human-labelled samples focused on fine-grained video question answering and spatio-temporal captioning. Meta claims this forms the “largest dataset of its kind to date.”

PLM is offered in 1, 3, and 8 billion parameter versions, catering to academic researchers whose work requires fully transparent, reproducible models.

Alongside the models, Meta is releasing PLM-VideoBench, a new benchmark specifically designed to test capabilities often missed by existing benchmarks, namely “fine-grained activity understanding and spatiotemporally grounded reasoning.”

Meta hopes the combination of open models, the large dataset, and the challenging benchmark will empower the open-source community.

Meta Locate 3D: Giving robots situational awareness

Bridging the gap between language commands and physical action is Meta Locate 3D. This end-to-end model aims to allow robots to accurately localise objects in a 3D environment based on open-vocabulary natural language queries.

Meta Locate 3D processes 3D point clouds directly from RGB-D sensors (like those found on some robots or depth-sensing cameras). Given a textual prompt, such as “flower vase near TV console,” the system considers spatial relationships and context to pinpoint the correct object instance, distinguishing it from, say, a “vase on the table.”

The system comprises three main parts: a preprocessing step converting 2D features to 3D featurised point clouds; the 3D-JEPA encoder (a pretrained model creating a contextualised 3D world representation); and the Locate 3D decoder, which takes the 3D representation and the language query to output bounding boxes and masks for the specified objects.
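The three-stage flow described above can be sketched as a simple pipeline. Everything in this snippet is an illustrative stand-in: the function names, data shapes, and matching logic are assumptions for exposition, not Meta's released interface.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    label: str
    box: tuple  # (x, y, z, width, height, depth) in scene coordinates

def lift_to_3d(rgbd_frames):
    # Stage 1 (stand-in): convert 2D image features from RGB-D frames
    # into a featurised 3D point cloud.
    return [{"xyz": (1.0, 0.5, 2.0), "feature": "flower vase"},
            {"xyz": (4.0, 0.7, 1.0), "feature": "vase on the table"}]

def encode_scene(point_cloud):
    # Stage 2 (stand-in): a pretrained encoder (3D-JEPA, in Meta's
    # description) builds a contextualised representation of the scene.
    return {"points": point_cloud}

def decode_query(scene, query):
    # Stage 3 (stand-in): ground the open-vocabulary query in the scene
    # and emit a box for the best-matching object instance.
    best = max(scene["points"],
               key=lambda p: len(set(p["feature"].split()) & set(query.split())))
    return Detection(best["feature"], best["xyz"] + (0.3, 0.4, 0.3))

scene = encode_scene(lift_to_3d(rgbd_frames=[]))
result = decode_query(scene, "flower vase near TV console")
print(result.label)  # → flower vase
```

The key design point is the separation of concerns: the encoder builds one query-independent scene representation, so many language queries can be grounded against the same scene without reprocessing the sensor data.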

Alongside the model, Meta is releasing a substantial new dataset for object localisation based on referring expressions. It includes 130,000 language annotations across 1,346 scenes from the ARKitScenes, ScanNet, and ScanNet++ datasets, effectively doubling existing annotated data in this area.

Meta sees this technology as crucial for developing more capable robotic systems, including its own PARTNR robot project, enabling more natural human-robot interaction and collaboration.

Dynamic Byte Latent Transformer: Efficient and robust language modelling

Following research published in late 2024, Meta is now releasing the model weights for its 8-billion parameter Dynamic Byte Latent Transformer.

This architecture represents a shift away from traditional tokenisation-based language models, operating instead at the byte level. Meta claims this approach achieves comparable performance at scale while offering significant improvements in inference efficiency and robustness.

Traditional LLMs break text into ‘tokens’, which can struggle with misspellings, novel words, or adversarial inputs. Byte-level models process raw bytes, potentially offering greater resilience.
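The robustness argument is easy to see with a toy comparison: a fixed token vocabulary maps unseen or misspelled words to an unknown token, losing information, while a byte-level view always yields a well-defined, lossless sequence. The tiny word-level vocabulary below is purely illustrative.

```python
def word_tokenise(text, vocab):
    # A naive word-level tokeniser: anything outside the vocabulary
    # collapses to a single <unk> token, discarding the original spelling.
    return [w if w in vocab else "<unk>" for w in text.split()]

def byte_tokenise(text):
    # Byte-level "tokenisation": every string, however novel or
    # misspelled, maps losslessly to a sequence of values 0-255.
    return list(text.encode("utf-8"))

vocab = {"the", "model", "understands", "language"}
print(word_tokenise("the modle understands language", vocab))
# → ['the', '<unk>', 'understands', 'language']
print(byte_tokenise("modle"))
# → [109, 111, 100, 108, 101]
```

Real subword tokenisers (such as BPE) degrade more gracefully than this word-level strawman, but the underlying trade-off is the same: byte-level models never face out-of-vocabulary input, at the cost of longer sequences.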

Meta reports that the Dynamic Byte Latent Transformer “outperforms tokeniser-based models across various tasks, with an average robustness advantage of +7 points (on perturbed HellaSwag), and reaching as high as +55 points on tasks from the CUTE token-understanding benchmark.”

By releasing the weights alongside the previously shared codebase, Meta encourages the research community to explore this alternative approach to language modelling.

Collaborative Reasoner: Meta advances socially-intelligent AI agents

The final release, Collaborative Reasoner, tackles the complex challenge of creating AI agents that can effectively collaborate with humans or other AIs.

Meta notes that human collaboration often yields superior results, and aims to imbue AI with similar capabilities for tasks like helping with homework or job interview preparation.

Such collaboration requires not just problem-solving but also social skills like communication, empathy, providing feedback, and understanding others’ mental states (theory-of-mind), often unfolding over multiple conversational turns.

Current LLM training and evaluation methods often neglect these social and collaborative aspects. Furthermore, collecting relevant conversational data is expensive and difficult.

Collaborative Reasoner provides a framework to evaluate and enhance these skills. It includes goal-oriented tasks requiring multi-step reasoning achieved through conversation between two agents. The framework tests abilities like disagreeing constructively, persuading a partner, and reaching a shared best solution.

Meta’s evaluations revealed that current models struggle to consistently leverage collaboration for better outcomes. To address this, they propose a self-improvement technique using synthetic interaction data where an LLM agent collaborates with itself.

Generating this data at scale is enabled by a new high-performance model serving engine called Matrix. Using this approach on mathematical, scientific, and social reasoning tasks reportedly yielded improvements of up to 29.4% compared to the standard ‘chain-of-thought’ performance of a single LLM.

By open-sourcing the data generation and modelling pipeline, Meta aims to foster further research into creating truly “social agents that can partner with humans and other agents.”

These five releases collectively underscore Meta’s continued heavy investment in fundamental AI research, particularly focusing on building blocks for machines that can perceive, understand, and interact with the world in more human-like ways. 

See also: Meta will train AI models using EU user data

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

Meta will train AI models using EU user data
https://www.artificialintelligence-news.com/news/meta-will-train-ai-models-using-eu-user-data/
Tue, 15 Apr 2025

The post Meta will train AI models using EU user data appeared first on AI News.

Meta has confirmed plans to utilise content shared by its adult users in the EU (European Union) to train its AI models.

The announcement follows the recent launch of Meta AI features in Europe and aims to enhance the capabilities and cultural relevance of its AI systems for the region’s diverse population.   

In a statement, Meta wrote: “Today, we’re announcing our plans to train AI at Meta using public content – like public posts and comments – shared by adults on our products in the EU.

“People’s interactions with Meta AI – like questions and queries – will also be used to train and improve our models.”

Starting this week, users of Meta’s platforms (including Facebook, Instagram, WhatsApp, and Messenger) within the EU will receive notifications explaining the data usage. These notifications, delivered both in-app and via email, will detail the types of public data involved and link to an objection form.

“We have made this objection form easy to find, read, and use, and we’ll honor all objection forms we have already received, as well as newly submitted ones,” Meta explained.

Meta explicitly clarified that certain data types remain off-limits for AI training purposes.

The company says it will not “use people’s private messages with friends and family” to train its generative AI models. Furthermore, public data associated with accounts belonging to users under the age of 18 in the EU will not be included in the training datasets.

Meta wants to build AI tools designed for EU users

Meta positions this initiative as a necessary step towards creating AI tools designed for EU users. Meta launched its AI chatbot functionality across its messaging apps in Europe last month, framing this data usage as the next phase in improving the service.

“We believe we have a responsibility to build AI that’s not just available to Europeans, but is actually built for them,” the company explained. 

“That means everything from dialects and colloquialisms, to hyper-local knowledge and the distinct ways different countries use humor and sarcasm on our products.”

This becomes increasingly pertinent as AI models evolve with multi-modal capabilities spanning text, voice, video, and imagery.   

Meta also situated its actions in the EU within the broader industry landscape, pointing out that training AI on user data is common practice.

“It’s important to note that the kind of AI training we’re doing is not unique to Meta, nor will it be unique to Europe,” the statement reads. 

“We’re following the example set by others including Google and OpenAI, both of which have already used data from European users to train their AI models.”

Meta further claimed its approach surpasses others in openness, stating, “We’re proud that our approach is more transparent than many of our industry counterparts.”   

Regarding regulatory compliance, Meta referenced prior engagement with regulators, including a delay initiated last year while awaiting clarification on legal requirements. The company also cited a favourable opinion from the European Data Protection Board (EDPB) in December 2024.

“We welcome the opinion provided by the EDPB in December, which affirmed that our original approach met our legal obligations,” wrote Meta.

Broader concerns over AI training data

While Meta presents its approach in the EU as transparent and compliant, the practice of using vast swathes of public user data from social media platforms to train large language models (LLMs) and generative AI continues to raise significant concerns among privacy advocates.

Firstly, the definition of “public” data can be contentious. Content shared publicly on platforms like Facebook or Instagram may not have been posted with the expectation that it would become raw material for training commercial AI systems capable of generating entirely new content or insights. Users might share personal anecdotes, opinions, or creative works publicly within their perceived community, without envisaging its large-scale, automated analysis and repurposing by the platform owner.

Secondly, the effectiveness and fairness of an “opt-out” system versus an “opt-in” system remain debatable. Placing the onus on users to actively object, often after receiving notifications buried amongst countless others, raises questions about informed consent. Many users may not see, understand, or act upon the notification, potentially leading to their data being used by default rather than explicit permission.

Thirdly, the issue of inherent bias looms large. Social media platforms reflect and sometimes amplify societal biases, including racism, sexism, and misinformation. AI models trained on this data risk learning, replicating, and even scaling these biases. While companies employ filtering and fine-tuning techniques, eradicating bias absorbed from billions of data points is an immense challenge. An AI trained on European public data needs careful curation to avoid perpetuating stereotypes or harmful generalisations about the very cultures it aims to understand.   

Furthermore, questions surrounding copyright and intellectual property persist. Public posts often contain original text, images, and videos created by users. Using this content to train commercial AI models, which may then generate competing content or derive value from it, enters murky legal territory regarding ownership and fair compensation—issues currently being contested in courts worldwide involving various AI developers.

Finally, while Meta highlights its transparency relative to competitors, the actual mechanisms of data selection, filtering, and its specific impact on model behaviour often remain opaque. Truly meaningful transparency would involve deeper insights into how specific data influences AI outputs and the safeguards in place to prevent misuse or unintended consequences.

The approach taken by Meta in the EU underscores the immense value technology giants place on user-generated content as fuel for the burgeoning AI economy. As these practices become more widespread, the debate surrounding data privacy, informed consent, algorithmic bias, and the ethical responsibilities of AI developers will undoubtedly intensify across Europe and beyond.

(Photo by Julio Lopez)

See also: Apple AI stresses privacy with synthetic and anonymised data


Meta accused of using pirated data for AI development
https://www.artificialintelligence-news.com/news/meta-accused-using-pirated-data-for-ai-development/
Fri, 10 Jan 2025

The post Meta accused of using pirated data for AI development appeared first on AI News.

Plaintiffs in the case of Kadrey et al. vs. Meta have filed a motion alleging the firm knowingly used copyrighted works in the development of its AI models.

The plaintiffs, which include author Richard Kadrey, filed their “Reply in Support of Plaintiffs’ Motion for Leave to File Third Amended Consolidated Complaint” in the United States District Court in the Northern District of California.

The filing accuses Meta of systematically torrenting and stripping copyright management information (CMI) from pirated datasets, including works from the notorious shadow library LibGen.

According to documents recently submitted to the court, evidence reveals highly incriminating practices involving Meta’s senior leaders. Plaintiffs allege that Meta CEO Mark Zuckerberg gave explicit approval for the use of the LibGen dataset, despite internal concerns raised by the company’s AI executives.

A December 2024 memo from internal Meta discussions acknowledged LibGen as “a dataset we know to be pirated,” with debates arising about the ethical and legal ramifications of using such materials. Documents also revealed that top engineers hesitated to torrent the datasets, citing concerns about using corporate laptops for potentially unlawful activities.

Additionally, internal communications suggest that after acquiring the LibGen dataset, Meta stripped CMI from the copyrighted works contained within—a practice that plaintiffs highlight as central to claims of copyright infringement.

According to the deposition of Michael Clark – a corporate representative for Meta – the company implemented scripts designed to remove any information identifying these works as copyrighted, including keywords like “copyright,” “acknowledgements,” or lines commonly used in such texts. Clark attested that this practice was done intentionally to prepare the dataset for training Meta’s Llama AI models.  

“Doesn’t feel right”

The allegations against Meta paint a portrait of a company knowingly partaking in a widespread piracy scheme facilitated through torrenting.

According to a string of emails included as exhibits, Meta engineers expressed concerns about the optics of torrenting pirated datasets from within corporate spaces. One engineer noted that “torrenting from a [Meta-owned] corporate laptop doesn’t feel right,” but despite hesitation, the rapid downloading and distribution – or “seeding” – of pirated data took place.

Legal counsel for the plaintiffs has stated that as late as January 2024, Meta had “already torrented (both downloaded and distributed) data from LibGen.” Moreover, records show that hundreds of related documents were initially obtained by Meta months prior but were withheld during early discovery processes. Plaintiffs argue this delayed disclosure amounts to bad-faith attempts by Meta to obstruct access to vital evidence.

During a deposition on 17 December 2024, Zuckerberg himself reportedly admitted that such activities would raise “lots of red flags” and stated it “seems like a bad thing,” though he provided limited direct responses regarding Meta’s broader AI training practices.

This case originally began as an intellectual property infringement action on behalf of authors and publishers claiming violations relating to AI use of their materials. However, the plaintiffs are now seeking to add two major claims to their suit: a violation of the Digital Millennium Copyright Act (DMCA) and a breach of the California Comprehensive Data Access and Fraud Act (CDAFA).  

Under the DMCA, the plaintiffs assert that Meta knowingly removed copyright protections to conceal unauthorised uses of copyrighted texts in its Llama models.

As cited in the complaint, Meta allegedly stripped CMI “to reduce the chance that the models will memorise this data” and that this removal of rights management indicators made discovering the infringement more difficult for copyright holders. 

The CDAFA allegations involve Meta’s methods for obtaining the LibGen dataset, including allegedly engaging in torrenting to acquire copyrighted datasets without permission. Internal documentation shows Meta engineers openly discussed concerns that seeding and torrenting might prove to be “legally not ok.” 

Meta case may impact emerging legislation around AI development

At the heart of this expanding legal battle lies growing concern over the intersection of copyright law and AI.

Plaintiffs argue the stripping of copyright protections from textual datasets denies rightful compensation to copyright owners and allows Meta to build AI systems like Llama on the financial ruins of authors’ and publishers’ creative efforts.

The timing of these allegations arises amidst heightened global scrutiny surrounding “generative AI” technologies. Companies like OpenAI, Google, and Meta have all come under fire regarding the use of copyrighted data to train their models. Courts across jurisdictions are currently grappling with the long-term impact of AI on rights management, with potentially landmark cases being decided in both the US and the UK.  

In this particular case, US courts have shown increasing willingness to hear complaints about AI’s potential harm to long-established copyright law precedents. Plaintiffs, in their motion, referred to The Intercept Media v. OpenAI, a recent decision from New York in which a similar DMCA claim was allowed to proceed.

Meta continues to deny all allegations in the case and has yet to publicly respond to Zuckerberg’s reported deposition statements.

Whether or not plaintiffs succeed in these amendments, authors across the world face growing anxieties about how their creative works are handled within the context of AI. With copyright law struggling to keep pace with technological advances, this case underscores the need for clearer guidance at an international level to protect both creators and innovators.

For Meta, these claims also represent a reputational risk. As AI becomes the central focus of its future strategy, the allegations of reliance on pirated libraries are unlikely to help its ambitions of maintaining leadership in the field.  

The unfolding case of Kadrey et al. vs. Meta could have far-reaching ramifications for the development of AI models moving forward, potentially setting legal precedents in the US and beyond.

(Photo by Amy Syiek)

See also: UK wants to prove AI can modernise public services responsibly


Big tech’s AI spending hits new heights
https://www.artificialintelligence-news.com/news/big-tech-ai-spending-hits-new-heights/
Fri, 22 Nov 2024

The post Big tech’s AI spending hits new heights appeared first on AI News.

In 2024, Big Tech is all-in on artificial intelligence, with companies like Microsoft, Amazon, Alphabet, and Meta leading the way.

Their combined spending on AI is projected to exceed a jaw-dropping $240 billion. Why? Because AI isn’t just the future—it’s the present, and the demand for AI-powered tools and infrastructure has never been higher. The companies aren’t just keeping up; they’re setting the pace for the industry.

The scale of their investment is hard to ignore. In the first half of 2023, the tech giants poured $74 billion into capital expenditure, and by the end of Q3 2023 the running total had jumped to $109 billion. In the first half of 2024, spending reached $104 billion, a remarkable 47% rise over the same period a year earlier, and by Q3 2024 the cumulative total hit $171 billion.

If this pattern continues, Q4 could add another $70 billion, bringing the full-year total to a truly staggering $240 billion.
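The projected total is simple arithmetic on the figures reported above (in billions of dollars):

```python
capex_through_q3_2024 = 171  # cumulative big-tech capex, Q1-Q3 2024
q4_2024_estimate = 70        # projected Q4 spend if the pattern holds

full_year = capex_through_q3_2024 + q4_2024_estimate
print(full_year)  # → 241, i.e. roughly $240 billion for the year
```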

Why so much spending?

AI’s potential is immense, and companies are making sure they’re positioned to reap the rewards.

  • A growing market: AI is projected to create $20 trillion in global economic impact by 2030. In countries like India, AI could contribute $500 billion to GDP by 2025. With stakes this high, big tech isn’t hesitating to invest heavily.
  • Infrastructure demands: Training and running AI models require massive investment in infrastructure, from data centres to high-performance GPUs. Alphabet increased its capital expenditures by 62% last quarter compared to the previous year, even as it cut its workforce by 9,000 employees to manage costs.
  • Revenue potential: AI is already proving its value. Microsoft’s AI products are expected to generate $10 billion annually—the fastest-growing segment in the company’s history. Alphabet, meanwhile, uses AI to write over 25% of its new code, streamlining operations.

Amazon is also ramping up, with plans to spend $75 billion on capital expenditure in 2024. Meta’s forecast is not far behind, with estimates between $38 and $40 billion. Across the board, organisations recognise that maintaining their edge in AI requires sustained and significant investment.

Supporting revenue streams

What keeps the massive investments coming is the strength of big tech’s core businesses. Last quarter, Alphabet’s digital advertising machine, powered by Google’s search engine, generated $49.39 billion in ad revenue, a 12% year-over-year increase. This solid foundation allows Alphabet to pour resources into building out its AI arsenal without destabilising the bottom line.

Microsoft’s diversified revenue streams are another example. While the company spent $20 billion on AI and cloud infrastructure last quarter, its productivity segment, which includes Office, grew by 12% to $28.3 billion, and its personal computing business, boosted by Xbox and the Activision Blizzard acquisition, grew 17% to $13.2 billion. These successes demonstrate how AI investments can support broader growth strategies.

The financial payoff

Big tech is already seeing the benefits of its heavy spending. Microsoft’s Azure platform has seen substantial growth, with its AI income approaching $6 billion. Amazon’s AI business is growing at triple-digit rates, and Alphabet reported a 34% jump in profits last quarter, with cloud revenue playing a major role.

Meta, while primarily focused on advertising, is leveraging AI to make its platforms more engaging. AI-driven tools, such as improved feeds and search features, keep users on its platforms longer, driving new revenue growth.

AI spending shows no signs of slowing down. Tech leaders at Microsoft and Alphabet view AI as a long-term investment critical to their future success. And the results speak for themselves: Alphabet’s cloud revenue is up 35%, while Microsoft’s cloud business grew 20% last quarter.

For the time being, the focus is on scaling up infrastructure and meeting demand. However, the real transformation will come when big tech unlocks AI’s full potential, transforming industries and redefining how we work and live.

By investing in high-quality, centralised data strategies, businesses can ensure trustworthy and accurate AI implementations and unlock AI’s full potential to drive innovation, improve decision-making, and gain a competitive edge. AI’s revolutionary promise is within reach—but only for companies prepared to lay the groundwork for sustainable growth and long-term results.

(Photo by Unsplash)

See also: Microsoft tries to convert Google Chrome users


The risks behind the generative AI craze: Why caution is growing
https://www.artificialintelligence-news.com/news/the-risks-behind-the-generative-ai-craze-why-caution-is-growing/
Wed, 09 Oct 2024

In the near future, Silicon Valley might look back at recent events as the point where the generative AI craze went too far.

This past summer, investors questioned whether top AI stocks could sustain their sky-high valuations, given the lack of returns on massive AI spending. As autumn approaches, major AI sectors—such as chips, LLMs, and AI devices—have regained investor confidence. Nonetheless, the reasons for caution keep multiplying.

Cerebras: A chip contender with a major risk

Chip startup Cerebras is challenging Nvidia’s dominance by developing processors designed to power smarter LLMs. Nvidia, a major player in the AI boom, has seen its market cap skyrocket from $364 billion at the start of 2023 to over $3 trillion.

Cerebras, however, relies heavily on a single customer: the Abu Dhabi-based AI firm G42. In 2023, G42 accounted for 83% of Cerebras’ revenue, and in the first half of 2024 that figure rose to 87%. While G42 is backed by major players like Microsoft and Silver Lake, such dependence on one client poses a risk. Even though Cerebras has signed a deal with Saudi Aramco, its reliance on a single customer may give investors pause as it seeks a $7-8 billion valuation for its IPO.

OpenAI’s record-breaking funding – but with strings attached

OpenAI made the news when it raised $6.6 billion at a $157 billion valuation—the largest investment round in Silicon Valley history. However, the company has urged its investors not to back competitors such as Anthropic and Elon Musk’s xAI—an unusual request in the world of venture capital, where investors routinely spread their bets. Critics, including Gary Marcus, have described this approach as “running scared.”

OpenAI’s backers also include “bubble chasers” such as SoftBank and Tiger Global, firms known for investing in companies at their peak, which frequently results in huge losses. With top executives such as CTO Mira Murati departing and predicted losses of $5 billion this year despite rising revenues, OpenAI faces significant challenges.

Meta’s big bet on AI wearables

Meta, for its part, has made a big bet on AI wearables by unveiling Orion, its augmented reality glasses. The wearables promise to integrate AI into daily life, with Nvidia’s CEO Jensen Huang endorsing the product. However, at a production cost of $10,000 per unit, the price is a major obstacle.

Meta will need to reduce costs and overcome consumer hesitation, as previous attempts at AI-powered wearables—such as Snapchat’s glasses, Google Glass, and the Humane AI pin—have struggled to gain traction.

The road ahead

What’s next for AI? OpenAI must prove it can justify a $157 billion valuation while operating at a loss. Cerebras needs to reassure investors that relying on one client isn’t a dealbreaker. And Meta must convince consumers to adopt a completely new way of interacting with AI.

If these companies succeed, this moment could mark a turning point in the AI revolution. However, as tech history shows, high-stakes markets are rarely easy to win.

(Photo by Growtika)

See also: Ethical, trust and skill barriers hold back generative AI progress in EMEA


The post The risks behind the generative AI craze: Why caution is growing appeared first on AI News.

Meta’s AI strategy: Building for tomorrow, not immediate profits
https://www.artificialintelligence-news.com/news/metas-ai-strategy-building-for-tomorrow-not-immediate-profits/
Thu, 01 Aug 2024 15:49:28 +0000

Meta has signalled a long-term AI strategy that prioritises substantial investments over immediate revenue generation. During the company’s Q2 earnings call, CEO and founder Mark Zuckerberg outlined Meta’s vision for the future and emphasised the need for extensive computational resources to support their AI initiatives.

Zuckerberg revealed that Meta is “planning for the compute clusters and data we’ll need for the next several years,” with a particular focus on their next AI model, Llama 4.

The company anticipates that training Llama 4 will require “almost 10x more” computing power than its predecessor, Llama 3, which is believed to have used 16,000 GPUs. Zuckerberg expressed his goal for Llama 4 “to be the most advanced [model] in the industry next year.”

Meta’s financial commitment to AI development is substantial, with the company projecting capital expenditures between $37 billion and $40 billion for the full year, an increase of $2 billion from previous estimates. Investors were cautioned to expect “significant” increases in capital expenditures next year as well.

Despite these massive investments, Meta CFO Susan Li acknowledged that the company does not expect to generate revenue from generative AI this year.

Li emphasised the company’s strategy of building AI infrastructure with flexibility in mind, allowing for capacity adjustments based on optimal use cases. She explained that the hardware used for AI model training can also be utilised for inferencing and, with modifications, for ranking and recommendations.

Meta’s current AI efforts, dubbed “Core AI,” are already showing positive results in improving user engagement on Facebook and Instagram. Zuckerberg highlighted the success of a recently implemented unified video recommendation tool for Facebook, which has “already increased engagement on Facebook Reels more than our initial move from CPUs to GPUs did.”

Looking ahead, Zuckerberg envisions AI playing a crucial role in revolutionising Meta’s advertising business. He predicted that in the coming years, AI would take over ad copy creation and personalisation, potentially allowing advertisers to simply provide a business objective and budget, with Meta’s AI handling the rest.

While Meta’s AI investments are substantial, the company remains in a strong financial position. Q2 results showed revenue of $39 billion and net income of $13.5 billion, representing year-over-year increases of $7 billion and $5.7 billion, respectively. Meta’s user base continues to grow, with over 3.2 billion people using a Meta app daily, and its X competitor Threads is now approaching 200 million active monthly users.

As Meta charts its course in the AI landscape, the company’s strategy reflects a long-term vision that prioritises technological advancement and infrastructure development over immediate financial returns.

(Photo by Joshua Earle)

See also: NVIDIA and Meta CEOs: Every business will ‘have an AI’


The post Meta’s AI strategy: Building for tomorrow, not immediate profits appeared first on AI News.

NVIDIA and Meta CEOs: Every business will ‘have an AI’
https://www.artificialintelligence-news.com/news/nvidia-and-meta-ceo-every-business-will-have-an-ai/
Tue, 30 Jul 2024 15:30:43 +0000

In a fireside chat at SIGGRAPH 2024, NVIDIA founder and CEO Jensen Huang and Meta founder and CEO Mark Zuckerberg shared their insights on the potential of open source AI and virtual assistants.

The conversation began with Zuckerberg announcing the launch of AI Studio, a new platform designed to democratise AI creation. This tool allows users to create, share, and discover AI characters, potentially opening up AI development to millions of creators and small businesses.

Huang emphasised the ubiquity of AI in the future, stating, “Every single restaurant, every single website will probably, in the future, have these AIs …”

Zuckerberg concurred, adding, “…just like every business has an email address and a website and a social media account, I think, in the future, every business is going to have an AI.”

This vision aligns with NVIDIA’s recent developments showcased at SIGGRAPH. The company previewed “James,” an interactive digital human based on the NVIDIA ACE (Avatar Cloud Engine) reference design. James – a virtual assistant capable of providing contextually accurate responses – demonstrates the potential for businesses to create custom, hyperrealistic avatars for customer interactions.

The discussion highlighted Meta’s significant contributions to AI development. Huang praised Meta’s work, saying, “You guys have done amazing AI work,” and cited advancements in computer vision, language models, and real-time translation. He also acknowledged the widespread use of PyTorch, an open-source machine learning framework developed by Meta.

Both CEOs stressed the importance of open source in advancing AI. Meta has positioned itself as a leader in this field, implementing AI across its platforms and releasing open-source models like Llama 3.1. This latest model, with 405 billion parameters, required training on over 16,000 NVIDIA H100 GPUs, representing a substantial investment in resources.

Zuckerberg shared his vision for more integrated AI models, saying, “I kind of dream of one day like you can almost imagine all of Facebook or Instagram being like a single AI model that has unified all these different content types and systems together.” He believes that collaboration is crucial for further advancements in AI.

The conversation touched on the potential of AI to enhance human productivity. Huang described a future where AI could generate images in real-time as users type, allowing for fluid collaboration between humans and AI assistants. This concept is reflected in NVIDIA’s latest advancements to the NVIDIA Maxine AI platform, including Maxine 3D and Audio2Face-2D, which aim to create immersive telepresence experiences.

Looking ahead, Zuckerberg expressed enthusiasm about combining AI with augmented reality eyewear, mentioning Meta’s collaboration with eyewear maker Luxottica. He envisions this technology transforming education, entertainment, and work.

Huang discussed the evolution of AI interactions, moving beyond turn-based conversations to more complex, multi-option simulations. “Today’s AI is kind of turn-based. You say something, it says something back to you,” Huang explained. “In the future, AI could contemplate multiple options, or come up with a tree of options and simulate outcomes, making it much more powerful.”

The importance of this evolution is evident in the adoption of NVIDIA’s technologies by companies across industries. HTC, Looking Glass, Reply, and UneeQ are among the latest firms using NVIDIA ACE and Maxine for applications ranging from customer service agents to telepresence experiences in entertainment, retail, and hospitality.

As AI continues to evolve and integrate into various aspects of our lives, the insights shared by these industry leaders provide a glimpse into a future where AI assistants are as commonplace as websites and social media accounts.

The developments showcased at SIGGRAPH 2024 by both NVIDIA and other companies demonstrate that this future is rapidly approaching, with digital humans becoming increasingly sophisticated and capable of natural, engaging interactions.

See also: Amazon strives to outpace Nvidia with cheaper, faster AI chips


The post NVIDIA and Meta CEOs: Every business will ‘have an AI’ appeared first on AI News.

Meta advances open source AI with ‘frontier-level’ Llama 3.1
https://www.artificialintelligence-news.com/news/meta-advances-open-source-ai-frontier-level-llama-3-1/
Wed, 24 Jul 2024 12:39:45 +0000

Meta has unveiled Llama 3.1, marking a significant milestone in the company’s commitment to open source AI. This release, which Meta CEO Mark Zuckerberg calls “the first frontier-level open source AI model,” aims to challenge the dominance of closed AI systems and democratise access to advanced AI technology.

The Llama 3.1 release includes three models: 405B, 70B, and 8B. Zuckerberg asserts that the 405B model competes with the most advanced closed models while offering better cost-efficiency.

“Starting next year, we expect future Llama models to become the most advanced in the industry,” Zuckerberg predicts.

Zuckerberg draws parallels between the evolution of AI and the historical shift from closed Unix systems to open source Linux. He argues that open source AI will follow a similar trajectory, eventually becoming the industry standard due to its adaptability, cost-effectiveness, and broad ecosystem support.

Zuckerberg emphasises several key advantages of open source AI:

  • Customisation: Organisations can train and fine-tune models with their specific data.
  • Independence: Avoids lock-in to closed vendors or specific cloud providers.
  • Data security: Allows for local model deployment, enhancing data protection.
  • Cost-efficiency: Llama 3.1 405B can be run at roughly half the cost of closed models like GPT-4.
  • Ecosystem growth: Encourages innovation and collaboration across the industry.

Addressing safety concerns, Zuckerberg argues that open source AI is inherently safer due to increased transparency and scrutiny. He states, “Open source should be significantly safer since the systems are more transparent and can be widely scrutinised.”

To support the open source AI ecosystem, Meta is partnering with major tech companies like Amazon, Databricks, and NVIDIA to provide development services. The models will be available across major cloud platforms, with companies such as Scale AI, Dell, and Deloitte ready to assist in enterprise adoption.

“Open source will ensure that more people around the world have access to the benefits and opportunities of AI, that power isn’t concentrated in the hands of a small number of companies, and that the technology can be deployed more evenly and safely across society,” Zuckerberg claims.

The CEO views this release as a turning point, predicting that most developers will shift towards primarily using open source AI models. He invites the tech community to join Meta in “this journey to bring the benefits of AI to everyone in the world.”

The Llama 3.1 models are now accessible at llama.meta.com.

(Photo by Dima Solomin)

See also: Meta joins Apple in withholding AI models from EU users


The post Meta advances open source AI with ‘frontier-level’ Llama 3.1 appeared first on AI News.

Meta joins Apple in withholding AI models from EU users
https://www.artificialintelligence-news.com/news/meta-joins-apple-withholding-ai-models-eu-users/
Thu, 18 Jul 2024 14:10:21 +0000

Meta has announced it will not be launching its upcoming multimodal AI model in the European Union due to regulatory concerns.

This decision from Meta comes on the heels of Apple’s similar move to exclude the EU from its Apple Intelligence rollout, signalling a growing trend of tech giants hesitating to introduce advanced AI technologies in the region.

Meta’s latest multimodal AI model – capable of handling video, audio, images, and text – was set to be released under an open license. However, Meta’s decision will prevent European companies from utilising this technology, potentially putting them at a disadvantage in the global AI race.

“We will release a multimodal Llama model over the coming months, but not in the EU due to the unpredictable nature of the European regulatory environment,” a Meta spokesperson stated.

A text-only version of Meta’s Llama 3 model is still expected to launch in the EU.

Meta’s announcement comes just days after the EU finalised compliance deadlines for its new AI Act. Tech companies operating in the EU will have until August 2026 to comply with rules surrounding copyright, transparency, and specific AI applications like predictive policing.

The withholding of these advanced AI models from the EU market creates a challenging situation for companies outside the region. Those hoping to provide products and services utilising these models will be unable to offer them in one of the world’s largest economic markets.

Meta plans to integrate its multimodal AI models into products like the Meta Ray-Ban smart glasses. The company’s EU exclusion will extend to future multimodal AI model releases as well.

As more tech giants potentially follow suit, the EU may face challenges in maintaining its position as a leader in technological innovation while balancing concerns about AI’s societal impacts.

(Photo by engin akyurt)

See also: AI could unleash £119 billion in UK productivity


The post Meta joins Apple in withholding AI models from EU users appeared first on AI News.

Meta unveils five AI models for multi-modal processing, music generation, and more
https://www.artificialintelligence-news.com/news/meta-unveils-ai-models-multi-modal-processing-music-generation-more/
Wed, 19 Jun 2024 15:40:48 +0000

Meta has unveiled five major new AI models and research, including multi-modal systems that can process both text and images, next-gen language models, music generation, AI speech detection, and efforts to improve diversity in AI systems.

The releases come from Meta’s Fundamental AI Research (FAIR) team, which has focused on advancing AI through open research and collaboration for over a decade. As AI innovation accelerates, Meta believes working with the global community is crucial.

“By publicly sharing this research, we hope to inspire iterations and ultimately help advance AI in a responsible way,” said Meta.

Chameleon: Multi-modal text and image processing

Among the releases are key components of Meta’s ‘Chameleon’ models under a research license. Chameleon is a family of multi-modal models that can understand and generate both text and images simultaneously—unlike most large language models which are typically unimodal.

“Just as humans can process the words and images simultaneously, Chameleon can process and deliver both image and text at the same time,” explained Meta. “Chameleon can take any combination of text and images as input and also output any combination of text and images.”

Potential use cases are virtually limitless, from generating creative captions to prompting new scenes with text and images.
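As a toy illustration of the “any combination” idea (hypothetical token IDs, not Chameleon’s actual tokeniser), text and image tokens can be flattened into a single modality-tagged stream that one model attends over:

```python
def interleave(segments):
    """Flatten (modality, tokens) segments into one token stream.

    Each token is tagged with its modality so a single model can
    attend over text and image tokens in any order.
    """
    stream = []
    for modality, tokens in segments:
        for tok in tokens:
            stream.append((modality, tok))
    return stream

# A prompt mixing a caption request with image-patch tokens
# (the numeric IDs here are invented placeholders).
prompt = [
    ("text", ["describe", "this", "image", ":"]),
    ("image", [1047, 583, 2210]),
]
stream = interleave(prompt)
```

Because inputs and outputs live in the same stream, the model can emit any mix of modalities back out, which is what distinguishes this design from unimodal language models.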

Multi-token prediction for faster language model training

Meta has also released pretrained models for code completion that use ‘multi-token prediction’ under a non-commercial research license. Traditional language model training, which predicts only one next word at a time, is inefficient. Multi-token models predict multiple future words simultaneously, allowing them to train faster.

“While [the one-word] approach is simple and scalable, it’s also inefficient. It requires several orders of magnitude more text than what children need to learn the same degree of language fluency,” said Meta.
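To illustrate the difference (a simplified sketch, not Meta’s actual training code): next-token training pairs each context with a single target token, while multi-token training pairs each context with the next k tokens, yielding more learning signal per position:

```python
def next_token_targets(tokens):
    # Standard training: each context predicts exactly one future token.
    return [(tokens[:i], tokens[i]) for i in range(1, len(tokens))]

def multi_token_targets(tokens, k=4):
    # Multi-token training: each context predicts the next k tokens at once.
    return [(tokens[:i], tokens[i:i + k])
            for i in range(1, len(tokens) - k + 1)]

sentence = ["the", "cat", "sat", "on", "the", "mat"]
single = next_token_targets(sentence)
multi = multi_token_targets(sentence, k=2)
```

In a real model the k targets would be predicted by parallel output heads over a shared trunk; this sketch only shows how the training targets differ.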

JASCO: Enhanced text-to-music model

On the creative side, Meta’s JASCO allows generating music clips from text while affording more control by accepting inputs like chords and beats.

“While existing text-to-music models like MusicGen rely mainly on text inputs for music generation, our new model, JASCO, is capable of accepting various inputs, such as chords or beat, to improve control over generated music outputs,” explained Meta.

AudioSeal: Detecting AI-generated speech

Meta claims AudioSeal is the first audio watermarking system designed to detect AI-generated speech. It can pinpoint the specific segments generated by AI within larger audio clips up to 485x faster than previous methods.

“AudioSeal is being released under a commercial license. It’s just one of several lines of responsible research we have shared to help prevent the misuse of generative AI tools,” said Meta.

Improving text-to-image diversity

Another important release aims to improve the diversity of text-to-image models which can often exhibit geographical and cultural biases.

Meta developed automatic indicators to evaluate potential geographical disparities and conducted a large 65,000+ annotation study to understand how people globally perceive geographic representation.

“This enables more diversity and better representation in AI-generated images,” said Meta. The relevant code and annotations have been released to help improve diversity across generative models.

By publicly sharing these groundbreaking models, Meta says it hopes to foster collaboration and drive innovation within the AI community.

(Photo by Dima Solomin)

See also: NVIDIA presents latest advancements in visual AI


The post Meta unveils five AI models for multi-modal processing, music generation, and more appeared first on AI News.

DuckDuckGo releases portal giving private access to AI models
https://www.artificialintelligence-news.com/news/duckduckgo-portal-giving-private-access-ai-models/
Fri, 07 Jun 2024 15:42:22 +0000

DuckDuckGo has released a platform that allows users to interact with popular AI chatbots privately, ensuring that their data remains secure and protected.

The service, accessible at Duck.ai, is globally available and features a light and clean user interface. Users can choose from four AI models: two closed-source models and two open-source models. The closed-source models are OpenAI’s GPT-3.5 Turbo and Anthropic’s Claude 3 Haiku, while the open-source models are Meta’s Llama-3 70B and Mistral AI’s Mixtral 8x7b.

What sets DuckDuckGo AI Chat apart is its commitment to user privacy. Neither DuckDuckGo nor the chatbot providers can use user data to train their models, ensuring that interactions remain private and anonymous. DuckDuckGo also strips away metadata, such as server or IP addresses, so that queries appear to originate from the company itself rather than individual users.
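The metadata-stripping behaviour described above follows a standard anonymising-proxy pattern. A hypothetical sketch (field names invented for illustration; this is not DuckDuckGo’s implementation) of a proxy that drops identifying fields and re-originates the request from its own address:

```python
# Fields that could identify the user (illustrative, not exhaustive).
IDENTIFYING_FIELDS = {"client_ip", "user_agent", "cookies"}

def anonymise(request, proxy_ip="203.0.113.10"):
    # Strip identifying fields, then make the request appear to
    # originate from the proxy rather than the individual user.
    cleaned = {k: v for k, v in request.items() if k not in IDENTIFYING_FIELDS}
    cleaned["client_ip"] = proxy_ip
    return cleaned

query = {"prompt": "best hiking trails", "client_ip": "198.51.100.7",
         "user_agent": "Mozilla/5.0", "cookies": "session=abc"}
forwarded = anonymise(query)
```

The upstream model provider only ever sees the proxy’s address, which is how queries can appear to come from the company itself rather than individual users.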

The company has agreements in place with all model providers to ensure that any saved chats are completely deleted within 30 days, and that none of the chats made on the platform can be used to train or improve the models. This makes preserving privacy easier than adjusting the privacy settings of each individual service.

In an era where online services are increasingly hungry for user data, DuckDuckGo’s AI Chat service is a breath of fresh air. The company’s commitment to privacy is a direct response to the growing concerns about data collection and usage in the AI industry. By providing a private and anonymous platform for users to interact with AI chatbots, DuckDuckGo is setting a new standard for the industry.

DuckDuckGo’s AI service is free to use within a daily limit, and the company is considering launching a paid tier to reduce or eliminate these limits. The service is designed to be a complementary partner to its search engine, allowing users to switch between search and AI chat for a more comprehensive search experience.

“We view AI Chat and search as two different but powerful tools to help you find what you’re looking for – especially when you’re exploring a new topic. You might be shopping or doing research for a project and are unsure how to get started. In situations like these, either AI Chat or Search could be good starting points,” the company explained.

“If you start by asking a few questions in AI Chat, the answers may inspire traditional searches to track down reviews, prices, or other primary sources. If you start with Search, you may want to switch to AI Chat for follow-up queries to help make sense of what you’ve read, or for quick, direct answers to new questions that weren’t covered in the web pages you saw.”

To accommodate that user workflow, DuckDuckGo has made AI Chat accessible through DuckDuckGo Private Search for quick access.

The launch of DuckDuckGo AI Chat comes at a time when the AI industry is facing increasing scrutiny over data privacy and usage. The service is a welcome addition for privacy-conscious individuals, joining the recent launch of Venice AI by crypto entrepreneur Erik Voorhees. Venice AI features an uncensored AI chatbot and image generator that doesn’t require accounts and doesn’t retain data.

As the AI industry continues to evolve, it’s clear that privacy will remain a top concern for users. With the launch of DuckDuckGo AI Chat, the company is taking a significant step towards providing users with a private and secure platform for interacting with AI chatbots.

See also: AI pioneers turn whistleblowers and demand safeguards


The post DuckDuckGo releases portal giving private access to AI models appeared first on AI News.

UAE unveils new AI model to rival big tech giants
https://www.artificialintelligence-news.com/news/uae-unveils-new-ai-model-to-rival-big-tech-giants/
Wed, 15 May 2024 09:53:41 +0000

The post UAE unveils new AI model to rival big tech giants appeared first on AI News.

]]>
The UAE is making big waves by launching a new open-source generative AI model. This step, taken by a government-backed research institute, is turning heads and marking the UAE as a formidable player in the global AI race.

In Abu Dhabi, the Technology Innovation Institute (TII) unveiled the Falcon 2 series. As reported by Reuters, this series includes Falcon 2 11B, a text-based model, and Falcon 2 11B VLM, a vision-to-language model capable of generating text descriptions from images. TII is run by Abu Dhabi’s Advanced Technology Research Council.

As a major oil exporter and a key player in the Middle East, the UAE is investing heavily in AI. This strategy has caught the eye of U.S. officials, leading to tensions over whether to use American or Chinese technology. In a move coordinated with Washington, Emirati AI firm G42 withdrew from Chinese investments and replaced Chinese hardware, securing a US$1.5 billion investment from Microsoft.

Faisal Al Bannai, Secretary General of the Advanced Technology Research Council and an adviser on strategic research and advanced technology, proudly states that the UAE is proving itself as a major player in AI. The release of the Falcon 2 series is part of a broader race among nations and companies to develop proprietary large language models. While some companies opt to keep their AI code private, the UAE, like Meta with its Llama models, is making its groundbreaking work openly accessible to all.

Al Bannai is also excited about the upcoming Falcon 3 generation and expresses confidence in the UAE’s ability to compete globally: “We’re very proud that we can still punch way above our weight, really compete with the best players globally.”

Reflecting on his earlier statements this year, Al Bannai emphasised that the UAE’s decisive advantage lies in its ability to make swift strategic decisions.

It’s worth noting that Abu Dhabi’s ruling family controls some of the world’s largest sovereign wealth funds, worth about US$1.5 trillion. These funds, long used to diversify the UAE’s oil wealth, are now critical for accelerating growth in AI and other cutting-edge technologies. In fact, the UAE is emerging as a key player in producing the advanced computer chips essential for training powerful AI systems. According to The Wall Street Journal, OpenAI CEO Sam Altman met with investors, including Sheik Tahnoun bin Zayed Al Nahyan, who runs Abu Dhabi’s major sovereign wealth fund, to discuss a potential US$7 trillion investment to develop an AI chipmaker to compete with Nvidia.

Furthermore, the UAE’s commitment to generative AI is evident in its recent launch of a ‘Generative AI’ guide. The guide aims to unlock AI’s potential in various fields, including education, healthcare, and media. It provides a detailed overview of generative AI, addressing the challenges and opportunities of digital technologies while emphasising data privacy. It is designed to help government agencies and the wider community leverage AI technologies by demonstrating 100 practical AI use cases for entrepreneurs, students, job seekers, and tech enthusiasts.

This proactive stance showcases the UAE’s commitment to participating in and leading the global AI race, positioning it as a nation to watch in the rapidly evolving tech scene.


The post UAE unveils new AI model to rival big tech giants appeared first on AI News.

]]>
https://www.artificialintelligence-news.com/news/uae-unveils-new-ai-model-to-rival-big-tech-giants/feed/ 0