generative ai Archives - AI News

ChatGPT hits record usage after viral Ghibli feature—Here are four risks to know first (8 April 2025)

Following the release of ChatGPT’s new image-generation tool, user activity has surged; millions of people have been drawn to a trend in which uploaded photos are reimagined in the distinctive visual style of Studio Ghibli.

The spike in interest contributed to record use levels for the chatbot and strained OpenAI’s infrastructure temporarily.

Social media platforms were soon flooded with AI-generated images styled after work by the renowned Japanese animation studio, known for titles like Spirited Away and My Neighbor Totoro. According to Similarweb, weekly active ChatGPT users passed 150 million for the first time this year.

OpenAI CEO Sam Altman said the chatbot gained one million users in a single hour in early April – growth that took the original, text-only ChatGPT five days to achieve at launch.

Data from Sensor Tower shows OpenAI also recorded a jump in app activity. Weekly active users, downloads, and in-app revenue all hit record levels last week, following the update to GPT-4o that enabled new image-generation features. Compared to late March, downloads rose by 11%, active users grew by 5%, and revenue increased by 6%.

The new tool’s popularity caused service slowdowns and intermittent outages. OpenAI acknowledged the increased load, with Altman saying that users should expect delays to feature roll-outs and occasional service disruption while capacity issues are resolved.

Legal questions surface around ChatGPT’s Ghibli-style AI art

The viral use of Studio Ghibli-inspired AI imagery from OpenAI’s ChatGPT has raised concerns about copyright. Legal experts point out that while artistic styles themselves may not always be protected, closely mimicking a well-known look could fall into a legal grey area.

“The legal landscape of AI-generated images mimicking Studio Ghibli’s distinctive style is an uncertain terrain. Copyright law has generally protected only specific expressions rather than artistic styles themselves,” said Evan Brown, partner at law firm Neal & McDevitt.

Hayao Miyazaki’s past comments have also resurfaced. In 2016, the Studio Ghibli co-founder responded to early AI-generated artwork by saying, “I am utterly disgusted. I would never wish to incorporate this technology into my work at all.”

OpenAI has not commented on whether the model used for its image generation was trained on content similar to Ghibli’s animation.

Data privacy and personal risk

The trend has also drawn attention to user privacy and data security. Christoph C. Cemper, founder of AI prompt management firm AIPRM, cautioned that uploading a photo for artistic transformation may come with more risks than many users realise.

“When you upload a photo to an AI art generator, you’re giving away your biometric data (your face). Some AI tools store that data, use it to train future models, or even sell it to third parties – none of which you may be fully aware of unless you read the fine print,” Cemper said.

OpenAI’s privacy policy confirms that it collects both personal information and use data, including images and content submitted by users. Unless users opt out of training data collection or request deletion via their settings, content will be retained and used to improve future AI models.

Cemper said that once a facial image is uploaded, it becomes vulnerable to misuse. That data could be scraped, leaked, or used in identity theft, deepfake content, or other impersonation scams. He also pointed to prior incidents where private images were found in public AI datasets like LAION-5B, which are used to train various tools like Stable Diffusion.

Copyright and licensing considerations

There are also concerns that AI-generated content styled after recognisable artistic brands could cross into copyright infringement. While creating art in the style of Studio Ghibli, Disney, or Pixar might seem harmless, legal experts warn that such works may be considered derivative, especially if the mimicry is too close.

In 2022, several artists filed a class-action lawsuit against AI companies, claiming their models were trained on original artwork without consent. The cases reflect the broader conversation around how to balance innovation with creators’ rights as generative AI becomes more widely used.

Cemper also advised users to carefully review the terms of service on AI platforms. Many contain licensing clauses with language like “transferable rights,” “non-exclusive,” or “irrevocable licence,” which allow platforms to reproduce, modify, or distribute submitted content – even after the user deletes the app.

“The rollout of ChatGPT’s 4o image generator shows just how powerful AI has become as it replicates iconic artistic styles with just a few clicks. But this unprecedented capability comes with a growing risk – the lines between creativity and copyright infringement are increasingly blurred,” Cemper said.

“The rapid pace of AI development also raises significant concerns about privacy and data security. There’s a pressing need for clearer, more transparent privacy policies. Users should be empowered to make informed decisions about uploading their photos or personal data.”

Search interest in “ChatGPT Studio Ghibli” has increased by more than 1,200% in the past week, but alongside the creativity and virality comes a wave of serious concerns about privacy, copyright, and data use. As AI image tools become more advanced and accessible, users may want to think twice before uploading personal images, especially if they are unsure where that data may ultimately end up.

(Image by YouTube Fireship)

See also: Midjourney V7: Faster AI image generation


L’Oréal: Making cosmetics sustainable with generative AI (16 January 2025)

L’Oréal will leverage IBM’s generative AI (GenAI) technology to create innovative and sustainable cosmetic products.

The partnership will involve developing a bespoke AI foundation model to supercharge L’Oréal’s Research & Innovation (R&I) teams in creating eco-friendly formulations using renewable raw materials. In turn, this initiative is designed to reduce both energy and material waste.

Described as the cosmetics industry’s first formulation-focused AI model, this effort is a glimpse into a future where cutting-edge technology drives environmentally-conscious solutions.

Stéphane Ortiz, Head of Innovation Métiers & Product Development at L’Oréal R&I, said: “As part of our Digital Transformation Program, this partnership will extend the speed and scale of our innovation and reformulation pipeline, with products always reaching higher standards of inclusivity, sustainability, and personalisation.”  

AI and beauty: A perfect match

By marrying L’Oréal’s expertise in cosmetic science with IBM’s AI technologies, the companies aim to unlock new pathways in both cosmetic innovation and sustainability. The role of AI in tailoring and personalising products is well-established, but diving deeper into its role in crafting renewable and sustainably-sourced formulations underscores a broader ecological mission. 

Matthieu Cassier, Chief Transformation & Digital Officer at L’Oréal R&I, commented: “Building on years of unique beauty science expertise and data structuring, this major alliance with IBM is opening a new exciting era for our innovation and development process.”

Foundation models serve as the technological backbone for this collaboration. These AI systems are trained on vast datasets, enabling them to perform various tasks and transfer learnings across different applications.

Although these models are perhaps most known for revolutionising natural language processing (NLP), IBM has advanced their use cases beyond text, including applications in chemistry, geospatial data, and time series analysis.

In this context, the custom AI model being developed for L’Oréal will process a massive database of cosmetic formulas and raw material components. From creating new products to reformulating existing ones and scaling up for production, the model will accelerate critical tasks for the company’s R&D teams.  

“This collaboration is a truly impactful application of generative AI, leveraging the power of technology and expertise for the good of the planet,” said Alessandro Curioni, IBM Fellow and VP for Europe and Africa, as well as Director at IBM Research Zurich.

“At IBM, we believe in the power of purpose-built, customised AI to help transform businesses. Using IBM’s latest AI technology, L’Oréal will be able to derive meaningful insights from their rich formula and product data to create a tailored AI model to help achieve their operational goals and continue creating high-quality and sustainable products.”

One of the more fascinating dimensions of this collaboration is its potential to deepen understanding of renewable ingredient behaviour within cosmetic formulations.

Guilhaume Leroy-Méline, IBM Distinguished Engineer and CTO of IBM Consulting France, said: “This alliance between highly specialised expertise in artificial intelligence and cosmetics seeks to revolutionise cosmetic formulation. It embodies the spirit of AI-augmented research, emphasising sustainability and diversity.” 

For IBM, this partnership reflects its broader strategy to extend AI applications into industries requiring bespoke solutions. As Curioni pointed out, custom AI has the potential to reshape businesses on multiple levels.

By co-developing this bespoke formulation model, IBM and L’Oréal are setting the stage for a beauty industry that prizes both sustainability and cutting-edge innovation. If successful, the partnership could very well serve as a blueprint for other industries looking to bring AI’s transformative potential to bear on sustainability efforts.  

(Photo by Kelly Sikkema)

See also: Cisco: Securing enterprises in the AI era

Driver used ChatGPT to plan attack, authorities reveal (10 January 2025)

The new year was only beginning, but technology had already taken centre stage in a tragic event that shocked many.

Just outside the Trump International Hotel in Las Vegas, a Tesla Cybertruck erupted in an explosion, leaving one person dead and seven others with minor injuries. The devastating incident, confirmed by Las Vegas Sheriff Kevin McMahill, has sparked discussions about the role of artificial intelligence and its darker implications in today’s world.

The Las Vegas Metro Police Department said that the truck’s bed had an alarming mix of gasoline canisters, camp fuel, and large firework mortars. Authorities believe these items were tied to a detonation system controlled by the driver, who appeared to have meticulously planned the attack. The combination of materials painted a chilling picture of a calculated and premeditated act.

The driver, identified as 37-year-old Matthew Livelsberger, was an active-duty soldier in the US Army. Investigators found a “possible manifesto” saved on his phone, along with emails to a podcaster and other documents outlining his intentions. Surveillance footage revealed him preparing for the explosion by pouring fuel onto the truck at a stop before driving to the hotel. Despite all of the preparations, officials confirmed that Livelsberger had no prior criminal record and was not under surveillance at the time of the incident.

One revelation drew significant public attention: Livelsberger had used ChatGPT to aid in his plans. Law enforcement reported that he queried the AI tool for information about assembling explosives, calculating how fast a round would need to be fired to detonate the materials, and understanding which legal loopholes might allow him to acquire the components. Sheriff McMahill addressed this unsettling development, stating, “We know AI was going to change the game for all of us at some point or another, in really all of our lives. I think this is the first incident that I’m aware of on US soil where ChatGPT is utilised to help an individual build a particular device.”

Tragically, Livelsberger’s life ended at the scene with a self-inflicted gunshot wound. Authorities identified his body through DNA and tattoos due to the extensive burns he sustained in the explosion.

OpenAI, the company behind ChatGPT, responded to the incident with a statement expressing their sorrow and emphasising their commitment to responsible AI use. “Our models are designed to refuse harmful instructions and minimise harmful content. In this case, ChatGPT responded with information already publicly available on the internet and provided warnings against harmful or illegal activities,” the statement read. OpenAI also confirmed their cooperation with law enforcement in the ongoing investigation.

The explosion itself was described as a deflagration—a slower, less destructive reaction compared to a high-explosive detonation. Investigators suspect the muzzle flash from a gunshot may have ignited fuel vapours or fireworks fuses in the truck, triggering a chain reaction. Other possibilities, though, such as an electrical short, have not been ruled out.

The Las Vegas explosion is a grim reminder of technology’s double-edged nature. While AI has enormous potential, its darker applications are forcing society to consider how to prevent such tragedies in the future.

(Photo by Unsplash)

See also: OpenAI: Musk wanted us to merge with Tesla or take ‘full control’

NVIDIA advances AI frontiers with CES 2025 announcements (7 January 2025)

NVIDIA CEO and founder Jensen Huang took the stage for a keynote at CES 2025 to outline the company’s vision for the future of AI in gaming, autonomous vehicles (AVs), robotics, and more.

“AI has been advancing at an incredible pace,” Huang said. “It started with perception AI — understanding images, words, and sounds. Then generative AI — creating text, images, and sound. Now, we’re entering the era of ‘physical AI,’ AI that can perceive, reason, plan, and act.”

With NVIDIA’s platforms and GPUs at the core, Huang explained how the company continues to fuel breakthroughs across multiple industries while unveiling innovations such as the Cosmos platform, next-gen GeForce RTX 50 Series GPUs, and compact AI supercomputer Project DIGITS. 

RTX 50 series: “The GPU is a beast”

One of the most significant announcements during CES 2025 was the introduction of the GeForce RTX 50 Series, powered by NVIDIA Blackwell architecture. Huang debuted the flagship RTX 5090 GPU, boasting 92 billion transistors and achieving an impressive 3,352 trillion AI operations per second (TOPS).

“GeForce enabled AI to reach the masses, and now AI is coming home to GeForce,” said Huang.

Holding the blacked-out GPU, Huang called it “a beast,” highlighting its advanced features, including dual cooling fans and its ability to leverage AI for revolutionary real-time graphics.

Set for a staggered release in early 2025, the RTX 50 Series includes the flagship RTX 5090 and RTX 5080 (available 30 January), followed by the RTX 5070 Ti and RTX 5070 (February). Laptop GPUs join the lineup in March.

In addition, NVIDIA introduced DLSS 4 – featuring ‘Multi-Frame Generation’ technology – which boosts gaming performance up to eightfold by generating three additional frames for every frame rendered.
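
Read literally, three generated frames per rendered frame is a 4x frame-rate multiplier on its own, so the “up to eightfold” figure presumably also assumes roughly a further 2x uplift from DLSS super-resolution upscaling. The sketch below works through that combination; the base frame rate and the 2x upscaling figure are illustrative assumptions, not published benchmarks.

```python
# Illustrative arithmetic only -- the base frame rate and the upscaling uplift
# are assumptions, not NVIDIA benchmark figures.
base_fps = 30                 # frames fully rendered per second without DLSS (assumed)
upscaling_uplift = 2.0        # assumed speed-up from DLSS super-resolution upscaling
generated_per_rendered = 3    # Multi-Frame Generation: 3 AI frames per rendered frame

rendered_fps = base_fps * upscaling_uplift
displayed_fps = rendered_fps * (1 + generated_per_rendered)

print(f"Rendered: {rendered_fps:.0f} fps, displayed: {displayed_fps:.0f} fps")
print(f"Overall multiplier vs. native rendering: {displayed_fps / base_fps:.0f}x")
# -> Rendered: 60 fps, displayed: 240 fps; overall multiplier: 8x
```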

Other advancements, such as RTX Neural Shaders and RTX Mega Geometry, promise heightened realism in video games, including precise face and hair rendering using generative AI.

Cosmos: Ushering in physical AI

NVIDIA took another step forward with the Cosmos platform at CES 2025, which Huang described as a “game-changer” for robotics, industrial AI, and AVs. Much like the impact of large language models on generative AI, Cosmos represents a new frontier for AI applications in robotics and autonomous systems.

“The ChatGPT moment for general robotics is just around the corner,” Huang declared.

Cosmos integrates generative models, tokenisers, and video processing frameworks to enable robots and vehicles to simulate potential outcomes and predict optimal actions. By ingesting text, image, and video prompts, Cosmos can generate “virtual world states,” tailored for complex robotics and AV use cases involving real-world environments and lighting.

Top robotics and automotive leaders – including XPENG, Hyundai Motor Group, and Uber – are among the first to adopt Cosmos, which is available on GitHub via an open licence.

Pras Velagapudi, CTO at Agility, comments: “Data scarcity and variability are key challenges to successful learning in robot environments. Cosmos’ text-, image- and video-to-world capabilities allow us to generate and augment photorealistic scenarios for a variety of tasks that we can use to train models without needing as much expensive, real-world data capture.”

Empowering developers with AI models

NVIDIA also unveiled new AI foundation models for RTX PCs, which aim to supercharge content creation, productivity, and enterprise applications. These models are packaged as NVIDIA NIM microservices and are designed to integrate with the RTX 50 Series hardware.

Huang emphasised the accessibility of these tools: “These AI models run in every single cloud because NVIDIA GPUs are now available in every cloud.”

NVIDIA is doubling down on its push to equip developers with advanced tools for building AI-driven solutions. The company introduced AI Blueprints: pre-configured tools for crafting agents tailored to specific enterprise needs, such as content generation, fraud detection, and video management.

“They are completely open source, so you could take it and modify the blueprints,” explains Huang.

Huang also announced the release of Llama Nemotron, designed for developers to build and deploy powerful AI agents.

Ahmad Al-Dahle, VP and Head of GenAI at Meta, said: “Agentic AI is the next frontier of AI development, and delivering on this opportunity requires full-stack optimisation across a system of LLMs to deliver efficient, accurate AI agents.

“Through our collaboration with NVIDIA and our shared commitment to open models, the NVIDIA Llama Nemotron family built on Llama can help enterprises quickly create their own custom AI agents.”

Philipp Herzig, Chief AI Officer at SAP, added: “AI agents that collaborate to solve complex tasks across multiple lines of the business will unlock a whole new level of enterprise productivity beyond today’s generative AI scenarios.

“Through SAP’s Joule, hundreds of millions of enterprise users will interact with these agents to accomplish their goals faster than ever before. NVIDIA’s new open Llama Nemotron model family will foster the development of multiple specialised AI agents to transform business processes.”

Safer and smarter autonomous vehicles

NVIDIA’s announcements extended to the automotive industry, where its DRIVE Hyperion AV platform is fostering a safer and smarter future for AVs. Built on the new NVIDIA AGX Thor system-on-a-chip (SoC), the platform allows vehicles to achieve next-level functional safety and autonomous capabilities using generative AI models.

“The autonomous vehicle revolution is here,” Huang said. “Building autonomous vehicles, like all robots, requires three computers: NVIDIA DGX to train AI models, Omniverse to test-drive and generate synthetic data, and DRIVE AGX, a supercomputer in the car.”

Huang explained that synthetic data is critical for AV development, as it dramatically enhances real-world datasets. NVIDIA’s AI data factories – powered by Omniverse and Cosmos platforms – generate synthetic driving scenarios, increasing the effectiveness of training data exponentially.

Toyota, the world’s largest automaker, is committed to using NVIDIA DRIVE AGX Orin and the safety-certified NVIDIA DriveOS to develop its next-generation vehicles. Heavyweights such as JLR, Mercedes-Benz, and Volvo Cars have also adopted DRIVE Hyperion.

Project DIGITS: Compact AI supercomputer

Huang concluded his NVIDIA keynote at CES 2025 with a final “one more thing” announcement: Project DIGITS, NVIDIA’s smallest yet most powerful AI supercomputer, powered by the cutting-edge GB10 Grace Blackwell Superchip.

“This is NVIDIA’s latest AI supercomputer,” Huang declared, revealing its compact size, claiming it’s portable enough to “practically fit in a pocket.”

Project DIGITS enables developers and engineers to train and deploy AI models directly from their desks, providing the full power of NVIDIA’s AI stack in a compact form.

(Image: Project DIGITS, NVIDIA's compact AI supercomputer, debuted at CES 2025)

Set to launch in May, Project DIGITS represents NVIDIA’s push to make AI supercomputing accessible to individuals as well as organisations.

Vision for tomorrow

Reflecting on NVIDIA’s journey since inventing the programmable GPU in 1999, Huang described the past 12 years of AI-driven change as transformative.

“Every single layer of the technology stack has been fundamentally transformed,” he said.

With advancements spanning gaming, AI-driven agents, robotics, and autonomous vehicles, Huang foresees an exciting future.

“All of the enabling technologies I’ve talked about today will lead to surprising breakthroughs in general robotics and AI over the coming years,” Huang concludes.

(Image Credit: NVIDIA)

See also: Sam Altman, OpenAI: ‘Lucky and humbling’ to work towards superintelligence

Google launches Veo and Imagen 3 generative AI models (3 December 2024)

Google Cloud has launched two generative AI models on its Vertex AI platform, Veo and Imagen 3, amid reports of surging revenue growth among enterprises leveraging the technology.

According to Google Cloud’s data, 86% of enterprise companies currently using generative AI in production environments have witnessed increased revenue, with an estimated average growth of 6%. 

This metric has driven the tech giant’s latest innovation push, resulting in the introduction of Veo – its most sophisticated video generation model to date – and Imagen 3, an advanced text-to-image generation system.

Breaking ground

Veo, now available in private preview on Vertex AI, represents a milestone as Google becomes the first hyperscaler to offer an image-to-video model. The technology enables businesses to generate high-quality videos from simple text or image prompts, potentially revolutionising video production workflows across industries.

Imagen 3 – scheduled for release to all Vertex AI customers next week – promises unprecedented realism in generated images, with marked improvements in detail, lighting, and artifact reduction. The model includes new features for enterprise customers on an allowlist, including advanced editing capabilities and brand customisation options.

(Image: example images generated by Google's Imagen 3 model, available on the Vertex AI platform)

Transforming operations

Several major firms have begun implementing these technologies into their operations.

Mondelez International, the company behind brands such as Oreo, Cadbury, and Chips Ahoy!, is using the technology to accelerate campaign content creation across its global portfolio of brands.

Jon Halvorson, SVP of Consumer Experience & Digital Commerce at Mondelez International, explained: “Our collaboration with Google Cloud has been instrumental in harnessing the power of generative AI, notably through Imagen 3, to revolutionise content production.

“This technology has enabled us to produce hundreds of thousands of customised assets, enhancing creative quality while significantly reducing both time to market and costs.”

Knowledge sharing platform Quora has developed Poe, a platform that enables users to interact with generative AI models. Veo and Imagen are now integrated with Poe.

Spencer Chan, Product Lead for Poe at Quora, commented: “We created Poe to democratise access to the world’s best gen AI models. With Veo, we’re now enabling millions of users to bring their ideas to life through stunning, high-quality generative video.”

Safety and security

In response to growing concerns about AI-generated content, Google has implemented robust safety features in both models. These include:

  • Digital watermarking through Google DeepMind’s SynthID.
  • Built-in safety filters to prevent harmful content creation.
  • Strict data governance policies to ensure customer data protection.
  • Industry-first copyright indemnity for generative AI services.

The launch of these new models signals Google’s growing influence in the enterprise AI space and suggests a shift toward more sophisticated, integrated AI solutions for business applications.

(Imagery Credit: Google Cloud)

See also: Alibaba Marco-o1: Advancing LLM reasoning capabilities

Generative AI use soars among Brits, but is it sustainable? (27 November 2024)

A survey by CloudNine PR shows that 83% of UK adults are aware of generative AI tools, and 45% of those familiar with them want companies to be transparent about the environmental costs associated with the technologies.

With data centres burning vast amounts of energy, the growing demand for GenAI has sparked a debate about its sustainability.

The cost of intelligence: Generative AI’s carbon footprint

Behind every AI-generated email, idea, or recommendation are data centres running thousands of energy-hungry servers. Data centres are responsible for both training the large language models that power generative AI and processing individual user queries. Unlike a simple Google search, which uses relatively little energy, a single generative AI request can consume up to ten times as much electricity.

The numbers are staggering. If all nine billion daily Google searches worldwide were replaced with generative AI tasks, the additional electricity demand would match the annual energy consumption of 1.5 million EU residents. According to analysts at Morgan Stanley, the energy demands of generative AI are expected to grow by 70% annually until 2027. By that point, the energy required to support generative AI systems could rival the electricity needs of an entire country—Spain, for example, based on its 2022 usage.
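
As a rough illustration of how such an estimate is put together, the sketch below reproduces the comparison with back-of-the-envelope arithmetic. Every constant is an assumption chosen for illustration (per-search energy, the tenfold GenAI multiplier quoted above, and an assumed average annual electricity use per EU resident), not a measured figure.

```python
# Back-of-the-envelope estimate; the constants below are assumptions for illustration.
searches_per_day = 9e9          # daily Google searches, as cited in the article
search_wh = 0.3                 # assumed energy per conventional search (Wh)
genai_multiplier = 10           # article: a GenAI request can use ~10x the electricity
eu_kwh_per_resident = 6_200     # assumed annual electricity use per EU resident (kWh)

extra_wh_per_day = searches_per_day * search_wh * (genai_multiplier - 1)
extra_kwh_per_year = extra_wh_per_day * 365 / 1_000

print(f"Extra demand: ~{extra_kwh_per_year / 1e9:.1f} TWh per year")
print(f"Equivalent to ~{extra_kwh_per_year / eu_kwh_per_resident / 1e6:.1f} million EU residents")
# With these assumed constants, the result lands near the ~1.5 million figure quoted above.
```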

UK consumers want greener AI practices

The survey also highlights growing awareness among UK consumers about the environmental implications of generative AI. Nearly one in five respondents said they don’t trust generative AI providers to manage their environmental impact responsibly. Among regular users of these tools, 10% expressed a willingness to pay a premium for products or services that prioritise energy efficiency and sustainability.

Interestingly, over a third (35%) of respondents think generative AI tools should “actively remind” users of their environmental impact. While this may seem a small step, it has the potential to encourage more mindful usage and put pressure on companies to adopt greener technologies.

Efforts to tackle the environmental challenge

Fortunately, some companies and policymakers are beginning to address these concerns. In the United States, the Artificial Intelligence Environmental Impacts Act was introduced earlier this year. The legislation aims to standardise how AI companies measure and report carbon emissions. It also provides a voluntary framework for developers to evaluate and disclose their systems’ environmental impact, pushing the industry towards greater transparency.

Major players in the tech industry are also stepping up. Companies like Salesforce have voiced support for legislation requiring standardised methods to measure and report AI’s carbon footprint. Experts point to several practical ways to reduce generative AI’s environmental impact, including adopting energy-efficient hardware, using sustainable cooling methods in data centres, and transitioning to renewable energy sources.

Despite these efforts, the urgency to address generative AI’s environmental impact remains critical. As Uday Radia, owner of CloudNine PR, puts it: “Generative AI has huge potential to make our lives better, but there is a race against time to make it more sustainable before it gets out of control.”

(Photo by Unsplash)

See also: The AI revolution: Reshaping data centres and the digital landscape 

Generative AI: Disparities between C-suite and practitioners (19 November 2024)

A report by Publicis Sapient sheds light on the disparities between the C-suite and practitioners, dubbed the “V-suite,” in their perceptions and adoption of generative AI.

The report reveals a stark contrast in how the C-suite and V-suite view the potential of generative AI. While the C-suite focuses on visible use cases such as customer experience, service, and sales, the V-suite sees opportunities across various functional areas, including operations, HR, and finance.

Risk perception

The divide extends to risk perception as well. Fifty-one percent of C-level respondents expressed more concern about the risks and ethics of generative AI than about other emerging technologies. In contrast, only 23 percent of the V-suite shared these worries.

Simon James, Managing Director of Data & AI at Publicis Sapient, said: “It’s likely the C-suite is more worried about abstract, big-picture dangers – such as Hollywood-style scenarios of a rapidly-evolving superintelligence – than the V-suite.”

The report also highlights the uncertainty surrounding generative AI maturity. Organisations can be at various stages of maturity simultaneously, with many struggling to define what success looks like. More than two-thirds of respondents lack a way to measure the success of their generative AI projects.

Navigating the generative AI landscape

Despite the C-suite’s focus on high-visibility use cases, generative AI is quietly transforming back-office functions. More than half of the V-suite respondents ranked generative AI as extremely important in areas like finance and operations over the next three years, compared to a smaller percentage of the C-suite.

To harness the full potential of generative AI, the report recommends a portfolio approach to innovation projects. Leaders should focus on delivering projects, controlling shadow IT, avoiding duplication, empowering domain experts, connecting business units with the CIO’s office, and engaging the risk office early and often.

Daniel Liebermann, Managing Director at Publicis Sapient, commented: “It’s as hard for leaders to learn how individuals within their organisation are using ChatGPT or Microsoft Copilot as it is to understand how they’re using the internet.”

The path forward

The report concludes with five steps to maximise innovation: adopting a portfolio approach, improving communication between the CIO’s office and the risk office, seeking out innovators within the organisation, using generative AI to manage information, and empowering team members through company culture and upskilling.

As generative AI continues to evolve, organisations must bridge the gap between the C-suite and V-suite to unlock its full potential. The future of business transformation lies in harnessing the power of a decentralised, bottom-up approach to innovation.

See also: EU introduces draft regulatory guidance for AI models

Penguin Random House protects its books from AI training use (22 October 2024)

Penguin Random House (PRH) has taken a significant step in response to rising concerns about the use of intellectual property to train AI systems.

The publisher has introduced a new statement to the copyright pages of both new and reprinted books, stating, “No part of this book may be used or reproduced in any manner for the purpose of training artificial intelligence technologies or systems.” This change is supplemented by a section that excludes PRH’s works from the European Union’s text and data mining exception, in accordance with applicable copyright laws.

As one of the first major publishers to address the issue of AI training explicitly, PRH is responding to the broader debate about how tech companies use copyrighted content to train large language models (LLMs), like those used in chatbots and other AI tools. Publishers have become increasingly concerned about the possible misuse of their intellectual property in recent years, especially after reports arose that copyrighted books were utilised by AI firms to enhance these technologies.

PRH’s move to amend its copyright page is an attempt to protect its content pre-emptively, even though such statements do not change the underlying legal framework of copyright. The clauses work similarly to a “robots.txt” file, which websites use to request that their content not be scraped by bots or AI systems. While these notices indicate the publisher’s intent, they are not legally binding, and existing copyright protections apply in the absence of such disclaimers.
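
For readers unfamiliar with that mechanism, here is a minimal Python sketch of the robots.txt analogy, using the standard-library robotparser to show how a compliant crawler would honour such an opt-out request. The URL is illustrative; GPTBot is OpenAI’s crawler user-agent; and, as with PRH’s notice, nothing here technically or legally forces a crawler to comply.

```python
from urllib import robotparser

# A hypothetical robots.txt in which a publisher asks an AI crawler not to fetch anything.
# The Disallow rule is a request to well-behaved crawlers, not an enforcement mechanism.
policy_lines = [
    "User-agent: GPTBot",
    "Disallow: /",
]

parser = robotparser.RobotFileParser()
parser.parse(policy_lines)

print(parser.can_fetch("GPTBot", "https://example.com/books/some-title"))    # False -- asked not to
print(parser.can_fetch("OtherBot", "https://example.com/books/some-title"))  # True -- no rule applies
```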

PRH’s move also emphasises the ongoing tension between content creators and the AI industry, as more authors, publishers, and other creatives ask for stronger protections. The Authors’ Licensing and Collecting Society (ALCS) has been outspoken in its support for PRH’s actions. ALCS CEO Barbara Hayes expressed approval of the updated copyright language, emphasising the need for publishers to protect their works from unauthorised use in AI training.

However, some contend that simply changing copyright pages may not be enough. The Society of Authors (SoA) applauds PRH’s efforts, but believes more needs to be done to guarantee that authors’ rights are properly protected. SoA CEO Anna Ganley has called on publishers to go beyond these statements and incorporate explicit protections in author contracts, making sure that writers are informed before their work is used in AI-related initiatives.

As AI advances, the debate over its usage of copyrighted content remains far from over. PRH’s action could herald a larger shift in the publishing sector, but how other publishers and the legal system react remains to be seen.

(Image by StockSnap)

See also: AI governance gap: 95% of firms haven’t implemented frameworks

King’s Business School: How AI is transforming problem-solving (7 October 2024)

A new study by researchers at King’s Business School and Wazoku has revealed that AI is transforming global problem-solving.

The report found that nearly half (46%) of Wazoku’s 700,000-strong network of problem solvers had utilised generative AI (GenAI) to work on innovative ideas over the past year. This network – known as the Wazoku Crowd – comprises a diverse group of professionals including scientists, pharmacists, engineers, PhD students, CEOs, start-ups, and business leaders.

Perhaps more strikingly, almost a quarter (22%) of respondents reported using GenAI or LLM tools such as ChatGPT and Claude for at least half of their idea submissions, with 8% employing these technologies for every single submission. Of those using GenAI, 47% are leveraging it specifically for idea generation.

The Wazoku Crowd’s collective intelligence is harnessed to solve ‘challenges’ – requests for ideas submitted by enterprises – with an impressive success rate of over 80%.

Simon Hill, CEO of Wazoku, commented on the findings: “There’s an incredible amount of hype with GenAI, but alongside that there is enormous curiosity. Getting immersed in something and being curious is an innovator’s dream, so there is rich potential with GenAI.”

However, Hill also urged caution: “A note of caution, though – it is best used to generate interest, not solutions. Human ingenuity and creativity are still best, although using GenAI can undoubtedly make that process more effective.”

The study revealed that the most common application of GenAI was in research and learning, with 85% of respondents using it for this purpose. Additionally, around one-third of the Wazoku Crowd employed GenAI for report structuring, writing, and data analysis and insight.

The research was conducted in partnership with Oguz A. Acar, Professor of Marketing and Innovation at King’s Business School, King’s College London. Professor Acar viewed the study as a crucial first step towards understanding AI’s potential and limitations in tackling complex innovation challenges.

“Everyone’s trying to figure out what AI can and can’t do, and this survey is a step forward in understanding that,” Professor Acar stated. “It reveals that some crowd members view GenAI as a valuable ally, using it to research, create, and communicate more effectively.”

“While perhaps it’s no surprise that those open to innovation are curious about new tools, the survey also shows mixed opinions. Most people haven’t used GenAI tools yet, highlighting that we’re only beginning to uncover AI’s potential in innovative problem-solving.”

Wazoku collaborates with a range of customers, including Sanofi, A2A, Bill & Melinda Gates Foundation, and numerous global enterprise businesses, government departments, and not-for-profits, to crowdsource ideas and innovation.

Recently, Wazoku launched its own conversational AI to aid innovation. Dubbed Jen AI, this digital innovation assistant has access to Wazoku’s connected innovation management suite—aimed at accelerating decision-making around innovation and enhancing productivity to deliver consistent, scalable results.

“The solutions to the world’s problems are complex, and the support of AI brings vast benefits in terms of efficiency, creativity, and insight generation,” explained Hill.

As the adoption of AI in innovation processes continues to grow, it’s clear that – while these tools offer significant potential – they are best used to augment rather than replace human creativity and problem-solving skills.

(Photo by Ally Griffin)

See also: Ivo Everts, Databricks: Enhancing open-source AI and improving data governance

Han Heloir, MongoDB: The role of scalable databases in AI-powered apps (30 September 2024)

As data management grows more complex and modern applications extend the capabilities of traditional approaches, AI is revolutionising application scaling.

(Image: Han Heloir, EMEA gen AI senior solutions architect, MongoDB)

In addition to freeing operators from outdated, inefficient methods that require careful supervision and extra resources, AI enables real-time, adaptive optimisation of application scaling. Ultimately, these benefits combine to enhance efficiency and reduce costs for targeted applications.

With its predictive capabilities, AI ensures that applications scale efficiently, improving performance and resource allocation—marking a major advance over conventional methods.

Ahead of AI & Big Data Expo Europe, Han Heloir, EMEA gen AI senior solutions architect at MongoDB, discusses the future of AI-powered applications and the role of scalable databases in supporting generative AI and enhancing business processes.

AI News: As AI-powered applications continue to grow in complexity and scale, what do you see as the most significant trends shaping the future of database technology?

Heloir: While enterprises are keen to leverage the transformational power of generative AI technologies, the reality is that building a robust, scalable technology foundation involves more than just choosing the right technologies. It’s about creating systems that can grow and adapt to the evolving demands of generative AI, demands that are changing quickly, some of which traditional IT infrastructure may not be able to support. That is the uncomfortable truth about the current situation.

Today’s IT architectures are being overwhelmed by unprecedented data volumes generated from increasingly interconnected data sets. Traditional systems, designed for less intensive data exchanges, are currently unable to handle the massive, continuous data streams required for real-time AI responsiveness. They are also unprepared to manage the variety of data being generated.

The generative AI ecosystem often comprises a complex set of technologies. Each layer of technology—from data sourcing to model deployment—increases functional depth and operational costs. Simplifying these technology stacks isn’t just about improving operational efficiency; it’s also a financial necessity.

AI News: What are some key considerations for businesses when selecting a scalable database for AI-powered applications, especially those involving generative AI?

Heloir: Businesses should prioritise flexibility, performance and future scalability. Here are a few key reasons:

  • The variety and volume of data will continue to grow, requiring the database to handle diverse data types—structured, unstructured, and semi-structured—at scale. Selecting a database that can manage such variety without complex ETL processes is important.
  • AI models often need access to real-time data for training and inference, so the database must offer low latency to enable real-time decision-making and responsiveness.
  • As AI models grow and data volumes expand, databases must scale horizontally, to allow organisations to add capacity without significant downtime or performance degradation.
  • Seamless integration with data science and machine learning tools is crucial, and native support for AI workflows—such as managing model data, training sets and inference data—can enhance operational efficiency.

AI News: What are the common challenges organisations face when integrating AI into their operations, and how can scalable databases help address these issues?

Heloir: There are a variety of challenges that organisations can run into when adopting AI. These include the massive amounts of data from a wide variety of sources that are required to build AI applications. Scaling these initiatives can also put strain on the existing IT infrastructure and once the models are built, they require continuous iteration and improvement.

To make this easier, a database that scales can help simplify the management, storage, and retrieval of diverse datasets. Such a database offers elasticity, allowing businesses to handle fluctuating demands while sustaining performance and efficiency. It also accelerates time-to-market for AI-driven innovations by enabling rapid data ingestion and retrieval, facilitating faster experimentation.

AI News: Could you provide examples of how collaborations between database providers and AI-focused companies have driven innovation in AI solutions?

Heloir: Many businesses struggle to build generative AI applications because the technology evolves so quickly. Limited expertise and the increased complexity of integrating diverse components further complicate the process, slowing innovation and hindering the development of AI-driven solutions.

One way we address these challenges is through our MongoDB AI Applications Program (MAAP), which provides customers with resources to assist them in putting AI applications into production. This includes reference architectures and an end-to-end technology stack that integrates with leading technology providers, professional services and a unified support system.

MAAP categorises customers into four groups, ranging from those seeking advice and prototyping to those developing mission-critical AI applications and overcoming technical challenges. MongoDB’s MAAP enables faster, seamless development of generative AI applications, fostering creativity and reducing complexity.

AI News: How does MongoDB approach the challenges of supporting AI-powered applications, particularly in industries that are rapidly adopting AI?

Heloir: Ensuring you have the underlying infrastructure to build what you need is always one of the biggest challenges organisations face.

To build AI-powered applications, the underlying database must be capable of running queries against rich, flexible data structures. With AI, data structures can become very complex. This is one of the biggest challenges organisations face when building AI-powered applications, and it’s precisely what MongoDB is designed to handle. We unify source data, metadata, operational data, vector data and generated data—all in one platform.
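
As a minimal sketch of what that unification can look like in practice, the aggregation below combines a vector similarity stage with ordinary operational fields in a single query. It assumes a MongoDB Atlas cluster with an Atlas Vector Search index named "embedding_index" on an articles collection and a query vector produced elsewhere; all of those names are illustrative, not taken from the interview.

```python
# Minimal sketch, assuming an Atlas cluster with a Vector Search index
# named "embedding_index" on articles.embedding; names are illustrative.
from pymongo import MongoClient

client = MongoClient("mongodb+srv://<user>:<password>@cluster.example.mongodb.net")
articles = client["demo"]["articles"]

query_vector = [0.02, -0.41, 0.77, 0.15]  # in practice, produced by your embedding model

pipeline = [
    {
        "$vectorSearch": {                 # Atlas Vector Search stage
            "index": "embedding_index",
            "path": "embedding",
            "queryVector": query_vector,
            "numCandidates": 100,
            "limit": 5,
        }
    },
    # Operational metadata sits in the same documents,
    # so it can be projected alongside the similarity score.
    {"$project": {"title": 1, "source": 1, "score": {"$meta": "vectorSearchScore"}}},
]

for doc in articles.aggregate(pipeline):
    print(doc)
```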

AI News: What future developments in database technology do you anticipate, and how is MongoDB preparing to support the next generation of AI applications?

Heloir: Our key values are the same today as they were when MongoDB initially launched: we want to make developers’ lives easier and help them drive business ROI. This remains unchanged in the age of artificial intelligence. We will continue to listen to our customers, assist them in overcoming their biggest difficulties, and ensure that MongoDB has the features they require to develop the next [generation of] great applications.

(Photo by Caspar Camille Rubin)

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post Han Heloir, MongoDB: The role of scalable databases in AI-powered apps appeared first on AI News.

]]>
https://www.artificialintelligence-news.com/news/han-heloir-mongodb-the-future-of-ai-powered-applications-and-scalable-databases/feed/ 0
Alibaba Cloud unleashes over 100 open-source AI models https://www.artificialintelligence-news.com/news/alibaba-cloud-unleashes-over-100-open-source-ai-models/ https://www.artificialintelligence-news.com/news/alibaba-cloud-unleashes-over-100-open-source-ai-models/#respond Fri, 20 Sep 2024 13:08:45 +0000 https://www.artificialintelligence-news.com/?p=16135 Alibaba Cloud has open-sourced more than 100 of its newly-launched AI models, collectively known as Qwen 2.5. The announcement was made during the company’s annual Apsara Conference. The cloud computing arm of Alibaba Group has also unveiled a revamped full-stack infrastructure designed to meet the surging demand for robust AI computing. This new infrastructure encompasses […]

The post Alibaba Cloud unleashes over 100 open-source AI models appeared first on AI News.

]]>
Alibaba Cloud has open-sourced more than 100 of its newly-launched AI models, collectively known as Qwen 2.5. The announcement was made during the company’s annual Apsara Conference.

The cloud computing arm of Alibaba Group has also unveiled a revamped full-stack infrastructure designed to meet the surging demand for robust AI computing. This new infrastructure encompasses innovative cloud products and services that enhance computing, networking, and data centre architecture, all aimed at supporting the development and wide-ranging applications of AI models.

Eddie Wu, Chairman and CEO of Alibaba Cloud Intelligence, said: “Alibaba Cloud is investing, with unprecedented intensity, in the research and development of AI technology and the building of its global infrastructure. We aim to establish an AI infrastructure of the future to serve our global customers and unlock their business potential.”

The newly-released Qwen 2.5 models range from 0.5 to 72 billion parameters in size and boast enhanced knowledge and stronger capabilities in maths and coding. Supporting over 29 languages, these models cater to a wide array of AI applications both at the edge and in the cloud across various sectors, from automotive and gaming to scientific research.

Alibaba Cloud’s open-source AI models gain traction

Since its debut in April 2023, the Qwen model series has garnered significant traction, surpassing 40 million downloads across platforms such as Hugging Face and ModelScope. These models have also inspired the creation of over 50,000 derivative models on Hugging Face alone.
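
For readers who want a feel for how these open models are typically pulled down and run, the sketch below uses the Hugging Face transformers library with a small instruct variant. The exact model ID and generation settings are assumptions based on the usual Qwen naming convention rather than details from the announcement.

```python
# Rough sketch of running a small Qwen2.5 chat model locally via Hugging Face;
# the model ID and settings are assumptions, not from the announcement.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen2.5-0.5B-Instruct"  # assumed smallest instruct variant

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "Summarise what a vector database is in one sentence."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```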

Jingren Zhou, CTO of Alibaba Cloud Intelligence, commented: “This initiative is set to empower developers and corporations of all sizes, enhancing their ability to leverage AI technologies and further stimulating the growth of the open-source community.”

In addition to the open-source models, Alibaba Cloud announced an upgrade to its proprietary flagship model, Qwen-Max. The enhanced version reportedly demonstrates performance on par with other state-of-the-art models in areas such as language comprehension, reasoning, mathematics, and coding.

The company has also expanded its multimodal capabilities with a new text-to-video model as part of its Tongyi Wanxiang large model family. This model can generate high-quality videos in various visual styles, from realistic scenes to 3D animation, based on Chinese and English text instructions.

Furthermore, Alibaba Cloud introduced Qwen2-VL, an updated vision language model capable of comprehending videos lasting over 20 minutes and supporting video-based question-answering. The company also launched AI Developer, a Qwen-powered AI assistant designed to support programmers by automating tasks such as requirements analysis, coding, and bug identification and fixing.

To support these AI advancements, Alibaba Cloud has announced several infrastructure upgrades, including:

  • CUBE DC 5.0, a next-generation data centre architecture that increases energy and operational efficiency.
  • Alibaba Cloud Open Lake, a solution to maximise data utility for generative AI applications.
  • PAI AI Scheduler, a proprietary cloud-native scheduling engine for enhanced computing resource management.
  • DMS: OneMeta+OneOps, a platform for unified management of metadata across multiple cloud environments.
  • 9th Generation Enterprise Elastic Compute Service (ECS) instance, offering improved performance for various applications.

These updates from Alibaba Cloud – including the release of over 100 open-source models – aim to provide comprehensive support for customers and partners to maximise the benefits of the latest technology in building more efficient, sustainable, and inclusive AI applications.

(Image Source: www.alibabagroup.com)

See also: Tech industry giants urge EU to streamline AI regulations

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post Alibaba Cloud unleashes over 100 open-source AI models appeared first on AI News.

]]>
https://www.artificialintelligence-news.com/news/alibaba-cloud-unleashes-over-100-open-source-ai-models/feed/ 0
Amazon partners with Anthropic to enhance Alexa https://www.artificialintelligence-news.com/news/amazon-partners-anthropic-enhance-alexa/ https://www.artificialintelligence-news.com/news/amazon-partners-anthropic-enhance-alexa/#respond Mon, 02 Sep 2024 13:18:08 +0000 https://www.artificialintelligence-news.com/?p=15929 Amazon is gearing up to roll out a revamped version of its Alexa voice assistant, which is expected to be available this October, right before the US shopping rush. Internally referred to as “Remarkable,” the new technology will be powered by Anthropic’s Claude AI models. Sources close to the matter have indicated that this shift […]

The post Amazon partners with Anthropic to enhance Alexa appeared first on AI News.

]]>
Amazon is gearing up to roll out a revamped version of its Alexa voice assistant, which is expected to be available this October, right before the US shopping rush.

Internally referred to as “Remarkable,” the new technology will be powered by Anthropic’s Claude AI models. Sources close to the matter have indicated that this shift occurred due to the underperformance of Amazon’s in-house software.

The enhanced Alexa will operate using advanced generative AI to handle more complex queries. Amazon plans to offer the new Alexa as a subscription service, priced between $5 and $10 per month, while the classic version of Alexa will remain free. This approach marks a significant change for Amazon and suggests that the company aims to turn this voice assistant into a profitable venture after years of limited success in generating revenue through this platform.

Amazon’s decision to quickly adopt an external model, Claude, indicates a strategic shift. Amazon typically prefers to build everything in-house to minimise its dependence on third-party vendors, thereby avoiding external influences on customer behaviour and business strategies, as well as external influences on who controls data. However, it seems that Amazon’s traditional strategy does not provide the massive AI capability needed, or perhaps Amazon has realised the need for more powerful AI. It is also worth noting that the involved AI developer, OpenAI, is affiliated with major technology companies like Apple and Microsoft in developing AI technologies.

The launch of the “Remarkable” Alexa is anticipated during Amazon’s annual devices and services event in September, though the company has not confirmed the exact date. This event will also mark the first public appearance of Panos Panay, the new head of Amazon’s devices division, who has taken over from long-time executive David Limp.

The updated Alexa is expected to be a more interactive and intuitive assistant, with much of the new functionality stemming from its conversational capabilities. Rather than simply recognising patterns in people’s speech, the assistant would be able to hold conversations that build on previous interactions. The most likely features include personalised shopping advice, news aggregation, and more advanced home automation. Whether customers will pay for Alexa likely depends on the final set of available features, a question that may be particularly pressing for Amazon given that many customers already pay for Prime membership.
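
To give a sense of the mechanics behind conversations that build on previous interactions, the sketch below resends the running dialogue to Anthropic’s Claude on each turn via the official Python client. The model name, prompts, and memory strategy are assumptions for illustration and say nothing about how Amazon actually integrates Claude into Alexa.

```python
# Illustrative only: a toy multi-turn exchange with Claude via Anthropic's Python SDK.
# The model name and prompts are assumptions; this is not Amazon's Alexa integration.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

history = []  # the running conversation is resent each turn, giving the model context

def ask(user_text: str) -> str:
    history.append({"role": "user", "content": user_text})
    response = client.messages.create(
        model="claude-3-5-sonnet-20240620",  # assumed model identifier
        max_tokens=200,
        messages=history,
    )
    reply = response.content[0].text
    history.append({"role": "assistant", "content": reply})
    return reply

print(ask("Find me a lightweight rain jacket under $80."))
print(ask("Which of those would pack smallest for travel?"))  # relies on the earlier turn
```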

The plans for Alexa are ambitious, but they also carry significant risks. For the new version to be successful, internal performance benchmarks must be met. While estimates suggest that even a small percentage of current users paying for the premium version would generate a substantial income stream for Amazon, whether the expected outcomes can be achieved remains uncertain.

However, Amazon’s partnership with Anthropic is currently under regulatory review, largely due to an investigation by the UK’s antitrust regulator. The impending upgrade announcement and the regulator’s response could significantly influence the company’s future activities.

Amazon’s initiative to adopt an AI solution developed by Anthropic marks a significant shift for the company, which previously focused on developing its proprietary technology. At this point, it is possible to view this move as part of the general trend in the industry to turn to partnerships regarding AI development to enhance the competitiveness of products.

See also: Amazon strives to outpace Nvidia with cheaper, faster AI chips

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post Amazon partners with Anthropic to enhance Alexa appeared first on AI News.

]]>
https://www.artificialintelligence-news.com/news/amazon-partners-anthropic-enhance-alexa/feed/ 0