Muhammad Zulhusni, Author at AI News

Duolingo shifts to AI-first model, cutting contractor roles
Wed, 30 Apr 2025

Duolingo is restructuring parts of its workforce as it shifts toward becoming an “AI-first” company, according to an internal memo from CEO and co-founder Luis von Ahn that was later shared publicly on the company’s LinkedIn page.

The memo outlines a series of planned changes to how the company operates, with a particular focus on how artificial intelligence will be used to streamline processes, reduce manual tasks, and scale content development.

Duolingo will gradually stop using contractors for work that AI can take over. The company will also begin evaluating job candidates and employee performance partly based on how they use AI tools. Von Ahn said that headcount increases will only be considered when a team can no longer automate parts of its work effectively.

“Being AI-first means we will need to rethink much of how we work. Making minor tweaks to systems designed for humans won’t get us there,” von Ahn wrote. “AI helps us get closer to our mission. To teach well, we need to create a massive amount of content, and doing that manually doesn’t scale.”

One of the main drivers behind the shift is the need to produce content more quickly; von Ahn says that creating new content manually would take decades. By integrating AI into its workflow, Duolingo has replaced processes he described as slow and manual with ones that are more efficient and automated.

The company has also used AI to develop features that weren’t previously feasible, such as an AI-powered video call feature that aims to provide tutoring comparable to human instructors. According to von Ahn, tools like this move the Duolingo platform closer to its mission of delivering language instruction globally.

The internal shift is not limited to content creation or product development. Von Ahn said most business functions will be expected to rethink how they operate and identify opportunities to embed AI into daily work. Teams will be encouraged to adopt what he called “constructive constraints” – policies that push them to prioritise automation before requesting additional resources.

The move echoes a broader trend in the tech industry. Shopify CEO Tobi Lütke recently gave a similar directive to employees, urging them to demonstrate why tasks couldn’t be completed with AI before requesting new headcount. Both companies appear to be setting new expectations for how teams manage growth in an AI-dominated environment.

Duolingo’s leadership maintains that the changes are not intended to reduce its focus on employee well-being, and the company will continue to support staff with training, mentorship, and tools designed to help them adapt to new workflows. The goal, von Ahn wrote, is not to replace staff with AI, but to eliminate bottlenecks and allow employees to concentrate on complex or creative work.

“AI isn’t just a productivity boost,” von Ahn wrote. “It helps us get closer to our mission.”

The company’s move toward more automation reflects a belief that waiting too long to embrace AI could be a missed opportunity. Von Ahn pointed to Duolingo’s early investment in mobile-first design in 2012 as a model. That shift helped the company gain visibility and user adoption, including being named Apple’s iPhone App of the Year in 2013. The decision to go “AI-first” is framed as a similarly forward-looking step.

The transition is expected to take some time. Von Ahn acknowledged that not all systems are ready for full automation and that integrating AI into certain areas, like codebase analysis, could take longer. Nevertheless, he said moving quickly – even if it means accepting occasional setbacks – is more important than waiting for the technology to be fully mature.

By placing AI at the centre of its operations, Duolingo is aiming to deliver more scalable learning experiences and manage internal resources more efficiently. The company plans to provide additional updates as the implementation progresses.

(Photo by Unsplash)

See also: AI in education: Balancing promises and pitfalls


OpenAI’s latest LLM opens doors for China’s AI startups
Tue, 29 Apr 2025

At the Apsara Conference in Hangzhou, hosted by Alibaba Cloud, China’s AI startups emphasised their efforts to develop large language models.

The companies’ efforts follow the announcement of OpenAI’s latest LLMs, including the o1 generative pre-trained transformer model from the Microsoft-backed firm. The model is intended to tackle difficult tasks, paving the way for advances in science, coding, and mathematics.

During the conference, Kunal Zhilin, founder of Moonshot AI, underlined the importance of the o1 model for the industry.

Zhilin stated that reinforcement learning and scalability might be pivotal for AI development. He spoke of the scaling law, which states that larger models with more training data perform better.
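
The scaling law mentioned here is usually written as a power-law relationship between loss, parameter count, and training data. One commonly cited form – the compute-optimal formulation from Hoffmann et al. (2022), shown purely for illustration and not quoted at the conference – is:

```latex
% Illustrative only: a commonly cited scaling-law form (Hoffmann et al., 2022),
% not a formula quoted by Zhilin.
% L = expected loss, N = parameter count, D = training tokens,
% E = irreducible loss, A, B, \alpha, \beta = fitted constants.
L(N, D) = E + \frac{A}{N^{\alpha}} + \frac{B}{D^{\beta}}
```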

“This approach pushes the ceiling of AI capabilities,” Zhilin said, adding that OpenAI o1 has the potential to disrupt sectors and generate new opportunities for startups.

OpenAI has also stressed the model’s ability to solve complex problems, which it says the model does in a manner similar to human thinking. By refining its strategies and learning from its mistakes, the model improves its problem-solving capabilities.

Zhilin said companies with enough computing power will be able to innovate not only in algorithms, but also in foundational AI models. He sees this as pivotal, as AI engineers rely increasingly on reinforcement learning to generate new data after exhausting available organic data sources.

StepFun CEO Jiang Daxin concurred with Zhilin but stated that computational power remains a big challenge for many start-ups, particularly due to US trade restrictions that hinder Chinese enterprises’ access to advanced semiconductors.

“The computational requirements are still substantial,” Daxin stated.

An insider at Baichuan AI has said that only a small group of Chinese AI start-ups — including Moonshot AI, Baichuan AI, Zhipu AI, and MiniMax — are in a position to make large-scale investments in reinforcement learning. These companies — collectively referred to as the “AI tigers” — are involved heavily in LLM development, pushing the next generation of AI.

More from the Apsara Conference

Also at the conference, Alibaba Cloud made several announcements, including the release of its Qwen 2.5 model family, which features advances in coding and mathematics. The models range from 0.5 billion to 72 billion parameters and support more than 29 languages, including Chinese, English, French, and Spanish.

Specialised models such as Qwen2.5-Coder and Qwen2.5-Math have already gained some traction, with over 40 million downloads across the Hugging Face and ModelScope platforms.

Alibaba Cloud also added a text-to-video model to its Tongyi Wanxiang image-generation family. The model can create videos in realistic and animated styles, with possible uses in advertising and filmmaking.

Alibaba Cloud unveiled Qwen 2-VL, the latest version of its vision language model. It handles videos longer than 20 minutes, supports video-based question-answering, and is optimised for mobile devices and robotics.

(Photo by: @Guy_AI_Wise via X)


Huawei to begin mass shipments of Ascend 910C amid US curbs
Wed, 23 Apr 2025

Huawei is expected to begin large-scale shipments of the Ascend 910C AI chip as early as next month, according to people familiar with the matter.

While limited quantities have already been delivered, mass deployment would mark an important step for Chinese firms seeking domestic alternatives to US-made semiconductors.

The move comes at a time when Chinese developers face tighter restrictions on access to Nvidia hardware. The US government recently informed Nvidia that sales of its H20 AI chip to China require an export licence. That’s left developers in China looking for options that can support large-scale training and inference workloads.

The Huawei Ascend 910C chip isn’t built on the most advanced process nodes, but it represents a workaround. The chip is essentially a dual-package version of the earlier 910B, with two processors to double the performance and memory. Sources familiar with the chip say it performs comparably to Nvidia’s H100.

Rather than relying on cutting-edge manufacturing, Huawei has adopted a brute-force approach, combining multiple chips and high-speed optical interconnects to scale up performance. This approach is central to Huawei’s CloudMatrix 384 system, a full rack-scale AI platform for training large models.

The CloudMatrix 384 features 384 Ascend 910C chips deployed across 16 racks – 12 compute racks and four networking racks. Unlike copper-based systems, Huawei’s platform uses optical interconnects, enabling high-bandwidth communication between components of the system. According to analysis from SemiAnalysis, the architecture includes 6,912 800G LPO optical transceivers that form an optical all-to-all mesh network.

This allows Huawei’s system to deliver approximately 300 petaFLOPs of BF16 compute power – outpacing Nvidia’s GB200 NVL72 system, which reaches around 180 BF16 petaFLOPs. The CloudMatrix also claims advantages in higher memory bandwidth and capacity, offering more than double the bandwidth and over 3.6 times the high-bandwidth memory (HBM) capacity.

The gains, however, are not without drawbacks. The Huawei system is predicted to be 2.3 times less efficient per floating point operation than Nvidia’s GB200 and has lower power efficiency per unit of memory bandwidth and capacity. Despite the lower performance per watt, Huawei’s system still provides the infrastructure needed to train advanced AI models at scale.
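
As a rough illustration of how these headline figures relate – using only the approximate numbers reported above, which are third-party estimates rather than vendor specifications – the per-chip and per-system arithmetic works out as follows:

```python
# Back-of-the-envelope comparison using the figures reported above
# (approximate, third-party estimates -- not vendor specifications).
cloudmatrix_chips = 384
cloudmatrix_pflops_bf16 = 300    # total system BF16 petaFLOPs (approx.)

gb200_nvl72_gpus = 72
gb200_pflops_bf16 = 180          # total system BF16 petaFLOPs (approx.)

per_chip_910c = cloudmatrix_pflops_bf16 / cloudmatrix_chips   # ~0.78 PFLOPs per Ascend 910C
per_gpu_gb200 = gb200_pflops_bf16 / gb200_nvl72_gpus          # ~2.5 PFLOPs per GB200 GPU

print(f"System-level compute advantage for CloudMatrix: {cloudmatrix_pflops_bf16 / gb200_pflops_bf16:.2f}x")
print(f"Per-chip compute advantage for Nvidia: {per_gpu_gb200 / per_chip_910c:.1f}x")
```

The arithmetic shows how Huawei reaches a higher aggregate figure by deploying more than five times as many accelerators per system, which is also why per-chip and per-watt efficiency lag behind Nvidia’s rack.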

Sources indicate that China’s largest chip foundry, SMIC, is producing some of the main components for the 910C using its 7nm N+2 process. Yield levels remain a concern, however, and some of the 910C units reportedly include chips produced by TSMC for Chinese firm Sophgo. Huawei has denied using TSMC-made parts.

The US Commerce Department is currently investigating the relationship between TSMC and Sophgo after a Sophgo-designed chip was found in Huawei’s earlier 910B processor. TSMC has maintained that it has not supplied Huawei since 2020 and continues to comply with export regulations.

In late 2023, Huawei began distributing early samples of the 910C to selected technology firms and opened its order books. Consulting firm Albright Stonebridge Group suggested the chip is likely to become the go-to choice for Chinese companies building large AI models or deploying inference capacity, given the ongoing export controls on US-made chips.

While the Huawei Ascend 910C may not match Nvidia in power efficiency or process technology, it signals a broader trend. Chinese technology firms are developing homegrown alternatives to foreign components, even if it means using less advanced methods to achieve similar outcomes.

As global AI demand surges and export restrictions tighten, Huawei’s ability to deliver a scalable AI hardware solution domestically could help shape China’s artificial intelligence future – especially as developers look to secure long-term supply chains and reduce exposure to geopolitical risk.

(Photo via Unsplash)

See also: Huawei’s AI hardware breakthrough challenges Nvidia’s dominance


Apple AI stresses privacy with synthetic and anonymised data
Tue, 15 Apr 2025

Apple is taking a new approach to training its AI models – one that avoids collecting or copying user content from iPhones or Macs.

According to a recent blog post, the company plans to continue to rely on synthetic data (constructed data that is used to mimic user behaviour) and differential privacy to improve features like email summaries, without gaining access to personal emails or messages.

For users who opt in to Apple’s Device Analytics program, the company’s AI models will compare synthetic email-like messages against a small sample of a real user’s content stored locally on the device. The device then identifies which of the synthetic messages most closely matches its user sample, and sends information about the selected match back to Apple. No actual user data leaves the device, and Apple says it receives only aggregated information.

The technique will allow Apple to improve its models for longer-form text generation tasks without collecting real user content. It’s an extension of the company’s long-standing use of differential privacy, which introduces randomised data into broader datasets to help protect individual identities. Apple has used this method since 2016 to understand use patterns, in line with the company’s safeguarding policies.

Improving Genmoji and other Apple Intelligence features

The company already uses differential privacy to improve features like Genmoji, where it collects general trends about which prompts are most popular without linking any prompt with a specific user or device. In upcoming releases, Apple plans to apply similar methods to other Apple Intelligence features, including Image Playground, Image Wand, Memories Creation, and Writing Tools.

For Genmoji, the company anonymously polls participating devices to determine whether specific prompt fragments have been seen. Each device responds with a noisy signal – some responses reflect actual use, while others are randomised. The approach ensures that only widely-used terms become visible to Apple, and no individual response can be traced back to a user or device, the company says.
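
Apple has not published the exact mechanism, but the noisy-signal polling described above resembles classic randomised response. The sketch below is a generic illustration of that idea – not Apple’s implementation – and the probability, fragment, and device counts are placeholders:

```python
import random

def device_response(saw_fragment: bool, p_truth: float = 0.75) -> bool:
    """Generic randomised-response sketch (not Apple's actual mechanism):
    with probability p_truth report the true answer, otherwise answer with a coin flip."""
    if random.random() < p_truth:
        return saw_fragment
    return random.random() < 0.5

def estimate_true_rate(responses, p_truth: float = 0.75) -> float:
    """Invert the noise to estimate the true fraction of devices that saw the fragment."""
    observed = sum(responses) / len(responses)
    # observed = p_truth * true_rate + (1 - p_truth) * 0.5  =>  solve for true_rate
    return (observed - (1 - p_truth) * 0.5) / p_truth

# Simulate 10,000 devices, 20% of which actually saw a given prompt fragment.
responses = [device_response(random.random() < 0.20) for _ in range(10_000)]
print(f"Estimated usage rate: {estimate_true_rate(responses):.1%}")
```

Because any single answer may be a coin flip, no individual response reveals what a given device saw, yet the aggregate estimate stays accurate across many devices.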

Curating synthetic data for better email summaries

While that method works well for short prompts, Apple needed a new approach for more complex tasks like summarising emails. For this, Apple generates thousands of sample messages and converts them into numerical representations, or ‘embeddings’, based on language, tone, and topic. Participating user devices then compare those embeddings with locally stored samples. Again, only the selected match is shared, not the content itself.

Apple collects the most frequently-selected synthetic embeddings from participating devices and uses them to refine its training data. Over time, this process allows the system to generate more relevant and realistic synthetic emails, helping Apple to improve its AI outputs for summarisation and text generation without apparent compromise of user privacy.
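
Apple has not released code for this pipeline, but the general pattern described above – match locally, share only an index, aggregate the popular selections – can be sketched as follows. The embedding function, messages, and dimensions here are placeholders, not Apple’s actual components:

```python
import numpy as np
from collections import Counter

def embed(text: str) -> np.ndarray:
    """Placeholder embedding; a real system would use a sentence-embedding model."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.normal(size=64)
    return v / np.linalg.norm(v)

def pick_closest_synthetic(local_samples, synthetic_embeddings) -> int:
    """On-device step: return only the index of the synthetic embedding closest to local content."""
    local_vecs = np.stack([embed(s) for s in local_samples])
    scores = local_vecs @ np.stack(synthetic_embeddings).T   # cosine similarity (unit vectors)
    return int(np.argmax(scores.max(axis=0)))                # only an index leaves the device

# Server-side sketch: count which synthetic messages devices select most often,
# then use the popular ones to steer the next round of synthetic data generation.
synthetic_messages = ["Lunch at noon tomorrow?", "Flight delayed, landing late", "Quarterly report attached"]
synthetic_embeddings = [embed(m) for m in synthetic_messages]

device_batches = [["are we still on for lunch?"], ["draft report attached for review"]]
selections = Counter(pick_closest_synthetic(batch, synthetic_embeddings) for batch in device_batches)
print(selections.most_common())
```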

Available in beta

Apple is rolling out the system in beta versions of iOS 18.5, iPadOS 18.5, and macOS 15.5. According to Bloomberg’s Mark Gurman, the approach is part of Apple’s attempt to address challenges in its AI development, which have included delayed feature rollouts and the fallout from leadership changes in the Siri team.

Whether its approach will yield more useful AI outputs in practice remains to be seen, but it signals a clear public effort to balance user privacy with model performance.

(Photo by Unsplash)

See also: ChatGPT got another viral moment with ‘AI action figure’ trend


ChatGPT got another viral moment with ‘AI action figure’ trend
Mon, 14 Apr 2025

ChatGPT’s image generation feature has sparked a new wave of personalised digital creations, with LinkedIn users leading a trend of turning themselves into action figures.

The craze, which began picking up momentum after the viral Studio Ghibli-style portraits, sees users sharing images of themselves as boxed dolls – complete with accessories and job-themed packaging.

There are several variations in the latest wave of AI-generated self-representation. The most common format is similar to a traditional action figure or Barbie doll, with props like coffee mugs, books, and laptops reflecting users’ professional lives. The images are designed to resemble toy store displays, complete with bold taglines and personalised packaging.

The movement gained initial attention on LinkedIn, where professionals used the format to showcase their brand identities more playfully. The “AI Action Figure” format, in particular, resonated with marketers, consultants, and others looking to present themselves as standout figures – literally. The trend has since spread to other platforms, including Instagram, TikTok, and Facebook, though engagement remains largely centred on LinkedIn.

ChatGPT’s image tool – part of its GPT-4o release – serves as the engine. Users upload a high-resolution photo of themselves, usually full-body, with a custom prompt describing how the final image should look. Details frequently include the person’s name, accessories, outfit styles, and package details. Some opt for a nostalgic “Barbiecore” vibe with pink tones and sparkles, while others stick to a corporate design that reflects their day job.

Refinements are common. Many users go through multiple generations, changing accessories and rewording prompts until the figure matches their desired personality or profession. The result is a glossy, toy-style portrait that straddles the line between humour and personal branding.

While the toy-style trend hasn’t seen the same viral reach as the Ghibli portrait craze, it has still sparked a steady flow of content across platforms. Hashtags like #AIBarbie and #BarbieBoxChallenge have gained traction, and some brands – including Mac Cosmetics and NYX – were quick to participate. A few public figures have joined in too, most notably US Representative Marjorie Taylor Greene, who shared a doll version of herself featuring accessories like a Bible and gavel.

Despite the buzz, engagement levels vary. Many posts receive limited interaction, and most well-known influencers have avoided the trend. Nevertheless, it highlights ChatGPT’s growing presence in mainstream online culture, and its ability to respond to users’ creativity with relatively simple tools.

This is not the first time ChatGPT’s image generation tool has overwhelmed the platform. When the Ghibli-style portraits first went viral, demand spiked so dramatically that OpenAI temporarily limited image generation for free accounts. CEO Sam Altman later described the surge in users as “biblical demand,” noting a dramatic rise in daily active users and infrastructure stress.

The Barbie/action figure trend, though at a smaller scale, follows that same path – using ChatGPT’s simple interface and its growing popularity as a creative tool. As with other viral AI visuals, the trend has also raised broader conversations about identity, aesthetics, and self-presentation in digital spaces. However, unlike the Ghibli portrait craze, it hasn’t attracted much criticism – at least not yet.

The format’s appeal lies in its simplicity. It offers users a way to engage with AI-generated art without needing technical skills, and it satisfies an urge for self-expression. The result is part professional headshot, part novelty toy, and part visual joke – a surprisingly versatile format for social media sharing.

While some may see the toy model phenomenon as a gimmick, others view it as a window into what’s possible when AI tools are placed directly in users’ hands.

For now, whether it’s a mini-me holding a coffee mug or a Barbie-style figure ready for the toy shelf, ChatGPT is again changing how people choose to represent themselves in the digital age.

(Photo by Unsplash)

See also: ChatGPT hits record usage after viral Ghibli feature – Here are four risks to know first


Spot AI introduces the world’s first universal AI agent builder for security cameras
Thu, 10 Apr 2025

Spot AI has introduced Iris, which the company describes as the world’s first universal video AI agent builder for enterprise camera systems.

The tool allows businesses to create customised AI agents through a conversational interface, making it easier to monitor and act on video data from physical settings without the need for technical expertise.

Designed for industries like manufacturing, logistics, retail, construction, and healthcare, Iris builds on Spot AI’s earlier launch of out-of-the-box Video AI Agents for safety, security, and operations. While those prebuilt agents focus on common use cases, Iris gives organisations the flexibility to train agents for more specific, business-critical scenarios.

According to Spot AI, users can build video agents in a matter of minutes. The system allows training through reinforcement—using examples of what the AI should and shouldn’t detect—and can be configured to trigger real-world responses like shutting down equipment, locking doors, or generating alerts.
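
Spot AI has not published a public API for Iris, so the snippet below is purely illustrative pseudocode of the example-based training and trigger workflow described above; every class, field, and file name in it is hypothetical.

```python
from dataclasses import dataclass, field

# Purely illustrative pseudocode for the workflow described above.
# This is NOT Spot AI's actual API; all names here are hypothetical.
@dataclass
class VideoAgent:
    name: str
    positive_examples: list = field(default_factory=list)   # clips the agent should flag
    negative_examples: list = field(default_factory=list)   # clips it should ignore
    actions: list = field(default_factory=list)             # e.g. "send_alert", "shut_down_line"

    def reinforce(self, clip: str, should_detect: bool) -> None:
        """Add an example of what the agent should or shouldn't detect."""
        (self.positive_examples if should_detect else self.negative_examples).append(clip)

agent = VideoAgent(name="fluid-leak-detector", actions=["send_alert", "shut_down_line"])
agent.reinforce("clip_0001.mp4", should_detect=True)    # example of what to detect
agent.reinforce("clip_0002.mp4", should_detect=False)   # example of what to ignore
```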

CEO and Co-Founder Rish Gupta said the tool dramatically shortens the time required to create specialised video detection systems.

“What used to take months of development now happens in minutes,” Gupta explained. “Before Iris, creating specialised video detection required dedicated AI/ML teams with advanced degrees, thousands of annotated images, and 8 weeks of complex development. Iris puts that same power in the hands of any business leader through simple conversation, with 8 minutes and 20 training images.”

Examples from real-world settings

Spot AI highlighted a variety of industry-specific use cases that Iris could support:

  • Manufacturing: Detecting product backups or fluid leaks, with automatic responses based on severity.
  • Warehousing: Spotting unsafe stacking of boxes or pallets to prevent accidents.
  • Retail: Monitoring shelf stock levels and generating alerts for restocking.
  • Healthcare: Distinguishing between staff and patients wearing similar uniforms to optimise traffic flow and safety.
  • Security: Identifying tools like bolt cutters in parking areas to address evolving security threats.
  • Safety compliance: Verifying whether workers are wearing required safety gear on-site.

Video AI agents continuously monitor critical areas and help teams respond quickly to safety hazards, operational inefficiencies, and security issues. With Iris, those agents can be developed and modified through natural language interaction, reducing the need for engineering support and making video insights more accessible across departments.

Looking ahead

Iris is part of Spot AI’s broader effort to make video data more actionable in physical environments. The company plans to discuss the tool and its capabilities at Google Cloud Next, where Rish Gupta is scheduled to speak during a media roundtable on April 9.

(Image by Spot AI)

See also: ChatGPT hits record usage after viral Ghibli feature—Here are four risks to know first


ChatGPT hits record usage after viral Ghibli feature—Here are four risks to know first
Tue, 08 Apr 2025

Following the release of ChatGPT’s new image-generation tool, user activity has surged; millions of people have been drawn to a trend in which uploaded images are reimagined in the distinctive visual style of Studio Ghibli.

The spike in interest contributed to record use levels for the chatbot and strained OpenAI’s infrastructure temporarily.

Social media platforms were soon flooded with AI-generated images styled after work by the renowned Japanese animation studio, known for titles like Spirited Away and My Neighbor Totoro. According to Similarweb, weekly active ChatGPT users passed 150 million for the first time this year.

OpenAI CEO Sam Altman said the chatbot gained one million users in a single hour in early April – matching the numbers the text-centric ChatGPT reached over five days when it first launched.

SensorTower data shows the company also recorded a jump in app activity. Weekly active users, downloads, and in-app revenue all hit record levels last week, following the update to GPT-4o that enabled new image-generation features. Compared to late March, downloads rose by 11%, active users grew 5%, and revenue increased by 6%.

The new tool’s popularity caused service slowdowns and intermittent outages. OpenAI acknowledged the increased load, with Altman saying that users should expect delays in feature roll-outs and occasional service disruption as capacity issues are settled.

Legal questions surface around ChatGPT’s Ghibli-style AI art

The viral use of Studio Ghibli-inspired AI imagery from OpenAI’s ChatGPT has raised concerns about copyright. Legal experts point out that while artistic styles themselves may not always be protected, closely mimicking a well-known look could fall into a legal grey area.

“The legal landscape of AI-generated images mimicking Studio Ghibli’s distinctive style is an uncertain terrain. Copyright law has generally protected only specific expressions rather than artistic styles themselves,” said Evan Brown, partner at law firm Neal & McDevitt.

Miyazaki’s past comments have also resurfaced. In 2016, the Studio Ghibli co-founder responded to early AI-generated artwork by saying, “I am utterly disgusted. I would never wish to incorporate this technology into my work at all.”

OpenAI has not commented on whether the model used for its image generation was trained on content similar to Ghibli’s animation.

Data privacy and personal risk

The trend has also drawn attention to user privacy and data security. Christoph C. Cemper, founder of AI prompt management firm AIPRM, cautioned that uploading a photo for artistic transformation may come with more risks than many users realise.

“When you upload a photo to an AI art generator, you’re giving away your biometric data (your face). Some AI tools store that data, use it to train future models, or even sell it to third parties – none of which you may be fully aware of unless you read the fine print,” Cemper said.

OpenAI’s privacy policy confirms that it collects both personal information and use data, including images and content submitted by users. Unless users opt out of training data collection or request deletion via their settings, content will be retained and used to improve future AI models.

Cemper said that once a facial image is uploaded, it becomes vulnerable to misuse. That data could be scraped, leaked, or used in identity theft, deepfake content, or other impersonation scams. He also pointed to prior incidents where private images were found in public AI datasets like LAION-5B, which are used to train various tools like Stable Diffusion.

Copyright and licensing considerations

There are also concerns that AI-generated content styled after recognisable artistic brands could cross into copyright infringement. While creating art in the style of Studio Ghibli, Disney, or Pixar might seem harmless, legal experts warn that such works may be considered derivative, especially if the mimicry is too close.

In 2022, several artists filed a class-action lawsuit against AI companies, claiming their models were trained on original artwork without consent. The cases reflect the broader conversation around how to balance innovation with creators’ rights as generative AI becomes more widely used.

Cemper also advised users to review carefully the terms of service on AI platforms. Many contain licensing clauses with language like “transferable rights,” “non-exclusive,” or “irrevocable licence,” which allow platforms to reproduce, modify, or distribute submitted content – even after the app is deleted.

“The rollout of ChatGPT’s 4o image generator shows just how powerful AI has become as it replicates iconic artistic styles with just a few clicks. But this unprecedented capability comes with a growing risk – the lines between creativity and copyright infringement are increasingly blurred,” Cemper said.

“The rapid pace of AI development also raises significant concerns about privacy and data security. There’s a pressing need for clearer, more transparent privacy policies. Users should be empowered to make informed decisions about uploading their photos or personal data.”

Search interest in “ChatGPT Studio Ghibli” has increased by more than 1,200% in the past week, but alongside the creativity and virality comes a wave of serious questions about privacy, copyright, and data use. As AI image tools become more advanced and accessible, users may want to think twice before uploading personal images, especially if they’re not sure where the data may ultimately end up.

(Image by YouTube Fireship)

See also: Midjourney V7: Faster AI image generation



Ant Group uses domestic chips to train AI models and cut costs
Thu, 03 Apr 2025

Ant Group is relying on Chinese-made semiconductors to train artificial intelligence models to reduce costs and lessen dependence on restricted US technology, according to people familiar with the matter.

The Alibaba-affiliated company has used chips from domestic suppliers, including those tied to Alibaba and to Huawei Technologies, to train large language models using the Mixture of Experts (MoE) method. The results were reportedly comparable to those produced with Nvidia’s H800 chips. While Ant continues to use Nvidia chips for some of its AI development, one source said the company is turning increasingly to alternatives from AMD and Chinese chip-makers for its latest models.

The development signals Ant’s deeper involvement in the growing AI race between Chinese and US tech firms, particularly as companies look for cost-effective ways to train models. The experimentation with domestic hardware reflects a broader effort among Chinese firms to work around export restrictions that block access to high-end chips like Nvidia’s H800, which, although not the most advanced, is still one of the more powerful GPUs available to Chinese organisations.

Ant has published a research paper describing its work, stating that its models, in some tests, performed better than those developed by Meta. Bloomberg News, which initially reported the matter, has not verified the company’s results independently. If the models perform as claimed, Ant’s efforts may represent a step forward in China’s attempt to lower the cost of running AI applications and reduce the reliance on foreign hardware.

MoE models divide tasks into smaller data sets handled by separate components, and have gained attention among AI researchers and data scientists. The technique has been used by Google and the Hangzhou-based startup, DeepSeek. The MoE concept is similar to having a team of specialists, each handling part of a task to make the process of producing models more efficient. Ant has declined to comment on its work with respect to its hardware sources.
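
As a concrete illustration of that routing idea – a generic sketch, not Ant’s Ling models or training code – a minimal Mixture-of-Experts forward pass looks like this:

```python
import numpy as np

def moe_forward(x, expert_weights, gate_weights, top_k=2):
    """Minimal Mixture-of-Experts sketch: a gate scores every expert for the input,
    and only the top-k experts run, with their outputs blended by gate weight."""
    gate_logits = x @ gate_weights                        # one score per expert
    top = np.argsort(gate_logits)[-top_k:]                # indices of the k best experts
    probs = np.exp(gate_logits[top] - gate_logits[top].max())
    probs /= probs.sum()                                  # softmax over the selected experts
    outputs = [x @ expert_weights[i] for i in top]        # run only the selected experts
    return sum(p * out for p, out in zip(probs, outputs))

# Toy example: 8 experts, 16-dimensional input, each token routed to 2 experts.
rng = np.random.default_rng(0)
dim, num_experts = 16, 8
experts = [rng.normal(scale=0.1, size=(dim, dim)) for _ in range(num_experts)]
gate = rng.normal(scale=0.1, size=(dim, num_experts))
token = rng.normal(size=dim)
print(moe_forward(token, experts, gate).shape)   # (16,)
```

Because only a fraction of the experts run for any given input, the compute cost per token is far lower than for a dense model of the same total parameter count – the property Ant is exploiting to train on cheaper hardware.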

Training MoE models depends on high-performance GPUs, which can be too expensive for smaller companies to acquire or use. Ant’s research focused on reducing that cost barrier – an objective signalled in the paper’s title, which refers to scaling models “without premium GPUs”.

The direction taken by Ant, and its use of MoE to reduce training costs, contrasts with Nvidia’s approach. CEO Jensen Huang has said that demand for computing power will continue to grow, even with the introduction of more efficient models like DeepSeek’s R1. His view is that companies will seek more powerful chips to drive revenue growth, rather than aiming to cut costs with cheaper alternatives. Nvidia’s strategy remains focused on building GPUs with more cores, transistors, and memory.

According to the Ant Group paper, training one trillion tokens – the basic units of data AI models use to learn – cost about 6.35 million yuan (roughly $880,000) using conventional high-performance hardware. The company’s optimised training method reduced that cost to around 5.1 million yuan by using lower-specification chips.
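
Taking the paper’s figures at face value, and using the dollar conversion implied above, the saving works out to roughly a fifth of the baseline cost:

```python
# Figures as reported above; the exchange rate is implied by
# "about 6.35 million yuan (roughly $880,000)".
baseline_yuan = 6_350_000
optimised_yuan = 5_100_000
usd_per_yuan = 880_000 / 6_350_000

savings_yuan = baseline_yuan - optimised_yuan
print(f"Saving per trillion training tokens: {savings_yuan:,} yuan "
      f"(~${savings_yuan * usd_per_yuan:,.0f}), or {savings_yuan / baseline_yuan:.1%} less")
```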

Ant said it plans to apply its models produced in this way – Ling-Plus and Ling-Lite – to industrial AI use cases like healthcare and finance. Earlier this year, the company acquired Haodf.com, a Chinese online medical platform, to further Ant’s ambition to deploy AI-based solutions in healthcare. It also operates other AI services, including a virtual assistant app called Zhixiaobao and a financial advisory platform known as Maxiaocai.

“If you find one point of attack to beat the world’s best kung fu master, you can still say you beat them, which is why real-world application is important,” said Robin Yu, chief technology officer of Beijing-based AI firm, Shengshang Tech.

Ant has made its models open source. Ling-Lite has 16.8 billion parameters – settings that help determine how a model functions – while Ling-Plus has 290 billion. For comparison, estimates suggest closed-source GPT-4.5 has around 1.8 trillion parameters, according to MIT Technology Review.

Despite progress, Ant’s paper noted that training models remains challenging. Small adjustments to hardware or model structure during model training sometimes resulted in unstable performance, including spikes in error rates.

(Photo by Unsplash)

See also: DeepSeek V3-0324 tops non-reasoning AI models in open-source first


Is America falling behind in the AI race?
Mon, 24 Mar 2025

Several major US artificial intelligence companies have expressed concern that America’s edge in AI development is eroding.

In recent submissions to the US government, the companies warned that Chinese models, such as DeepSeek R1, are becoming more sophisticated and competitive. The submissions, filed in March 2025 in response to a request for input on an AI Action Plan, highlight the growing challenge from China in technological capability and price.

China’s growing AI presence

Chinese state-supported AI model DeepSeek R1 has piqued the interest of US developers. According to OpenAI, DeepSeek demonstrates that the technological gap between the US and China is narrowing. The company described DeepSeek as “state-subsidised, state-controlled, and freely available,” raising concerns about the model’s ability to influence global AI development.

OpenAI compared DeepSeek to Chinese telecommunications company Huawei, warning that Chinese regulations could allow the government to compel DeepSeek to compromise sensitive US systems or infrastructure. Concerns about data privacy were also raised, with OpenAI pointing out that Chinese rules could force DeepSeek to disclose user data to the government, and enhance China’s ability to develop more advanced AI systems.

The competition from China also includes Ernie X1 and Ernie 4.5, released by Baidu, which are designed to compete with Western systems.

According to Baidu, Ernie X1 “delivers performance on par with DeepSeek R1 at only half the price.” Meanwhile, Ernie 4.5 is priced at just 1% of OpenAI’s GPT-4.5 while outperforming it in multiple benchmarks.

DeepSeek’s aggressive pricing strategy is also raising concerns with the US companies. According to Bernstein Research, DeepSeek’s V3 and R1 models are priced “anywhere from 20-40x cheaper” than equivalent models from OpenAI. The pricing pressure could force US developers to adjust their business models to remain competitive.

Baidu’s strategy of open-sourcing its models is also gaining traction. “One thing we learned from DeepSeek is that open-sourcing the best models can greatly help adoption,” Baidu CEO Robin Li said in February. Baidu plans to open-source the Ernie 4.5 series starting June 30, which could accelerate adoption and further increase competitive pressure on US firms.

Cost aside, early user feedback on Baidu’s models has been positive. “[I’ve] been playing around with it for hours, impressive performance,” Alvin Foo, a venture partner at Zero2Launch, said in a post on social media, suggesting China’s AI models are becoming more affordable and effective.

US AI security and economic risks

The submissions also highlight what the US companies perceive as risks to security and the economy.

OpenAI warned that Chinese regulations could allow the government to compel DeepSeek to manipulate its models to compromise infrastructure or sensitive applications, creating vulnerabilities in important systems.

Anthropic’s concerns centred on biosecurity. It disclosed that its own Claude 3.7 Sonnet model demonstrated capabilities in biological weapon development, highlighting the dual-use nature of AI systems.

Anthropic also raised issues with US export controls on AI chips. While Nvidia’s H20 chips meet US export restrictions, they nonetheless perform well in text generation – an important capability for reinforcement learning. Anthropic called on the government to tighten controls to prevent China from gaining a technological edge using the chips.

Google took a more cautious approach, acknowledging security risks while warning against over-regulation. The company argued that strict AI export rules could harm US competitiveness by limiting business opportunities for domestic cloud providers, and it recommended targeted export controls that protect national security without disrupting its business operations.

Maintaining US AI competitiveness

All three US companies emphasised the need for better government oversight and infrastructure investment to maintain US AI leadership.

Anthropic warned that by 2027, training a single advanced AI model could require up to five gigawatts of power – enough to power a small city. The company proposed a national target to build 50 additional gigawatts of AI-dedicated power capacity by 2027 and to streamline regulations around power transmission infrastructure.

OpenAI positioned the competition between US and Chinese AI as a contest between democratic and authoritarian AI models. The company argued that promoting a free-market approach would drive better outcomes and maintain America’s technological edge.

Google urged practical measures, including increased federal funding for AI research, improved access to government contracts, and streamlined export controls. The company also recommended more flexible procurement rules to accelerate AI adoption by federal agencies.

Regulatory strategies for US AI

The US companies called for a unified federal approach to AI regulation.

OpenAI proposed a regulatory framework managed by the Department of Commerce, warning that fragmented state-level regulations could drive AI development overseas. The company supported a tiered export control framework, allowing broader access to US-developed AI in democratic countries while restricting it in authoritarian states.

Anthropic called for stricter export controls on AI hardware and training data, warning that even minor improvements in model performance could give China a strategic advantage.

Google focused on copyright and intellectual property rights, stressing that its interpretation of ‘fair use’ is important for AI development. The company warned that overly restrictive copyright rules could disadvantage US AI firms compared to their Chinese competitors.

All three companies stressed the need for faster government adoption of AI. OpenAI recommended removing some existing testing and procurement barriers, while Anthropic supported streamlined procurement processes. Google emphasised the need for improved interoperability in government cloud infrastructure.

See also: The best AI prompt generator: Create perfect AI prompts


AVAXAI brings DeepSeek to Web3 with decentralised AI agents
Fri, 07 Feb 2025

AI continues to evolve, transforming industries with advances in automation, decision-making, and predictive analytics. AI models like DeepSeek push the boundaries of what’s possible, making complex tasks more efficient and accessible. At the same time, Web3 is reshaping digital ownership and finance through decentralisation. As the two technologies advance, their convergence seems inevitable. However, integrating […]

The DeepSeek controversy and its impact on AI’s future

DeepSeek has been at the centre of global attention, not only for its technical advancements but also for concerns about its use. In January, the company unveiled a chatbot that reportedly matched the performance of its rivals at a significantly lower training cost, a development that shook international markets. AI-related stocks, including Australia’s chip-maker Brainchip, saw sharp declines following the news.

However, DeepSeek’s rapid rise has also raised security concerns. Australia has banned DeepSeek from all government devices and systems, citing an “unacceptable risk” to national security. According to the BBC, officials insist that the decision is based on security assessments, not the company’s Chinese origins. The government’s move underlines ongoing debates over AI governance and the potential risks of incorporating AI into critical systems.

Despite these concerns, AIvalanche DeFAI Agents continues to explore new ways to use DeepSeek’s capabilities in a decentralised framework, aiming to give users greater control over AI agents while maintaining security and transparency in Web3.

Decentralised AI agents for ownership and monetisation

DeepSeek is an AI model built for tasks like data analysis and autonomous operations. AIvalanche DeFAI Agents extends its capabilities by integrating tokenised AI and DeFAI agents into the Avalanche C-Chain. The platform combines Avalanche’s efficiency with AI functionality, letting users create, manage, and deploy AI agents with minimal effort. Through the platform, users can develop AI agents and explore ways to monetise them. The decentralised framework enables trustless transactions, altering the way AI ownership and interaction take place.

Key features of AIvalanche DeFAI agents

  • Create and manage AI agents: Users can build AI agents in just a few clicks. Each agent has a dedicated page outlining its capabilities.
  • Co-ownership of AI agents: Anyone can invest in AI agents early by acquiring tokens before they gain mainstream attention. Users can also engage with established AI agents while trading their tokens.
  • Monetising AI agents: AI agents evolve by learning from new data. They have their own wallets and can execute transactions, manage tasks, and distribute revenue.

Support from key players in the Avalanche ecosystem

AIvalanche DeFAI Agents has gained recognition in the Avalanche ecosystem, receiving support from entities like Avalaunch and AVenturesDAO. Avalaunch provides a launchpad for Avalanche-based projects, while AVenturesDAO is a community-driven investment group. Their involvement highlights growing interest in decentralised AI and DeFAI agents.

Expanding access through public sales and listings

AIvalanche DeFAI Agents is currently conducting a public sale across several launchpads, including Ape Terminal, Polkastarter, Avalaunch, and Seedify. The platforms enable broader participation in the Web3 AI agent economy. Following a public sale, the platform plans to list its AVAXAI token on centralised exchanges like Gate.io and MEXC. The listings could improve accessibility and liquidity and increase the platform’s adoption.

As AI and decentralised finance (DeFi) continue to intersect, AIvalanche DeFAI Agents aims to establish itself in the space.

(Photo by Unsplash)

See also: Microsoft and OpenAI probe alleged data theft by DeepSeek

The post AVAXAI brings DeepSeek to Web3 with decentralised AI agents appeared first on AI News.

ChatGPT-4 vs. ChatGPT-3.5: Which to use? https://www.artificialintelligence-news.com/news/chatgpt-4-vs-chatgpt-35-which-one-should-you-use/ https://www.artificialintelligence-news.com/news/chatgpt-4-vs-chatgpt-35-which-one-should-you-use/#respond Mon, 03 Feb 2025 16:41:19 +0000 https://www.artificialintelligence-news.com/?p=104093 OpenAI offers two versions of its chatbot, ChatGPT-4 and ChatGPT-3.5, each catering to different needs. ChatGPT-4 is the more advanced option, providing improved accuracy and reasoning, while ChatGPT-3.5 remains a solid choice, especially for those looking for a free AI tool. The right model depends on user needs – whether it’s a more powerful AI […]

OpenAI offers two versions of its chatbot, ChatGPT-4 and ChatGPT-3.5, each catering to different needs.

ChatGPT-4 is the more advanced option, providing improved accuracy and reasoning, while ChatGPT-3.5 remains a solid choice, especially for those looking for a free AI tool. The right model depends on user needs – whether it’s a more powerful AI for complex tasks or a simple, accessible chatbot for everyday use.

Both models are built on the same foundational AI concepts, but they have notable differences. ChatGPT-4 offers more advanced reasoning, a larger context window, and multimodal capabilities, making it better suited for complex problem-solving and content generation.

In contrast, ChatGPT-3.5 is designed for general-purpose tasks and is easier to access since it’s free. While ChatGPT-4 requires a subscription, ChatGPT-3.5 is available at no cost, making it a practical option for casual users who don’t need advanced features.
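
For developers, the same trade-off appears when calling the models through OpenAI’s API rather than the ChatGPT apps. The sketch below uses the official openai Node.js library to send one prompt to a GPT-3.5-class and a GPT-4-class model; the model identifiers are illustrative and can vary with account access and release timing, and API usage is billed per token rather than through a ChatGPT subscription.

```typescript
import OpenAI from "openai";

const client = new OpenAI(); // assumes OPENAI_API_KEY is set in the environment

// Send the same prompt to a GPT-3.5-class and a GPT-4-class model and print both
// answers. Model names are illustrative; check the models available to your account.
async function compareModels(prompt: string): Promise<void> {
  for (const model of ["gpt-3.5-turbo", "gpt-4"]) {
    const completion = await client.chat.completions.create({
      model,
      messages: [{ role: "user", content: prompt }],
    });
    console.log(`--- ${model} ---`);
    console.log(completion.choices[0].message.content);
  }
}

compareModels("Summarise the trade-off between model capability and cost in two sentences.")
  .catch(console.error);
```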

Who should choose ChatGPT-4?

ChatGPT-4 is designed for users who need a more powerful AI model that accepts both text and image inputs. It can handle longer conversations, making it helpful for users who want thorough, context-rich interactions, and it supports internet browsing on specific plans, allowing for limited real-time information retrieval.
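
The text-plus-image capability is easiest to see in the API’s content-parts message format, sketched below with the official openai Node.js library. The image URL is a placeholder and the vision-capable model name is illustrative; in the ChatGPT app itself the equivalent is simply attaching an image to a message.

```typescript
import OpenAI from "openai";

const client = new OpenAI(); // assumes OPENAI_API_KEY is set in the environment

// One user message mixing text with an image reference; the URL is a placeholder.
async function describeImage(): Promise<void> {
  const response = await client.chat.completions.create({
    model: "gpt-4o", // illustrative vision-capable GPT-4-class model
    messages: [
      {
        role: "user",
        content: [
          { type: "text", text: "Describe what this chart shows in two sentences." },
          { type: "image_url", image_url: { url: "https://example.com/chart.png" } },
        ],
      },
    ],
  });
  console.log(response.choices[0].message.content);
}

describeImage().catch(console.error);
```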

However, this model is only available with subscription plans, which begin at $20 per month for individual users and progress to higher-tier options for teams and enterprises.

While these plans offer extra features like a larger context window and better performance, they also require a financial commitment that may be unnecessary for users with basic AI needs.

Who should choose ChatGPT-3.5?

ChatGPT-3.5 remains a viable alternative for users looking for a free AI chatbot that does not require a subscription. It can perform a variety of general tasks, including answering questions, drafting text, and offering conversational support.

While it lacks multimodal capabilities and has a smaller context window than ChatGPT-4, it is still a reliable tool for many common uses. The setup process is straightforward – users simply need to create an OpenAI account to start using the model via the web or through mobile apps. It supports voice interactions on mobile devices, making it more convenient for hands-free use.

Businesses and professionals looking for a scalable AI solution will likely prefer ChatGPT-4, which provides more sophisticated responses, advanced reasoning, and additional enterprise features. Its ability to process multimodal inputs, evaluate data, and manage longer conversations makes it a more effective tool for professional and research-based tasks.

Making the right choice: ChatGPT-4 or ChatGPT-3.5?

For those deciding between the two, the choice largely depends on the intended use. ChatGPT-4 is the better option for users who require higher accuracy and enhanced reasoning. It is well-suited for professionals, researchers, and businesses seeking a more powerful AI tool. In comparison, ChatGPT-3.5 is ideal for users who need a simple and user-friendly AI model capable of handling a wide range of tasks.

Are there better AI alternatives?

While ChatGPT-4 and ChatGPT-3.5 are both capable AI tools, they may not be everyone’s cup of tea. Users looking for a free, multimodal AI tool with extensive real-time web search capabilities may find other models more suitable. Similarly, people who need AI specifically for coding and development may prefer a model optimised for those tasks. OpenAI’s models are designed to be general-purpose, but they may not meet the needs of users requiring highly specialised AI applications.

For those exploring alternatives, Google Gemini, Anthropic Claude, and Microsoft Copilot are among the top competitors in the AI chatbot space. Google Gemini, previously known as Bard, integrates deeply with Google Search and offers strong multimodal capabilities. Many users appreciate its accessibility and free-tier offerings.

Anthropic’s Claude is another option, particularly for those focused on ethical AI development and security. It features one of the largest context windows available, making it suitable for long-form content generation.

Meanwhile, Microsoft Copilot integrates with Microsoft 365 applications and Bing, providing an AI assistant that seamlessly fits into productivity and development workflows.

(Photo by Unsplash)

See also: Microsoft and OpenAI probe alleged data theft by DeepSeek


Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post ChatGPT-4 vs. ChatGPT-3.5: Which to use? appeared first on AI News.

How AI helped refine Hungarian accents in The Brutalist https://www.artificialintelligence-news.com/news/how-ai-helped-refine-hungarian-accents-in-the-brutalist/ https://www.artificialintelligence-news.com/news/how-ai-helped-refine-hungarian-accents-in-the-brutalist/#respond Fri, 24 Jan 2025 13:38:07 +0000 https://www.artificialintelligence-news.com/?p=16952 When it comes to movies buzzing with Oscar potential, Brady Corbet’s The Brutalist is a standout this awards season. The visually stunning drama transports viewers to the post-World War II era, unravelling the story of László Tóth, played by Adrien Brody. Tóth, a fictional Hungarian-Jewish architect, starts over in the United States after being forced […]

When it comes to movies buzzing with Oscar potential, Brady Corbet’s The Brutalist is a standout this awards season.

The visually stunning drama transports viewers to the post-World War II era, unravelling the story of László Tóth, played by Adrien Brody. Tóth, a fictional Hungarian-Jewish architect, starts over in the United States after being forced to leave his family behind as he emigrates.

Beyond its vintage allure, something modern brews in the background: the use of AI. Specifically, AI was employed to refine Brody’s and co-star Felicity Jones’ Hungarian pronunciation. The decision has sparked lively debates about technology’s role in film-making.

The role of AI in The Brutalist

According to Dávid Jancsó, the film’s editor, the production team turned to Respeecher, an AI software developed by a Ukrainian company, to tweak the actors’ Hungarian dialogue. Speaking to RedShark News (as cited by Mashable SEA), Jancsó explained that Hungarian – a Uralic language known for its challenging sounds – was a significant hurdle for the actors, despite their talent and dedication.

Respeecher’s software isn’t magic, but just a few years ago, it would have seemed wondrous. It creates a voice model based on a speaker’s characteristics and adjusts specific elements, like pronunciation. In this case, it was used to fine-tune the letter and vowel sounds that Brody and Jones found tricky. Most of the corrections were minimal, with Jancsó himself providing some replacement sounds to preserve the authenticity of the performances. “Most of their Hungarian dialogue has a part of me talking in there,” he joked, emphasising the care taken to maintain the actors’ original delivery.

Respeecher: AI behind the scenes

This is not Respeecher’s first foray into Hollywood. The software is known for restoring iconic voices like that of Darth Vader for the Obi-Wan Kenobi series, and has recreated Edith Piaf’s voice for an upcoming biopic. Outside of film, Respeecher has helped to preserve endangered languages like Crimean Tatar.

For The Brutalist, the AI tool wasn’t just a luxury – it was a time and budget saver. With so much dialogue in Hungarian, editing every line by hand would have been painstaking work. Jancsó said that using AI sped up the process significantly, an important factor given the film’s modest $10 million budget.

Beyond voice: AI’s other roles in the film

AI was also used in other aspects of the production process, for example to generate some of Tóth’s architectural drawings and to complete buildings in the film’s Venice Biennale sequence. However, director Corbet has clarified that these images were not fully AI-generated; instead, AI was used for specific background elements.

Corbet and Jancsó have been candid about their perspectives on AI in film-making. Jancsó sees it as a valuable tool, saying, “There’s nothing in the film using AI that hasn’t been done before. It just makes the process a lot faster.” Corbet added that the software’s purpose was to enhance authenticity, not replace the actors’ hard work.

A broader conversation

The debate surrounding AI in the film industry isn’t new. From script-writing to music production, concerns about generative AI’s impact were central to the 2023 Writers Guild of America (WGA) and SAG-AFTRA strikes. Although agreements have been reached to regulate the use of AI, the topic remains a hot-button issue.

The Brutalist awaits a possible Oscar nomination. From its storyline to its cinematic style, the film wears its ambition on its sleeve. It’s not just a celebration of the postwar Brutalist architectural movement; it’s also a nod to classic American cinema. Shot in the rarely used VistaVision format, the film captures the grandeur of mid-20th-century film-making. Adding to its nostalgic charm, it includes a 15-minute intermission during its epic three-and-a-half-hour runtime.

Yet the film’s use of AI has added a new dimension to the ongoing conversation about the technology’s place in the creative industries. Whether people see AI as a betrayal of craftsmanship or as an innovative tool that can enhance the final creation, one thing is certain: AI continues to transform how stories are delivered on screen.

See also: AI music sparks new copyright battle in US courts

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post How AI helped refine Hungarian accents in The Brutalist appeared first on AI News.
