AI Chatbots News | Chatbot Developments | AI News

Google AMIE: AI doctor learns to ‘see’ medical images (2 May 2025)

Google is giving its diagnostic AI the ability to understand visual medical information with its latest research on AMIE (Articulate Medical Intelligence Explorer).

Imagine chatting with an AI about a health concern, and instead of just processing your words, it could actually look at the photo of that worrying rash or make sense of your ECG printout. That’s what Google is aiming for.

We already knew AMIE showed promise in text-based medical chats, thanks to earlier work published in Nature. But let’s face it, real medicine isn’t just about words.

Doctors rely heavily on what they can see – skin conditions, readings from machines, lab reports. As the Google team rightly points out, even simple instant messaging platforms “allow static multimodal information (e.g., images and documents) to enrich discussions.”

Text-only AI was missing a huge piece of the puzzle. The big question, as the researchers put it, was “whether LLMs can conduct diagnostic clinical conversations that incorporate this more complex type of information.”

Google teaches AMIE to look and reason

Google’s engineers have beefed up AMIE using their Gemini 2.0 Flash model as the brains of the operation. They’ve combined this with what they call a “state-aware reasoning framework.” In plain English, this means the AI doesn’t just follow a script; it adapts its conversation based on what it’s learned so far and what it still needs to figure out.

It’s close to how a human clinician works: gathering clues, forming ideas about what might be wrong, and then asking for more specific information – including visual evidence – to narrow things down.

“This enables AMIE to request relevant multimodal artifacts when needed, interpret their findings accurately, integrate this information seamlessly into the ongoing dialogue, and use it to refine diagnoses,” Google explains.

Think of the conversation flowing through stages: first gathering the patient’s history, then moving towards diagnosis and management suggestions, and finally follow-up. The AI constantly assesses its own understanding, asking for that skin photo or lab result if it senses a gap in its knowledge.
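
To make the ‘state-aware’ idea more concrete, here is a minimal, purely illustrative Python sketch of a dialogue loop that tracks which phase it is in and requests an artifact (such as a skin photo) when it spots a gap. This is not Google’s implementation – the phase names, thresholds, and helper functions are assumptions invented for illustration.

```python
# Illustrative sketch only -- NOT Google's AMIE implementation.
# Phase names, thresholds, and helpers are invented for clarity.
from dataclasses import dataclass, field

@dataclass
class DialogueState:
    phase: str = "history_taking"
    known_facts: dict = field(default_factory=dict)
    missing_evidence: list = field(default_factory=list)  # e.g. ["skin_photo", "ecg"]
    differential: list = field(default_factory=list)      # ranked candidate diagnoses

def next_action(state: DialogueState) -> str:
    """Decide the next move from the current state rather than a fixed script."""
    if state.missing_evidence:
        # Ask the patient for the most informative missing artifact first.
        return f"request_artifact:{state.missing_evidence[0]}"
    if state.phase == "history_taking" and len(state.known_facts) < 5:
        return "ask_open_question"
    if state.phase == "history_taking":
        state.phase = "diagnosis_management"
        return "propose_differential"
    if state.phase == "diagnosis_management" and not state.differential:
        return "refine_differential"
    state.phase = "follow_up"
    return "summarise_and_plan_follow_up"

# Example turn: the model notices it has no image of the rash yet.
state = DialogueState(known_facts={"symptom": "itchy rash", "duration": "3 days"},
                      missing_evidence=["skin_photo"])
print(next_action(state))  # -> request_artifact:skin_photo
```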

To get this right without endless trial-and-error on real people, Google built a detailed simulation lab.

Google created lifelike patient cases, pulling realistic medical images and data from sources like the PTB-XL ECG database and the SCIN dermatology image set, adding plausible backstories using Gemini. Then, they let AMIE ‘chat’ with simulated patients within this setup and automatically check how well it performed on things like diagnostic accuracy and avoiding errors (or ‘hallucinations’).

The virtual OSCE: Google puts AMIE through its paces

The real test came in a setup designed to mirror how medical students are assessed: the Objective Structured Clinical Examination (OSCE).

Google ran a remote study involving 105 different medical scenarios. Real actors, trained to portray patients consistently, interacted either with the new multimodal AMIE or with actual human primary care physicians (PCPs). These chats happened through an interface where the ‘patient’ could upload images, just like you might in a modern messaging app.

Afterwards, specialist doctors (in dermatology, cardiology, and internal medicine) and the patient actors themselves reviewed the conversations.

The human doctors scored everything from how well history was taken, the accuracy of the diagnosis, the quality of the suggested management plan, right down to communication skills and empathy—and, of course, how well the AI interpreted the visual information.

Surprising results from the simulated clinic

Here’s where it gets really interesting. In this head-to-head comparison within the controlled study environment, Google found AMIE didn’t just hold its own—it often came out ahead.

The AI was rated as being better than the human PCPs at interpreting the multimodal data shared during the chats. It also scored higher on diagnostic accuracy, producing differential diagnosis lists (the ranked list of possible conditions) that specialists deemed more accurate and complete based on the case details.

Specialist doctors reviewing the transcripts tended to rate AMIE’s performance higher across most areas. They particularly noted “the quality of image interpretation and reasoning,” the thoroughness of its diagnostic workup, the soundness of its management plans, and its ability to flag when a situation needed urgent attention.

Perhaps one of the most surprising findings came from the patient actors: they often found the AI to be more empathetic and trustworthy than the human doctors in these text-based interactions.

And, on a critical safety note, the study found no statistically significant difference between how often AMIE made errors based on the images (hallucinated findings) compared to the human physicians.

Technology never stands still, so Google also ran some early tests swapping out the Gemini 2.0 Flash model for the newer Gemini 2.5 Flash.

Using their simulation framework, the results hinted at further gains, particularly in getting the diagnosis right (Top-3 Accuracy) and suggesting appropriate management plans.

While promising, the team is quick to add a dose of realism: these are just automated results, and “rigorous assessment through expert physician review is essential to confirm these performance benefits.”

Important reality checks

Google is commendably upfront about the limitations here. “This study explores a research-only system in an OSCE-style evaluation using patient actors, which substantially under-represents the complexity… of real-world care,” they state clearly. 

Simulated scenarios, however well-designed, aren’t the same as dealing with the unique complexities of real patients in a busy clinic. They also stress that the chat interface doesn’t capture the richness of a real video or in-person consultation.

So, what’s the next step? Moving carefully towards the real world. Google is already partnering with Beth Israel Deaconess Medical Center for a research study to see how AMIE performs in actual clinical settings with patient consent.

The researchers also acknowledge the need to eventually move beyond text and static images towards handling real-time video and audio—the kind of interaction common in telehealth today.

Giving AI the ability to ‘see’ and interpret the kind of visual evidence doctors use every day offers a glimpse of how AI might one day assist clinicians and patients. However, the path from these promising findings to a safe and reliable tool for everyday healthcare is still a long one that requires careful navigation.

(Photo by Alexander Sinn)

See also: Are AI chatbots really changing the world of work?

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

Are AI chatbots really changing the world of work? (2 May 2025)

We’ve heard endless predictions about how AI chatbots will transform work, but the data paints a much calmer picture—at least for now.

Despite huge and ongoing advancements in generative AI, the massive wave it was supposed to create in the world of work looks more like a ripple so far.

Researchers Anders Humlum (University of Chicago) and Emilie Vestergaard (University of Copenhagen) didn’t just rely on anecdotes. They dug deep, connecting responses from two big surveys (late 2023 and 2024) with official, detailed records about jobs and pay in Denmark.

The pair zoomed in on around 25,000 people working in 7,000 different places, covering 11 jobs thought to be right in the path of AI disruption.   

Everyone’s using AI chatbots for work, but where are the benefits?

What they found confirms what many of us see: AI chatbots are everywhere in Danish workplaces now. Most bosses are actually encouraging staff to use them, a real turnaround from the early days when companies were understandably nervous about things like data privacy.

Almost four out of ten employers have even rolled out their own in-house chatbots, and nearly a third of employees have had some formal training on these tools.   

When bosses gave the nod, the number of staff using chatbots practically doubled, jumping from 47% to 83%. It also helped level the playing field a bit. That gap between men and women using chatbots? It shrank noticeably when companies actively encouraged their use, especially when they threw in some training.

So, the tools are popular, companies are investing, people are getting trained… but the big economic shift? It seems to be missing in action.

Using statistical methods to compare people who used AI chatbots for work with those who didn’t, both before and after ChatGPT burst onto the scene, the researchers found… well, basically nothing.

“Precise zeros,” the researchers call their findings. No significant bump in pay, no change in recorded work hours, across all 11 job types they looked at. And they’re pretty confident about this – the numbers rule out any average effect bigger than just 1%.
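
For readers curious what that before-and-after comparison looks like in practice, here is a minimal sketch of a difference-in-differences style regression on synthetic data. The numbers are invented purely for illustration; the actual study links survey responses to Danish administrative records and uses a far richer specification.

```python
# Illustrative difference-in-differences sketch on synthetic data.
# The real study uses Danish register data and a much richer model.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "adopter": rng.integers(0, 2, n),   # 1 = uses AI chatbots at work
    "post": rng.integers(0, 2, n),      # 1 = after ChatGPT's launch
})
# Simulate log earnings with a true chatbot effect of zero.
df["log_wage"] = (10.0 + 0.05 * df["adopter"] + 0.02 * df["post"]
                  + 0.0 * df["adopter"] * df["post"]
                  + rng.normal(0, 0.1, n))

# The interaction term is the difference-in-differences estimate.
model = smf.ols("log_wage ~ adopter * post", data=df).fit()
print(model.params["adopter:post"])  # close to zero, echoing the paper's "precise zeros"
```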

This wasn’t just a blip, either. The lack of impact held true even for the keen beans who jumped on board early, those using chatbots daily, or folks working where the boss was actively pushing the tech.

Looking at whole workplaces didn’t change the story; places with lots of chatbot users didn’t see different trends in hiring, overall wages, or keeping staff compared to places using them less.

Productivity gains: More of a gentle nudge than a shove

Why the big disconnect? Why all the hype and investment if it’s not showing up in paychecks or job stats? The study flags two main culprits: the productivity boosts aren’t as huge as hoped in the real world, and what little gains there are aren’t really making their way into wages.

Sure, people using AI chatbots for work felt they were helpful. They mentioned better work quality and feeling more creative. But the number one benefit? Saving time.

However, when the researchers crunched the numbers, the average time saved was only about 2.8% of a user’s total work hours. That’s miles away from the huge 15%, 30%, even 50% productivity jumps seen in controlled lab-style experiments (RCTs) involving similar jobs.

Why the difference? A few things seem to be going on. Those experiments often focus on jobs or specific tasks where chatbots really shine (like coding help or basic customer service responses). This study looked at a wider range, including jobs like teaching where the benefits might be smaller.

The researchers stress the importance of what they call “complementary investments”. People whose companies encouraged chatbot use and provided training actually did report bigger benefits – saving more time, improving quality, and feeling more creative. This suggests that just having the tool isn’t enough; you need the right support and company environment to really unlock its potential.

And even those modest time savings weren’t padding wallets. The study reckons only a tiny fraction – maybe 3% to 7% – of the time saved actually showed up as higher earnings. It might be down to standard workplace inertia, or maybe it’s just harder to ask for a raise based on using a tool your boss hasn’t officially blessed, especially when many people started using them off their own bat.
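
A quick back-of-the-envelope calculation with the figures quoted above shows why the wage effect ends up so small (the 40-hour week is an assumption added purely for illustration):

```python
# Back-of-the-envelope check using figures quoted in the study.
weekly_hours = 40                      # illustrative assumption
time_saved_share = 0.028               # ~2.8% of work hours saved on average
pass_through = (0.03, 0.07)            # 3-7% of savings showing up as higher earnings

hours_saved = weekly_hours * time_saved_share             # ~1.1 hours per week
implied_wage_effect = [p * time_saved_share for p in pass_through]

print(f"Hours saved per week: {hours_saved:.1f}")
print(f"Implied earnings effect: {implied_wage_effect[0]:.2%} to {implied_wage_effect[1]:.2%}")
# ~0.08% to 0.20% -- comfortably inside the study's 1% upper bound.
```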

Making new work, not less work

One fascinating twist is that AI chatbots aren’t just about doing old work tasks faster. They seem to be creating new tasks too. Around 17% of people using them said they had new workloads, mostly brand new types of tasks.

This phenomenon happened more often in workplaces that encouraged chatbot use. It even spilled over to people not using the tools – about 5% of non-users reported new tasks popping up because of AI, especially teachers having to adapt assignments or spot AI-written homework.   

What kind of new tasks? Things like figuring out how to weave AI into daily workflows, drafting content with AI help, and importantly, dealing with the ethical side and making sure everything’s above board. It hints that companies are still very much in the ‘figuring it out’ phase, spending time and effort adapting rather than just reaping instant rewards.

What’s the verdict on the work impact of AI chatbots?

The researchers are careful not to write off generative AI completely. They see pathways for it to become more influential over time, especially as companies get better at integrating it and maybe as those “new tasks” evolve.

But for now, their message is clear: the current reality doesn’t match the hype about a massive, immediate job market overhaul.

“Despite rapid adoption and substantial investments… our key finding is that AI chatbots have had minimal impact on productivity and labor market outcomes to date,” the researchers conclude.   

It brings to mind that old quote about the early computer age: seen everywhere, except in the productivity stats. Two years on from ChatGPT’s launch kicking off the fastest tech adoption we’ve ever seen, its actual mark on jobs and pay looks surprisingly light.

The revolution might still be coming, but it seems to be taking its time.   

See also: Claude Integrations: Anthropic adds AI to your favourite work tools

Claude Integrations: Anthropic adds AI to your favourite work tools (1 May 2025)

Anthropic has just launched ‘Integrations’ for Claude, a feature that enables the AI to talk directly to your favourite daily work tools. In addition, the company has launched a beefed-up ‘Advanced Research’ feature for digging deeper than ever before.

Starting with Integrations, the feature builds on a technical standard Anthropic released last year (the Model Context Protocol, or MCP), but makes it much easier to use. Previously, setting this up was a somewhat technical process limited to local connections. Now, developers can build secure bridges allowing Claude to connect safely with apps over the web or on your desktop.

For end-users of Claude, this means you can now hook it up to a growing list of popular work software. Right out of the gate, they’ve included support for ten big names: Atlassian’s Jira and Confluence (hello, project managers and dev teams!), the automation powerhouse Zapier, Cloudflare, customer comms tool Intercom, plus Asana, Square, Sentry, PayPal, Linear, and Plaid. Stripe and GitLab are joining the party soon.

So, what’s the big deal? The real advantage here is context. When Claude can see your project history in Jira, read your team’s knowledge base in Confluence, or check task updates in Asana, it stops guessing and starts understanding what you’re working on.

“When you connect your tools to Claude, it gains deep context about your work—understanding project histories, task statuses, and organisational knowledge—and can take actions across every surface,” explains Anthropic.

They add, “Claude becomes a more informed collaborator, helping you execute complex projects in one place with expert assistance at every step.”

Let’s look at what this means in practice. Connect Zapier, and you suddenly give Claude the keys to thousands of apps linked by Zapier’s workflows. You could just ask Claude, conversationally, to trigger a complex sequence – maybe grab the latest sales numbers from HubSpot, check your calendar, and whip up some meeting notes, all without you lifting a finger in those apps.

For teams using Atlassian’s Jira and Confluence, Claude could become a serious helper. Think drafting product specs, summarising long Confluence documents so you don’t have to wade through them, or even creating batches of linked Jira tickets at once. It might even spot potential roadblocks by analysing project data.

And if you use Intercom for customer chats, this integration could be a game-changer. Intercom’s own AI assistant, Fin, can now work with Claude to do things like automatically create a bug report in Linear if a customer flags an issue. You could also ask Claude to sift through your Intercom chat history to spot patterns, help debug tricky problems, or summarise what customers are saying – making the whole journey from feedback to fix much smoother.

Anthropic is also making it easier for developers to build even more of these connections. They reckon that using their tools (or platforms like Cloudflare that handle the tricky bits like security and setup), developers can whip up a custom Integration with Claude in about half an hour. This could mean connecting Claude to your company’s unique internal systems or specialised industry software.
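
As a rough idea of what building such a connection might involve, below is a minimal sketch of an MCP server exposing a single internal tool, written against the open-source Python MCP SDK. The tool name and business logic are placeholders, and the interface shown follows the SDK’s published quickstart rather than anything Anthropic-specific, so treat it as an assumption and check the current documentation.

```python
# Minimal sketch of a custom MCP server -- names and logic are placeholders.
# Assumes the open-source Python MCP SDK ("mcp" package); verify the current
# API against the official documentation before relying on this.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("internal-orders")  # the name the client will see for this Integration

@mcp.tool()
def lookup_order(order_id: str) -> str:
    """Return the status of an order from a (hypothetical) internal system."""
    # In a real Integration this would call your internal API or database.
    fake_db = {"A-1001": "shipped", "A-1002": "awaiting payment"}
    return fake_db.get(order_id, "unknown order")

if __name__ == "__main__":
    # Runs the server so a connected client (e.g. Claude) can call the tool.
    mcp.run()
```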

Beyond tool integrations, Claude gets a serious research upgrade

Alongside these new connections, Anthropic has given Claude’s Research feature a serious boost. It could already search the web and your Google Workspace files, but the new ‘Advanced Research’ mode is built for when you need to dig really deep.

Flip the switch for this advanced mode, and Claude tackles big questions differently. Instead of just one big search, it intelligently breaks your request down into smaller chunks, investigates each part thoroughly – using the web, your Google Docs, and now tapping into any apps you’ve connected via Integrations – before pulling it all together into a detailed report.
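
Conceptually, the flow resembles the sketch below – a decompose-then-research loop in which each sub-question is investigated and the findings are stitched into a cited report. This is not Anthropic’s code; the helper functions are placeholders standing in for model calls and connected data sources (web, Google Docs, Integrations).

```python
# Conceptual sketch of a decompose-then-research loop; not Anthropic's code.
from typing import Dict, List

def decompose(question: str) -> List[str]:
    """Stand-in for the model splitting a big question into sub-questions."""
    return [f"{question} -- background", f"{question} -- recent developments"]

def investigate(sub_question: str) -> Dict[str, str]:
    """Stand-in for searching the web, documents, and connected apps."""
    return {"finding": f"notes on '{sub_question}'",
            "source": "https://example.com/placeholder"}

def synthesise(question: str, findings: List[Dict[str, str]]) -> str:
    """Combine findings into a report, keeping a link back to each source."""
    lines = [f"Report: {question}"]
    for item in findings:
        lines.append(f"- {item['finding']} (source: {item['source']})")
    return "\n".join(lines)

if __name__ == "__main__":
    question = "How are teams adopting AI chatbots at work?"
    findings = [investigate(sq) for sq in decompose(question)]
    print(synthesise(question, findings))
```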

Now, this deeper digging takes a bit more time. While many reports might only take five to fifteen minutes, Anthropic says the really complex investigations could have Claude working away for up to 45 minutes. That might sound like a while, but compare it to the hours you might spend grinding through that research manually, and it starts to look pretty appealing.

Importantly, you can trust the results. When Claude uses information from any source – whether it’s a website, an internal doc, a Jira ticket, or a Confluence page – it gives you clear links straight back to the original. No more wondering where the AI got its information from; you can check it yourself.

These shiny new Integrations and the Advanced Research mode are rolling out now in beta for folks on Anthropic’s paid Max, Team, and Enterprise plans. If you’re on the Pro plan, don’t worry – access is coming your way soon.

Also worth noting: the standard web search feature inside Claude is now available everywhere, for everyone on any paid Claude.ai plan (Pro and up). No more geographical restrictions on that front.

Putting it all together, these updates and integrations show Anthropic is serious about making Claude genuinely useful in a professional context. By letting it plug directly into the tools we already use and giving it more powerful ways to analyse information, they’re pushing Claude towards being less of a novelty and more of an essential part of the modern toolkit.

(Image credit: Anthropic)

See also: Baidu ERNIE X1 and 4.5 Turbo boast high performance at low cost

Duolingo shifts to AI-first model, cutting contractor roles (30 April 2025)

Duolingo is restructuring parts of its workforce as it shifts toward becoming an “AI-first” company, according to an internal memo from CEO and co-founder Luis von Ahn that was later shared publicly on the company’s LinkedIn page.

The memo outlines a series of planned changes to how the company operates, with a particular focus on how artificial intelligence will be used to streamline processes, reduce manual tasks, and scale content development.

Duolingo will gradually stop using contractors for work that AI can take over. The company will also begin evaluating job candidates and employee performance partly based on how they use AI tools. Von Ahn said that headcount increases will only be considered when a team can no longer automate parts of its work effectively.

“Being AI-first means we will need to rethink much of how we work. Making minor tweaks to systems designed for humans won’t get us there,” von Ahn wrote. “AI helps us get closer to our mission. To teach well, we need to create a massive amount of content, and doing that manually doesn’t scale.”

One of the main drivers behind the shift is the need to produce content more quickly, and von Ahn says that producing new content manually would take decades. By integrating AI into its workflow, Duolingo has replaced processes he described as slow and manual with ones that are more efficient and automated.

The company has also used AI to develop features that weren’t previously feasible, such as an AI-powered video call feature that aims to provide tutoring on a par with human instructors. According to von Ahn, tools like this move the Duolingo platform closer to its mission – to deliver language instruction globally.

The internal shift is not limited to content creation or product development. Von Ahn said most business functions will be expected to rethink how they operate and identify opportunities to embed AI into daily work. Teams will be encouraged to adopt what he called “constructive constraints” – policies that push them to prioritise automation before requesting additional resources.

The move echoes a broader trend in the tech industry. Shopify CEO Tobi Lütke recently gave a similar directive to employees, urging them to demonstrate why tasks couldn’t be completed with AI before requesting new headcount. Both companies appear to be setting new expectations for how teams manage growth in an AI-dominated environment.

Duolingo’s leadership maintains the changes are not intended to reduce its focus on employee well-being, and the company will continue to support staff with training, mentorship, and tools designed to help employees adapt to new workflows. The goal, von Ahn wrote, is not to replace staff with AI, but to eliminate bottlenecks and allow employees to concentrate on complex or creative work.

“AI isn’t just a productivity boost,” von Ahn wrote. “It helps us get closer to our mission.”

The company’s move toward more automation reflects a belief that waiting too long to embrace AI could be a missed opportunity. Von Ahn pointed to Duolingo’s early investment in mobile-first design in 2012 as a model. That shift helped the company gain visibility and user adoption, including being named Apple’s iPhone App of the Year in 2013. The decision to go “AI-first” is framed as a similarly forward-looking step.

The transition is expected to take some time. Von Ahn acknowledged that not all systems are ready for full automation and that integrating AI into certain areas, like codebase analysis, could take longer. Nevertheless, he said moving quickly – even if it means accepting occasional setbacks – is more important than waiting for the technology to be fully mature.

By placing AI at the centre of its operations, Duolingo is aiming to deliver more scalable learning experiences and manage internal resources more efficiently. The company plans to provide additional updates as the implementation progresses.

(Photo by Unsplash)

See also: AI in education: Balancing promises and pitfalls

Alarming rise in AI-powered scams: Microsoft reveals $4B in thwarted fraud (24 April 2025)

AI-powered scams are evolving rapidly as cybercriminals use new technologies to target victims, according to Microsoft’s latest Cyber Signals report.

Over the past year, the tech giant says it has prevented $4 billion in fraud attempts, blocking approximately 1.6 million bot sign-up attempts every hour – showing the scale of this growing threat.

The ninth edition of Microsoft’s Cyber Signals report, titled “AI-powered deception: Emerging fraud threats and countermeasures,” reveals how artificial intelligence has lowered the technical barriers for cybercriminals, enabling even low-skilled actors to generate sophisticated scams with minimal effort.

What previously took scammers days or weeks to create can now be accomplished in minutes.

The democratisation of fraud capabilities represents a shift in the criminal landscape that affects consumers and businesses worldwide.

The evolution of AI-enhanced cyber scams

Microsoft’s report highlights how AI tools can now scan and scrape the web for company information, helping cybercriminals build detailed profiles of potential targets for highly-convincing social engineering attacks.

Bad actors can lure victims into complex fraud schemes using fake AI-enhanced product reviews and AI-generated storefronts, which come complete with fabricated business histories and customer testimonials.

According to Kelly Bissell, Corporate Vice President of Anti-Fraud and Product Abuse at Microsoft Security, the threat numbers continue to increase. “Cybercrime is a trillion-dollar problem, and it’s been going up every year for the past 30 years,” Bissell said in the report.

“I think we have an opportunity today to adopt AI faster so we can detect and close the gap of exposure quickly. Now we have AI that can make a difference at scale and help us build security and fraud protections into our products much faster.”

The Microsoft anti-fraud team reports that AI-powered fraud attacks happen globally, with significant activity originating from China and Europe – particularly Germany, due to its status as one of the largest e-commerce markets in the European Union.

The report notes that the larger a digital marketplace is, the more likely a proportional degree of attempted fraud will occur.

E-commerce and employment scams leading

Two particularly concerning areas of AI-enhanced fraud are e-commerce and job recruitment scams. In the e-commerce space, fraudulent websites can now be created in minutes using AI tools, with minimal technical knowledge required.

Sites often mimic legitimate businesses, using AI-generated product descriptions, images, and customer reviews to fool consumers into believing they’re interacting with genuine merchants.

Adding another layer of deception, AI-powered customer service chatbots can interact convincingly with customers, delay chargebacks by stalling with scripted excuses, and manipulate complaints with AI-generated responses that make scam sites appear professional.

Job seekers are equally at risk. According to the report, generative AI has made it significantly easier for scammers to create fake listings on various employment platforms. Criminals generate fake profiles with stolen credentials, fake job postings with auto-generated descriptions, and AI-powered email campaigns to phish job seekers.

AI-powered interviews and automated emails enhance the credibility of these scams, making them harder to identify. “Fraudsters often ask for personal information, like resumes or even bank account details, under the guise of verifying the applicant’s information,” the report says.

Red flags include unsolicited job offers, requests for payment and communication through informal platforms like text messages or WhatsApp.

Microsoft’s countermeasures to AI fraud

To combat emerging threats, Microsoft says it has implemented a multi-pronged approach across its products and services. Microsoft Defender for Cloud provides threat protection for Azure resources, while Microsoft Edge, like many browsers, features website typo protection and domain impersonation protection. Edge is noted by the Microsoft report as using deep learning technology to help users avoid fraudulent websites.

The company has also enhanced Windows Quick Assist with warning messages to alert users about possible tech support scams before they grant access to someone claiming to be from IT support. Microsoft now blocks an average of 4,415 suspicious Quick Assist connection attempts daily.

Microsoft has also introduced a new fraud prevention policy as part of its Secure Future Initiative (SFI). As of January 2025, Microsoft product teams must perform fraud prevention assessments and implement fraud controls as part of their design process, ensuring products are “fraud-resistant by design.”

As AI-powered scams continue to evolve, consumer awareness remains important. Microsoft advises users to be cautious of urgency tactics, verify website legitimacy before making purchases, and never provide personal or financial information to unverified sources.

For enterprises, implementing multi-factor authentication and deploying deepfake-detection algorithms can help mitigate risk.

See also: Wozniak warns AI will power next-gen scams

ChatGPT got another viral moment with ‘AI action figure’ trend (14 April 2025)

ChatGPT’s image generation feature has sparked a new wave of personalised digital creations, with LinkedIn users leading a trend of turning themselves into action figures.

The craze, which began picking up momentum after the viral Studio Ghibli-style portraits, sees users sharing images of themselves as boxed dolls – complete with accessories and job-themed packaging.

There are several variations in the latest wave of AI-generated self-representation. The most common format is similar to a traditional action figure or Barbie doll, with props like coffee mugs, books, and laptops reflecting users’ professional lives. The images are designed to resemble toy store displays, complete with bold taglines and personalised packaging.

The movement gained initial attention on LinkedIn, where professionals used the format to showcase their brand identities more playfully. The “AI Action Figure” format, in particular, resonated with marketers, consultants, and others looking to present themselves as standout figures – literally. The trend has since spread to other platforms, including Instagram, TikTok, and Facebook, though engagement remains largely centred on LinkedIn.

ChatGPT’s image tool – part of its GPT-4o release – serves as the engine. Users upload a high-resolution photo of themselves, usually full-body, with a custom prompt describing how the final image should look. Details frequently include the person’s name, accessories, outfit styles, and package details. Some opt for a nostalgic “Barbiecore” vibe with pink tones and sparkles, while others stick to a corporate design that reflects their day job.

Refinements are common. Many users go through multiple image generations, changing accessories and rewording prompts until the figure matches their desired personality or profession. The result is a glossy, toy-style portrait that sits somewhere between humour and personal branding.
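
For those who prefer scripting to the chat window, the same idea could, in principle, be reproduced through the OpenAI API. The sketch below is illustrative only: the model identifier, endpoint availability, and the example name and accessories are assumptions, and the viral trend itself runs through the ChatGPT app rather than the API.

```python
# Illustrative only: one way to script the same idea via the OpenAI API.
# Model name ("gpt-image-1") and endpoint availability are assumptions --
# the trend described in the article runs through the ChatGPT app itself.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

prompt = (
    "Turn the person in this photo into a boxed action figure. "
    "Name on the packaging: 'Alex, Data Analyst' (hypothetical). "
    "Accessories: laptop, coffee mug, and a notebook. "
    "Style: glossy toy-store blister pack with a bold tagline."
)

result = client.images.edit(
    model="gpt-image-1",                      # assumed model identifier
    image=open("alex_full_body.png", "rb"),   # hypothetical local photo
    prompt=prompt,
)
print(result.data[0])  # generated image payload (URL or base64, model-dependent)
```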

While the toy-style trend hasn’t seen the same viral reach as the Ghibli portrait craze, it has still sparked a steady flow of content across platforms. Hashtags like #AIBarbie and #BarbieBoxChallenge have gained traction, and some brands – including Mac Cosmetics and NYX – were quick to participate. A few public figures have joined in too, most notably US Representative Marjorie Taylor Greene, who shared a doll version of herself featuring accessories like a Bible and gavel.

Regardless of the buzz, engagement levels are different. Many posts receive limited interaction, and most well-known influencers have avoided the trend. Nevertheless, it highlights ChatGPT’s growing presence in mainstream online culture, and its ability to respond to users’ creativity using relatively simple tools.

This is not the first time ChatGPT’s image generation tool has overwhelmed the platform. When the Ghibli-style portraits first went viral, demand spiked so dramatically that OpenAI temporarily limited image generation for free accounts. CEO Sam Altman later described the surge in users as “biblical demand,” noting a dramatic rise in daily active users and infrastructure stress.

The Barbie/action figure trend, though at a smaller scale, follows that same path – using ChatGPT’s simple interface and its growing popularity as a creative tool. As with other viral AI visuals, the trend has also raised broader conversations about identity, aesthetics, and self-presentation in digital spaces. However, unlike the Ghibli portrait craze, it hasn’t attracted much criticism – at least not yet.

The format’s appeal lies in its simplicity. It offers users a way to engage with AI-generated art without needing technical skills, and satisfies an urge for self-expression. The result is part professional head-shot, part novelty toy, and part visual joke, making it a surprisingly versatile format for social media sharing.

While some may see the toy model phenomenon as a gimmick, others view it as a window into what’s possible when AI tools are placed directly in users’ hands.

For now, whether it’s a mini-me holding a coffee mug or a Barbie-style figure ready for the toy shelf, ChatGPT is again changing how people choose to represent themselves in the digital age.

(Photo by Unsplash)

See also: ChatGPT hits record usage after viral Ghibli feature – Here are four risks to know first

ChatGPT hits record usage after viral Ghibli feature – Here are four risks to know first (8 April 2025)

Following the release of ChatGPT’s new image-generation tool, user activity has surged; millions of people have been drawn to a trend in which uploaded photos are reimagined in the distinctive visual style of Studio Ghibli.

The spike in interest contributed to record use levels for the chatbot and strained OpenAI’s infrastructure temporarily.

Social media platforms were soon flooded with AI-generated images styled after work by the renowned Japanese animation studio, known for titles like Spirited Away and My Neighbor Totoro. According to Similarweb, weekly active ChatGPT users passed 150 million for the first time this year.

OpenAI CEO Sam Altman said the chatbot gained one million users in a single hour in early April – matching the numbers the text-centric ChatGPT reached over five days when it first launched.

SensorTower data shows the company also recorded a jump in app activity. Weekly active users, downloads, and in-app revenue all hit record levels last week, following the update to GPT-4o that enabled new image-generation features. Compared to late March, downloads rose by 11%, active users grew 5%, and revenue increased by 6%.

The new tool’s popularity caused service slowdowns and intermittent outages. OpenAI acknowledged the increased load, with Altman saying that users should expect delays in feature roll-outs and occasional service disruption as capacity issues are settled.

Legal questions surface around ChatGPT’s Ghibli-style AI art

The viral use of Studio Ghibli-inspired AI imagery from OpenAI’s ChatGPT has raised concerns about copyright. Legal experts point out that while artistic styles themselves may not always be protected, closely mimicking a well-known look could fall into a legal grey area.

“The legal landscape of AI-generated images mimicking Studio Ghibli’s distinctive style is an uncertain terrain. Copyright law has generally protected only specific expressions rather than artistic styles themselves,” said Evan Brown, partner at law firm Neal & McDevitt.

Miyazaki’s past comments have also resurfaced. In 2016, the Studio Ghibli co-founder responded to early AI-generated artwork by saying, “I am utterly disgusted. I would never wish to incorporate this technology into my work at all.”

OpenAI has not commented on whether the model used for its image generation was trained on content similar to Ghibli’s animation.

Data privacy and personal risk

The trend has also drawn attention to user privacy and data security. Christoph C. Cemper, founder of AI prompt management firm AIPRM, cautioned that uploading a photo for artistic transformation may come with more risks than many users realise.

“When you upload a photo to an AI art generator, you’re giving away your biometric data (your face). Some AI tools store that data, use it to train future models, or even sell it to third parties – none of which you may be fully aware of unless you read the fine print,” Cemper said.

OpenAI’s privacy policy confirms that it collects both personal information and use data, including images and content submitted by users. Unless users opt out of training data collection or request deletion via their settings, content will be retained and used to improve future AI models.

Cemper said that once a facial image is uploaded, it becomes vulnerable to misuse. That data could be scraped, leaked, or used in identity theft, deepfake content, or other impersonation scams. He also pointed to prior incidents where private images were found in public AI datasets like LAION-5B, which are used to train various tools like Stable Diffusion.

Copyright and licensing considerations

There are also concerns that AI-generated content styled after recognisable artistic brands could cross into copyright infringement. While creating art in the style of Studio Ghibli, Disney, or Pixar might seem harmless, legal experts warn that such works may be considered derivative, especially if the mimicry is too close.

In 2022, several artists filed a class-action lawsuit against AI companies, claiming their models were trained on original artwork without consent. The cases reflect the broader conversation around how to balance innovation with creators’ rights as generative AI becomes more widely used.

Cemper also advised users to review carefully the terms of service on AI platforms. Many contain licensing clauses with language like “transferable rights,” “non-exclusive,” or “irrevocable licence,” which allow platforms to reproduce, modify, or distribute submitted content – even after the app is deleted.

“The rollout of ChatGPT’s 4o image generator shows just how powerful AI has become as it replicates iconic artistic styles with just a few clicks. But this unprecedented capability comes with a growing risk – the lines between creativity and copyright infringement are increasingly blurred,” Cemper said.

“The rapid pace of AI development also raises significant concerns about privacy and data security. There’s a pressing need for clearer, more transparent privacy policies. Users should be empowered to make informed decisions about uploading their photos or personal data.”

Search interest in “ChatGPT Studio Ghibli” has increased by more than 1,200% in the past week, but alongside the creativity and virality comes a wave of serious concerns about privacy, copyright, and data use. As AI image tools get more advanced and accessible, users may want to think twice before uploading personal images, especially if they’re not sure where the data may ultimately end up.

(Image by YouTube Fireship)

See also: Midjourney V7: Faster AI image generation


Beyond acceleration: the rise of Agentic AI (7 April 2025)

We already find ourselves at an inflection point with AI. According to a recent study by McKinsey, we’ve reached the turning point where ‘businesses must look beyond automation and towards AI-driven reinvention’ to stay ahead of the competition. While the era of AI-driven acceleration isn’t over, a new phase has already begun – one that goes beyond making existing workflows more efficient and moves toward replacing existing workflows and/or creating new ones.

This is the age of Agentic AI.

Truly autonomous AI agents are capable of reshaping operations entirely. Systems can act autonomously, make decisions, and adapt dynamically. These agents will go beyond conversational interfaces that merely respond to user input, proactively managing tasks, navigating complex IT environments, and orchestrating business processes.

However, this shift isn’t just about technology — it also comes with a few considerations. Companies will need to address regulatory challenges, build AI literacy, and focus on applied use cases with clear ROI if the evolution is to succeed.

Moving from acceleration to transformation

So far, companies have primarily used AI to accelerate existing processes, whether through chatbots improving customer interactions or AI-driven analytics optimising workflows. In the end, these implementations make businesses more efficient.

But acceleration alone is no longer enough to stay ahead in the game. The real opportunity lies in replacing outdated workflows entirely and creating new, previously impossible capabilities.

For example, AI plays a vital role in automating troubleshooting and enhancing security within the network industry. But what if AI could autonomously anticipate and predict failures, reconfigure networks proactively to avoid service level degradations in real time, and optimise performance without human intervention? As AI becomes more autonomous, its ability to not just assist but act independently will be key to unlocking new levels of productivity and innovation.

That’s what Agentic AI is about.

Navigating the AI regulatory landscape

However, as AI becomes more autonomous, the regulatory landscape governing its deployment will evolve in parallel. The introduction of the EU AI Act, alongside global regulatory frameworks, means companies must already navigate new compliance requirements related to AI transparency, bias mitigation, and ethical deployment.

That means AI governance can no longer be an afterthought.

AI-powered systems must be designed with built-in compliance mechanisms, data privacy protections, and explainability features to build trust among users and regulators alike. Zero-trust security models will also be crucial in mitigating risks, enforcing strict access controls, and ensuring that AI decisions remain auditable and secure.

The importance of AI literacy

As stated, the success of Agentic AI’s era will depend on more than just technical capabilities – it will require alignment between leadership, developers, and end-users. As AI becomes more advanced, AI literacy becomes a key differentiator, and companies must invest in upskilling their workforce to understand AI’s capabilities, limitations, and ethical considerations. A recent report by the ICT Workforce Consortium found that 92% of information and communication technology jobs are expected to undergo significant transformation due to advancements in AI. So, without proper AI education, businesses risk misalignment between AI implementers and those who use the technology.

This can lead to a lack of trust, slow adoption, and ineffective deployment, which can impact the bottom line. So, to unlock the full potential of Agentic AI, it’s essential to build AI literacy across all levels of the organisation.

As this new era of AI blooms, companies must learn from the current era of AI adoption: focus on applied use cases with tangible ROI. The days of experimenting with AI for innovation’s sake are ending – the next generation of AI deployments must prove their worth.

In networking, it could be projects such as AI-powered autonomous network optimisation. These systems do more than automate tasks; they continuously monitor network traffic, predict congestion points, and autonomously adjust configurations to ensure optimal performance. By providing proactive insights and real-time adjustments, these AI-driven solutions help companies prevent issues and outages before they occur.

This level of AI autonomy reduces human intervention and enhances overall security and operational efficiency.

Identifying and implementing high-value, high-impact Agentic AI use cases such as these will be vital.
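
As a purely conceptual illustration of that observe-predict-act pattern (not any vendor’s product), the skeleton of such a loop might look like the following; the telemetry source, prediction model, and reconfiguration call are all placeholders:

```python
# Conceptual skeleton of an autonomous network-optimisation loop.
# All data sources, models, and actions are placeholders, not a real product.
import random
import time

def collect_telemetry() -> dict:
    """Stand-in for reading link utilisation from real monitoring systems."""
    return {"link_utilisation": random.uniform(0.2, 0.95)}

def predict_congestion(telemetry: dict) -> float:
    """Stand-in for a trained forecasting model; returns congestion risk 0-1."""
    return telemetry["link_utilisation"]  # crudely proxy risk by current load

def reconfigure_network(action: str) -> None:
    """Stand-in for pushing a config change via the network controller's API."""
    print(f"[agent] applying action: {action}")

def control_loop(risk_threshold: float = 0.8, cycles: int = 3) -> None:
    for _ in range(cycles):
        telemetry = collect_telemetry()
        risk = predict_congestion(telemetry)
        if risk > risk_threshold:
            reconfigure_network("reroute traffic away from congested link")
        else:
            print(f"[agent] risk {risk:.2f} below threshold; no action")
        time.sleep(1)  # in practice this would run continuously

if __name__ == "__main__":
    control_loop()
```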

Trust as the adoption hurdle

While we’re entering a new era, trust plays a key role in widespread AI adoption. Users must feel confident that AI decisions are accurate, fair, and explainable. Even the most advanced AI models will face challenges gaining acceptance without transparency.

This is particularly relevant as AI transitions from assisting users to making autonomous decisions. Whether AI agents manage IT infrastructure or drive customer interactions, organisations must ensure that AI decisions are auditable, unbiased, and aligned with business objectives.

Without transparency and accountability, companies may face resistance from both employees and customers.

The future of AI

Looking ahead, 2025 holds exciting potential for AI. As it reaches a new level of maturity, its success will depend on how well organisations, governments, and individuals adapt to its growing presence in everyday life. Moving beyond efficiency and automation, AI has the opportunity to become a powerful driver of intelligent decision-making, problem-solving, and innovation.

Organisations that harness Agentic AI effectively – balancing autonomy with oversight – will see the greatest benefits. However, success will require a commitment to transparency, education, and ethical deployment to build trust and ensure AI is a true enabler of progress.

Because AI is no longer just an accelerant, it is a transformative force reshaping how we work, communicate, and interact with technology.

Photo by Ryan De Hamer on Unsplash

Amazon Nova Act: A step towards smarter, web-native AI agents (1 April 2025)

Amazon has introduced Nova Act, an advanced AI model engineered for smarter agents that can execute tasks within web browsers.

While large language models popularised the concept of “agents” as tools that answer queries or retrieve information via methods such as Retrieval-Augmented Generation (RAG), Amazon envisions something more robust. The company defines agents not just as responders but as entities capable of performing tangible, multi-step tasks in diverse digital and physical environments.

“Our dream is for agents to perform wide-ranging, complex, multi-step tasks like organising a wedding or handling complex IT tasks to increase business productivity,” said Amazon.

Current market offerings often fall short, with many agents requiring continuous human supervision and their functionality dependent on comprehensive API integration—something not feasible for all tasks. Nova Act is Amazon’s answer to these limitations.

Alongside the model, Amazon is releasing a research preview of the Amazon Nova Act SDK. Using the SDK, developers can create agents capable of automating web tasks like submitting out-of-office notifications, scheduling calendar holds, or enabling automatic email replies.

The SDK aims to break down complex workflows into dependable “atomic commands” such as searching, checking out, or interacting with specific interface elements like dropdowns or popups. Detailed instructions can be added to refine these commands, allowing developers to, for instance, instruct an agent to bypass an insurance upsell during checkout.

To further enhance accuracy, the SDK supports browser manipulation via Playwright, API calls, Python integrations, and parallel threading to overcome web page load delays.
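For a sense of what this looks like in practice, here is a minimal sketch in the style of the Nova Act SDK research preview. The package name, the NovaAct class, the act() call, and the example shop URL are assumptions based on Amazon's description of atomic commands; exact names and signatures may differ in the preview.

```python
# Sketch only: assumes the research-preview SDK exposes a NovaAct context manager
# and an act() method for atomic natural-language commands, as Amazon describes.
from nova_act import NovaAct

with NovaAct(starting_page="https://www.example-store.com") as nova:
    # Break the workflow into small, dependable "atomic" commands.
    nova.act("search for wool socks")
    nova.act("open the first result and add it to the cart")
    # Detailed instructions can steer the agent past distractions such as upsells.
    nova.act("proceed to checkout, declining any shipping insurance offered")
```

Because each step is its own command, a failed step can be retried or refined without rerunning the whole workflow.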

Nova Act: Exceptional performance on benchmarks

Unlike other generative models that deliver middling accuracy on complex tasks, Nova Act prioritises reliability. Amazon highlights scores of over 90% on internal evaluations of capabilities that typically trip up competing models.

Nova Act achieved a near-perfect 0.939 on the ScreenSpot Web Text benchmark, which measures how accurately models follow natural language instructions for text-based interactions, such as adjusting font sizes. Competing models such as Claude 3.7 Sonnet (0.900) and OpenAI’s CUA (0.883) trail by significant margins.

Similarly, Nova Act scored 0.879 in the ScreenSpot Web Icon benchmark, which tests interactions with visual elements like rating stars or icons. While the GroundUI Web test, designed to assess an AI’s proficiency in navigating various user interface elements, showed Nova Act slightly trailing competitors, Amazon sees this as an area ripe for improvement as the model evolves.

Amazon stresses its focus on delivering practical reliability. Once an agent built using Nova Act functions as expected, developers can deploy it headlessly, integrate it as an API, or even schedule it to run tasks asynchronously. In one demonstrated use case, an agent automatically orders a salad for delivery every Tuesday evening without requiring ongoing user intervention.
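As a rough illustration of that deployment model, the sketch below schedules a hypothetical Nova Act script using the third-party schedule package; the headless flag and the delivery-site URL are assumptions, and a cron job invoking the script would work just as well.

```python
# Sketch: running an agent headlessly on a weekly schedule (pip install schedule).
# NovaAct usage and the headless flag are assumptions based on Amazon's description.
import time

import schedule
from nova_act import NovaAct

def order_salad() -> None:
    with NovaAct(starting_page="https://www.example-delivery.com", headless=True) as nova:
        nova.act("order my usual garden salad for delivery this evening")

schedule.every().tuesday.at("17:30").do(order_salad)  # mirrors the Tuesday-evening example

while True:
    schedule.run_pending()
    time.sleep(60)
```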

Amazon sets out its vision for scalable and smart AI agents

One of Nova Act’s standout features is its ability to transfer its user interface understanding to new environments with minimal additional training. Amazon shared an instance where Nova Act performed admirably in browser-based games, even though its training had not included video game experiences. This adaptability positions Nova Act as a versatile agent for diverse applications.

This capability is already being leveraged in Amazon’s own ecosystem. Within Alexa+, Nova Act enables self-directed web navigation to complete tasks for users, even when API access is not comprehensive enough. This represents a step towards smarter AI assistants that can function independently, harnessing their skills in more dynamic ways.

Amazon is clear that Nova Act represents the first stage in a broader mission to craft intelligent, reliable AI agents capable of handling increasingly complex, multi-step tasks. 

Expanding beyond simple instructions, Amazon’s focus is on training agents through reinforcement learning across varied, real-world scenarios rather than overly simplistic demonstrations. This foundational model serves as a checkpoint in a long-term training curriculum for Nova models, indicating the company’s ambition to reshape the AI agent landscape.

“The most valuable use cases for agents have yet to be built,” Amazon noted. “The best developers and designers will discover them. This research preview of our Nova Act SDK enables us to iterate alongside these builders through rapid prototyping and iterative feedback.”

Nova Act is a step towards making AI agents truly useful for complex, digital tasks. From rethinking benchmarks to emphasising reliability, its design philosophy is centred around empowering developers to move beyond what’s possible with current-generation tools. 

See also: Anthropic provides insights into the ‘AI biology’ of Claude

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post Amazon Nova Act: A step towards smarter, web-native AI agents appeared first on AI News.

OpenAI pulls free GPT-4o image generator after one day https://www.artificialintelligence-news.com/news/openai-pulls-free-gpt-4o-image-generator-after-one-day/ Thu, 27 Mar 2025 12:24:39 +0000

OpenAI has pulled its upgraded image generation feature, powered by the advanced GPT-4o reasoning model, from the free tier of ChatGPT.

The decision comes just a day after the update was launched, following an unforeseen surge in users creating images in the distinctive style of renowned Japanese animation house, Studio Ghibli.

The update, which promised to deliver enhanced realism in both AI-generated images and text, was intended to showcase the capabilities of GPT-4o. 

This new model employs an “autoregressive approach” to image creation, building visuals from left to right and top to bottom, a method that contrasts with the simultaneous generation employed by older models. This technique is designed to improve the accuracy and lifelike quality of the imagery produced.
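To make the idea concrete, here is a toy illustration of autoregressive generation over image tokens; it is not OpenAI's implementation, and the model, token vocabulary, and decoder are all stand-ins.

```python
# Conceptual illustration of autoregressive image generation -- not OpenAI's
# actual implementation. Each image token is predicted from the prompt plus all
# previously generated tokens, proceeding left to right, top to bottom, rather
# than the whole image being produced in one simultaneous pass.
import random

class ToyAutoregressiveModel:
    # Stand-in for a real model: returns a random "image token" ID.
    def predict_next(self, context: list[int]) -> int:
        return random.randrange(8192)

def generate_image_tokens(model, prompt_tokens: list[int], height: int, width: int) -> list[int]:
    image_tokens: list[int] = []
    for _ in range(height * width):              # row-major order: left-to-right, top-to-bottom
        context = prompt_tokens + image_tokens   # condition on everything generated so far
        image_tokens.append(model.predict_next(context))
    return image_tokens                          # a separate decoder would map tokens to pixels

tokens = generate_image_tokens(ToyAutoregressiveModel(), prompt_tokens=[1, 2, 3], height=4, width=4)
```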

Furthermore, the new model generates sharper and more coherent text within images, addressing a common shortcoming of previous AI models which often resulted in blurry or nonsensical text. 

OpenAI also conducted post-launch training, guided by human feedback, to identify and rectify common errors in both text and image outputs.

However, the public response to the image generation upgrade took an unexpected turn almost immediately after its release on ChatGPT. 

Users embraced the ability to create images in the iconic style of Studio Ghibli, sharing their imaginative creations across various social media platforms. These included reimagined scenes from classic films like “The Godfather” and “Star Wars,” as well as popular internet memes such as “distracted boyfriend” and “disaster girl,” all rendered with the aesthetic of the beloved animation studio.

Even OpenAI CEO Sam Altman joined in on the fun, changing his X profile picture to a Studio Ghibli-esque rendition of himself:

Screenshot of the profile of OpenAI CEO Sam Altman on Twitter

However, later that day, Altman posted on X announcing a temporary delay in the rollout of the image generator update for free ChatGPT users.

While paid subscribers to ChatGPT Plus, Pro, and Team continue to have access to the feature, Altman provided no specific timeframe for when the functionality would return to the free tier.

The virality of the Studio Ghibli-style images seemingly prompted OpenAI to reconsider its rollout strategy. While the company had attempted to address ethical and legal considerations surrounding AI image generation, the sheer volume and nature of the user-generated content appear to have caught them off-guard.

The intersection of AI-generated art and intellectual property rights is a complex and often debated area. Artistic style has not historically been protected by copyright law in the same way that specific works are.

Despite this legal nuance, OpenAI’s swift decision to withdraw the GPT-4o image generation feature from its free tier suggests a cautious approach. The company appears to be taking a step back to evaluate the situation and determine its next course of action in light of the unexpected popularity of Ghibli-inspired AI art.

OpenAI’s decision to roll back the deployment of its latest image generation feature underscores the ongoing uncertainty around not just copyright law, but also the ethical implications of using AI to replicate human creativity.

(Photo by Kai Pilger)

See also: Gemini 2.5: Google cooks up its ‘most intelligent’ AI model to date

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post OpenAI pulls free GPT-4o image generator after one day appeared first on AI News.

Anthropic’s AI assistant Claude learns to search the web https://www.artificialintelligence-news.com/news/anthropic-ai-assistant-claude-learns-search-the-web/ Fri, 21 Mar 2025 12:32:17 +0000

Anthropic has announced its AI assistant Claude can now search the web, providing users with more up-to-date and relevant responses.

This integration of web search functionality means Claude can now access the latest information to expand its knowledge base beyond its initial training data.

A key feature of this update is the emphasis on transparency and fact-checking. Anthropic highlights that “When Claude incorporates information from the web into its responses, it provides direct citations so you can easily fact check sources.”

Furthermore, Claude aims to streamline the information-gathering process for users. Instead of requiring users to manually sift through search engine results, “Claude processes and delivers relevant sources in a conversational format.”

Anthropic believes this enhancement will unlock a multitude of new use cases for Claude across various industries. They outlined several ways users can leverage Claude with web search:

  • Sales teams: Can now “transform account planning and drive higher win rates through informed conversations with prospects by analysing industry trends to learn key initiatives and pain points.” This allows sales professionals to have more informed and persuasive conversations with potential clients.
  • Financial analysts: Can “assess current market data, earnings reports, and industry trends to make better investment decisions and inform financial model assumptions.” Access to real-time financial data can improve the accuracy and timeliness of financial analysis.
  • Researchers: Can “build stronger grant proposals and literature reviews by searching across primary sources on the web, spotting emerging trends and identifying gaps in the current literature.” This capability can accelerate the research process and lead to more comprehensive and insightful findings.
  • Shoppers: Can “compare product features, prices, and reviews across multiple sources to make more informed purchase decisions.”

While the initial rollout is limited to paid users in the US, Anthropic assures that support for users on their free plan and more countries is coming soon.

To activate the web search feature, users simply need to “toggle on web search in your profile settings and start a conversation with Claude 3.7 Sonnet.” Once enabled, “When applicable, Claude will search the web to inform its response.”

This update aims to make Claude a more powerful and versatile tool for a wide range of tasks. By providing access to real-time information and ensuring transparency through citations, Anthropic is addressing key challenges and further solidifying Claude’s position as a leading AI assistant.

(Image credit: Anthropic)

See also: Hugging Face calls for open-source focus in the AI Action Plan

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post Anthropic’s AI assistant Claude learns to search the web appeared first on AI News.

Opera introduces browser-integrated AI agent https://www.artificialintelligence-news.com/news/opera-introduces-browser-integrated-ai-agent/ Mon, 03 Mar 2025 16:34:09 +0000

Opera has introduced “Browser Operator,” a native AI agent designed to perform tasks for users directly within the browser.

Rather than acting as a separate tool, Browser Operator is an extension of the browser itself—designed to empower users by automating repetitive tasks like purchasing products, completing online forms, and gathering web content.

Unlike server-based AI integrations which require sensitive data to be sent to third-party servers, Browser Operator processes tasks locally within the Opera browser.

Opera’s demonstration video showcases how Browser Operator can streamline an everyday task like buying socks. Instead of manually scrolling through product pages or filling out payment forms, users could delegate the entire process to Browser Operator—allowing them to shift focus to activities that matter more to them, such as spending time with loved ones.

Harnessing natural language processing powered by Opera’s AI Composer Engine, Browser Operator interprets written instructions from users and executes corresponding tasks within the browser. All operations occur locally on a user’s device, leveraging the browser’s own infrastructure to safely and swiftly complete commands.  

If Browser Operator encounters a sensitive step in the process, such as entering payment details or approving an order, it pauses and requests the user’s input. You also have the freedom to intervene and take control of the process at any time.  

Every step Browser Operator takes is transparent and fully reviewable, providing users a clear understanding of how tasks are being executed. If mistakes occur – like placing an incorrect order – you can further instruct the AI agent to make amends, such as cancelling the order or adjusting a form.

The key differentiators: Privacy, performance, and precision  

What sets Browser Operator apart from other AI-integrated tools is its localised, privacy-first architecture. Unlike competitors that depend on screenshots or video recordings to understand webpage content, Opera’s approach uses the Document Object Model (DOM) Tree and browser layout data—a textual representation of the webpage.  

This difference offers several key advantages:

  • Faster task completion: Browser Operator doesn’t need to “see” and interpret pixels on the screen or emulate mouse movements. Instead, it accesses web page elements directly, avoiding unnecessary overhead and allowing it to process pages holistically without scrolling.
  • Enhanced privacy: With all operations conducted on the browser itself, user data – including logins, cookies, and browsing history – remains secure on the local device. No screenshots, keystrokes, or personal information are sent to Opera’s servers.
  • Easier interaction with page elements: The AI can engage with elements hidden from the user’s view, such as behind cookie popups or verification dialogs, enabling seamless access to web page content.
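To make the contrast concrete, the sketch below uses Playwright in Python (not Opera's engine) to show DOM-based targeting: elements are located by their textual and structural identity in the page rather than by interpreting pixels, which is why no screenshots or simulated mouse movement are needed. The URL and element names are illustrative.

```python
# Conceptual illustration of DOM-based element targeting -- not Opera's implementation.
# Setup: pip install playwright && playwright install chromium
from playwright.sync_api import sync_playwright

with sync_playwright() as p:
    browser = p.chromium.launch(headless=True)
    page = browser.new_page()
    page.goto("https://example.com/shop")  # placeholder URL

    # Elements are addressed through the DOM tree rather than screen coordinates,
    # so even elements hidden behind overlays remain reachable.
    page.get_by_role("button", name="Accept cookies").click()
    page.get_by_placeholder("Search").fill("wool socks")
    page.get_by_role("button", name="Search").click()

    browser.close()
```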

By enabling the browser to autonomously perform tasks, Opera is taking a significant step forward in making browsers “agentic”—not just tools for accessing the internet, but assistants that actively enhance productivity.  

See also: You.com ARI: Professional-grade AI research agent for businesses

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post Opera introduces browser-integrated AI agent appeared first on AI News.
