Google AMIE: AI doctor learns to ‘see’ medical images

Google is giving its diagnostic AI the ability to understand visual medical information with its latest research on AMIE (Articulate Medical Intelligence Explorer).

Imagine chatting with an AI about a health concern, and instead of just processing your words, it could actually look at the photo of that worrying rash or make sense of your ECG printout. That’s what Google is aiming for.

We already knew AMIE showed promise in text-based medical chats, thanks to earlier work published in Nature. But let’s face it, real medicine isn’t just about words.

Doctors rely heavily on what they can see – skin conditions, readings from machines, lab reports. As the Google team rightly points out, even simple instant messaging platforms “allow static multimodal information (e.g., images and documents) to enrich discussions.”

Text-only AI was missing a huge piece of the puzzle. The big question, as the researchers put it, was whether “LLMs can conduct diagnostic clinical conversations that incorporate this more complex type of information.”

Google teaches AMIE to look and reason

Google’s engineers have beefed up AMIE using their Gemini 2.0 Flash model as the brains of the operation. They’ve combined this with what they call a “state-aware reasoning framework.” In plain English, this means the AI doesn’t just follow a script; it adapts its conversation based on what it’s learned so far and what it still needs to figure out.

It’s close to how a human clinician works: gathering clues, forming ideas about what might be wrong, and then asking for more specific information – including visual evidence – to narrow things down.

“This enables AMIE to request relevant multimodal artifacts when needed, interpret their findings accurately, integrate this information seamlessly into the ongoing dialogue, and use it to refine diagnoses,” Google explains.

Think of the conversation flowing through stages: first gathering the patient’s history, then moving towards diagnosis and management suggestions, and finally follow-up. The AI constantly assesses its own understanding, asking for that skin photo or lab result if it senses a gap in its knowledge.
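
To make the idea concrete, here is a purely illustrative toy sketch of such a phased, self-assessing loop in Python – not Google’s implementation, whose internals are unpublished. The phase names, checklists, and helper function are all invented for illustration:

    from enum import Enum, auto

    class Phase(Enum):
        HISTORY = auto()
        DIAGNOSIS = auto()
        FOLLOW_UP = auto()

    # Toy "self-assessment": the agent tracks which facts it still lacks and
    # moves to the next phase only once the current checklist is complete.
    CHECKLISTS = {
        Phase.HISTORY: ["symptom_duration", "skin_photo"],
        Phase.DIAGNOSIS: ["differential", "management_plan"],
    }

    def next_action(phase, known_facts):
        """Pick the next conversational move based on the dialogue state."""
        missing = [f for f in CHECKLISTS.get(phase, []) if f not in known_facts]
        if not missing:
            return ("advance_phase", None)
        need = missing[0]
        if need == "skin_photo":  # a gap that calls for a multimodal artifact
            return ("request_image", "Could you upload a photo of the rash?")
        return ("ask_question", f"Please tell me about your {need}.")

    # Example turn: history phase, duration known, photo still missing.
    print(next_action(Phase.HISTORY, {"symptom_duration"}))
    # -> ('request_image', 'Could you upload a photo of the rash?')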

To get this right without endless trial-and-error on real people, Google built a detailed simulation lab.

Google created lifelike patient cases, pulling realistic medical images and data from sources like the PTB-XL ECG database and the SCIN dermatology image set, adding plausible backstories using Gemini. Then they let AMIE ‘chat’ with simulated patients within this setup and automatically checked how well it performed on things like diagnostic accuracy and avoiding errors (or ‘hallucinations’).

The virtual OSCE: Google puts AMIE through its paces

The real test came in a setup designed to mirror how medical students are assessed: the Objective Structured Clinical Examination (OSCE).

Google ran a remote study involving 105 different medical scenarios. Real actors, trained to portray patients consistently, interacted either with the new multimodal AMIE or with actual human primary care physicians (PCPs). These chats happened through an interface where the ‘patient’ could upload images, just like you might in a modern messaging app.

Afterwards, specialist doctors (in dermatology, cardiology, and internal medicine) and the patient actors themselves reviewed the conversations.

The human doctors scored everything from how well history was taken, the accuracy of the diagnosis, the quality of the suggested management plan, right down to communication skills and empathy—and, of course, how well the AI interpreted the visual information.

Surprising results from the simulated clinic

Here’s where it gets really interesting. In this head-to-head comparison within the controlled study environment, Google found AMIE didn’t just hold its own—it often came out ahead.

The AI was rated as being better than the human PCPs at interpreting the multimodal data shared during the chats. It also scored higher on diagnostic accuracy, producing differential diagnosis lists (the ranked list of possible conditions) that specialists deemed more accurate and complete based on the case details.

Specialist doctors reviewing the transcripts tended to rate AMIE’s performance higher across most areas. They particularly noted “the quality of image interpretation and reasoning,” the thoroughness of its diagnostic workup, the soundness of its management plans, and its ability to flag when a situation needed urgent attention.

Perhaps one of the most surprising findings came from the patient actors: they often found the AI to be more empathetic and trustworthy than the human doctors in these text-based interactions.

And, on a critical safety note, the study found no statistically significant difference between how often AMIE hallucinated findings from the images and how often the human physicians did.

Technology never stands still, so Google also ran some early tests swapping out the Gemini 2.0 Flash model for the newer Gemini 2.5 Flash.

Using their simulation framework, the results hinted at further gains, particularly in getting the diagnosis right (Top-3 Accuracy) and suggesting appropriate management plans.

While promising, the team is quick to add a dose of realism: these are just automated results, and “rigorous assessment through expert physician review is essential to confirm these performance benefits.”

Important reality checks

Google is commendably upfront about the limitations here. “This study explores a research-only system in an OSCE-style evaluation using patient actors, which substantially under-represents the complexity… of real-world care,” they state clearly. 

Simulated scenarios, however well-designed, aren’t the same as dealing with the unique complexities of real patients in a busy clinic. They also stress that the chat interface doesn’t capture the richness of a real video or in-person consultation.

So, what’s the next step? Moving carefully towards the real world. Google is already partnering with Beth Israel Deaconess Medical Center for a research study to see how AMIE performs in actual clinical settings with patient consent.

The researchers also acknowledge the need to eventually move beyond text and static images towards handling real-time video and audio—the kind of interaction common in telehealth today.

Giving AI the ability to ‘see’ and interpret the kind of visual evidence doctors use every day offers a glimpse of how AI might one day assist clinicians and patients. However, the path from these promising findings to a safe and reliable tool for everyday healthcare is still a long one that requires careful navigation.

(Photo by Alexander Sinn)

See also: Are AI chatbots really changing the world of work?


Are AI chatbots really changing the world of work?

We’ve heard endless predictions about how AI chatbots will transform work, but the data paints a much calmer picture—at least for now.

Despite huge and ongoing advancements in generative AI, the massive wave it was supposed to create in the world of work looks more like a ripple so far.

Researchers Anders Humlum (University of Chicago) and Emilie Vestergaard (University of Copenhagen) didn’t just rely on anecdotes. They dug deep, connecting responses from two big surveys (late 2023 and 2024) with official, detailed records about jobs and pay in Denmark.

The pair zoomed in on around 25,000 people working in 7,000 different places, covering 11 jobs thought to be right in the path of AI disruption.   

Everyone’s using AI chatbots for work, but where are the benefits?

What they found confirms what many of us see: AI chatbots are everywhere in Danish workplaces now. Most bosses are actually encouraging staff to use them, a real turnaround from the early days when companies were understandably nervous about things like data privacy.

Almost four out of ten employers have even rolled out their own in-house chatbots, and nearly a third of employees have had some formal training on these tools.   

When bosses gave the nod, the number of staff using chatbots practically doubled, jumping from 47% to 83%. It also helped level the playing field a bit. That gap between men and women using chatbots? It shrank noticeably when companies actively encouraged their use, especially when they threw in some training.

So, the tools are popular, companies are investing, people are getting trained… but the big economic shift? It seems to be missing in action.

Using statistical methods to compare people who used AI chatbots for work with those who didn’t, both before and after ChatGPT burst onto the scene, the researchers found… well, basically nothing.

“Precise zeros,” the researchers call their findings. No significant bump in pay, no change in recorded work hours, across all 11 job types they looked at. And they’re pretty confident about this – the numbers rule out any average effect bigger than just 1%.
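
That before-and-after comparison between users and non-users is, in effect, a difference-in-differences design. Here is a minimal sketch of the idea in Python – using invented toy data and column names, not the authors’ actual Danish registry data:

    # Minimal difference-in-differences sketch; data and column names are
    # invented for illustration. One row per worker-period.
    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.DataFrame({
        "log_wage":     [5.0, 5.1, 5.0, 5.1, 5.2, 5.3, 5.2, 5.3],
        "chatbot_user": [1, 1, 1, 1, 0, 0, 0, 0],   # adopted chatbots at work?
        "post_chatgpt": [0, 1, 0, 1, 0, 1, 0, 1],   # observed after ChatGPT launch?
    })

    # The interaction term is the effect of interest: how wages moved for
    # users relative to non-users after ChatGPT arrived. A "precise zero"
    # means this coefficient is near 0 with a tight confidence interval.
    model = smf.ols("log_wage ~ chatbot_user * post_chatgpt", data=df).fit()
    print(model.params["chatbot_user:post_chatgpt"])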

This wasn’t just a blip, either. The lack of impact held true even for the keen beans who jumped on board early, those using chatbots daily, or folks working where the boss was actively pushing the tech.

Looking at whole workplaces didn’t change the story; places with lots of chatbot users didn’t see different trends in hiring, overall wages, or keeping staff compared to places using them less.

Productivity gains: More of a gentle nudge than a shove

Why the big disconnect? Why all the hype and investment if it’s not showing up in paychecks or job stats? The study flags two main culprits: the productivity boosts aren’t as huge as hoped in the real world, and what little gains there are aren’t really making their way into wages.

Sure, people using AI chatbots for work felt they were helpful. They mentioned better work quality and feeling more creative. But the number one benefit? Saving time.

However, when the researchers crunched the numbers, the average time saved was only about 2.8% of a user’s total work hours. That’s miles away from the huge 15%, 30%, even 50% productivity jumps seen in controlled lab-style experiments (RCTs) involving similar jobs.

Why the difference? A few things seem to be going on. Those experiments often focus on jobs or specific tasks where chatbots really shine (like coding help or basic customer service responses). This study looked at a wider range, including jobs like teaching where the benefits might be smaller.

The researchers stress the importance of what they call “complementary investments”. People whose companies encouraged chatbot use and provided training actually did report bigger benefits – saving more time, improving quality, and feeling more creative. This suggests that just having the tool isn’t enough; you need the right support and company environment to really unlock its potential.

And even those modest time savings weren’t padding wallets. The study reckons only a tiny fraction – maybe 3% to 7% – of the time saved actually showed up as higher earnings. It might be down to standard workplace inertia, or maybe it’s just harder to ask for a raise based on using a tool your boss hasn’t officially blessed, especially when many people started using them off their own bat.

Making new work, not less work

One fascinating twist is that AI chatbots aren’t just about doing old work tasks faster. They seem to be creating new tasks too. Around 17% of people using them said they had new workloads, mostly brand new types of tasks.

This phenomenon happened more often in workplaces that encouraged chatbot use. It even spilled over to people not using the tools – about 5% of non-users reported new tasks popping up because of AI, especially teachers having to adapt assignments or spot AI-written homework.   

What kind of new tasks? Things like figuring out how to weave AI into daily workflows, drafting content with AI help, and importantly, dealing with the ethical side and making sure everything’s above board. It hints that companies are still very much in the ‘figuring it out’ phase, spending time and effort adapting rather than just reaping instant rewards.

What’s the verdict on the work impact of AI chatbots?

The researchers are careful not to write off generative AI completely. They see pathways for it to become more influential over time, especially as companies get better at integrating it and maybe as those “new tasks” evolve.

But for now, their message is clear: the current reality doesn’t match the hype about a massive, immediate job market overhaul.

“Despite rapid adoption and substantial investments… our key finding is that AI chatbots have had minimal impact on productivity and labor market outcomes to date,” the researchers conclude.   

It brings to mind economist Robert Solow’s old quip about the early computer age: computers were seen everywhere, except in the productivity stats. Two years on from ChatGPT’s launch kicking off the fastest tech adoption we’ve ever seen, its actual mark on jobs and pay looks surprisingly light.

The revolution might still be coming, but it seems to be taking its time.   

See also: Claude Integrations: Anthropic adds AI to your favourite work tools


Conversations with AI: Education

How can AI be used in education? An ethical debate, with an AI

The classroom hasn’t changed much in over a century. A teacher at the front, rows of students listening, and a curriculum defined by what’s testable – not necessarily what’s meaningful.

But AI, as arguably the most powerful tool humanity has created in the last few years, is about to break that model open. Not with smarter software or faster grading, but by forcing us to ask: “What is the purpose of education in a world where machines could teach?”

At AI News, rather than speculate about distant futures or lean on product announcements and edtech deals, we started a conversation – with an AI. We asked it what it sees when it looks at the classroom, the teacher, and the learner.

What follows is a distilled version of that exchange, given here not as a technical analysis, but as a provocation.

The system cracks

Education is under pressure worldwide: Teachers are overworked, students are disengaged, and curricula feel outdated in a changing world. Into this comes AI – not as a patch or plug-in, but as a potential accelerant.

Our opening prompt: What roles might an AI play in education?

The answer was wide-ranging:

  • Personalised learning pathways
  • Intelligent tutoring systems
  • Administrative efficiency
  • Language translation and accessibility tools
  • Behavioural and emotional recognition
  • Scalable, always-available content delivery

These are features of an education system, its nuts and bolts. But what about meaning and ethics?

Flawed by design?

One concern kept resurfacing: bias.

We asked the AI: “If you’re trained on the internet – and the internet is the output of biased, flawed human thought – doesn’t that mean your responses are equally flawed?”

The AI acknowledged the logic. Bias is inherited. Inaccuracies, distortions, and blind spots all travel from teacher to pupil. What an AI learns, it learns from us, and it can reproduce our worst habits at vast scale.

But we weren’t interested in letting human teachers off the hook either. So we asked: “Isn’t bias true of human educators too?”

The AI agreed: human teachers are also shaped by the limitations of their training, culture, and experience. Both systems – AI and human – are imperfect. But only humans can reflect and care.

That led us to a deeper question: if both AI and human can reproduce bias, why use AI at all?

Why use AI in education?

The AI outlined what it saw as its clear advantages, which seemed systemic rather than revolutionary. The aspect of personalised learning intrigued us – after all, doing things fast and at scale is what software and computers are good at.

We asked: How much data is needed to personalise learning effectively?

The answer: it varies. But at scale, it could require gigabytes or even terabytes of student data – performance, preferences, feedback, and longitudinal tracking over years.

Which raises its own question: “What do we trade in terms of privacy for that precision?”

A personalised or fragmented future?

Putting aside the issue of whether we’re happy with student data being codified and ingested, if every student were to receive a tailored lesson plan, what happens to the shared experience of learning?

Education has always been more than information. It’s about dialogue, debate, discomfort, empathy, and encounters with other minds, not just mirrored algorithms. AI can tailor a curriculum, but it can’t recreate the unpredictable alchemy of a classroom.

We risk mistaking customisation for connection.

“I use ChatGPT to provide more context […] to plan, structure and compose my essays.” – James, 17, Ottawa, Canada.

The teacher reimagined

Where does this leave the teacher?

In the AI’s view: liberated. Freed from repetitive tasks and administrative overload, the teacher is able to spend more time guiding, mentoring, and cultivating critical thinking.

But this requires a shift in mindset – from delivering knowledge to curating wisdom. In broad terms, from part-time administrator, part-time teacher, to in-classroom collaborator.

AI won’t replace teachers, but it might reveal which parts of the teaching job were never the most important.

“The main way I use ChatGPT is to either help with ideas for when I am planning an essay, or to reinforce understanding when revising.” – Emily, 16, Eastbourne College, UK.

What we teach next

So, what do we want students to learn?

In an AI-rich world, critical thinking, ethical reasoning, and emotional intelligence rise in value. Ironically, the more intelligent our machines become, the more we’ll need to double down on what makes us human.

Perhaps the ultimate lesson isn’t in what AI can teach us – but in what it can’t, or what it shouldn’t even try.

Conclusion

The future of education won’t be built by AI alone. This is our opportunity to modernise classrooms, and to reimagine them. Not to fear the machine, but to ask the bigger question: “What is learning in a world where all knowledge is available?”

Whatever the answer is – that’s how we should be teaching next.

(Image source: “Large lecture college classes” by Kevin Dooley is licensed under CC BY 2.0)

See also: AI in education: Balancing promises and pitfalls


AI in education: Balancing promises and pitfalls

The role of AI in education is a controversial subject, bringing both exciting possibilities and serious challenges.

There’s a real push to bring AI into schools, and you can see why. The recent executive order on youth education from President Trump recognised that if future generations are going to do well in an increasingly automated world, they need to be ready.

“To ensure the United States remains a global leader in this technological revolution, we must provide our nation’s youth with opportunities to cultivate the skills and understanding necessary to use and create the next generation of AI technology,” President Trump declared.

So, what does AI actually look like in the classroom?

One of the biggest hopes for AI in education is making learning more personal. Imagine software that can figure out how individual students are doing, then adjust the pace and materials just for them. This could mean finally moving away from the old one-size-fits-all approach towards learning environments that adapt and offer help exactly where it’s needed.

The US executive order hints at this, wanting to improve results through things like “AI-based high-quality instructional resources” and “high-impact tutoring.”

And what about teachers? AI could be a huge help here too, potentially taking over tedious admin tasks like grading, freeing them up to actually teach. Plus, AI software might offer fresh ways to present information.

Getting kids familiar with AI early on could also take away some of the mystery around the technology. It might spark their “curiosity and creativity” and give them the foundation they need to become “active and responsible participants in the workforce of the future.”

The focus stretches to lifelong learning and getting people ready for the job market. On top of that, AI tools like text-to-speech or translation features can make learning much more accessible for students with disabilities, opening up educational environments for everyone.

Not all smooth sailing: The challenges ahead for AI in education

While the potential is huge, we need to be realistic about the significant hurdles and potential downsides.

First off, AI runs on student data – lots of it. That means we absolutely need strong rules and security to make sure this data is collected ethically, used correctly, and kept safe from breaches. Privacy is paramount here.

Then there’s the bias problem. If the data used to train AI reflects existing unfairness in society (and let’s be honest, it often does), the AI could end up repeating or even worsening those inequalities. Think biased assessments or unfair resource allocation. Careful testing and constant checks are crucial to catch and fix this.

We also can’t ignore the digital divide. If some students don’t have reliable internet, the right devices, or the necessary tech infrastructure at home or school, AI could widen the gap between the haves and have-nots. It’s vital that everyone gets fair access.

There’s also a risk that leaning too heavily on AI education tools might stop students from developing essential skills like critical thinking. We need to teach them how to use AI as a helpful tool, not a crutch they can’t function without.

Maybe the biggest piece of the puzzle, though, is making sure our teachers are ready. As the executive order rightly points out, “We must also invest in our educators and equip them with the tools and knowledge.”

This isn’t just about knowing which buttons to push; teachers need to understand how AI fits into teaching effectively and ethically. That requires solid professional development and ongoing support.

A recent GMB Union poll found that while about a fifth of UK schools are using AI now, the staff often aren’t getting the training they need.


Finding the right path forward

It’s going to take everyone – governments, schools, tech companies, and teachers – pulling together in order to ensure that AI plays a positive role in education.

We absolutely need clear policies and standards covering ethics, privacy, bias, and making sure AI is accessible to all students. We also need to keep investing in research to figure out the best ways to use AI in education and to build tools that are fair and effective.

And critically, we need a long-term commitment to teacher education to get educators comfortable and skilled with these changes. Part of this is building broad AI literacy, making sure all students get a basic understanding of this technology and how it impacts society.

AI could be a positive force in education – making it more personalised, efficient, and focused on the skills students actually need. But turning that potential into reality means carefully navigating those tricky ethical, practical, and teaching challenges head-on.

See also: How does AI judge? Anthropic studies the values of Claude


Alarming rise in AI-powered scams: Microsoft reveals $4B in thwarted fraud

AI-powered scams are evolving rapidly as cybercriminals use new technologies to target victims, according to Microsoft’s latest Cyber Signals report.

Over the past year, the tech giant says it has prevented $4 billion in fraud attempts, blocking approximately 1.6 million bot sign-up attempts every hour – showing the scale of this growing threat.

The ninth edition of Microsoft’s Cyber Signals report, titled “AI-powered deception: Emerging fraud threats and countermeasures,” reveals how artificial intelligence has lowered the technical barriers for cybercriminals, enabling even low-skilled actors to generate sophisticated scams with minimal effort.

What previously took scammers days or weeks to create can now be accomplished in minutes.

The democratisation of fraud capabilities represents a shift in the criminal landscape that affects consumers and businesses worldwide.

The evolution of AI-enhanced cyber scams

Microsoft’s report highlights how AI tools can now scan and scrape the web for company information, helping cybercriminals build detailed profiles of potential targets for highly-convincing social engineering attacks.

Bad actors can lure victims into complex fraud schemes using fake AI-enhanced product reviews and AI-generated storefronts, which come complete with fabricated business histories and customer testimonials.

According to Kelly Bissell, Corporate Vice President of Anti-Fraud and Product Abuse at Microsoft Security, the threat numbers continue to increase. “Cybercrime is a trillion-dollar problem, and it’s been going up every year for the past 30 years,” he says in the report.

“I think we have an opportunity today to adopt AI faster so we can detect and close the gap of exposure quickly. Now we have AI that can make a difference at scale and help us build security and fraud protections into our products much faster.”

The Microsoft anti-fraud team reports that AI-powered fraud attacks happen globally, with significant activity originating from China and Europe – particularly Germany, due to its status as one of the largest e-commerce markets in the European Union.

The report notes that the larger a digital marketplace is, the more likely a proportional degree of attempted fraud will occur.

E-commerce and employment scams leading

Two particularly concerning areas of AI-enhanced fraud are e-commerce and job recruitment scams. In the e-commerce space, fraudulent websites can now be created in minutes using AI tools and minimal technical knowledge.

Sites often mimic legitimate businesses, using AI-generated product descriptions, images, and customer reviews to fool consumers into believing they’re interacting with genuine merchants.

Adding another layer of deception, AI-powered customer service chatbots can interact convincingly with customers, delay chargebacks by stalling with scripted excuses, and manipulate complaints with AI-generated responses that make scam sites appear professional.

Job seekers are equally at risk. According to the report, generative AI has made it significantly easier for scammers to create fake listings on various employment platforms. Criminals generate fake profiles with stolen credentials, fake job postings with auto-generated descriptions, and AI-powered email campaigns to phish job seekers.

AI-powered interviews and automated emails enhance the credibility of these scams, making them harder to identify. “Fraudsters often ask for personal information, like resumes or even bank account details, under the guise of verifying the applicant’s information,” the report says.

Red flags include unsolicited job offers, requests for payment and communication through informal platforms like text messages or WhatsApp.

Microsoft’s countermeasures to AI fraud

To combat emerging threats, Microsoft says it has implemented a multi-pronged approach across its products and services. Microsoft Defender for Cloud provides threat protection for Azure resources, while Microsoft Edge, like many browsers, features website typo protection and domain impersonation protection. Edge is noted by the Microsoft report as using deep learning technology to help users avoid fraudulent websites.

The company has also enhanced Windows Quick Assist with warning messages to alert users about possible tech support scams before they grant access to someone claiming to be from IT support. Microsoft now blocks an average of 4,415 suspicious Quick Assist connection attempts daily.

Microsoft has also introduced a new fraud prevention policy as part of its Secure Future Initiative (SFI). As of January 2025, Microsoft product teams must perform fraud prevention assessments and implement fraud controls as part of their design process, ensuring products are “fraud-resistant by design.”

As AI-powered scams continue to evolve, consumer awareness remains important. Microsoft advises users to be cautious of urgency tactics, verify website legitimacy before making purchases, and never provide personal or financial information to unverified sources.

For enterprises, implementing multi-factor authentication and deploying deepfake-detection algorithms can help mitigate risk.

See also: Wozniak warns AI will power next-gen scams


Reigniting the European digital economy’s €200bn AI ambitions

There is a sense of urgency in Europe to re-imagine the status quo and reshape technology infrastructures. Timed to harness Europe’s innovative push comes GITEX EUROPE x Ai Everything (21-23 May, Messe Berlin).

Germany, the world’s third largest economy and host nation for GITEX EUROPE x Ai Everything, is confirmed as the European economic and technology leader, with its ICT sector projected to reach €232.8bn in 2025 (Statista).

GITEX EUROPE x Ai Everything is Europe’s largest tech, startup and digital investment event, and is organised by KAOUN International. It’s hosted in partnership with the Berlin Senate Department for Economics, Energy and Public Enterprises, Germany’s Federal Ministry for Economic Affairs and Climate Action, Berlin Partner for Business and Technology, and the European Innovation Council (EIC).

Global tech engages for cross-border and industry partnerships

The first GITEX EUROPE brings together over 1,400 tech enterprises, startups and SMEs, and platinum sponsors AWS and IBM. Also in sponsorship roles are Cisco, Cloudflare, Dell, Fortinet, Lenovo, NTT, Nutanix, Nvidia, Opswat, and SAP.

GITEX EUROPE x Ai Everything will comprise tech companies from over 100 countries and 34 European states, including tech pavilions from India, Italy, Morocco, Netherlands, Poland, Serbia, South Korea, UK, and the UAE.

Trixie LohMirmand, CEO of KAOUN International, organiser of GITEX worldwide, said: “There is a sense of urgency and unity in Europe to assert its digital sovereignty and leadership as a global innovation force. The region is paving its way as a centre-stage where AI, quantum and deep tech will be debated, developed, and scaled.”

Global leaders address EU’s tech crossroads

Organisers state there will be over 500 speakers, debating a range of issues including AI and quantum, cloud, and data sovereignty.

Already confirmed are Geoffrey Hinton, Physics Nobel Laureate (2024); Kai Wegner, Mayor of Berlin; H.E. Jelena Begović, Serbian Minister of Science, Technological Development and Innovation; António Henriques, CEO, Bison Bank; Jager McConnell, CEO, Crunchbase; Mark Surman, President, Mozilla; and Sandro Gianella, Head of Europe & Middle East Policy & Partnerships, OpenAI.

Europe’s moves in AI, deep tech & quantum

Europe is focusing on cross-sector AI uses, new investments and international partnerships. Ai Everything Europe, the event’s AI showcase and conference, brings together AI architects, startups and investors to explore AI ecosystems.

Topics presented on stage range from EuroStack ambitions to implications of agentic AI, with speakers including Martin Kon, President and COO, Cohere; Daniel Verten, Strategy Partner, Synthesia; and Professor Dr. Antonio Krueger, CEO of German Research Centre for Artificial Intelligence.

On the show-floor, attendees will be able to experience Brazil’s Ubivis’s smart factory technology, powered by IoT and digital twins, and Hexis’s AI-driven nutrition plans that are trusted by 500+ Olympic and elite athletes.

With nearly €7 billion in quantum investment, Europe is pushing for quantum leadership by 2030. GITEX Quantum Expo (GQX) (in partnership with IBM and QuIC) covers quantum research and cross-industry impact with showcases and conferences.

Speakers include Mira Wolf-Bauwens, Responsible Quantum Computing Lead, IBM Research, Switzerland; Joachim Mnich, Director of Research & Computing, CERN, Switzerland; Neil Abroug, Head of the French National Quantum Strategy, INRIA; and Jan Goetz, CEO & Co-Founder, IQM Quantum Computers, Finland.

Cyber Valley: Building a resilient cyber frontline

With cloud breaches doubling in number and AI-driven attacks, threat response and cyber resilience are core focuses at the event. Fortinet, CrowdStrike, Kaspersky, Knowbe4, and Proofpoint will join other cybersecurity companies exhibiting at GITEX Cyber Valley.

They’ll be alongside law enforcement leaders, global CISOs, and policymakers on stage, including Brig. Gen. Dr. Volker Pötzsch, Chief of Division Cyber/IT & AI, Federal Ministry of Defence, Germany; H.E. Dr. Mohamed Al-Kuwaiti, Head of Cybersecurity, UAE Government; Miguel De Bruycker, Managing Director General, Centre for Cybersecurity Belgium; and Ugo Vignolo Lutati, Group CISO, Prada Group.

GITEX Green Impact: For a sustainable future

GITEX Green Impact connects innovators with over 100 startups and investors exploring how green hydrogen, bio-energy, and next-gen energy storage are moving from R&D to deployment.

Key speakers so far confirmed are Gavin Towler, Chief Scientist for Sustainability Technologies & CTO, Honeywell; Julie Kitcher, Chief Sustainability Officer, Airbus; Lisa Reehten, Managing Director, Bosch Climate Solutions; Massimo Falcioni, Chief Competitiveness Officer, Abu Dhabi Investment Office; and Mounir Benaija, CTO – EV & Charging Infrastructure, TotalEnergies.

Convening the largest startup ecosystem among 60+ nations

GITEX EUROPE x Ai Everything hosts North Star Europe, the local version of the world’s largest startup event, Expand North Star.

North Star Europe gathers over 750 startups and 20 global unicorns, among them reMarkable, TransferMate, Solarisbank AG, Bolt, Flix, and Glovo.

The event features a curated collection of early- and growth-stage startups from Belgium, France, Hungary, Italy, Morocco, Portugal, Netherlands, Switzerland, Serbia, UK, and UAE.

Among the startups, Neurocast.ai (Netherlands) is advancing AI-powered neurotech for Alzheimer’s research; CloudBees (Switzerland) is a software delivery unicorn backed by Goldman Sachs, HSBC, and Lightspeed; and Semiqon (Finland) has built what it calls the world’s first CMOS transistor able to perform in cryogenic conditions.

More than 600 investors with $1tn assets under management will be scouting for new opportunities, including Germany’s Earlybird VC, Austria’s SpeedInvest, Switzerland’s B2Venture, Estonia’s Startup Wise Guys, and the US’s SOSV.

GITEX ScaleX launches as a first-of-its-kind growth platform for scale-ups and late-stage companies, in partnership with AWS.

With SMEs making up 99% of European businesses, GITEX SMEDEX connects SMEs with international trade networks and investors, for funding, legal advice, and market access to scale globally.

Backed by EISMEA and ICC Digital Standards Initiative, the event features SME ecosystem leaders advising from the stage, including Milena Stoycheva, Chairperson of Board of Innovation, Ministry of Innovation and Growth, Bulgaria; and Oliver Grün, President, European Digital SME Alliance and BITMi.

GITEX EUROPE is part of the GITEX global network tech and startup events, taking place in Germany, Morocco, Nigeria, Singapore, Thailand, and the UAE.

For more information, please visit: www.gitex-europe.com.

Google introduces AI reasoning control in Gemini 2.5 Flash

Google has introduced an AI reasoning control mechanism for its Gemini 2.5 Flash model that allows developers to limit how much processing power the system expends on problem-solving.

Released on April 17, this “thinking budget” feature responds to a growing industry challenge: advanced AI models frequently overanalyse straightforward queries, consuming unnecessary computational resources and driving up operational and environmental costs.

While not revolutionary, the development represents a practical step toward addressing efficiency concerns that have emerged as reasoning capabilities become standard in commercial AI software.

The new mechanism enables precise calibration of processing resources before generating responses, potentially changing how organisations manage financial and environmental impacts of AI deployment.

“The model overthinks,” acknowledges Tulsee Doshi, Director of Product Management at Gemini. “For simple prompts, the model does think more than it needs to.”

The admission reveals the challenge facing advanced reasoning models – the equivalent of using industrial machinery to crack a walnut.

The shift toward reasoning capabilities has created unintended consequences. Where traditional large language models primarily matched patterns from training data, newer iterations attempt to work through problems logically, step by step. While this approach yields better results for complex tasks, it introduces significant inefficiency when handling simpler queries.

Balancing cost and performance

The financial implications of unchecked AI reasoning are substantial. According to Google’s technical documentation, when full reasoning is activated, generating outputs becomes approximately six times more expensive than standard processing. The cost multiplier creates a powerful incentive for fine-tuned control.

Nathan Habib, an engineer at Hugging Face who studies reasoning models, describes the problem as endemic across the industry. “In the rush to show off smarter AI, companies are reaching for reasoning models like hammers even where there’s no nail in sight,” he explained to MIT Technology Review.

The waste isn’t merely theoretical. Habib demonstrated how a leading reasoning model, when attempting to solve an organic chemistry problem, became trapped in a recursive loop, repeating “Wait, but…” hundreds of times – essentially experiencing a computational breakdown and consuming processing resources.

Kate Olszewska, who evaluates Gemini models at DeepMind, confirmed Google’s systems sometimes experience similar issues, getting stuck in loops that drain computing power without improving response quality.

Granular control mechanism

Google’s AI reasoning control provides developers with a degree of precision. The system offers a flexible spectrum ranging from zero (minimal reasoning) to 24,576 tokens of “thinking budget” – the computational units representing the model’s internal processing. The granular approach allows for customised deployment based on specific use cases.
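
At launch, Google’s documentation showed this dial being set through the Gen AI SDK’s thinking configuration. The following is a minimal sketch along those lines – the API key and prompt are placeholders, and the SDK surface may have changed since:

    # Minimal sketch using Google's google-genai Python SDK as publicly
    # documented at launch; the SDK surface may change over time.
    from google import genai
    from google.genai import types

    client = genai.Client(api_key="YOUR_API_KEY")  # placeholder key

    response = client.models.generate_content(
        model="gemini-2.5-flash-preview-04-17",
        contents="What is 2 + 2?",  # a simple query that needs no deep reasoning
        config=types.GenerateContentConfig(
            # 0 disables thinking for cheap queries; up to 24,576 tokens
            # can be budgeted for genuinely hard problems.
            thinking_config=types.ThinkingConfig(thinking_budget=0),
        ),
    )
    print(response.text)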

Jack Rae, principal research scientist at DeepMind, says that defining optimal reasoning levels remains challenging: “It’s really hard to draw a boundary on, like, what’s the perfect task right now for thinking.”

Shifting development philosophy

The introduction of AI reasoning control potentially signals a change in how artificial intelligence evolves. Since 2019, companies have pursued improvements by building larger models with more parameters and training data. Google’s approach suggests an alternative path focusing on efficiency rather than scale.

“Scaling laws are being replaced,” says Habib, indicating that future advances may emerge from optimising reasoning processes rather than continuously expanding model size.

The environmental implications are equally significant. As reasoning models proliferate, their energy consumption grows proportionally. Research indicates that inferencing – generating AI responses – now contributes more to the technology’s carbon footprint than the initial training process. Google’s reasoning control mechanism offers a potential mitigating factor for this concerning trend.

Competitive dynamics

Google isn’t operating in isolation. The “open weight” DeepSeek R1 model, which emerged earlier this year, demonstrated powerful reasoning capabilities at potentially lower costs, triggering market volatility that reportedly caused nearly a trillion-dollar stock market fluctuation.

Unlike Google’s proprietary approach, DeepSeek makes its internal settings publicly available for developers to implement locally.

Despite the competition, Google DeepMind’s chief technical officer Koray Kavukcuoglu maintains that proprietary models will maintain advantages in specialised domains requiring exceptional precision: “Coding, math, and finance are cases where there’s high expectation from the model to be very accurate, to be very precise, and to be able to understand really complex situations.”

Industry maturation signs

The development of AI reasoning control reflects an industry now confronting practical limitations beyond technical benchmarks. While companies continue to push reasoning capabilities forward, Google’s approach acknowledges an important reality: efficiency matters as much as raw performance in commercial applications.

The feature also highlights tensions between technological advancement and sustainability concerns. Leaderboards tracking reasoning model performance show that single tasks can cost upwards of $200 to complete – raising questions about scaling such capabilities in production environments.

By allowing developers to dial reasoning up or down based on actual need, Google addresses both financial and environmental aspects of AI deployment.

“Reasoning is the key capability that builds up intelligence,” states Kavukcuoglu. “The moment the model starts thinking, the agency of the model has started.” The statement reveals both the promise and the challenge of reasoning models – their autonomy creates both opportunities and resource management challenges.

For organisations deploying AI solutions, the ability to fine-tune reasoning budgets could democratise access to advanced capabilities while maintaining operational discipline.

Google claims Gemini 2.5 Flash delivers “comparable metrics to other leading models for a fraction of the cost and size” – a value proposition strengthened by the ability to optimise reasoning resources for specific applications.

Practical implications

The AI reasoning control feature has immediate practical applications. Developers building commercial applications can now make informed trade-offs between processing depth and operational costs.

For simple applications like basic customer queries, minimal reasoning settings preserve resources while still using the model’s capabilities. For complex analysis requiring deep understanding, the full reasoning capacity remains available.

Google’s reasoning ‘dial’ provides a mechanism for establishing cost certainty while maintaining performance standards.

See also: Gemini 2.5: Google cooks up its ‘most intelligent’ AI model to date


The evolution of harmful content detection: Manual moderation to AI

The battle to keep online spaces safe and inclusive continues to evolve.

As digital platforms multiply and user-generated content expands rapidly, the need for effective harmful content detection becomes paramount. What once relied solely on the diligence of human moderators has given way to agile, AI-powered tools reshaping how communities and organisations manage toxic behaviours in words and visuals.

From moderators to machines: A brief history

Early days of content moderation saw human teams tasked with combing through vast amounts of user-submitted materials – flagging hate speech, misinformation, explicit content, and manipulated images.

While human insight brought valuable context and empathy, the sheer volume of submissions naturally outstripped what manual oversight could manage. Burnout among moderators also raised serious concerns. The result was delayed interventions, inconsistent judgment, and myriad harmful messages left unchecked.

The rise of automated detection

To address scale and consistency, the first wave of automated detection software surfaced – chiefly keyword filters and naïve matching algorithms. These could scan quickly for certain banned terms or suspicious phrases, offering some respite for moderation teams.

However, contextless automation brought new challenges: benign messages were sometimes mistaken for malicious ones due to crude word-matching, and evolving slang frequently bypassed protection.
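
A toy example makes the weakness obvious – a naive substring filter flags benign phrases while evolving slang slips straight through (illustrative only):

    # Toy keyword filter illustrating why contextless matching misfires.
    BANNED = {"kill", "hate"}

    def naive_flag(message: str) -> bool:
        """Flag a message if any banned term appears as a substring."""
        text = message.lower()
        return any(term in text for term in BANNED)

    print(naive_flag("I'll kill some time before the meeting"))  # True - benign, yet flagged
    print(naive_flag("I hate to say it, but great work!"))       # True - benign, yet flagged
    print(naive_flag("unalive yourself"))                        # False - abusive slang slips through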

AI and the next frontier in harmful content detection

Artificial intelligence changed this field. Using deep learning, machine learning, and neural networks, AI-powered systems now process vast and diverse streams of data with previously impossible nuance.

Rather than just flagging keywords, algorithms can detect intent, tone, and emergent abuse patterns.

Textual harmful content detection

Among the most pressing concerns are harmful or abusive messages on social networks, forums, and chats.

Modern solutions, like the AI-powered hate speech detector developed by Vinish Kapoor, demonstrate how free, online tools have democratised access to reliable content moderation.

The platform allows anyone to analyse a string of text for hate speech, harassment, violence, and other manifestations of online toxicity instantly – without technical know-how, subscriptions, or concern for privacy breaches. Such a detector moves beyond outdated keyword alarms by evaluating semantic meaning and context, drastically reducing false positives and surfacing sophisticated or coded abusive language. The detection process adapts as internet linguistics evolve.

Ensuring visual authenticity: AI in image review

It’s not just text that requires vigilance. Images, widely shared on news feeds and messaging apps, pose unique risks: manipulated visuals often aim to misguide audiences or propagate conflict.

AI-creators now offer robust tools for image anomaly detection. Here, AI algorithms scan for inconsistencies like noise patterns, flawed shadows, distorted perspective, or mismatches between content layers – common signals of editing or manufacture.

The offerings stand out not only for accuracy but for sheer accessibility. Completely free, requiring no technical expertise, and privacy-centric by design, they allow hobbyists, journalists, educators, and analysts to safeguard image integrity with remarkable simplicity.

Benefits of contemporary AI-powered detection tools

Modern AI solutions introduce vital advantages into the field:

  • Instant analysis at scale: Millions of messages and media items can be scrutinised in seconds, vastly outpacing human moderation speeds.
  • Contextual accuracy: By examining intent and latent meaning, AI-based content moderation vastly reduces wrongful flagging and adapts to shifting online trends.
  • Data privacy assurance: With tools promising that neither text nor images are stored, users can check sensitive materials confidently.
  • User-friendliness: Many tools require nothing more than scrolling to a website and pasting in text or uploading an image.

The evolution continues: What’s next for harmful content detection?

The future of digital safety likely hinges on greater collaboration between intelligent automation and skilled human input.

As AI models learn from more nuanced examples, their ability to curb emergent forms of harm will expand. Yet human oversight remains essential for sensitive cases demanding empathy, ethics, and social understanding.

With open, free solutions widely available and enhanced by privacy-first models, everyone from educators to business owners now possesses the tools to protect digital exchanges at scale – whether safeguarding group chats, user forums, comment threads, or email chains.

Conclusion

Harmful content detection has evolved dramatically – from slow, error-prone manual reviews to instantaneous, sophisticated, and privacy-conscious AI.

Today’s innovations strike a balance between broad coverage, real-time intervention, and accessibility, reinforcing the idea that safer, more positive digital environments are in everyone’s reach – no matter their technical background or budget.

(Image source: Pexels)

The post The evolution of harmful content detection: Manual moderation to AI appeared first on AI News.

Google launches A2A as HyperCycle advances AI agent interoperability https://www.artificialintelligence-news.com/news/google-launches-a2a-as-hypercycle-advances-ai-agent-interoperability/ Tue, 22 Apr 2025 14:59:03 +0000

AI agents handle increasingly complex and recurring tasks, such as planning supply chains and ordering equipment. As organisations deploy more agents developed by different vendors on different frameworks, agents can end up siloed, unable to coordinate or communicate. Lack of interoperability remains a challenge for organisations, with different agents making conflicting recommendations. It’s difficult to create standardised AI workflows, and agent integration requires middleware, adding more potential failure points and layers of complexity.

Google’s protocol will standardise AI agent communication

Google unveiled its Agent2Agent (A2A) protocol at Cloud Next 2025 in an effort to standardise communication between diverse AI agents. A2A is an open protocol that allows independent AI agents to communicate and cooperate. It complements Anthropic’s Model Context Protocol (MCP), which provides models with context and tools: MCP connects agents to tools and other resources, while A2A connects agents to other agents. Google’s new protocol facilitates collaboration among AI agents across different platforms and vendors, ensuring secure, real-time communication and task coordination.

The two roles in an A2A-enabled system are a client agent and a remote agent. The client initiates a task – to achieve a goal or on behalf of a user – and makes requests that the remote agent receives and acts on. Depending on who initiates the communication, an agent can be a client agent in one interaction and a remote agent in another. The protocol defines a standard message format and workflow for the interaction.

Tasks are at the heart of A2A, with each task representing a unit of work or conversation. The client agent sends a request – including instructions and a unique task ID – to the remote agent’s task-send endpoint, and the remote agent creates a new task and starts working on it.
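
As a rough illustration of that flow, the snippet below shows what a client agent’s task request might look like, assuming a JSON-RPC-style tasks/send method along the lines of the open specification at launch; the endpoint URL, field names, and payload are illustrative and may differ from the current spec.

import uuid

import requests

task_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tasks/send",  # assumed method name, per the launch-era spec
    "params": {
        "id": str(uuid.uuid4()),  # the unique task ID
        "message": {
            "role": "user",
            "parts": [{"type": "text", "text": "Order 20 replacement sensors"}],
        },
    },
}

# The remote agent receives the request, creates the task, and begins work.
response = requests.post("https://remote-agent.example.com/a2a", json=task_request)
print(response.json())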

Google enjoys broad industry support, with contributions from more than 50 technology partners like Intuit, Langchain, MongoDB, Atlassian, Box, Cohere, PayPal, Salesforce, SAP, Workday, ServiceNow, and UKG. Reputable service providers include Capgemini, Cognizant, Accenture, BCG, Deloitte, HCLTech, McKinsey, PwC, TCS, Infosys, KPMG, and Wipro.

How HyperCycle aligns with A2A principles

HyperCycle’s Node Factory framework makes it possible to deploy multiple agents, addressing existing challenges and enabling developers to create reliable, collaborative setups. The decentralised platform is advancing the bold concept of “the internet of AI”, using self-perpetuating nodes and a creative licensing model to enable AI deployments at scale. The framework helps achieve cross-platform interoperability by standardising interactions and supporting agents from different developers, so that agents can work cohesively irrespective of origin.

The platform’s peer-to-peer network links agents across an ecosystem, eliminating silos and enabling unified data sharing and coordination across nodes. The self-replicating nodes can scale, reducing infrastructure needs and distributing computational loads.

Each Node Factory replicates up to ten times, with the number of nodes in the Factory doubling each time. Users can buy and operate Node Factories at ten different levels. Growth enhances each Factory’s capacity, fulfilling increasing demand for AI services. One node might host a communication-focused agent, while another supports a data analysis agent. Developers can create custom solutions by crafting multi-agent tools from the nodes they’re using, addressing scalability issues and siloed environments.
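
The compounding effect of that doubling is easy to see with a toy calculation; the starting count of one node is an assumption for illustration, not a platform figure.

# Ten replications, with the node count doubling each time.
nodes = 1  # assumed starting count
for replication in range(1, 11):
    nodes *= 2
    print(f"After replication {replication}: {nodes} nodes")
# A single starting node would grow to 2**10 = 1,024 nodes after ten doublings.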

HyperCycle’s Node Factory operates in a network using Toda/IP architecture, which parallels TCP/IP. The network encompasses hundreds of thousands of nodes, letting developers integrate third-party agents. A developer can enhance functionality by incorporating a third-party analytics agent, sharing intelligence, and promoting collaboration across the network.

According to Toufi Saliba, HyperCycle’s CEO, Google’s A2A announcement represents a major milestone for his agent cooperation project. The news supports his vision of interoperable, scalable AI agents. In an X post, he said many more AI agents will now be able to access the nodes produced by HyperCycle Factories. Nodes can be plugged into any A2A-enabled system, giving each AI agent in Google Cloud (and its 50+ partners) near-instant access to AWS agents, Microsoft agents, and the entire internet of AI. Saliba’s statement highlights A2A’s potential and its synergy with HyperCycle’s mission.

The security and speed of HyperCycle’s Layer 0++

HyperCycle’s Layer 0++ blockchain infrastructure offers security and speed, complementing A2A by providing decentralised infrastructure for AI agent interactions. Layer 0++ is an innovative blockchain operating on Toda/IP, which divides network packets into smaller pieces and distributes them across nodes.

It can also extend the usability of other blockchains by bridging to them, which means HyperCycle can enhance the functionality of Bitcoin, Ethereum, Avalanche, Cosmos, Cardano, Polygon, Algorand, and Polkadot rather than compete with those blockchains.

DeFi, decentralised payments, swarm AI, and other use cases

HyperCycle has potential in areas like DeFi, swarm AI, media ratings and rewards, decentralised payments, and computer processing. Swarm AI is a collective intelligence system in which individual agents collaborate to solve complicated problems. With HyperCycle, those agents can interoperate more readily, enabling lightweight agents to carry out complex internal processes.

The HyperCycle platform can improve ratings and rewards in media networks through micro-transactions. The ability to perform high-frequency, high-speed, low-cost, on-chain trading presents innumerable opportunities in DeFi.

It can streamline decentralised payments and computer processing by increasing the speed and reducing the cost of blockchain transactions.

HyperCycle’s efforts to improve access to information precede Google’s announcement. In January 2025, the platform announced it had launched a joint initiative with YMCA – an AI app called Hyper-Y that will connect 64 million people in 12,000 YMCA locations across 120 countries, providing staff, members, and volunteers with access to information from the global network.

HyperCycle’s efforts and Google’s A2A converge

Google hopes its protocol will pave the way for collaboration to solve complex problems, and plans to build the protocol with the community, in the open. A2A was released as open source, with contribution pathways to be established. HyperCycle’s innovations aim to enable collaborative problem-solving by connecting AI to a global network of specialised abilities; as A2A standardises communication between agents regardless of their vendor or build, the two efforts point towards more collaborative multi-agent ecosystems.

A2A and HyperCycle bring ease of use, modularity, scalability, and security to AI agent systems. Together they can unlock a new era of agent interoperability, creating more flexible and powerful agentic systems.

(Image source: Unsplash)

The post Google launches A2A as HyperCycle advances AI agent interoperability appeared first on AI News.

Red Hat on open, small language models for responsible, practical AI https://www.artificialintelligence-news.com/news/red-hat-on-open-small-language-models-for-responsible-practical-ai/ Tue, 22 Apr 2025 07:49:15 +0000

As geopolitical events shape the world, it’s no surprise that they affect technology too – specifically, in the ways that the current AI market is changing, alongside its accepted methodology, how it’s developed, and the ways it’s put to use in the enterprise.

Expectations of what AI can deliver are currently being balanced against real-world constraints, and a good deal of suspicion about the technology remains, offset by those embracing it even at this nascent stage. The closed nature of the best-known LLMs is being challenged by models like Llama, DeepSeek, and Baidu’s recently released Ernie X1.

In contrast, open source development provides transparency and the ability to contribute back, which is more in tune with the desire for “responsible AI”: a phrase that encompasses the environmental impact of large models, how AIs are used, what comprises their learning corpora, and issues around data sovereignty, language, and politics. 

As the company that’s demonstrated the viability of an economically-sustainable open source development model for its business, Red Hat wants to extend its open, collaborative, and community-driven approach to AI. We spoke recently to Julio Guijarro, the CTO for EMEA at Red Hat, about the organisation’s efforts to unlock the undoubted power of generative AI models in ways that bring value to the enterprise, in a manner that’s responsible, sustainable, and as transparent as possible. 

Julio underlined how much education is still needed in order for us to more fully understand AI, stating, “Given the significant unknowns about AI’s inner workings, which are rooted in complex science and mathematics, it remains a ‘black box’ for many. This lack of transparency is compounded where it has been developed in largely inaccessible, closed environments.”

There are also issues with language (European and Middle-Eastern languages are very much under-served), data sovereignty, and fundamentally, trust. “Data is an organisation’s most valuable asset, and businesses need to make sure they are aware of the risks of exposing sensitive data to public platforms with varying privacy policies.” 

The Red Hat response 

Red Hat’s response to global demand for AI has been to pursue what it feels will bring the most benefit to end-users, removing many of the doubts and caveats that quickly become apparent when de facto AI services are deployed.

One answer, Julio said, is small language models, running locally or in hybrid clouds, on non-specialist hardware, and accessing local business information. SLMs are compact, efficient alternatives to LLMs, designed to deliver strong performance for specific tasks while requiring significantly fewer computational resources. There are smaller cloud providers that can be utilised to offload some compute, but the key is having the flexibility and freedom to choose to keep business-critical information in-house, close to the model, if desired. That’s important, because information in an organisation changes rapidly. “One challenge with large language models is they can get obsolete quickly because the data generation is not happening in the big clouds. The data is happening next to you and your business processes,” he said. 

There’s also the cost. “Your customer service querying an LLM can present a significant hidden cost – before AI, you knew that when you made a data query, it had a limited and predictable scope. Therefore, you could calculate how much that transaction could cost you. In the case of LLMs, they work on an iterative model. So the more you use it, the better its answer can get, and the more you like it, the more questions you may ask. And every interaction is costing you money. So the same query that before was a single transaction can now become a hundred, depending on who is using the model, and how. When you are running a model on-premise, you can have greater control, because the scope is limited by the cost of your own infrastructure, not by the cost of each query.”
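
A toy model makes the contrast plain. Every number below is a made-up placeholder, not a real price from Red Hat or any provider; the point is the shape of the curves, not the figures.

# Fixed-scope database query vs metered, iterative LLM conversation.
DB_QUERY_COST = 0.0001          # flat cost per bounded query (assumed)
LLM_COST_PER_1K_TOKENS = 0.01   # metered API price (assumed)

def llm_conversation_cost(turns, avg_tokens_per_turn=800):
    # Each follow-up question adds more metered tokens.
    return turns * avg_tokens_per_turn / 1000 * LLM_COST_PER_1K_TOKENS

for turns in (1, 10, 100):
    print(f"{turns:>3} turns: db = ${DB_QUERY_COST:.4f} fixed, "
          f"llm = ${llm_conversation_cost(turns):.4f} and climbing")

The database cost stays flat however the conversation develops; the LLM cost scales with every extra turn – exactly the unpredictability Julio describes.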

Organisations needn’t brace themselves for a procurement round that involves writing a huge cheque for GPUs, however. Part of Red Hat’s current work is optimising models (in the open, of course) to run on more standard hardware. It’s possible because the specialist models that many businesses will use don’t need the huge, general-purpose data corpus that has to be processed at high cost with every query. 

“A lot of the work that is happening right now is people looking into large models and removing everything that is not needed for a particular use case. If we want to make AI ubiquitous, it has to be through smaller language models. We are also focused on supporting and improving vLLM (the inference engine project) to make sure people can interact with all these models in an efficient and standardised way wherever they want: locally, at the edge or in the cloud,” Julio said. 
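
For readers who haven’t tried it, a minimal local vLLM session looks like the sketch below. The model name is just one example of a compact open model, not a Red Hat recommendation, and the sketch assumes suitable local hardware.

from vllm import LLM, SamplingParams

# Load a small open model for local, offline inference (example choice).
llm = LLM(model="ibm-granite/granite-3.0-2b-instruct")
params = SamplingParams(temperature=0.2, max_tokens=128)

outputs = llm.generate(["Summarise our returns policy for a customer."], params)
print(outputs[0].outputs[0].text)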

Keeping it small 

Using and referencing local data pertinent to the user means that the outcomes can be crafted according to need. Julio cited projects in the Arab- and Portuguese-speaking worlds that wouldn’t be viable using the English-centric household name LLMs. 

There are a couple of other issues, too, that early-adopter organisations have found in practical, day-to-day use of LLMs. The first is latency, which can be problematic in time-sensitive or customer-facing contexts. Having the focused resources and relevantly-tailored results just a network hop or two away makes sense.

Secondly, there is the trust issue: an integral part of responsible AI. Red Hat advocates for open platforms, tools, and models so we can move towards greater transparency, understanding, and the ability for as many people as possible to contribute. “It is going to be critical for everybody,” Julio said. “We are building capabilities to democratise AI, and that’s not only publishing a model, it’s giving users the tools to be able to replicate them, tune them, and serve them.” 

Red Hat recently acquired Neural Magic to help enterprises scale AI more easily, improve inference performance, and provide greater choice and accessibility in how enterprises build and deploy AI workloads, via the vLLM project for open model serving. Red Hat, together with IBM Research, also released InstructLab to open the door to would-be AI builders who aren’t data scientists but who have the right business knowledge.

There’s a great deal of speculation around if, or when, the AI bubble might burst, but such conversations tend to gravitate to the economic reality that the big LLM providers will soon have to face. Red Hat believes that AI has a future in a use case-specific and inherently open source form, a technology that will make business sense and that will be available to all. To quote Julio’s boss, Matt Hicks (CEO of Red Hat), “The future of AI is open.” 

Supporting Assets: 

Tech Journey: Adopt and scale AI

The post Red Hat on open, small language models for responsible, practical AI appeared first on AI News.

Huawei’s AI hardware breakthrough challenges Nvidia’s dominance https://www.artificialintelligence-news.com/news/huawei-ai-hardware-breakthrough-challenges-nvidia-dominance/ Thu, 17 Apr 2025 15:12:36 +0000

Chinese tech giant Huawei has made a bold move that could potentially change who leads the global AI chip race. The company has unveiled a powerful new computing system called the CloudMatrix 384 Supernode that, according to local media reports, performs better than similar technology from American chip leader Nvidia.

If the performance claims prove accurate, the AI hardware breakthrough could reshape the technology landscape at a time when AI development continues apace worldwide, despite US efforts to limit China’s access to advanced technology.

300 petaflops: Challenging Nvidia’s hardware dominance

The CloudMatrix 384 Supernode is described as a “nuclear-level product,” according to reports from STAR Market Daily cited by the South China Morning Post (SCMP). The hardware achieves an impressive 300 petaflops of computing power – roughly 1.7 times the 180 petaflops delivered by Nvidia’s NVL72 system.

The CloudMatrix 384 Supernode was specifically engineered to address the computing bottlenecks that have become increasingly problematic as artificial intelligence models continue to grow in size and complexity.

The system is designed to compete directly with Nvidia’s offerings, which have dominated the global market for AI accelerator hardware thus far. Huawei’s CloudMatrix infrastructure was first unveiled in September 2024, and was developed specifically to meet surging demand in China’s domestic market.

The 384 Supernode variant represents the most powerful implementation of the architecture to date, with reports indicating it can achieve a throughput of 1,920 tokens per second while maintaining high levels of accuracy – reportedly matching the performance of Nvidia’s H100 chips while using Chinese-made components instead.

Developing under sanctions: The technical achievement

What makes the AI hardware breakthrough particularly significant is that it has been achieved despite the severe technological restrictions Huawei has faced since being placed on the US Entity List.

Sanctions have limited the company’s access to advanced US semiconductor technology and design software, forcing Huawei to develop alternative approaches and rely on domestic supply chains.

The core technological advancement enabling the CloudMatrix 384’s performance appears to be Huawei’s answer to Nvidia’s NVLink – a high-speed interconnect technology that allows multiple GPUs to communicate efficiently.

Nvidia’s NVL72 system, released in March 2024, features a 72-GPU NVLink domain that functions as a single, powerful GPU, enabling real-time inference for trillion-parameter models at speeds 30 times faster than previous generations.

According to reporting from the SCMP, Huawei is collaborating with Chinese AI infrastructure startup SiliconFlow to implement the CloudMatrix 384 Supernode in supporting DeepSeek-R1, a reasoning model from Hangzhou-based DeepSeek.

Supernodes are AI infrastructure architectures equipped with more resources than standard systems – including enhanced central processing units, neural processing units, network bandwidth, storage, and memory.

The configuration allows them to function as relay servers, enhancing the overall computing performance of clusters and significantly accelerating the training of foundational AI models.

Beyond Huawei: China’s broader AI infrastructure push

The AI hardware breakthrough from Huawei doesn’t exist in isolation but rather represents part of a broader push by Chinese technology companies to build domestic AI computing infrastructure.

In February, e-commerce giant Alibaba Group announced a massive 380 billion yuan ($52.4 billion) investment in computing resources and AI infrastructure over three years – the largest-ever investment by a private Chinese company in a computing project.

For the global AI community, the emergence of viable alternatives to Nvidia’s hardware could eventually address the computing bottlenecks that have limited AI advancement. Competition in this space could potentially increase available computing capacity and provide developers with more options for training and deploying their models.

However, it’s worth noting that as of the report’s publication, Huawei had not yet responded to requests for comment on these claims.

As tensions between the US and China continue to intensify in the technology sector, Huawei’s CloudMatrix 384 Supernode represents a significant development in China’s pursuit of technological self-sufficiency.

If the performance claims are verified, this AI hardware breakthrough would mean Huawei has achieved computing independence in this niche, despite facing extensive sanctions.

The development also signals a broader trend in China’s technology sector, with multiple domestic companies intensifying their investments in AI infrastructure to capitalise on growing demand and promote the adoption of homegrown chips.

The collective effort suggests China is committed to developing domestic alternatives to American technology in this strategically important field.

See also: Manus AI agent: breakthrough in China’s agentic AI

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post Huawei’s AI hardware breakthrough challenges Nvidia’s dominance appeared first on AI News.

Machines Can See 2025 – Dubai AI event https://www.artificialintelligence-news.com/news/machines-can-see-2025-dubai-ai-event/ Thu, 17 Apr 2025 14:48:32 +0000

An AI investment and networking event, Machines Can See, will take place April 23-24 in Dubai at the iconic Museum of the Future, as part of Dubai AI week.

Machines Can See is staged by the Polynome Group, a machine vision, AI, robotic, and industrial design company based in the city.

This is the third year of the event, which will bring investors, business leaders, and policymakers together to explore AI-centric expansion opportunities. Machines Can See, as the name suggests, will have a particular focus on computer vision.

Each discussion and keynote is designed to be firmly rooted in practical applications of AI technology, but organisers hope that the show will be permeated with a sense of discovery and that attendees will be able to explore the possibilities of the tech on show. “We are not just shaping the future of AI, we are defining how AI shapes the world,” said Alexander Khanin, head of the Polynome Group.

UAE Government officials attending the event include H.E. Omar Sultan Al Olama, UAE Minister of State for Artificial Intelligence, Digital Economy, and Remote Work Applications, and H.E. Hamad Obaid Al Mansoori, the Director General of Digital Dubai.

Polynome Group has said that X will be the official streaming partner for Machines Can See 2025, and the US company will host workshops titled “X and AI” to showcase solutions that merge AI and streaming technologies, with X’s Grok model central to those sessions. Via interactive demos, attendees will gain firsthand experience of Grok’s potential in AI delivery, analysis, and optimisation.

Investment and business opportunities

The UAE’s AI market is projected to grow by $8.4 billion in the next two years, and the summit is designed to serve as a venue for investors to engage with AI startups, established enterprises, and government decision-makers. Attendees at Machines Can See will be able to meet investors and venture capital firms, executives from AI companies (including IBM and Amazon), and startups seeking investment.

The summit is supported by Amazon Prime Video & Studios, Amazon Web Services, Dubai Police, MBZUAI, IBM, SAP, Adia Lab, QuantumBlack, and Yango. The involvement of so many organisations and large-scale enterprises should provide ample opportunities for funding and collaborations that extend the commercial use of AI.

Local and international investors include Eddy Farhat, Executive Director at e& capital; Faris Al Mazrui, Head of Growth Investments at Mubadala; Major General Khalid Nasser Alrazooqi, General Director of Artificial Intelligence at Dubai Police, UAE; and Dr. Najwa Aaraj, CEO of TII.

Speakers and insights

The summit will feature several US-based AI professionals, including Namik Hrle, IBM Fellow and Vice President of Development at the IBM Software Group; Michael Bronstein, DeepMind Professor of AI at Oxford University; Marc Pollefeys, Professor of Computer Science at ETH Zurich; Gerard Medioni, VP and Distinguished Scientist at Amazon Prime Video & Studios; and Deva Ramanan, Professor at the Robotics Institute of Carnegie Mellon University.

The event will feature a ministerial session composed of international government representatives to discuss the role of national IT development.

Among speakers already confirmed for the event are Gobind Singh Deo, Malaysia’s Minister of Digital; H.E. Zhaslan Madiyev, Minister of Digital Development, Innovation, and Aerospace Industry of Kazakhstan; and H.E. Omar Sultan Al Olama, UAE Minister of State for Artificial Intelligence, Digital Economy, and Remote Work Applications.

Event organisers expect to announce more representatives from overseas in the coming days. Read more here.

The post Machines Can See 2025 – Dubai AI event appeared first on AI News.
