artificial intelligence Archives - AI News https://www.artificialintelligence-news.com/news/tag/artificial-intelligence/ Artificial Intelligence News Fri, 02 May 2025 12:38:13 +0000 en-GB hourly 1 https://wordpress.org/?v=6.8.1 https://www.artificialintelligence-news.com/wp-content/uploads/2020/09/cropped-ai-icon-32x32.png artificial intelligence Archives - AI News https://www.artificialintelligence-news.com/news/tag/artificial-intelligence/ 32 32 Google AMIE: AI doctor learns to ‘see’ medical images https://www.artificialintelligence-news.com/news/google-amie-ai-doctor-learns-to-see-medical-images/ https://www.artificialintelligence-news.com/news/google-amie-ai-doctor-learns-to-see-medical-images/#respond Fri, 02 May 2025 12:38:12 +0000 https://www.artificialintelligence-news.com/?p=106274 Google is giving its diagnostic AI the ability to understand visual medical information with its latest research on AMIE (Articulate Medical Intelligence Explorer). Imagine chatting with an AI about a health concern, and instead of just processing your words, it could actually look at the photo of that worrying rash or make sense of your […]

The post Google AMIE: AI doctor learns to ‘see’ medical images appeared first on AI News.

]]>
Google is giving its diagnostic AI the ability to understand visual medical information with its latest research on AMIE (Articulate Medical Intelligence Explorer).

Imagine chatting with an AI about a health concern, and instead of just processing your words, it could actually look at the photo of that worrying rash or make sense of your ECG printout. That’s what Google is aiming for.

We already knew AMIE showed promise in text-based medical chats, thanks to earlier work published in Nature. But let’s face it, real medicine isn’t just about words.

Doctors rely heavily on what they can see – skin conditions, readings from machines, lab reports. As the Google team rightly points out, even simple instant messaging platforms “allow static multimodal information (e.g., images and documents) to enrich discussions.”

Text-only AI was missing a huge piece of the puzzle. The big question, as the researchers put it, was “Whether LLMs can conduct diagnostic clinical conversations that incorporate this more complex type of information.”

Google teaches AMIE to look and reason

Google’s engineers have beefed up AMIE using their Gemini 2.0 Flash model as the brains of the operation. They’ve combined this with what they call a “state-aware reasoning framework.” In plain English, this means the AI doesn’t just follow a script; it adapts its conversation based on what it’s learned so far and what it still needs to figure out.

It’s close to how a human clinician works: gathering clues, forming ideas about what might be wrong, and then asking for more specific information – including visual evidence – to narrow things down.

“This enables AMIE to request relevant multimodal artifacts when needed, interpret their findings accurately, integrate this information seamlessly into the ongoing dialogue, and use it to refine diagnoses,” Google explains.

Think of the conversation flowing through stages: first gathering the patient’s history, then moving towards diagnosis and management suggestions, and finally follow-up. The AI constantly assesses its own understanding, asking for that skin photo or lab result if it senses a gap in its knowledge.

To get this right without endless trial-and-error on real people, Google built a detailed simulation lab.

Google created lifelike patient cases, pulling realistic medical images and data from sources like the PTB-XL ECG database and the SCIN dermatology image set, adding plausible backstories using Gemini. Then, they let AMIE ‘chat’ with simulated patients within this setup and automatically check how well it performed on things like diagnostic accuracy and avoiding errors (or ‘hallucinations’).

The virtual OSCE: Google puts AMIE through its paces

The real test came in a setup designed to mirror how medical students are assessed: the Objective Structured Clinical Examination (OSCE).

Google ran a remote study involving 105 different medical scenarios. Real actors, trained to portray patients consistently, interacted either with the new multimodal AMIE or with actual human primary care physicians (PCPs). These chats happened through an interface where the ‘patient’ could upload images, just like you might in a modern messaging app.

Afterwards, specialist doctors (in dermatology, cardiology, and internal medicine) and the patient actors themselves reviewed the conversations.

The human doctors scored everything from how well history was taken, the accuracy of the diagnosis, the quality of the suggested management plan, right down to communication skills and empathy—and, of course, how well the AI interpreted the visual information.

Surprising results from the simulated clinic

Here’s where it gets really interesting. In this head-to-head comparison within the controlled study environment, Google found AMIE didn’t just hold its own—it often came out ahead.

The AI was rated as being better than the human PCPs at interpreting the multimodal data shared during the chats. It also scored higher on diagnostic accuracy, producing differential diagnosis lists (the ranked list of possible conditions) that specialists deemed more accurate and complete based on the case details.

Specialist doctors reviewing the transcripts tended to rate AMIE’s performance higher across most areas. They particularly noted “the quality of image interpretation and reasoning,” the thoroughness of its diagnostic workup, the soundness of its management plans, and its ability to flag when a situation needed urgent attention.

Perhaps one of the most surprising findings came from the patient actors: they often found the AI to be more empathetic and trustworthy than the human doctors in these text-based interactions.

And, on a critical safety note, the study found no statistically significant difference between how often AMIE made errors based on the images (hallucinated findings) compared to the human physicians.

Technology never stands still, so Google also ran some early tests swapping out the Gemini 2.0 Flash model for the newer Gemini 2.5 Flash.

Using their simulation framework, the results hinted at further gains, particularly in getting the diagnosis right (Top-3 Accuracy) and suggesting appropriate management plans.

While promising, the team is quick to add a dose of realism: these are just automated results, and “rigorous assessment through expert physician review is essential to confirm these performance benefits.”

Important reality checks

Google is commendably upfront about the limitations here. “This study explores a research-only system in an OSCE-style evaluation using patient actors, which substantially under-represents the complexity… of real-world care,” they state clearly. 

Simulated scenarios, however well-designed, aren’t the same as dealing with the unique complexities of real patients in a busy clinic. They also stress that the chat interface doesn’t capture the richness of a real video or in-person consultation.

So, what’s the next step? Moving carefully towards the real world. Google is already partnering with Beth Israel Deaconess Medical Center for a research study to see how AMIE performs in actual clinical settings with patient consent.

The researchers also acknowledge the need to eventually move beyond text and static images towards handling real-time video and audio—the kind of interaction common in telehealth today.

Giving AI the ability to ‘see’ and interpret the kind of visual evidence doctors use every day offers a glimpse of how AI might one day assist clinicians and patients. However, the path from these promising findings to a safe and reliable tool for everyday healthcare is still a long one that requires careful navigation.

(Photo by Alexander Sinn)

See also: Are AI chatbots really changing the world of work?

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post Google AMIE: AI doctor learns to ‘see’ medical images appeared first on AI News.

]]>
https://www.artificialintelligence-news.com/news/google-amie-ai-doctor-learns-to-see-medical-images/feed/ 0
Are AI chatbots really changing the world of work? https://www.artificialintelligence-news.com/news/are-ai-chatbots-really-changing-the-world-of-work/ https://www.artificialintelligence-news.com/news/are-ai-chatbots-really-changing-the-world-of-work/#respond Fri, 02 May 2025 09:54:32 +0000 https://www.artificialintelligence-news.com/?p=106266 We’ve heard endless predictions about how AI chatbots will transform work, but data paints a much calmer picture—at least for now. Despite huge and ongoing advancements in generative AI, the massive wave it was supposed to create in the world of work looks more like a ripple so far. Researchers Anders Humlum (University of Chicago) […]

The post Are AI chatbots really changing the world of work? appeared first on AI News.

]]>
We’ve heard endless predictions about how AI chatbots will transform work, but data paints a much calmer picture—at least for now.

Despite huge and ongoing advancements in generative AI, the massive wave it was supposed to create in the world of work looks more like a ripple so far.

Researchers Anders Humlum (University of Chicago) and Emilie Vestergaard (University of Copenhagen) didn’t just rely on anecdotes. They dug deep, connecting responses from two big surveys (late 2023 and 2024) with official, detailed records about jobs and pay in Denmark.

The pair zoomed in on around 25,000 people working in 7,000 different places, covering 11 jobs thought to be right in the path of AI disruption.   

Everyone’s using AI chatbots for work, but where are the benefits?

What they found confirms what many of us see: AI chatbots are everywhere in Danish workplaces now. Most bosses are actually encouraging staff to use them, a real turnaround from the early days when companies were understandably nervous about things like data privacy.

Almost four out of ten employers have even rolled out their own in-house chatbots, and nearly a third of employees have had some formal training on these tools.   

When bosses gave the nod, the number of staff using chatbots practically doubled, jumping from 47% to 83%. It also helped level the playing field a bit. That gap between men and women using chatbots? It shrank noticeably when companies actively encouraged their use, especially when they threw in some training.

So, the tools are popular, companies are investing, people are getting trained… but the big economic shift? It seems to be missing in action.

Using statistical methods to compare people who used AI chatbots for work with those who didn’t, both before and after ChatGPT burst onto the scene, the researchers found… well, basically nothing.

“Precise zeros,” the researchers call their findings. No significant bump in pay, no change in recorded work hours, across all 11 job types they looked at. And they’re pretty confident about this – the numbers rule out any average effect bigger than just 1%.

This wasn’t just a blip, either. The lack of impact held true even for the keen beans who jumped on board early, those using chatbots daily, or folks working where the boss was actively pushing the tech.

Looking at whole workplaces didn’t change the story; places with lots of chatbot users didn’t see different trends in hiring, overall wages, or keeping staff compared to places using them less.

Productivity gains: More of a gentle nudge than a shove

Why the big disconnect? Why all the hype and investment if it’s not showing up in paychecks or job stats? The study flags two main culprits: the productivity boosts aren’t as huge as hoped in the real world, and what little gains there are aren’t really making their way into wages.

Sure, people using AI chatbots for work felt they were helpful. They mentioned better work quality and feeling more creative. But the number one benefit? Saving time.

However, when the researchers crunched the numbers, the average time saved was only about 2.8% of a user’s total work hours. That’s miles away from the huge 15%, 30%, even 50% productivity jumps seen in controlled lab-style experiments (RCTs) involving similar jobs.

Why the difference? A few things seem to be going on. Those experiments often focus on jobs or specific tasks where chatbots really shine (like coding help or basic customer service responses). This study looked at a wider range, including jobs like teaching where the benefits might be smaller.

The researchers stress the importance of what they call “complementary investments”. People whose companies encouraged chatbot use and provided training actually did report bigger benefits – saving more time, improving quality, and feeling more creative. This suggests that just having the tool isn’t enough; you need the right support and company environment to really unlock its potential.

And even those modest time savings weren’t padding wallets. The study reckons only a tiny fraction – maybe 3% to 7% – of the time saved actually showed up as higher earnings. It might be down to standard workplace inertia, or maybe it’s just harder to ask for a raise based on using a tool your boss hasn’t officially blessed, especially when many people started using them off their own bat.

Making new work, not less work

One fascinating twist is that AI chatbots aren’t just about doing old work tasks faster. They seem to be creating new tasks too. Around 17% of people using them said they had new workloads, mostly brand new types of tasks.

This phenomenon happened more often in workplaces that encouraged chatbot use. It even spilled over to people not using the tools – about 5% of non-users reported new tasks popping up because of AI, especially teachers having to adapt assignments or spot AI-written homework.   

What kind of new tasks? Things like figuring out how to weave AI into daily workflows, drafting content with AI help, and importantly, dealing with the ethical side and making sure everything’s above board. It hints that companies are still very much in the ‘figuring it out’ phase, spending time and effort adapting rather than just reaping instant rewards.

What’s the verdict on the work impact of AI chatbots?

The researchers are careful not to write off generative AI completely. They see pathways for it to become more influential over time, especially as companies get better at integrating it and maybe as those “new tasks” evolve.

But for now, their message is clear: the current reality doesn’t match the hype about a massive, immediate job market overhaul.

“Despite rapid adoption and substantial investments… our key finding is that AI chatbots have had minimal impact on productivity and labor market outcomes to date,” the researchers conclude.   

It brings to mind that old quote about the early computer age: seen everywhere, except in the productivity stats. Two years on from ChatGPT’s launch kicking off the fastest tech adoption we’ve ever seen, its actual mark on jobs and pay looks surprisingly light.

The revolution might still be coming, but it seems to be taking its time.   

See also: Claude Integrations: Anthropic adds AI to your favourite work tools

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post Are AI chatbots really changing the world of work? appeared first on AI News.

]]>
https://www.artificialintelligence-news.com/news/are-ai-chatbots-really-changing-the-world-of-work/feed/ 0
Claude Integrations: Anthropic adds AI to your favourite work tools https://www.artificialintelligence-news.com/news/claude-integrations-anthropic-adds-ai-favourite-work-tools/ https://www.artificialintelligence-news.com/news/claude-integrations-anthropic-adds-ai-favourite-work-tools/#respond Thu, 01 May 2025 17:02:33 +0000 https://www.artificialintelligence-news.com/?p=106258 Anthropic just launched ‘Integrations’ for Claude that enables the AI to talk directly to your favourite daily work tools. In addition, the company has launched a beefed-up ‘Advanced Research’ feature for digging deeper than ever before. Starting with Integrations, the feature builds on a technical standard Anthropic released last year (the Model Context Protocol, or […]

The post Claude Integrations: Anthropic adds AI to your favourite work tools appeared first on AI News.

]]>
Anthropic just launched ‘Integrations’ for Claude that enables the AI to talk directly to your favourite daily work tools. In addition, the company has launched a beefed-up ‘Advanced Research’ feature for digging deeper than ever before.

Starting with Integrations, the feature builds on a technical standard Anthropic released last year (the Model Context Protocol, or MCP), but makes it much easier to use. Before, setting this up was a bit technical and local. Now, developers can build secure bridges allowing Claude to connect safely with apps over the web or on your desktop.

For end-users of Claude, this means you can now hook it up to a growing list of popular work software. Right out of the gate, they’ve included support for ten big names: Atlassian’s Jira and Confluence (hello, project managers and dev teams!), the automation powerhouse Zapier, Cloudflare, customer comms tool Intercom, plus Asana, Square, Sentry, PayPal, Linear, and Plaid. Stripe and GitLab are joining the party soon.

So, what’s the big deal? The real advantage here is context. When Claude can see your project history in Jira, read your team’s knowledge base in Confluence, or check task updates in Asana, it stops guessing and starts understanding what you’re working on.

“When you connect your tools to Claude, it gains deep context about your work—understanding project histories, task statuses, and organisational knowledge—and can take actions across every surface,” explains Anthropic.

They add, “Claude becomes a more informed collaborator, helping you execute complex projects in one place with expert assistance at every step.”

Let’s look at what this means in practice. Connect Zapier, and you suddenly give Claude the keys to thousands of apps linked by Zapier’s workflows. You could just ask Claude, conversationally, to trigger a complex sequence – maybe grab the latest sales numbers from HubSpot, check your calendar, and whip up some meeting notes, all without you lifting a finger in those apps.

For teams using Atlassian’s Jira and Confluence, Claude could become a serious helper. Think drafting product specs, summarising long Confluence documents so you don’t have to wade through them, or even creating batches of linked Jira tickets at once. It might even spot potential roadblocks by analysing project data.

And if you use Intercom for customer chats, this integration could be a game-changer. Intercom’s own AI assistant, Fin, can now work with Claude to do things like automatically create a bug report in Linear if a customer flags an issue. You could also ask Claude to sift through your Intercom chat history to spot patterns, help debug tricky problems, or summarise what customers are saying – making the whole journey from feedback to fix much smoother.

Anthropic is also making it easier for developers to build even more of these connections. They reckon that using their tools (or platforms like Cloudflare that handle the tricky bits like security and setup), developers can whip up a custom Integration with Claude in about half an hour. This could mean connecting Claude to your company’s unique internal systems or specialised industry software.

Beyond tool integrations, Claude gets a serious research upgrade

Alongside these new connections, Anthropic has given Claude’s Research feature a serious boost. It could already search the web and your Google Workspace files, but the new ‘Advanced Research’ mode is built for when you need to dig really deep.

Flip the switch for this advanced mode, and Claude tackles big questions differently. Instead of just one big search, it intelligently breaks your request down into smaller chunks, investigates each part thoroughly – using the web, your Google Docs, and now tapping into any apps you’ve connected via Integrations – before pulling it all together into a detailed report.

Now, this deeper digging takes a bit more time. While many reports might only take five to fifteen minutes, Anthropic says the really complex investigations could have Claude working away for up to 45 minutes. That might sound like a while, but compare it to the hours you might spend grinding through that research manually, and it starts to look pretty appealing.

Importantly, you can trust the results. When Claude uses information from any source – whether it’s a website, an internal doc, a Jira ticket, or a Confluence page – it gives you clear links straight back to the original. No more wondering where the AI got its information from; you can check it yourself.

These shiny new Integrations and the Advanced Research mode are rolling out now in beta for folks on Anthropic’s paid Max, Team, and Enterprise plans. If you’re on the Pro plan, don’t worry – access is coming your way soon.

Also worth noting: the standard web search feature inside Claude is now available everywhere, for everyone on any paid Claude.ai plan (Pro and up). No more geographical restrictions on that front.

Putting it all together, these updates and integrations show Anthropic is serious about making Claude genuinely useful in a professional context. By letting it plug directly into the tools we already use and giving it more powerful ways to analyse information, they’re pushing Claude towards being less of a novelty and more of an essential part of the modern toolkit.

(Image credit: Anthropic)

See also: Baidu ERNIE X1 and 4.5 Turbo boast high performance at low cost

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post Claude Integrations: Anthropic adds AI to your favourite work tools appeared first on AI News.

]]>
https://www.artificialintelligence-news.com/news/claude-integrations-anthropic-adds-ai-favourite-work-tools/feed/ 0
Meta beefs up AI security with new Llama tools  https://www.artificialintelligence-news.com/news/meta-beefs-up-ai-security-new-llama-tools/ https://www.artificialintelligence-news.com/news/meta-beefs-up-ai-security-new-llama-tools/#respond Wed, 30 Apr 2025 13:35:22 +0000 https://www.artificialintelligence-news.com/?p=106233 If you’re building with AI, or trying to defend against the less savoury side of the technology, Meta just dropped new Llama security tools. The improved security tools for the Llama AI models arrive alongside fresh resources from Meta designed to help cybersecurity teams harness AI for defence. It’s all part of their push to […]

The post Meta beefs up AI security with new Llama tools  appeared first on AI News.

]]>
If you’re building with AI, or trying to defend against the less savoury side of the technology, Meta just dropped new Llama security tools.

The improved security tools for the Llama AI models arrive alongside fresh resources from Meta designed to help cybersecurity teams harness AI for defence. It’s all part of their push to make developing and using AI a bit safer for everyone involved.

Developers working with the Llama family of models now have some upgraded kit to play with. You can grab these latest Llama Protection tools directly from Meta’s own Llama Protections page, or find them where many developers live: Hugging Face and GitHub.

First up is Llama Guard 4. Think of it as an evolution of Meta’s customisable safety filter for AI. The big news here is that it’s now multimodal so it can understand and apply safety rules not just to text, but to images as well. That’s crucial as AI applications get more visual. This new version is also being baked into Meta’s brand-new Llama API, which is currently in a limited preview.

Then there’s LlamaFirewall. This is a new piece of the puzzle from Meta, designed to act like a security control centre for AI systems. It helps manage different safety models working together and hooks into Meta’s other protection tools. Its job? To spot and block the kind of risks that keep AI developers up at night – things like clever ‘prompt injection’ attacks designed to trick the AI, potentially dodgy code generation, or risky behaviour from AI plug-ins.

Meta has also given its Llama Prompt Guard a tune-up. The main Prompt Guard 2 (86M) model is now better at sniffing out those pesky jailbreak attempts and prompt injections. More interestingly, perhaps, is the introduction of Prompt Guard 2 22M.

Prompt Guard 2 22M is a much smaller, nippier version. Meta reckons it can slash latency and compute costs by up to 75% compared to the bigger model, without sacrificing too much detection power. For anyone needing faster responses or working on tighter budgets, that’s a welcome addition.

But Meta isn’t just focusing on the AI builders; they’re also looking at the cyber defenders on the front lines of digital security. They’ve heard the calls for better AI-powered tools to help in the fight against cyberattacks, and they’re sharing some updates aimed at just that.

The CyberSec Eval 4 benchmark suite has been updated. This open-source toolkit helps organisations figure out how good AI systems actually are at security tasks. This latest version includes two new tools:

  • CyberSOC Eval: Built with the help of cybersecurity experts CrowdStrike, this framework specifically measures how well AI performs in a real Security Operation Centre (SOC) environment. It’s designed to give a clearer picture of AI’s effectiveness in threat detection and response. The benchmark itself is coming soon.
  • AutoPatchBench: This benchmark tests how good Llama and other AIs are at automatically finding and fixing security holes in code before the bad guys can exploit them.

To help get these kinds of tools into the hands of those who need them, Meta is kicking off the Llama Defenders Program. This seems to be about giving partner companies and developers special access to a mix of AI solutions – some open-source, some early-access, some perhaps proprietary – all geared towards different security challenges.

As part of this, Meta is sharing an AI security tool they use internally: the Automated Sensitive Doc Classification Tool. It automatically slaps security labels on documents inside an organisation. Why? To stop sensitive info from walking out the door, or to prevent it from being accidentally fed into an AI system (like in RAG setups) where it could be leaked.

They’re also tackling the problem of fake audio generated by AI, which is increasingly used in scams. The Llama Generated Audio Detector and Llama Audio Watermark Detector are being shared with partners to help them spot AI-generated voices in potential phishing calls or fraud attempts. Companies like ZenDesk, Bell Canada, and AT&T are already lined up to integrate these.

Finally, Meta gave a sneak peek at something potentially huge for user privacy: Private Processing. This is new tech they’re working on for WhatsApp. The idea is to let AI do helpful things like summarise your unread messages or help you draft replies, but without Meta or WhatsApp being able to read the content of those messages.

Meta is being quite open about the security side, even publishing their threat model and inviting security researchers to poke holes in the architecture before it ever goes live. It’s a sign they know they need to get the privacy aspect right.

Overall, it’s a broad set of AI security announcements from Meta. They’re clearly trying to put serious muscle behind securing the AI they build, while also giving the wider tech community better tools to build safely and defend effectively.

See also: Alarming rise in AI-powered scams: Microsoft reveals $4B in thwarted fraud

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post Meta beefs up AI security with new Llama tools  appeared first on AI News.

]]>
https://www.artificialintelligence-news.com/news/meta-beefs-up-ai-security-new-llama-tools/feed/ 0
AI strategies for cybersecurity press releases that get coverage https://www.artificialintelligence-news.com/news/ai-strategies-for-cybersecurity-press-releases-that-get-coverage/ https://www.artificialintelligence-news.com/news/ai-strategies-for-cybersecurity-press-releases-that-get-coverage/#respond Mon, 28 Apr 2025 14:26:27 +0000 https://www.artificialintelligence-news.com/?p=106172 If you’ve ever tried to get your cybersecurity news picked up by media outlets, you’ll know just how much of a challenge (and how disheartening) it can be. You pour hours into what you think is an excellent announcement about your new security tool, threat research, or vulnerability discovery, only to watch it disappear into […]

The post AI strategies for cybersecurity press releases that get coverage appeared first on AI News.

]]>
If you’ve ever tried to get your cybersecurity news picked up by media outlets, you’ll know just how much of a challenge (and how disheartening) it can be. You pour hours into what you think is an excellent announcement about your new security tool, threat research, or vulnerability discovery, only to watch it disappear into journalists’ overflowing inboxes without a trace.

The cyber PR space is brutally competitive. Reporters at top publications receive tens, if not hundreds, of pitches each day, and they have no choice but to be highly selective about which releases they choose to cover and which to discard. Your challenge then isn’t just creating a good press release, it’s making one that grabs attention and stands out in an industry drowning in technical jargon and “revolutionary” solutions.

Why most cybersecurity press releases fall flat

Let’s first look at some of the main reasons why many cyber press releases fail:

  • They’re too complex from the start, losing non-technical reporters
  • They bury the actual news under corporate marketing speak.
  • They focus on product features rather than the real-world impact or problems they solve.
  • They lack credible data or specific research findings that journalists can cite as support.

Most of these problems have one main theme: Journalists aren’t interested in promoting your product or your business. They are looking after their interests and seeking newsworthy stories their audiences care about. Keep this in mind and make their job easier by showing them exactly why your announcement matters.

Learning how to write a cybersecurity press release

What does a well-written press release look like? Alongside the reasons listed above, many companies make the mistake of submitting poorly formatted releases that journalists will be unlikely to spend time reading.

It’s worth learning how to write a cybersecurity press release properly, including the preferred structure (headline, subheader, opening paragraph, boilerplate, etc). And, be sure to review some examples of high-quality press releases as well.

AI strategies that transform your press release process

Let’s examine how AI tools can significantly enhance your cyber PR at every stage.

1. Research Enhancement

Use AI tools to track media coverage patterns and identify emerging trends in cybersecurity news. You can analyse which types of security stories gain traction, and this can help you position your announcement in that context.

Another idea is to use LLMs (like Google’s Gemini or OpenAI’s ChatGPT) to analyse hundreds of successful cybersecurity press releases in a niche similar to yours. Ask it to identify common elements in those that generated significant coverage, and then use these same features in your cyber PR efforts.

To take this a step further, AI-powered sentiment analysis can help you understand how different audience segments receive specific cybersecurity topics. The intelligence can help you tailor your messaging to address current concerns and capitalise on positive industry momentum.

2. Writing assistance

If you struggle to convey complex ideas and terminology in more accessible language, consider asking the LLM to help simplify your messaging. This can help transform technical specifications into clear, accessible language that non-technical journalists can understand.

Since the headline is the most important part of your release, use an LLM to generate a handful of options based on your core announcement, then select the best one based on clarity and impact. Once your press release is complete, run it through an LLM to identify and replace jargon that might be second nature to your security team but may be confusing to general tech reporters.

3. Visual storytelling

If you are struggling to find ways to explain your product or service in accessible language, visuals can help. AI image generation tools, like Midjourney, create custom visuals based on prompts that help illustrate your message. The latest models can handle highly complex tasks.

With a bit of prompt engineering (and by incorporating the press release you want help with), you should be able to create accompanying images and infographics that bring your message to life.

4. Video content

Going one step further than a static image, a brief AI-generated explainer video can sit alongside your press release, providing journalists with ready-to-use content that explains complex security concepts. Some ideas include:

  • Short Explainer Videos: Use text-to-video tools to turn essential sections of your press release into a brief (60 seconds or less) animated or stock-footage-based video. You can usually use narration and text overlays directly on the AI platforms as well.
  • AI Avatar Summaries: Several tools now enable you to create a brief video featuring an AI avatar that presents the core message of the press release. A human-looking avatar reads out the content and delivers an audio and video component for your release.
  • Data Visualisation Videos: Use AI tools to animate key statistics or processes described in the release for enhanced clarity.

Final word

Even as you use the AI tools you have at your disposal, remember that the most effective cybersecurity press releases still require that all-important human insight and expertise. Your goal isn’t to automate the entire process. Instead, use AI to enhance your cyber PR efforts and make your releases stand out from the crowd.

AI should help emphasise, not replace, the human elements that make security stories so engaging and compelling. Be sure to shine a spotlight on the researchers who made the discovery, the real-world implications of any threat vulnerabilities you uncover, and the people security measures ultimately protect.

Combine this human-focused storytelling with the power of AI automation, and you’ll ensure that your press releases and cyber PR campaigns get the maximum mileage.

The post AI strategies for cybersecurity press releases that get coverage appeared first on AI News.

]]>
https://www.artificialintelligence-news.com/news/ai-strategies-for-cybersecurity-press-releases-that-get-coverage/feed/ 0
AI in education: Balancing promises and pitfalls https://www.artificialintelligence-news.com/news/ai-in-education-balancing-promises-and-pitfalls/ https://www.artificialintelligence-news.com/news/ai-in-education-balancing-promises-and-pitfalls/#respond Mon, 28 Apr 2025 12:27:09 +0000 https://www.artificialintelligence-news.com/?p=106158 The role of AI in education is a controversial subject, bringing both exciting possibilities and serious challenges. There’s a real push to bring AI into schools, and you can see why. The recent executive order on youth education from President Trump recognised that if future generations are going to do well in an increasingly automated […]

The post AI in education: Balancing promises and pitfalls appeared first on AI News.

]]>
The role of AI in education is a controversial subject, bringing both exciting possibilities and serious challenges.

There’s a real push to bring AI into schools, and you can see why. The recent executive order on youth education from President Trump recognised that if future generations are going to do well in an increasingly automated world, they need to be ready.

“To ensure the United States remains a global leader in this technological revolution, we must provide our nation’s youth with opportunities to cultivate the skills and understanding necessary to use and create the next generation of AI technology,” President Trump declared.

So, what does AI actually look like in the classroom?

One of the biggest hopes for AI in education is making learning more personal. Imagine software that can figure out how individual students are doing, then adjust the pace and materials just for them. This could mean finally moving away from the old one-size-fits-all approach towards learning environments that adapt and offer help exactly where it’s needed.

The US executive order hints at this, wanting to improve results through things like “AI-based high-quality instructional resources” and “high-impact tutoring.”

And what about teachers? AI could be a huge help here too, potentially taking over tedious admin tasks like grading, freeing them up to actually teach. Plus, AI software might offer fresh ways to present information.

Getting kids familiar with AI early on could also take away some of the mystery around the technology. It might spark their “curiosity and creativity” and give them the foundation they need to become “active and responsible participants in the workforce of the future.”

The focus stretches to lifelong learning and getting people ready for the job market. On top of that, AI tools like text-to-speech or translation features can make learning much more accessible for students with disabilities, opening up educational environments for everyone.

Not all smooth sailing: The challenges ahead for AI in education

While the potential is huge, we need to be realistic about the significant hurdles and potential downsides.

First off, AI runs on student data – lots of it. That means we absolutely need strong rules and security to make sure this data is collected ethically, used correctly, and kept safe from breaches. Privacy is paramount here.

Then there’s the bias problem. If the data used to train AI reflects existing unfairness in society (and let’s be honest, it often does), the AI could end up repeating or even worsening those inequalities. Think biased assessments or unfair resource allocation. Careful testing and constant checks are crucial to catch and fix this.

We also can’t ignore the digital divide. If some students don’t have reliable internet, the right devices, or the necessary tech infrastructure at home or school, AI could widen the gap between the haves and have-nots. It’s vital that everyone gets fair access.

There’s also a risk that leaning too heavily on AI education tools might stop students from developing essential skills like critical thinking. We need to teach them how to use AI as a helpful tool, not a crutch they can’t function without.

Maybe the biggest piece of the puzzle, though, is making sure our teachers are ready. As the executive order rightly points out, “We must also invest in our educators and equip them with the tools and knowledge.”

This isn’t just about knowing which buttons to push; teachers need to understand how AI fits into teaching effectively and ethically. That requires solid professional development and ongoing support.

A recent GMB Union poll found that while about a fifth of UK schools are using AI now, the staff often aren’t getting the training they need:

View on Threads

Finding the right path forward

It’s going to take everyone – governments, schools, tech companies, and teachers – pulling together in order to ensure that AI plays a positive role in education.

We absolutely need clear policies and standards covering ethics, privacy, bias, and making sure AI is accessible to all students. We also need to keep investing in research to figure out the best ways to use AI in education and to build tools that are fair and effective.

And critically, we need a long-term commitment to teacher education to get educators comfortable and skilled with these changes. Part of this is building broad AI literacy, making sure all students get a basic understanding of this technology and how it impacts society.

AI could be a positive force in education – making it more personalised, efficient, and focused on the skills students actually need. But turning that potential into reality means carefully navigating those tricky ethical, practical, and teaching challenges head-on.

See also: How does AI judge? Anthropic studies the values of Claude

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post AI in education: Balancing promises and pitfalls appeared first on AI News.

]]>
https://www.artificialintelligence-news.com/news/ai-in-education-balancing-promises-and-pitfalls/feed/ 0
“Machines Can See 2025” wraps in Dubai after two‑day showcase of AI https://www.artificialintelligence-news.com/news/machines-can-see-2025-post-event/ https://www.artificialintelligence-news.com/news/machines-can-see-2025-post-event/#respond Mon, 28 Apr 2025 07:47:48 +0000 https://www.artificialintelligence-news.com/?p=106136 The third edition of Machines Can See (MCS) Summit has concluded at Dubai’s Museum of the Future. More than 300 start‑ups pitched to investors from EQT Ventures, Balderton, Lakestar, e& capital and Mubadala, and more than 3,500 delegates from 45 countries attended the summit, while online engagement levels were high (4.7 million views). Real-time updates […]

The post “Machines Can See 2025” wraps in Dubai after two‑day showcase of AI appeared first on AI News.

]]>
The third edition of Machines Can See (MCS) Summit has concluded at Dubai’s Museum of the Future. More than 300 start‑ups pitched to investors from EQT Ventures, Balderton, Lakestar, e& capital and Mubadala, and more than 3,500 delegates from 45 countries attended the summit, while online engagement levels were high (4.7 million views). Real-time updates with the #MCS2025 hashtag are projected to exceed 5 million views.

The summit was hosted by UAE-based Polynome Group under the patronage of H.H. Sheikh Hamdan bin Mohammed bin Rashid Al Maktoum. Strategic backers included Digital Dubai, Dubai Police, Emirates, Amazon Web Services, NVIDIA, IBM, SAP, MBZUAI among others.

“In just three years, MCS has evolved from a specialist meet‑up into a true crossroads for the world’s top minds in science, business and public policy. The week proved that when researchers, entrepreneurs and governments share one stage, we move a step closer to transparent, human‑centred AI that delivers real value for society,” said Alexander Khanin, founder & CEO of Polynome Group

Landmark agreements announced live on stage

During the two‑day programme, several high‑profile agreements were signed at the summit, including:

  • A trilateral Memorandum of Understanding between Astana Hub (Kazakhstan), IT‑Park Uzbekistan and Al‑Farabi Innovation Hub (UAE), creating a Central‑Asia‑to‑MENA soft‑landing platform for high‑growth start‑ups.
  • A Google Cloud initiative offering no‑cost “Gen‑AI Leader” learning paths and discounted certification vouchers to accelerate responsible AI adoption across the region.

Polynome Group officially launched AI Academy, an educational initiative developed in collaboration with the Abu Dhabi School of Management and supported by NVIDIA’s Deep Learning Institute. The Academy will offer short executive seminars and a specialised four‑month Mini‑MBA in AI, aimed at equipping leaders and innovators with practical AI knowledge to bridge the gap between technology research and commercial application.

Policy & talent

Day one opened with a ministerial round‑table – “Wanted: AI to Retain and Attract Talent to the Country.” Ministers Omar Sultan Al Olama (UAE), Amr Talaat (Egypt), Gobind Singh Deo (Malaysia), Zhaslan Madiyev (Kazakhstan) and Meutya Hafid (Indonesia) detailed visa‑fast‑track programmes, national GPU clouds and cross‑border sandboxes designed to reverse brain‑drain and accelerate R&D.

Breakthrough research

  • Prof. Michael Bronstein (University of Oxford/Google DeepMind) demonstrated Geometric Deep Learning applications that shorten drug‑discovery timelines and model subatomic physics.
  • Marco Tempest (NASA JPL/MagicLab.nyc) blended GPT‑4o dialogue with mixed‑reality holograms, turning the stage into an interactive mind‑map.
  • Prof. Michal Irani (Weizmann Institute) showed perception‑to‑cognition systems capable of reconstructing scenes from a single gaze sequence.
  • Andrea Vedaldi (Oxford) premiered a 3‑D generative‑AI pipeline for instant city‑scale digital twins, while Marc Pollefeys (ETH Zurich/Microsoft) demonstrated real‑time spatial mapping at sub‑10 ms latency.

Industry workshops & panels

AWS ran a hands‑on clinic – “Building Enterprise Gen‑AI Applications” – covering RAG, agentic orchestration and secure deployment. NVIDIA’s workshop unveiled its platform approach to production generative‑AI on Hopper‑class GPUs, complementing its newly announced Service Delivery Partnership with Polynome Group’s legal entity, Intelligent Machines Consultancies. Dubai Police hosted a closed‑door DFA session on predictive policing, while X and AI workshops explored social‑data pipelines on GPU clusters.

The parallel Machines Can Create forum examined AI’s role in luxury, digital art and media, with speakers from HEC Paris, The Sandbox, IBM Research and BBC, culminating in the panel “Pixels and Palettes: The Canvas of Tomorrow.”

Prof. Marc Pollefeys, Director of the Mixed Reality and AI Lab at ETH Zurich and Microsoft, highlighted the role of cutting-edge technology in daily life: “We are at a turning point where technologies like spatial AI and real-time 3D mapping are moving from laboratories into everyday life, transforming cities, workplaces, and how we interact with the digital world. The Machines Can See Summit underscores how collaboration between researchers, industry, and policymakers accelerates this transition, bringing innovative solutions closer to everyone,” he said.

Ethical & security focus

Panels “Good AI: Between Hype and Mediocrity” and “Defending Intelligence: Navigating Adversarial Machine Learning” stressed the need for continuous audits, red‑teaming and transparent supply chains. Dubai Police, TII UAE and IBM urged adoption of ISO‑aligned governance tool‑kits to safeguard public‑sector deployments.

High‑profile awards

On Day Two, H.H. Sheikh Hamdan bin Mohammed bin Rashid Al Maktoum presented trophies for the Global Prompt Engineering Championship, for breakthroughs in multilingual, safety-aligned LLM prompting.

Key takeaways

The summit underscored three strategic imperatives for the decade ahead. Talent aviation – backed by unified tech visas, national GPU clouds and government‑funded sandbox clusters – is emerging as the most effective antidote to AI brain‑drain. Spatial computing is moving from laboratory to street level as sub‑10‑millisecond mapping unlocks safe humanoid robotics and city‑scale augmented‑reality services. Finally, secure generative AI must couple adversarial robustness with transparent, explainable pipelines before the technology can achieve mass‑market adoption in regulated industries.

The post “Machines Can See 2025” wraps in Dubai after two‑day showcase of AI appeared first on AI News.

]]>
https://www.artificialintelligence-news.com/news/machines-can-see-2025-post-event/feed/ 0
Baidu ERNIE X1 and 4.5 Turbo boast high performance at low cost https://www.artificialintelligence-news.com/news/baidu-ernie-x1-and-4-5-turbo-high-performance-low-cost/ https://www.artificialintelligence-news.com/news/baidu-ernie-x1-and-4-5-turbo-high-performance-low-cost/#respond Fri, 25 Apr 2025 12:28:01 +0000 https://www.artificialintelligence-news.com/?p=106047 Baidu has unveiled ERNIE X1 Turbo and 4.5 Turbo, two fast models that boast impressive performance alongside dramatic cost reductions. Developed as enhancements to the existing ERNIE X1 and 4.5 models, both new Turbo versions highlight multimodal processing, robust reasoning skills, and aggressive pricing strategies designed to capture developer interest and marketshare. Baidu ERNIE X1 […]

The post Baidu ERNIE X1 and 4.5 Turbo boast high performance at low cost appeared first on AI News.

]]>
Baidu has unveiled ERNIE X1 Turbo and 4.5 Turbo, two fast models that boast impressive performance alongside dramatic cost reductions.

Developed as enhancements to the existing ERNIE X1 and 4.5 models, both new Turbo versions highlight multimodal processing, robust reasoning skills, and aggressive pricing strategies designed to capture developer interest and marketshare.

Baidu ERNIE X1 Turbo: Deep reasoning meets cost efficiency

Positioned as a deep-thinking reasoning model, ERNIE X1 Turbo tackles complex tasks requiring sophisticated understanding. It enters a competitive field, claiming superior performance in some benchmarks against rivals like DeepSeek R1, V3, and OpenAI o1:

Benchmarks of Baidu ERNIE X1 Turbo compared to rival AI large language models like DeepSeek R1 and OpenAI o1.

Key to X1 Turbo’s enhanced capabilities is an advanced “chain of thought” process, enabling more structured and logical problem-solving.

Furthermore, ERNIE X1 Turbo boasts improved multimodal functions – the ability to understand and process information beyond just text, potentially including images or other data types – alongside refined tool utilisation abilities. This makes it particularly well-suited for nuanced applications such as literary creation, complex logical reasoning challenges, code generation, and intricate instruction following.

ERNIE X1 Turbo achieves this performance while undercutting competitor pricing. Input token costs start at $0.14 per million tokens, with output tokens priced at $0.55 per million. This pricing structure is approximately 25% of DeepSeek R1.

Baidu ERNIE 4.5 Turbo: Multimodal muscle at a fraction of the cost

Sharing the spotlight is ERNIE 4.5 Turbo, which focuses on delivering upgraded multimodal features and significantly faster response times compared to its non-Turbo counterpart. The emphasis here is on providing a versatile, responsive AI experience while slashing operational costs.

The model achieves an 80% price reduction compared to the original ERNIE 4.5 with input set at $0.11 per million tokens and output at $0.44 per million tokens. This represents roughly 40% of the cost of the latest version of DeepSeek V3, again highlighting a deliberate strategy to attract users through cost-effectiveness.

Performance benchmarks further bolster its credentials. In multiple tests evaluating both multimodal and text capabilities, Baidu ERNIE 4.5 Turbo outperforms OpenAI’s highly-regarded GPT-4o model. 

In multimodal capability assessments, ERNIE 4.5 Turbo achieved an average score of 77.68 to surpass GPT-4o’s score of 72.76 in the same tests.

Benchmarks of Baidu ERNIE 4.5 Turbo compared to rival AI large language models like DeepSeek R1 and OpenAI o1.

While benchmark results always require careful interpretation, this suggests ERNIE 4.5 Turbo is a serious contender for tasks involving an integrated understanding of different data types.

Baidu continues to shake up the AI marketplace

The launch of ERNIE X1 Turbo and 4.5 Turbo signifies a growing trend in the AI sector: the democratisation of high-end capabilities. While foundational models continue to push the boundaries of performance, there is increasing demand for models that balance power with accessibility and affordability.

By lowering the price points for models with sophisticated reasoning and multimodal features, the Baidu ERNIE Turbo series could enable a wider range of developers and businesses to integrate advanced AI into their applications.

This competitive pricing puts pressure on established players like OpenAI and Anthropic, as well as emerging competitors like DeepSeek, potentially leading to further price adjustments across the market.

(Image Credit: Alpha Photo under CC BY-NC 2.0 license)

See also: China’s MCP adoption: AI assistants that actually do things

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post Baidu ERNIE X1 and 4.5 Turbo boast high performance at low cost appeared first on AI News.

]]>
https://www.artificialintelligence-news.com/news/baidu-ernie-x1-and-4-5-turbo-high-performance-low-cost/feed/ 0
How AI infrastructure is changing Solana’s price trends https://www.artificialintelligence-news.com/news/how-ai-infrastructure-is-changing-solanas-price-trends/ https://www.artificialintelligence-news.com/news/how-ai-infrastructure-is-changing-solanas-price-trends/#respond Fri, 25 Apr 2025 09:33:59 +0000 https://www.artificialintelligence-news.com/?p=106043 As of April 2025, Solana price trends – hovering near $141 – are attracting renewed attention from investors and developers. While market fluctuations are typical in the cryptocurrency space, Solana’s price movements are tied increasingly to its evolving role as an infrastructure layer for artificial intelligence. Solana’s technical architecture can handle high-speed, low-cost transactions, thus […]

The post How AI infrastructure is changing Solana’s price trends appeared first on AI News.

]]>
As of April 2025, Solana price trends – hovering near $141 – are attracting renewed attention from investors and developers. While market fluctuations are typical in the cryptocurrency space, Solana’s price movements are tied increasingly to its evolving role as an infrastructure layer for artificial intelligence.

Solana’s technical architecture can handle high-speed, low-cost transactions, thus making it an appealing foundation for AI developers who build real-time, decentralised applications. The convergence of blockchain and AI is influencing the network’s utility and market valuation, creating new narratives around the future of scalable, intelligent systems.

Why Solana appeals to the AI industry

One appeal of Solana is its extremely agile performance. The network can theoretically support over 65,000 transactions per second (TPS), with real-world numbers averaging 3,000 to 4,500 TPS. This is possible because of the unique proof-of-history (PoH) mechanism, which timestamps transactions to enhance the validation process.

Considering the consistently low transaction costs – an average of $0.036 per transaction – Solana offers an ecosystem for computation-dominated AI operations. Blockchain technology, at this level, allows large-scale interactions without latency or high costs.

Solana pricing momentum mirrors AI integration

While the price of Solana generally mirrors the prevailing mood of the market, analysts have noted correlation between price movements and AI developments on the network. For example, the launch of AI-focused projects and integrations has caused price increases.

Distributing AI models for Solana

Solana is emerging as a go-to hosting platform for AI-powered decentralised applications. Some of the more prominent projects using the network for AI include:

  • Nosana (NOS) – A decentralised GPU marketplace where users can distribute AI model training.
  • io.net – an AI-centric cost-effective cloud computing service.
  • Grass – Project building an open, incentivised system where an AI agents get paid to write software for large-scale web crawling.

These projects rely on the throughput of Solana to maintain real-time inference, moving large data streams, and executing microtransactions. Unlike many chains, Solana’s architecture accommodates AI applications that need to access the blockchain directly and quickly – without losing on cost.

Economic feasibility and AI-powered microtransactions

Most required micropayments in Solana for data, model updates, or compute payments, need to be executed frequently, via decentralisation. Through its fee structure, where the cost per interaction (transaction) is roughly $0.036, Solana guarantees low interaction costs.

This capability fosters concepts like token-incentivised federated learning, autonomous model marketplace architectures, and on-the-go autonomous services – which each depend on micro-interactions unfeasible on slower, more-pricey blockchains.

Blockchain analytics show a surge in transaction activity in Solana associated with AI tools and services implementations. The network is capable of thousands of transactions per second; an increasing proportion of those include AI-related functions.

The number of active daily addresses on Solana has increased in parallel, due to growing activity from developers working with AI, machine learning infrastructure, predictive analytics and real-time automation systems.

The numbers show the demand for blockchain-associated workloads and development related to artificial intelligence is rising, which may help bolster Solana’s longer-term prospects and shape market sentiment – as evidenced by fluctuations in the price of Solana.

Solana uses AI for network efficiency

Solana’s contributions to artificial intelligence do not stop at decentralised apps. The network also uses AI for its internal processes, with the Solana Foundation developing ML models for validator clustering and network optimisation. Using traffic patterns and predicting possible blockages, algorithms help maintain the lower latency Solana is known for, even at busy times. This increases the network’s resilience for applications like live dashboards powered by AI or data processing systems on the blockchain.

Ecosystem investment AI momentum

Throughout last year, there was a great deal of venture capital investment in Solana-based projects that used AI add-ons. Important, funded projects include:

  • STARDEER formed the “STARDUST Fund” with $10 million earmarked for Solana ecosystem projects that focus on building intelligent backbone solutions.
  • Seek Protocol announced the development of a Solana-based AR + AI platform, reportedly valued at $8.89 million.
  • Pioneer AI Foundry: Began executing a disbursement strategy for investing in Solana-based AI-dedicated, decentralised educational tools.

The funding helps build infrastructure for functions that drive on-chain activity – like AI model training, decentralised inference, data lineage and provenance, and more.

Closing thoughts

Transitions in Solana’s infrastructural, AI-transformable blockchain capabilities are changing the ecosystem’s scope. The architecture provides for services that demand speed, scalability and cost-effectiveness. Its features will help create next-generation, AI-dedicated, decentralised systems; effectively becoming serverless AI.

With increasing adoption of AI models with blockchain implementation, Solana is set to become a foundation where information, logic, and algorithms co-exist. The ongoing development of advanced intelligent infrastructure on the Solana network provides an opportunity to redefine perceptions of changes to Solana’s price.

(Image source: Unsplash)

The post How AI infrastructure is changing Solana’s price trends appeared first on AI News.

]]>
https://www.artificialintelligence-news.com/news/how-ai-infrastructure-is-changing-solanas-price-trends/feed/ 0
Alarming rise in AI-powered scams: Microsoft reveals $4B in thwarted fraud https://www.artificialintelligence-news.com/news/alarming-rise-in-ai-powered-scams-microsoft-reveals-4-billion-in-thwarted-fraud/ https://www.artificialintelligence-news.com/news/alarming-rise-in-ai-powered-scams-microsoft-reveals-4-billion-in-thwarted-fraud/#respond Thu, 24 Apr 2025 19:01:38 +0000 https://www.artificialintelligence-news.com/?p=105488 AI-powered scams are evolving rapidly as cybercriminals use new technologies to target victims, according to Microsoft’s latest Cyber Signals report. Over the past year, the tech giant says it has prevented $4 billion in fraud attempts, blocking approximately 1.6 million bot sign-up attempts every hour – showing the scale of this growing threat. The ninth […]

The post Alarming rise in AI-powered scams: Microsoft reveals $4B in thwarted fraud appeared first on AI News.

]]>
AI-powered scams are evolving rapidly as cybercriminals use new technologies to target victims, according to Microsoft’s latest Cyber Signals report.

Over the past year, the tech giant says it has prevented $4 billion in fraud attempts, blocking approximately 1.6 million bot sign-up attempts every hour – showing the scale of this growing threat.

The ninth edition of Microsoft’s Cyber Signals report, titled “AI-powered deception: Emerging fraud threats and countermeasures,” reveals how artificial intelligence has lowered the technical barriers for cybercriminals, enabling even low-skilled actors to generate sophisticated scams with minimal effort.

What previously took scammers days or weeks to create can now be accomplished in minutes.

The democratisation of fraud capabilities represents a shift in the criminal landscape that affects consumers and businesses worldwide.

The evolution of AI-enhanced cyber scams

Microsoft’s report highlights how AI tools can now scan and scrape the web for company information, helping cybercriminals build detailed profiles of potential targets for highly-convincing social engineering attacks.

Bad actors can lure victims into complex fraud schemes using fake AI-enhanced product reviews and AI-generated storefronts, which come complete with fabricated business histories and customer testimonials.

According to Kelly Bissell, Corporate Vice President of Anti-Fraud and Product Abuse at Microsoft Security, the threat numbers continue to increase. “Cybercrime is a trillion-dollar problem, and it’s been going up every year for the past 30 years,” per the report.

“I think we have an opportunity today to adopt AI faster so we can detect and close the gap of exposure quickly. Now we have AI that can make a difference at scale and help us build security and fraud protections into our products much faster.”

The Microsoft anti-fraud team reports that AI-powered fraud attacks happen globally, with significant activity originating from China and Europe – particularly Germany, due to its status as one of the largest e-commerce markets in the European Union.

The report notes that the larger a digital marketplace is, the more likely a proportional degree of attempted fraud will occur.

E-commerce and employment scams leading

Two particularly concerning areas of AI-enhanced fraud include e-commerce and job recruitment scams.In the ecommerce space, fraudulent websites can now be created in minutes using AI tools with minimal technical knowledge.

Sites often mimic legitimate businesses, using AI-generated product descriptions, images, and customer reviews to fool consumers into believing they’re interacting with genuine merchants.

Adding another layer of deception, AI-powered customer service chatbots can interact convincingly with customers, delay chargebacks by stalling with scripted excuses, and manipulate complaints with AI-generated responses that make scam sites appear professional.

Job seekers are equally at risk. According to the report, generative AI has made it significantly easier for scammers to create fake listings on various employment platforms. Criminals generate fake profiles with stolen credentials, fake job postings with auto-generated descriptions, and AI-powered email campaigns to phish job seekers.

AI-powered interviews and automated emails enhance the credibility of these scams, making them harder to identify. “Fraudsters often ask for personal information, like resumes or even bank account details, under the guise of verifying the applicant’s information,” the report says.

Red flags include unsolicited job offers, requests for payment and communication through informal platforms like text messages or WhatsApp.

Microsoft’s countermeasures to AI fraud

To combat emerging threats, Microsoft says it has implemented a multi-pronged approach across its products and services. Microsoft Defender for Cloud provides threat protection for Azure resources, while Microsoft Edge, like many browsers, features website typo protection and domain impersonation protection. Edge is noted by the Microsoft report as using deep learning technology to help users avoid fraudulent websites.

The company has also enhanced Windows Quick Assist with warning messages to alert users about possible tech support scams before they grant access to someone claiming to be from IT support. Microsoft now blocks an average of 4,415 suspicious Quick Assist connection attempts daily.

Microsoft has also introduced a new fraud prevention policy as part of its Secure Future Initiative (SFI). As of January 2025, Microsoft product teams must perform fraud prevention assessments and implement fraud controls as part of their design process, ensuring products are “fraud-resistant by design.”

As AI-powered scams continue to evolve, consumer awareness remains important. Microsoft advises users to be cautious of urgency tactics, verify website legitimacy before making purchases, and never provide personal or financial information to unverified sources.

For enterprises, implementing multi-factor authentication and deploying deepfake-detection algorithms can help mitigate risk.

See also: Wozniak warns AI will power next-gen scams

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post Alarming rise in AI-powered scams: Microsoft reveals $4B in thwarted fraud appeared first on AI News.

]]>
https://www.artificialintelligence-news.com/news/alarming-rise-in-ai-powered-scams-microsoft-reveals-4-billion-in-thwarted-fraud/feed/ 0
Coalition opposes OpenAI shift from nonprofit roots https://www.artificialintelligence-news.com/news/coalition-opposes-openai-shift-from-nonprofit-roots/ https://www.artificialintelligence-news.com/news/coalition-opposes-openai-shift-from-nonprofit-roots/#respond Thu, 24 Apr 2025 15:02:57 +0000 https://www.artificialintelligence-news.com/?p=106036 A coalition of experts, including former OpenAI employees, has voiced strong opposition to the company’s shift away from its nonprofit roots. In an open letter addressed to the Attorneys General of California and Delaware, the group – which also includes legal experts, corporate governance specialists, AI researchers, and nonprofit representatives – argues that the proposed […]

The post Coalition opposes OpenAI shift from nonprofit roots appeared first on AI News.

]]>
A coalition of experts, including former OpenAI employees, has voiced strong opposition to the company’s shift away from its nonprofit roots.

In an open letter addressed to the Attorneys General of California and Delaware, the group – which also includes legal experts, corporate governance specialists, AI researchers, and nonprofit representatives – argues that the proposed changes fundamentally threaten OpenAI’s original charitable mission.   

OpenAI was founded with a unique structure. Its core purpose, enshrined in its Articles of Incorporation, is “to ensure that artificial general intelligence benefits all of humanity” rather than serving “the private gain of any person.”

The letter’s signatories contend that the planned restructuring – transforming the current for-profit subsidiary (OpenAI-profit) controlled by the original nonprofit entity (OpenAI-nonprofit) into a Delaware public benefit corporation (PBC) – would dismantle crucial governance safeguards.

This shift, the signatories argue, would transfer ultimate control over the development and deployment of potentially transformative Artificial General Intelligence (AGI) from a charity focused on humanity’s benefit to a for-profit enterprise accountable to shareholders.

Original vision of OpenAI: Nonprofit control as a bulwark

OpenAI defines AGI as “highly autonomous systems that outperform humans at most economically valuable work”. While acknowledging AGI’s potential to “elevate humanity,” OpenAI’s leadership has also warned of “serious risk of misuse, drastic accidents, and societal disruption.”

Co-founder Sam Altman and others have even signed statements equating mitigating AGI extinction risks with preventing pandemics and nuclear war.   

The company’s founders – including Altman, Elon Musk, and Greg Brockman – were initially concerned about AGI being developed by purely commercial entities like Google. They established OpenAI as a nonprofit specifically “unconstrained by a need to generate financial return”. As Altman stated in 2017, “The only people we want to be accountable to is humanity as a whole.”

Even when OpenAI introduced a “capped-profit” subsidiary in 2019 to attract necessary investment, it emphasised that the nonprofit parent would retain control and that the mission remained paramount. Key safeguards included:   

  • Nonprofit control: The for-profit subsidiary was explicitly “controlled by OpenAI Nonprofit’s board”.   
  • Capped profits: Investor returns were capped, with excess value flowing back to the nonprofit for humanity’s benefit.   
  • Independent board: A majority of nonprofit board members were required to be independent, holding no financial stake in the subsidiary.   
  • Fiduciary duty: The board’s legal duty was solely to the nonprofit’s mission, not to maximising investor profit.   
  • AGI ownership: AGI technologies were explicitly reserved for the nonprofit to govern.

Altman himself testified to Congress in 2023 that this “unusual structure” “ensures it remains focused on [its] long-term mission.”

A threat to the mission?

The critics argue the move to a PBC structure would jeopardise these safeguards:   

  • Subordination of mission: A PBC board – while able to consider public benefit – would also have duties to shareholders, potentially balancing profit against the mission rather than prioritising the mission above all else.   
  • Loss of enforceable duty: The current structure gives Attorneys General the power to enforce the nonprofit’s duty to the public. Under a PBC, this direct public accountability – enforceable by regulators – would likely vanish, leaving shareholder derivative suits as the primary enforcement mechanism.   
  • Uncapped profits?: Reports suggest the profit cap might be removed, potentially reallocating vast future wealth from the public benefit mission to private shareholders.   
  • Board independence uncertain: Commitments to a majority-independent board overseeing AI development could disappear.   
  • AGI control shifts: Ownership and control of AGI would likely default to the PBC and its investors, not the mission-focused nonprofit. Reports even suggest OpenAI and Microsoft have discussed removing contractual restrictions on Microsoft’s access to future AGI.   
  • Charter commitments at risk: Commitments like the “stop-and-assist” clause (pausing competition to help a safer, aligned AGI project) might not be honoured by a profit-driven entity.  

OpenAI has publicly cited competitive pressures (i.e. attracting investment and talent against rivals with conventional equity structures) as reasons for the change.

However, the letter counters that competitive advantage isn’t the charitable purpose of OpenAI and that its unique nonprofit structure was designed to impose certain competitive costs in favour of safety and public benefit. 

“Obtaining a competitive advantage by abandoning the very governance safeguards designed to ensure OpenAI remains true to its mission is unlikely to, on balance, advance the mission,” the letter states.   

The authors also question why OpenAI abandoning nonprofit control is necessary merely to simplify the capital structure, suggesting the core issue is the subordination of investor interests to the mission. They argue that while the nonprofit board can consider investor interests if it serves the mission, the restructuring appears aimed at allowing these interests to prevail at the expense of the mission.

Many of these arguments have also been pushed by Elon Musk in his legal action against OpenAI. Earlier this month, OpenAI counter-sued Musk for allegedly orchestrating a “relentless” and “malicious” campaign designed to “take down OpenAI” after he left the company years ago and started rival AI firm xAI.

Call for intervention

The signatories of the open letter urge intervention, demanding answers from OpenAI about how the restructuring away from a nonprofit serves its mission and why safeguards previously deemed essential are now obstacles.

Furthemore, the signatories request a halt to the restructuring, preservation of nonprofit control and other safeguards, and measures to ensure the board’s independence and ability to oversee management effectively in line with the charitable purpose.   

“The proposed restructuring would eliminate essential safeguards, effectively handing control of, and profits from, what could be the most powerful technology ever created to a for-profit entity with legal duties to prioritise shareholder returns,” the signatories conclude.

See also: How does AI judge? Anthropic studies the values of Claude

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post Coalition opposes OpenAI shift from nonprofit roots appeared first on AI News.

]]>
https://www.artificialintelligence-news.com/news/coalition-opposes-openai-shift-from-nonprofit-roots/feed/ 0
Reigniting the European digital economy’s €200bn AI ambitions https://www.artificialintelligence-news.com/news/gitex-europe-2025/ https://www.artificialintelligence-news.com/news/gitex-europe-2025/#respond Thu, 24 Apr 2025 09:22:41 +0000 https://www.artificialintelligence-news.com/?p=105491 There is a sense of urgency in Europe to re-imagine the status quo and reshape technology infrastructures. Timed to harness Europe’s innovative push comes GITEX EUROPE x Ai Everything (21-23 May, Messe Berlin). The world’s third largest economy and host nation for GITEX EUROPE x Ai Everything, Germany’s role as the European economic and technology […]

The post Reigniting the European digital economy’s €200bn AI ambitions appeared first on AI News.

]]>
There is a sense of urgency in Europe to re-imagine the status quo and reshape technology infrastructures. Timed to harness Europe’s innovative push comes GITEX EUROPE x Ai Everything (21-23 May, Messe Berlin).

The world’s third largest economy and host nation for GITEX EUROPE x Ai Everything, Germany’s role as the European economic and technology leader is confirmed as its ICT sector is projected to reach €232.8bn in 2025 (Statista).

GITEX EUROPE x Ai Everything is Europe’s largest tech, startup and digital investment event, and is organised by KAOUN International. It’s hosted in partnership with the Berlin Senate Department for Economics, Energy and Public Enterprises, Germany’s Federal Ministry for Economic Affairs and Climate Action, Berlin Partner for Business and Technology, and the European Innovation Council (EIC).

Global tech engages for cross-border and industry partnerships

The first GITEX EUROPE brings together over 1,400 tech enterprises, startups and SMEs, and platinum sponsors AWS and IBM. Also in sponsorship roles are Cisco, Cloudflare, Dell, Fortinet, Lenovo, NTT, Nutanix, Nvidia, Opswat, and SAP.

GITEX EUROPE x Ai Everything will comprise of tech companies from over 100 countries and 34 European states, including tech pavilions from India, Italy, Morocco, Netherlands, Poland, Serbia, South Korea, UK, and the UAE.

Trixie LohMirmand, CEO of KAOUN International, organiser of GITEX worldwide, said: “There is a sense of urgency and unity in Europe to assert its digital sovereignty and leadership as a global innovation force. The region is paving its way as a centre-stage where AI, quantum and deep tech will be debated, developed, and scaled.”

Global leaders address EU’s tech crossroads

Organisers state there will be over 500 speakers, debating a range of issues including AI and quantum, cloud, and data sovereignty.

Already confirmed are Geoffrey Hinton, Physics Nobel Laureate (2024); Kai Wegner, Mayor of Berlin; H.E. Jelena Begović, Serbian Minister of Science, Technological Development and Innovation; António Henriques, CEO, Bison Bank; Jager McConnell, CEO, Crunchbase; Mark Surman, President, Mozilla; and Sandro Gianella, Head of Europe & Middle East Policy & Partnerships, OpenAI.

Europe’s moves in AI, deep tech & quantum

Europe is focusing on cross-sector AI uses, new investments and international partnerships. Ai Everything Europe, the event’s AI showcase and conference, brings together AI architects, startups and investors to explore AI ecosystems.

Topics presented on stage range from EuroStack ambitions to implications of agentic AI, with speakers including Martin Kon, President and COO, Cohere; Daniel Verten, Strategy Partner, Synthesia; and Professor Dr. Antonio Krueger, CEO of German Research Centre for Artificial Intelligence.

On the show-floor, attendees will be able to experience Brazil’s Ubivis’s smart factory technology, powered by IoT and digital twins, and Hexis’s AI-driven nutrition plans that are trusted by 500+ Olympic and elite athletes.

With nearly €7 billion in quantum investment, Europe is pushing for quantum leadership by 2030. GITEX Quantum Expo (GQX) (in partnership with IBM and QuIC) covers quantum research and cross-industry impact with showcases and conferences.

Speakers include Mira Wolf-Bauwens, Responsible Quantum Computing Lead, IBM Research, Switzerland; Joachim Mnich, Director of Research & Computing, CERN, Switzerland; Neil Abroug, Head of the French National Quantum Strategy, INRIA; and Jan Goetz, CEO & Co-Founder, IQM Quantum Computers, Finland.

Cyber Valley: Building a resilient cyber frontline

With cloud breaches doubling in number and AI-driven attacks, threat response and cyber resilience are core focuses at the event. Fortinet, CrowdStrike, Kaspersky, Knowbe4, and Proofpoint will join other cybersecurity companies exhibiting at GITEX Cyber Valley.

They’ll be alongside law enforcement leaders, global CISOs, and policymakers on stage, including Brig. Gen. Dr. Volker Pötzsch, Chief of Division Cyber/IT & AI, Federal Ministry of Defence, Germany; H.E. Dr. Mohamed Al-Kuwaiti, Head of Cybersecurity, UAE Government; Miguel De Bruycker, Managing Director General, Centre for Cybersecurity Belgium; and Ugo Vignolo Lutati, Group CISO, Prada Group.

GITEX Green Impact: For a sustainable future

GITEX Green Impact connects innovators and investors with over 100 startups and investors exploring how green hydrogen, bio-energy, and next-gen energy storage are moving from R&D to deployment.

Key speakers so far confirmed are Gavin Towler, Chief Scientist for Sustainability Technologies & CTO, Honeywell; Julie Kitcher, Chief Sustainability Officer, Airbus; Lisa Reehten, Managing Director, Bosch Climate Solutions; Massimo Falcioni, Chief Competitiveness Officer, Abu Dhabi Investment Office; and Mounir Benaija, CTO – EV & Charging Infrastructure, TotalEnergies.

Convening the largest startup ecosystem among 60+ nations

GITEX EUROPE x Ai Everything hosts North Star Europe, the local version of the world’s largest startup event, Expand North Star.

North Star Europe gathers over 750 startups and 20 global unicorns, among them reMarkable, TransferMate, Solarisbank AG, Bolt, Flix, and Glovo.

The event features a curated collection of earlys and growth-stage startups from Belgium, France, Hungary, Italy, Morocco, Portugal, Netherlands, Switzerland, Serbia, UK, and UAE.

Among the startups, Neurocast.ai (Netherlands) is advancing AI-powered neurotech for Alzheimer’s research; CloudBees (Switzerland) is the delivery unicorn backed by Goldman Sachs, HSBC, and Lightspeed; and Semiqon (Finland), the world’s first CMOS transistor with the ability to perform in cryogenic conditions.

More than 600 investors with $1tn assets under management will be scouting for new opportunities, including Germany’s Earlybird VC, Austria’s SpeedInvest, Switzerland’s B2Venture, Estonia’s Startup Wise Guys, and the US’s SOSV.

GITEX ScaleX launches as a first-of-its-kind growth platform for scale-ups and late-stage companies, in partnership with AWS.

With SMEs making up 99% of European businesses, GITEX SMEDEX connects SMEs with international trade networks and investors, for funding, legal advice, and market access to scale globally.

Backed by EISMEA and ICC Digital Standards Initiative, the event features SME ecosystem leaders advising from the stage, including Milena Stoycheva, Chairperson of Board of Innovation, Ministry of Innovation and Growth, Bulgaria; and Oliver Grün, President, European Digital SME Alliance and BITMi.

GITEX EUROPE is part of the GITEX global network tech and startup events, taking place in Germany, Morocco, Nigeria, Singapore, Thailand, and the UAE.

For more information, please visit: www.gitex-europe.com.

The post Reigniting the European digital economy’s €200bn AI ambitions appeared first on AI News.

]]>
https://www.artificialintelligence-news.com/news/gitex-europe-2025/feed/ 0