Security | Security AI News | AI News https://www.artificialintelligence-news.com/categories/ai-security/ Artificial Intelligence News Wed, 30 Apr 2025 13:35:24 +0000 en-GB hourly 1 https://wordpress.org/?v=6.8.1 https://www.artificialintelligence-news.com/wp-content/uploads/2020/09/cropped-ai-icon-32x32.png Security | Security AI News | AI News https://www.artificialintelligence-news.com/categories/ai-security/ 32 32 Meta beefs up AI security with new Llama tools  https://www.artificialintelligence-news.com/news/meta-beefs-up-ai-security-new-llama-tools/ https://www.artificialintelligence-news.com/news/meta-beefs-up-ai-security-new-llama-tools/#respond Wed, 30 Apr 2025 13:35:22 +0000 https://www.artificialintelligence-news.com/?p=106233 If you’re building with AI, or trying to defend against the less savoury side of the technology, Meta just dropped new Llama security tools. The improved security tools for the Llama AI models arrive alongside fresh resources from Meta designed to help cybersecurity teams harness AI for defence. It’s all part of their push to […]

The post Meta beefs up AI security with new Llama tools  appeared first on AI News.

]]>
If you’re building with AI, or trying to defend against the less savoury side of the technology, Meta just dropped new Llama security tools.

The improved security tools for the Llama AI models arrive alongside fresh resources from Meta designed to help cybersecurity teams harness AI for defence. It’s all part of their push to make developing and using AI a bit safer for everyone involved.

Developers working with the Llama family of models now have some upgraded kit to play with. You can grab these latest Llama Protection tools directly from Meta’s own Llama Protections page, or find them where many developers live: Hugging Face and GitHub.

First up is Llama Guard 4. Think of it as an evolution of Meta’s customisable safety filter for AI. The big news here is that it’s now multimodal so it can understand and apply safety rules not just to text, but to images as well. That’s crucial as AI applications get more visual. This new version is also being baked into Meta’s brand-new Llama API, which is currently in a limited preview.

Then there’s LlamaFirewall. This is a new piece of the puzzle from Meta, designed to act like a security control centre for AI systems. It helps manage different safety models working together and hooks into Meta’s other protection tools. Its job? To spot and block the kind of risks that keep AI developers up at night – things like clever ‘prompt injection’ attacks designed to trick the AI, potentially dodgy code generation, or risky behaviour from AI plug-ins.

Meta has also given its Llama Prompt Guard a tune-up. The main Prompt Guard 2 (86M) model is now better at sniffing out those pesky jailbreak attempts and prompt injections. More interestingly, perhaps, is the introduction of Prompt Guard 2 22M.

Prompt Guard 2 22M is a much smaller, nippier version. Meta reckons it can slash latency and compute costs by up to 75% compared to the bigger model, without sacrificing too much detection power. For anyone needing faster responses or working on tighter budgets, that’s a welcome addition.

But Meta isn’t just focusing on the AI builders; they’re also looking at the cyber defenders on the front lines of digital security. They’ve heard the calls for better AI-powered tools to help in the fight against cyberattacks, and they’re sharing some updates aimed at just that.

The CyberSec Eval 4 benchmark suite has been updated. This open-source toolkit helps organisations figure out how good AI systems actually are at security tasks. This latest version includes two new tools:

  • CyberSOC Eval: Built with the help of cybersecurity experts CrowdStrike, this framework specifically measures how well AI performs in a real Security Operation Centre (SOC) environment. It’s designed to give a clearer picture of AI’s effectiveness in threat detection and response. The benchmark itself is coming soon.
  • AutoPatchBench: This benchmark tests how good Llama and other AIs are at automatically finding and fixing security holes in code before the bad guys can exploit them.

To help get these kinds of tools into the hands of those who need them, Meta is kicking off the Llama Defenders Program. This seems to be about giving partner companies and developers special access to a mix of AI solutions – some open-source, some early-access, some perhaps proprietary – all geared towards different security challenges.

As part of this, Meta is sharing an AI security tool they use internally: the Automated Sensitive Doc Classification Tool. It automatically slaps security labels on documents inside an organisation. Why? To stop sensitive info from walking out the door, or to prevent it from being accidentally fed into an AI system (like in RAG setups) where it could be leaked.

They’re also tackling the problem of fake audio generated by AI, which is increasingly used in scams. The Llama Generated Audio Detector and Llama Audio Watermark Detector are being shared with partners to help them spot AI-generated voices in potential phishing calls or fraud attempts. Companies like ZenDesk, Bell Canada, and AT&T are already lined up to integrate these.

Finally, Meta gave a sneak peek at something potentially huge for user privacy: Private Processing. This is new tech they’re working on for WhatsApp. The idea is to let AI do helpful things like summarise your unread messages or help you draft replies, but without Meta or WhatsApp being able to read the content of those messages.

Meta is being quite open about the security side, even publishing their threat model and inviting security researchers to poke holes in the architecture before it ever goes live. It’s a sign they know they need to get the privacy aspect right.

Overall, it’s a broad set of AI security announcements from Meta. They’re clearly trying to put serious muscle behind securing the AI they build, while also giving the wider tech community better tools to build safely and defend effectively.

See also: Alarming rise in AI-powered scams: Microsoft reveals $4B in thwarted fraud

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post Meta beefs up AI security with new Llama tools  appeared first on AI News.

]]>
https://www.artificialintelligence-news.com/news/meta-beefs-up-ai-security-new-llama-tools/feed/ 0
Alarming rise in AI-powered scams: Microsoft reveals $4B in thwarted fraud https://www.artificialintelligence-news.com/news/alarming-rise-in-ai-powered-scams-microsoft-reveals-4-billion-in-thwarted-fraud/ https://www.artificialintelligence-news.com/news/alarming-rise-in-ai-powered-scams-microsoft-reveals-4-billion-in-thwarted-fraud/#respond Thu, 24 Apr 2025 19:01:38 +0000 https://www.artificialintelligence-news.com/?p=105488 AI-powered scams are evolving rapidly as cybercriminals use new technologies to target victims, according to Microsoft’s latest Cyber Signals report. Over the past year, the tech giant says it has prevented $4 billion in fraud attempts, blocking approximately 1.6 million bot sign-up attempts every hour – showing the scale of this growing threat. The ninth […]

The post Alarming rise in AI-powered scams: Microsoft reveals $4B in thwarted fraud appeared first on AI News.

]]>
AI-powered scams are evolving rapidly as cybercriminals use new technologies to target victims, according to Microsoft’s latest Cyber Signals report.

Over the past year, the tech giant says it has prevented $4 billion in fraud attempts, blocking approximately 1.6 million bot sign-up attempts every hour – showing the scale of this growing threat.

The ninth edition of Microsoft’s Cyber Signals report, titled “AI-powered deception: Emerging fraud threats and countermeasures,” reveals how artificial intelligence has lowered the technical barriers for cybercriminals, enabling even low-skilled actors to generate sophisticated scams with minimal effort.

What previously took scammers days or weeks to create can now be accomplished in minutes.

The democratisation of fraud capabilities represents a shift in the criminal landscape that affects consumers and businesses worldwide.

The evolution of AI-enhanced cyber scams

Microsoft’s report highlights how AI tools can now scan and scrape the web for company information, helping cybercriminals build detailed profiles of potential targets for highly-convincing social engineering attacks.

Bad actors can lure victims into complex fraud schemes using fake AI-enhanced product reviews and AI-generated storefronts, which come complete with fabricated business histories and customer testimonials.

According to Kelly Bissell, Corporate Vice President of Anti-Fraud and Product Abuse at Microsoft Security, the threat numbers continue to increase. “Cybercrime is a trillion-dollar problem, and it’s been going up every year for the past 30 years,” per the report.

“I think we have an opportunity today to adopt AI faster so we can detect and close the gap of exposure quickly. Now we have AI that can make a difference at scale and help us build security and fraud protections into our products much faster.”

The Microsoft anti-fraud team reports that AI-powered fraud attacks happen globally, with significant activity originating from China and Europe – particularly Germany, due to its status as one of the largest e-commerce markets in the European Union.

The report notes that the larger a digital marketplace is, the more likely a proportional degree of attempted fraud will occur.

E-commerce and employment scams leading

Two particularly concerning areas of AI-enhanced fraud include e-commerce and job recruitment scams.In the ecommerce space, fraudulent websites can now be created in minutes using AI tools with minimal technical knowledge.

Sites often mimic legitimate businesses, using AI-generated product descriptions, images, and customer reviews to fool consumers into believing they’re interacting with genuine merchants.

Adding another layer of deception, AI-powered customer service chatbots can interact convincingly with customers, delay chargebacks by stalling with scripted excuses, and manipulate complaints with AI-generated responses that make scam sites appear professional.

Job seekers are equally at risk. According to the report, generative AI has made it significantly easier for scammers to create fake listings on various employment platforms. Criminals generate fake profiles with stolen credentials, fake job postings with auto-generated descriptions, and AI-powered email campaigns to phish job seekers.

AI-powered interviews and automated emails enhance the credibility of these scams, making them harder to identify. “Fraudsters often ask for personal information, like resumes or even bank account details, under the guise of verifying the applicant’s information,” the report says.

Red flags include unsolicited job offers, requests for payment and communication through informal platforms like text messages or WhatsApp.

Microsoft’s countermeasures to AI fraud

To combat emerging threats, Microsoft says it has implemented a multi-pronged approach across its products and services. Microsoft Defender for Cloud provides threat protection for Azure resources, while Microsoft Edge, like many browsers, features website typo protection and domain impersonation protection. Edge is noted by the Microsoft report as using deep learning technology to help users avoid fraudulent websites.

The company has also enhanced Windows Quick Assist with warning messages to alert users about possible tech support scams before they grant access to someone claiming to be from IT support. Microsoft now blocks an average of 4,415 suspicious Quick Assist connection attempts daily.

Microsoft has also introduced a new fraud prevention policy as part of its Secure Future Initiative (SFI). As of January 2025, Microsoft product teams must perform fraud prevention assessments and implement fraud controls as part of their design process, ensuring products are “fraud-resistant by design.”

As AI-powered scams continue to evolve, consumer awareness remains important. Microsoft advises users to be cautious of urgency tactics, verify website legitimacy before making purchases, and never provide personal or financial information to unverified sources.

For enterprises, implementing multi-factor authentication and deploying deepfake-detection algorithms can help mitigate risk.

See also: Wozniak warns AI will power next-gen scams

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post Alarming rise in AI-powered scams: Microsoft reveals $4B in thwarted fraud appeared first on AI News.

]]>
https://www.artificialintelligence-news.com/news/alarming-rise-in-ai-powered-scams-microsoft-reveals-4-billion-in-thwarted-fraud/feed/ 0
Apple AI stresses privacy with synthetic and anonymised data https://www.artificialintelligence-news.com/news/apple-leans-on-synthetic-data-to-upgrade-ai-privately/ https://www.artificialintelligence-news.com/news/apple-leans-on-synthetic-data-to-upgrade-ai-privately/#respond Tue, 15 Apr 2025 08:58:08 +0000 https://www.artificialintelligence-news.com/?p=105319 Apple is taking a new approach to training its AI models – one that avoids collecting or copying user content from iPhones or Macs. According to a recent blog post, the company plans to continue to rely on synthetic data (constructed data that is used to mimic user behaviour) and differential privacy to improve features […]

The post Apple AI stresses privacy with synthetic and anonymised data appeared first on AI News.

]]>
Apple is taking a new approach to training its AI models – one that avoids collecting or copying user content from iPhones or Macs.

According to a recent blog post, the company plans to continue to rely on synthetic data (constructed data that is used to mimic user behaviour) and differential privacy to improve features like email summaries, without gaining access to personal emails or messages.

For users who opt in to Apple’s Device Analytics program, the company’s AI models will compare synthetic email-like messages against a small sample of a real user’s content stored locally on the device. The device then identifies which of the synthetic messages most closely matches its user sample, and sends information about the selected match back to Apple. No actual user data leaves the device, and Apple says it receives only aggregated information.

The technique will allow Apple to improve its models for longer-form text generation tasks without collecting real user content. It’s an extension of the company’s long-standing use of differential privacy, which introduces randomised data into broader datasets to help protect individual identities. Apple has used this method since 2016 to understand use patterns, in line with the company’s safeguarding policies.

Improving Genmoji and other Apple Intelligence features

The company already uses differential privacy to improve features like Genmoji, where it collects general trends about which prompts are most popular without linking any prompt with a specific user or device. In upcoming releases, Apple plans to apply similar methods to other Apple Intelligence features, including Image Playground, Image Wand, Memories Creation, and Writing Tools.

For Genmoji, the company anonymously polls participating devices to determine whether specific prompt fragments have been seen. Each device responds with a noisy signal – some responses reflect actual use, while others are randomised. The approach ensures that only widely-used terms become visible to Apple, and no individual response can be traced back to a user or device, the company says.

Curating synthetic data for better email summaries

While the above method has worked well with respect to short prompts, Apple needed a new approach for more complex tasks like summarising emails. For this, Apple generates thousands of sample messages, and these synthetic messages are converted into numerical representations, or ’embeddings,’ based on language, tone, and topic. Participating user devices then compare the embeddings to locally stored samples. Again, only the selected match is shared, not the content itself.

Apple collects the most frequently-selected synthetic embeddings from participating devices and uses them to refine its training data. Over time, this process allows the system to generate more relevant and realistic synthetic emails, helping Apple to improve its AI outputs for summarisation and text generation without apparent compromise of user privacy.

Available in beta

Apple is rolling out the system in beta versions of iOS 18.5, iPadOS 18.5, and macOS 15.5. According to Bloomberg’s Mark Gurman, Apple is attempting to address challenges with its AI development in this way, problems which have included delayed feature rollouts and the fallout from leadership changes in the Siri team.

Whether its approach will yield more useful AI outputs in practice remains to be seen, but it signals a clear public effort to balance user privacy with model performance.

(Photo by Unsplash)

See also: ChatGPT got another viral moment with ‘AI action figure’ trend

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post Apple AI stresses privacy with synthetic and anonymised data appeared first on AI News.

]]>
https://www.artificialintelligence-news.com/news/apple-leans-on-synthetic-data-to-upgrade-ai-privately/feed/ 0
Spot AI introduces the world’s first universal AI agent builder for security cameras https://www.artificialintelligence-news.com/news/spot-ai-introduces-the-worlds-first-universal-ai-agent-builder-for-security-cameras/ https://www.artificialintelligence-news.com/news/spot-ai-introduces-the-worlds-first-universal-ai-agent-builder-for-security-cameras/#respond Thu, 10 Apr 2025 03:31:47 +0000 https://www.artificialintelligence-news.com/?p=105242 Spot AI has introduced Iris, which the company describes as the world’s first universal video AI agent builder for enterprise camera systems. The tool allows businesses to create customised AI agents through a conversational interface, making it easier to monitor and act on video data from physical settings without the need for technical expertise. Designed […]

The post Spot AI introduces the world’s first universal AI agent builder for security cameras appeared first on AI News.

]]>
Spot AI has introduced Iris, which the company describes as the world’s first universal video AI agent builder for enterprise camera systems.

The tool allows businesses to create customised AI agents through a conversational interface, making it easier to monitor and act on video data from physical settings without the need for technical expertise.

Designed for industries like manufacturing, logistics, retail, construction, and healthcare, Iris builds on Spot AI’s earlier launch of out-of-the-box Video AI Agents for safety, security, and operations. While those prebuilt agents focus on common use cases, Iris gives organisations the flexibility to train agents for more specific, business-critical scenarios.

According to Spot AI, users can build video agents in a matter of minutes. The system allows training through reinforcement—using examples of what the AI should and shouldn’t detect—and can be configured to trigger real-world responses like shutting down equipment, locking doors, or generating alerts.

CEO and Co-Founder Rish Gupta said the tool dramatically shortens the time required to create specialised video detection systems.

“What used to take months of development now happens in minutes,” Gupta explained. Before Iris, creating specialised video detection required dedicated AI/ML teams with advanced degrees, thousands of annotated images, and 8 weeks of complex development,” he explained. “Iris puts that same power in the hands of any business leader through simple conversation with 8 minutes and 20 training images.”

Examples from real-world settings

Spot AI highlighted a variety of industry-specific use cases that Iris could support:

  • Manufacturing: Detecting product backups or fluid leaks, with automatic responses based on severity.
  • Warehousing: Spotting unsafe stacking of boxes or pallets to prevent accidents.
  • Retail: Monitoring shelf stock levels and generating alerts for restocking.
  • Healthcare: Distinguishing between staff and patients wearing similar uniforms to optimise traffic flow and safety.
  • Security: Identifying tools like bolt cutters in parking areas to address evolving security threats.
  • Safety compliance: Verifying whether workers are wearing required safety gear on-site.

Video AI agents continuously monitor critical areas and help teams respond quickly to safety hazards, operational inefficiencies, and security issues. With Iris, those agents can be developed and modified through natural language interaction, reducing the need for engineering support and making video insights more accessible across departments.

Looking ahead

Iris is part of Spot AI’s broader effort to make video data more actionable in physical environments. The company plans to discuss the tool and its capabilities at Google Cloud Next, where Rish Gupta is scheduled to speak during a media roundtable on April 9.

(Image by Spot AI)

See also: ChatGPT hits record usage after viral Ghibli feature—Here are four risks to know first

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post Spot AI introduces the world’s first universal AI agent builder for security cameras appeared first on AI News.

]]>
https://www.artificialintelligence-news.com/news/spot-ai-introduces-the-worlds-first-universal-ai-agent-builder-for-security-cameras/feed/ 0
Is America falling behind in the AI race? https://www.artificialintelligence-news.com/news/is-america-falling-behind-in-the-ai-race/ https://www.artificialintelligence-news.com/news/is-america-falling-behind-in-the-ai-race/#respond Mon, 24 Mar 2025 09:35:32 +0000 https://www.artificialintelligence-news.com/?p=104963 Several major US artificial intelligence companies have expressed fear around an erosion of America’s edge in AI development. In recent submissions to the US government, the companies warned that Chinese models, such as DeepSeek R1, are becoming more sophisticated and competitive. The submissions, filed in March 2025 in response to a request for input on […]

The post Is America falling behind in the AI race? appeared first on AI News.

]]>
Several major US artificial intelligence companies have expressed fear around an erosion of America’s edge in AI development.

In recent submissions to the US government, the companies warned that Chinese models, such as DeepSeek R1, are becoming more sophisticated and competitive. The submissions, filed in March 2025 in response to a request for input on an AI Action Plan, highlight the growing challenge from China in technological capability and price.

China’s growing AI presence

Chinese state-supported AI model DeepSeek R1 has piqued the interest of US developers. According to OpenAI, DeepSeek demonstrates that the technological gap between the US and China is narrowing. The company described DeepSeek as “state-subsidised, state-controlled, and freely available,” raises concerns about the model’s ability to influence global AI development.

OpenAI compared DeepSeek to Chinese telecommunications company Huawei, warning that Chinese regulations could allow the government to compel DeepSeek to compromise sensitive US systems or infrastructure. Concerns about data privacy were also raised, with OpenAI pointing out that Chinese rules could force DeepSeek to disclose user data to the government, and enhance China’s ability to develop more advanced AI systems.

The competition from China also includes Ernie X1 and Ernie 4.5, released by Baidu, which are designed to compete with Western systems.

According to Baidu, Ernie X1 “delivers performance on par with DeepSeek R1 at only half the price.” Meanwhile, Ernie 4.5 is priced at just 1% of OpenAI’s GPT-4.5 while outperforming it in multiple benchmarks.

DeepSeek’s aggressive pricing strategy is also raising concerns with the US companies. According to Bernstein Research, DeepSeek’s V3 and R1 models are priced “anywhere from 20-40x cheaper” than equivalent models from OpenAI. The pricing pressure could force US developers to adjust their business models to remain competitive.

Baidu’s strategy of open-sourcing its models is also gaining traction. “One thing we learned from DeepSeek is that open-sourcing the best models can greatly help adoption,” Baidu CEO Robin Li said in February. Baidu plans to open-source the Ernie 4.5 series starting June 30, which could accelerate adoption and further increase competitive pressure on US firms.

Cost aside, early user feedback on Baidu’s models has been positive. “[I’ve] been playing around with it for hours, impressive performance,” Alvin Foo, a venture partner at Zero2Launch, said in a post on social media, suggesting China’s AI models are becoming more affordable and effective.

US AI security and economic risks

The submissions also highlight what the US companies perceive as risks to security and the economy.

OpenAI warned that Chinese regulations could allow the government to compel DeepSeek to manipulate its models to compromise infrastructure or sensitive applications, creating vulnerabilities in important systems.

Anthropic’s concerns centred on biosecurity. It disclosed that its own Claude 3.7 Sonnet model demonstrated capabilities in biological weapon development, highlighting the dual-use nature of AI systems.

Anthropic also raised issues with US export controls on AI chips. While Nvidia’s H20 chips meet US export restrictions, they nonetheless perform well in text generation – a important feature for reinforcement learning. Anthropic called on the government to tighten controls to prevent China from gaining a technological edge using the chips.

Google took a more cautious approach, acknowledging security risks yet warned against over-regulation. The company argues that strict AI export rules could harm US competitiveness by limiting business opportunities for domestic cloud providers. Google recommended targeted export controls to protect national security but without disruption to its business operations.

Maintaining US AI competitiveness

All US three companies emphasised the need for better government oversight and infrastructure investment to maintain US AI leadership.

Anthropic warned that by 2027, training a single advanced AI model could require up to five gigawatts of power – enough to power a small city. The company proposed a national target to build 50 additional gigawatts of AI-dedicated power capacity by 2027 and to streamline regulations around power transmission infrastructure.

OpenAI positioned the competition between US and Chinese AI as a contest between democratic and authoritarian AI models. The company argued that promoting a free-market approach would drive better outcomes and maintain America’s technological edge.

Google focused on urging practical measures, including increased federal funding for AI research, improved access to government contracts, and streamlined export controls. The company also recommended more flexible procurement rules to accelerate AI adoption by federal agencies.

Regulatory strategies for US AI

The US companies called for a unified federal approach to AI regulation.

OpenAI proposed a regulatory framework managed by the Department of Commerce, warning that fragmented state-level regulations could drive AI development overseas. The company supported a tiered export control framework, allowing broader access to US-developed AI in democratic countries while restricting it in authoritarian states.

Anthropic called for stricter export controls on AI hardware and training data, warning that even minor improvements in model performance could give China a strategic advantage.

Google focused on copyright and intellectual property rights, stressing that its interpretation of ‘fair use’ is important for AI development. The company warned that overly restrictive copyright rules could disadvantage US AI firms compared to their Chinese competitors.

All three companies stressed the need for faster government adoption of AI. OpenAI recommended removing some existing testing and procurement barriers, while Anthropic supported streamlined procurement processes. Google emphasised the need for improved interoperability in government cloud infrastructure.

See also: The best AI prompt generator: Create perfect AI prompts

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post Is America falling behind in the AI race? appeared first on AI News.

]]>
https://www.artificialintelligence-news.com/news/is-america-falling-behind-in-the-ai-race/feed/ 0
DeepSeek is a reminder to approach the AI unknown with caution https://www.artificialintelligence-news.com/news/deepseek-is-a-reminder-to-approach-the-ai-unknown-with-caution/ https://www.artificialintelligence-news.com/news/deepseek-is-a-reminder-to-approach-the-ai-unknown-with-caution/#respond Mon, 17 Mar 2025 06:33:00 +0000 https://www.artificialintelligence-news.com/?p=104725 There has been a lot of excitement and many headlines generated by the recent launch of DeepSeek. And, while the technology behind this latest iteration of Generative AI is undoubtedly impressive, in many ways its arrival encapsulates the state of AI today. That is to say, it’s interesting, promising and maybe a little overhyped. I […]

The post DeepSeek is a reminder to approach the AI unknown with caution appeared first on AI News.

]]>
There has been a lot of excitement and many headlines generated by the recent launch of DeepSeek. And, while the technology behind this latest iteration of Generative AI is undoubtedly impressive, in many ways its arrival encapsulates the state of AI today. That is to say, it’s interesting, promising and maybe a little overhyped.

I wonder whether that may be partly a generational thing. The baby boomer generation was the first to be widely employed in IT and that cohort learned the lessons of business the hard way.  Projects had to be cost-justified because technology was expensive and needed to be attached to a robust ROI case. Projects were rolled out slowly because they were complex and had to be aligned to a specific business need, endorsed by the right stakeholders. ‘Project creep’ was feared and the relationship between IT and ‘the business’ was often fraught and complex, characterised by mutual suspicion.

Today, the situation is somewhat different. The IT industry is enormous, the Fortune 50 is replete with major tech brands and other sectors marvel at the profit margins of the software sector. That may all be very well for Silicon Valley and the venture capitalists of Sand Hill Road desperate to find The Next Big Thing. But back in the real world of corporate IT, matters should be seen with more caution, an appropriate level of pragmatism and even a raised eyebrow or two.

Which brings us back to AI. AI is far from new and has its roots all the way back in the middle of the previous century. So far, despite all the excitement, it has played only a moderate role in the business world. The success of tools like Chat-GPT has catapulted it to mainstream attention but it is still beset by familiar issues. It is costly to deploy in earnest, it requires (at least until DeepSeek) enormous compute power to develop and it delivers responses that are often questionable. There are also serious questions to be asked about legal liability and copyright.

A balancing act

We need to strike a happy balance between the boosterism and experimentation inherent in AI today and a healthy sense of pragmatism. We should begin with the business case and ask how AI helps us. What is our mission? Where are our strategic opportunities and risks? OK, now how can AI help us? Today, there is too much “AI is great, let’s see what we can do with it”.

Today, I see AI as a massive opportunity but use cases need to be worked out. AI is great at massive computation tasks that human beings are bad at. It can study patterns and detect trends faster than our feeble human brains can. It doesn’t get out of the bed on the wrong side in the morning, tire easily or require two weeks holiday in the Mediterranean each year. It is surprisingly excellent at a limited number of creative tasks such as making images, music, poems and videos. But it is bad at seeing the big picture. It lacks the human sense of caution that keeps us from danger, and it has no experience of the real world of work that is composed of an enormous range of variables, not the least of which is human mood and perception.

AI today is great at the edge: in powering bots that answer predictable questions or agents that help us achieve rote tasks faster than would otherwise be the case. Robotic process automation has been a useful aid and has changed the dynamic of how the human being interacts with computers: we can now hand off dull jobs like processing credit card applications or expense claims and focus on being creative thinkers.

There are grey areas too. Conversational AI is a work in progress, but we can expect rapid improvements based on iterative continuous learning by our binary friends. Soon we may be impressed by AI’s ability to guess our next steps and to suggest smarter ways to accomplish our work. Similarly, there is scope for AI to learn more about our vertical businesses and to understand trends that humans may miss when we fail to see the forest for the trees.

But we are some way off robot CEOs, and we need to ensure that AI ‘decisions’ are tempered by human bosses that have common sense, the ability to check, test and revert. The future is one where AI and humanity work in concert but for now we are wise to deploy with care and with sensible budgets and the appropriate level of commitment.

We need to watch carefully for the next DeepSeek hit, query it and always begin with old-fashioned questions as to applicability, costs and risk. I note that DeepSeek’s website bears the tagline “Into the Unknown”. That’s about right: we need to maintain a spirit of adventure and optimism but avoid getting lost in a new technological wilderness.

Photo by Solen Feyissa on Unsplash

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation ConferenceBlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post DeepSeek is a reminder to approach the AI unknown with caution appeared first on AI News.

]]>
https://www.artificialintelligence-news.com/news/deepseek-is-a-reminder-to-approach-the-ai-unknown-with-caution/feed/ 0
Best data security platforms of 2025 https://www.artificialintelligence-news.com/news/best-data-security-platforms-of-2025/ https://www.artificialintelligence-news.com/news/best-data-security-platforms-of-2025/#respond Wed, 12 Mar 2025 08:44:49 +0000 https://www.artificialintelligence-news.com/?p=104737 With the rapid growth in the generation, storage, and sharing of data, ensuring its security has become both a necessity and a formidable challenge. Data breaches, cyberattacks, and insider threats are constant risks that require sophisticated solutions. This is where Data Security Platforms come into play, providing organisations with centralised tools and strategies to protect […]

The post Best data security platforms of 2025 appeared first on AI News.

]]>
With the rapid growth in the generation, storage, and sharing of data, ensuring its security has become both a necessity and a formidable challenge. Data breaches, cyberattacks, and insider threats are constant risks that require sophisticated solutions. This is where Data Security Platforms come into play, providing organisations with centralised tools and strategies to protect sensitive information and maintain compliance.

Key components of data security platforms

Effective DSPs are built on several core components that work together to protect data from unauthorised access, misuse, and theft. The components include:

1. Data discovery and classification

Before data can be secured, it needs to be classified and understood. DSPs typically include tools that automatically discover and categorize data based on its sensitivity and use. For example:

  • Personal identifiable information (PII): Names, addresses, social security numbers, etc.
  • Financial data: Credit card details, transaction records.
  • Intellectual property (IP): Trade secrets, proprietary designs.
  • Regulated data: Information governed by laws like GDPR, HIPAA, or CCPA.

By identifying data types and categorizing them by sensitivity level, organisations can prioritise their security efforts.

2. Data encryption

Encryption transforms readable data into an unreadable format, ensuring that even if unauthorised users access the data, they cannot interpret it without the decryption key. Most DSPs support various encryption methods, including:

  • At-rest encryption: Securing data stored on drives, databases, or other storage systems.
  • In-transit encryption: Protecting data as it moves between devices, networks, or applications.

Modern DSPs often deploy advanced encryption standards (AES) or bring-your-own-key (BYOK) solutions, ensuring data security even when using third-party cloud storage.

3. Access control and identity management

Managing who has access to data is a important aspect of data security. DSPs enforce robust role-based access control (RBAC), ensuring only authorised users and systems can access sensitive information. With identity and access management (IAM) integration, DSPs can enhance security by combining authentication methods like:

  • Passwords.
  • Biometrics (e.g. fingerprint or facial recognition).
  • Multi-factor authentication (MFA).
  • Behaviour-based authentication (monitoring user actions for anomalies).

4. Data loss prevention (DLP)

Data loss prevention tools in DSPs help prevent unauthorised sharing or exfiltration of sensitive data. They monitor and control data flows, blocking suspicious activity like:

  • Sending confidential information over email.
  • Transferring sensitive data to unauthorised external devices.
  • Uploading important files to unapproved cloud services.

By enforcing data-handling policies, DSPs help organisations maintain control over their sensitive information.

5. Threat detection and response

DSPs employ threat detection systems powered by machine learning, artificial intelligence (AI), and behaviour analytics to identify unauthorised or malicious activity. Common features include:

  • Anomaly detection: Identifies unusual behaviour, like accessing files outside normal business hours.
  • Insider threat detection: Monitors employees or contractors who might misuse their access to internal data.
  • Real-time alerts: Provide immediate notifications when a potential threat is detected.

Some platforms also include automated response mechanisms to isolate affected data or deactivate compromised user accounts.

6. Compliance audits and reporting

Many industries are subject to strict data protection regulations, like GDPR, HIPAA, CCPA, or PCI DSS. DSPs help organisations comply with these laws by:

  • Continuously monitoring data handling practices.
  • Generating detailed audit trails.
  • Providing pre-configured compliance templates and reporting tools.

The features simplify regulatory audits and reduce the risk of non-compliance penalties.

Best data security platforms of 2025

Whether you’re a small business or a large enterprise, these tools will help you manage risks, secure databases, and protect sensitive information.

1. Velotix

Velotix is an AI-driven data security platform focused on policy automation and intelligent data access control. It simplifies compliance with stringent data regulations like GDPR, HIPAA, and CCPA, and helps organisations strike the perfect balance between accessibility and security. Key Features:

  • AI-powered access governance: Velotix uses machine learning to ensure users only access data they need to see, based on dynamic access policies.
  • Seamless integration: It integrates smoothly with existing infrastructures across cloud and on-premises environments.
  • Compliance automation: Simplifies meeting legal and regulatory requirements by automating compliance processes.
  • Scalability: Ideal for enterprises with complex data ecosystems, supporting hundreds of terabytes of sensitive data.

Velotix stands out for its ability to reduce the complexity of data governance, making it a must-have in today’s security-first corporate world.

2. NordLayer

NordLayer, from the creators of NordVPN, offers a secure network access solution tailored for businesses. While primarily a network security tool, it doubles as a robust data security platform by ensuring end-to-end encryption for your data in transit.

Key features:

  • Zero trust security: Implements a zero trust approach, meaning users and devices must be verified every time data access is requested.
  • AES-256 encryption Standards: Protects data flows with military-grade encryption.
  • Cloud versatility: Supports hybrid and multi-cloud environments for maximum flexibility.
  • Rapid deployment: Easy to implement even for smaller teams, requiring minimal IT involvement.

NordLayer ensures secure, encrypted communications between your team and the cloud, offering peace of mind when managing sensitive data.

3. HashiCorp Vault

HashiCorp Vault is a leader in secrets management, encryption as a service, and identity-based access. Designed for developers, it simplifies access control without placing sensitive data at risk, making it important for modern application development.

Key features:

  • Secrets management: Protect sensitive credentials like API keys, tokens, and passwords.
  • Dynamic secrets: Automatically generate temporary, time-limited credentials for improved security.
  • Encryption as a service: Offers flexible tools for encrypting any data across multiple environments.
  • Audit logging: Monitor data access attempts for greater accountability and compliance.

With a strong focus on application-level security, HashiCorp Vault is ideal for organisations seeking granular control over sensitive operational data.

4. Imperva Database Risk & Compliance

Imperva is a pioneer in database security. Its Database Risk & Compliance solution combines analytics, automation, and real-time monitoring to protect sensitive data from breaches and insider threats.

Key features:

  • Database activity monitoring (DAM): Tracks database activity in real time to identify unusual patterns.
  • Vulnerability assessment: Scans databases for security weaknesses and provides actionable remediation steps.
  • Cloud and hybrid deployment: Supports flexible environments, ranging from on-premises deployments to modern cloud setups.
  • Audit preparation: Simplifies audit readiness with detailed reporting tools and predefined templates.

Imperva’s tools are trusted by enterprises to secure their most confidential databases, ensuring compliance and top-notch protection.

5. ESET

ESET, a well-known name in cybersecurity, offers an enterprise-grade security solution that includes powerful data encryption tools. Famous for its malware protection, ESET combines endpoint security with encryption to safeguard sensitive information.

Key features:

  • Endpoint encryption: Ensures data remains protected even if devices are lost or stolen.
  • Multi-platform support: Works across Windows, Mac, and Linux systems.
  • Proactive threat detection: Combines AI and machine learning to detect potential threats before they strike.
  • Ease of use: User-friendly dashboards enable intuitive management of security policies.

ESET provides an all-in-one solution for companies needing endpoint protection, encryption, and proactive threat management.

6. SQL Secure

Aimed at database administrators, SQL Secure delivers specialised tools to safeguard SQL Server environments. It allows for detailed role-based analysis, helping organisations improve their database security posture and prevent data leaks.

Key features:

  • Role analysis: Identifies and mitigates excessive or unauthorised permission assignments.
  • Dynamic data masking: Protects sensitive data by obscuring it in real-time in applications and queries.
  • Customisable alerts: Notify teams of improper database access or policy violations immediately.
  • Regulatory compliance: Predefined policies make it easy to align with GDPR, HIPAA, PCI DSS, and other regulations.

SQL Secure is a tailored solution for businesses dependent on SQL databases, providing immediate insights and action plans for tighter security.

7. Acra

Acra is a modern, developer-friendly cryptographic tool engineered for data encryption and secure data lifecycle management. It brings cryptography closer to applications, ensuring deep-rooted data protection at every level.

Key features:

  • Application-level encryption: Empowers developers to integrate customised encryption policies directly into their apps.
  • Intrusion detection: Monitors for data leaks with a robust intrusion detection mechanism.
  • End-to-end data security: Protect data at rest, in transit, and in use, making it more versatile than traditional encryption tools.
  • Open source availability: Trusted by developers thanks to its open-source model, offering transparency and flexibility.

Acra is particularly popular with startups and tech-savvy enterprises needing a lightweight, developer-first approach to securing application data.

8. BigID

BigID focuses on privacy, data discovery, and compliance by using AI to identify sensitive data across structured and unstructured environments. Known for its data intelligence capabilities, BigID is one of the most comprehensive platforms for analysing and protecting enterprise data.

Key Features:

  • Data discovery: Automatically classify sensitive data like PII (Personally Identifiable Information) and PHI (Protected Health Information).
  • Privacy-by-design: Built to streamline compliance with global privacy laws like GDPR, CCPA, and more.
  • Risk management: Assess data risks and prioritise actions based on importance.
  • Integrations: Easily integrates with other security platforms and cloud providers for a unified approach.

BigID excels at uncovering hidden risks and ensuring compliance, making it an essential tool for data-driven enterprises.

9. DataSunrise Database Security

DataSunrise specialises in database firewall protection and intrusion detection for a variety of databases, including SQL-based platforms, NoSQL setups, and cloud-hosted solutions. It focuses on safeguarding sensitive data while providing robust real-time monitoring.

Key features:

  • Database firewall: Blocks unauthorised access attempts with role-specific policies.
  • Sensitive data discovery: Identifies risky data in your database for preventative action.
  • Audit reporting: Generate detailed investigative reports about database activity.
  • Cross-platform compatibility: Works with MySQL, PostgreSQL, Oracle, Amazon Aurora, Snowflake, and more.

DataSunrise is highly configurable and scalable, making it a solid choice for organisations running diverse database environments.

10. Covax Polymer

Covax Polymer is an innovative data security platform dedicated to governing sensitive data use in cloud-based collaboration tools like Slack, Microsoft Teams, and Google Workspace. It’s perfect for businesses that rely on SaaS applications for productivity.

Key features:

  • Real-time governance: Monitors and protects data transfers occurring across cloud collaboration tools.
  • Context-aware decisions: Evaluates interactions to identify potential risks, ensuring real-time security responses.
  • Data loss prevention (DLP): Prevents sensitive information from being shared outside approved networks.
  • Comprehensive reporting: Tracks and analyses data sharing trends, offering actionable insights for compliance.

Covax Polymer addresses the growing need for securing communications and shared data in collaborative workspaces.

(Image source: Unsplash)

The post Best data security platforms of 2025 appeared first on AI News.

]]>
https://www.artificialintelligence-news.com/news/best-data-security-platforms-of-2025/feed/ 0
Endor Labs: AI transparency vs ‘open-washing’ https://www.artificialintelligence-news.com/news/endor-labs-ai-transparency-vs-open-washing/ https://www.artificialintelligence-news.com/news/endor-labs-ai-transparency-vs-open-washing/#respond Mon, 24 Feb 2025 18:15:45 +0000 https://www.artificialintelligence-news.com/?p=104605 As the AI industry focuses on transparency and security, debates around the true meaning of “openness” are intensifying. Experts from open-source security firm Endor Labs weighed in on these pressing topics. Andrew Stiefel, Senior Product Marketing Manager at Endor Labs, emphasised the importance of applying lessons learned from software security to AI systems. “The US […]

The post Endor Labs: AI transparency vs ‘open-washing’ appeared first on AI News.

]]>
As the AI industry focuses on transparency and security, debates around the true meaning of “openness” are intensifying. Experts from open-source security firm Endor Labs weighed in on these pressing topics.

Andrew Stiefel, Senior Product Marketing Manager at Endor Labs, emphasised the importance of applying lessons learned from software security to AI systems.

“The US government’s 2021 Executive Order on Improving America’s Cybersecurity includes a provision requiring organisations to produce a software bill of materials (SBOM) for each product sold to federal government agencies.”

An SBOM is essentially an inventory detailing the open-source components within a product, helping detect vulnerabilities. Stiefel argued that “applying these same principles to AI systems is the logical next step.”  

“Providing better transparency for citizens and government employees not only improves security,” he explained, “but also gives visibility into a model’s datasets, training, weights, and other components.”

What does it mean for an AI model to be “open”?  

Julien Sobrier, Senior Product Manager at Endor Labs, added crucial context to the ongoing discussion about AI transparency and “openness.” Sobrier broke down the complexity inherent in categorising AI systems as truly open.

“An AI model is made of many components: the training set, the weights, and programs to train and test the model, etc. It is important to make the whole chain available as open source to call the model ‘open’. It is a broad definition for now.”  

Sobrier noted the lack of consistency across major players, which has led to confusion about the term.

“Among the main players, the concerns about the definition of ‘open’ started with OpenAI, and Meta is in the news now for their LLAMA model even though that’s ‘more open’. We need a common understanding of what an open model means. We want to watch out for any ‘open-washing,’ as we saw it with free vs open-source software.”  

One potential pitfall, Sobrier highlighted, is the increasingly common practice of “open-washing,” where organisations claim transparency while imposing restrictions.

“With cloud providers offering a paid version of open-source projects (such as databases) without contributing back, we’ve seen a shift in many open-source projects: The source code is still open, but they added many commercial restrictions.”  

“Meta and other ‘open’ LLM providers might go this route to keep their competitive advantage: more openness about the models, but preventing competitors from using them,” Sobrier warned.

DeepSeek aims to increase AI transparency

DeepSeek, one of the rising — albeit controversial — players in the AI industry, has taken steps to address some of these concerns by making portions of its models and code open-source. The move has been praised for advancing transparency while providing security insights.  

“DeepSeek has already released the models and their weights as open-source,” said Andrew Stiefel. “This next move will provide greater transparency into their hosted services, and will give visibility into how they fine-tune and run these models in production.”

Such transparency has significant benefits, noted Stiefel. “This will make it easier for the community to audit their systems for security risks and also for individuals and organisations to run their own versions of DeepSeek in production.”  

Beyond security, DeepSeek also offers a roadmap on how to manage AI infrastructure at scale.

“From a transparency side, we’ll see how DeepSeek is running their hosted services. This will help address security concerns that emerged after it was discovered they left some of their Clickhouse databases unsecured.”

Stiefel highlighted that DeepSeek’s practices with tools like Docker, Kubernetes (K8s), and other infrastructure-as-code (IaC) configurations could empower startups and hobbyists to build similar hosted instances.  

Open-source AI is hot right now

DeepSeek’s transparency initiatives align with the broader trend toward open-source AI. A report by IDC reveals that 60% of organisations are opting for open-source AI models over commercial alternatives for their generative AI (GenAI) projects.  

Endor Labs research further indicates that organisations use, on average, between seven and twenty-one open-source models per application. The reasoning is clear: leveraging the best model for specific tasks and controlling API costs.

“As of February 7th, Endor Labs found that more than 3,500 additional models have been trained or distilled from the original DeepSeek R1 model,” said Stiefel. “This shows both the energy in the open-source AI model community, and why security teams need to understand both a model’s lineage and its potential risks.”  

For Sobrier, the growing adoption of open-source AI models reinforces the need to evaluate their dependencies.

“We need to look at AI models as major dependencies that our software depends on. Companies need to ensure they are legally allowed to use these models but also that they are safe to use in terms of operational risks and supply chain risks, just like open-source libraries.”

He emphasised that any risks can extend to training data: “They need to be confident that the datasets used for training the LLM were not poisoned or had sensitive private information.”  

Building a systematic approach to AI model risk  

As open-source AI adoption accelerates, managing risk becomes ever more critical. Stiefel outlined a systematic approach centred around three key steps:  

  1. Discovery: Detect the AI models your organisation currently uses.  
  2. Evaluation: Review these models for potential risks, including security and operational concerns.  
  3. Response: Set and enforce guardrails to ensure safe and secure model adoption.  

“The key is finding the right balance between enabling innovation and managing risk,” Stiefel said. “We need to give software engineering teams latitude to experiment but must do so with full visibility. The security team needs line-of-sight and the insight to act.”  

Sobrier further argued that the community must develop best practices for safely building and adopting AI models. A shared methodology is needed to evaluate AI models across parameters such as security, quality, operational risks, and openness.

Beyond transparency: Measures for a responsible AI future  

To ensure the responsible growth of AI, the industry must adopt controls that operate across several vectors:  

  • SaaS models: Safeguarding employee use of hosted models.
  • API integrations: Developers embedding third-party APIs like DeepSeek into applications, which, through tools like OpenAI integrations, can switch deployment with just two lines of code.
  • Open-source models: Developers leveraging community-built models or creating their own models from existing foundations maintained by companies like DeepSeek.

Sobrier warned of complacency in the face of rapid AI progress. “The community needs to build best practices to develop safe and open AI models,” he advised, “and a methodology to rate them along security, quality, operational risks, and openness.”  

As Stiefel succinctly summarised: “Think about security across multiple vectors and implement the appropriate controls for each.”

See also: AI in 2025: Purpose-driven models, human integration, and more

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post Endor Labs: AI transparency vs ‘open-washing’ appeared first on AI News.

]]>
https://www.artificialintelligence-news.com/news/endor-labs-ai-transparency-vs-open-washing/feed/ 0
AI in 2025: Purpose-driven models, human integration, and more https://www.artificialintelligence-news.com/news/ai-in-2025-purpose-driven-models-human-integration-and-more/ https://www.artificialintelligence-news.com/news/ai-in-2025-purpose-driven-models-human-integration-and-more/#respond Fri, 14 Feb 2025 17:16:32 +0000 https://www.artificialintelligence-news.com/?p=104468 As AI becomes increasingly embedded in our daily lives, industry leaders and experts are forecasting a transformative 2025. From groundbreaking developments to existential challenges, AI’s evolution will continue to shape industries, change workflows, and spark deeper conversations about its implications. For this article, AI News caught up with some of the world’s leading minds to […]

The post AI in 2025: Purpose-driven models, human integration, and more appeared first on AI News.

]]>
As AI becomes increasingly embedded in our daily lives, industry leaders and experts are forecasting a transformative 2025.

From groundbreaking developments to existential challenges, AI’s evolution will continue to shape industries, change workflows, and spark deeper conversations about its implications.

For this article, AI News caught up with some of the world’s leading minds to see what they envision for the year ahead.

Smaller, purpose-driven models

Grant Shipley, Senior Director of AI at Red Hat, predicts a shift away from valuing AI models by their sizeable parameter counts.

Grant Shipley, Senior Director of AI at Red Hat

“2025 will be the year when we stop using the number of parameters that models have as a metric to indicate the value of a model,” he said.  

Instead, AI will focus on specific applications. Developers will move towards chaining together smaller models in a manner akin to microservices in software development. This modular, task-based approach is likely to facilitate more efficient and bespoke applications suited to particular needs.

Open-source leading the way

Bill Higgins, VP of watsonx Platform Engineering and Open Innovation at IBM

Bill Higgins, VP of watsonx Platform Engineering and Open Innovation at IBM, expects open-source AI models will grow in popularity in 2025.

“Despite mounting pressure, many enterprises are still struggling to show measurable returns on their AI investments—and the high licensing fees of proprietary models is a major factor. In 2025, open-source AI solutions will emerge as a dominant force in closing this gap,” he explains.

Alongside the affordability of open-source AI models comes transparency and increased customisation potential, making them ideal for multi-cloud environments. With open-source models matching proprietary systems in power, they could offer a way for enterprises to move beyond experimentation and into scalability.

Nick Burling, SVP at Nasuni

This plays into a prediction from Nick Burling, SVP at Nasuni, who believes that 2025 will usher in a more measured approach to AI investments. 

“Enterprises will focus on using AI strategically, ensuring that every AI initiative is justified by clear, measurable returns,” said Burling.

Cost efficiency and edge data management will become crucial, helping organisations optimise operations while keeping budgets in check.  

Augmenting human expertise

Jonathan Siddharth, CEO of Turing

For Jonathan Siddharth, CEO of Turing, the standout feature of 2025 AI systems will be their ability to learn from human expertise at scale.

“The key advancement will come from teaching AI not just what to do, but how to approach problems with the logical reasoning that coding naturally cultivates,” he says.

Competitiveness, particularly in industries like finance and healthcare, will hinge on mastering this integration of human expertise with AI.  

Behavioural psychology will catch up

Understanding the interplay between human behaviour and AI systems is at the forefront of predictions for Niklas Mortensen, Chief Design Officer at Designit.

Niklas Mortensen, Chief Design Officer at Designit

“With so many examples of algorithmic bias leading to unwanted outputs – and humans being, well, humans – behavioural psychology will catch up to the AI train,” explained Mortensen.  

The solutions? Experimentation with ‘pause moments’ for human oversight and intentional balance between automation and human control in critical operations such as healthcare and transport.

Mortensen also believes personal AI assistants will finally prove their worth by meeting their long-touted potential in organising our lives efficiently and intuitively.

Bridge between physical and digital worlds

Andy Wilson, Senior Director at Dropbox

Andy Wilson, Senior Director at Dropbox, envisions AI becoming an indispensable part of our daily lives.

“AI will evolve from being a helpful tool to becoming an integral part of daily life and work – offering innovative ways to connect, create, and collaborate,” Wilson says.  

Mobile devices and wearables will be at the forefront of this transformation, delivering seamless AI-driven experiences.

However, Wilson warns of new questions on boundaries between personal and workplace data, spurred by such integrations.

Driving sustainability goals 

Kendra DeKeyrel, VP ESG & Asset Management at IBM

With 2030 sustainability targets looming over companies, Kendra DeKeyrel, VP ESG & Asset Management at IBM, highlights how AI can help fill the gap.

DeKeyrel calls on organisations to adopt AI-powered technologies for managing energy consumption, lifecycle performance, and data centre strain.

“These capabilities can ultimately help progress sustainability goals overall,” she explains.

Unlocking computational power and inference

James Ingram, VP Technology at Streetbees

James Ingram, VP Technology at Streetbees, foresees a shift in computational requirements as AI scales to handle increasingly complex problems.

“The focus will move from pre-training to inference compute,” he said, highlighting the importance of real-time reasoning capabilities.

Expanding context windows will also significantly enhance how AI retains and processes information, likely surpassing human efficiency in certain domains.

Rise of agentic AI and unified data foundations

Dominic Wellington, Enterprise Architect at SnapLogic

According to Dominic Wellington, Enterprise Architect at SnapLogic, “Agentic AI marks a more flexible and creative era for AI in 2025.”

However, such systems require robust data integration because siloed information risks undermining their reliability.

Wellington anticipates that 2025 will witness advanced solutions for improving data hygiene, integrity, and lineage—all vital for enabling agentic AI to thrive.  

From hype to reality

Jason Schern, Field CTO of Cognite

Jason Schern, Field CTO of Cognite, predicts that 2025 will be remembered as the year when truly transformative, validated generative AI solutions emerge.

“Through the fog of AI for AI’s sake noise, singular examples of truly transformative embedding of Gen AI into actual workflows will stand out,” predicts Schern.  

These domain-specific AI agents will revolutionise industrial workflows by offering tailored decision-making. Schern cited an example in which AI slashed time-consuming root cause analyses from months to mere minutes.

Deepfakes and crisis of trust

Siggi Stefnisson, CTO at Gen

Sophisticated generative AI threatens the authenticity of images, videos, and information, warns Siggi Stefnisson, Cyber Safety CTO at Gen.

“Even experts may not be able to tell what’s authentic,” warns Stefnisson.

Combating this crisis requires robust digital credentials for verifying authenticity and promoting trust in increasingly blurred digital realities.

2025: Foundational shifts in the AI landscape

As multiple predictions converge, it’s clear that foundational shifts are on the horizon.

The experts that contributed to this year’s industry predictions highlight smarter applications, stronger integration with human expertise, closer alignment with sustainability goals, and heightened security. However, many also foresee significant ethical challenges.

2025 represents a crucial year: a transition from the initial excitement of AI proliferation to mature and measured adoption that promises value and a more nuanced understanding of its impact.

See also: AI Action Summit: Leaders call for unity and equitable development

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post AI in 2025: Purpose-driven models, human integration, and more appeared first on AI News.

]]>
https://www.artificialintelligence-news.com/news/ai-in-2025-purpose-driven-models-human-integration-and-more/feed/ 0
Eric Schmidt: AI misuse poses an ‘extreme risk’ https://www.artificialintelligence-news.com/news/eric-schmidt-ai-misuse-poses-extreme-risk/ https://www.artificialintelligence-news.com/news/eric-schmidt-ai-misuse-poses-extreme-risk/#respond Thu, 13 Feb 2025 12:17:38 +0000 https://www.artificialintelligence-news.com/?p=104423 Eric Schmidt, former CEO of Google, has warned that AI misuse poses an “extreme risk” and could do catastrophic harm. Speaking to BBC Radio 4’s Today programme, Schmidt cautioned that AI could be weaponised by extremists and “rogue states” such as North Korea, Iran, and Russia to “harm innocent people.” Schmidt expressed concern that rapid […]

The post Eric Schmidt: AI misuse poses an ‘extreme risk’ appeared first on AI News.

]]>
Eric Schmidt, former CEO of Google, has warned that AI misuse poses an “extreme risk” and could do catastrophic harm.

Speaking to BBC Radio 4’s Today programme, Schmidt cautioned that AI could be weaponised by extremists and “rogue states” such as North Korea, Iran, and Russia to “harm innocent people.”

Schmidt expressed concern that rapid AI advancements could be exploited to create weapons, including biological attacks. Highlighting the dangers, he said: “The real fears that I have are not the ones that most people talk about AI, I talk about extreme risk.”

Using a chilling analogy, Schmidt referenced the al-Qaeda leader responsible for the 9/11 attacks: “I’m always worried about the Osama bin Laden scenario, where you have some truly evil person who takes over some aspect of our modern life and uses it to harm innocent people.”

He emphasised the pace of AI development and its potential to be co-opted by nations or groups with malevolent intent.

“Think about North Korea, or Iran, or even Russia, who have some evil goal … they could misuse it and do real harm,” Schmidt warns.

Oversight without stifling innovation

Schmidt urged governments to closely monitor private tech companies pioneering AI research. While noting that tech leaders are generally aware of AI’s societal implications, they may make decisions based on different values from those of public officials.

“My experience with the tech leaders is that they do have an understanding of the impact they’re having, but they might make a different values judgement than the government would make.”

Schmidt also endorsed the export controls introduced under former US President Joe Biden last year to restrict the sale of advanced microchips. The measure is aimed at slowing the progress of geopolitical adversaries in AI research.  

Global divisions around preventing AI misuse

The tech veteran was in Paris when he made his remarks, attending the AI Action Summit, a two-day event that wrapped up on Tuesday.

The summit, attended by 57 countries, saw the announcement of an agreement on “inclusive” AI development. Signatories included major players like China, India, the EU, and the African Union.  

However, the UK and the US declined to sign the communique. The UK government said the agreement lacked “practical clarity” and failed to address critical “harder questions” surrounding national security. 

Schmidt cautioned against excessive regulation that might hinder progress in this transformative field. This was echoed by US Vice-President JD Vance who warned that heavy-handed regulation “would kill a transformative industry just as it’s taking off”.  

This reluctance to endorse sweeping international accords reflects diverging approaches to AI governance. The EU has championed a more restrictive framework for AI, prioritising consumer protections, while countries like the US and UK are opting for more agile and innovation-driven strategies. 

Schmidt pointed to the consequences of Europe’s tight regulatory stance, predicting that the region would miss out on pioneering roles in AI.

“The AI revolution, which is the most important revolution in my opinion since electricity, is not going to be invented in Europe,” he remarked.

Prioritising national and global safety

Schmidt’s comments come against a backdrop of increasing scrutiny over AI’s dual-use potential—its ability to be used for both beneficial and harmful purposes.

From deepfakes to autonomous weapons, AI poses a bevy of risks if left without measures to guard against misuse. Leaders and experts, including Schmidt, are advocating for a balanced approach that fosters innovation while addressing these dangers head-on.

While international cooperation remains a complex and contentious issue, the overarching consensus is clear: without safeguards, AI’s evolution could have unintended – and potentially catastrophic – consequences.

(Photo by Guillaume Paumier under CC BY 3.0 license. Cropped to landscape from original version.)

See also: NEPC: AI sprint risks environmental catastrophe

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post Eric Schmidt: AI misuse poses an ‘extreme risk’ appeared first on AI News.

]]>
https://www.artificialintelligence-news.com/news/eric-schmidt-ai-misuse-poses-extreme-risk/feed/ 0
Microsoft and OpenAI probe alleged data theft by DeepSeek https://www.artificialintelligence-news.com/news/microsoft-and-openai-probe-alleged-data-theft-deepseek/ https://www.artificialintelligence-news.com/news/microsoft-and-openai-probe-alleged-data-theft-deepseek/#respond Wed, 29 Jan 2025 15:28:41 +0000 https://www.artificialintelligence-news.com/?p=17009 Microsoft and OpenAI are investigating a potential breach of the AI firm’s system by a group allegedly linked to Chinese AI startup DeepSeek. According to Bloomberg, the investigation stems from suspicious data extraction activity detected in late 2024 via OpenAI’s application programming interface (API), sparking broader concerns over international AI competition. Microsoft, OpenAI’s largest financial […]

The post Microsoft and OpenAI probe alleged data theft by DeepSeek appeared first on AI News.

]]>
Microsoft and OpenAI are investigating a potential breach of the AI firm’s system by a group allegedly linked to Chinese AI startup DeepSeek.

According to Bloomberg, the investigation stems from suspicious data extraction activity detected in late 2024 via OpenAI’s application programming interface (API), sparking broader concerns over international AI competition.

Microsoft, OpenAI’s largest financial backer, first identified the large-scale data extraction and informed the ChatGPT maker of the incident. Sources believe the activity may have violated OpenAI’s terms of service, or that the group may have exploited loopholes to bypass restrictions limiting how much data they could collect.

DeepSeek has quickly risen to prominence in the competitive AI landscape, particularly with the release of its latest model, R-1, on 20 January.

Billed as a rival to OpenAI’s ChatGPT in performance but developed at a significantly lower cost, R-1 has shaken up the tech industry. Its release triggered a sharp decline in tech and AI stocks that wiped billions from US markets in a single week.

David Sacks, the White House’s newly appointed “crypto and AI czar,” alleged that DeepSeek may have employed questionable methods to achieve its AI’s capabilities. In an interview with Fox News, Sacks noted evidence suggesting that DeepSeek had used “distillation” to train its AI models using outputs from OpenAI’s systems.

“There’s substantial evidence that what DeepSeek did here is they distilled knowledge out of OpenAI’s models, and I don’t think OpenAI is very happy about this,” Sacks told the network.  

Model distillation involves training one AI system using data generated by another, potentially allowing a competitor to develop similar functionality. This method, when applied without proper authorisation, has stirred ethical and intellectual property debates as the global race for AI supremacy heats up.  

OpenAI declined to comment specifically on the accusations against DeepSeek but acknowledged the broader risk posed by model distillation, particularly by Chinese companies.  

“We know PRC-based companies — and others — are constantly trying to distill the models of leading US AI companies,” a spokesperson for OpenAI told Bloomberg.  

Geopolitical and security concerns  

Growing tensions around AI innovation now extend into national security. CNBC reported that the US Navy has banned its personnel from using DeepSeek’s products, citing fears that the Chinese government could exploit the platform to access sensitive information.

In an email dated 24 January, the Navy warned its staff against using DeepSeek AI “in any capacity” due to “potential security and ethical concerns associated with the model’s origin and usage.”

Critics have highlighted DeepSeek’s privacy policy, which permits the collection of data such as IP addresses, device information, and even keystroke patterns—a scope of data collection considered excessive by some experts.

Earlier this week, DeepSeek stated it was facing “large-scale malicious attacks” against its systems. A banner on its website informed users of a temporary sign-up restriction.

The growing competition between the US and China in particular in the AI sector has underscored wider concerns regarding technological ownership, ethical governance, and national security.  

Experts warn that as AI systems advance and become increasingly integral to global economic and strategic planning, disputes over data usage and intellectual property are only likely to intensify. Accusations such as those against DeepSeek amplify alarm over China’s rapid development in the field and its potential quest to bypass US-led safeguards through reverse engineering and other means.  

While OpenAI and Microsoft continue their investigation into the alleged misuse of OpenAI’s platform, businesses and governments alike are paying close attention. The case could set a precedent for how AI developers police model usage and enforce terms of service.

For now, the response from both US and Chinese stakeholders highlights how AI innovation has become not just a race for technological dominance, but a fraught geopolitical contest that is shaping 21st-century power dynamics.

(Image by Mohamed Hassan)

See also: Qwen 2.5-Max outperforms DeepSeek V3 in some benchmarks

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post Microsoft and OpenAI probe alleged data theft by DeepSeek appeared first on AI News.

]]>
https://www.artificialintelligence-news.com/news/microsoft-and-openai-probe-alleged-data-theft-deepseek/feed/ 0
Cisco: Securing enterprises in the AI era https://www.artificialintelligence-news.com/news/cisco-securing-enterprises-in-the-ai-era/ https://www.artificialintelligence-news.com/news/cisco-securing-enterprises-in-the-ai-era/#respond Wed, 15 Jan 2025 16:02:18 +0000 https://www.artificialintelligence-news.com/?p=16883 As AI becomes increasingly integral to business operations, new safety concerns and security threats emerge at an unprecedented pace—outstripping the capabilities of traditional cybersecurity solutions. The stakes are high with potentially significant repercussions. According to Cisco’s 2024 AI Readiness Index, only 29% of surveyed organisations feel fully equipped to detect and prevent unauthorised tampering with […]

The post Cisco: Securing enterprises in the AI era appeared first on AI News.

]]>
As AI becomes increasingly integral to business operations, new safety concerns and security threats emerge at an unprecedented pace—outstripping the capabilities of traditional cybersecurity solutions.

The stakes are high with potentially significant repercussions. According to Cisco’s 2024 AI Readiness Index, only 29% of surveyed organisations feel fully equipped to detect and prevent unauthorised tampering with AI technologies.

Continuous model validation

DJ Sampath, Head of AI Software & Platform at Cisco, said: “When we talk about model validation, it is not just a one time thing, right? You’re doing the model validation on a continuous basis.

Headshot of DJ Sampath from Cisco for an article on securing enterprises in the AI era.

“So as you see changes happen to the model – if you’re doing any type of finetuning, or you discover new attacks that are starting to show up that you need the models to learn from – we’re constantly learning all of that information and revalidating the model to see how these models are behaving under these new attacks that we’ve discovered.

“The other very important point is that we have a really advanced threat research team which is constantly looking at these AI attacks and understanding how these attacks can further be enhanced. In fact, we’re, we’re, we’re contributing to the work groups inside of standards organisations like MITRE, OWASP, and NIST.”

Beyond preventing harmful outputs, Cisco addresses the vulnerabilities of AI models to malicious external influences that can change their behaviour. These risks include prompt injection attacks, jailbreaking, and training data poisoning—each demanding stringent preventive measures.

Evolution brings new complexities

Frank Dickson, Group VP for Security & Trust at IDC, gave his take on the evolution of cybersecurity over time and what advancements in AI mean for the industry.

“The first macro trend was that we moved from on-premise to the cloud and that introduced this whole host of new problem statements that we had to address. And then as applications move from monolithic to microservices, we saw this whole host of new problem sets.

Headshot of Frank Dickson from IDC for an article on securing enterprises in the AI era.

“AI and the addition of LLMs… same thing, whole host of new problem sets.”

The complexities of AI security are heightened as applications become multi-model. Vulnerabilities can arise at various levels – from models to apps – implicating different stakeholders such as developers, end-users, and vendors.

“Once an application moved from on-premise to the cloud, it kind of stayed there. Yes, we developed applications across multiple clouds, but once you put an application in AWS or Azure or GCP, you didn’t jump it across those various cloud environments monthly, quarterly, weekly, right?

“Once you move from monolithic application development to microservices, you stay there. Once you put an application in Kubernetes, you don’t jump back into something else.

“As you look to secure a LLM, the important thing to note is the model changes. And when we talk about model change, it’s not like it’s a revision … this week maybe [developers are] using Anthropic, next week they may be using Gemini.

“They’re completely different and the threat vectors of each model are completely different. They all have their strengths and they all have their dramatic weaknesses.”

Unlike conventional safety measures integrated into individual models, Cisco delivers controls for a multi-model environment through its newly-announced AI Defense. The solution is self-optimising, using Cisco’s proprietary machine learning algorithms to identify evolving AI safety and security concerns—informed by threat intelligence from Cisco Talos.

Adjusting to the new normal

Jeetu Patel, Executive VP and Chief Product Officer at Cisco, shared his view that major advancements in a short period of time always seem revolutionary but quickly feel normal.

Headshot of Jeetu Patel from Cisco for an article on securing enterprises in the AI era.

Waymo is, you know, self-driving cars from Google. You get in, and there’s no one sitting in the car, and it takes you from point A to point B. It feels mind-bendingly amazing, like we are living in the future. The second time, you kind of get used to it. The third time, you start complaining about the seats.

“Even how quickly we’ve gotten used to AI and ChatGPT over the course of the past couple years, I think what will happen is any major advancement will feel exceptionally progressive for a short period of time. Then there’s a normalisation that happens where everyone starts getting used to it.”

Patel believes that normalisation will happen with AGI as well. However, he notes that “you cannot underestimate the progress that these models are starting to make” and, ultimately, the kind of use cases they are going to unlock.

“No-one had thought that we would have a smartphone that’s gonna have more compute capacity than the mainframe computer at your fingertips and be able to do thousands of things on it at any point in time and now it’s just another way of life. My 14-year-old daughter doesn’t even think about it.

“We ought to make sure that we as companies get adjusted to that very quickly.”

See also: Sam Altman, OpenAI: ‘Lucky and humbling’ to work towards superintelligence

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post Cisco: Securing enterprises in the AI era appeared first on AI News.

]]>
https://www.artificialintelligence-news.com/news/cisco-securing-enterprises-in-the-ai-era/feed/ 0