security Archives - AI News https://www.artificialintelligence-news.com/news/tag/security/ Artificial Intelligence News Wed, 30 Apr 2025 13:35:24 +0000 en-GB hourly 1 https://wordpress.org/?v=6.8.1 https://www.artificialintelligence-news.com/wp-content/uploads/2020/09/cropped-ai-icon-32x32.png security Archives - AI News https://www.artificialintelligence-news.com/news/tag/security/ 32 32 Meta beefs up AI security with new Llama tools  https://www.artificialintelligence-news.com/news/meta-beefs-up-ai-security-new-llama-tools/ https://www.artificialintelligence-news.com/news/meta-beefs-up-ai-security-new-llama-tools/#respond Wed, 30 Apr 2025 13:35:22 +0000 https://www.artificialintelligence-news.com/?p=106233 If you’re building with AI, or trying to defend against the less savoury side of the technology, Meta just dropped new Llama security tools. The improved security tools for the Llama AI models arrive alongside fresh resources from Meta designed to help cybersecurity teams harness AI for defence. It’s all part of their push to […]

The post Meta beefs up AI security with new Llama tools  appeared first on AI News.

]]>
If you’re building with AI, or trying to defend against the less savoury side of the technology, Meta just dropped new Llama security tools.

The improved security tools for the Llama AI models arrive alongside fresh resources from Meta designed to help cybersecurity teams harness AI for defence. It’s all part of their push to make developing and using AI a bit safer for everyone involved.

Developers working with the Llama family of models now have some upgraded kit to play with. You can grab these latest Llama Protection tools directly from Meta’s own Llama Protections page, or find them where many developers live: Hugging Face and GitHub.

First up is Llama Guard 4. Think of it as an evolution of Meta’s customisable safety filter for AI. The big news here is that it’s now multimodal so it can understand and apply safety rules not just to text, but to images as well. That’s crucial as AI applications get more visual. This new version is also being baked into Meta’s brand-new Llama API, which is currently in a limited preview.

Then there’s LlamaFirewall. This is a new piece of the puzzle from Meta, designed to act like a security control centre for AI systems. It helps manage different safety models working together and hooks into Meta’s other protection tools. Its job? To spot and block the kind of risks that keep AI developers up at night – things like clever ‘prompt injection’ attacks designed to trick the AI, potentially dodgy code generation, or risky behaviour from AI plug-ins.

Meta has also given its Llama Prompt Guard a tune-up. The main Prompt Guard 2 (86M) model is now better at sniffing out those pesky jailbreak attempts and prompt injections. More interestingly, perhaps, is the introduction of Prompt Guard 2 22M.

Prompt Guard 2 22M is a much smaller, nippier version. Meta reckons it can slash latency and compute costs by up to 75% compared to the bigger model, without sacrificing too much detection power. For anyone needing faster responses or working on tighter budgets, that’s a welcome addition.

But Meta isn’t just focusing on the AI builders; they’re also looking at the cyber defenders on the front lines of digital security. They’ve heard the calls for better AI-powered tools to help in the fight against cyberattacks, and they’re sharing some updates aimed at just that.

The CyberSec Eval 4 benchmark suite has been updated. This open-source toolkit helps organisations figure out how good AI systems actually are at security tasks. This latest version includes two new tools:

  • CyberSOC Eval: Built with the help of cybersecurity experts CrowdStrike, this framework specifically measures how well AI performs in a real Security Operation Centre (SOC) environment. It’s designed to give a clearer picture of AI’s effectiveness in threat detection and response. The benchmark itself is coming soon.
  • AutoPatchBench: This benchmark tests how good Llama and other AIs are at automatically finding and fixing security holes in code before the bad guys can exploit them.

To help get these kinds of tools into the hands of those who need them, Meta is kicking off the Llama Defenders Program. This seems to be about giving partner companies and developers special access to a mix of AI solutions – some open-source, some early-access, some perhaps proprietary – all geared towards different security challenges.

As part of this, Meta is sharing an AI security tool they use internally: the Automated Sensitive Doc Classification Tool. It automatically slaps security labels on documents inside an organisation. Why? To stop sensitive info from walking out the door, or to prevent it from being accidentally fed into an AI system (like in RAG setups) where it could be leaked.

They’re also tackling the problem of fake audio generated by AI, which is increasingly used in scams. The Llama Generated Audio Detector and Llama Audio Watermark Detector are being shared with partners to help them spot AI-generated voices in potential phishing calls or fraud attempts. Companies like ZenDesk, Bell Canada, and AT&T are already lined up to integrate these.

Finally, Meta gave a sneak peek at something potentially huge for user privacy: Private Processing. This is new tech they’re working on for WhatsApp. The idea is to let AI do helpful things like summarise your unread messages or help you draft replies, but without Meta or WhatsApp being able to read the content of those messages.

Meta is being quite open about the security side, even publishing their threat model and inviting security researchers to poke holes in the architecture before it ever goes live. It’s a sign they know they need to get the privacy aspect right.

Overall, it’s a broad set of AI security announcements from Meta. They’re clearly trying to put serious muscle behind securing the AI they build, while also giving the wider tech community better tools to build safely and defend effectively.

See also: Alarming rise in AI-powered scams: Microsoft reveals $4B in thwarted fraud

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post Meta beefs up AI security with new Llama tools  appeared first on AI News.

]]>
https://www.artificialintelligence-news.com/news/meta-beefs-up-ai-security-new-llama-tools/feed/ 0
Best data security platforms of 2025 https://www.artificialintelligence-news.com/news/best-data-security-platforms-of-2025/ https://www.artificialintelligence-news.com/news/best-data-security-platforms-of-2025/#respond Wed, 12 Mar 2025 08:44:49 +0000 https://www.artificialintelligence-news.com/?p=104737 With the rapid growth in the generation, storage, and sharing of data, ensuring its security has become both a necessity and a formidable challenge. Data breaches, cyberattacks, and insider threats are constant risks that require sophisticated solutions. This is where Data Security Platforms come into play, providing organisations with centralised tools and strategies to protect […]

The post Best data security platforms of 2025 appeared first on AI News.

]]>
With the rapid growth in the generation, storage, and sharing of data, ensuring its security has become both a necessity and a formidable challenge. Data breaches, cyberattacks, and insider threats are constant risks that require sophisticated solutions. This is where Data Security Platforms come into play, providing organisations with centralised tools and strategies to protect sensitive information and maintain compliance.

Key components of data security platforms

Effective DSPs are built on several core components that work together to protect data from unauthorised access, misuse, and theft. The components include:

1. Data discovery and classification

Before data can be secured, it needs to be classified and understood. DSPs typically include tools that automatically discover and categorize data based on its sensitivity and use. For example:

  • Personal identifiable information (PII): Names, addresses, social security numbers, etc.
  • Financial data: Credit card details, transaction records.
  • Intellectual property (IP): Trade secrets, proprietary designs.
  • Regulated data: Information governed by laws like GDPR, HIPAA, or CCPA.

By identifying data types and categorizing them by sensitivity level, organisations can prioritise their security efforts.

2. Data encryption

Encryption transforms readable data into an unreadable format, ensuring that even if unauthorised users access the data, they cannot interpret it without the decryption key. Most DSPs support various encryption methods, including:

  • At-rest encryption: Securing data stored on drives, databases, or other storage systems.
  • In-transit encryption: Protecting data as it moves between devices, networks, or applications.

Modern DSPs often deploy advanced encryption standards (AES) or bring-your-own-key (BYOK) solutions, ensuring data security even when using third-party cloud storage.

3. Access control and identity management

Managing who has access to data is a important aspect of data security. DSPs enforce robust role-based access control (RBAC), ensuring only authorised users and systems can access sensitive information. With identity and access management (IAM) integration, DSPs can enhance security by combining authentication methods like:

  • Passwords.
  • Biometrics (e.g. fingerprint or facial recognition).
  • Multi-factor authentication (MFA).
  • Behaviour-based authentication (monitoring user actions for anomalies).

4. Data loss prevention (DLP)

Data loss prevention tools in DSPs help prevent unauthorised sharing or exfiltration of sensitive data. They monitor and control data flows, blocking suspicious activity like:

  • Sending confidential information over email.
  • Transferring sensitive data to unauthorised external devices.
  • Uploading important files to unapproved cloud services.

By enforcing data-handling policies, DSPs help organisations maintain control over their sensitive information.

5. Threat detection and response

DSPs employ threat detection systems powered by machine learning, artificial intelligence (AI), and behaviour analytics to identify unauthorised or malicious activity. Common features include:

  • Anomaly detection: Identifies unusual behaviour, like accessing files outside normal business hours.
  • Insider threat detection: Monitors employees or contractors who might misuse their access to internal data.
  • Real-time alerts: Provide immediate notifications when a potential threat is detected.

Some platforms also include automated response mechanisms to isolate affected data or deactivate compromised user accounts.

6. Compliance audits and reporting

Many industries are subject to strict data protection regulations, like GDPR, HIPAA, CCPA, or PCI DSS. DSPs help organisations comply with these laws by:

  • Continuously monitoring data handling practices.
  • Generating detailed audit trails.
  • Providing pre-configured compliance templates and reporting tools.

The features simplify regulatory audits and reduce the risk of non-compliance penalties.

Best data security platforms of 2025

Whether you’re a small business or a large enterprise, these tools will help you manage risks, secure databases, and protect sensitive information.

1. Velotix

Velotix is an AI-driven data security platform focused on policy automation and intelligent data access control. It simplifies compliance with stringent data regulations like GDPR, HIPAA, and CCPA, and helps organisations strike the perfect balance between accessibility and security. Key Features:

  • AI-powered access governance: Velotix uses machine learning to ensure users only access data they need to see, based on dynamic access policies.
  • Seamless integration: It integrates smoothly with existing infrastructures across cloud and on-premises environments.
  • Compliance automation: Simplifies meeting legal and regulatory requirements by automating compliance processes.
  • Scalability: Ideal for enterprises with complex data ecosystems, supporting hundreds of terabytes of sensitive data.

Velotix stands out for its ability to reduce the complexity of data governance, making it a must-have in today’s security-first corporate world.

2. NordLayer

NordLayer, from the creators of NordVPN, offers a secure network access solution tailored for businesses. While primarily a network security tool, it doubles as a robust data security platform by ensuring end-to-end encryption for your data in transit.

Key features:

  • Zero trust security: Implements a zero trust approach, meaning users and devices must be verified every time data access is requested.
  • AES-256 encryption Standards: Protects data flows with military-grade encryption.
  • Cloud versatility: Supports hybrid and multi-cloud environments for maximum flexibility.
  • Rapid deployment: Easy to implement even for smaller teams, requiring minimal IT involvement.

NordLayer ensures secure, encrypted communications between your team and the cloud, offering peace of mind when managing sensitive data.

3. HashiCorp Vault

HashiCorp Vault is a leader in secrets management, encryption as a service, and identity-based access. Designed for developers, it simplifies access control without placing sensitive data at risk, making it important for modern application development.

Key features:

  • Secrets management: Protect sensitive credentials like API keys, tokens, and passwords.
  • Dynamic secrets: Automatically generate temporary, time-limited credentials for improved security.
  • Encryption as a service: Offers flexible tools for encrypting any data across multiple environments.
  • Audit logging: Monitor data access attempts for greater accountability and compliance.

With a strong focus on application-level security, HashiCorp Vault is ideal for organisations seeking granular control over sensitive operational data.

4. Imperva Database Risk & Compliance

Imperva is a pioneer in database security. Its Database Risk & Compliance solution combines analytics, automation, and real-time monitoring to protect sensitive data from breaches and insider threats.

Key features:

  • Database activity monitoring (DAM): Tracks database activity in real time to identify unusual patterns.
  • Vulnerability assessment: Scans databases for security weaknesses and provides actionable remediation steps.
  • Cloud and hybrid deployment: Supports flexible environments, ranging from on-premises deployments to modern cloud setups.
  • Audit preparation: Simplifies audit readiness with detailed reporting tools and predefined templates.

Imperva’s tools are trusted by enterprises to secure their most confidential databases, ensuring compliance and top-notch protection.

5. ESET

ESET, a well-known name in cybersecurity, offers an enterprise-grade security solution that includes powerful data encryption tools. Famous for its malware protection, ESET combines endpoint security with encryption to safeguard sensitive information.

Key features:

  • Endpoint encryption: Ensures data remains protected even if devices are lost or stolen.
  • Multi-platform support: Works across Windows, Mac, and Linux systems.
  • Proactive threat detection: Combines AI and machine learning to detect potential threats before they strike.
  • Ease of use: User-friendly dashboards enable intuitive management of security policies.

ESET provides an all-in-one solution for companies needing endpoint protection, encryption, and proactive threat management.

6. SQL Secure

Aimed at database administrators, SQL Secure delivers specialised tools to safeguard SQL Server environments. It allows for detailed role-based analysis, helping organisations improve their database security posture and prevent data leaks.

Key features:

  • Role analysis: Identifies and mitigates excessive or unauthorised permission assignments.
  • Dynamic data masking: Protects sensitive data by obscuring it in real-time in applications and queries.
  • Customisable alerts: Notify teams of improper database access or policy violations immediately.
  • Regulatory compliance: Predefined policies make it easy to align with GDPR, HIPAA, PCI DSS, and other regulations.

SQL Secure is a tailored solution for businesses dependent on SQL databases, providing immediate insights and action plans for tighter security.

7. Acra

Acra is a modern, developer-friendly cryptographic tool engineered for data encryption and secure data lifecycle management. It brings cryptography closer to applications, ensuring deep-rooted data protection at every level.

Key features:

  • Application-level encryption: Empowers developers to integrate customised encryption policies directly into their apps.
  • Intrusion detection: Monitors for data leaks with a robust intrusion detection mechanism.
  • End-to-end data security: Protect data at rest, in transit, and in use, making it more versatile than traditional encryption tools.
  • Open source availability: Trusted by developers thanks to its open-source model, offering transparency and flexibility.

Acra is particularly popular with startups and tech-savvy enterprises needing a lightweight, developer-first approach to securing application data.

8. BigID

BigID focuses on privacy, data discovery, and compliance by using AI to identify sensitive data across structured and unstructured environments. Known for its data intelligence capabilities, BigID is one of the most comprehensive platforms for analysing and protecting enterprise data.

Key Features:

  • Data discovery: Automatically classify sensitive data like PII (Personally Identifiable Information) and PHI (Protected Health Information).
  • Privacy-by-design: Built to streamline compliance with global privacy laws like GDPR, CCPA, and more.
  • Risk management: Assess data risks and prioritise actions based on importance.
  • Integrations: Easily integrates with other security platforms and cloud providers for a unified approach.

BigID excels at uncovering hidden risks and ensuring compliance, making it an essential tool for data-driven enterprises.

9. DataSunrise Database Security

DataSunrise specialises in database firewall protection and intrusion detection for a variety of databases, including SQL-based platforms, NoSQL setups, and cloud-hosted solutions. It focuses on safeguarding sensitive data while providing robust real-time monitoring.

Key features:

  • Database firewall: Blocks unauthorised access attempts with role-specific policies.
  • Sensitive data discovery: Identifies risky data in your database for preventative action.
  • Audit reporting: Generate detailed investigative reports about database activity.
  • Cross-platform compatibility: Works with MySQL, PostgreSQL, Oracle, Amazon Aurora, Snowflake, and more.

DataSunrise is highly configurable and scalable, making it a solid choice for organisations running diverse database environments.

10. Covax Polymer

Covax Polymer is an innovative data security platform dedicated to governing sensitive data use in cloud-based collaboration tools like Slack, Microsoft Teams, and Google Workspace. It’s perfect for businesses that rely on SaaS applications for productivity.

Key features:

  • Real-time governance: Monitors and protects data transfers occurring across cloud collaboration tools.
  • Context-aware decisions: Evaluates interactions to identify potential risks, ensuring real-time security responses.
  • Data loss prevention (DLP): Prevents sensitive information from being shared outside approved networks.
  • Comprehensive reporting: Tracks and analyses data sharing trends, offering actionable insights for compliance.

Covax Polymer addresses the growing need for securing communications and shared data in collaborative workspaces.

(Image source: Unsplash)

The post Best data security platforms of 2025 appeared first on AI News.

]]>
https://www.artificialintelligence-news.com/news/best-data-security-platforms-of-2025/feed/ 0
Endor Labs: AI transparency vs ‘open-washing’ https://www.artificialintelligence-news.com/news/endor-labs-ai-transparency-vs-open-washing/ https://www.artificialintelligence-news.com/news/endor-labs-ai-transparency-vs-open-washing/#respond Mon, 24 Feb 2025 18:15:45 +0000 https://www.artificialintelligence-news.com/?p=104605 As the AI industry focuses on transparency and security, debates around the true meaning of “openness” are intensifying. Experts from open-source security firm Endor Labs weighed in on these pressing topics. Andrew Stiefel, Senior Product Marketing Manager at Endor Labs, emphasised the importance of applying lessons learned from software security to AI systems. “The US […]

The post Endor Labs: AI transparency vs ‘open-washing’ appeared first on AI News.

]]>
As the AI industry focuses on transparency and security, debates around the true meaning of “openness” are intensifying. Experts from open-source security firm Endor Labs weighed in on these pressing topics.

Andrew Stiefel, Senior Product Marketing Manager at Endor Labs, emphasised the importance of applying lessons learned from software security to AI systems.

“The US government’s 2021 Executive Order on Improving America’s Cybersecurity includes a provision requiring organisations to produce a software bill of materials (SBOM) for each product sold to federal government agencies.”

An SBOM is essentially an inventory detailing the open-source components within a product, helping detect vulnerabilities. Stiefel argued that “applying these same principles to AI systems is the logical next step.”  

“Providing better transparency for citizens and government employees not only improves security,” he explained, “but also gives visibility into a model’s datasets, training, weights, and other components.”

What does it mean for an AI model to be “open”?  

Julien Sobrier, Senior Product Manager at Endor Labs, added crucial context to the ongoing discussion about AI transparency and “openness.” Sobrier broke down the complexity inherent in categorising AI systems as truly open.

“An AI model is made of many components: the training set, the weights, and programs to train and test the model, etc. It is important to make the whole chain available as open source to call the model ‘open’. It is a broad definition for now.”  

Sobrier noted the lack of consistency across major players, which has led to confusion about the term.

“Among the main players, the concerns about the definition of ‘open’ started with OpenAI, and Meta is in the news now for their LLAMA model even though that’s ‘more open’. We need a common understanding of what an open model means. We want to watch out for any ‘open-washing,’ as we saw it with free vs open-source software.”  

One potential pitfall, Sobrier highlighted, is the increasingly common practice of “open-washing,” where organisations claim transparency while imposing restrictions.

“With cloud providers offering a paid version of open-source projects (such as databases) without contributing back, we’ve seen a shift in many open-source projects: The source code is still open, but they added many commercial restrictions.”  

“Meta and other ‘open’ LLM providers might go this route to keep their competitive advantage: more openness about the models, but preventing competitors from using them,” Sobrier warned.

DeepSeek aims to increase AI transparency

DeepSeek, one of the rising — albeit controversial — players in the AI industry, has taken steps to address some of these concerns by making portions of its models and code open-source. The move has been praised for advancing transparency while providing security insights.  

“DeepSeek has already released the models and their weights as open-source,” said Andrew Stiefel. “This next move will provide greater transparency into their hosted services, and will give visibility into how they fine-tune and run these models in production.”

Such transparency has significant benefits, noted Stiefel. “This will make it easier for the community to audit their systems for security risks and also for individuals and organisations to run their own versions of DeepSeek in production.”  

Beyond security, DeepSeek also offers a roadmap on how to manage AI infrastructure at scale.

“From a transparency side, we’ll see how DeepSeek is running their hosted services. This will help address security concerns that emerged after it was discovered they left some of their Clickhouse databases unsecured.”

Stiefel highlighted that DeepSeek’s practices with tools like Docker, Kubernetes (K8s), and other infrastructure-as-code (IaC) configurations could empower startups and hobbyists to build similar hosted instances.  

Open-source AI is hot right now

DeepSeek’s transparency initiatives align with the broader trend toward open-source AI. A report by IDC reveals that 60% of organisations are opting for open-source AI models over commercial alternatives for their generative AI (GenAI) projects.  

Endor Labs research further indicates that organisations use, on average, between seven and twenty-one open-source models per application. The reasoning is clear: leveraging the best model for specific tasks and controlling API costs.

“As of February 7th, Endor Labs found that more than 3,500 additional models have been trained or distilled from the original DeepSeek R1 model,” said Stiefel. “This shows both the energy in the open-source AI model community, and why security teams need to understand both a model’s lineage and its potential risks.”  

For Sobrier, the growing adoption of open-source AI models reinforces the need to evaluate their dependencies.

“We need to look at AI models as major dependencies that our software depends on. Companies need to ensure they are legally allowed to use these models but also that they are safe to use in terms of operational risks and supply chain risks, just like open-source libraries.”

He emphasised that any risks can extend to training data: “They need to be confident that the datasets used for training the LLM were not poisoned or had sensitive private information.”  

Building a systematic approach to AI model risk  

As open-source AI adoption accelerates, managing risk becomes ever more critical. Stiefel outlined a systematic approach centred around three key steps:  

  1. Discovery: Detect the AI models your organisation currently uses.  
  2. Evaluation: Review these models for potential risks, including security and operational concerns.  
  3. Response: Set and enforce guardrails to ensure safe and secure model adoption.  

“The key is finding the right balance between enabling innovation and managing risk,” Stiefel said. “We need to give software engineering teams latitude to experiment but must do so with full visibility. The security team needs line-of-sight and the insight to act.”  

Sobrier further argued that the community must develop best practices for safely building and adopting AI models. A shared methodology is needed to evaluate AI models across parameters such as security, quality, operational risks, and openness.

Beyond transparency: Measures for a responsible AI future  

To ensure the responsible growth of AI, the industry must adopt controls that operate across several vectors:  

  • SaaS models: Safeguarding employee use of hosted models.
  • API integrations: Developers embedding third-party APIs like DeepSeek into applications, which, through tools like OpenAI integrations, can switch deployment with just two lines of code.
  • Open-source models: Developers leveraging community-built models or creating their own models from existing foundations maintained by companies like DeepSeek.

Sobrier warned of complacency in the face of rapid AI progress. “The community needs to build best practices to develop safe and open AI models,” he advised, “and a methodology to rate them along security, quality, operational risks, and openness.”  

As Stiefel succinctly summarised: “Think about security across multiple vectors and implement the appropriate controls for each.”

See also: AI in 2025: Purpose-driven models, human integration, and more

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post Endor Labs: AI transparency vs ‘open-washing’ appeared first on AI News.

]]>
https://www.artificialintelligence-news.com/news/endor-labs-ai-transparency-vs-open-washing/feed/ 0
Eric Schmidt: AI misuse poses an ‘extreme risk’ https://www.artificialintelligence-news.com/news/eric-schmidt-ai-misuse-poses-extreme-risk/ https://www.artificialintelligence-news.com/news/eric-schmidt-ai-misuse-poses-extreme-risk/#respond Thu, 13 Feb 2025 12:17:38 +0000 https://www.artificialintelligence-news.com/?p=104423 Eric Schmidt, former CEO of Google, has warned that AI misuse poses an “extreme risk” and could do catastrophic harm. Speaking to BBC Radio 4’s Today programme, Schmidt cautioned that AI could be weaponised by extremists and “rogue states” such as North Korea, Iran, and Russia to “harm innocent people.” Schmidt expressed concern that rapid […]

The post Eric Schmidt: AI misuse poses an ‘extreme risk’ appeared first on AI News.

]]>
Eric Schmidt, former CEO of Google, has warned that AI misuse poses an “extreme risk” and could do catastrophic harm.

Speaking to BBC Radio 4’s Today programme, Schmidt cautioned that AI could be weaponised by extremists and “rogue states” such as North Korea, Iran, and Russia to “harm innocent people.”

Schmidt expressed concern that rapid AI advancements could be exploited to create weapons, including biological attacks. Highlighting the dangers, he said: “The real fears that I have are not the ones that most people talk about AI, I talk about extreme risk.”

Using a chilling analogy, Schmidt referenced the al-Qaeda leader responsible for the 9/11 attacks: “I’m always worried about the Osama bin Laden scenario, where you have some truly evil person who takes over some aspect of our modern life and uses it to harm innocent people.”

He emphasised the pace of AI development and its potential to be co-opted by nations or groups with malevolent intent.

“Think about North Korea, or Iran, or even Russia, who have some evil goal … they could misuse it and do real harm,” Schmidt warns.

Oversight without stifling innovation

Schmidt urged governments to closely monitor private tech companies pioneering AI research. While noting that tech leaders are generally aware of AI’s societal implications, they may make decisions based on different values from those of public officials.

“My experience with the tech leaders is that they do have an understanding of the impact they’re having, but they might make a different values judgement than the government would make.”

Schmidt also endorsed the export controls introduced under former US President Joe Biden last year to restrict the sale of advanced microchips. The measure is aimed at slowing the progress of geopolitical adversaries in AI research.  

Global divisions around preventing AI misuse

The tech veteran was in Paris when he made his remarks, attending the AI Action Summit, a two-day event that wrapped up on Tuesday.

The summit, attended by 57 countries, saw the announcement of an agreement on “inclusive” AI development. Signatories included major players like China, India, the EU, and the African Union.  

However, the UK and the US declined to sign the communique. The UK government said the agreement lacked “practical clarity” and failed to address critical “harder questions” surrounding national security. 

Schmidt cautioned against excessive regulation that might hinder progress in this transformative field. This was echoed by US Vice-President JD Vance who warned that heavy-handed regulation “would kill a transformative industry just as it’s taking off”.  

This reluctance to endorse sweeping international accords reflects diverging approaches to AI governance. The EU has championed a more restrictive framework for AI, prioritising consumer protections, while countries like the US and UK are opting for more agile and innovation-driven strategies. 

Schmidt pointed to the consequences of Europe’s tight regulatory stance, predicting that the region would miss out on pioneering roles in AI.

“The AI revolution, which is the most important revolution in my opinion since electricity, is not going to be invented in Europe,” he remarked.

Prioritising national and global safety

Schmidt’s comments come against a backdrop of increasing scrutiny over AI’s dual-use potential—its ability to be used for both beneficial and harmful purposes.

From deepfakes to autonomous weapons, AI poses a bevy of risks if left without measures to guard against misuse. Leaders and experts, including Schmidt, are advocating for a balanced approach that fosters innovation while addressing these dangers head-on.

While international cooperation remains a complex and contentious issue, the overarching consensus is clear: without safeguards, AI’s evolution could have unintended – and potentially catastrophic – consequences.

(Photo by Guillaume Paumier under CC BY 3.0 license. Cropped to landscape from original version.)

See also: NEPC: AI sprint risks environmental catastrophe

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post Eric Schmidt: AI misuse poses an ‘extreme risk’ appeared first on AI News.

]]>
https://www.artificialintelligence-news.com/news/eric-schmidt-ai-misuse-poses-extreme-risk/feed/ 0
DeepSeek ban? China data transfer boosts security concerns https://www.artificialintelligence-news.com/news/deepseek-ban-china-data-transfer-boosts-security-concerns/ https://www.artificialintelligence-news.com/news/deepseek-ban-china-data-transfer-boosts-security-concerns/#respond Fri, 07 Feb 2025 17:44:01 +0000 https://www.artificialintelligence-news.com/?p=104228 US lawmakers are pushing for a DeepSeek ban after security researchers found the app transferring user data to a banned state-owned company. DeepSeek, practically unknown just weeks ago, took the tech world by storm—gaining global acclaim for its cutting-edge performance while sparking debates reminiscent of the TikTok saga. Its rise has been fuelled in part […]

The post DeepSeek ban? China data transfer boosts security concerns appeared first on AI News.

]]>
US lawmakers are pushing for a DeepSeek ban after security researchers found the app transferring user data to a banned state-owned company.

DeepSeek, practically unknown just weeks ago, took the tech world by storm—gaining global acclaim for its cutting-edge performance while sparking debates reminiscent of the TikTok saga.

Its rise has been fuelled in part by its business model: unlike many of its American counterparts, including OpenAI and Google, DeepSeek offered its advanced powers for free.

However, concerns have been raised about DeepSeek’s extensive data collection practices and a probe has been launched by Microsoft and OpenAI over a breach of the latter’s system by a group allegedly linked to the Chinese AI startup.

A threat to US AI dominance

DeepSeek’s astonishing capabilities have, within a matter of weeks, positioned it as a major competitor to American AI stalwarts like OpenAI’s ChatGPT and Google Gemini. But, alongside the app’s prowess, concerns have emerged over alleged ties to the Chinese Communist Party (CCP).  

According to security researchers, hidden code within DeepSeek’s AI has been found transmitting user data to China Mobile—a state-owned telecoms company banned in the US. DeepSeek’s own privacy policy permits the collection of data such as IP addresses, device information, and, most alarmingly, even keystroke patterns.

Such findings have led to bipartisan efforts in the US Congress to curtail DeepSeek’s influence, with lawmakers scrambling to protect sensitive data from potential CCP oversight.

Reps. Darin LaHood (R-IL) and Josh Gottheimer (D-NJ) are spearheading efforts to introduce legislation that would prohibit DeepSeek from being installed on all government-issued devices. 

Several federal agencies, among them NASA and the US Navy, have already preemptively issued a ban on DeepSeek. Similarly, the state of Texas has also introduced restrictions.

Potential ban of DeepSeek a TikTok redux?

The controversy surrounding DeepSeek bears similarities to debates over TikTok, the social video app owned by Chinese company ByteDance. TikTok remains under fire over accusations that user data is accessible to the CCP, though definitive proof has yet to materialise.

In contrast, DeepSeek’s case involves clear evidence, as revealed by cybersecurity investigators who identified the app’s unauthorised data transmissions. While some might say DeepSeek echoes the TikTok controversy, security experts argue that it represents a much starker and documented threat.

Lawmakers around the world are taking note. In addition to the US proposals, DeepSeek has already faced bans from government systems in countries including Australia, South Korea, and Italy.  

AI becomes a geopolitical battleground

The concerns over DeepSeek exemplify how AI has now become a geopolitical flashpoint between global superpowers—especially between the US and China.

American AI firms like OpenAI have enjoyed a dominant position in recent years, but Chinese companies have poured resources into catching up and, in some cases, surpassing their US competitors.  

DeepSeek’s lightning-quick growth has unsettled that balance, not only because of its AI models but also due to its pricing strategy, which undercuts competitors by offering the app free of charge. That begs the question of whether it’s truly “free” or if the cost is paid in lost privacy and security.

China Mobile’s involvement raises further eyebrows, given the state-owned telecom company’s prior sanctions and prohibition from the US market. Critics worry that data collected through platforms like DeepSeek could fill gaps in Chinese surveillance activities or even potential economic manipulations.

A nationwide DeepSeek ban is on the cards

If the proposed US legislation is passed, it could represent the first step toward nationwide restrictions or an outright ban on DeepSeek. Geopolitical tension between China and the West continues to shape policies in advanced technologies, and AI appears to be the latest arena for this ongoing chess match.  

In the meantime, calls to regulate applications like DeepSeek are likely to grow louder. Conversations about data privacy, national security, and ethical boundaries in AI development are becoming ever more urgent as individuals and organisations across the globe navigate the promises and pitfalls of next-generation tools.  

DeepSeek’s rise may have, indeed, rattled the AI hierarchy, but whether it can maintain its momentum in the face of increasing global pushback remains to be seen.

(Photo by Solen Feyissa)

See also: AVAXAI brings DeepSeek to Web3 with decentralised AI agents

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post DeepSeek ban? China data transfer boosts security concerns appeared first on AI News.

]]>
https://www.artificialintelligence-news.com/news/deepseek-ban-china-data-transfer-boosts-security-concerns/feed/ 0
Microsoft and OpenAI probe alleged data theft by DeepSeek https://www.artificialintelligence-news.com/news/microsoft-and-openai-probe-alleged-data-theft-deepseek/ https://www.artificialintelligence-news.com/news/microsoft-and-openai-probe-alleged-data-theft-deepseek/#respond Wed, 29 Jan 2025 15:28:41 +0000 https://www.artificialintelligence-news.com/?p=17009 Microsoft and OpenAI are investigating a potential breach of the AI firm’s system by a group allegedly linked to Chinese AI startup DeepSeek. According to Bloomberg, the investigation stems from suspicious data extraction activity detected in late 2024 via OpenAI’s application programming interface (API), sparking broader concerns over international AI competition. Microsoft, OpenAI’s largest financial […]

The post Microsoft and OpenAI probe alleged data theft by DeepSeek appeared first on AI News.

]]>
Microsoft and OpenAI are investigating a potential breach of the AI firm’s system by a group allegedly linked to Chinese AI startup DeepSeek.

According to Bloomberg, the investigation stems from suspicious data extraction activity detected in late 2024 via OpenAI’s application programming interface (API), sparking broader concerns over international AI competition.

Microsoft, OpenAI’s largest financial backer, first identified the large-scale data extraction and informed the ChatGPT maker of the incident. Sources believe the activity may have violated OpenAI’s terms of service, or that the group may have exploited loopholes to bypass restrictions limiting how much data they could collect.

DeepSeek has quickly risen to prominence in the competitive AI landscape, particularly with the release of its latest model, R-1, on 20 January.

Billed as a rival to OpenAI’s ChatGPT in performance but developed at a significantly lower cost, R-1 has shaken up the tech industry. Its release triggered a sharp decline in tech and AI stocks that wiped billions from US markets in a single week.

David Sacks, the White House’s newly appointed “crypto and AI czar,” alleged that DeepSeek may have employed questionable methods to achieve its AI’s capabilities. In an interview with Fox News, Sacks noted evidence suggesting that DeepSeek had used “distillation” to train its AI models using outputs from OpenAI’s systems.

“There’s substantial evidence that what DeepSeek did here is they distilled knowledge out of OpenAI’s models, and I don’t think OpenAI is very happy about this,” Sacks told the network.  

Model distillation involves training one AI system using data generated by another, potentially allowing a competitor to develop similar functionality. This method, when applied without proper authorisation, has stirred ethical and intellectual property debates as the global race for AI supremacy heats up.  

OpenAI declined to comment specifically on the accusations against DeepSeek but acknowledged the broader risk posed by model distillation, particularly by Chinese companies.  

“We know PRC-based companies — and others — are constantly trying to distill the models of leading US AI companies,” a spokesperson for OpenAI told Bloomberg.  

Geopolitical and security concerns  

Growing tensions around AI innovation now extend into national security. CNBC reported that the US Navy has banned its personnel from using DeepSeek’s products, citing fears that the Chinese government could exploit the platform to access sensitive information.

In an email dated 24 January, the Navy warned its staff against using DeepSeek AI “in any capacity” due to “potential security and ethical concerns associated with the model’s origin and usage.”

Critics have highlighted DeepSeek’s privacy policy, which permits the collection of data such as IP addresses, device information, and even keystroke patterns—a scope of data collection considered excessive by some experts.

Earlier this week, DeepSeek stated it was facing “large-scale malicious attacks” against its systems. A banner on its website informed users of a temporary sign-up restriction.

The growing competition between the US and China in particular in the AI sector has underscored wider concerns regarding technological ownership, ethical governance, and national security.  

Experts warn that as AI systems advance and become increasingly integral to global economic and strategic planning, disputes over data usage and intellectual property are only likely to intensify. Accusations such as those against DeepSeek amplify alarm over China’s rapid development in the field and its potential quest to bypass US-led safeguards through reverse engineering and other means.  

While OpenAI and Microsoft continue their investigation into the alleged misuse of OpenAI’s platform, businesses and governments alike are paying close attention. The case could set a precedent for how AI developers police model usage and enforce terms of service.

For now, the response from both US and Chinese stakeholders highlights how AI innovation has become not just a race for technological dominance, but a fraught geopolitical contest that is shaping 21st-century power dynamics.

(Image by Mohamed Hassan)

See also: Qwen 2.5-Max outperforms DeepSeek V3 in some benchmarks

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post Microsoft and OpenAI probe alleged data theft by DeepSeek appeared first on AI News.

]]>
https://www.artificialintelligence-news.com/news/microsoft-and-openai-probe-alleged-data-theft-deepseek/feed/ 0
Cisco: Securing enterprises in the AI era https://www.artificialintelligence-news.com/news/cisco-securing-enterprises-in-the-ai-era/ https://www.artificialintelligence-news.com/news/cisco-securing-enterprises-in-the-ai-era/#respond Wed, 15 Jan 2025 16:02:18 +0000 https://www.artificialintelligence-news.com/?p=16883 As AI becomes increasingly integral to business operations, new safety concerns and security threats emerge at an unprecedented pace—outstripping the capabilities of traditional cybersecurity solutions. The stakes are high with potentially significant repercussions. According to Cisco’s 2024 AI Readiness Index, only 29% of surveyed organisations feel fully equipped to detect and prevent unauthorised tampering with […]

The post Cisco: Securing enterprises in the AI era appeared first on AI News.

]]>
As AI becomes increasingly integral to business operations, new safety concerns and security threats emerge at an unprecedented pace—outstripping the capabilities of traditional cybersecurity solutions.

The stakes are high with potentially significant repercussions. According to Cisco’s 2024 AI Readiness Index, only 29% of surveyed organisations feel fully equipped to detect and prevent unauthorised tampering with AI technologies.

Continuous model validation

DJ Sampath, Head of AI Software & Platform at Cisco, said: “When we talk about model validation, it is not just a one time thing, right? You’re doing the model validation on a continuous basis.

Headshot of DJ Sampath from Cisco for an article on securing enterprises in the AI era.

“So as you see changes happen to the model – if you’re doing any type of finetuning, or you discover new attacks that are starting to show up that you need the models to learn from – we’re constantly learning all of that information and revalidating the model to see how these models are behaving under these new attacks that we’ve discovered.

“The other very important point is that we have a really advanced threat research team which is constantly looking at these AI attacks and understanding how these attacks can further be enhanced. In fact, we’re, we’re, we’re contributing to the work groups inside of standards organisations like MITRE, OWASP, and NIST.”

Beyond preventing harmful outputs, Cisco addresses the vulnerabilities of AI models to malicious external influences that can change their behaviour. These risks include prompt injection attacks, jailbreaking, and training data poisoning—each demanding stringent preventive measures.

Evolution brings new complexities

Frank Dickson, Group VP for Security & Trust at IDC, gave his take on the evolution of cybersecurity over time and what advancements in AI mean for the industry.

“The first macro trend was that we moved from on-premise to the cloud and that introduced this whole host of new problem statements that we had to address. And then as applications move from monolithic to microservices, we saw this whole host of new problem sets.

Headshot of Frank Dickson from IDC for an article on securing enterprises in the AI era.

“AI and the addition of LLMs… same thing, whole host of new problem sets.”

The complexities of AI security are heightened as applications become multi-model. Vulnerabilities can arise at various levels – from models to apps – implicating different stakeholders such as developers, end-users, and vendors.

“Once an application moved from on-premise to the cloud, it kind of stayed there. Yes, we developed applications across multiple clouds, but once you put an application in AWS or Azure or GCP, you didn’t jump it across those various cloud environments monthly, quarterly, weekly, right?

“Once you move from monolithic application development to microservices, you stay there. Once you put an application in Kubernetes, you don’t jump back into something else.

“As you look to secure a LLM, the important thing to note is the model changes. And when we talk about model change, it’s not like it’s a revision … this week maybe [developers are] using Anthropic, next week they may be using Gemini.

“They’re completely different and the threat vectors of each model are completely different. They all have their strengths and they all have their dramatic weaknesses.”

Unlike conventional safety measures integrated into individual models, Cisco delivers controls for a multi-model environment through its newly-announced AI Defense. The solution is self-optimising, using Cisco’s proprietary machine learning algorithms to identify evolving AI safety and security concerns—informed by threat intelligence from Cisco Talos.

Adjusting to the new normal

Jeetu Patel, Executive VP and Chief Product Officer at Cisco, shared his view that major advancements in a short period of time always seem revolutionary but quickly feel normal.

Headshot of Jeetu Patel from Cisco for an article on securing enterprises in the AI era.

Waymo is, you know, self-driving cars from Google. You get in, and there’s no one sitting in the car, and it takes you from point A to point B. It feels mind-bendingly amazing, like we are living in the future. The second time, you kind of get used to it. The third time, you start complaining about the seats.

“Even how quickly we’ve gotten used to AI and ChatGPT over the course of the past couple years, I think what will happen is any major advancement will feel exceptionally progressive for a short period of time. Then there’s a normalisation that happens where everyone starts getting used to it.”

Patel believes that normalisation will happen with AGI as well. However, he notes that “you cannot underestimate the progress that these models are starting to make” and, ultimately, the kind of use cases they are going to unlock.

“No-one had thought that we would have a smartphone that’s gonna have more compute capacity than the mainframe computer at your fingertips and be able to do thousands of things on it at any point in time and now it’s just another way of life. My 14-year-old daughter doesn’t even think about it.

“We ought to make sure that we as companies get adjusted to that very quickly.”

See also: Sam Altman, OpenAI: ‘Lucky and humbling’ to work towards superintelligence

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post Cisco: Securing enterprises in the AI era appeared first on AI News.

]]>
https://www.artificialintelligence-news.com/news/cisco-securing-enterprises-in-the-ai-era/feed/ 0
Rethinking video surveillance: The case for smarter, more flexible solutions https://www.artificialintelligence-news.com/news/rethinking-video-surveillance-the-case-for-smarter-more-flexible-solutions/ https://www.artificialintelligence-news.com/news/rethinking-video-surveillance-the-case-for-smarter-more-flexible-solutions/#respond Thu, 02 Jan 2025 10:17:18 +0000 https://www.artificialintelligence-news.com/?p=16758 Video surveillance has come a long way from simple CCTV setups. Today’s businesses demand more – smarter analytics, enhanced security, and seamless scalability. As organisations adopt AI and automation across their operations, video management systems (VMS) face new challenges: These questions are not hypothetical. They represent real obstacles businesses face when managing video surveillance systems. […]

The post Rethinking video surveillance: The case for smarter, more flexible solutions appeared first on AI News.

]]>
Video surveillance has come a long way from simple CCTV setups. Today’s businesses demand more – smarter analytics, enhanced security, and seamless scalability. As organisations adopt AI and automation across their operations, video management systems (VMS) face new challenges:

  • How to keep video surveillance scalable and easy to manage?
  • Can AI analytics like face recognition or behaviour detection be integrated without breaking the budget?
  • Is my current system prepared for modern security risks?

These questions are not hypothetical. They represent real obstacles businesses face when managing video surveillance systems. Solving them requires innovative thinking, flexible tools, and a smarter approach to how systems are designed and operated.

The Shift to Smarter Surveillance

Traditional video surveillance systems often fail to meet the needs of dynamic, modern environments. Whether it’s a retail chain looking to analyse customer behaviour or a factory monitoring equipment safety, the tools of yesterday aren’t enough to address today’s demands.

The shift towards smarter surveillance involves integrating modular, AI-driven systems that:

  • Adapt to your specific needs,
  • Automate tedious tasks like footage analysis,
  • Offer advanced analytics, like emotion detection or license plate recognition,
  • Remain accessible to both tech-savvy professionals and beginners.

This isn’t just a technical shift; it’s a shift in mindset. Businesses now see surveillance not only as a security measure but as a strategic tool for operational insight.

Meet Xeoma: The modular approach to smarter surveillance

At the forefront of this smarter surveillance revolution is Xeoma, a modular, AI-powered video surveillance software that provides various solutions to challenges of modern businesses:

Modularity for customisation. Xeoma’s plug-and-play structure allows businesses to tailor their surveillance systems. Whether you need facial recognition, vehicle detection, or heatmaps of customer activity, Xeoma makes it easy to add or remove modules as needed.

AI-powered analytics: Xeoma offers cutting-edge features like:

  • Object recognition: Detect and classify objects like people, animals, and vehicles,
  • Voice-to-text: Transcribe spoken words into text,
  • Fire detection: Detect the presence of fire or smoke,
  • Licence plate recognition: Automatically read and record vehicle licence plates,
  • Age and gender recognition: Determine the age range and gender of individuals.

Ease of use: Unlike many systems with steep learning curves, Xeoma is designed to be user-friendly. Its intuitive interface ensures that even non-technical users can quickly set up and operate the software.

Seamless integration: Xeoma integrates with IoT devices, access control systems, and other third-party tools, making it an ideal choice for businesses looking to enhance their existing setups.

Cost efficiency: With Xeoma, you only pay once thanks to the lifetime licences. The pricing structure ensures that businesses of all sizes, from startups to enterprises, can find a solution that fits their budgets.

Unlimited scalability: Xeoma has no limitations in number of cameras it can work with. Either the system has tens, hundreds or thousands of cameras – Xeoma will handle them all

Encrypted communication: Xeoma uses secure communication protocols (HTTPS, SSL/TLS) to encrypt data transmitted between the server, cameras, and clients. The prevents unauthorised access during data transmission.

Xeoma’s flexible design and robust features allow it to be tailored to a wide range of scenarios, empowering organisations to meet their unique challenges while staying efficient, secure, and scalable.

How Xeoma benefits your business: Scenarios

Xeoma isn’t just a tool for security – it’s a versatile platform that adapts to your environment, whether you run a small retail store, manage a factory floor, or oversee an entire urban surveillance network.

Retail: Elevating customer experience

Picture this: You manage a busy store where you need to understand peak traffic hours and monitor for shoplifting. With Xeoma one can:

  • Deploy AI-based ‘face recognition’ to discreetly flag known shoplifters or VIP customers to enhance service,
  • Use ‘visitors counter’ and ‘crowd detector’ to identify when foot traffic is highest and allocate staff accordingly,
  • Analyse heatmaps to see which areas of the store attract the most attention, optimising product placement,
  • Add ‘unique visitors counter’ module to your system to group people by frequency of attendance. At the same time, age and gender recognition will assist you in tailoring your promo more accurately,
  • Enhance the results of your marketing efforts with eye tracking by getting insights into human psychology.

Manufacturing: Ensuring workplace safety

On a bustling factory floor, every second matters, and safety is critical. Xeoma can help by:

  • Detecting if workers are in restricted zones using ‘cross-line detector,’
  • Monitoring compliance with safety protocols with helmet and mask detectors.
  • Sending real-time alerts to supervisors about potential hazards, like machinery malfunctions or unauthorised access, via a plethora of means from push notifications to personalised alerts,
  • Elevating trust and satisfaction levels with timelapse and streaming to YouTube.

Urban surveillance: Protecting communities

If you’re part of a city planning team or law enforcement agency, Xeoma scales effortlessly to monitor entire districts:

  • Use licence plate recognition to track vehicles entering and exiting restricted areas,
  • Automate responses to emergencies, from traffic incidents and rule violations (for example, speeding, passing on red traffic light or illegal parking detectors) to public safety threats,
  • Identify suspicious behaviour in crowded public spaces using ‘loitering detector,’
  • Detect graffiti and ads that have prohibited words like “drugs” with text recognition,
  • Recognise faces to find wanted or missing people with face identification.

Education: Safeguarding schools

For schools and universities, safety is a top priority. Xeoma provides:

  • AI alerts with ‘detector of abandoned objects’ and ‘sound detector’ for detecting unattended bags or abnormal behaviour, ensuring quick response times,
  • Smoke and fire detection that allows you to prevent or promptly respond to the body of fire.
  • Smart automated verification with ‘smart-card reader’ and ‘face ID’ that help to avoid the penetration by unauthorised persons,
  • Integration with existing access control systems via API or HTTP protocol for a seamless security solution,
  • Live streaming to your educational entity website or YouTube can enhance parental engagement or build a positive image, while eye tracking serves as an effective anti-cheat solution in monitoring systems.

Hospitality: Enhancing guest experiences

In the hospitality industry, guest satisfaction is everything. Xeoma helps you:

  • Monitor entrances and exits with access control integration for smooth check-ins and check-outs,
  • Use ’emotion detector’ to gauge customer satisfaction in common areas,
  • Ensure staff compliance with protocols to maintain service quality with ‘voice-to-text’ module.

Conclusion: Connecting Xeoma to your vision

Every business has its unique challenges, and Xeoma’s versatility means it can be the solution you need to overcome yours. Imagine running a business where:

  • Your team has actionable insights at their fingertips,
  • Potential threats are flagged before they escalate,
  • Your surveillance system doesn’t just protect – it empowers decision-making and growth.

Xeoma isn’t just about surveillance; it’s about giving you peace of mind, actionable intelligence, and the flexibility to focus on what matters most – your people, your customers, and your vision for the future.

Whether you’re securing a retail space, safeguarding a factory, or protecting an entire community, Xeoma’s modular, AI-powered platform adapts to your goals and grows alongside you.

Ready to see how Xeoma can transform your video surveillance strategy? Explore a free demo and start building your ideal system today.

The post Rethinking video surveillance: The case for smarter, more flexible solutions appeared first on AI News.

]]>
https://www.artificialintelligence-news.com/news/rethinking-video-surveillance-the-case-for-smarter-more-flexible-solutions/feed/ 0
CrowdStrike: Cybersecurity pros want safer, specialist GenAI tools https://www.artificialintelligence-news.com/news/crowdstrike-cybersecurity-pros-safer-specialist-genai-tools/ https://www.artificialintelligence-news.com/news/crowdstrike-cybersecurity-pros-safer-specialist-genai-tools/#respond Tue, 17 Dec 2024 13:00:13 +0000 https://www.artificialintelligence-news.com/?p=16724 CrowdStrike commissioned a survey of 1,022 cybersecurity professionals worldwide to assess their views on generative AI (GenAI) adoption and its implications. The findings reveal enthusiasm for GenAI’s potential to bolster defences against increasingly sophisticated threats, but also trepidation over risks such as data exposure and attacks on GenAI systems. While much has been speculated about […]

The post CrowdStrike: Cybersecurity pros want safer, specialist GenAI tools appeared first on AI News.

]]>
CrowdStrike commissioned a survey of 1,022 cybersecurity professionals worldwide to assess their views on generative AI (GenAI) adoption and its implications.

The findings reveal enthusiasm for GenAI’s potential to bolster defences against increasingly sophisticated threats, but also trepidation over risks such as data exposure and attacks on GenAI systems.

While much has been speculated about the transformative impact of GenAI, the survey’s results paint a clearer picture of how practitioners are thinking about its role in cybersecurity.

According to the report, “We’re entering the era of GenAI in cybersecurity.” However, as organisations adopt this promising technology, their success will hinge on ensuring the safe, responsible, and industry-specific deployment of GenAI tools.

CrowdStrike’s research reveals five pivotal findings that shape the current state of GenAI in cybersecurity:

  1. Platform-based GenAI is favoured 

80% of respondents indicated a preference for GenAI delivered through integrated cybersecurity platforms rather than standalone tools. Seamless integration is cited as a crucial factor, with many preferring tools that work cohesively with existing systems. “GenAI’s value is linked to how well it works within the broader technology ecosystem,” the report states. 

Moreover, almost two-thirds (63%) of those surveyed expressed willingness to switch security vendors to access GenAI capabilities from competitors. The survey underscores the industry’s readiness for unified platforms that streamline operations and reduce the complexity of adopting new point solutions.

  1. GenAI built by cybersecurity experts is a must

Security teams believe GenAI tools should be specifically designed for cybersecurity, not general-purpose systems. 83% of respondents reported they would not trust tools that provide “unsuitable or ill-advised security guidance.”

Breach prevention remains a key motivator, with 74% stating they had faced breaches within the past 18 months or were concerned about vulnerabilities. Respondents prioritised tools from vendors with proven expertise in cybersecurity, incident response, and threat intelligence over suppliers with broad AI leadership alone. 

As CrowdStrike summarised, “The emphasis on breach prevention and vendor expertise suggests security teams would avoid domain-agnostic GenAI tools.”

  1. Augmentation, not replacement 

Despite growing fears of automation replacing jobs in many industries, the survey’s findings indicate minimal concerns about job displacement in cybersecurity. Instead, respondents expect GenAI to empower security analysts by automating repetitive tasks, reducing burnout, onboarding new personnel faster, and accelerating decision-making.

GenAI’s potential for augmenting analysts’ workflows was underscored by its most requested applications: threat intelligence analysis, assistance with investigations, and automated response mechanisms. As noted in the report, “Respondents overwhelmingly believe GenAI will ultimately optimise the analyst experience, not replace human labour.”

  1. ROI outweighs cost concerns  

For organisations evaluating GenAI investments, measurable return on investment (ROI) is the paramount concern, ahead of licensing costs or pricing model confusion. Respondents expect platform-led GenAI deployments to deliver faster results, thanks to cost savings from reduced tool management burdens, streamlined training, and fewer security incidents.

According to the survey data, the expected ROI breakdown includes 31% from cost optimisation and more efficient tools, 30% from fewer incidents, and 26% from reduced management time. Security leaders are clearly focused on ensuring the financial justification for GenAI investments.

  1. Guardrails and safety are crucial 

GenAI adoption is tempered by concerns around safety and privacy, with 87% of organisations either implementing or planning new security policies to oversee GenAI use. Key risks include exposing sensitive data to large language models (LLMs) and adversarial attacks on GenAI tools. Respondents rank safety and privacy controls among their most desired GenAI features, highlighting the need for responsible implementation.

Reflecting the cautious optimism of practitioners, only 39% of respondents firmly believed that the rewards of GenAI outweigh its risks. Meanwhile, 40% considered the risks and rewards “comparable.”

Current state of GenAI adoption in cybersecurity

GenAI adoption remains in its early stages, but interest is growing. 64% of respondents are actively researching or have already invested in GenAI tools, and 69% of those currently evaluating their options plan to make a purchase within the year. 

Security teams are primarily driven by three concerns: improving attack detection and response, enhancing operational efficiency, and mitigating the impact of staff shortages. Among economic considerations, the top priority is ROI – a sign that security leaders are keen to demonstrate tangible benefits to justify their spending.

CrowdStrike emphasises the importance of a platform-based approach, where GenAI is integrated into a unified system. Such platforms enable seamless adoption, measurable benefits, and safety guardrails for responsible usage. According to the report, “The future of GenAI in cybersecurity will be defined by tools that not only advance security but also uphold the highest standards of safety and privacy.”

The CrowdStrike survey concludes by affirming that “GenAI is not a silver bullet” but has tremendous potential to improve cybersecurity outcomes. As organisations evaluate its adoption, they will prioritise tools that integrate seamlessly with existing platforms, deliver faster response times, and ensure safety and privacy compliance.

With threats becoming more sophisticated, the role of GenAI in enabling security teams to work faster and smarter could prove indispensable. While still in its infancy, GenAI in cybersecurity is poised to shift from early adoption to mainstream deployment, provided organisations and vendors address its risks responsibly.

See also: Keys to AI success: Security, sustainability, and overcoming silos

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post CrowdStrike: Cybersecurity pros want safer, specialist GenAI tools appeared first on AI News.

]]>
https://www.artificialintelligence-news.com/news/crowdstrike-cybersecurity-pros-safer-specialist-genai-tools/feed/ 0
NJ cops demand protections against data brokers https://www.artificialintelligence-news.com/news/nj-cops-demand-protections-against-data-brokers/ https://www.artificialintelligence-news.com/news/nj-cops-demand-protections-against-data-brokers/#respond Mon, 16 Dec 2024 18:25:08 +0000 https://www.artificialintelligence-news.com/?p=16711 Privacy laws in the United States are a patchwork at best. More often than not, they miss the mark, leaving most people with little actual privacy. When such laws are enacted, they can seem tailored to protect those in positions of power. Even laws designed to protect crime victims might end up protecting the names […]

The post NJ cops demand protections against data brokers appeared first on AI News.

]]>
Privacy laws in the United States are a patchwork at best. More often than not, they miss the mark, leaving most people with little actual privacy. When such laws are enacted, they can seem tailored to protect those in positions of power.

Even laws designed to protect crime victims might end up protecting the names of abusive officers by labelling them as victims of crime in cases like resisting arrest or assaulting an officer. Such accusations are often used in cases of excessive force, keeping cops’ names out of the spotlight.

For example, a recent New Jersey law emerged from a tragic event in which a government employee faced violence, sparking a legislative response. Known as “Daniel’s Law,” it was created after the personal information of a federal judge’s family was used by a murderer to track them down. Instead of a broader privacy law that could protect all residents of New Jersey, it focused exclusively on safeguarding certain public employees.

Under the law, judges, prosecutors, and police officers can request that their personal information (addresses and phone numbers, for example) be scrubbed from public databases. Popular services that people use to look up information, such as Whitepages or Spokeo, must comply. While this sounds like a win for privacy, the protections stop there. The average citizen is still left exposed, with no legal recourse if their personal data is misused or sold.

At the centre of the debate is a lawyer who’s taken up the cause of protecting cops’ personal data. He’s suing numerous companies for making this type of information accessible. While noble at first glance, a deeper look raises questions.

It transpires that the lawyer’s company has previously collected and monetised personal data. And when a data service responded to his demands by freezing access to some of the firm’s databases, he and his clients cried foul — despite specifically requesting restrictions on how their information could be used.

It’s also worth noting how unevenly data protection measures are to be applied. Cops, for instance, frequently rely on the same tools and databases they’re now asking to be restricted. These services have long been used by law enforcement for investigations and running background checks. Yet, when law enforcement data appears in such systems, special treatment is required.

A recent anecdote involved a police union leader who was shown a simple property record pulled from an online database. The record displayed basic details like his home address and his property’s square footage — information anyone could find with a few clicks. His reaction was one of shock and anger – an obvious disconnect.

For everyday citizens, this level of data exposure is a given. But for law enforcement, it requires a level of granular exclusion that’s not practical.

Perhaps everyone, including law enforcement personnel deserves better safeguards against data harvesting and misuse? But what Daniel’s law and later events involving police officers point to is the need for the type of improvements to the way data is treated for all, not just one group of society.

Instead of expanding privacy rights to all New Jersey residents, the law carves out exceptions for the powerful — leaving the rest of the population as vulnerable as ever.

(Photo by Unsplash)

See also: EU AI legislation sparks controversy over data transparency

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post NJ cops demand protections against data brokers appeared first on AI News.

]]>
https://www.artificialintelligence-news.com/news/nj-cops-demand-protections-against-data-brokers/feed/ 0
Keys to AI success: Security, sustainability, and overcoming silos https://www.artificialintelligence-news.com/news/keys-ai-success-security-sustainability-overcoming-silos/ https://www.artificialintelligence-news.com/news/keys-ai-success-security-sustainability-overcoming-silos/#respond Wed, 11 Dec 2024 12:06:10 +0000 https://www.artificialintelligence-news.com/?p=16687 NetApp has shed light on the pressing issues faced by organisations globally as they strive to optimise their strategies for AI success. “2025 is shaping up to be a defining year for AI, as organisations transition from experimentation to scaling their AI capabilities,” said Gabie Boko, NetApp’s Chief Marketing Officer. “Businesses are making significant investments […]

The post Keys to AI success: Security, sustainability, and overcoming silos appeared first on AI News.

]]>
NetApp has shed light on the pressing issues faced by organisations globally as they strive to optimise their strategies for AI success.

“2025 is shaping up to be a defining year for AI, as organisations transition from experimentation to scaling their AI capabilities,” said Gabie Boko, NetApp’s Chief Marketing Officer.

“Businesses are making significant investments to drive innovation and efficiency, but these efforts will succeed only if global tech executives can address the mounting challenges of data complexity, security, and sustainability.”

The findings of NetApp’s latest Data Complexity Report paints a detailed picture of where businesses currently stand on their AI journeys and the key trends that will shape the technology’s future.

Cost of transformation

Two-thirds of businesses worldwide claim their data is “fully or mostly optimised” for AI purposes, highlighting vast improvements in making data accessible, accurate, and well-documented. Yet, the study reveals that the journey towards AI maturity requires further significant investment.

A striking 40% of global technology executives anticipate “unprecedented investment” will be necessary in 2025 just to enhance AI and data management capabilities.

While considerable progress has been made, achieving impactful breakthroughs demands an even greater commitment in financial and infrastructural resources. Catching up with AI’s potential might not come cheap, but leaders prepared to invest could reap significant rewards in innovation and efficiency.

Data silos impede AI success

One of the principal barriers identified in the report is the fragmentation of data. An overwhelming 79% of global tech executives state that unifying their data, reducing silos and ensuring smooth interconnectedness, is key to unlocking AI’s full potential.

Companies that have embraced unified data storage are better placed to overcome this hurdle. By connecting data regardless of its type or location (across hybrid multi-cloud environments,) they ensure constant accessibility and minimise fragmentation.

The report indicates that organisations prioritising data unification are significantly more likely to meet their AI goals in 2025. Nearly one-third (30%) of businesses failing to prioritise unification foresee missing their targets, compared to just 23% for those placing this at the heart of their strategy.

Executives have doubled down on data management and infrastructure as top priorities, increasingly recognising that optimising their capacity to gather, store, and process information is essential for AI maturity. Companies refusing to tackle these data challenges risk falling behind in an intensely competitive global market.

Scaling risks of AI

As businesses accelerate their AI adoption, the associated risks – particularly around security – are becoming more acute. More than two-fifths (41%) of global tech executives predict a stark rise in security threats by 2025 as AI becomes integral to more facets of their operations.

AI’s rapid rise has expanded attack surfaces, exposing data sets to new vulnerabilities and creating unique challenges such as protecting sensitive AI models. Countries leading the AI race, including India, the US, and Japan, are nearly twice as likely to encounter escalating security concerns compared to less AI-advanced nations like Germany, France, and Spain.

Increased awareness of AI-driven security challenges is reflected in business priorities. Over half (59%) of global executives name cybersecurity as one of the top stressors confronting organisations today.

However, progress is being made. Despite elevated concerns, the report suggests that effective security measures are yielding results. Since 2023, the number of executives ranking cybersecurity and ransomware protection as their top priority has fallen by 17%, signalling optimism in combating these risks effectively.

Limiting AI’s environmental costs

Beyond security risks, AI’s growth is raising urgent questions of sustainability. Over one-third of global technology executives (34%) predict that AI advancements will drive significant changes to corporate sustainability practices. Meanwhile, 33% foresee new government policies and investments targeting energy usage.

The infrastructure powering AI and transforming raw data into business value demands significant energy, counteracting organisational sustainability targets. AI-heavy nations often feel the environmental impact more acutely than their less AI-focused counterparts.

While 72% of businesses still prioritise carbon footprint reduction, the report notes a decline from 84% in 2023, pointing to increasing tension between sustainability commitments and the relentless march of innovation. For organisations to scale AI without causing irreparable damage to the planet, maintaining environmental responsibility alongside technological growth will be paramount in coming years.

Krish Vitaldevara, SVP and GM at NetApp, commented: “The organisations leading in advanced analytics and AI are those that have unified and well-cataloged data, robust security and compliance for sensitive information, and a clear understanding of how data evolves.

“By tackling these challenges, they can drive innovation while ensuring resilience, responsibility, and timely insights in the new AI era.”

You can find a full copy of NetApp’s report here (PDF)

(Photo by Chunli Ju)

See also: New AI training techniques aim to overcome current challenges

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post Keys to AI success: Security, sustainability, and overcoming silos appeared first on AI News.

]]>
https://www.artificialintelligence-news.com/news/keys-ai-success-security-sustainability-overcoming-silos/feed/ 0
UK establishes LASR to counter AI security threats https://www.artificialintelligence-news.com/news/uk-establishes-lasr-counter-ai-security-threats/ https://www.artificialintelligence-news.com/news/uk-establishes-lasr-counter-ai-security-threats/#respond Mon, 25 Nov 2024 11:31:13 +0000 https://www.artificialintelligence-news.com/?p=16550 The UK is establishing the Laboratory for AI Security Research (LASR) to help protect Britain and its allies against emerging threats in what officials describe as an “AI arms race.” The laboratory – which will receive an initial government funding of £8.22 million – aims to bring together experts from industry, academia, and government to […]

The post UK establishes LASR to counter AI security threats appeared first on AI News.

]]>
The UK is establishing the Laboratory for AI Security Research (LASR) to help protect Britain and its allies against emerging threats in what officials describe as an “AI arms race.”

The laboratory – which will receive an initial government funding of £8.22 million – aims to bring together experts from industry, academia, and government to assess AI’s impact on national security. The announcement comes as part of a broader strategy to strengthen the UK’s cyber defence capabilities.

Speaking at the NATO Cyber Defence Conference at Lancaster House, the Chancellor of the Duchy of Lancaster said: “NATO needs to continue to adapt to the world of AI, because as the tech evolves, the threat evolves.

“NATO has stayed relevant over the last seven decades by constantly adapting to new threats. It has navigated the worlds of nuclear proliferation and militant nationalism. The move from cold warfare to drone warfare.”

The Chancellor painted a stark picture of the current cyber security landscape, stating: “Cyber war is now a daily reality. One where our defences are constantly being tested. The extent of the threat must be matched by the strength of our resolve to combat it and to protect our citizens and systems.”

The new laboratory will operate under a ‘catalytic’ model, designed to attract additional investment and collaboration from industry partners.

Key stakeholders in the new lab include GCHQ, the National Cyber Security Centre, the MOD’s Defence Science and Technology Laboratory, and prestigious academic institutions such as the University of Oxford and Queen’s University Belfast.

In a direct warning about Russia’s activities, the Chancellor declared: “Be in no doubt: the United Kingdom and others in this room are watching Russia. We know exactly what they are doing, and we are countering their attacks both publicly and behind the scenes.

“We know from history that appeasing dictators engaged in aggression against their neighbours only encourages them. Britain learned long ago the importance of standing strong in the face of such actions.”

Reaffirming support for Ukraine, he added, “Putin is a man who wants destruction, not peace. He is trying to deter our support for Ukraine with his threats. He will not be successful.”

The new lab follows recent concerns about state actors using AI to bolster existing security threats.

“Last year, we saw the US for the first time publicly call out a state for using AI to aid its malicious cyber activity,” the Chancellor noted, referring to North Korea’s attempts to use AI for malware development and vulnerability scanning.

Stephen Doughty, Minister for Europe, North America and UK Overseas Territories, highlighted the dual nature of AI technology: “AI has enormous potential. To ensure it remains a force for good in the world, we need to understand its threats and its opportunities.”

Alongside LASR, the government announced a new £1 million incident response project to enhance collaborative cyber defence capabilities among allies. The laboratory will prioritise collaboration with Five Eyes countries and NATO allies, building on the UK’s historical strength in computing, dating back to Alan Turing’s groundbreaking work.

The initiative forms part of the government’s comprehensive approach to cybersecurity, which includes the upcoming Cyber Security and Resilience Bill and the recent classification of data centres as critical national infrastructure.

(Photo by Erik Mclean)

See also: Anthropic urges AI regulation to avoid catastrophes

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post UK establishes LASR to counter AI security threats appeared first on AI News.

]]>
https://www.artificialintelligence-news.com/news/uk-establishes-lasr-counter-ai-security-threats/feed/ 0