infosec Archives - AI News https://www.artificialintelligence-news.com/news/tag/infosec/ Artificial Intelligence News Wed, 30 Apr 2025 13:35:24 +0000 en-GB hourly 1 https://wordpress.org/?v=6.8.1 https://www.artificialintelligence-news.com/wp-content/uploads/2020/09/cropped-ai-icon-32x32.png infosec Archives - AI News https://www.artificialintelligence-news.com/news/tag/infosec/ 32 32 Meta beefs up AI security with new Llama tools  https://www.artificialintelligence-news.com/news/meta-beefs-up-ai-security-new-llama-tools/ https://www.artificialintelligence-news.com/news/meta-beefs-up-ai-security-new-llama-tools/#respond Wed, 30 Apr 2025 13:35:22 +0000 https://www.artificialintelligence-news.com/?p=106233 If you’re building with AI, or trying to defend against the less savoury side of the technology, Meta just dropped new Llama security tools. The improved security tools for the Llama AI models arrive alongside fresh resources from Meta designed to help cybersecurity teams harness AI for defence. It’s all part of their push to […]

The post Meta beefs up AI security with new Llama tools  appeared first on AI News.

]]>
If you’re building with AI, or trying to defend against the less savoury side of the technology, Meta just dropped new Llama security tools.

The improved security tools for the Llama AI models arrive alongside fresh resources from Meta designed to help cybersecurity teams harness AI for defence. It’s all part of their push to make developing and using AI a bit safer for everyone involved.

Developers working with the Llama family of models now have some upgraded kit to play with. You can grab these latest Llama Protection tools directly from Meta’s own Llama Protections page, or find them where many developers live: Hugging Face and GitHub.

First up is Llama Guard 4. Think of it as an evolution of Meta’s customisable safety filter for AI. The big news here is that it’s now multimodal so it can understand and apply safety rules not just to text, but to images as well. That’s crucial as AI applications get more visual. This new version is also being baked into Meta’s brand-new Llama API, which is currently in a limited preview.

Then there’s LlamaFirewall. This is a new piece of the puzzle from Meta, designed to act like a security control centre for AI systems. It helps manage different safety models working together and hooks into Meta’s other protection tools. Its job? To spot and block the kind of risks that keep AI developers up at night – things like clever ‘prompt injection’ attacks designed to trick the AI, potentially dodgy code generation, or risky behaviour from AI plug-ins.

Meta has also given its Llama Prompt Guard a tune-up. The main Prompt Guard 2 (86M) model is now better at sniffing out those pesky jailbreak attempts and prompt injections. More interestingly, perhaps, is the introduction of Prompt Guard 2 22M.

Prompt Guard 2 22M is a much smaller, nippier version. Meta reckons it can slash latency and compute costs by up to 75% compared to the bigger model, without sacrificing too much detection power. For anyone needing faster responses or working on tighter budgets, that’s a welcome addition.

But Meta isn’t just focusing on the AI builders; they’re also looking at the cyber defenders on the front lines of digital security. They’ve heard the calls for better AI-powered tools to help in the fight against cyberattacks, and they’re sharing some updates aimed at just that.

The CyberSec Eval 4 benchmark suite has been updated. This open-source toolkit helps organisations figure out how good AI systems actually are at security tasks. This latest version includes two new tools:

  • CyberSOC Eval: Built with the help of cybersecurity experts CrowdStrike, this framework specifically measures how well AI performs in a real Security Operation Centre (SOC) environment. It’s designed to give a clearer picture of AI’s effectiveness in threat detection and response. The benchmark itself is coming soon.
  • AutoPatchBench: This benchmark tests how good Llama and other AIs are at automatically finding and fixing security holes in code before the bad guys can exploit them.

To help get these kinds of tools into the hands of those who need them, Meta is kicking off the Llama Defenders Program. This seems to be about giving partner companies and developers special access to a mix of AI solutions – some open-source, some early-access, some perhaps proprietary – all geared towards different security challenges.

As part of this, Meta is sharing an AI security tool they use internally: the Automated Sensitive Doc Classification Tool. It automatically slaps security labels on documents inside an organisation. Why? To stop sensitive info from walking out the door, or to prevent it from being accidentally fed into an AI system (like in RAG setups) where it could be leaked.

They’re also tackling the problem of fake audio generated by AI, which is increasingly used in scams. The Llama Generated Audio Detector and Llama Audio Watermark Detector are being shared with partners to help them spot AI-generated voices in potential phishing calls or fraud attempts. Companies like ZenDesk, Bell Canada, and AT&T are already lined up to integrate these.

Finally, Meta gave a sneak peek at something potentially huge for user privacy: Private Processing. This is new tech they’re working on for WhatsApp. The idea is to let AI do helpful things like summarise your unread messages or help you draft replies, but without Meta or WhatsApp being able to read the content of those messages.

Meta is being quite open about the security side, even publishing their threat model and inviting security researchers to poke holes in the architecture before it ever goes live. It’s a sign they know they need to get the privacy aspect right.

Overall, it’s a broad set of AI security announcements from Meta. They’re clearly trying to put serious muscle behind securing the AI they build, while also giving the wider tech community better tools to build safely and defend effectively.

See also: Alarming rise in AI-powered scams: Microsoft reveals $4B in thwarted fraud

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post Meta beefs up AI security with new Llama tools  appeared first on AI News.

]]>
https://www.artificialintelligence-news.com/news/meta-beefs-up-ai-security-new-llama-tools/feed/ 0
DeepSeek ban? China data transfer boosts security concerns https://www.artificialintelligence-news.com/news/deepseek-ban-china-data-transfer-boosts-security-concerns/ https://www.artificialintelligence-news.com/news/deepseek-ban-china-data-transfer-boosts-security-concerns/#respond Fri, 07 Feb 2025 17:44:01 +0000 https://www.artificialintelligence-news.com/?p=104228 US lawmakers are pushing for a DeepSeek ban after security researchers found the app transferring user data to a banned state-owned company. DeepSeek, practically unknown just weeks ago, took the tech world by storm—gaining global acclaim for its cutting-edge performance while sparking debates reminiscent of the TikTok saga. Its rise has been fuelled in part […]

The post DeepSeek ban? China data transfer boosts security concerns appeared first on AI News.

]]>
US lawmakers are pushing for a DeepSeek ban after security researchers found the app transferring user data to a banned state-owned company.

DeepSeek, practically unknown just weeks ago, took the tech world by storm—gaining global acclaim for its cutting-edge performance while sparking debates reminiscent of the TikTok saga.

Its rise has been fuelled in part by its business model: unlike many of its American counterparts, including OpenAI and Google, DeepSeek offered its advanced powers for free.

However, concerns have been raised about DeepSeek’s extensive data collection practices and a probe has been launched by Microsoft and OpenAI over a breach of the latter’s system by a group allegedly linked to the Chinese AI startup.

A threat to US AI dominance

DeepSeek’s astonishing capabilities have, within a matter of weeks, positioned it as a major competitor to American AI stalwarts like OpenAI’s ChatGPT and Google Gemini. But, alongside the app’s prowess, concerns have emerged over alleged ties to the Chinese Communist Party (CCP).  

According to security researchers, hidden code within DeepSeek’s AI has been found transmitting user data to China Mobile—a state-owned telecoms company banned in the US. DeepSeek’s own privacy policy permits the collection of data such as IP addresses, device information, and, most alarmingly, even keystroke patterns.

Such findings have led to bipartisan efforts in the US Congress to curtail DeepSeek’s influence, with lawmakers scrambling to protect sensitive data from potential CCP oversight.

Reps. Darin LaHood (R-IL) and Josh Gottheimer (D-NJ) are spearheading efforts to introduce legislation that would prohibit DeepSeek from being installed on all government-issued devices. 

Several federal agencies, among them NASA and the US Navy, have already preemptively issued a ban on DeepSeek. Similarly, the state of Texas has also introduced restrictions.

Potential ban of DeepSeek a TikTok redux?

The controversy surrounding DeepSeek bears similarities to debates over TikTok, the social video app owned by Chinese company ByteDance. TikTok remains under fire over accusations that user data is accessible to the CCP, though definitive proof has yet to materialise.

In contrast, DeepSeek’s case involves clear evidence, as revealed by cybersecurity investigators who identified the app’s unauthorised data transmissions. While some might say DeepSeek echoes the TikTok controversy, security experts argue that it represents a much starker and documented threat.

Lawmakers around the world are taking note. In addition to the US proposals, DeepSeek has already faced bans from government systems in countries including Australia, South Korea, and Italy.  

AI becomes a geopolitical battleground

The concerns over DeepSeek exemplify how AI has now become a geopolitical flashpoint between global superpowers—especially between the US and China.

American AI firms like OpenAI have enjoyed a dominant position in recent years, but Chinese companies have poured resources into catching up and, in some cases, surpassing their US competitors.  

DeepSeek’s lightning-quick growth has unsettled that balance, not only because of its AI models but also due to its pricing strategy, which undercuts competitors by offering the app free of charge. That begs the question of whether it’s truly “free” or if the cost is paid in lost privacy and security.

China Mobile’s involvement raises further eyebrows, given the state-owned telecom company’s prior sanctions and prohibition from the US market. Critics worry that data collected through platforms like DeepSeek could fill gaps in Chinese surveillance activities or even potential economic manipulations.

A nationwide DeepSeek ban is on the cards

If the proposed US legislation is passed, it could represent the first step toward nationwide restrictions or an outright ban on DeepSeek. Geopolitical tension between China and the West continues to shape policies in advanced technologies, and AI appears to be the latest arena for this ongoing chess match.  

In the meantime, calls to regulate applications like DeepSeek are likely to grow louder. Conversations about data privacy, national security, and ethical boundaries in AI development are becoming ever more urgent as individuals and organisations across the globe navigate the promises and pitfalls of next-generation tools.  

DeepSeek’s rise may have, indeed, rattled the AI hierarchy, but whether it can maintain its momentum in the face of increasing global pushback remains to be seen.

(Photo by Solen Feyissa)

See also: AVAXAI brings DeepSeek to Web3 with decentralised AI agents

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post DeepSeek ban? China data transfer boosts security concerns appeared first on AI News.

]]>
https://www.artificialintelligence-news.com/news/deepseek-ban-china-data-transfer-boosts-security-concerns/feed/ 0
Cisco: Securing enterprises in the AI era https://www.artificialintelligence-news.com/news/cisco-securing-enterprises-in-the-ai-era/ https://www.artificialintelligence-news.com/news/cisco-securing-enterprises-in-the-ai-era/#respond Wed, 15 Jan 2025 16:02:18 +0000 https://www.artificialintelligence-news.com/?p=16883 As AI becomes increasingly integral to business operations, new safety concerns and security threats emerge at an unprecedented pace—outstripping the capabilities of traditional cybersecurity solutions. The stakes are high with potentially significant repercussions. According to Cisco’s 2024 AI Readiness Index, only 29% of surveyed organisations feel fully equipped to detect and prevent unauthorised tampering with […]

The post Cisco: Securing enterprises in the AI era appeared first on AI News.

]]>
As AI becomes increasingly integral to business operations, new safety concerns and security threats emerge at an unprecedented pace—outstripping the capabilities of traditional cybersecurity solutions.

The stakes are high with potentially significant repercussions. According to Cisco’s 2024 AI Readiness Index, only 29% of surveyed organisations feel fully equipped to detect and prevent unauthorised tampering with AI technologies.

Continuous model validation

DJ Sampath, Head of AI Software & Platform at Cisco, said: “When we talk about model validation, it is not just a one time thing, right? You’re doing the model validation on a continuous basis.

Headshot of DJ Sampath from Cisco for an article on securing enterprises in the AI era.

“So as you see changes happen to the model – if you’re doing any type of finetuning, or you discover new attacks that are starting to show up that you need the models to learn from – we’re constantly learning all of that information and revalidating the model to see how these models are behaving under these new attacks that we’ve discovered.

“The other very important point is that we have a really advanced threat research team which is constantly looking at these AI attacks and understanding how these attacks can further be enhanced. In fact, we’re, we’re, we’re contributing to the work groups inside of standards organisations like MITRE, OWASP, and NIST.”

Beyond preventing harmful outputs, Cisco addresses the vulnerabilities of AI models to malicious external influences that can change their behaviour. These risks include prompt injection attacks, jailbreaking, and training data poisoning—each demanding stringent preventive measures.

Evolution brings new complexities

Frank Dickson, Group VP for Security & Trust at IDC, gave his take on the evolution of cybersecurity over time and what advancements in AI mean for the industry.

“The first macro trend was that we moved from on-premise to the cloud and that introduced this whole host of new problem statements that we had to address. And then as applications move from monolithic to microservices, we saw this whole host of new problem sets.

Headshot of Frank Dickson from IDC for an article on securing enterprises in the AI era.

“AI and the addition of LLMs… same thing, whole host of new problem sets.”

The complexities of AI security are heightened as applications become multi-model. Vulnerabilities can arise at various levels – from models to apps – implicating different stakeholders such as developers, end-users, and vendors.

“Once an application moved from on-premise to the cloud, it kind of stayed there. Yes, we developed applications across multiple clouds, but once you put an application in AWS or Azure or GCP, you didn’t jump it across those various cloud environments monthly, quarterly, weekly, right?

“Once you move from monolithic application development to microservices, you stay there. Once you put an application in Kubernetes, you don’t jump back into something else.

“As you look to secure a LLM, the important thing to note is the model changes. And when we talk about model change, it’s not like it’s a revision … this week maybe [developers are] using Anthropic, next week they may be using Gemini.

“They’re completely different and the threat vectors of each model are completely different. They all have their strengths and they all have their dramatic weaknesses.”

Unlike conventional safety measures integrated into individual models, Cisco delivers controls for a multi-model environment through its newly-announced AI Defense. The solution is self-optimising, using Cisco’s proprietary machine learning algorithms to identify evolving AI safety and security concerns—informed by threat intelligence from Cisco Talos.

Adjusting to the new normal

Jeetu Patel, Executive VP and Chief Product Officer at Cisco, shared his view that major advancements in a short period of time always seem revolutionary but quickly feel normal.

Headshot of Jeetu Patel from Cisco for an article on securing enterprises in the AI era.

Waymo is, you know, self-driving cars from Google. You get in, and there’s no one sitting in the car, and it takes you from point A to point B. It feels mind-bendingly amazing, like we are living in the future. The second time, you kind of get used to it. The third time, you start complaining about the seats.

“Even how quickly we’ve gotten used to AI and ChatGPT over the course of the past couple years, I think what will happen is any major advancement will feel exceptionally progressive for a short period of time. Then there’s a normalisation that happens where everyone starts getting used to it.”

Patel believes that normalisation will happen with AGI as well. However, he notes that “you cannot underestimate the progress that these models are starting to make” and, ultimately, the kind of use cases they are going to unlock.

“No-one had thought that we would have a smartphone that’s gonna have more compute capacity than the mainframe computer at your fingertips and be able to do thousands of things on it at any point in time and now it’s just another way of life. My 14-year-old daughter doesn’t even think about it.

“We ought to make sure that we as companies get adjusted to that very quickly.”

See also: Sam Altman, OpenAI: ‘Lucky and humbling’ to work towards superintelligence

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post Cisco: Securing enterprises in the AI era appeared first on AI News.

]]>
https://www.artificialintelligence-news.com/news/cisco-securing-enterprises-in-the-ai-era/feed/ 0
CrowdStrike: Cybersecurity pros want safer, specialist GenAI tools https://www.artificialintelligence-news.com/news/crowdstrike-cybersecurity-pros-safer-specialist-genai-tools/ https://www.artificialintelligence-news.com/news/crowdstrike-cybersecurity-pros-safer-specialist-genai-tools/#respond Tue, 17 Dec 2024 13:00:13 +0000 https://www.artificialintelligence-news.com/?p=16724 CrowdStrike commissioned a survey of 1,022 cybersecurity professionals worldwide to assess their views on generative AI (GenAI) adoption and its implications. The findings reveal enthusiasm for GenAI’s potential to bolster defences against increasingly sophisticated threats, but also trepidation over risks such as data exposure and attacks on GenAI systems. While much has been speculated about […]

The post CrowdStrike: Cybersecurity pros want safer, specialist GenAI tools appeared first on AI News.

]]>
CrowdStrike commissioned a survey of 1,022 cybersecurity professionals worldwide to assess their views on generative AI (GenAI) adoption and its implications.

The findings reveal enthusiasm for GenAI’s potential to bolster defences against increasingly sophisticated threats, but also trepidation over risks such as data exposure and attacks on GenAI systems.

While much has been speculated about the transformative impact of GenAI, the survey’s results paint a clearer picture of how practitioners are thinking about its role in cybersecurity.

According to the report, “We’re entering the era of GenAI in cybersecurity.” However, as organisations adopt this promising technology, their success will hinge on ensuring the safe, responsible, and industry-specific deployment of GenAI tools.

CrowdStrike’s research reveals five pivotal findings that shape the current state of GenAI in cybersecurity:

  1. Platform-based GenAI is favoured 

80% of respondents indicated a preference for GenAI delivered through integrated cybersecurity platforms rather than standalone tools. Seamless integration is cited as a crucial factor, with many preferring tools that work cohesively with existing systems. “GenAI’s value is linked to how well it works within the broader technology ecosystem,” the report states. 

Moreover, almost two-thirds (63%) of those surveyed expressed willingness to switch security vendors to access GenAI capabilities from competitors. The survey underscores the industry’s readiness for unified platforms that streamline operations and reduce the complexity of adopting new point solutions.

  1. GenAI built by cybersecurity experts is a must

Security teams believe GenAI tools should be specifically designed for cybersecurity, not general-purpose systems. 83% of respondents reported they would not trust tools that provide “unsuitable or ill-advised security guidance.”

Breach prevention remains a key motivator, with 74% stating they had faced breaches within the past 18 months or were concerned about vulnerabilities. Respondents prioritised tools from vendors with proven expertise in cybersecurity, incident response, and threat intelligence over suppliers with broad AI leadership alone. 

As CrowdStrike summarised, “The emphasis on breach prevention and vendor expertise suggests security teams would avoid domain-agnostic GenAI tools.”

  1. Augmentation, not replacement 

Despite growing fears of automation replacing jobs in many industries, the survey’s findings indicate minimal concerns about job displacement in cybersecurity. Instead, respondents expect GenAI to empower security analysts by automating repetitive tasks, reducing burnout, onboarding new personnel faster, and accelerating decision-making.

GenAI’s potential for augmenting analysts’ workflows was underscored by its most requested applications: threat intelligence analysis, assistance with investigations, and automated response mechanisms. As noted in the report, “Respondents overwhelmingly believe GenAI will ultimately optimise the analyst experience, not replace human labour.”

  1. ROI outweighs cost concerns  

For organisations evaluating GenAI investments, measurable return on investment (ROI) is the paramount concern, ahead of licensing costs or pricing model confusion. Respondents expect platform-led GenAI deployments to deliver faster results, thanks to cost savings from reduced tool management burdens, streamlined training, and fewer security incidents.

According to the survey data, the expected ROI breakdown includes 31% from cost optimisation and more efficient tools, 30% from fewer incidents, and 26% from reduced management time. Security leaders are clearly focused on ensuring the financial justification for GenAI investments.

  1. Guardrails and safety are crucial 

GenAI adoption is tempered by concerns around safety and privacy, with 87% of organisations either implementing or planning new security policies to oversee GenAI use. Key risks include exposing sensitive data to large language models (LLMs) and adversarial attacks on GenAI tools. Respondents rank safety and privacy controls among their most desired GenAI features, highlighting the need for responsible implementation.

Reflecting the cautious optimism of practitioners, only 39% of respondents firmly believed that the rewards of GenAI outweigh its risks. Meanwhile, 40% considered the risks and rewards “comparable.”

Current state of GenAI adoption in cybersecurity

GenAI adoption remains in its early stages, but interest is growing. 64% of respondents are actively researching or have already invested in GenAI tools, and 69% of those currently evaluating their options plan to make a purchase within the year. 

Security teams are primarily driven by three concerns: improving attack detection and response, enhancing operational efficiency, and mitigating the impact of staff shortages. Among economic considerations, the top priority is ROI – a sign that security leaders are keen to demonstrate tangible benefits to justify their spending.

CrowdStrike emphasises the importance of a platform-based approach, where GenAI is integrated into a unified system. Such platforms enable seamless adoption, measurable benefits, and safety guardrails for responsible usage. According to the report, “The future of GenAI in cybersecurity will be defined by tools that not only advance security but also uphold the highest standards of safety and privacy.”

The CrowdStrike survey concludes by affirming that “GenAI is not a silver bullet” but has tremendous potential to improve cybersecurity outcomes. As organisations evaluate its adoption, they will prioritise tools that integrate seamlessly with existing platforms, deliver faster response times, and ensure safety and privacy compliance.

With threats becoming more sophisticated, the role of GenAI in enabling security teams to work faster and smarter could prove indispensable. While still in its infancy, GenAI in cybersecurity is poised to shift from early adoption to mainstream deployment, provided organisations and vendors address its risks responsibly.

See also: Keys to AI success: Security, sustainability, and overcoming silos

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post CrowdStrike: Cybersecurity pros want safer, specialist GenAI tools appeared first on AI News.

]]>
https://www.artificialintelligence-news.com/news/crowdstrike-cybersecurity-pros-safer-specialist-genai-tools/feed/ 0
UK establishes LASR to counter AI security threats https://www.artificialintelligence-news.com/news/uk-establishes-lasr-counter-ai-security-threats/ https://www.artificialintelligence-news.com/news/uk-establishes-lasr-counter-ai-security-threats/#respond Mon, 25 Nov 2024 11:31:13 +0000 https://www.artificialintelligence-news.com/?p=16550 The UK is establishing the Laboratory for AI Security Research (LASR) to help protect Britain and its allies against emerging threats in what officials describe as an “AI arms race.” The laboratory – which will receive an initial government funding of £8.22 million – aims to bring together experts from industry, academia, and government to […]

The post UK establishes LASR to counter AI security threats appeared first on AI News.

]]>
The UK is establishing the Laboratory for AI Security Research (LASR) to help protect Britain and its allies against emerging threats in what officials describe as an “AI arms race.”

The laboratory – which will receive an initial government funding of £8.22 million – aims to bring together experts from industry, academia, and government to assess AI’s impact on national security. The announcement comes as part of a broader strategy to strengthen the UK’s cyber defence capabilities.

Speaking at the NATO Cyber Defence Conference at Lancaster House, the Chancellor of the Duchy of Lancaster said: “NATO needs to continue to adapt to the world of AI, because as the tech evolves, the threat evolves.

“NATO has stayed relevant over the last seven decades by constantly adapting to new threats. It has navigated the worlds of nuclear proliferation and militant nationalism. The move from cold warfare to drone warfare.”

The Chancellor painted a stark picture of the current cyber security landscape, stating: “Cyber war is now a daily reality. One where our defences are constantly being tested. The extent of the threat must be matched by the strength of our resolve to combat it and to protect our citizens and systems.”

The new laboratory will operate under a ‘catalytic’ model, designed to attract additional investment and collaboration from industry partners.

Key stakeholders in the new lab include GCHQ, the National Cyber Security Centre, the MOD’s Defence Science and Technology Laboratory, and prestigious academic institutions such as the University of Oxford and Queen’s University Belfast.

In a direct warning about Russia’s activities, the Chancellor declared: “Be in no doubt: the United Kingdom and others in this room are watching Russia. We know exactly what they are doing, and we are countering their attacks both publicly and behind the scenes.

“We know from history that appeasing dictators engaged in aggression against their neighbours only encourages them. Britain learned long ago the importance of standing strong in the face of such actions.”

Reaffirming support for Ukraine, he added, “Putin is a man who wants destruction, not peace. He is trying to deter our support for Ukraine with his threats. He will not be successful.”

The new lab follows recent concerns about state actors using AI to bolster existing security threats.

“Last year, we saw the US for the first time publicly call out a state for using AI to aid its malicious cyber activity,” the Chancellor noted, referring to North Korea’s attempts to use AI for malware development and vulnerability scanning.

Stephen Doughty, Minister for Europe, North America and UK Overseas Territories, highlighted the dual nature of AI technology: “AI has enormous potential. To ensure it remains a force for good in the world, we need to understand its threats and its opportunities.”

Alongside LASR, the government announced a new £1 million incident response project to enhance collaborative cyber defence capabilities among allies. The laboratory will prioritise collaboration with Five Eyes countries and NATO allies, building on the UK’s historical strength in computing, dating back to Alan Turing’s groundbreaking work.

The initiative forms part of the government’s comprehensive approach to cybersecurity, which includes the upcoming Cyber Security and Resilience Bill and the recent classification of data centres as critical national infrastructure.

(Photo by Erik Mclean)

See also: Anthropic urges AI regulation to avoid catastrophes

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post UK establishes LASR to counter AI security threats appeared first on AI News.

]]>
https://www.artificialintelligence-news.com/news/uk-establishes-lasr-counter-ai-security-threats/feed/ 0
Many organisations unprepared for AI cybersecurity threats https://www.artificialintelligence-news.com/news/many-organisations-unprepared-ai-cybersecurity-threats/ https://www.artificialintelligence-news.com/news/many-organisations-unprepared-ai-cybersecurity-threats/#respond Thu, 10 Oct 2024 09:59:12 +0000 https://www.artificialintelligence-news.com/?p=16268 While AI improves the detection of cybersecurity threats, it simultaneously ushers in more advanced challenges. Research from Keeper Security finds that, despite the implementation of AI-related policies, many organisations remain inadequately prepared for AI-powered threats. 84% of IT and security leaders find AI-enhanced tools have exacerbated the challenge of detecting phishing and smishing attacks, which […]

The post Many organisations unprepared for AI cybersecurity threats appeared first on AI News.

]]>
While AI improves the detection of cybersecurity threats, it simultaneously ushers in more advanced challenges.

Research from Keeper Security finds that, despite the implementation of AI-related policies, many organisations remain inadequately prepared for AI-powered threats.

84% of IT and security leaders find AI-enhanced tools have exacerbated the challenge of detecting phishing and smishing attacks, which were already significant threats. In response, 81% of organisations have enacted AI usage policies for employees. Confidence in these measures runs high, with 77% of leaders expressing familiarity with best practices for AI security.

Gap between AI cybersecurity policy and threats preparedness

More than half (51%) of security leaders view AI-driven attacks as the most severe threat to their organisations. Alarmingly, 35% of respondents feel ill-prepared to address these attacks compared to other cyber threats.

Organisations are deploying several key strategies to meet these emerging challenges:

  • Data encryption: Utilised by 51% of IT leaders, encryption serves as a crucial defence against unauthorised access and is vital against AI-fuelled attacks.
  • Employee training and awareness: With 45% of organisations prioritising enhanced training programmes, there is a focused effort to equip employees to recognise and counter AI-driven phishing and smishing intrusions.
  • Advanced threat detection systems: 41% of organisations are investing in these systems, underscoring the need for improved detection and response to sophisticated AI threats.

The advent of AI-driven cyber threats undeniably presents new challenges. Nevertheless, fundamental cybersecurity practices – such as data encryption, employee education, and advanced threat detection – continue to be essential. Organisations must ensure these essential measures are consistently re-evaluated and adjusted to counter emerging threats.

In addition to these core practices, advanced security frameworks like zero trust and Privileged Access Management (PAM) solutions can bolster an organisation’s resilience.

Zero trust demands continuous verification of all users, devices, and applications, reducing the risk of unauthorised access and minimising potential damage during an attack. PAM offers targeted security for an organisation’s most sensitive accounts, crucial for defending against complex AI-driven threats that aim at high-level credentials.

Darren Guccione, CEO and Co-Founder of Keeper Security, commented: “AI-driven attacks are a formidable challenge, but by reinforcing our cybersecurity fundamentals and adopting advanced security measures, we can build resilient defences against these evolving threats.”

Proactivity is also key for organisations—regularly reviewing security policies, performing routine audits, and fostering a culture of cybersecurity awareness are all essential.

While organisations are advancing, cybersecurity requires perpetual vigilance. Merging traditional practices with modern approaches like zero trust and PAM will empower organisations to maintain an edge over developing AI-powered threats.

(Photo by Growtika)

See also: King’s Business School: How AI is transforming problem-solving

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post Many organisations unprepared for AI cybersecurity threats appeared first on AI News.

]]>
https://www.artificialintelligence-news.com/news/many-organisations-unprepared-ai-cybersecurity-threats/feed/ 0
PSA Certified: AI growth outpacing security measures https://www.artificialintelligence-news.com/news/psa-certified-ai-growth-outpacing-security-measures/ https://www.artificialintelligence-news.com/news/psa-certified-ai-growth-outpacing-security-measures/#respond Thu, 15 Aug 2024 08:20:44 +0000 https://www.artificialintelligence-news.com/?p=15750 While the industry acknowledges the need for robust security measures, research from PSA Certified suggests that investment and best practices are struggling to keep pace with AI’s rapid growth. The survey of 1,260 global technology decision-makers revealed that two-thirds (68%) are concerned that the speed of AI advancements is outstripping the industry’s ability to safeguard […]

The post PSA Certified: AI growth outpacing security measures appeared first on AI News.

]]>
While the industry acknowledges the need for robust security measures, research from PSA Certified suggests that investment and best practices are struggling to keep pace with AI’s rapid growth.

The survey of 1,260 global technology decision-makers revealed that two-thirds (68%) are concerned that the speed of AI advancements is outstripping the industry’s ability to safeguard products, devices, and services. This apprehension is driving a surge in edge computing adoption, with 85% believing that security concerns will push more AI use cases to the edge.

Edge computing – which processes data locally on devices instead of relying on centralised cloud systems – offers inherent advantages in efficiency, security, and privacy. However, this shift to the edge necessitates a heightened focus on device security.

“There is an important interconnect between AI and security: one doesn’t scale without the other,” cautions David Maidment, Senior Director, Market Strategy at Arm (a PSA Certified co-founder). “While AI is a huge opportunity, its proliferation also offers that same opportunity to bad actors.”

Despite recognising security as paramount, a significant disconnect exists between awareness and action. Only half (50%) of those surveyed believe their current security investments are sufficient.  Furthermore, essential security practices, such as independent certifications and threat modelling, are being neglected by a substantial portion of respondents.

“It’s more imperative than ever that those in the connected device ecosystem don’t skip best practice security in the hunt for AI features,” emphasises Maidment. “The entire value chain needs to take collective responsibility and ensure that consumer trust in AI driven services is maintained.”

The report highlights the need for a holistic approach to security, embedded throughout the entire AI lifecycle, from device deployment to the management of AI models operating at the edge. This proactive approach, incorporating security-by-design principles, is deemed essential to building consumer trust and mitigating the escalating security risks.

Despite the concerns, a sense of optimism prevails within the industry. A majority (67%) of decisionmakers believe their organisations are equipped to handle the potential security risks associated with AI’s surge. There is a growing recognition of the need to prioritise security investment – 46% are focused on bolstering security, compared to 39% prioritising AI readiness.

“Those looking to unleash the full potential of AI must ensure they are taking the right steps to mitigate potential security risks,” says Maidment. “As stakeholders in the connected device ecosystem rapidly embrace a new set of AI-enabled use cases, it’s crucial that they do not simply forge ahead with AI regardless of security implications.”

(Photo by Braden Collum)

See also: The AI revolution: Reshaping data centres and the digital landscape 

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post PSA Certified: AI growth outpacing security measures appeared first on AI News.

]]>
https://www.artificialintelligence-news.com/news/psa-certified-ai-growth-outpacing-security-measures/feed/ 0
NCSC: AI to significantly boost cyber threats over next two years https://www.artificialintelligence-news.com/news/ncsc-ai-significantly-boost-cyber-threats-next-two-years/ https://www.artificialintelligence-news.com/news/ncsc-ai-significantly-boost-cyber-threats-next-two-years/#respond Wed, 24 Jan 2024 16:50:10 +0000 https://www.artificialintelligence-news.com/?p=14257 A report published by the UK’s National Cyber Security Centre (NCSC) warns that AI will substantially increase cyber threats over the next two years.  The centre warns of a surge in ransomware attacks in particular; involving hackers deploying malicious software to encrypt a victim’s files or entire system and demanding a ransom payment for the […]

The post NCSC: AI to significantly boost cyber threats over next two years appeared first on AI News.

]]>
A report published by the UK’s National Cyber Security Centre (NCSC) warns that AI will substantially increase cyber threats over the next two years. 

The centre warns of a surge in ransomware attacks in particular; involving hackers deploying malicious software to encrypt a victim’s files or entire system and demanding a ransom payment for the decryption key.

The NCSC assessment predicts AI will enhance threat actors’ capabilities mainly in carrying out more persuasive phishing attacks that trick individuals into providing sensitive information or clicking on malicious links.

“Generative AI can already create convincing interactions like documents that fool people, free of the translation and grammatical errors common in phishing emails,” the report states. 

The advent of generative AI, capable of creating convincing interactions and documents free of common phishing red flags, is identified as a key contributor to the rising threat landscape over the next two years.

The NCSC assessment identifies challenges in cyber resilience, citing the difficulty in verifying the legitimacy of emails and password reset requests due to generative AI and large language models. The shrinking time window between security updates and threat exploitation further complicates rapid vulnerability patching for network managers.

James Babbage, director general for threats at the National Crime Agency, commented: “AI services lower barriers to entry, increasing the number of cyber criminals, and will boost their capability by improving the scale, speed, and effectiveness of existing attack methods.”

However, the NCSC report also outlined how AI could bolster cybersecurity through improved attack detection and system design. It calls for further research on how developments in defensive AI solutions can mitigate evolving threats.

Access to quality data, skills, tools, and time makes advanced AI-powered cyber operations feasible mainly for highly capable state actors currently. But the NCSC warns these barriers to entry will progressively fall as capable groups monetise and sell AI-enabled hacking tools.

Extent of capability uplift by AI over next two years:

(Credit: NCSC)

Lindy Cameron, CEO of the NCSC, stated: “We must ensure that we both harness AI technology for its vast potential and manage its risks – including its implications on the cyber threat.”

The UK government has allocated £2.6 billion under its Cyber Security Strategy 2022 to strengthen the country’s resilience to emerging high-tech threats.

AI is positioned to substantially change the cyber risk landscape in the near future. Continuous investment in defensive capabilities and research will be vital to counteract its potential to empower attackers.

A full copy of the NCSC’s report can be found here.

(Photo by Muha Ajjan on Unsplash)

See also: AI-generated Biden robocall urges Democrats not to vote

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with Digital Transformation Week and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post NCSC: AI to significantly boost cyber threats over next two years appeared first on AI News.

]]>
https://www.artificialintelligence-news.com/news/ncsc-ai-significantly-boost-cyber-threats-next-two-years/feed/ 0
McAfee unveils AI-powered deepfake audio detection https://www.artificialintelligence-news.com/news/mcafee-unveils-ai-powered-deepfake-audio-detection/ https://www.artificialintelligence-news.com/news/mcafee-unveils-ai-powered-deepfake-audio-detection/#respond Mon, 08 Jan 2024 10:49:16 +0000 https://www.artificialintelligence-news.com/?p=14161 McAfee has revealed a pioneering AI-powered deepfake audio detection technology, Project Mockingbird, during CES 2024. This proprietary technology aims to defend consumers against the rising menace of cybercriminals employing fabricated, AI-generated audio for scams, cyberbullying, and manipulation of public figures’ images. Generative AI tools have enabled cybercriminals to craft convincing scams, including voice cloning to […]

The post McAfee unveils AI-powered deepfake audio detection appeared first on AI News.

]]>
McAfee has revealed a pioneering AI-powered deepfake audio detection technology, Project Mockingbird, during CES 2024. This proprietary technology aims to defend consumers against the rising menace of cybercriminals employing fabricated, AI-generated audio for scams, cyberbullying, and manipulation of public figures’ images.

Generative AI tools have enabled cybercriminals to craft convincing scams, including voice cloning to impersonate family members seeking money or manipulating authentic videos with “cheapfakes.” These tactics manipulate content to deceive individuals, creating a heightened challenge for consumers to discern between real and manipulated information.

In response to this challenge, McAfee Labs developed an industry-leading AI model, part of the Project Mockingbird technology, to detect AI-generated audio. This technology employs contextual, behavioural, and categorical detection models, achieving an impressive 90 percent accuracy rate.

Steve Grobman, CTO at McAfee, said: “Much like a weather forecast indicating a 70 percent chance of rain helps you plan your day, our technology equips you with insights to make educated decisions about whether content is what it appears to be.”

Project Mockingbird offers diverse applications, from countering AI-generated scams to tackling disinformation. By empowering consumers to distinguish between authentic and manipulated content, McAfee aims to protect users from falling victim to fraudulent schemes and ensure a secure digital experience.

Deep concerns about deepfakes

As deepfake technology becomes more sophisticated, consumer concerns are on the rise. McAfee’s December 2023 Deepfakes Survey highlights:

  • 84% of Americans are concerned about deepfake usage in 2024
  • 68% are more concerned than a year ago
  • 33% have experienced or witnessed a deepfake scam, with 40% prevalent among 18–34 year-olds
  • Top concerns include election influence (52%), undermining public trust in media (48%), impersonation of public figures (49%), proliferation of scams (57%), cyberbullying (44%), and sexually explicit content creation (37%)

McAfee’s unveiling of Project Mockingbird marks a significant leap in the ongoing battle against AI-generated threats. As countries like the US and UK enter a pivotal election year, it’s crucial that consumers are given the best chance possible at grappling with the pervasive influence of deepfake technology.

(Photo by Markus Spiske on Unsplash)

See also: MyShell releases OpenVoice voice cloning AI

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with Digital Transformation Week and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post McAfee unveils AI-powered deepfake audio detection appeared first on AI News.

]]>
https://www.artificialintelligence-news.com/news/mcafee-unveils-ai-powered-deepfake-audio-detection/feed/ 0
OpenAI battles DDoS against its API and ChatGPT services https://www.artificialintelligence-news.com/news/openai-battles-ddos-against-api-chatgpt-services/ https://www.artificialintelligence-news.com/news/openai-battles-ddos-against-api-chatgpt-services/#respond Thu, 09 Nov 2023 15:50:14 +0000 https://www.artificialintelligence-news.com/?p=13866 OpenAI has been grappling with a series of distributed denial-of-service (DDoS) attacks targeting its API and ChatGPT services over the past 24 hours. While the company has not yet disclosed specific details about the source of these attacks, OpenAI acknowledged that they are dealing with “periodic outages due to an abnormal traffic pattern reflective of […]

The post OpenAI battles DDoS against its API and ChatGPT services appeared first on AI News.

]]>
OpenAI has been grappling with a series of distributed denial-of-service (DDoS) attacks targeting its API and ChatGPT services over the past 24 hours.

While the company has not yet disclosed specific details about the source of these attacks, OpenAI acknowledged that they are dealing with “periodic outages due to an abnormal traffic pattern reflective of a DDoS attack.”

Users affected by these incidents reported encountering errors such as “something seems to have gone wrong” and “There was an error generating a response” when accessing ChatGPT.

This recent wave of attacks follows a major outage that impacted ChatGPT and its API on Wednesday, along with partial ChatGPT outages on Tuesday, and elevated error rates in Dall-E on Monday.

OpenAI displayed a banner across ChatGPT’s interface, attributing the disruptions to “exceptionally high demand” and reassuring users that efforts were underway to scale their systems.

Threat actor group Anonymous Sudan has claimed responsibility for the DDoS attacks on OpenAI. According to the group, the attacks are in response to OpenAI’s perceived bias towards Israel and against Palestine.

The attackers utilised the SkyNet botnet, which recently incorporated support for application layer attacks or Layer 7 (L7) DDoS attacks. In Layer 7 attacks, threat actors overwhelm services at the application level with a massive volume of requests to strain the targets’ server and network resources.

Brad Freeman, Director of Technology at SenseOn, commented:

“Distributed denial of service attacks are internet vandalism. Low effort, complexity, and in most cases more of a nuisance than a long-term threat to a business. Often DDOS attacks target services with high volumes of traffic which can be ’off-ramped, by their cloud or Internet service provider.

However, as the attacks are on Layer 7 they will be targeting the application itself, therefore OpenAI will need to make some changes to mitigate the attack. It’s likely the threat actor is sending complex queries to OpenAI to overload it, I wonder if they are using AI-generated content to attack AI content generation.”

However, the attribution of these attacks to Anonymous Sudan has raised suspicions among cybersecurity researchers. Some experts suggest that this could be a false flag operation and the group might have connections to Russia instead which, along with Iran, is suspected of stoking the bloodshed and international outrage to benefit its domestic interests.

The situation once again highlights the ongoing challenges faced by organisations dealing with DDoS attacks and the complexities of accurately identifying the perpetrators.

(Photo by Johann Walter Bantz on Unsplash)

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post OpenAI battles DDoS against its API and ChatGPT services appeared first on AI News.

]]>
https://www.artificialintelligence-news.com/news/openai-battles-ddos-against-api-chatgpt-services/feed/ 0
GitLab: Developers view AI as ‘essential’ despite concerns https://www.artificialintelligence-news.com/news/gitlab-developers-ai-essential-despite-concerns/ https://www.artificialintelligence-news.com/news/gitlab-developers-ai-essential-despite-concerns/#respond Wed, 06 Sep 2023 09:48:08 +0000 https://www.artificialintelligence-news.com/?p=13564 A survey by GitLab has shed light on the views of developers on the landscape of AI in software development. The report, titled ‘The State of AI in Software Development,’ presents insights from over 1,000 global senior technology executives, developers, and security and operations professionals. The report reveals a complex relationship between enthusiasm for AI […]

The post GitLab: Developers view AI as ‘essential’ despite concerns appeared first on AI News.

]]>
A survey by GitLab has shed light on the views of developers on the landscape of AI in software development.

The report, titled ‘The State of AI in Software Development,’ presents insights from over 1,000 global senior technology executives, developers, and security and operations professionals.

The report reveals a complex relationship between enthusiasm for AI adoption and concerns about data privacy, intellectual property, and security.

“Enterprises are seeking out platforms that allow them to harness the power of AI while addressing potential privacy and security risks,” said Alexander Johnston, Research Analyst in the Data, AI & Analytics Channel at 451 Research, a part of S&P Global Market Intelligence.

While 83 percent of the survey’s respondents view AI implementation as essential to stay competitive, a significant 79 percent expressed worries about AI tools accessing sensitive information and intellectual property.

Impact on developer productivity

AI is perceived as a boon for developer productivity, with 51 percent of all respondents citing it as a key benefit of AI implementation. However, security professionals are apprehensive that AI-generated code might lead to an increase in security vulnerabilities, potentially creating more work for them.

Only seven percent of developers’ time is currently spent identifying and mitigating security vulnerabilities, compared to 11 percent allocated to testing code. This raises questions about the widening gap between developers and security professionals in the AI era.

Privacy and intellectual property concerns

The survey underscores the paramount importance of data privacy and intellectual property protection when selecting AI tools. 95 percent of senior technology executives prioritise these aspects when choosing AI solutions.

Moreover, 32 percent of respondents admitted to being “very” or “extremely” concerned about introducing AI into the software development lifecycle. Within this group, 39 percent cited worries about AI-generated code introducing security vulnerabilities, and 48 percent expressed concerns that AI-generated code may not receive the same copyright protection as code produced by humans.

AI skills gap

Despite optimism about AI’s potential, the report identifies a disconnect between organisations’ provision of AI training resources and practitioners’ satisfaction with them. 

While 75 percent of respondents stated that their organisations offer training and resources for using AI, an equivalent proportion expressed the need to seek resources independently—suggesting that the available training may be insufficient.

A striking 81 percent of respondents said they require more training to effectively utilise AI in their daily work. Furthermore, 65 percent of those planning to use AI for software development indicated that their organsations plan to hire new talent to manage AI implementation.

David DeSanto, Chief Product Officer at GitLab, said:

“According to the GitLab Global DevSecOps Report, only 25 percent of developers’ time is spent on code generation, but the data shows AI can boost productivity and collaboration in nearly 60 percent of developers’ day-to-day work.

To realise AI’s full potential, it needs to be embedded across the software development lifecycle, allowing everyone involved in delivering secure software – not just developers – to benefit from the efficiency boost.” 

While AI holds immense promise for the software development industry, GitLab’s report makes it clear that addressing cybersecurity and privacy concerns, bridging the skills gap, and fostering collaboration between developers and security professionals are pivotal to successful AI adoption.

(Photo by Luca Bravo on Unsplash)

See also: UK government outlines AI Safety Summit plans

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with Digital Transformation Week.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post GitLab: Developers view AI as ‘essential’ despite concerns appeared first on AI News.

]]>
https://www.artificialintelligence-news.com/news/gitlab-developers-ai-essential-despite-concerns/feed/ 0
NCSC: Chatbot ‘prompt injection’ attacks pose growing security risk https://www.artificialintelligence-news.com/news/ncsc-chatbot-prompt-injection-attacks-growing-security-risk/ https://www.artificialintelligence-news.com/news/ncsc-chatbot-prompt-injection-attacks-growing-security-risk/#respond Wed, 30 Aug 2023 10:50:59 +0000 https://www.artificialintelligence-news.com/?p=13544 The UK’s National Cyber Security Centre (NCSC) has issued a stark warning about the increasing vulnerability of chatbots to manipulation by hackers, leading to potentially serious real-world consequences. The alert comes as concerns rise over the practice of “prompt injection” attacks, where individuals deliberately create input or prompts designed to manipulate the behaviour of language […]

The post NCSC: Chatbot ‘prompt injection’ attacks pose growing security risk appeared first on AI News.

]]>
The UK’s National Cyber Security Centre (NCSC) has issued a stark warning about the increasing vulnerability of chatbots to manipulation by hackers, leading to potentially serious real-world consequences.

The alert comes as concerns rise over the practice of “prompt injection” attacks, where individuals deliberately create input or prompts designed to manipulate the behaviour of language models that underpin chatbots.

Chatbots have become integral in various applications such as online banking and shopping due to their capacity to handle simple requests. Large language models (LLMs) – including those powering OpenAI’s ChatGPT and Google’s AI chatbot Bard – have been trained extensively on datasets that enable them to generate human-like responses to user prompts.

The NCSC has highlighted the escalating risks associated with malicious prompt injection, as chatbots often facilitate the exchange of data with third-party applications and services.

“Organisations building services that use LLMs need to be careful, in the same way they would be if they were using a product or code library that was in beta,” the NCSC explained.

“They might not let that product be involved in making transactions on the customer’s behalf, and hopefully wouldn’t fully trust it. Similar caution should apply to LLMs.”

If users input unfamiliar statements or exploit word combinations to override a model’s original script, the model can execute unintended actions. This could potentially lead to the generation of offensive content, unauthorised access to confidential information, or even data breaches.

Oseloka Obiora, CTO at RiverSafe, said: “The race to embrace AI will have disastrous consequences if businesses fail to implement basic necessary due diligence checks. 

“Chatbots have already been proven to be susceptible to manipulation and hijacking for rogue commands, a fact which could lead to a sharp rise in fraud, illegal transactions, and data breaches.”

Microsoft’s release of a new version of its Bing search engine and conversational bot drew attention to these risks.

A Stanford University student, Kevin Liu, successfully employed prompt injection to expose Bing Chat’s initial prompt. Additionally, security researcher Johann Rehberger discovered that ChatGPT could be manipulated to respond to prompts from unintended sources, opening up possibilities for indirect prompt injection vulnerabilities.

The NCSC advises that while prompt injection attacks can be challenging to detect and mitigate, a holistic system design that considers the risks associated with machine learning components can help prevent the exploitation of vulnerabilities.

A rules-based system is suggested to be implemented alongside the machine learning model to counteract potentially damaging actions. By fortifying the entire system’s security architecture, it becomes possible to thwart malicious prompt injections.

The NCSC emphasises that mitigating cyberattacks stemming from machine learning vulnerabilities necessitates understanding the techniques used by attackers and prioritising security in the design process.

Jake Moore, Global Cybersecurity Advisor at ESET, commented: “When developing applications with security in mind and understanding the methods attackers use to take advantage of the weaknesses in machine learning algorithms, it’s possible to reduce the impact of cyberattacks stemming from AI and machine learning.

“Unfortunately, speed to launch or cost savings can typically overwrite standard and future-proofing security programming, leaving people and their data at risk of unknown attacks. It is vital that people are aware that what they input into chatbots is not always protected.”

As chatbots continue to play an integral role in various online interactions and transactions, the NCSC’s warning serves as a timely reminder of the imperative to guard against evolving cybersecurity threats.

(Photo by Google DeepMind on Unsplash)

See also: OpenAI launches ChatGPT Enterprise to accelerate business operations

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with Cyber Security & Cloud Expo and Digital Transformation Week.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post NCSC: Chatbot ‘prompt injection’ attacks pose growing security risk appeared first on AI News.

]]>
https://www.artificialintelligence-news.com/news/ncsc-chatbot-prompt-injection-attacks-growing-security-risk/feed/ 0