cybersecurity Archives - AI News https://www.artificialintelligence-news.com/news/tag/cybersecurity/ Artificial Intelligence News Wed, 30 Apr 2025 13:35:24 +0000 en-GB hourly 1 https://wordpress.org/?v=6.8.1 https://www.artificialintelligence-news.com/wp-content/uploads/2020/09/cropped-ai-icon-32x32.png cybersecurity Archives - AI News https://www.artificialintelligence-news.com/news/tag/cybersecurity/ 32 32 Meta beefs up AI security with new Llama tools  https://www.artificialintelligence-news.com/news/meta-beefs-up-ai-security-new-llama-tools/ https://www.artificialintelligence-news.com/news/meta-beefs-up-ai-security-new-llama-tools/#respond Wed, 30 Apr 2025 13:35:22 +0000 https://www.artificialintelligence-news.com/?p=106233 If you’re building with AI, or trying to defend against the less savoury side of the technology, Meta just dropped new Llama security tools. The improved security tools for the Llama AI models arrive alongside fresh resources from Meta designed to help cybersecurity teams harness AI for defence. It’s all part of their push to […]

The post Meta beefs up AI security with new Llama tools  appeared first on AI News.

]]>
If you’re building with AI, or trying to defend against the less savoury side of the technology, Meta just dropped new Llama security tools.

The improved security tools for the Llama AI models arrive alongside fresh resources from Meta designed to help cybersecurity teams harness AI for defence. It’s all part of their push to make developing and using AI a bit safer for everyone involved.

Developers working with the Llama family of models now have some upgraded kit to play with. You can grab these latest Llama Protection tools directly from Meta’s own Llama Protections page, or find them where many developers live: Hugging Face and GitHub.

First up is Llama Guard 4. Think of it as an evolution of Meta’s customisable safety filter for AI. The big news here is that it’s now multimodal so it can understand and apply safety rules not just to text, but to images as well. That’s crucial as AI applications get more visual. This new version is also being baked into Meta’s brand-new Llama API, which is currently in a limited preview.

Then there’s LlamaFirewall. This is a new piece of the puzzle from Meta, designed to act like a security control centre for AI systems. It helps manage different safety models working together and hooks into Meta’s other protection tools. Its job? To spot and block the kind of risks that keep AI developers up at night – things like clever ‘prompt injection’ attacks designed to trick the AI, potentially dodgy code generation, or risky behaviour from AI plug-ins.

Meta has also given its Llama Prompt Guard a tune-up. The main Prompt Guard 2 (86M) model is now better at sniffing out those pesky jailbreak attempts and prompt injections. More interestingly, perhaps, is the introduction of Prompt Guard 2 22M.

Prompt Guard 2 22M is a much smaller, nippier version. Meta reckons it can slash latency and compute costs by up to 75% compared to the bigger model, without sacrificing too much detection power. For anyone needing faster responses or working on tighter budgets, that’s a welcome addition.

But Meta isn’t just focusing on the AI builders; they’re also looking at the cyber defenders on the front lines of digital security. They’ve heard the calls for better AI-powered tools to help in the fight against cyberattacks, and they’re sharing some updates aimed at just that.

The CyberSec Eval 4 benchmark suite has been updated. This open-source toolkit helps organisations figure out how good AI systems actually are at security tasks. This latest version includes two new tools:

  • CyberSOC Eval: Built with the help of cybersecurity experts CrowdStrike, this framework specifically measures how well AI performs in a real Security Operation Centre (SOC) environment. It’s designed to give a clearer picture of AI’s effectiveness in threat detection and response. The benchmark itself is coming soon.
  • AutoPatchBench: This benchmark tests how good Llama and other AIs are at automatically finding and fixing security holes in code before the bad guys can exploit them.

To help get these kinds of tools into the hands of those who need them, Meta is kicking off the Llama Defenders Program. This seems to be about giving partner companies and developers special access to a mix of AI solutions – some open-source, some early-access, some perhaps proprietary – all geared towards different security challenges.

As part of this, Meta is sharing an AI security tool they use internally: the Automated Sensitive Doc Classification Tool. It automatically slaps security labels on documents inside an organisation. Why? To stop sensitive info from walking out the door, or to prevent it from being accidentally fed into an AI system (like in RAG setups) where it could be leaked.

They’re also tackling the problem of fake audio generated by AI, which is increasingly used in scams. The Llama Generated Audio Detector and Llama Audio Watermark Detector are being shared with partners to help them spot AI-generated voices in potential phishing calls or fraud attempts. Companies like ZenDesk, Bell Canada, and AT&T are already lined up to integrate these.

Finally, Meta gave a sneak peek at something potentially huge for user privacy: Private Processing. This is new tech they’re working on for WhatsApp. The idea is to let AI do helpful things like summarise your unread messages or help you draft replies, but without Meta or WhatsApp being able to read the content of those messages.

Meta is being quite open about the security side, even publishing their threat model and inviting security researchers to poke holes in the architecture before it ever goes live. It’s a sign they know they need to get the privacy aspect right.

Overall, it’s a broad set of AI security announcements from Meta. They’re clearly trying to put serious muscle behind securing the AI they build, while also giving the wider tech community better tools to build safely and defend effectively.

See also: Alarming rise in AI-powered scams: Microsoft reveals $4B in thwarted fraud

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post Meta beefs up AI security with new Llama tools  appeared first on AI News.

]]>
https://www.artificialintelligence-news.com/news/meta-beefs-up-ai-security-new-llama-tools/feed/ 0
AI strategies for cybersecurity press releases that get coverage https://www.artificialintelligence-news.com/news/ai-strategies-for-cybersecurity-press-releases-that-get-coverage/ https://www.artificialintelligence-news.com/news/ai-strategies-for-cybersecurity-press-releases-that-get-coverage/#respond Mon, 28 Apr 2025 14:26:27 +0000 https://www.artificialintelligence-news.com/?p=106172 If you’ve ever tried to get your cybersecurity news picked up by media outlets, you’ll know just how much of a challenge (and how disheartening) it can be. You pour hours into what you think is an excellent announcement about your new security tool, threat research, or vulnerability discovery, only to watch it disappear into […]

The post AI strategies for cybersecurity press releases that get coverage appeared first on AI News.

]]>
If you’ve ever tried to get your cybersecurity news picked up by media outlets, you’ll know just how much of a challenge (and how disheartening) it can be. You pour hours into what you think is an excellent announcement about your new security tool, threat research, or vulnerability discovery, only to watch it disappear into journalists’ overflowing inboxes without a trace.

The cyber PR space is brutally competitive. Reporters at top publications receive tens, if not hundreds, of pitches each day, and they have no choice but to be highly selective about which releases they choose to cover and which to discard. Your challenge then isn’t just creating a good press release, it’s making one that grabs attention and stands out in an industry drowning in technical jargon and “revolutionary” solutions.

Why most cybersecurity press releases fall flat

Let’s first look at some of the main reasons why many cyber press releases fail:

  • They’re too complex from the start, losing non-technical reporters
  • They bury the actual news under corporate marketing speak.
  • They focus on product features rather than the real-world impact or problems they solve.
  • They lack credible data or specific research findings that journalists can cite as support.

Most of these problems have one main theme: Journalists aren’t interested in promoting your product or your business. They are looking after their interests and seeking newsworthy stories their audiences care about. Keep this in mind and make their job easier by showing them exactly why your announcement matters.

Learning how to write a cybersecurity press release

What does a well-written press release look like? Alongside the reasons listed above, many companies make the mistake of submitting poorly formatted releases that journalists will be unlikely to spend time reading.

It’s worth learning how to write a cybersecurity press release properly, including the preferred structure (headline, subheader, opening paragraph, boilerplate, etc). And, be sure to review some examples of high-quality press releases as well.

AI strategies that transform your press release process

Let’s examine how AI tools can significantly enhance your cyber PR at every stage.

1. Research Enhancement

Use AI tools to track media coverage patterns and identify emerging trends in cybersecurity news. You can analyse which types of security stories gain traction, and this can help you position your announcement in that context.

Another idea is to use LLMs (like Google’s Gemini or OpenAI’s ChatGPT) to analyse hundreds of successful cybersecurity press releases in a niche similar to yours. Ask it to identify common elements in those that generated significant coverage, and then use these same features in your cyber PR efforts.

To take this a step further, AI-powered sentiment analysis can help you understand how different audience segments receive specific cybersecurity topics. The intelligence can help you tailor your messaging to address current concerns and capitalise on positive industry momentum.

2. Writing assistance

If you struggle to convey complex ideas and terminology in more accessible language, consider asking the LLM to help simplify your messaging. This can help transform technical specifications into clear, accessible language that non-technical journalists can understand.

Since the headline is the most important part of your release, use an LLM to generate a handful of options based on your core announcement, then select the best one based on clarity and impact. Once your press release is complete, run it through an LLM to identify and replace jargon that might be second nature to your security team but may be confusing to general tech reporters.

3. Visual storytelling

If you are struggling to find ways to explain your product or service in accessible language, visuals can help. AI image generation tools, like Midjourney, create custom visuals based on prompts that help illustrate your message. The latest models can handle highly complex tasks.

With a bit of prompt engineering (and by incorporating the press release you want help with), you should be able to create accompanying images and infographics that bring your message to life.

4. Video content

Going one step further than a static image, a brief AI-generated explainer video can sit alongside your press release, providing journalists with ready-to-use content that explains complex security concepts. Some ideas include:

  • Short Explainer Videos: Use text-to-video tools to turn essential sections of your press release into a brief (60 seconds or less) animated or stock-footage-based video. You can usually use narration and text overlays directly on the AI platforms as well.
  • AI Avatar Summaries: Several tools now enable you to create a brief video featuring an AI avatar that presents the core message of the press release. A human-looking avatar reads out the content and delivers an audio and video component for your release.
  • Data Visualisation Videos: Use AI tools to animate key statistics or processes described in the release for enhanced clarity.

Final word

Even as you use the AI tools you have at your disposal, remember that the most effective cybersecurity press releases still require that all-important human insight and expertise. Your goal isn’t to automate the entire process. Instead, use AI to enhance your cyber PR efforts and make your releases stand out from the crowd.

AI should help emphasise, not replace, the human elements that make security stories so engaging and compelling. Be sure to shine a spotlight on the researchers who made the discovery, the real-world implications of any threat vulnerabilities you uncover, and the people security measures ultimately protect.

Combine this human-focused storytelling with the power of AI automation, and you’ll ensure that your press releases and cyber PR campaigns get the maximum mileage.

The post AI strategies for cybersecurity press releases that get coverage appeared first on AI News.

]]>
https://www.artificialintelligence-news.com/news/ai-strategies-for-cybersecurity-press-releases-that-get-coverage/feed/ 0
DeepSeek ban? China data transfer boosts security concerns https://www.artificialintelligence-news.com/news/deepseek-ban-china-data-transfer-boosts-security-concerns/ https://www.artificialintelligence-news.com/news/deepseek-ban-china-data-transfer-boosts-security-concerns/#respond Fri, 07 Feb 2025 17:44:01 +0000 https://www.artificialintelligence-news.com/?p=104228 US lawmakers are pushing for a DeepSeek ban after security researchers found the app transferring user data to a banned state-owned company. DeepSeek, practically unknown just weeks ago, took the tech world by storm—gaining global acclaim for its cutting-edge performance while sparking debates reminiscent of the TikTok saga. Its rise has been fuelled in part […]

The post DeepSeek ban? China data transfer boosts security concerns appeared first on AI News.

]]>
US lawmakers are pushing for a DeepSeek ban after security researchers found the app transferring user data to a banned state-owned company.

DeepSeek, practically unknown just weeks ago, took the tech world by storm—gaining global acclaim for its cutting-edge performance while sparking debates reminiscent of the TikTok saga.

Its rise has been fuelled in part by its business model: unlike many of its American counterparts, including OpenAI and Google, DeepSeek offered its advanced powers for free.

However, concerns have been raised about DeepSeek’s extensive data collection practices and a probe has been launched by Microsoft and OpenAI over a breach of the latter’s system by a group allegedly linked to the Chinese AI startup.

A threat to US AI dominance

DeepSeek’s astonishing capabilities have, within a matter of weeks, positioned it as a major competitor to American AI stalwarts like OpenAI’s ChatGPT and Google Gemini. But, alongside the app’s prowess, concerns have emerged over alleged ties to the Chinese Communist Party (CCP).  

According to security researchers, hidden code within DeepSeek’s AI has been found transmitting user data to China Mobile—a state-owned telecoms company banned in the US. DeepSeek’s own privacy policy permits the collection of data such as IP addresses, device information, and, most alarmingly, even keystroke patterns.

Such findings have led to bipartisan efforts in the US Congress to curtail DeepSeek’s influence, with lawmakers scrambling to protect sensitive data from potential CCP oversight.

Reps. Darin LaHood (R-IL) and Josh Gottheimer (D-NJ) are spearheading efforts to introduce legislation that would prohibit DeepSeek from being installed on all government-issued devices. 

Several federal agencies, among them NASA and the US Navy, have already preemptively issued a ban on DeepSeek. Similarly, the state of Texas has also introduced restrictions.

Potential ban of DeepSeek a TikTok redux?

The controversy surrounding DeepSeek bears similarities to debates over TikTok, the social video app owned by Chinese company ByteDance. TikTok remains under fire over accusations that user data is accessible to the CCP, though definitive proof has yet to materialise.

In contrast, DeepSeek’s case involves clear evidence, as revealed by cybersecurity investigators who identified the app’s unauthorised data transmissions. While some might say DeepSeek echoes the TikTok controversy, security experts argue that it represents a much starker and documented threat.

Lawmakers around the world are taking note. In addition to the US proposals, DeepSeek has already faced bans from government systems in countries including Australia, South Korea, and Italy.  

AI becomes a geopolitical battleground

The concerns over DeepSeek exemplify how AI has now become a geopolitical flashpoint between global superpowers—especially between the US and China.

American AI firms like OpenAI have enjoyed a dominant position in recent years, but Chinese companies have poured resources into catching up and, in some cases, surpassing their US competitors.  

DeepSeek’s lightning-quick growth has unsettled that balance, not only because of its AI models but also due to its pricing strategy, which undercuts competitors by offering the app free of charge. That begs the question of whether it’s truly “free” or if the cost is paid in lost privacy and security.

China Mobile’s involvement raises further eyebrows, given the state-owned telecom company’s prior sanctions and prohibition from the US market. Critics worry that data collected through platforms like DeepSeek could fill gaps in Chinese surveillance activities or even potential economic manipulations.

A nationwide DeepSeek ban is on the cards

If the proposed US legislation is passed, it could represent the first step toward nationwide restrictions or an outright ban on DeepSeek. Geopolitical tension between China and the West continues to shape policies in advanced technologies, and AI appears to be the latest arena for this ongoing chess match.  

In the meantime, calls to regulate applications like DeepSeek are likely to grow louder. Conversations about data privacy, national security, and ethical boundaries in AI development are becoming ever more urgent as individuals and organisations across the globe navigate the promises and pitfalls of next-generation tools.  

DeepSeek’s rise may have, indeed, rattled the AI hierarchy, but whether it can maintain its momentum in the face of increasing global pushback remains to be seen.

(Photo by Solen Feyissa)

See also: AVAXAI brings DeepSeek to Web3 with decentralised AI agents

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post DeepSeek ban? China data transfer boosts security concerns appeared first on AI News.

]]>
https://www.artificialintelligence-news.com/news/deepseek-ban-china-data-transfer-boosts-security-concerns/feed/ 0
Cisco: Securing enterprises in the AI era https://www.artificialintelligence-news.com/news/cisco-securing-enterprises-in-the-ai-era/ https://www.artificialintelligence-news.com/news/cisco-securing-enterprises-in-the-ai-era/#respond Wed, 15 Jan 2025 16:02:18 +0000 https://www.artificialintelligence-news.com/?p=16883 As AI becomes increasingly integral to business operations, new safety concerns and security threats emerge at an unprecedented pace—outstripping the capabilities of traditional cybersecurity solutions. The stakes are high with potentially significant repercussions. According to Cisco’s 2024 AI Readiness Index, only 29% of surveyed organisations feel fully equipped to detect and prevent unauthorised tampering with […]

The post Cisco: Securing enterprises in the AI era appeared first on AI News.

]]>
As AI becomes increasingly integral to business operations, new safety concerns and security threats emerge at an unprecedented pace—outstripping the capabilities of traditional cybersecurity solutions.

The stakes are high with potentially significant repercussions. According to Cisco’s 2024 AI Readiness Index, only 29% of surveyed organisations feel fully equipped to detect and prevent unauthorised tampering with AI technologies.

Continuous model validation

DJ Sampath, Head of AI Software & Platform at Cisco, said: “When we talk about model validation, it is not just a one time thing, right? You’re doing the model validation on a continuous basis.

Headshot of DJ Sampath from Cisco for an article on securing enterprises in the AI era.

“So as you see changes happen to the model – if you’re doing any type of finetuning, or you discover new attacks that are starting to show up that you need the models to learn from – we’re constantly learning all of that information and revalidating the model to see how these models are behaving under these new attacks that we’ve discovered.

“The other very important point is that we have a really advanced threat research team which is constantly looking at these AI attacks and understanding how these attacks can further be enhanced. In fact, we’re, we’re, we’re contributing to the work groups inside of standards organisations like MITRE, OWASP, and NIST.”

Beyond preventing harmful outputs, Cisco addresses the vulnerabilities of AI models to malicious external influences that can change their behaviour. These risks include prompt injection attacks, jailbreaking, and training data poisoning—each demanding stringent preventive measures.

Evolution brings new complexities

Frank Dickson, Group VP for Security & Trust at IDC, gave his take on the evolution of cybersecurity over time and what advancements in AI mean for the industry.

“The first macro trend was that we moved from on-premise to the cloud and that introduced this whole host of new problem statements that we had to address. And then as applications move from monolithic to microservices, we saw this whole host of new problem sets.

Headshot of Frank Dickson from IDC for an article on securing enterprises in the AI era.

“AI and the addition of LLMs… same thing, whole host of new problem sets.”

The complexities of AI security are heightened as applications become multi-model. Vulnerabilities can arise at various levels – from models to apps – implicating different stakeholders such as developers, end-users, and vendors.

“Once an application moved from on-premise to the cloud, it kind of stayed there. Yes, we developed applications across multiple clouds, but once you put an application in AWS or Azure or GCP, you didn’t jump it across those various cloud environments monthly, quarterly, weekly, right?

“Once you move from monolithic application development to microservices, you stay there. Once you put an application in Kubernetes, you don’t jump back into something else.

“As you look to secure a LLM, the important thing to note is the model changes. And when we talk about model change, it’s not like it’s a revision … this week maybe [developers are] using Anthropic, next week they may be using Gemini.

“They’re completely different and the threat vectors of each model are completely different. They all have their strengths and they all have their dramatic weaknesses.”

Unlike conventional safety measures integrated into individual models, Cisco delivers controls for a multi-model environment through its newly-announced AI Defense. The solution is self-optimising, using Cisco’s proprietary machine learning algorithms to identify evolving AI safety and security concerns—informed by threat intelligence from Cisco Talos.

Adjusting to the new normal

Jeetu Patel, Executive VP and Chief Product Officer at Cisco, shared his view that major advancements in a short period of time always seem revolutionary but quickly feel normal.

Headshot of Jeetu Patel from Cisco for an article on securing enterprises in the AI era.

Waymo is, you know, self-driving cars from Google. You get in, and there’s no one sitting in the car, and it takes you from point A to point B. It feels mind-bendingly amazing, like we are living in the future. The second time, you kind of get used to it. The third time, you start complaining about the seats.

“Even how quickly we’ve gotten used to AI and ChatGPT over the course of the past couple years, I think what will happen is any major advancement will feel exceptionally progressive for a short period of time. Then there’s a normalisation that happens where everyone starts getting used to it.”

Patel believes that normalisation will happen with AGI as well. However, he notes that “you cannot underestimate the progress that these models are starting to make” and, ultimately, the kind of use cases they are going to unlock.

“No-one had thought that we would have a smartphone that’s gonna have more compute capacity than the mainframe computer at your fingertips and be able to do thousands of things on it at any point in time and now it’s just another way of life. My 14-year-old daughter doesn’t even think about it.

“We ought to make sure that we as companies get adjusted to that very quickly.”

See also: Sam Altman, OpenAI: ‘Lucky and humbling’ to work towards superintelligence

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post Cisco: Securing enterprises in the AI era appeared first on AI News.

]]>
https://www.artificialintelligence-news.com/news/cisco-securing-enterprises-in-the-ai-era/feed/ 0
CrowdStrike: Cybersecurity pros want safer, specialist GenAI tools https://www.artificialintelligence-news.com/news/crowdstrike-cybersecurity-pros-safer-specialist-genai-tools/ https://www.artificialintelligence-news.com/news/crowdstrike-cybersecurity-pros-safer-specialist-genai-tools/#respond Tue, 17 Dec 2024 13:00:13 +0000 https://www.artificialintelligence-news.com/?p=16724 CrowdStrike commissioned a survey of 1,022 cybersecurity professionals worldwide to assess their views on generative AI (GenAI) adoption and its implications. The findings reveal enthusiasm for GenAI’s potential to bolster defences against increasingly sophisticated threats, but also trepidation over risks such as data exposure and attacks on GenAI systems. While much has been speculated about […]

The post CrowdStrike: Cybersecurity pros want safer, specialist GenAI tools appeared first on AI News.

]]>
CrowdStrike commissioned a survey of 1,022 cybersecurity professionals worldwide to assess their views on generative AI (GenAI) adoption and its implications.

The findings reveal enthusiasm for GenAI’s potential to bolster defences against increasingly sophisticated threats, but also trepidation over risks such as data exposure and attacks on GenAI systems.

While much has been speculated about the transformative impact of GenAI, the survey’s results paint a clearer picture of how practitioners are thinking about its role in cybersecurity.

According to the report, “We’re entering the era of GenAI in cybersecurity.” However, as organisations adopt this promising technology, their success will hinge on ensuring the safe, responsible, and industry-specific deployment of GenAI tools.

CrowdStrike’s research reveals five pivotal findings that shape the current state of GenAI in cybersecurity:

  1. Platform-based GenAI is favoured 

80% of respondents indicated a preference for GenAI delivered through integrated cybersecurity platforms rather than standalone tools. Seamless integration is cited as a crucial factor, with many preferring tools that work cohesively with existing systems. “GenAI’s value is linked to how well it works within the broader technology ecosystem,” the report states. 

Moreover, almost two-thirds (63%) of those surveyed expressed willingness to switch security vendors to access GenAI capabilities from competitors. The survey underscores the industry’s readiness for unified platforms that streamline operations and reduce the complexity of adopting new point solutions.

  1. GenAI built by cybersecurity experts is a must

Security teams believe GenAI tools should be specifically designed for cybersecurity, not general-purpose systems. 83% of respondents reported they would not trust tools that provide “unsuitable or ill-advised security guidance.”

Breach prevention remains a key motivator, with 74% stating they had faced breaches within the past 18 months or were concerned about vulnerabilities. Respondents prioritised tools from vendors with proven expertise in cybersecurity, incident response, and threat intelligence over suppliers with broad AI leadership alone. 

As CrowdStrike summarised, “The emphasis on breach prevention and vendor expertise suggests security teams would avoid domain-agnostic GenAI tools.”

  1. Augmentation, not replacement 

Despite growing fears of automation replacing jobs in many industries, the survey’s findings indicate minimal concerns about job displacement in cybersecurity. Instead, respondents expect GenAI to empower security analysts by automating repetitive tasks, reducing burnout, onboarding new personnel faster, and accelerating decision-making.

GenAI’s potential for augmenting analysts’ workflows was underscored by its most requested applications: threat intelligence analysis, assistance with investigations, and automated response mechanisms. As noted in the report, “Respondents overwhelmingly believe GenAI will ultimately optimise the analyst experience, not replace human labour.”

  1. ROI outweighs cost concerns  

For organisations evaluating GenAI investments, measurable return on investment (ROI) is the paramount concern, ahead of licensing costs or pricing model confusion. Respondents expect platform-led GenAI deployments to deliver faster results, thanks to cost savings from reduced tool management burdens, streamlined training, and fewer security incidents.

According to the survey data, the expected ROI breakdown includes 31% from cost optimisation and more efficient tools, 30% from fewer incidents, and 26% from reduced management time. Security leaders are clearly focused on ensuring the financial justification for GenAI investments.

  1. Guardrails and safety are crucial 

GenAI adoption is tempered by concerns around safety and privacy, with 87% of organisations either implementing or planning new security policies to oversee GenAI use. Key risks include exposing sensitive data to large language models (LLMs) and adversarial attacks on GenAI tools. Respondents rank safety and privacy controls among their most desired GenAI features, highlighting the need for responsible implementation.

Reflecting the cautious optimism of practitioners, only 39% of respondents firmly believed that the rewards of GenAI outweigh its risks. Meanwhile, 40% considered the risks and rewards “comparable.”

Current state of GenAI adoption in cybersecurity

GenAI adoption remains in its early stages, but interest is growing. 64% of respondents are actively researching or have already invested in GenAI tools, and 69% of those currently evaluating their options plan to make a purchase within the year. 

Security teams are primarily driven by three concerns: improving attack detection and response, enhancing operational efficiency, and mitigating the impact of staff shortages. Among economic considerations, the top priority is ROI – a sign that security leaders are keen to demonstrate tangible benefits to justify their spending.

CrowdStrike emphasises the importance of a platform-based approach, where GenAI is integrated into a unified system. Such platforms enable seamless adoption, measurable benefits, and safety guardrails for responsible usage. According to the report, “The future of GenAI in cybersecurity will be defined by tools that not only advance security but also uphold the highest standards of safety and privacy.”

The CrowdStrike survey concludes by affirming that “GenAI is not a silver bullet” but has tremendous potential to improve cybersecurity outcomes. As organisations evaluate its adoption, they will prioritise tools that integrate seamlessly with existing platforms, deliver faster response times, and ensure safety and privacy compliance.

With threats becoming more sophisticated, the role of GenAI in enabling security teams to work faster and smarter could prove indispensable. While still in its infancy, GenAI in cybersecurity is poised to shift from early adoption to mainstream deployment, provided organisations and vendors address its risks responsibly.

See also: Keys to AI success: Security, sustainability, and overcoming silos

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post CrowdStrike: Cybersecurity pros want safer, specialist GenAI tools appeared first on AI News.

]]>
https://www.artificialintelligence-news.com/news/crowdstrike-cybersecurity-pros-safer-specialist-genai-tools/feed/ 0
UK establishes LASR to counter AI security threats https://www.artificialintelligence-news.com/news/uk-establishes-lasr-counter-ai-security-threats/ https://www.artificialintelligence-news.com/news/uk-establishes-lasr-counter-ai-security-threats/#respond Mon, 25 Nov 2024 11:31:13 +0000 https://www.artificialintelligence-news.com/?p=16550 The UK is establishing the Laboratory for AI Security Research (LASR) to help protect Britain and its allies against emerging threats in what officials describe as an “AI arms race.” The laboratory – which will receive an initial government funding of £8.22 million – aims to bring together experts from industry, academia, and government to […]

The post UK establishes LASR to counter AI security threats appeared first on AI News.

]]>
The UK is establishing the Laboratory for AI Security Research (LASR) to help protect Britain and its allies against emerging threats in what officials describe as an “AI arms race.”

The laboratory – which will receive an initial government funding of £8.22 million – aims to bring together experts from industry, academia, and government to assess AI’s impact on national security. The announcement comes as part of a broader strategy to strengthen the UK’s cyber defence capabilities.

Speaking at the NATO Cyber Defence Conference at Lancaster House, the Chancellor of the Duchy of Lancaster said: “NATO needs to continue to adapt to the world of AI, because as the tech evolves, the threat evolves.

“NATO has stayed relevant over the last seven decades by constantly adapting to new threats. It has navigated the worlds of nuclear proliferation and militant nationalism. The move from cold warfare to drone warfare.”

The Chancellor painted a stark picture of the current cyber security landscape, stating: “Cyber war is now a daily reality. One where our defences are constantly being tested. The extent of the threat must be matched by the strength of our resolve to combat it and to protect our citizens and systems.”

The new laboratory will operate under a ‘catalytic’ model, designed to attract additional investment and collaboration from industry partners.

Key stakeholders in the new lab include GCHQ, the National Cyber Security Centre, the MOD’s Defence Science and Technology Laboratory, and prestigious academic institutions such as the University of Oxford and Queen’s University Belfast.

In a direct warning about Russia’s activities, the Chancellor declared: “Be in no doubt: the United Kingdom and others in this room are watching Russia. We know exactly what they are doing, and we are countering their attacks both publicly and behind the scenes.

“We know from history that appeasing dictators engaged in aggression against their neighbours only encourages them. Britain learned long ago the importance of standing strong in the face of such actions.”

Reaffirming support for Ukraine, he added, “Putin is a man who wants destruction, not peace. He is trying to deter our support for Ukraine with his threats. He will not be successful.”

The new lab follows recent concerns about state actors using AI to bolster existing security threats.

“Last year, we saw the US for the first time publicly call out a state for using AI to aid its malicious cyber activity,” the Chancellor noted, referring to North Korea’s attempts to use AI for malware development and vulnerability scanning.

Stephen Doughty, Minister for Europe, North America and UK Overseas Territories, highlighted the dual nature of AI technology: “AI has enormous potential. To ensure it remains a force for good in the world, we need to understand its threats and its opportunities.”

Alongside LASR, the government announced a new £1 million incident response project to enhance collaborative cyber defence capabilities among allies. The laboratory will prioritise collaboration with Five Eyes countries and NATO allies, building on the UK’s historical strength in computing, dating back to Alan Turing’s groundbreaking work.

The initiative forms part of the government’s comprehensive approach to cybersecurity, which includes the upcoming Cyber Security and Resilience Bill and the recent classification of data centres as critical national infrastructure.

(Photo by Erik Mclean)

See also: Anthropic urges AI regulation to avoid catastrophes

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post UK establishes LASR to counter AI security threats appeared first on AI News.

]]>
https://www.artificialintelligence-news.com/news/uk-establishes-lasr-counter-ai-security-threats/feed/ 0
AI sector study: Record growth masks serious challenges https://www.artificialintelligence-news.com/news/ai-sector-study-record-growth-masks-serious-challenges/ https://www.artificialintelligence-news.com/news/ai-sector-study-record-growth-masks-serious-challenges/#respond Thu, 24 Oct 2024 14:31:34 +0000 https://www.artificialintelligence-news.com/?p=16382 A comprehensive AI sector study – conducted by the Department for Science, Innovation and Technology (DSIT) in collaboration with Perspective Economics, Ipsos, and glass.ai – provides a detailed overview of the industry’s current state and its future prospects. In this article, we delve deeper into the key findings and implications—drawing on additional sources to enhance […]

The post AI sector study: Record growth masks serious challenges appeared first on AI News.

]]>
A comprehensive AI sector study – conducted by the Department for Science, Innovation and Technology (DSIT) in collaboration with Perspective Economics, Ipsos, and glass.ai – provides a detailed overview of the industry’s current state and its future prospects.

In this article, we delve deeper into the key findings and implications—drawing on additional sources to enhance our understanding.

Thriving industry with significant growth

The study highlights the remarkable growth of the UK’s AI sector. With over 3,170 active AI companies, these firms have generated £10.6 billion in AI-related revenues and employed more than 50,000 people in AI-related roles. This significant contribution to GVA (Gross Value Added) underscores the sector’s transformative potential in driving the UK’s economic growth.

Mark Boost, CEO of Civo, said: “In a space that’s been dominated by US companies for too long, it’s promising to see the government now stepping up to help support the UK AI sector on the global stage.”

The study shows that AI activity is dispersed across various regions of the UK, with notable concentrations in London, the South East, and Scotland. This regional dispersion indicates a broad scope for the development of AI technology applications across different sectors and regions.

Investment and funding

Investment in the AI sector has been a key driver of growth. In 2022, £18.8 billion was secured in private investment since 2016, with investments made in 52 unique industry sectors compared to 35 sectors in 2016.

The government’s commitment to supporting AI is evident through significant investments. In 2022, the UK government unveiled a National AI Strategy and Action Plan—committing over £1.3 billion in support for the sector, complementing the £2.8 billion already invested.

However, as Boost cautions, “Major players like AWS are locking AI startups into their ecosystems with offerings like $500k cloud credits, ensuring that emerging companies start their journey reliant on their infrastructure. This not only hinders competition and promotes vendor lock-in but also risks stifling innovation across the broader UK AI ecosystem.”

Addressing bottlenecks

Despite the growth and investment, several bottlenecks must be addressed to fully harness the potential of AI:

  • Infrastructure: The UK’s digital technology infrastructure is less advanced than many other countries. This bottleneck includes inadequate data centre infrastructure and a dependent supply of powerful GPU computer chips. Boost emphasises this concern, stating “It would be dangerous for the government to ignore the immense compute power that AI relies on. We need to consider where this power is coming from and the impact it’s having on both the already over-concentrated cloud market and the environment.”
  • Commercial awareness: Many SMEs lack familiarity with digital technology. Almost a third (31%) of SMEs have yet to adopt the cloud, and nearly half (47%) do not currently use AI tools or applications.
  • Skills shortage: Two-fifths of businesses struggle to find staff with good digital skills, including traditional digital roles like data analytics or IT. There is a rising need for workers with new AI-specific skills, such as prompt engineering, that will require retraining and upskilling opportunities.

To address these bottlenecks, the government has implemented several initiatives:

  • Private sector investment: Microsoft has announced a £2.5 billion investment in AI skills, security, and data centre infrastructure, aiming to procure more than 20,000 of the most advanced GPUs by 2026.
  • Government support: The government has invested £1.5 billion in computing capacity and committed to building three new supercomputers by 2025. This support aims to enhance the UK’s infrastructure to stay competitive in the AI market.
  • Public sector integration: The UK Government Digital Service (GDS) is working to improve efficiency using predictive algorithms for future pension scheme behaviour. HMRC uses AI to help identify call centre priorities, demonstrating how AI solutions can address complex public sector challenges.

Future prospects and challenges

The future of the UK AI sector is both promising and challenging. While significant economic gains are predicted, including boosting GDP by £550 billion by 2035, delays in AI roll-out could cost the UK £150 billion over the same period. Ensuring a balanced approach between innovation and regulation will be crucial.

Boost emphasises the importance of data sovereignty and privacy: “Businesses have grown increasingly wary of how their data is collected, stored, and used by the likes of ChatGPT. The government has a real opportunity to enable the UK AI sector to offer viable alternatives.

“The forthcoming AI Action Plan will be another opportunity to identify how AI can drive economic growth and better support the UK tech sector.”

  • AI Safety Summit: The AI Safety Summit at Bletchley Park highlighted the need for responsible AI development. The “Bletchley Declaration on AI Safety” emphasises the importance of ensuring AI tools are transparent, fair, and free from bias to maintain public trust and realise AI’s benefits in public services.
  • Cybersecurity challenges: As AI systems handle sensitive or personal information, ensuring their security is paramount. This involves protecting against cyber threats, securing algorithms from manipulation, safeguarding data centres and hardware, and ensuring supply chain security.

The AI sector study underscores a thriving industry with significant growth potential. However, it also highlights several bottlenecks that must be addressed – infrastructure gaps, lack of commercial awareness, and skills shortages – to fully harness the sector’s potential.

(Photo by John Noonan)

See also: EU AI Act: Early prep could give businesses competitive edge

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post AI sector study: Record growth masks serious challenges appeared first on AI News.

]]>
https://www.artificialintelligence-news.com/news/ai-sector-study-record-growth-masks-serious-challenges/feed/ 0
Many organisations unprepared for AI cybersecurity threats https://www.artificialintelligence-news.com/news/many-organisations-unprepared-ai-cybersecurity-threats/ https://www.artificialintelligence-news.com/news/many-organisations-unprepared-ai-cybersecurity-threats/#respond Thu, 10 Oct 2024 09:59:12 +0000 https://www.artificialintelligence-news.com/?p=16268 While AI improves the detection of cybersecurity threats, it simultaneously ushers in more advanced challenges. Research from Keeper Security finds that, despite the implementation of AI-related policies, many organisations remain inadequately prepared for AI-powered threats. 84% of IT and security leaders find AI-enhanced tools have exacerbated the challenge of detecting phishing and smishing attacks, which […]

The post Many organisations unprepared for AI cybersecurity threats appeared first on AI News.

]]>
While AI improves the detection of cybersecurity threats, it simultaneously ushers in more advanced challenges.

Research from Keeper Security finds that, despite the implementation of AI-related policies, many organisations remain inadequately prepared for AI-powered threats.

84% of IT and security leaders find AI-enhanced tools have exacerbated the challenge of detecting phishing and smishing attacks, which were already significant threats. In response, 81% of organisations have enacted AI usage policies for employees. Confidence in these measures runs high, with 77% of leaders expressing familiarity with best practices for AI security.

Gap between AI cybersecurity policy and threats preparedness

More than half (51%) of security leaders view AI-driven attacks as the most severe threat to their organisations. Alarmingly, 35% of respondents feel ill-prepared to address these attacks compared to other cyber threats.

Organisations are deploying several key strategies to meet these emerging challenges:

  • Data encryption: Utilised by 51% of IT leaders, encryption serves as a crucial defence against unauthorised access and is vital against AI-fuelled attacks.
  • Employee training and awareness: With 45% of organisations prioritising enhanced training programmes, there is a focused effort to equip employees to recognise and counter AI-driven phishing and smishing intrusions.
  • Advanced threat detection systems: 41% of organisations are investing in these systems, underscoring the need for improved detection and response to sophisticated AI threats.

The advent of AI-driven cyber threats undeniably presents new challenges. Nevertheless, fundamental cybersecurity practices – such as data encryption, employee education, and advanced threat detection – continue to be essential. Organisations must ensure these essential measures are consistently re-evaluated and adjusted to counter emerging threats.

In addition to these core practices, advanced security frameworks like zero trust and Privileged Access Management (PAM) solutions can bolster an organisation’s resilience.

Zero trust demands continuous verification of all users, devices, and applications, reducing the risk of unauthorised access and minimising potential damage during an attack. PAM offers targeted security for an organisation’s most sensitive accounts, crucial for defending against complex AI-driven threats that aim at high-level credentials.

Darren Guccione, CEO and Co-Founder of Keeper Security, commented: “AI-driven attacks are a formidable challenge, but by reinforcing our cybersecurity fundamentals and adopting advanced security measures, we can build resilient defences against these evolving threats.”

Proactivity is also key for organisations—regularly reviewing security policies, performing routine audits, and fostering a culture of cybersecurity awareness are all essential.

While organisations are advancing, cybersecurity requires perpetual vigilance. Merging traditional practices with modern approaches like zero trust and PAM will empower organisations to maintain an edge over developing AI-powered threats.

(Photo by Growtika)

See also: King’s Business School: How AI is transforming problem-solving

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post Many organisations unprepared for AI cybersecurity threats appeared first on AI News.

]]>
https://www.artificialintelligence-news.com/news/many-organisations-unprepared-ai-cybersecurity-threats/feed/ 0
Kunal Anand, F5: Shaping AI-optimised networks and enhancing security https://www.artificialintelligence-news.com/news/f5-shaping-ai-optimised-networks-and-enhancing-security/ https://www.artificialintelligence-news.com/news/f5-shaping-ai-optimised-networks-and-enhancing-security/#respond Tue, 24 Sep 2024 06:49:22 +0000 https://www.artificialintelligence-news.com/?p=16144 As AI applications evolve, they place greater demands on network infrastructure, particularly in terms of latency and connectivity. Supporting large-scale AI deployments introduces new issues, and analysts predict that AI-related traffic will soon account for a major portion of total network traffic. The industry must be prepared to handle this surge effectively. F5 is adapting […]

The post Kunal Anand, F5: Shaping AI-optimised networks and enhancing security appeared first on AI News.

]]>
As AI applications evolve, they place greater demands on network infrastructure, particularly in terms of latency and connectivity.

Supporting large-scale AI deployments introduces new issues, and analysts predict that AI-related traffic will soon account for a major portion of total network traffic. The industry must be prepared to handle this surge effectively. F5 is adapting its solutions to manage the complexity of AI workloads, and its technology now includes real-time processing of multimodal data.

Kunal Anand, Chief Technology and AI Officer at F5
Kunal Anand, Chief Technology and AI Officer at F5 (Source – F5)

AI presents both opportunities and risks in security, as it has the capability to enhance protection while also enabling AI-driven cyber threats. Collaboration among hyperscalers, telcos, and technology companies is critical for establishing AI-optimised networks. Collaboration and innovation continue to change the AI networking landscape, and F5 is dedicated to driving progress in this area.

Ahead of AI & Big Data Expo Europe, Kunal Anand, Chief Technology and AI Officer at F5, discusses the company’s role and initiatives to stay at the forefront of AI-enabled networking solutions.

AI News: As AI applications evolve, the demands on network infrastructure are becoming more complex. What key challenges does the industry face regarding latency and connectivity in supporting large-scale AI deployments?

Anand: F5 discovered that AI has drastically transformed application architectures. Some companies are investing billions of dollars in AI factories – massive GPU clusters – while others prefer cloud-based solutions or small language models (SLMs) as less expensive alternatives.

Network architectures are evolving to address these challenges. AI factories operate on distinct networking stacks, like InfiniBand with specific GPUs like the H100s or NVIDIA’s upcoming Blackwell series. At the same time, cloud-based technologies and GPU clouds are advancing.

A major trend is data gravity, where organisations’ data is locked in specific environments. This has driven the evolution of multi-cloud architectures, allowing workloads to link with data across environments for retrieval-augmented generation (RAG).

As RAG demands rise, organisations face higher latency because of limited resources, whether from heavily used data stores or limited sets of GPU servers.

AI News: As analysts predict AI-related traffic will soon make up a significant portion of network traffic. What unique challenges does this influx of AI-generated traffic pose for existing network infrastructure, and how do you see the industry preparing for it?

Anand: F5 believes that by the end of the decade, most applications will be AI-powered or AI-driven, necessitating augmentation across the network services chain. These applications will use APIs to communicate with AI factories and third-party services, access data for RAG, and potentially expose their own APIs. Essentially, APIs will be the glue holding this ecosystem together, as analysts have suggested.

Looking ahead, AI-related traffic is expected to dominate network traffic as AI becomes increasingly integrated into applications and APIs. As AI becomes central to practically all applications, AI-related traffic will naturally increase.

AI News: With AI applications becoming more complex and processing multimodal data in real time, how is F5 adapting its solutions to ensure networks can efficiently manage these dynamic workloads?

Anand: F5 looks at this from many angles. In the case of RAG, when data – whether images, binary streams, or text – must be retrieved from a data storage, the method is the same regardless of data format. Customers often want quick Layer 4 load balancing, traffic management, and steering capabilities, all of which F5 excels at. The company provides organisations with load balancing, traffic management, and security services, guaranteeing RAG has efficient data access. F5 has also enabled load balancing among AI factories.

In some cases, large organisations manage massive GPU clusters with tens of thousands of GPUs. Since AI workloads are unpredictable, these GPUs may be available or unavailable depending on the workload. F5 ensures efficient traffic routing, mitigating the unpredictability of AI workloads.

F5 improves performance, increases throughput, and adds security capabilities for organisations building AI factories and clusters.

AI News: As AI enhances security while also posing AI-driven cyber threats, what approaches is F5 taking to strengthen network security and resilience against these evolving challenges?

Anand: There are many different AI-related challenges on the way. Attackers are already employing AI to generate new payloads, find loopholes, and launch unique attacks. For example, ChatGPT and visual transformers have the ability to break CAPTCHAs, especially interactive ones. Recent demonstrations have shown the sophistication of these attacks.

As seen in past security patterns, every time attackers gain an advantage with new technology, defenders must rise to the challenge. This often necessitates reconsidering security models, like shifting from “allow everything, deny some” to “allow some, deny everything.” Many organisations are exploring solutions to combat AI-driven threats.

F5 is making big investments to keep ahead of AI-driven threats. As part of its F5 intelligence programme, the company is developing, training, and deploying models, which are supported by its AI Center of Excellence.

Earlier this year, F5 launched an AI data fabric, with a team dedicated to developing models that serve the entire business, from policy creation to insight delivery. F5 feels it is well placed to face these rising issues.

AI News: What role do partnerships play in developing the next generation of AI-optimised networks, especially between hyperscalers, telcos, and tech companies?

Anand: Partnerships are important for AI development. The AI stack is complex and involves several components, including electricity, data centres, hardware, servers, GPUs, memory, computational power, and a networking stack, all of which must function together. It is unusual for a single organisation to oversee everything from start to finish.

F5 focuses on establishing and maintaining the necessary partnerships in computation, networking, and storage to support AI.

AI News: How does F5 view its role in advancing AI networking, and what initiatives are you focusing on to stay at the forefront of AI-enabled networking solutions?

Anand: F5 is committed to developing its technology platform. The AI Data Fabric, launched earlier this year, will work with the AI Center of Excellence to prepare the organisation for the future.

F5 is also forming strong partnerships, with announcements to come. The company is excited about its work and the rapid pace of global change. F5’s unique vantage point – processing worldwide traffic – enables it to correlate data insights with industry trends. F5 also intends to be more forthcoming about its research and models, with some open-source contributions coming soon.

Overall, F5 is incredibly optimistic about the future. The transformative impact of AI is remarkable, and it is an exciting time to be part of this shift.

(Image by Lucent_Designs_dinoson20)

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post Kunal Anand, F5: Shaping AI-optimised networks and enhancing security appeared first on AI News.

]]>
https://www.artificialintelligence-news.com/news/f5-shaping-ai-optimised-networks-and-enhancing-security/feed/ 0
PSA Certified: AI growth outpacing security measures https://www.artificialintelligence-news.com/news/psa-certified-ai-growth-outpacing-security-measures/ https://www.artificialintelligence-news.com/news/psa-certified-ai-growth-outpacing-security-measures/#respond Thu, 15 Aug 2024 08:20:44 +0000 https://www.artificialintelligence-news.com/?p=15750 While the industry acknowledges the need for robust security measures, research from PSA Certified suggests that investment and best practices are struggling to keep pace with AI’s rapid growth. The survey of 1,260 global technology decision-makers revealed that two-thirds (68%) are concerned that the speed of AI advancements is outstripping the industry’s ability to safeguard […]

The post PSA Certified: AI growth outpacing security measures appeared first on AI News.

]]>
While the industry acknowledges the need for robust security measures, research from PSA Certified suggests that investment and best practices are struggling to keep pace with AI’s rapid growth.

The survey of 1,260 global technology decision-makers revealed that two-thirds (68%) are concerned that the speed of AI advancements is outstripping the industry’s ability to safeguard products, devices, and services. This apprehension is driving a surge in edge computing adoption, with 85% believing that security concerns will push more AI use cases to the edge.

Edge computing – which processes data locally on devices instead of relying on centralised cloud systems – offers inherent advantages in efficiency, security, and privacy. However, this shift to the edge necessitates a heightened focus on device security.

“There is an important interconnect between AI and security: one doesn’t scale without the other,” cautions David Maidment, Senior Director, Market Strategy at Arm (a PSA Certified co-founder). “While AI is a huge opportunity, its proliferation also offers that same opportunity to bad actors.”

Despite recognising security as paramount, a significant disconnect exists between awareness and action. Only half (50%) of those surveyed believe their current security investments are sufficient.  Furthermore, essential security practices, such as independent certifications and threat modelling, are being neglected by a substantial portion of respondents.

“It’s more imperative than ever that those in the connected device ecosystem don’t skip best practice security in the hunt for AI features,” emphasises Maidment. “The entire value chain needs to take collective responsibility and ensure that consumer trust in AI driven services is maintained.”

The report highlights the need for a holistic approach to security, embedded throughout the entire AI lifecycle, from device deployment to the management of AI models operating at the edge. This proactive approach, incorporating security-by-design principles, is deemed essential to building consumer trust and mitigating the escalating security risks.

Despite the concerns, a sense of optimism prevails within the industry. A majority (67%) of decisionmakers believe their organisations are equipped to handle the potential security risks associated with AI’s surge. There is a growing recognition of the need to prioritise security investment – 46% are focused on bolstering security, compared to 39% prioritising AI readiness.

“Those looking to unleash the full potential of AI must ensure they are taking the right steps to mitigate potential security risks,” says Maidment. “As stakeholders in the connected device ecosystem rapidly embrace a new set of AI-enabled use cases, it’s crucial that they do not simply forge ahead with AI regardless of security implications.”

(Photo by Braden Collum)

See also: The AI revolution: Reshaping data centres and the digital landscape 

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post PSA Certified: AI growth outpacing security measures appeared first on AI News.

]]>
https://www.artificialintelligence-news.com/news/psa-certified-ai-growth-outpacing-security-measures/feed/ 0
Microsoft details ‘Skeleton Key’ AI jailbreak https://www.artificialintelligence-news.com/news/microsoft-details-skeleton-key-ai-jailbreak/ https://www.artificialintelligence-news.com/news/microsoft-details-skeleton-key-ai-jailbreak/#respond Fri, 28 Jun 2024 12:24:24 +0000 https://www.artificialintelligence-news.com/?p=15146 Microsoft has disclosed a new type of AI jailbreak attack dubbed “Skeleton Key,” which can bypass responsible AI guardrails in multiple generative AI models. This technique, capable of subverting most safety measures built into AI systems, highlights the critical need for robust security measures across all layers of the AI stack. The Skeleton Key jailbreak […]

The post Microsoft details ‘Skeleton Key’ AI jailbreak appeared first on AI News.

]]>
Microsoft has disclosed a new type of AI jailbreak attack dubbed “Skeleton Key,” which can bypass responsible AI guardrails in multiple generative AI models. This technique, capable of subverting most safety measures built into AI systems, highlights the critical need for robust security measures across all layers of the AI stack.

The Skeleton Key jailbreak employs a multi-turn strategy to convince an AI model to ignore its built-in safeguards. Once successful, the model becomes unable to distinguish between malicious or unsanctioned requests and legitimate ones, effectively giving attackers full control over the AI’s output.

Microsoft’s research team successfully tested the Skeleton Key technique on several prominent AI models, including Meta’s Llama3-70b-instruct, Google’s Gemini Pro, OpenAI’s GPT-3.5 Turbo and GPT-4, Mistral Large, Anthropic’s Claude 3 Opus, and Cohere Commander R Plus.

All of the affected models complied fully with requests across various risk categories, including explosives, bioweapons, political content, self-harm, racism, drugs, graphic sex, and violence.

The attack works by instructing the model to augment its behaviour guidelines, convincing it to respond to any request for information or content while providing a warning if the output might be considered offensive, harmful, or illegal. This approach, known as “Explicit: forced instruction-following,” proved effective across multiple AI systems.

“In bypassing safeguards, Skeleton Key allows the user to cause the model to produce ordinarily forbidden behaviours, which could range from production of harmful content to overriding its usual decision-making rules,” explained Microsoft.

In response to this discovery, Microsoft has implemented several protective measures in its AI offerings, including Copilot AI assistants.

Microsoft says that it has also shared its findings with other AI providers through responsible disclosure procedures and updated its Azure AI-managed models to detect and block this type of attack using Prompt Shields.

To mitigate the risks associated with Skeleton Key and similar jailbreak techniques, Microsoft recommends a multi-layered approach for AI system designers:

  • Input filtering to detect and block potentially harmful or malicious inputs
  • Careful prompt engineering of system messages to reinforce appropriate behaviour
  • Output filtering to prevent the generation of content that breaches safety criteria
  • Abuse monitoring systems trained on adversarial examples to detect and mitigate recurring problematic content or behaviours

Microsoft has also updated its PyRIT (Python Risk Identification Toolkit) to include Skeleton Key, enabling developers and security teams to test their AI systems against this new threat.

The discovery of the Skeleton Key jailbreak technique underscores the ongoing challenges in securing AI systems as they become more prevalent in various applications.

(Photo by Matt Artz)

See also: Think tank calls for AI incident reporting system

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post Microsoft details ‘Skeleton Key’ AI jailbreak appeared first on AI News.

]]>
https://www.artificialintelligence-news.com/news/microsoft-details-skeleton-key-ai-jailbreak/feed/ 0
Gil Pekelman, Atera: How businesses can harness the power of AI https://www.artificialintelligence-news.com/news/gil-pekelman-atera-how-businesses-can-harness-the-power-of-ai/ https://www.artificialintelligence-news.com/news/gil-pekelman-atera-how-businesses-can-harness-the-power-of-ai/#respond Tue, 28 May 2024 15:32:37 +0000 https://www.artificialintelligence-news.com/?p=14888 TechForge recently caught up with Gil Pekelman, CEO of all-in-one IT management platform, Atera, to discuss how AI is becoming the IT professionals’ number one companion. Can you tell us a little bit about Atera and what it does? We launched the Atera all-in-one platform for IT management in 2016, so quite a few years […]

The post Gil Pekelman, Atera: How businesses can harness the power of AI appeared first on AI News.

]]>
TechForge recently caught up with Gil Pekelman, CEO of all-in-one IT management platform, Atera, to discuss how AI is becoming the IT professionals’ number one companion.

Can you tell us a little bit about Atera and what it does?

We launched the Atera all-in-one platform for IT management in 2016, so quite a few years ago. And it’s very broad. It’s everything from technical things like patching and security to ongoing support, alerts, automations, ticket management, reports, and analytics, etc. 

Atera is a single platform that manages all your IT in a single pane of glass. The power of it – and we’re the only company that does this – is it’s a single codebase and single database for all of that. The alternative, for many years now, has been to buy four or five different products, and have them all somehow connected, which is usually very difficult. 

Here, the fact is it’s a single codebase and a single database. Everything is connected and streamlined and very intuitive. So, in essence, you sign up or start a trial and within five minutes, you’re already running with it and onboarding. It’s that intuitive.

We have 12,000+ customers in 120 countries around the world. The UK is our second-largest country in terms of business, currently. The US is the first, but the UK is right behind them.

What are the latest trends you’re seeing develop in AI this year?

From the start, we’ve been dedicated to integrating AI into our company’s DNA. Our goal has always been to use data to identify problems and alert humans so they can fix or avoid issues. Initially, we focused on leveraging data to provide solutions.

Over the past nine years, we’ve aimed to let AI handle mundane IT tasks, freeing up professionals for more engaging work. With early access to Chat GPT and Open AI tools a year and a half ago, we’ve been pioneering a new trend we call Action AI.

Unlike generic Generative AI, which creates content like songs or emails, Action AI operates in the real world, interacting with hardware and software to perform tasks autonomously. Our AI can understand IT problems and resolve them on its own, moving beyond mere dialogue to real-world action.

Atera offers Copilot and Autopilot. Could you explain what these are?

Autopilot is autonomous. It understands a problem you might have on your computer. It’s a widget on your computer, and it will communicate with you and fix the problem autonomously. However, it has boundaries on what it’s allowed to fix and what it’s not allowed to fix. And everything it’s allowed to deal with has to be bulletproof. 100% secure or private. No opportunity to do any damage or anything like that. 

So if a ticket is opened up, or a complaint is raised, if it’s outside of these boundaries, it will then activate the Copilot. The Copilot augments the IT professional.

They’re both companions. The Autopilot is a companion that takes away password resets, printer issues, installs software, etc. – mundane and repetitive issues – and the Copilot is a companion that will help the IT professional deal with the issues they deal with on a day-to-day basis. And it has all kinds of different tools. 

The Copilot is very elaborate. If you have a problem, you can ask it and it will not only give you an answer like ChatGPT, but it will research and run all kinds of tests on the network, the computer, and the printer, and it will come to a conclusion, and create the action that is required to solve it. But it won’t solve it. It will still leave that to the IT professional to think about the different information and decide what they want to do. 

Copilot can save IT professionals nearly half of their workday. While it’s been tested in the field for some time, we’re excited to officially launch it now. Meanwhile, Autopilot is still in the beta phase.

What advice would you give to any companies that are thinking about integrating AI technologies into their business operations?

I strongly recommend that companies begin integrating AI technologies immediately, but it is crucial to research and select the right and secure generative AI tools. Incorporating AI offers numerous advantages: it automates routine tasks, enhances efficiency and productivity, improves accuracy by reducing human error, and speeds up problem resolution. That being said, it’s important to pick the right generative AI tool to help you reap the benefits without compromising on security. For example, with our collaboration with Microsoft, our customers’ data is secure—it stays within the system, and the AI doesn’t use it for training or expanding its database. This ensures safety while delivering substantial benefits.

Our incorporation of AI into our product focuses on two key aspects. First, your IT team no longer has to deal with mundane, frustrating tasks. Second, for end users, issues like non-working printers, forgotten passwords, or slow internet are resolved in seconds or minutes instead of hours. This provides a measurable and significant improvement in efficiency.

There are all kinds of AIs out there. Some of them are more beneficial, some are less. Some are just Chat GPT in disguise, and it’s a very thin layer. What we do literally changes the whole interaction with IT. And we know, when IT has a problem things stop working, and you stop working. Our solution ensures everything keeps running smoothly.

What can we expect from AI over the next few years?

AI is set to become significantly more intelligent and aware. One remarkable development is its growing ability to reason, predict, and understand data. This capability enables AI to foresee issues and autonomously resolve them, showcasing an astonishing level of reasoning.

We anticipate a dual advancement: a rapid acceleration in AI’s intelligence and a substantial enhancement in its empathetic interactions, as demonstrated in the latest OpenAI release. This evolution will transform how humans engage with AI.

Our work exemplifies this shift. When non-technical users interact with our software to solve problems, AI responds with a highly empathetic, human-like approach. Users feel as though they are speaking to a real IT professional, ensuring a seamless and comforting experience.

As AI continues to evolve, it will become increasingly powerful and capable. Recent breakthroughs in understanding AI’s mechanisms will not only enhance its functionality but also ensure its security and ethical use, reinforcing its role as a force for good.

What plans does Atera have for the next year?

We are excited to announce the upcoming launch of Autopilot, scheduled for release in a few months. While Copilot, our comprehensive suite of advanced tools designed specifically for IT professionals, has already been instrumental in enhancing efficiency and effectiveness, Autopilot represents the next significant advancement.

Currently in beta so whoever wants to try it already can, Autopilot directly interacts with end users, automating and resolving common IT issues that typically burden IT staff, such as password resets and printer malfunctions. By addressing these routine tasks, Autopilot allows IT professionals to focus on more strategic and rewarding activities, ultimately improving overall productivity and job satisfaction.

For more information, visit atera.com

Atera is a sponsor of TechEx North America 2024 on June 5-6 in Santa Clara, US. Visit the Atera team at booth 237 for a personalised demo, or to test your IT skills with the company’s first-of-kind AIT game, APOLLO IT, for a chance to win a prize.

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation ConferenceBlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post Gil Pekelman, Atera: How businesses can harness the power of AI appeared first on AI News.

]]>
https://www.artificialintelligence-news.com/news/gil-pekelman-atera-how-businesses-can-harness-the-power-of-ai/feed/ 0