cybersecurity Archives - AI News https://www.artificialintelligence-news.com/news/tag/cyber-security/ Artificial Intelligence News Thu, 24 Apr 2025 11:40:59 +0000 en-GB hourly 1 https://wordpress.org/?v=6.8.1 https://www.artificialintelligence-news.com/wp-content/uploads/2020/09/cropped-ai-icon-32x32.png cybersecurity Archives - AI News https://www.artificialintelligence-news.com/news/tag/cyber-security/ 32 32 DeepSeek ban? China data transfer boosts security concerns https://www.artificialintelligence-news.com/news/deepseek-ban-china-data-transfer-boosts-security-concerns/ https://www.artificialintelligence-news.com/news/deepseek-ban-china-data-transfer-boosts-security-concerns/#respond Fri, 07 Feb 2025 17:44:01 +0000 https://www.artificialintelligence-news.com/?p=104228 US lawmakers are pushing for a DeepSeek ban after security researchers found the app transferring user data to a banned state-owned company. DeepSeek, practically unknown just weeks ago, took the tech world by storm—gaining global acclaim for its cutting-edge performance while sparking debates reminiscent of the TikTok saga. Its rise has been fuelled in part […]

The post DeepSeek ban? China data transfer boosts security concerns appeared first on AI News.

]]>
US lawmakers are pushing for a DeepSeek ban after security researchers found the app transferring user data to a banned state-owned company.

DeepSeek, practically unknown just weeks ago, took the tech world by storm—gaining global acclaim for its cutting-edge performance while sparking debates reminiscent of the TikTok saga.

Its rise has been fuelled in part by its business model: unlike many of its American counterparts, including OpenAI and Google, DeepSeek offered its advanced powers for free.

However, concerns have been raised about DeepSeek’s extensive data collection practices and a probe has been launched by Microsoft and OpenAI over a breach of the latter’s system by a group allegedly linked to the Chinese AI startup.

A threat to US AI dominance

DeepSeek’s astonishing capabilities have, within a matter of weeks, positioned it as a major competitor to American AI stalwarts like OpenAI’s ChatGPT and Google Gemini. But, alongside the app’s prowess, concerns have emerged over alleged ties to the Chinese Communist Party (CCP).  

According to security researchers, hidden code within DeepSeek’s AI has been found transmitting user data to China Mobile—a state-owned telecoms company banned in the US. DeepSeek’s own privacy policy permits the collection of data such as IP addresses, device information, and, most alarmingly, even keystroke patterns.

Such findings have led to bipartisan efforts in the US Congress to curtail DeepSeek’s influence, with lawmakers scrambling to protect sensitive data from potential CCP oversight.

Reps. Darin LaHood (R-IL) and Josh Gottheimer (D-NJ) are spearheading efforts to introduce legislation that would prohibit DeepSeek from being installed on all government-issued devices. 

Several federal agencies, among them NASA and the US Navy, have already preemptively issued a ban on DeepSeek. Similarly, the state of Texas has also introduced restrictions.

Potential ban of DeepSeek a TikTok redux?

The controversy surrounding DeepSeek bears similarities to debates over TikTok, the social video app owned by Chinese company ByteDance. TikTok remains under fire over accusations that user data is accessible to the CCP, though definitive proof has yet to materialise.

In contrast, DeepSeek’s case involves clear evidence, as revealed by cybersecurity investigators who identified the app’s unauthorised data transmissions. While some might say DeepSeek echoes the TikTok controversy, security experts argue that it represents a much starker and documented threat.

Lawmakers around the world are taking note. In addition to the US proposals, DeepSeek has already faced bans from government systems in countries including Australia, South Korea, and Italy.  

AI becomes a geopolitical battleground

The concerns over DeepSeek exemplify how AI has now become a geopolitical flashpoint between global superpowers—especially between the US and China.

American AI firms like OpenAI have enjoyed a dominant position in recent years, but Chinese companies have poured resources into catching up and, in some cases, surpassing their US competitors.  

DeepSeek’s lightning-quick growth has unsettled that balance, not only because of its AI models but also due to its pricing strategy, which undercuts competitors by offering the app free of charge. That begs the question of whether it’s truly “free” or if the cost is paid in lost privacy and security.

China Mobile’s involvement raises further eyebrows, given the state-owned telecom company’s prior sanctions and prohibition from the US market. Critics worry that data collected through platforms like DeepSeek could fill gaps in Chinese surveillance activities or even potential economic manipulations.

A nationwide DeepSeek ban is on the cards

If the proposed US legislation is passed, it could represent the first step toward nationwide restrictions or an outright ban on DeepSeek. Geopolitical tension between China and the West continues to shape policies in advanced technologies, and AI appears to be the latest arena for this ongoing chess match.  

In the meantime, calls to regulate applications like DeepSeek are likely to grow louder. Conversations about data privacy, national security, and ethical boundaries in AI development are becoming ever more urgent as individuals and organisations across the globe navigate the promises and pitfalls of next-generation tools.  

DeepSeek’s rise may have, indeed, rattled the AI hierarchy, but whether it can maintain its momentum in the face of increasing global pushback remains to be seen.

(Photo by Solen Feyissa)

See also: AVAXAI brings DeepSeek to Web3 with decentralised AI agents

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post DeepSeek ban? China data transfer boosts security concerns appeared first on AI News.

]]>
https://www.artificialintelligence-news.com/news/deepseek-ban-china-data-transfer-boosts-security-concerns/feed/ 0
Cisco: Securing enterprises in the AI era https://www.artificialintelligence-news.com/news/cisco-securing-enterprises-in-the-ai-era/ https://www.artificialintelligence-news.com/news/cisco-securing-enterprises-in-the-ai-era/#respond Wed, 15 Jan 2025 16:02:18 +0000 https://www.artificialintelligence-news.com/?p=16883 As AI becomes increasingly integral to business operations, new safety concerns and security threats emerge at an unprecedented pace—outstripping the capabilities of traditional cybersecurity solutions. The stakes are high with potentially significant repercussions. According to Cisco’s 2024 AI Readiness Index, only 29% of surveyed organisations feel fully equipped to detect and prevent unauthorised tampering with […]

The post Cisco: Securing enterprises in the AI era appeared first on AI News.

]]>
As AI becomes increasingly integral to business operations, new safety concerns and security threats emerge at an unprecedented pace—outstripping the capabilities of traditional cybersecurity solutions.

The stakes are high with potentially significant repercussions. According to Cisco’s 2024 AI Readiness Index, only 29% of surveyed organisations feel fully equipped to detect and prevent unauthorised tampering with AI technologies.

Continuous model validation

DJ Sampath, Head of AI Software & Platform at Cisco, said: “When we talk about model validation, it is not just a one time thing, right? You’re doing the model validation on a continuous basis.

Headshot of DJ Sampath from Cisco for an article on securing enterprises in the AI era.

“So as you see changes happen to the model – if you’re doing any type of finetuning, or you discover new attacks that are starting to show up that you need the models to learn from – we’re constantly learning all of that information and revalidating the model to see how these models are behaving under these new attacks that we’ve discovered.

“The other very important point is that we have a really advanced threat research team which is constantly looking at these AI attacks and understanding how these attacks can further be enhanced. In fact, we’re, we’re, we’re contributing to the work groups inside of standards organisations like MITRE, OWASP, and NIST.”

Beyond preventing harmful outputs, Cisco addresses the vulnerabilities of AI models to malicious external influences that can change their behaviour. These risks include prompt injection attacks, jailbreaking, and training data poisoning—each demanding stringent preventive measures.

Evolution brings new complexities

Frank Dickson, Group VP for Security & Trust at IDC, gave his take on the evolution of cybersecurity over time and what advancements in AI mean for the industry.

“The first macro trend was that we moved from on-premise to the cloud and that introduced this whole host of new problem statements that we had to address. And then as applications move from monolithic to microservices, we saw this whole host of new problem sets.

Headshot of Frank Dickson from IDC for an article on securing enterprises in the AI era.

“AI and the addition of LLMs… same thing, whole host of new problem sets.”

The complexities of AI security are heightened as applications become multi-model. Vulnerabilities can arise at various levels – from models to apps – implicating different stakeholders such as developers, end-users, and vendors.

“Once an application moved from on-premise to the cloud, it kind of stayed there. Yes, we developed applications across multiple clouds, but once you put an application in AWS or Azure or GCP, you didn’t jump it across those various cloud environments monthly, quarterly, weekly, right?

“Once you move from monolithic application development to microservices, you stay there. Once you put an application in Kubernetes, you don’t jump back into something else.

“As you look to secure a LLM, the important thing to note is the model changes. And when we talk about model change, it’s not like it’s a revision … this week maybe [developers are] using Anthropic, next week they may be using Gemini.

“They’re completely different and the threat vectors of each model are completely different. They all have their strengths and they all have their dramatic weaknesses.”

Unlike conventional safety measures integrated into individual models, Cisco delivers controls for a multi-model environment through its newly-announced AI Defense. The solution is self-optimising, using Cisco’s proprietary machine learning algorithms to identify evolving AI safety and security concerns—informed by threat intelligence from Cisco Talos.

Adjusting to the new normal

Jeetu Patel, Executive VP and Chief Product Officer at Cisco, shared his view that major advancements in a short period of time always seem revolutionary but quickly feel normal.

Headshot of Jeetu Patel from Cisco for an article on securing enterprises in the AI era.

Waymo is, you know, self-driving cars from Google. You get in, and there’s no one sitting in the car, and it takes you from point A to point B. It feels mind-bendingly amazing, like we are living in the future. The second time, you kind of get used to it. The third time, you start complaining about the seats.

“Even how quickly we’ve gotten used to AI and ChatGPT over the course of the past couple years, I think what will happen is any major advancement will feel exceptionally progressive for a short period of time. Then there’s a normalisation that happens where everyone starts getting used to it.”

Patel believes that normalisation will happen with AGI as well. However, he notes that “you cannot underestimate the progress that these models are starting to make” and, ultimately, the kind of use cases they are going to unlock.

“No-one had thought that we would have a smartphone that’s gonna have more compute capacity than the mainframe computer at your fingertips and be able to do thousands of things on it at any point in time and now it’s just another way of life. My 14-year-old daughter doesn’t even think about it.

“We ought to make sure that we as companies get adjusted to that very quickly.”

See also: Sam Altman, OpenAI: ‘Lucky and humbling’ to work towards superintelligence

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post Cisco: Securing enterprises in the AI era appeared first on AI News.

]]>
https://www.artificialintelligence-news.com/news/cisco-securing-enterprises-in-the-ai-era/feed/ 0
CrowdStrike: Cybersecurity pros want safer, specialist GenAI tools https://www.artificialintelligence-news.com/news/crowdstrike-cybersecurity-pros-safer-specialist-genai-tools/ https://www.artificialintelligence-news.com/news/crowdstrike-cybersecurity-pros-safer-specialist-genai-tools/#respond Tue, 17 Dec 2024 13:00:13 +0000 https://www.artificialintelligence-news.com/?p=16724 CrowdStrike commissioned a survey of 1,022 cybersecurity professionals worldwide to assess their views on generative AI (GenAI) adoption and its implications. The findings reveal enthusiasm for GenAI’s potential to bolster defences against increasingly sophisticated threats, but also trepidation over risks such as data exposure and attacks on GenAI systems. While much has been speculated about […]

The post CrowdStrike: Cybersecurity pros want safer, specialist GenAI tools appeared first on AI News.

]]>
CrowdStrike commissioned a survey of 1,022 cybersecurity professionals worldwide to assess their views on generative AI (GenAI) adoption and its implications.

The findings reveal enthusiasm for GenAI’s potential to bolster defences against increasingly sophisticated threats, but also trepidation over risks such as data exposure and attacks on GenAI systems.

While much has been speculated about the transformative impact of GenAI, the survey’s results paint a clearer picture of how practitioners are thinking about its role in cybersecurity.

According to the report, “We’re entering the era of GenAI in cybersecurity.” However, as organisations adopt this promising technology, their success will hinge on ensuring the safe, responsible, and industry-specific deployment of GenAI tools.

CrowdStrike’s research reveals five pivotal findings that shape the current state of GenAI in cybersecurity:

  1. Platform-based GenAI is favoured 

80% of respondents indicated a preference for GenAI delivered through integrated cybersecurity platforms rather than standalone tools. Seamless integration is cited as a crucial factor, with many preferring tools that work cohesively with existing systems. “GenAI’s value is linked to how well it works within the broader technology ecosystem,” the report states. 

Moreover, almost two-thirds (63%) of those surveyed expressed willingness to switch security vendors to access GenAI capabilities from competitors. The survey underscores the industry’s readiness for unified platforms that streamline operations and reduce the complexity of adopting new point solutions.

  1. GenAI built by cybersecurity experts is a must

Security teams believe GenAI tools should be specifically designed for cybersecurity, not general-purpose systems. 83% of respondents reported they would not trust tools that provide “unsuitable or ill-advised security guidance.”

Breach prevention remains a key motivator, with 74% stating they had faced breaches within the past 18 months or were concerned about vulnerabilities. Respondents prioritised tools from vendors with proven expertise in cybersecurity, incident response, and threat intelligence over suppliers with broad AI leadership alone. 

As CrowdStrike summarised, “The emphasis on breach prevention and vendor expertise suggests security teams would avoid domain-agnostic GenAI tools.”

  1. Augmentation, not replacement 

Despite growing fears of automation replacing jobs in many industries, the survey’s findings indicate minimal concerns about job displacement in cybersecurity. Instead, respondents expect GenAI to empower security analysts by automating repetitive tasks, reducing burnout, onboarding new personnel faster, and accelerating decision-making.

GenAI’s potential for augmenting analysts’ workflows was underscored by its most requested applications: threat intelligence analysis, assistance with investigations, and automated response mechanisms. As noted in the report, “Respondents overwhelmingly believe GenAI will ultimately optimise the analyst experience, not replace human labour.”

  1. ROI outweighs cost concerns  

For organisations evaluating GenAI investments, measurable return on investment (ROI) is the paramount concern, ahead of licensing costs or pricing model confusion. Respondents expect platform-led GenAI deployments to deliver faster results, thanks to cost savings from reduced tool management burdens, streamlined training, and fewer security incidents.

According to the survey data, the expected ROI breakdown includes 31% from cost optimisation and more efficient tools, 30% from fewer incidents, and 26% from reduced management time. Security leaders are clearly focused on ensuring the financial justification for GenAI investments.

  1. Guardrails and safety are crucial 

GenAI adoption is tempered by concerns around safety and privacy, with 87% of organisations either implementing or planning new security policies to oversee GenAI use. Key risks include exposing sensitive data to large language models (LLMs) and adversarial attacks on GenAI tools. Respondents rank safety and privacy controls among their most desired GenAI features, highlighting the need for responsible implementation.

Reflecting the cautious optimism of practitioners, only 39% of respondents firmly believed that the rewards of GenAI outweigh its risks. Meanwhile, 40% considered the risks and rewards “comparable.”

Current state of GenAI adoption in cybersecurity

GenAI adoption remains in its early stages, but interest is growing. 64% of respondents are actively researching or have already invested in GenAI tools, and 69% of those currently evaluating their options plan to make a purchase within the year. 

Security teams are primarily driven by three concerns: improving attack detection and response, enhancing operational efficiency, and mitigating the impact of staff shortages. Among economic considerations, the top priority is ROI – a sign that security leaders are keen to demonstrate tangible benefits to justify their spending.

CrowdStrike emphasises the importance of a platform-based approach, where GenAI is integrated into a unified system. Such platforms enable seamless adoption, measurable benefits, and safety guardrails for responsible usage. According to the report, “The future of GenAI in cybersecurity will be defined by tools that not only advance security but also uphold the highest standards of safety and privacy.”

The CrowdStrike survey concludes by affirming that “GenAI is not a silver bullet” but has tremendous potential to improve cybersecurity outcomes. As organisations evaluate its adoption, they will prioritise tools that integrate seamlessly with existing platforms, deliver faster response times, and ensure safety and privacy compliance.

With threats becoming more sophisticated, the role of GenAI in enabling security teams to work faster and smarter could prove indispensable. While still in its infancy, GenAI in cybersecurity is poised to shift from early adoption to mainstream deployment, provided organisations and vendors address its risks responsibly.

See also: Keys to AI success: Security, sustainability, and overcoming silos

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post CrowdStrike: Cybersecurity pros want safer, specialist GenAI tools appeared first on AI News.

]]>
https://www.artificialintelligence-news.com/news/crowdstrike-cybersecurity-pros-safer-specialist-genai-tools/feed/ 0
UK establishes LASR to counter AI security threats https://www.artificialintelligence-news.com/news/uk-establishes-lasr-counter-ai-security-threats/ https://www.artificialintelligence-news.com/news/uk-establishes-lasr-counter-ai-security-threats/#respond Mon, 25 Nov 2024 11:31:13 +0000 https://www.artificialintelligence-news.com/?p=16550 The UK is establishing the Laboratory for AI Security Research (LASR) to help protect Britain and its allies against emerging threats in what officials describe as an “AI arms race.” The laboratory – which will receive an initial government funding of £8.22 million – aims to bring together experts from industry, academia, and government to […]

The post UK establishes LASR to counter AI security threats appeared first on AI News.

]]>
The UK is establishing the Laboratory for AI Security Research (LASR) to help protect Britain and its allies against emerging threats in what officials describe as an “AI arms race.”

The laboratory – which will receive an initial government funding of £8.22 million – aims to bring together experts from industry, academia, and government to assess AI’s impact on national security. The announcement comes as part of a broader strategy to strengthen the UK’s cyber defence capabilities.

Speaking at the NATO Cyber Defence Conference at Lancaster House, the Chancellor of the Duchy of Lancaster said: “NATO needs to continue to adapt to the world of AI, because as the tech evolves, the threat evolves.

“NATO has stayed relevant over the last seven decades by constantly adapting to new threats. It has navigated the worlds of nuclear proliferation and militant nationalism. The move from cold warfare to drone warfare.”

The Chancellor painted a stark picture of the current cyber security landscape, stating: “Cyber war is now a daily reality. One where our defences are constantly being tested. The extent of the threat must be matched by the strength of our resolve to combat it and to protect our citizens and systems.”

The new laboratory will operate under a ‘catalytic’ model, designed to attract additional investment and collaboration from industry partners.

Key stakeholders in the new lab include GCHQ, the National Cyber Security Centre, the MOD’s Defence Science and Technology Laboratory, and prestigious academic institutions such as the University of Oxford and Queen’s University Belfast.

In a direct warning about Russia’s activities, the Chancellor declared: “Be in no doubt: the United Kingdom and others in this room are watching Russia. We know exactly what they are doing, and we are countering their attacks both publicly and behind the scenes.

“We know from history that appeasing dictators engaged in aggression against their neighbours only encourages them. Britain learned long ago the importance of standing strong in the face of such actions.”

Reaffirming support for Ukraine, he added, “Putin is a man who wants destruction, not peace. He is trying to deter our support for Ukraine with his threats. He will not be successful.”

The new lab follows recent concerns about state actors using AI to bolster existing security threats.

“Last year, we saw the US for the first time publicly call out a state for using AI to aid its malicious cyber activity,” the Chancellor noted, referring to North Korea’s attempts to use AI for malware development and vulnerability scanning.

Stephen Doughty, Minister for Europe, North America and UK Overseas Territories, highlighted the dual nature of AI technology: “AI has enormous potential. To ensure it remains a force for good in the world, we need to understand its threats and its opportunities.”

Alongside LASR, the government announced a new £1 million incident response project to enhance collaborative cyber defence capabilities among allies. The laboratory will prioritise collaboration with Five Eyes countries and NATO allies, building on the UK’s historical strength in computing, dating back to Alan Turing’s groundbreaking work.

The initiative forms part of the government’s comprehensive approach to cybersecurity, which includes the upcoming Cyber Security and Resilience Bill and the recent classification of data centres as critical national infrastructure.

(Photo by Erik Mclean)

See also: Anthropic urges AI regulation to avoid catastrophes

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post UK establishes LASR to counter AI security threats appeared first on AI News.

]]>
https://www.artificialintelligence-news.com/news/uk-establishes-lasr-counter-ai-security-threats/feed/ 0
Many organisations unprepared for AI cybersecurity threats https://www.artificialintelligence-news.com/news/many-organisations-unprepared-ai-cybersecurity-threats/ https://www.artificialintelligence-news.com/news/many-organisations-unprepared-ai-cybersecurity-threats/#respond Thu, 10 Oct 2024 09:59:12 +0000 https://www.artificialintelligence-news.com/?p=16268 While AI improves the detection of cybersecurity threats, it simultaneously ushers in more advanced challenges. Research from Keeper Security finds that, despite the implementation of AI-related policies, many organisations remain inadequately prepared for AI-powered threats. 84% of IT and security leaders find AI-enhanced tools have exacerbated the challenge of detecting phishing and smishing attacks, which […]

The post Many organisations unprepared for AI cybersecurity threats appeared first on AI News.

]]>
While AI improves the detection of cybersecurity threats, it simultaneously ushers in more advanced challenges.

Research from Keeper Security finds that, despite the implementation of AI-related policies, many organisations remain inadequately prepared for AI-powered threats.

84% of IT and security leaders find AI-enhanced tools have exacerbated the challenge of detecting phishing and smishing attacks, which were already significant threats. In response, 81% of organisations have enacted AI usage policies for employees. Confidence in these measures runs high, with 77% of leaders expressing familiarity with best practices for AI security.

Gap between AI cybersecurity policy and threats preparedness

More than half (51%) of security leaders view AI-driven attacks as the most severe threat to their organisations. Alarmingly, 35% of respondents feel ill-prepared to address these attacks compared to other cyber threats.

Organisations are deploying several key strategies to meet these emerging challenges:

  • Data encryption: Utilised by 51% of IT leaders, encryption serves as a crucial defence against unauthorised access and is vital against AI-fuelled attacks.
  • Employee training and awareness: With 45% of organisations prioritising enhanced training programmes, there is a focused effort to equip employees to recognise and counter AI-driven phishing and smishing intrusions.
  • Advanced threat detection systems: 41% of organisations are investing in these systems, underscoring the need for improved detection and response to sophisticated AI threats.

The advent of AI-driven cyber threats undeniably presents new challenges. Nevertheless, fundamental cybersecurity practices – such as data encryption, employee education, and advanced threat detection – continue to be essential. Organisations must ensure these essential measures are consistently re-evaluated and adjusted to counter emerging threats.

In addition to these core practices, advanced security frameworks like zero trust and Privileged Access Management (PAM) solutions can bolster an organisation’s resilience.

Zero trust demands continuous verification of all users, devices, and applications, reducing the risk of unauthorised access and minimising potential damage during an attack. PAM offers targeted security for an organisation’s most sensitive accounts, crucial for defending against complex AI-driven threats that aim at high-level credentials.

Darren Guccione, CEO and Co-Founder of Keeper Security, commented: “AI-driven attacks are a formidable challenge, but by reinforcing our cybersecurity fundamentals and adopting advanced security measures, we can build resilient defences against these evolving threats.”

Proactivity is also key for organisations—regularly reviewing security policies, performing routine audits, and fostering a culture of cybersecurity awareness are all essential.

While organisations are advancing, cybersecurity requires perpetual vigilance. Merging traditional practices with modern approaches like zero trust and PAM will empower organisations to maintain an edge over developing AI-powered threats.

(Photo by Growtika)

See also: King’s Business School: How AI is transforming problem-solving

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post Many organisations unprepared for AI cybersecurity threats appeared first on AI News.

]]>
https://www.artificialintelligence-news.com/news/many-organisations-unprepared-ai-cybersecurity-threats/feed/ 0
PSA Certified: AI growth outpacing security measures https://www.artificialintelligence-news.com/news/psa-certified-ai-growth-outpacing-security-measures/ https://www.artificialintelligence-news.com/news/psa-certified-ai-growth-outpacing-security-measures/#respond Thu, 15 Aug 2024 08:20:44 +0000 https://www.artificialintelligence-news.com/?p=15750 While the industry acknowledges the need for robust security measures, research from PSA Certified suggests that investment and best practices are struggling to keep pace with AI’s rapid growth. The survey of 1,260 global technology decision-makers revealed that two-thirds (68%) are concerned that the speed of AI advancements is outstripping the industry’s ability to safeguard […]

The post PSA Certified: AI growth outpacing security measures appeared first on AI News.

]]>
While the industry acknowledges the need for robust security measures, research from PSA Certified suggests that investment and best practices are struggling to keep pace with AI’s rapid growth.

The survey of 1,260 global technology decision-makers revealed that two-thirds (68%) are concerned that the speed of AI advancements is outstripping the industry’s ability to safeguard products, devices, and services. This apprehension is driving a surge in edge computing adoption, with 85% believing that security concerns will push more AI use cases to the edge.

Edge computing – which processes data locally on devices instead of relying on centralised cloud systems – offers inherent advantages in efficiency, security, and privacy. However, this shift to the edge necessitates a heightened focus on device security.

“There is an important interconnect between AI and security: one doesn’t scale without the other,” cautions David Maidment, Senior Director, Market Strategy at Arm (a PSA Certified co-founder). “While AI is a huge opportunity, its proliferation also offers that same opportunity to bad actors.”

Despite recognising security as paramount, a significant disconnect exists between awareness and action. Only half (50%) of those surveyed believe their current security investments are sufficient.  Furthermore, essential security practices, such as independent certifications and threat modelling, are being neglected by a substantial portion of respondents.

“It’s more imperative than ever that those in the connected device ecosystem don’t skip best practice security in the hunt for AI features,” emphasises Maidment. “The entire value chain needs to take collective responsibility and ensure that consumer trust in AI driven services is maintained.”

The report highlights the need for a holistic approach to security, embedded throughout the entire AI lifecycle, from device deployment to the management of AI models operating at the edge. This proactive approach, incorporating security-by-design principles, is deemed essential to building consumer trust and mitigating the escalating security risks.

Despite the concerns, a sense of optimism prevails within the industry. A majority (67%) of decisionmakers believe their organisations are equipped to handle the potential security risks associated with AI’s surge. There is a growing recognition of the need to prioritise security investment – 46% are focused on bolstering security, compared to 39% prioritising AI readiness.

“Those looking to unleash the full potential of AI must ensure they are taking the right steps to mitigate potential security risks,” says Maidment. “As stakeholders in the connected device ecosystem rapidly embrace a new set of AI-enabled use cases, it’s crucial that they do not simply forge ahead with AI regardless of security implications.”

(Photo by Braden Collum)

See also: The AI revolution: Reshaping data centres and the digital landscape 

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post PSA Certified: AI growth outpacing security measures appeared first on AI News.

]]>
https://www.artificialintelligence-news.com/news/psa-certified-ai-growth-outpacing-security-measures/feed/ 0
Microsoft details ‘Skeleton Key’ AI jailbreak https://www.artificialintelligence-news.com/news/microsoft-details-skeleton-key-ai-jailbreak/ https://www.artificialintelligence-news.com/news/microsoft-details-skeleton-key-ai-jailbreak/#respond Fri, 28 Jun 2024 12:24:24 +0000 https://www.artificialintelligence-news.com/?p=15146 Microsoft has disclosed a new type of AI jailbreak attack dubbed “Skeleton Key,” which can bypass responsible AI guardrails in multiple generative AI models. This technique, capable of subverting most safety measures built into AI systems, highlights the critical need for robust security measures across all layers of the AI stack. The Skeleton Key jailbreak […]

The post Microsoft details ‘Skeleton Key’ AI jailbreak appeared first on AI News.

]]>
Microsoft has disclosed a new type of AI jailbreak attack dubbed “Skeleton Key,” which can bypass responsible AI guardrails in multiple generative AI models. This technique, capable of subverting most safety measures built into AI systems, highlights the critical need for robust security measures across all layers of the AI stack.

The Skeleton Key jailbreak employs a multi-turn strategy to convince an AI model to ignore its built-in safeguards. Once successful, the model becomes unable to distinguish between malicious or unsanctioned requests and legitimate ones, effectively giving attackers full control over the AI’s output.

Microsoft’s research team successfully tested the Skeleton Key technique on several prominent AI models, including Meta’s Llama3-70b-instruct, Google’s Gemini Pro, OpenAI’s GPT-3.5 Turbo and GPT-4, Mistral Large, Anthropic’s Claude 3 Opus, and Cohere Commander R Plus.

All of the affected models complied fully with requests across various risk categories, including explosives, bioweapons, political content, self-harm, racism, drugs, graphic sex, and violence.

The attack works by instructing the model to augment its behaviour guidelines, convincing it to respond to any request for information or content while providing a warning if the output might be considered offensive, harmful, or illegal. This approach, known as “Explicit: forced instruction-following,” proved effective across multiple AI systems.

“In bypassing safeguards, Skeleton Key allows the user to cause the model to produce ordinarily forbidden behaviours, which could range from production of harmful content to overriding its usual decision-making rules,” explained Microsoft.

In response to this discovery, Microsoft has implemented several protective measures in its AI offerings, including Copilot AI assistants.

Microsoft says that it has also shared its findings with other AI providers through responsible disclosure procedures and updated its Azure AI-managed models to detect and block this type of attack using Prompt Shields.

To mitigate the risks associated with Skeleton Key and similar jailbreak techniques, Microsoft recommends a multi-layered approach for AI system designers:

  • Input filtering to detect and block potentially harmful or malicious inputs
  • Careful prompt engineering of system messages to reinforce appropriate behaviour
  • Output filtering to prevent the generation of content that breaches safety criteria
  • Abuse monitoring systems trained on adversarial examples to detect and mitigate recurring problematic content or behaviours

Microsoft has also updated its PyRIT (Python Risk Identification Toolkit) to include Skeleton Key, enabling developers and security teams to test their AI systems against this new threat.

The discovery of the Skeleton Key jailbreak technique underscores the ongoing challenges in securing AI systems as they become more prevalent in various applications.

(Photo by Matt Artz)

See also: Think tank calls for AI incident reporting system

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post Microsoft details ‘Skeleton Key’ AI jailbreak appeared first on AI News.

]]>
https://www.artificialintelligence-news.com/news/microsoft-details-skeleton-key-ai-jailbreak/feed/ 0
NCSC: AI to significantly boost cyber threats over next two years https://www.artificialintelligence-news.com/news/ncsc-ai-significantly-boost-cyber-threats-next-two-years/ https://www.artificialintelligence-news.com/news/ncsc-ai-significantly-boost-cyber-threats-next-two-years/#respond Wed, 24 Jan 2024 16:50:10 +0000 https://www.artificialintelligence-news.com/?p=14257 A report published by the UK’s National Cyber Security Centre (NCSC) warns that AI will substantially increase cyber threats over the next two years.  The centre warns of a surge in ransomware attacks in particular; involving hackers deploying malicious software to encrypt a victim’s files or entire system and demanding a ransom payment for the […]

The post NCSC: AI to significantly boost cyber threats over next two years appeared first on AI News.

]]>
A report published by the UK’s National Cyber Security Centre (NCSC) warns that AI will substantially increase cyber threats over the next two years. 

The centre warns of a surge in ransomware attacks in particular; involving hackers deploying malicious software to encrypt a victim’s files or entire system and demanding a ransom payment for the decryption key.

The NCSC assessment predicts AI will enhance threat actors’ capabilities mainly in carrying out more persuasive phishing attacks that trick individuals into providing sensitive information or clicking on malicious links.

“Generative AI can already create convincing interactions like documents that fool people, free of the translation and grammatical errors common in phishing emails,” the report states. 

The advent of generative AI, capable of creating convincing interactions and documents free of common phishing red flags, is identified as a key contributor to the rising threat landscape over the next two years.

The NCSC assessment identifies challenges in cyber resilience, citing the difficulty in verifying the legitimacy of emails and password reset requests due to generative AI and large language models. The shrinking time window between security updates and threat exploitation further complicates rapid vulnerability patching for network managers.

James Babbage, director general for threats at the National Crime Agency, commented: “AI services lower barriers to entry, increasing the number of cyber criminals, and will boost their capability by improving the scale, speed, and effectiveness of existing attack methods.”

However, the NCSC report also outlined how AI could bolster cybersecurity through improved attack detection and system design. It calls for further research on how developments in defensive AI solutions can mitigate evolving threats.

Access to quality data, skills, tools, and time makes advanced AI-powered cyber operations feasible mainly for highly capable state actors currently. But the NCSC warns these barriers to entry will progressively fall as capable groups monetise and sell AI-enabled hacking tools.

Extent of capability uplift by AI over next two years:

(Credit: NCSC)

Lindy Cameron, CEO of the NCSC, stated: “We must ensure that we both harness AI technology for its vast potential and manage its risks – including its implications on the cyber threat.”

The UK government has allocated £2.6 billion under its Cyber Security Strategy 2022 to strengthen the country’s resilience to emerging high-tech threats.

AI is positioned to substantially change the cyber risk landscape in the near future. Continuous investment in defensive capabilities and research will be vital to counteract its potential to empower attackers.

A full copy of the NCSC’s report can be found here.

(Photo by Muha Ajjan on Unsplash)

See also: AI-generated Biden robocall urges Democrats not to vote

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with Digital Transformation Week and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post NCSC: AI to significantly boost cyber threats over next two years appeared first on AI News.

]]>
https://www.artificialintelligence-news.com/news/ncsc-ai-significantly-boost-cyber-threats-next-two-years/feed/ 0
Global AI security guidelines endorsed by 18 countries https://www.artificialintelligence-news.com/news/global-ai-security-guidelines-endorsed-by-18-countries/ https://www.artificialintelligence-news.com/news/global-ai-security-guidelines-endorsed-by-18-countries/#respond Mon, 27 Nov 2023 10:28:13 +0000 https://www.artificialintelligence-news.com/?p=13954 The UK has published the world’s first global guidelines for securing AI systems against cyberattacks. The new guidelines aim to ensure AI technology is developed safely and securely. The guidelines were developed by the UK’s National Cyber Security Centre (NCSC) and the US’ Cybersecurity and Infrastructure Security Agency (CISA). They have already secured endorsements from […]

The post Global AI security guidelines endorsed by 18 countries appeared first on AI News.

]]>
The UK has published the world’s first global guidelines for securing AI systems against cyberattacks. The new guidelines aim to ensure AI technology is developed safely and securely.

The guidelines were developed by the UK’s National Cyber Security Centre (NCSC) and the US’ Cybersecurity and Infrastructure Security Agency (CISA). They have already secured endorsements from 17 other countries, including all G7 members.

The guidelines provide recommendations for developers and organisations using AI to incorporate cybersecurity at every stage. This “secure by design” approach advises baking in security from the initial design phase through development, deployment, and ongoing operations.  

Specific guidelines cover four key areas: secure design, secure development, secure deployment, and secure operation and maintenance. They suggest security behaviours and best practices for each phase.

The launch event in London convened over 100 industry, government, and international partners. Speakers included reps from Microsoft, the Alan Turing Institute, and cyber agencies from the US, Canada, Germany, and the UK.  

NCSC CEO Lindy Cameron stressed the need for proactive security amidst AI’s rapid pace of development. She said, “security is not a postscript to development but a core requirement throughout.”

The guidelines build on existing UK leadership in AI safety. Last month, the UK hosted the first international summit on AI safety at Bletchley Park.

US Secretary of Homeland Security Alejandro Mayorkas said: “We are at an inflection point in the development of artificial intelligence, which may well be the most consequential technology of our time. Cybersecurity is key to building AI systems that are safe, secure, and trustworthy.

“The guidelines jointly issued today by CISA, NCSC, and our other international partners, provide a common-sense path to designing, developing, deploying, and operating AI with cybersecurity at its core.”

The 18 endorsing countries span Europe, Asia-Pacific, Africa, and the Americas. Here is the full list of international signatories:

  • Australia – Australian Signals Directorate’s Australian Cyber Security Centre (ACSC)
  • Canada – Canadian Centre for Cyber Security (CCCS) 
  • Chile – Chile’s Government CSIRT
  • Czechia – Czechia’s National Cyber and Information Security Agency (NUKIB)
  • Estonia – Information System Authority of Estonia (RIA) and National Cyber Security Centre of Estonia (NCSC-EE)
  • France – French Cybersecurity Agency (ANSSI)
  • Germany – Germany’s Federal Office for Information Security (BSI)
  • Israel – Israeli National Cyber Directorate (INCD)
  • Italy – Italian National Cybersecurity Agency (ACN)
  • Japan – Japan’s National Center of Incident Readiness and Strategy for Cybersecurity (NISC; Japan’s Secretariat of Science, Technology and Innovation Policy, Cabinet Office
  • New Zealand – New Zealand National Cyber Security Centre
  • Nigeria – Nigeria’s National Information Technology Development Agency (NITDA)
  • Norway – Norwegian National Cyber Security Centre (NCSC-NO)
  • Poland – Poland’s NASK National Research Institute (NASK)
  • Republic of Korea – Republic of Korea National Intelligence Service (NIS)
  • Singapore – Cyber Security Agency of Singapore (CSA)
  • United Kingdom – National Cyber Security Centre (NCSC)
  • United States of America – Cybersecurity and Infrastructure Agency (CISA); National Security Agency (NSA; Federal Bureau of Investigations (FBI)

UK Science and Technology Secretary Michelle Donelan positioned the new guidelines as cementing the UK’s role as “an international standard bearer on the safe use of AI.”

“Just weeks after we brought world leaders together at Bletchley Park to reach the first international agreement on safe and responsible AI, we are once again uniting nations and companies in this truly global effort,” adds Donelan.

The guidelines are now published on the NCSC website alongside explanatory blogs. Developer uptake will be key to translating the secure by design vision into real-world improvements in AI security.

(Photo by Jan Antonin Kolar on Unsplash)

See also: Paul O’Sullivan, Salesforce: Transforming work in the GenAI era

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with Cyber Security & Cloud Expo and Digital Transformation Week.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post Global AI security guidelines endorsed by 18 countries appeared first on AI News.

]]>
https://www.artificialintelligence-news.com/news/global-ai-security-guidelines-endorsed-by-18-countries/feed/ 0
DHS AI roadmap prioritises cybersecurity and national safety https://www.artificialintelligence-news.com/news/dhs-ai-roadmap-prioritises-cybersecurity-national-safety/ https://www.artificialintelligence-news.com/news/dhs-ai-roadmap-prioritises-cybersecurity-national-safety/#respond Wed, 15 Nov 2023 10:10:47 +0000 https://www.artificialintelligence-news.com/?p=13893 The Department of Homeland Security’s (DHS) Cybersecurity and Infrastructure Security Agency (CISA) has launched its inaugural Roadmap for AI. Viewed as a crucial step in the broader governmental effort to ensure the secure development and implementation of AI capabilities, the move aligns with President Biden’s recent Executive Order. “DHS has a broad leadership role in […]

The post DHS AI roadmap prioritises cybersecurity and national safety appeared first on AI News.

]]>
The Department of Homeland Security’s (DHS) Cybersecurity and Infrastructure Security Agency (CISA) has launched its inaugural Roadmap for AI.

Viewed as a crucial step in the broader governmental effort to ensure the secure development and implementation of AI capabilities, the move aligns with President Biden’s recent Executive Order.

“DHS has a broad leadership role in advancing the responsible use of AI and this cybersecurity roadmap is one important element of our work,” said Secretary of Homeland Security Alejandro N. Mayorkas.

“The Biden-Harris Administration is committed to building a secure and resilient digital ecosystem that promotes innovation and technological progress.” 

Following the Executive Order, DHS is mandated to globally promote AI safety standards, safeguard US networks and critical infrastructure, and address risks associated with AI—including potential use “to create weapons of mass destruction”.

“In last month’s Executive Order, the President called on DHS to promote the adoption of AI safety standards globally and help ensure the safe, secure, and responsible use and development of AI,” added Mayorkas.

“CISA’s roadmap lays out the steps that the agency will take as part of our Department’s broader efforts to both leverage AI and mitigate its risks to our critical infrastructure and cyber defenses.”

CISA’s roadmap outlines five strategic lines of effort, providing a blueprint for concrete initiatives and a responsible approach to integrating AI into cybersecurity.

CISA Director Jen Easterly highlighted the dual nature of AI, acknowledging its promise in enhancing cybersecurity while acknowledging the immense risks it poses.

“Artificial Intelligence holds immense promise in enhancing our nation’s cybersecurity, but as the most powerful technology of our lifetimes, it also presents enormous risks,” commented Easterly.

“Our Roadmap for AI – focused at the nexus of AI, cyber defense, and critical infrastructure – sets forth an agency-wide plan to promote the beneficial uses of AI to enhance cybersecurity capabilities; ensure AI systems are protected from cyber-based threats; and deter the malicious use of AI capabilities to threaten the critical infrastructure Americans rely on every day.”

The outlined lines of effort are as follows:

  • Responsibly use AI to support our mission: CISA commits to using AI-enabled tools ethically and responsibly to strengthen cyber defense and support its critical infrastructure mission. The adoption of AI will align with constitutional principles and all relevant laws and policies.
  • Assess and Assure AI systems: CISA will assess and assist in secure AI-based software adoption across various stakeholders, establishing assurance through best practices and guidance for secure and resilient AI development.
  • Protect critical infrastructure from malicious use of AI: CISA will evaluate and recommend mitigation of AI threats to critical infrastructure, collaborating with government agencies and industry partners. The establishment of JCDC.AI aims to facilitate focused collaboration on AI-related threats.
  • Collaborate and communicate on key AI efforts: CISA commits to contributing to interagency efforts, supporting policy approaches for the US government’s national strategy on cybersecurity and AI, and coordinating with international partners to advance global AI security practices.
  • Expand AI expertise in our workforce: CISA will educate its workforce on AI systems and techniques, actively recruiting individuals with AI expertise and ensuring a comprehensive understanding of the legal, ethical, and policy aspects of AI-based software systems.

“This is a step in the right direction. It shows the government is taking the potential threats and benefits of AI seriously. The roadmap outlines a comprehensive strategy for leveraging AI to enhance cybersecurity, protect critical infrastructure, and foster collaboration. It also emphasises the importance of security in AI system design and development,” explains Joseph Thacker, AI and security researcher at AppOmni.

“The roadmap is pretty comprehensive. Nothing stands out as missing initially, although the devil is in the details when it comes to security, and even more so when it comes to a completely new technology. CISA’s ability to keep up may depend on their ability to get talent or train internal folks. Both of those are difficult to accomplish at scale.”

CISA invites stakeholders, partners, and the public to explore the Roadmap for Artificial Intelligence and gain insights into the strategic vision for AI technology and cybersecurity here.

See also: Google expands partnership with Anthropic to enhance AI safety

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with Cyber Security & Cloud Expo and Digital Transformation Week.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post DHS AI roadmap prioritises cybersecurity and national safety appeared first on AI News.

]]>
https://www.artificialintelligence-news.com/news/dhs-ai-roadmap-prioritises-cybersecurity-national-safety/feed/ 0
Enterprises struggle to address generative AI’s security implications https://www.artificialintelligence-news.com/news/enterprises-struggle-address-generative-ai-security-implications/ https://www.artificialintelligence-news.com/news/enterprises-struggle-address-generative-ai-security-implications/#respond Wed, 18 Oct 2023 15:54:37 +0000 https://www.artificialintelligence-news.com/?p=13766 In a recent study, cloud-native network detection and response firm ExtraHop unveiled a concerning trend: enterprises are struggling with the security implications of employee generative AI use. Their new research report, The Generative AI Tipping Point, sheds light on the challenges faced by organisations as generative AI technology becomes more prevalent in the workplace. The […]

The post Enterprises struggle to address generative AI’s security implications appeared first on AI News.

]]>
In a recent study, cloud-native network detection and response firm ExtraHop unveiled a concerning trend: enterprises are struggling with the security implications of employee generative AI use.

Their new research report, The Generative AI Tipping Point, sheds light on the challenges faced by organisations as generative AI technology becomes more prevalent in the workplace.

The report delves into how organisations are dealing with the use of generative AI tools, revealing a significant cognitive dissonance among IT and security leaders. Astonishingly, 73 percent of these leaders confessed that their employees frequently use generative AI tools or Large Language Models (LLM) at work. Despite this, a staggering majority admitted to being uncertain about how to effectively address the associated security risks.

When questioned about their concerns, IT and security leaders expressed more worry about the possibility of inaccurate or nonsensical responses (40%) than critical security issues such as exposure of customer and employee personal identifiable information (PII) (36%) or financial loss (25%).

Raja Mukerji, Co-Founder and Chief Scientist at ExtraHop, said: “By blending innovation with strong safeguards, generative AI will continue to be a force that will uplevel entire industries in the years to come.”

One of the startling revelations from the study was the ineffectiveness of generative AI bans. About 32 percent of respondents stated that their organisations had prohibited the use of these tools. However, only five percent reported that employees never used these tools—indicating that bans alone are not enough to curb their usage.

The study also highlighted a clear desire for guidance, particularly from government bodies. A significant 90 percent of respondents expressed the need for government involvement, with 60 percent advocating for mandatory regulations and 30 percent supporting government standards for businesses to adopt voluntarily.

Despite a sense of confidence in their current security infrastructure, the study revealed gaps in basic security practices.

While 82 percent felt confident in their security stack’s ability to protect against generative AI threats, less than half had invested in technology to monitor generative AI use. Alarmingly, only 46 percent had established policies governing acceptable use and merely 42 percent provided training to users on the safe use of these tools.

The findings come in the wake of the rapid adoption of technologies like ChatGPT, which have become an integral part of modern businesses. Business leaders are urged to understand their employees’ generative AI usage to identify potential security vulnerabilities.

You can find a full copy of the report here.

(Photo by Hennie Stander on Unsplash)

See also: BSI: Closing ‘AI confidence gap’ key to unlocking benefits

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with Digital Transformation Week.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post Enterprises struggle to address generative AI’s security implications appeared first on AI News.

]]>
https://www.artificialintelligence-news.com/news/enterprises-struggle-address-generative-ai-security-implications/feed/ 0
Dave Barnett, Cloudflare: Delivering speed and security in the AI era https://www.artificialintelligence-news.com/news/dave-barnett-cloudflare-delivering-speed-and-security-in-ai-era/ https://www.artificialintelligence-news.com/news/dave-barnett-cloudflare-delivering-speed-and-security-in-ai-era/#respond Fri, 13 Oct 2023 15:39:34 +0000 https://www.artificialintelligence-news.com/?p=13742 AI News sat down with Dave Barnett, Head of SASE at Cloudflare, during Cyber Security & Cloud Expo Europe to delve into how the firm uses its cloud-native architecture to deliver speed and security in the AI era. According to Barnett, Cloudflare’s cloud-native approach allows the company to continually innovate in the digital space. Notably, […]

The post Dave Barnett, Cloudflare: Delivering speed and security in the AI era appeared first on AI News.

]]>
AI News sat down with Dave Barnett, Head of SASE at Cloudflare, during Cyber Security & Cloud Expo Europe to delve into how the firm uses its cloud-native architecture to deliver speed and security in the AI era.

According to Barnett, Cloudflare’s cloud-native approach allows the company to continually innovate in the digital space. Notably, a significant portion of their services are offered to consumers for free.

“We continuously reinvent, we’re very comfortable in the digital space. We’re very proud that the vast majority of our customers actually consume our services for free because it’s our way of giving back to society,” said Barnett.

Barnett also revealed Cloudflare’s focus on AI during their anniversary week. The company aims to enable organisations to consume AI securely and make it accessible to everyone. Barnett says that Cloudflare achieves those goals in three key ways.

“One, as I mentioned, is operating AI inference engines within Cloudflare close to consumers’ eyeballs. The second area is securing the use of AI within the workplace, because, you know, AI has some incredibly positive impacts on people … but the problem is there are some data protection requirements around that,” explains Barnett.

“Finally, is the question of, ‘Could AI be used by the bad guys against the good guys?’ and that’s an area that we’re continuing to explore.”

Just a day earlier, AI News heard from Raviv Raz, Cloud Security Manager at ING, during a session at the expo that focused on the alarming potential of AI-powered cybercrime.

Regarding security models, Barnett discussed the evolution of the zero-trust concept, emphasising its practical applications in enhancing both usability and security. Cloudflare’s own journey with zero-trust began with a focus on usability, leading to the development of its own zero-trust network access products.

“We have servers everywhere and engineers everywhere that need to reboot those servers. In 2015, that involved VPNs and two-factor authentication… so we built our own zero-trust network access product for our own use that meant the user experiences for engineers rebooting servers in far-flung places was a lot better,” says Barnett.

“After 2015, the world started to realise that this approach had great security benefits so we developed that product and launched it in 2018 as Cloudflare Access.”

Cloudflare’s innovative strides also include leveraging NVIDIA GPUs to accelerate machine learning AI tasks on an edge network. This technology enables organisations to run inference tasks – such as image recognition – close to end-users, ensuring low latency and optimal performance.

“We launched Workers AI, which means that organisations around the world – in fact, individuals as well – can run their inference tasks at a very close place to where the consumers of that inference are,” explains Barnett.

“You could ask a question, ‘Cat or not cat?’, to a trained cat detection engine very close to the people that need it. We’re doing that in a way that makes it easily accessible to organisations looking to use AI to benefit their business.”

For developers interested in AI, Barnett outlined Cloudflare’s role in supporting the deployment of machine learning models. While machine learning training is typically conducted outside Cloudflare, the company excels in providing low-latency inference engines that are essential for real-time applications like image recognition.

Our conversation with Barnett shed light on Cloudflare’s commitment to cloud-native architecture, AI accessibility, and cybersecurity. As the industry continues to advance, Cloudflare remains at the forefront of delivering speed and security in the AI era.

You can watch our full interview with Dave Barnett below:

(Photo by ryan baker on Unsplash)

See also: JPMorgan CEO: AI will be used for ‘every single process’

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with Cyber Security & Cloud Expo, Edge Computing Expo, and Digital Transformation Week.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post Dave Barnett, Cloudflare: Delivering speed and security in the AI era appeared first on AI News.

]]>
https://www.artificialintelligence-news.com/news/dave-barnett-cloudflare-delivering-speed-and-security-in-ai-era/feed/ 0