network Archives - AI News
https://www.artificialintelligence-news.com/news/tag/network/

Kunal Anand, F5: Shaping AI-optimised networks and enhancing security
https://www.artificialintelligence-news.com/news/f5-shaping-ai-optimised-networks-and-enhancing-security/
Tue, 24 Sep 2024

As AI applications evolve, they place greater demands on network infrastructure, particularly in terms of latency and connectivity.

Supporting large-scale AI deployments introduces new issues, and analysts predict that AI-related traffic will soon account for a major portion of total network traffic. The industry must be prepared to handle this surge effectively. F5 is adapting its solutions to manage the complexity of AI workloads, and its technology now includes real-time processing of multimodal data.

Kunal Anand, Chief Technology and AI Officer at F5 (Source – F5)

AI presents both opportunities and risks in security, as it has the capability to enhance protection while also enabling AI-driven cyber threats. Collaboration among hyperscalers, telcos, and technology companies is critical for establishing AI-optimised networks. Collaboration and innovation continue to change the AI networking landscape, and F5 is dedicated to driving progress in this area.

Ahead of AI & Big Data Expo Europe, Kunal Anand, Chief Technology and AI Officer at F5, discusses the company’s role and initiatives to stay at the forefront of AI-enabled networking solutions.

AI News: As AI applications evolve, the demands on network infrastructure are becoming more complex. What key challenges does the industry face regarding latency and connectivity in supporting large-scale AI deployments?

Anand: F5 discovered that AI has drastically transformed application architectures. Some companies are investing billions of dollars in AI factories – massive GPU clusters – while others prefer cloud-based solutions or small language models (SLMs) as less expensive alternatives.

Network architectures are evolving to address these challenges. AI factories operate on distinct networking stacks, such as InfiniBand, paired with specific GPUs like NVIDIA’s H100s or its upcoming Blackwell series. At the same time, cloud-based technologies and GPU clouds are advancing.

A major trend is data gravity, where organisations’ data is locked in specific environments. This has driven the evolution of multi-cloud architectures, allowing workloads to link with data across environments for retrieval-augmented generation (RAG).

As RAG demands rise, organisations face higher latency because of limited resources, whether from heavily used data stores or limited sets of GPU servers.
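The retrieval step behind RAG can be sketched in a few lines. The snippet below is a hypothetical, minimal stand-in for a RAG retriever: real deployments use vector embeddings and an approximate-nearest-neighbour index rather than the plain term-overlap scoring used here, and every document and query string is invented for illustration.

```python
def score(query: str, doc: str) -> float:
    """Fraction of query terms present in the document."""
    q = set(query.lower().split())
    return len(q & set(doc.lower().split())) / len(q) if q else 0.0

def retrieve(query: str, corpus, k: int = 2):
    """Return the k best-scoring documents to hand to the model as context."""
    return sorted(corpus, key=lambda d: score(query, d), reverse=True)[:k]

corpus = [
    "GPU clusters train large language models",
    "InfiniBand links GPUs inside an AI factory",
    "Recipes for sourdough bread",
]
context = retrieve("GPU networking inside an AI factory", corpus)
print(context[0])  # → InfiniBand links GPUs inside an AI factory
```

Each retrieval hits the data store and, downstream, a GPU server for generation, which is why heavily used stores and limited GPU pools translate directly into latency.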

AI News: Analysts predict AI-related traffic will soon make up a significant portion of network traffic. What unique challenges does this influx of AI-generated traffic pose for existing network infrastructure, and how do you see the industry preparing for it?

Anand: F5 believes that by the end of the decade, most applications will be AI-powered or AI-driven, necessitating augmentation across the network services chain. These applications will use APIs to communicate with AI factories and third-party services, access data for RAG, and potentially expose their own APIs. Essentially, APIs will be the glue holding this ecosystem together, as analysts have suggested.

Looking ahead, AI-related traffic is expected to dominate network traffic as AI becomes central to practically all applications and APIs.

AI News: With AI applications becoming more complex and processing multimodal data in real time, how is F5 adapting its solutions to ensure networks can efficiently manage these dynamic workloads?

Anand: F5 looks at this from many angles. In the case of RAG, when data – whether images, binary streams, or text – must be retrieved from a data store, the method is the same regardless of format. Customers often want fast Layer 4 load balancing, traffic management, and steering capabilities, all of which F5 excels at. The company provides organisations with load balancing, traffic management, and security services, ensuring RAG has efficient data access. F5 has also enabled load balancing among AI factories.

In some cases, large organisations manage massive GPU clusters with tens of thousands of GPUs. Since AI workloads are unpredictable, these GPUs may be available or unavailable depending on the workload. F5 ensures efficient traffic routing, mitigating the unpredictability of AI workloads.
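The kind of traffic steering described – skipping unavailable backends and sending requests to the least-loaded healthy GPU server – can be illustrated with a toy least-connections router. This is a sketch of the general technique, not F5’s implementation; all names are hypothetical.

```python
class Backend:
    def __init__(self, name: str):
        self.name = name
        self.healthy = True   # flipped by health checks in a real system
        self.active = 0       # in-flight requests

def route(backends):
    """Least-connections: pick the healthy backend with fewest in-flight requests."""
    healthy = [b for b in backends if b.healthy]
    if not healthy:
        raise RuntimeError("no healthy GPU backends")
    chosen = min(healthy, key=lambda b: b.active)
    chosen.active += 1
    return chosen

pool = [Backend("gpu-a"), Backend("gpu-b"), Backend("gpu-c")]
pool[1].healthy = False               # gpu-b drops out mid-workload
print(route(pool).name)  # → gpu-a (both idle; min keeps the first)
print(route(pool).name)  # → gpu-c (gpu-a now has one request in flight)
```

The health flag stands in for the unpredictability of AI workloads: when a GPU node becomes busy or unavailable, traffic simply flows to the remaining healthy nodes.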

F5 improves performance, increases throughput, and adds security capabilities for organisations building AI factories and clusters.

AI News: As AI enhances security while also posing AI-driven cyber threats, what approaches is F5 taking to strengthen network security and resilience against these evolving challenges?

Anand: There are many different AI-related challenges on the way. Attackers are already employing AI to generate new payloads, find loopholes, and launch novel attacks. For example, ChatGPT and vision transformers can break CAPTCHAs, especially interactive ones. Recent demonstrations have shown the sophistication of these attacks.

As seen in past security patterns, every time attackers gain an advantage with new technology, defenders must rise to the challenge. This often necessitates reconsidering security models, like shifting from “allow everything, deny some” to “allow some, deny everything.” Many organisations are exploring solutions to combat AI-driven threats.
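The “allow some, deny everything” model can be made concrete with a default-deny allow-list: instead of blocking known-bad requests, only explicitly registered method/route pairs pass. A minimal sketch, with hypothetical routes:

```python
# Only explicitly registered (method, route) pairs pass; everything else is
# denied by default. Routes are hypothetical.
ALLOWED = {
    ("GET", "/api/v1/models"),
    ("POST", "/api/v1/inference"),
}

def permitted(method: str, path: str) -> bool:
    """Default-deny: a request passes only if it is on the allow-list."""
    return (method.upper(), path) in ALLOWED

print(permitted("POST", "/api/v1/inference"))  # → True
print(permitted("POST", "/admin/exec"))        # → False: unknown route, denied
```

The inversion matters against AI-generated attacks: a deny-list must anticipate every novel payload, while an allow-list rejects anything it has never seen.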

F5 is making big investments to keep ahead of AI-driven threats. As part of its F5 intelligence programme, the company is developing, training, and deploying models, which are supported by its AI Center of Excellence.

Earlier this year, F5 launched an AI data fabric, with a team dedicated to developing models that serve the entire business, from policy creation to insight delivery. F5 feels it is well placed to face these rising issues.

AI News: What role do partnerships play in developing the next generation of AI-optimised networks, especially between hyperscalers, telcos, and tech companies?

Anand: Partnerships are important for AI development. The AI stack is complex and involves several components, including electricity, data centres, hardware, servers, GPUs, memory, computational power, and a networking stack, all of which must function together. It is unusual for a single organisation to oversee everything from start to finish.

F5 focuses on establishing and maintaining the necessary partnerships in computation, networking, and storage to support AI.

AI News: How does F5 view its role in advancing AI networking, and what initiatives are you focusing on to stay at the forefront of AI-enabled networking solutions?

Anand: F5 is committed to developing its technology platform. The AI Data Fabric, launched earlier this year, will work with the AI Center of Excellence to prepare the organisation for the future.

F5 is also forming strong partnerships, with announcements to come. The company is excited about its work and the rapid pace of global change. F5’s unique vantage point – processing worldwide traffic – enables it to correlate data insights with industry trends. F5 also intends to be more forthcoming about its research and models, with some open-source contributions coming soon.

Overall, F5 is incredibly optimistic about the future. The transformative impact of AI is remarkable, and it is an exciting time to be part of this shift.

(Image by Lucent_Designs_dinoson20)

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post Kunal Anand, F5: Shaping AI-optimised networks and enhancing security appeared first on AI News.

SingularityNET bets on supercomputer network to deliver AGI
https://www.artificialintelligence-news.com/news/singularitynet-bets-supercomputer-network-deliver-agi/
Tue, 13 Aug 2024

SingularityNET is betting on a network of powerful supercomputers to get us to Artificial General Intelligence (AGI), with the first one set to whir into action this September.

While today’s AI excels in specific areas – think GPT-4 composing poetry or DeepMind’s AlphaFold predicting protein structures – it’s still miles away from genuine human-like intelligence. 

“While the novel neural-symbolic AI approaches developed by the SingularityNET AI team decrease the need for data, processing and energy somewhat relative to standard deep neural nets, we still need significant supercomputing facilities,” SingularityNET CEO Ben Goertzel explained to LiveScience in a recent written statement.

Enter SingularityNET’s ambitious plan: a “multi-level cognitive computing network” designed to host and train the incredibly complex AI architectures required for AGI. Imagine deep neural networks that mimic the human brain, large language models (LLMs) trained on colossal datasets, and systems that seamlessly weave together human behaviours like speech and movement with multimedia outputs.

But this level of sophistication doesn’t come cheap. The first supercomputer, slated for completion by early 2025, will be a Frankensteinian beast of cutting-edge hardware: Nvidia GPUs, AMD processors, Tenstorrent server racks – you name it, it’s in there.

This, Goertzel believes, is more than just a technological leap, it’s a philosophical one: “Before our eyes, a paradigmatic shift is taking place towards continuous learning, seamless generalisation, and reflexive AI self-modification.”

To manage this distributed network and its precious data, SingularityNET has developed OpenCog Hyperon, an open-source software framework specifically designed for AI systems. Think of it as the conductor trying to make sense of a symphony played across multiple concert halls. 

But SingularityNET isn’t keeping all this brainpower to itself. Reminiscent of arcade tokens, users will purchase access to the supercomputer network with the AGIX token on blockchains like Ethereum and Cardano and contribute data to the collective pool—fuelling further AGI development.  

With experts like DeepMind’s Shane Legg predicting human-level AI by 2028, the race is on. Only time will tell if this global network of silicon brains will birth the next great leap in artificial intelligence.

(Photo by Anshita Nair)

See also: The merging of AI and blockchain was inevitable – but what will it mean?

The post SingularityNET bets on supercomputer network to deliver AGI appeared first on AI News.

Bosch partners with Fetch.ai to ‘transform’ digital ecosystems using DLTs
https://www.artificialintelligence-news.com/news/bosch-partners-fetch-ai-transform-digital-ecosystems-dlts/
Thu, 18 Feb 2021

Bosch has partnered with Cambridge-based AI blockchain startup Fetch.ai with the aim of transforming existing digital ecosystems using distributed ledger technologies (DLTs).

The global engineering giant will test key features of Fetch.ai’s testnet until the end of this month and will deploy a node on the network. The strategic engineering project between Fetch.ai and Bosch is called the Economy of Things (EoT).

Dr Alexander Poddey, the leading researcher for digital socio-economy, cryptology, and artificial intelligence in the EoT project, said:

“Our collaboration with Fetch.ai spans from the aspects of governance and orchestration of DLT-based ecosystems, multi-agent technologies to collective learning.

They share our belief that these elements are crucial to realising the economic, social, and environmental benefits of IoT technologies.”

Fetch.ai’s testnet launched in October 2020 and the firm is now gearing up for its mainnet launch in March. The company has been ramping up announcements in advance of the mainnet launch and just last week announced a partnership with FESTO to launch a decentralised marketplace for manufacturing.

After the mainnet launch, Bosch intends to run nodes and applications on Fetch.ai’s blockchain network.

Jonathan Ward, CTO of Fetch.ai, commented:

“We have been working with Bosch for some time towards our shared vision of building open, fair, and transparent digital ecosystems. I’m delighted to be able to announce the first public step in bringing these technologies into the real world.

We’re looking forward to working further with Bosch to bring about the wide adoption of these ground-breaking innovations, which will hugely benefit consumers and businesses in many industries including automotive, manufacturing, and healthcare.” 

Fetch.ai is working on decentralised autonomous “agents” which perform real-world tasks. 

Bosch is attracted to Fetch.ai’s vision of collective learning technologies and believes it can be a key enabler in its plans for AI-enabled devices—allowing AI agents to be trained within smart devices while preserving users’ privacy and control of their data.
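Collective learning of this sort – training models on-device while raw data stays local – is commonly realised with federated averaging, where devices share only parameters and a coordinator averages them. The sketch below illustrates that general idea on a toy one-parameter model; it is not Fetch.ai’s actual protocol.

```python
def local_update(w, data, lr=0.1):
    """One pass of gradient descent on y = w*x; the data never leaves the device."""
    for x, y in data:
        w -= lr * 2 * (w * x - y) * x
    return w

def federated_average(device_weights):
    """The coordinator only ever sees parameters, not raw data."""
    return sum(device_weights) / len(device_weights)

global_w = 0.0
device_data = [[(1.0, 2.0)], [(2.0, 4.0)]]   # both devices' data fit y = 2x
for _ in range(50):
    updates = [local_update(global_w, d) for d in device_data]
    global_w = federated_average(updates)
print(round(global_w, 2))  # → 2.0
```

The shared model converges to the weight both devices agree on, even though neither device’s samples were ever pooled centrally.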

Fetch.ai’s vision is bold but it has the team and partnerships to pull it off. The company’s roster features talent with experience from DeepMind, Siemens, Sony, and a number of esteemed academic institutions.

Bosch has long expressed a keen interest in distributed ledger technologies and has established multiple industry partnerships.

The venture capital arm of Bosch, Robert Bosch Venture Capital, invested in the IOTA Foundation. Bosch later patented an IOTA-based digital payments system and recently financially supported a hackathon for the DLT platform, which uses a scalable DAG (Directed Acyclic Graph) data structure called the ‘Tangle’ in a bid to overcome some of the historic problems with early blockchains.

Fetch.ai and IOTA are in the same space but have different goals; it’s not a choice of one or the other. Companies like Bosch can take advantage of the exciting potential offered by both DLTs to gain a competitive edge.

(Photo by Adi Goldstein on Unsplash)

Interested in hearing industry leaders discuss subjects like this? Attend the co-located 5G Expo, IoT Tech Expo, Blockchain Expo, AI & Big Data Expo, and Cyber Security & Cloud Expo World Series with upcoming events in Silicon Valley, London, and Amsterdam.

The post Bosch partners with Fetch.ai to ‘transform’ digital ecosystems using DLTs appeared first on AI News.

Researchers achieve 94% power reduction for on-device AI tasks
https://www.artificialintelligence-news.com/news/researchers-achieve-power-reduction-on-device-ai-tasks/
Thu, 17 Sep 2020

Researchers from Applied Brain Research (ABR) have achieved significantly reduced power consumption for a range of AI-powered devices.

ABR designed a new neural network called the Legendre Memory Unit (LMU). With LMU, on-device AI tasks – such as those on speech-enabled devices like wearables, smartphones, and smart speakers – can take up to 94 percent less power.

The reduction in power consumption achieved through the LMU will be particularly beneficial to smaller form-factor devices such as smartwatches, which struggle with small batteries. IoT devices that carry out AI tasks – but may have to last months, if not years, before they’re replaced – should also benefit.

LMU is described as a Recurrent Neural Network (RNN) which enables lower power and more accurate processing of time-varying signals.

ABR says the LMU can be used to build AI networks for all time-varying tasks—such as speech processing, video analysis, sensor monitoring, and control systems.
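The LMU’s core is a fixed linear memory: a state vector of d Legendre coefficients that compresses a sliding window of length θ of the input signal, updated by matrices (A, B) derived from Legendre polynomials (as in Voelker et al.’s LMU paper). A minimal sketch, using a simple Euler discretisation for clarity rather than the more careful discretisation a production cell would use:

```python
def lmu_matrices(d: int, theta: float):
    """Continuous-time (A, B) of the LMU's linear memory."""
    A = [[(2 * i + 1) * (-1.0 if i < j else (-1.0) ** (i - j + 1)) / theta
          for j in range(d)] for i in range(d)]
    B = [(2 * i + 1) * (-1.0) ** i / theta for i in range(d)]
    return A, B

def lmu_step(m, u, A, B, dt):
    """One Euler step of m' = A m + B u."""
    d = len(m)
    return [m[i] + dt * (sum(A[i][j] * m[j] for j in range(d)) + B[i] * u)
            for i in range(d)]

A, B = lmu_matrices(d=4, theta=1.0)
m = [0.0] * 4
for _ in range(1000):            # t = 10, i.e. ten window lengths
    m = lmu_step(m, 1.0, A, B, dt=0.01)
print([round(x, 2) for x in m])  # → [1.0, 0.0, 0.0, 0.0]
```

Feeding a constant signal drives the state to its fixed point: the leading coefficient tracks the window’s mean, while the higher-order coefficients capture variation, of which a constant signal has none. Because (A, B) are fixed rather than learned, the memory itself costs no trainable parameters, which is part of why the LMU can be so much smaller than an LSTM.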

The AI industry’s current go-to model is the Long Short-Term Memory (LSTM) network. The LSTM was first proposed back in 1995 and underpins most popular speech recognition and translation services today, including those from Google, Amazon, Facebook, and Microsoft.

Last year, researchers from the University of Waterloo debuted LMU as an alternative RNN to LSTM. Those researchers went on to form ABR, which now consists of 20 employees.

Peter Suma, co-CEO of Applied Brain Research, said in an email:

“We are a University of Waterloo spinout from the Theoretical Neuroscience Lab at UW. We looked at how the brain processes signals in time and created an algorithm based on how “time-cells” in your brain work.

We called the new AI, a Legendre-Memory-Unit (LMU) after a mathematical tool we used to model the time cells. The LMU is mathematically proven to be optimal at processing signals. You cannot do any better. Over the coming years, this will make all forms of temporal AI better.”

ABR presented a paper in late 2019 at the NeurIPS conference which demonstrated that the LMU is 1,000,000x more accurate than the LSTM while encoding 100x more time-steps.

In terms of size, the LMU model is also smaller. The LMU uses 500 parameters versus the LSTM’s 41,000 (a 98 percent reduction in network size).

“We implemented our speech recognition with the LMU and it lowered the power used for command word processing to ~8 millionths of a watt, which is 94 percent less power than the best on the market today,” says Suma. “For full speech, we got the power down to 4 milli-watts, which is about 70 percent smaller than the best out there.”

Suma says the next step for ABR is to work on video, sensor and drone control AI processing—to also make them smaller and better.

A full whitepaper detailing LMU and its benefits can be found on preprint repository arXiv here.

The post Researchers achieve 94% power reduction for on-device AI tasks appeared first on AI News.
