cloud Archives - AI News
https://www.artificialintelligence-news.com/news/tag/cloud/
Tue, 29 Apr 2025 16:42:00 +0000

OpenAI’s latest LLM opens doors for China’s AI startups
https://www.artificialintelligence-news.com/news/openai-latest-llm-opens-doors-for-china-ai-startups/
Tue, 29 Apr 2025 16:41:59 +0000

At the Apsara Conference in Hangzhou, hosted by Alibaba Cloud, China’s AI startups emphasised their efforts to develop large language models.

The companies’ efforts follow the announcement of OpenAI’s latest LLMs, including the o1 generative pre-trained transformer model backed by Microsoft. The model is intended to tackle difficult tasks, paving the way for advances in science, coding, and mathematics.

During the conference, Yang Zhilin, founder of Moonshot AI, underlined the importance of the o1 model, adding that it has the potential to reshape various industries and create new opportunities for AI startups.

Zhilin stated that reinforcement learning and scalability might be pivotal for AI development. He spoke of the scaling law, which states that larger models with more training data perform better.

“This approach pushes the ceiling of AI capabilities,” Zhilin said, adding that OpenAI o1 has the potential to disrupt sectors and generate new opportunities for startups.
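The scaling law Zhilin refers to is often written in a parametric form popularised by the Chinchilla paper, where predicted loss falls as a power law in both parameter count and training tokens. The sketch below is purely illustrative; the constants loosely follow published fits and say nothing about o1 or any other specific model.

```python
# Illustrative Chinchilla-style parametric scaling law:
#   loss(N, D) = E + A / N**alpha + B / D**beta
# where N = parameter count and D = training tokens. The default constants
# loosely follow the fits reported by Hoffmann et al. (2022); they are used
# here purely for illustration.
def predicted_loss(n_params: float, n_tokens: float,
                   e: float = 1.69, a: float = 406.4, b: float = 410.7,
                   alpha: float = 0.34, beta: float = 0.28) -> float:
    return e + a / n_params**alpha + b / n_tokens**beta

small = predicted_loss(1e9, 2e10)     # ~1B params trained on 20B tokens
large = predicted_loss(7e10, 1.4e12)  # ~70B params trained on 1.4T tokens
assert large < small  # bigger model plus more data -> lower predicted loss
```

The key property is the monotonic decline: within this family of curves, adding parameters or data always lowers predicted loss, which is what "pushing the ceiling" alludes to, albeit with diminishing returns.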

OpenAI has also stressed the model’s ability to solve complex problems, which it says the model does in a manner similar to human thinking. By refining its strategies and learning from its mistakes, the model improves its problem-solving capabilities.

Zhilin said companies with enough computing power will be able to innovate not only in algorithms, but also in foundational AI models. He sees this as pivotal, as AI engineers rely increasingly on reinforcement learning to generate new data after exhausting available organic data sources.

StepFun CEO Jiang Daxin concurred with Zhilin but stated that computational power remains a big challenge for many start-ups, particularly due to US trade restrictions that hinder Chinese enterprises’ access to advanced semiconductors.

“The computational requirements are still substantial,” Jiang said.

An insider at Baichuan AI has said that only a small group of Chinese AI start-ups — including Moonshot AI, Baichuan AI, Zhipu AI, and MiniMax — are in a position to make large-scale investments in reinforcement learning. These companies — collectively referred to as the “AI tigers” — are involved heavily in LLM development, pushing the next generation of AI.

More from the Apsara Conference

Also at the conference, Alibaba Cloud made several announcements, including the release of its Qwen 2.5 model family, which features advances in coding and mathematics. The models range from 0.5 billion to 72 billion parameters and support approximately 29 languages, including Chinese, English, French, and Spanish.

Specialised models such as Qwen2.5-Coder and Qwen2.5-Math have already gained traction, with over 40 million downloads across the Hugging Face and ModelScope platforms.

Alibaba Cloud also added to its product portfolio with a text-to-video model in its Tongyi Wanxiang image-generation family. The model can create videos in realistic and animated styles, with possible uses in advertising and filmmaking.

Alibaba Cloud also unveiled Qwen2-VL, the latest version of its vision-language model. It handles videos longer than 20 minutes, supports video-based question answering, and is optimised for mobile devices and robotics.

(Photo by: @Guy_AI_Wise via X)

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

How LetzAI empowered creativity with scalable, high-performance AI infrastructure
https://www.artificialintelligence-news.com/news/how-letzai-empowered-creativity-with-scalable-high-performance-ai-infrastructure/
Tue, 25 Mar 2025 06:53:00 +0000

LetzAI is quickly becoming a go-to platform for high-quality AI-generated images. With a mission to democratise and personalise AI-powered image generation, it has emerged as one of the most popular and high-quality options on the market.

The problem:

In 2023, Neon Internet CEO and co-founder Misch Strotz was struck by a clever idea: give Luxembourg residents the power to easily generate local images using AI. Within a month, Luxembourg-focused LetzAI V1 went live.

Encouraged by strong local demand, Strotz and his team began working on a global version of the platform. The vision? An opt-in AI platform empowering brands, creators, artists, and individuals to unlock endless creative possibilities by adding their own images, art styles, and products. “Other AI platforms scrape the internet, incorporating people and their content without permission. We wanted to put the choice and power in each person’s hands,” Strotz explains.

Before long, the team began working on V2. In addition to generating higher quality and more personalised AI-generated images, V2 would drive consistency across objects, characters, and styles. After uploading their own photos and creating their own models, users can blend them with other models created by the community to create an endless number of unique images.

However, LetzAI faced a significant hurdle in training and launching V2 – a global GPU shortage. With limited resources to train its models, LetzAI needed a reliable partner to help evolve its AI-driven platform and keep it operating smoothly.

The solution:

In the search for a fitting partner, Strotz spoke to major vendors including hyperscalers and various Europe-based providers. Meeting Gcore’s product leadership team made the decision clear. “It was amazing to meet executives who were so knowledgeable about technology and took us seriously,” recalls Strotz.

Gcore’s approach to data security and sovereignty further solidified the decision. “We needed a trusted partner who shared our commitment to European data protection principles, which we incorporated into the development of our platform,” he continues.

The result:

LetzAI opted for Gcore’s state-of-the-art NVIDIA H100 GPUs in Luxembourg. “This was the perfect option, allowing us to keep our model training and development local. With Gcore, we can rent GPUs rather than entire servers, making it a far more cost-effective solution by avoiding unnecessary costs like excess storage and idle server capacity,” Strotz explains. This approach provided flexibility, efficiency, and high performance, tailored specifically for AI workloads.

LetzAI was able to adapt its app to run in containers, configure model training tasks to run on GPU Cloud, and use Everywhere Inference for image generation and upscaling. “Everywhere Inference reduces the latency of our output and enhances the performance of AI-enabled apps, allowing us to optimise our workflows for more accurate, real-time results,” Strotz says.

LetzAI V2 launched to serve users around the world in just two months, and Strotz and his team were already developing its successor.

Empowering creativity with scalable, high-performance AI infrastructure

With Gcore’s continued support, LetzAI quickly deployed V3. “The Gcore team was incredibly responsive to our needs, guiding us to the best solution for our evolving requirements. This has given us a powerful and efficient infrastructure that can flex according to demand,” says Strotz.

Running V3 on Gcore means LetzAI users experience fast, reliable performance. Artists, individuals, and brands are already putting V3 to use in interesting ways. For example, in one of what LetzAI calls its ‘AI Challenges’, a Luxembourg restaurant chain invited residents to create thousands of images using its pizza model.

In another example, LetzAI teamed with digital agency LOOP to dress PUMA’s virtual influencer and avatar, Laila, in a Moroccan soccer jersey. According to Strotz, “PUMA had struggled to make clothing look realistic on Laila. When they saw our images, they said the result was 1,000 times better than anything they had tried.”

That wasn’t the only brand intrigued by V3’s possibilities. After LetzAI posted V3-generated images of models wearing Sloggi underwear, Sloggi’s creative agency STAN Studios asked LetzAI to generate more images for market testing.

Always looking for new ways to support creators, LetzAI also launched its Image Upscaler feature, which enhances images and doubles their resolution. “Our creators can now resolve common AI image issues around quality and resolution. Everywhere Inference is pivotal in delivering the power and speed needed for these dynamic image enhancements,” noted Strotz.
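LetzAI’s upscaler is a learned model whose internals are not public, but the “doubles their resolution” part can be illustrated with the simplest possible baseline, a nearest-neighbour 2x upscale in plain Python. This is a conceptual sketch only, with no relation to LetzAI’s actual implementation:

```python
# Minimal nearest-neighbour 2x upscale: each pixel becomes a 2x2 block,
# doubling the resolution in both dimensions. Real AI upscalers use learned
# models to add detail; this only demonstrates the resolution doubling.
def upscale_2x(image: list[list[int]]) -> list[list[int]]:
    out = []
    for row in image:
        doubled = [px for px in row for _ in range(2)]  # duplicate each column
        out.append(doubled)
        out.append(list(doubled))                       # duplicate each row
    return out

tiny = [[0, 255],
        [255, 0]]
big = upscale_2x(tiny)  # 4x4 grid: each original pixel now covers a 2x2 block
```

A learned upscaler replaces the copy step with a model that synthesises plausible high-frequency detail, which is why it can also fix quality issues rather than just enlarge the grid.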

Platform evolution and AI innovation without limits

As its models exceed user expectations worldwide, LetzAI can rely on Gcore to handle a high volume of requests. Confident about generating a limitless number of high-quality images on the fly, LetzAI can continue to scale rapidly to become a sustainable, innovation-driven business.

“As we further evolve, such as by adding video features to our platform, our partnership with Gcore will be central to LetzAI’s continued success,” Strotz concluded.

Photo by Tim Arterbury on Unsplash

Han Heloir, MongoDB: The role of scalable databases in AI-powered apps
https://www.artificialintelligence-news.com/news/han-heloir-mongodb-the-future-of-ai-powered-applications-and-scalable-databases/
Mon, 30 Sep 2024 00:22:58 +0000

As data management grows more complex and modern applications extend the capabilities of traditional approaches, AI is revolutionising application scaling.

Han Heloir, EMEA gen AI senior solutions architect, MongoDB.

In addition to freeing operators from outdated, inefficient methods that require careful supervision and extra resources, AI enables real-time, adaptive optimisation of application scaling. Ultimately, these benefits combine to enhance efficiency and reduce costs for targeted applications.

With its predictive capabilities, AI ensures that applications scale efficiently, improving performance and resource allocation—marking a major advance over conventional methods.

Ahead of AI & Big Data Expo Europe, Han Heloir, EMEA gen AI senior solutions architect at MongoDB, discusses the future of AI-powered applications and the role of scalable databases in supporting generative AI and enhancing business processes.

AI News: As AI-powered applications continue to grow in complexity and scale, what do you see as the most significant trends shaping the future of database technology?

Heloir: While enterprises are keen to leverage the transformational power of generative AI technologies, the reality is that building a robust, scalable technology foundation involves more than just choosing the right technologies. It’s about creating systems that can grow and adapt to the evolving demands of generative AI, demands that are changing quickly, some of which traditional IT infrastructure may not be able to support. That is the uncomfortable truth about the current situation.

Today’s IT architectures are being overwhelmed by unprecedented data volumes generated from increasingly interconnected data sets. Traditional systems, designed for less intensive data exchanges, are currently unable to handle the massive, continuous data streams required for real-time AI responsiveness. They are also unprepared to manage the variety of data being generated.

The generative AI ecosystem often comprises a complex set of technologies. Each layer of technology—from data sourcing to model deployment—increases functional depth and operational costs. Simplifying these technology stacks isn’t just about improving operational efficiency; it’s also a financial necessity.

AI News: What are some key considerations for businesses when selecting a scalable database for AI-powered applications, especially those involving generative AI?

Heloir: Businesses should prioritise flexibility, performance and future scalability. Here are a few key reasons:

  • The variety and volume of data will continue to grow, requiring the database to handle diverse data types—structured, unstructured, and semi-structured—at scale. Selecting a database that can manage such variety without complex ETL processes is important.
  • AI models often need access to real-time data for training and inference, so the database must offer low latency to enable real-time decision-making and responsiveness.
  • As AI models grow and data volumes expand, databases must scale horizontally, to allow organisations to add capacity without significant downtime or performance degradation.
  • Seamless integration with data science and machine learning tools is crucial, and native support for AI workflows—such as managing model data, training sets and inference data—can enhance operational efficiency.
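To make the last two bullet points concrete, the toy sketch below stores flexible JSON-like documents with an embedding field and ranks them by cosine similarity. It is a linear-scan stand-in for the indexed vector search that databases such as MongoDB Atlas provide natively; the document contents and embeddings are invented for illustration.

```python
import math

# Toy document store: flexible documents plus an embedding per document,
# searched by cosine similarity. A real database would build an index over
# the embedding field instead of scanning every document.
docs = [
    {"_id": 1, "text": "invoice for GPU rental", "embedding": [0.9, 0.1, 0.0]},
    {"_id": 2, "text": "user support chat log",  "embedding": [0.1, 0.8, 0.1]},
    {"_id": 3, "text": "GPU cluster telemetry",  "embedding": [0.7, 0.0, 0.3]},
]

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def vector_search(query_embedding: list[float], k: int = 2) -> list[int]:
    ranked = sorted(docs, key=lambda d: cosine(query_embedding, d["embedding"]),
                    reverse=True)
    return [d["_id"] for d in ranked[:k]]

# A query embedding close to the "GPU" documents ranks them first.
print(vector_search([1.0, 0.0, 0.1]))  # -> [1, 3]
```

Note that the documents need not share a schema: structured fields, free text, and vectors sit side by side, which is the "variety without complex ETL" point above.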

AI News: What are the common challenges organisations face when integrating AI into their operations, and how can scalable databases help address these issues?

Heloir: There are a variety of challenges that organisations can run into when adopting AI. These include the massive amounts of data, from a wide variety of sources, that are required to build AI applications. Scaling these initiatives can also put strain on existing IT infrastructure, and once the models are built, they require continuous iteration and improvement.

To make this easier, a scalable database can simplify the management, storage, and retrieval of diverse datasets. It offers elasticity, allowing businesses to handle fluctuating demands while sustaining performance and efficiency. It also accelerates time-to-market for AI-driven innovations by enabling rapid data ingestion and retrieval, facilitating faster experimentation.

AI News: Could you provide examples of how collaborations between database providers and AI-focused companies have driven innovation in AI solutions?

Heloir: Many businesses struggle to build generative AI applications because the technology evolves so quickly. Limited expertise and the increased complexity of integrating diverse components further complicate the process, slowing innovation and hindering the development of AI-driven solutions.

One way we address these challenges is through our MongoDB AI Applications Program (MAAP), which provides customers with resources to assist them in putting AI applications into production. This includes reference architectures and an end-to-end technology stack that integrates with leading technology providers, professional services and a unified support system.

MAAP categorises customers into four groups, ranging from those seeking advice and prototyping to those developing mission-critical AI applications and overcoming technical challenges. MongoDB’s MAAP enables faster, seamless development of generative AI applications, fostering creativity and reducing complexity.

AI News: How does MongoDB approach the challenges of supporting AI-powered applications, particularly in industries that are rapidly adopting AI?

Heloir: Ensuring you have the underlying infrastructure to build what you need is always one of the biggest challenges organisations face.

To build AI-powered applications, the underlying database must be capable of running queries against rich, flexible data structures. With AI, data structures can become very complex. This is one of the biggest challenges organisations face when building AI-powered applications, and it’s precisely what MongoDB is designed to handle. We unify source data, metadata, operational data, vector data and generated data—all in one platform.

AI News: What future developments in database technology do you anticipate, and how is MongoDB preparing to support the next generation of AI applications?

Heloir: Our key values are the same today as they were when MongoDB initially launched: we want to make developers’ lives easier and help them drive business ROI. This remains unchanged in the age of artificial intelligence. We will continue to listen to our customers, assist them in overcoming their biggest difficulties, and ensure that MongoDB has the features they require to develop the next [generation of] great applications.

(Photo by Caspar Camille Rubin)

Chinese firms use cloud loophole to access US AI tech
https://www.artificialintelligence-news.com/news/chinese-firms-cloud-loophole-access-us-ai-tech/
Wed, 28 Aug 2024 13:40:22 +0000

Chinese organisations are utilising cloud services from Amazon and its competitors to gain access to advanced US AI chips and capabilities that they cannot otherwise obtain, according to a Reuters report based on public tender documents.

In a comprehensive investigation, Reuters revealed how Chinese cloud access to US AI chips is facilitated through intermediaries. Over 50 tender documents posted in the past year revealed that at least 11 Chinese entities have sought access to restricted US technologies or cloud services. Four of these explicitly named Amazon Web Services (AWS) as a cloud service provider, though accessed through Chinese intermediaries rather than directly from AWS.

“AWS complies with all applicable US laws, including trade laws, regarding the provision of AWS services inside and outside of China,” an AWS spokesperson told Reuters.

The report highlights that while the US government has restricted the export of high-end AI chips to China, providing access to such chips or advanced AI models through the cloud is not a violation of US regulations. This loophole has raised concerns among US officials and lawmakers.

One example cited in the report involves Shenzhen University, which spent 200,000 yuan (£21,925) on an AWS account to access cloud servers powered by Nvidia A100 and H100 chips for an unspecified project. The university obtained this service via an intermediary, Yunda Technology Ltd Co. Neither Shenzhen University nor Yunda Technology responded to Reuters’ requests for comment.

The investigation also revealed that Zhejiang Lab, a research institute developing its own large language model called GeoGPT, stated in a tender document that it intended to spend 184,000 yuan to purchase AWS cloud computing services. The institute claimed that its AI model could not get enough computing power from homegrown Alibaba cloud services.

Michael McCaul, chair of the US House of Representatives Foreign Affairs Committee, told Reuters: “This loophole has been a concern of mine for years, and we are long overdue to address it.”

In response to these concerns, the US Commerce Department is tightening rules. A government spokeswoman told Reuters that they are “seeking additional resources to strengthen our existing controls that restrict PRC companies from accessing advanced AI chips through remote access to cloud computing capability.”

The Commerce Department has also proposed a rule that would require US cloud computing firms to verify large AI model users and notify authorities when they use US cloud computing services to train large AI models capable of “malicious cyber-enabled activity.”

The report also found that Chinese companies are seeking access to Microsoft’s cloud services. For example, Sichuan University stated in a tender filing that it was developing a generative AI platform and would purchase 40 million Microsoft Azure OpenAI tokens to help with project delivery.

Reuters’ report also indicated that Amazon has provided Chinese businesses with access to modern AI chips as well as advanced AI models such as Anthropic’s Claude, which they would not otherwise have had. This was demonstrated by public postings, tenders, and marketing materials evaluated by the news organisation.

Chu Ruisong, President of AWS Greater China, stated during a generative AI-themed conference in Shanghai in May that “Bedrock provides a selection of leading LLMs, including prominent closed-source models such as Anthropic’s Claude 3.”

Overall, the report emphasises the difficulty of regulating access to advanced computing resources in an increasingly interconnected global technology ecosystem, and the intricate relationship between US export laws, cloud service providers, and Chinese enterprises looking to improve their AI capabilities.

As the US government works to close this loophole, the situation raises questions about the efficacy of current export controls and the potential need for more comprehensive rules covering cloud-based access to restricted technologies.

These findings are likely to feed ongoing discussions about technology transfer, national security, and the global AI race. As policymakers and industry leaders analyse them, they may spark fresh debate about how to balance technological cooperation with national security concerns in an era of rapid AI growth.

See also: GlobalData: China is ahead of global rivals for AI ‘unicorns’

Palantir and Microsoft partner to provide federal AI services
https://www.artificialintelligence-news.com/news/palantir-and-microsoft-partner-federal-ai-services/
Mon, 12 Aug 2024 10:15:42 +0000

Palantir, a data analytics company known for its work in the defence and intelligence sectors, has announced a significant partnership with Microsoft. The collaboration aims to deliver advanced services for classified networks utilised by US defence and intelligence agencies.

According to the recent announcement, Palantir is integrating Microsoft’s cutting-edge large language models via the Azure OpenAI Service into its AI platforms. The integration will occur within Microsoft’s government and classified cloud environments. As this collaboration is the first of its kind, this specific configuration has the potential to completely transform the use of AI in critical national security missions.

Palantir, whose name draws inspiration from the potentially misleading “seeing-stones” in J.R.R. Tolkien’s fictional works, specialises in processing and analysing vast quantities of data to assist governments and corporations with surveillance and decision-making tasks. While the precise nature of the services to be offered through this partnership remains somewhat ambiguous, it is clear that Palantir’s products will be integrated into Microsoft’s Azure cloud services. This development follows Azure’s previous incorporation of OpenAI’s GPT-4 technology into a “top secret” version of its software.

The company’s journey is notable. Co-founded by Peter Thiel and initially funded by In-Q-Tel, the CIA’s venture capital arm, Palantir has grown to serve a diverse clientele. Its roster includes government agencies such as Immigration and Customs Enforcement (ICE) and various police departments, as well as private sector giants like the pharmaceutical company Sanofi. Palantir has also become deeply involved in supporting Ukraine’s war efforts, with reports suggesting its software may be utilised in targeting decisions for military operations.

Even though Palantir has operated with a large customer base for years, it only reached its first annual profit in 2023. However, with the current surge of interest in AI, the company has been able to grow rapidly, particularly in the commercial sector. According to Bloomberg, Palantir’s CEO, Alex Karp, warned that Palantir’s “commercial business is exploding in a way we don’t know how to handle.”

Notably, the company’s annual filing states that it neither does business with nor on behalf of the Chinese Communist Party, nor does it plan to. This suggests Palantir is careful in developing its customer base, given the geopolitical implications of its work.

The announcement of this partnership has been well-received by investors, with Palantir’s share price surging more than 75 per cent in 2024 as of the time of writing. This dramatic increase reflects the market’s optimism about the potential of AI in national security applications and Palantir’s position at the forefront of this field.

Still, the partnership between Palantir and Microsoft raises significant questions about the role of AI in national security and surveillance. This is no surprise, as these are particularly sensitive areas, and the development of new technologies could potentially transform the sector forever.

More discussions and investigations are needed to understand the ethical implications of implementing these innovative tools. All things considered, the Palantir and Microsoft partnership is a significant event that will likely shape the future use of AI technologies and cloud computing in areas such as intelligence and defence.

(Photo by Katie Moum)

See also: Paige and Microsoft unveil next-gen AI models for cancer diagnosis

AI expansion drives $5B in deals for Lumen
https://www.artificialintelligence-news.com/news/ai-expansion-drives-5-billion-in-deals-for-lumen/
Thu, 08 Aug 2024 09:31:51 +0000

The post AI expansion drives $5B in deals for Lumen appeared first on AI News.

Lumen Technologies, a leading telecommunications firm, has recently announced significant new contracts totalling $5 billion with cloud and tech companies for its networking and cybersecurity solutions.

This surge in demand comes as businesses across various sectors rapidly adopt AI-driven technologies.

Among these notable agreements is a deal with Microsoft, which revealed last month its plans to utilise Lumen’s network equipment to expand capacity for AI workloads. Lumen, known for providing secure digital connections for data centres, disclosed recently that it is engaged in active discussions with customers regarding additional sales opportunities valued at approximately $7 billion.

The widespread adoption of AI has prompted enterprises across multiple industries to invest heavily in infrastructure capable of supporting AI-powered applications. Lumen reports that major corporations are urgently seeking to secure high-capacity fibre, a resource that is becoming increasingly valuable, and potentially scarce, as AI requirements grow.

Lumen CEO Kate Johnson is optimistic about further success: “Our partners are turning to us because of our AI-ready infrastructure and expansive network. This is just the beginning of a significant opportunity for Lumen, one that will lead to one of the largest expansions of the internet ever.”

Further evidence of the company’s strategic positioning in a rapidly changing and volatile market is the creation of a new division, Custom Networks, which will manage the Lumen Private Connectivity Fabric solutions portfolio. With demand rising from organisations that need networks tailored to their specific environments, a dedicated division is a logical step.

All of this underscores the crucial role telecommunications infrastructure plays in the current AI revolution. As more firms build AI into their operations, secure, high-capacity networks become essential.

Lumen’s recent success in securing these substantial contracts underscores the company’s strong market position and its ability to meet the evolving needs of tech giants and cloud service providers. As the AI landscape continues to evolve, Lumen appears well-positioned to capitalise on the increasing demand for advanced networking solutions.

The telecommunications sector, and Lumen in particular, is likely to remain at the forefront of enabling AI advancements across industries. As this trend progresses, it will be interesting to observe how Lumen and its competitors adapt to meet the challenges and opportunities presented by this technological shift.

(Photo by Vladimir Solomianyi)

See also: UK backs smaller AI projects while scrapping major investments

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

Microsoft to forge AI partnerships with South Korean tech leaders https://www.artificialintelligence-news.com/news/microsoft-forge-ai-partnerships-south-korean-tech-leaders/ https://www.artificialintelligence-news.com/news/microsoft-forge-ai-partnerships-south-korean-tech-leaders/#respond Mon, 22 Apr 2024 09:32:31 +0000 https://www.artificialintelligence-news.com/?p=14729 Microsoft is set to host top executives from South Korea’s leading technology firms next month to strengthen its AI partnerships. The high-level meeting, dubbed the MS CEO Summit 2024, will be held on 14 May 2024 and feature Microsoft’s founder Bill Gates and Chairman and CEO Satya Nadella. They will engage in closed-door discussions with […]

The post Microsoft to forge AI partnerships with South Korean tech leaders appeared first on AI News.

Microsoft is set to host top executives from South Korea’s leading technology firms next month to strengthen its AI partnerships.

The high-level meeting, dubbed the MS CEO Summit 2024, will be held on 14 May 2024 and feature Microsoft’s founder Bill Gates and Chairman and CEO Satya Nadella. They will engage in closed-door discussions with Kyung Kye-hyun of Samsung, Kwak Noh-jung of SK Hynix, Cho Joo-wan of LG Electronics, and Ryu Young-sang of SK Telecom.

Sources cited by The Korea Economic Daily suggest that Microsoft plans to explore joint ventures in AI technology across various sectors. Discussions with Samsung and SK Hynix will likely centre on the joint development and supply of AI chips.

Samsung and SK Hynix are among the world’s leading memory chipmakers and can enhance Microsoft’s server capabilities with next-generation technologies such as High-Bandwidth Memory (HBM) AI chips and solid-state drives (SSDs).

Collaboration topics with LG Electronics will include integrating AI technologies into home appliances, a move that will boost Microsoft’s competitive edge against rivals like Google and Meta. With SK Telecom, Microsoft is expected to delve further into cloud and 5G services.

These meetings are timely, as the global tech landscape sees an increased focus on AI development. By potentially integrating Microsoft’s AI services into products like Samsung’s smartphones and LG’s home appliances, Microsoft could significantly elevate its market standing.

Kyung of Samsung’s Device Solutions indicated last month that their new AI accelerators, Mach-1 and Mach-2, will soon move into mass production. These accelerators are designed to optimise the synergy between GPUs and HBM chips, promising a revolution in processing speeds. Earlier this month, the company unveiled the industry’s first LPDDR5X DRAM which aims to boost on-device AI.

SK Telecom, under CEO Ryu, spearheads the Global Telco AI Alliance (GTAA). This consortium, including major global players like Deutsche Telekom and SingTel, aims to develop AI infrastructure and generative AI services across a customer base exceeding 1.3 billion globally.

Last year, SK Telecom invested $100 million in AI startup Anthropic to develop a large language model (LLM) specifically for telcos. The collaborative endeavour extends to the Telco AI Platform, an ongoing project initiated by the GTAA.

The MS CEO Summit 2024 presents an opportunity for enhanced AI cooperation and technological advancement, securing Microsoft’s position as a pivotal player in the industry.

(Photo by Natalie Pedigo)

See also: Meta raises the bar with open source Llama 3 LLM

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

Mistral AI unveils LLM rivalling major players https://www.artificialintelligence-news.com/news/mistral-ai-unveils-llm-rivalling-major-players/ https://www.artificialintelligence-news.com/news/mistral-ai-unveils-llm-rivalling-major-players/#respond Tue, 27 Feb 2024 12:59:49 +0000 https://www.artificialintelligence-news.com/?p=14455 Mistral AI, a France-based startup, has introduced a new large language model (LLM) called Mistral Large that it claims can compete with several top AI systems on the market.   Mistral AI stated that Mistral Large outscored most major LLMs except for OpenAI’s recently launched GPT-4 in tests of language understanding. It also performed strongly in […]

The post Mistral AI unveils LLM rivalling major players appeared first on AI News.

Mistral AI, a France-based startup, has introduced a new large language model (LLM) called Mistral Large that it claims can compete with several top AI systems on the market.  

Mistral AI stated that Mistral Large outscored other major LLMs, with the exception of OpenAI’s recently launched GPT-4, in tests of language understanding. It also performed strongly in maths and coding assessments.

Co-founder and Chief Scientist Guillaume Lample said Mistral Large represents a major advance over earlier Mistral models. The company also launched a chatbot interface named Le Chat to allow users to interact with the system, similar to ChatGPT.  

The proprietary model boasts fluency in English, French, Spanish, German, and Italian, with a vocabulary exceeding 20,000 words. While Mistral’s first model was open-source, Mistral Large’s code remains closed like systems from OpenAI and other firms.  

Mistral AI received nearly $500 million in funding late last year from backers such as Nvidia and Andreessen Horowitz. It also recently partnered with Microsoft to provide access to Mistral Large through Azure cloud services.  

Microsoft’s €15 million investment in Mistral AI is set to face scrutiny from European Union regulators, who are already analysing the tech giant’s ties to OpenAI, maker of market-leading models like GPT-3 and GPT-4. The European Commission said Tuesday it will review Microsoft’s deal with Mistral, which could lead to a formal probe jeopardising the partnership.

Microsoft has focused most of its AI efforts on OpenAI, having invested around $13 billion into the California company. Those links are now also under review in both the EU and UK for potential anti-competitive concerns. 

Pricing for the Mistral Large model starts at $8 per million tokens of input and $24 per million output tokens. The system will leverage Azure’s computing infrastructure for training and deployment needs as Mistral AI and Microsoft partner on AI research as well.
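As a rough illustration of those published rates, a short script can estimate what a single request would cost. The token counts below are invented examples, not figures from the article; only the per-million rates come from the text above.

```python
# Estimate the cost of one Mistral Large API call using the published
# rates quoted above: $8 per million input tokens and $24 per million
# output tokens. The example token counts are hypothetical.
INPUT_RATE_PER_M = 8.00    # USD per 1M input tokens
OUTPUT_RATE_PER_M = 24.00  # USD per 1M output tokens

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated USD cost of a single request."""
    return (input_tokens / 1_000_000) * INPUT_RATE_PER_M \
         + (output_tokens / 1_000_000) * OUTPUT_RATE_PER_M

# A request with 2,000 input tokens and 500 output tokens:
# 0.002 * $8 + 0.0005 * $24 = $0.016 + $0.012 = $0.028
print(f"${estimate_cost(2_000, 500):.3f}")  # → $0.028
```

Note the asymmetry in the rates: generated output costs three times as much per token as input, so long completions dominate the bill.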

While third-party rankings have yet to fully assess Mistral Large, the firm’s earlier Mistral Medium ranked 6th out of over 60 language models. With the latest release, Mistral AI appears positioned to challenge dominant players in the increasingly crowded AI space.

(Photo by Joshua Golde on Unsplash)

See also: Stability AI previews Stable Diffusion 3 text-to-image model

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

Microsoft is quadrupling its AI and cloud investment in Spain https://www.artificialintelligence-news.com/news/microsoft-quadrupling-ai-cloud-investment-spain/ https://www.artificialintelligence-news.com/news/microsoft-quadrupling-ai-cloud-investment-spain/#respond Wed, 21 Feb 2024 15:53:40 +0000 https://www.artificialintelligence-news.com/?p=14431 Microsoft has announced plans to significantly boost its investment in AI and cloud infrastructure in Spain, with a commitment to quadruple its spending during 2024-2025 to reach $2.1 billion. This substantial increase marks the largest investment by Microsoft in Spain since its establishment in the country 37 years ago. The tech giant is set to […]

The post Microsoft is quadrupling its AI and cloud investment in Spain appeared first on AI News.

Microsoft has announced plans to significantly boost its investment in AI and cloud infrastructure in Spain, with a commitment to quadruple its spending during 2024-2025 to reach $2.1 billion. This substantial increase marks the largest investment by Microsoft in Spain since its establishment in the country 37 years ago.

The tech giant is set to unveil new data centres in Madrid and has outlined its intention to construct additional centres in Aragon, catering to European companies and public entities. The increased European infrastructure aims to deliver Microsoft’s cloud services with heightened security, privacy, and data sovereignty measures, facilitating access to the company’s full suite of AI solutions for businesses and public administrations in the region.

According to an analysis by IDC, these new Microsoft data centres have the potential to contribute €8.4 billion to the national GDP and help to generate 69,000 jobs from 2026 to 2030.

The investment commitment aligns with a collaborative agreement forged between the President of the Government, Pedro Sánchez, and Microsoft President Brad Smith. Under this agreement, Microsoft and the Government of Spain will work together on initiatives aimed at advancing responsible AI, enhancing citizen services, and bolstering national cybersecurity and resilience across Spanish companies, public bodies, and critical infrastructures.

This partnership operates within the framework of the National Strategy for Artificial Intelligence and the National Cybersecurity Strategy outlined by the Spanish government. It revolves around four key action points:

  1. Extension of AI in public administration: Efforts will be directed towards modernising administrative processes and equipping officials with AI tools to boost efficiency. This includes deploying generative AI solutions and implementing AI training plans for officials.
  2. Promotion of responsible AI: Microsoft will share its responsible AI design standards, along with implementation guides and best practices documentation, with the Spanish Agency for the Supervision of Artificial Intelligence (AESIA).
  3. Strengthening national cybersecurity: Collaboration with the National Cryptological Center (CNI) aims to enhance early warning mechanisms and response to cybersecurity incidents in public administrations.
  4. Improving cyber-resilience of companies: Microsoft will collaborate with the National Institute of Cybersecurity (INCIBE) to enhance the cybersecurity posture of Spanish companies, particularly SMEs, by providing access to threat intelligence and conducting joint outreach initiatives.

Microsoft’s increased investment underscores its commitment to advancing technological innovation in Spain while fostering a secure and responsible digital ecosystem.

(Photo by engin akyurt on Unsplash)

See also: Wipro and IBM collaborate to propel enterprise AI

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

Google announces UK data centre to meet ‘growing demand’ for AI https://www.artificialintelligence-news.com/news/google-announces-uk-data-centre-meet-growing-demand-ai/ https://www.artificialintelligence-news.com/news/google-announces-uk-data-centre-meet-growing-demand-ai/#respond Fri, 19 Jan 2024 14:33:41 +0000 https://www.artificialintelligence-news.com/?p=14240 Google has announced plans to invest $1 billion in a new data centre in the UK which it says will help to meet “growing demand” for its AI and cloud services. The 33-acre site in Waltham Cross, Hertfordshire will bring much-needed compute capacity to businesses, supporting AI innovation and ensuring reliable digital services for Google […]

The post Google announces UK data centre to meet ‘growing demand’ for AI appeared first on AI News.

Google has announced plans to invest $1 billion in a new data centre in the UK which it says will help to meet “growing demand” for its AI and cloud services.

The 33-acre site in Waltham Cross, Hertfordshire will bring much-needed compute capacity to businesses, supporting AI innovation and ensuring reliable digital services for Google Cloud customers and general consumers relying on products like Search, Maps, and YouTube.

Ruth Porat, Alphabet’s president and chief financial officer, said the data centre “represents our latest investment in the UK and the wider digital economy.” She added that it builds on previous investments like Saint Giles and Kings Cross offices, a multi-year research deal with Cambridge, and the Grace Hopper subsea cable connecting the UK with the US and Spain.

Porat said the facility will “help meet growing demand for our AI and cloud services and bring crucial compute capacity to businesses across the UK while creating construction and technical jobs for the local community.”

As a pioneer in computing infrastructure, Google runs some of the most efficient data centres in the world and has committed to powering them entirely on carbon-free energy around the clock by 2030.

Last year, Google signed a deal with ENGIE for offshore wind energy from Scotland’s Moray West farm, which will provide 100MW of energy and put UK operations on track for 90 percent clean energy by 2025.

The new data centre will recover heat for local homes and businesses while also deploying an air-cooling system.

Porat called the new data centre the “latest in a series of investments that support Brits and the wider economy” and evidence of its “continued commitment to the UK.” Other investments include $1 billion for its Central Saint Giles office space, developing the one million sq ft King’s Cross campus, and an Accessibility Discovery Centre spurring accessible technology.

Beyond offices, data centres, and subsea cables, Google has also provided digital skills training for over a million Brits and expanded its AI-focussed Digital Garage curriculum to capitalise on demand for the technology.  

Google’s announcement follows Microsoft confirming a £2.5 billion data centre in the UK last November after overcoming regulatory hurdles for its £55 billion Activision Blizzard acquisition.

“This is the single largest investment in its 40-year history in the country which will see Microsoft grow its UK AI infrastructure across sites in London and Cardiff and potential expansion into northern England, helping to meet the exploding demand for efficient, scalable, and sustainable AI specific compute power,” explained HM Treasury.

“Data centres process, host, and store the massive amounts of digital information that is critical for developing AI models.”

Microsoft is supplying its UK data centre with more than 20,000 advanced GPUs for machine learning and the development of new AI models.

Chancellor of the Exchequer Jeremy Hunt said: “The UK is the tech hub of Europe with an ecosystem worth more than that of Germany and France combined – and this investment is another vote of confidence in us as a science superpower.”

(Image Credit: Google)

See also: DeepMind AlphaGeometry solves complex geometry problems

Amdocs, NVIDIA and Microsoft Azure build custom LLMs for telcos https://www.artificialintelligence-news.com/news/amdocs-nvidia-microsoft-azure-build-custom-llms-for-telcos/ https://www.artificialintelligence-news.com/news/amdocs-nvidia-microsoft-azure-build-custom-llms-for-telcos/#respond Thu, 16 Nov 2023 12:09:48 +0000 https://www.artificialintelligence-news.com/?p=13907 Amdocs has partnered with NVIDIA and Microsoft Azure to build custom Large Language Models (LLMs) for the $1.7 trillion global telecoms industry. Leveraging the power of NVIDIA’s AI foundry service on Microsoft Azure, Amdocs aims to meet the escalating demand for data processing and analysis in the telecoms sector. The telecoms industry processes hundreds of […]

The post Amdocs, NVIDIA and Microsoft Azure build custom LLMs for telcos appeared first on AI News.

Amdocs has partnered with NVIDIA and Microsoft Azure to build custom Large Language Models (LLMs) for the $1.7 trillion global telecoms industry.

Leveraging the power of NVIDIA’s AI foundry service on Microsoft Azure, Amdocs aims to meet the escalating demand for data processing and analysis in the telecoms sector.

The telecoms industry processes hundreds of petabytes of data daily. With global data transactions anticipated to surpass 180 zettabytes by 2025, telcos are turning to generative AI to enhance efficiency and productivity.

NVIDIA’s AI foundry service – comprising the NVIDIA AI Foundation Models, NeMo framework, and DGX Cloud AI supercomputing – provides an end-to-end solution for creating and optimising custom generative AI models.

Amdocs will utilise the AI foundry service to develop enterprise-grade LLMs tailored for the telco and media industries, facilitating the deployment of generative AI use cases across various business domains.

This collaboration builds on the existing Amdocs-Microsoft partnership, ensuring the adoption of applications in secure, trusted environments, both on-premises and in the cloud.

Enterprises are increasingly focusing on developing custom models to perform industry-specific tasks. Amdocs serves over 350 of the world’s leading telecom and media companies across 90 countries. This partnership with NVIDIA opens avenues for exploring generative AI use cases, with initial applications focusing on customer care and network operations.

In customer care, the collaboration aims to accelerate the resolution of inquiries by leveraging information from across company data. In network operations, the companies are exploring solutions to address configuration, coverage, or performance issues in real-time.

This move by Amdocs positions the company at the forefront of ushering in a new era for the telecoms industry by harnessing the capabilities of custom generative AI models.

(Photo by Danist Soh on Unsplash)

See also: Wolfram Research: Injecting reliability into generative AI

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with Digital Transformation Week.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

Dave Barnett, Cloudflare: Delivering speed and security in the AI era https://www.artificialintelligence-news.com/news/dave-barnett-cloudflare-delivering-speed-and-security-in-ai-era/ https://www.artificialintelligence-news.com/news/dave-barnett-cloudflare-delivering-speed-and-security-in-ai-era/#respond Fri, 13 Oct 2023 15:39:34 +0000 https://www.artificialintelligence-news.com/?p=13742 AI News sat down with Dave Barnett, Head of SASE at Cloudflare, during Cyber Security & Cloud Expo Europe to delve into how the firm uses its cloud-native architecture to deliver speed and security in the AI era. According to Barnett, Cloudflare’s cloud-native approach allows the company to continually innovate in the digital space. Notably, […]

The post Dave Barnett, Cloudflare: Delivering speed and security in the AI era appeared first on AI News.

AI News sat down with Dave Barnett, Head of SASE at Cloudflare, during Cyber Security & Cloud Expo Europe to delve into how the firm uses its cloud-native architecture to deliver speed and security in the AI era.

According to Barnett, Cloudflare’s cloud-native approach allows the company to continually innovate in the digital space. Notably, a significant portion of its services is offered to consumers for free.

“We continuously reinvent, we’re very comfortable in the digital space. We’re very proud that the vast majority of our customers actually consume our services for free because it’s our way of giving back to society,” said Barnett.

Barnett also revealed Cloudflare’s focus on AI during their anniversary week. The company aims to enable organisations to consume AI securely and make it accessible to everyone. Barnett says that Cloudflare achieves those goals in three key ways.

“One, as I mentioned, is operating AI inference engines within Cloudflare close to consumers’ eyeballs. The second area is securing the use of AI within the workplace, because, you know, AI has some incredibly positive impacts on people … but the problem is there are some data protection requirements around that,” explains Barnett.

“Finally, is the question of, ‘Could AI be used by the bad guys against the good guys?’ and that’s an area that we’re continuing to explore.”

Just a day earlier, AI News heard from Raviv Raz, Cloud Security Manager at ING, during a session at the expo that focused on the alarming potential of AI-powered cybercrime.

Regarding security models, Barnett discussed the evolution of the zero-trust concept, emphasising its practical applications in enhancing both usability and security. Cloudflare’s own journey with zero-trust began with a focus on usability, leading to the development of its own zero-trust network access products.

“We have servers everywhere and engineers everywhere that need to reboot those servers. In 2015, that involved VPNs and two-factor authentication… so we built our own zero-trust network access product for our own use that meant the user experiences for engineers rebooting servers in far-flung places was a lot better,” says Barnett.

“After 2015, the world started to realise that this approach had great security benefits so we developed that product and launched it in 2018 as Cloudflare Access.”

Cloudflare’s innovative strides also include leveraging NVIDIA GPUs to accelerate machine learning AI tasks on an edge network. This technology enables organisations to run inference tasks – such as image recognition – close to end-users, ensuring low latency and optimal performance.

“We launched Workers AI, which means that organisations around the world – in fact, individuals as well – can run their inference tasks at a very close place to where the consumers of that inference are,” explains Barnett.

“You could ask a question, ‘Cat or not cat?’, to a trained cat detection engine very close to the people that need it. We’re doing that in a way that makes it easily accessible to organisations looking to use AI to benefit their business.”
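For developers curious what such an edge inference call might look like in practice, here is a minimal sketch modelled loosely on the REST style of Cloudflare’s Workers AI. The account ID, API token, and model name are placeholders, and the exact endpoint shape is an assumption rather than something stated in the interview; check the current Workers AI documentation before relying on it.

```python
# A minimal sketch of calling an image-classification model on an
# edge inference service, modelled loosely on the REST interface of
# Cloudflare's Workers AI. The account ID, token, and model name are
# placeholders, and the endpoint shape is an assumption.
import urllib.request

API_BASE = "https://api.cloudflare.com/client/v4/accounts"

def build_inference_request(account_id: str, model: str, api_token: str,
                            image_bytes: bytes) -> urllib.request.Request:
    """Construct (but do not send) a POST request carrying raw image bytes."""
    url = f"{API_BASE}/{account_id}/ai/run/{model}"
    return urllib.request.Request(
        url,
        data=image_bytes,
        headers={"Authorization": f"Bearer {api_token}"},
        method="POST",
    )

# 'Cat or not cat?' -- send an image to a classification model running
# close to the user and inspect the labels in the response.
req = build_inference_request(
    "YOUR_ACCOUNT_ID",          # placeholder account identifier
    "@cf/microsoft/resnet-50",  # assumed model name, for illustration only
    "YOUR_API_TOKEN",           # placeholder credential
    image_bytes=b"...",         # raw image bytes, elided here
)
# Actually sending it would look like (requires a real account and token):
# with urllib.request.urlopen(req) as resp:
#     print(resp.read())  # JSON with label/score entries for the image
```

The design point is that the POST lands on a nearby edge node rather than a distant central data centre, which is what keeps inference latency low for interactive use cases like the one Barnett describes.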

For developers interested in AI, Barnett outlined Cloudflare’s role in supporting the deployment of machine learning models. While machine learning training is typically conducted outside Cloudflare, the company excels in providing low-latency inference engines that are essential for real-time applications like image recognition.

Our conversation with Barnett shed light on Cloudflare’s commitment to cloud-native architecture, AI accessibility, and cybersecurity. As the industry continues to advance, Cloudflare remains at the forefront of delivering speed and security in the AI era.

You can watch our full interview with Dave Barnett below:

(Photo by ryan baker on Unsplash)

See also: JPMorgan CEO: AI will be used for ‘every single process’

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with Cyber Security & Cloud Expo, Edge Computing Expo, and Digital Transformation Week.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.
