Military - AI News
https://www.artificialintelligence-news.com/categories/ai-industries/military/
Artificial Intelligence News
Thu, 24 Apr 2025 11:42:48 +0000

Best data security platforms of 2025
https://www.artificialintelligence-news.com/news/best-data-security-platforms-of-2025/
Wed, 12 Mar 2025 08:44:49 +0000

The post Best data security platforms of 2025 appeared first on AI News.

With the rapid growth in the generation, storage, and sharing of data, ensuring its security has become both a necessity and a formidable challenge. Data breaches, cyberattacks, and insider threats are constant risks that demand sophisticated defences. This is where data security platforms (DSPs) come into play, providing organisations with centralised tools and strategies to protect sensitive information and maintain compliance.

Key components of data security platforms

Effective DSPs are built on several core components that work together to protect data from unauthorised access, misuse, and theft. The components include:

1. Data discovery and classification

Before data can be secured, it must first be found and understood. DSPs typically include tools that automatically discover and categorise data based on its sensitivity and use. For example:

  • Personally identifiable information (PII): Names, addresses, social security numbers, etc.
  • Financial data: Credit card details, transaction records.
  • Intellectual property (IP): Trade secrets, proprietary designs.
  • Regulated data: Information governed by laws like GDPR, HIPAA, or CCPA.

By identifying data types and categorising them by sensitivity level, organisations can prioritise their security efforts.
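To make discovery and classification concrete, here is a minimal, stdlib-only sketch of pattern-based labelling. The patterns and label names are illustrative assumptions rather than any vendor's actual rules; production platforms layer validation (for example, Luhn checks on card numbers) and machine-learning classifiers on top of regexes like these.

```python
import re

# Hypothetical detection patterns for illustration only.
PATTERNS = {
    "PII": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),           # US SSN format
    "Financial": re.compile(r"\b(?:\d[ -]?){12}\d{4}\b"),  # 16-digit card-like number
    "Email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
}

def classify(text: str) -> set[str]:
    """Return every sensitivity label whose pattern matches the text."""
    return {label for label, rx in PATTERNS.items() if rx.search(text)}

print(sorted(classify("Contact jane@example.com, SSN 123-45-6789")))  # ['Email', 'PII']
```

Once text carries labels like these, the platform can route it to the appropriate controls, such as stricter encryption or narrower access policies.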

2. Data encryption

Encryption transforms readable data into an unreadable format, ensuring that even if unauthorised users access the data, they cannot interpret it without the decryption key. Most DSPs support various encryption methods, including:

  • At-rest encryption: Securing data stored on drives, databases, or other storage systems.
  • In-transit encryption: Protecting data as it moves between devices, networks, or applications.

Modern DSPs often support the Advanced Encryption Standard (AES) and bring-your-own-key (BYOK) models, keeping data secure even when it lives in third-party cloud storage.
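The at-rest round trip can be illustrated with a deliberately tiny, dependency-free sketch. The XOR one-time pad below is not AES and is unsuitable for real workloads, where an authenticated cipher such as AES-GCM from a vetted library (with keys held in a key management service) is the norm; the sketch only shows the shape of encrypt, store, decrypt and the separation of data from key.

```python
import secrets

def xor_cipher(data: bytes, key: bytes) -> bytes:
    """Toy one-time-pad XOR; symmetric, so the same call decrypts."""
    assert len(key) >= len(data), "pad must cover the data"
    return bytes(d ^ k for d, k in zip(data, key))

record = b"card=4111111111111111"
key = secrets.token_bytes(len(record))   # stored separately, e.g. in a KMS

ciphertext = xor_cipher(record, key)     # what lands on disk ("at rest")
restored = xor_cipher(ciphertext, key)   # readable again only with the key
assert restored == record
```

In-transit encryption follows the same principle but is usually delegated to TLS rather than implemented by hand.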

3. Access control and identity management

Managing who has access to data is a critical aspect of data security. DSPs enforce robust role-based access control (RBAC), ensuring only authorised users and systems can access sensitive information. Through identity and access management (IAM) integration, DSPs can strengthen security further by combining authentication methods like:

  • Passwords.
  • Biometrics (e.g. fingerprint or facial recognition).
  • Multi-factor authentication (MFA).
  • Behaviour-based authentication (monitoring user actions for anomalies).
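A minimal sketch of an RBAC check gated on MFA might look like the following. The role map and permission strings are hypothetical; real IAM integrations resolve roles and MFA state from an identity provider rather than a hard-coded table.

```python
# Hypothetical role-to-permission map for illustration.
ROLE_PERMISSIONS = {
    "analyst": {"read:reports"},
    "admin": {"read:reports", "write:reports", "manage:users"},
}

def authorise(role: str, permission: str, mfa_verified: bool) -> bool:
    """Grant access only if the role holds the permission AND MFA passed."""
    return mfa_verified and permission in ROLE_PERMISSIONS.get(role, set())

print(authorise("analyst", "read:reports", mfa_verified=True))   # True
print(authorise("admin", "manage:users", mfa_verified=False))    # False: MFA failed
```

Behaviour-based authentication would add a further check here, comparing the request against the user's historical access pattern.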

4. Data loss prevention (DLP)

Data loss prevention tools in DSPs help prevent unauthorised sharing or exfiltration of sensitive data. They monitor and control data flows, blocking suspicious activity like:

  • Sending confidential information over email.
  • Transferring sensitive data to unauthorised external devices.
  • Uploading important files to unapproved cloud services.

By enforcing data-handling policies, DSPs help organisations maintain control over their sensitive information.
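A stripped-down DLP gate can be sketched as a pattern check on outbound content. The patterns here are hypothetical stand-ins for a real policy engine, which would also weigh the destination, the user, and the data's classification labels.

```python
import re

# Hypothetical outbound policy: block card-like numbers and US SSNs.
BLOCK_PATTERNS = [
    re.compile(r"\b(?:\d[ -]?){12}\d{4}\b"),  # 16-digit card-like number
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),     # US SSN format
]

def dlp_decision(outbound_text: str) -> str:
    """Return BLOCK if any sensitive pattern appears, else ALLOW."""
    if any(rx.search(outbound_text) for rx in BLOCK_PATTERNS):
        return "BLOCK"
    return "ALLOW"

print(dlp_decision("Invoice attached, card 4111 1111 1111 1111"))  # BLOCK
print(dlp_decision("Meeting moved to 3pm"))                        # ALLOW
```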

5. Threat detection and response

DSPs employ threat detection systems powered by machine learning, artificial intelligence (AI), and behaviour analytics to identify unauthorised or malicious activity. Common features include:

  • Anomaly detection: Identifies unusual behaviour, like accessing files outside normal business hours.
  • Insider threat detection: Monitors employees or contractors who might misuse their access to internal data.
  • Real-time alerts: Provide immediate notifications when a potential threat is detected.

Some platforms also include automated response mechanisms to isolate affected data or deactivate compromised user accounts.
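The anomaly-detection idea can be sketched with a deliberately crude rule: flag any access outside an assumed business-hours window or outside the user's own historical hours. Production systems learn such baselines statistically instead of hard-coding them.

```python
from datetime import datetime

BUSINESS_HOURS = range(8, 18)  # assumed 08:00-18:00 policy window

def is_anomalous(access_time: datetime, user_baseline_hours: set[int]) -> bool:
    """Flag access outside business hours or outside this user's own pattern."""
    hour = access_time.hour
    return hour not in BUSINESS_HOURS or hour not in user_baseline_hours

baseline = {9, 10, 11, 14, 15, 16}  # hours this user historically works
print(is_anomalous(datetime(2025, 3, 12, 3, 0), baseline))   # True: 3 a.m. access
print(is_anomalous(datetime(2025, 3, 12, 10, 0), baseline))  # False: routine
```

A real-time alert or automated response, such as isolating the data or suspending the account, would hang off a True result.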

6. Compliance audits and reporting

Many industries are subject to strict data protection regulations, like GDPR, HIPAA, CCPA, or PCI DSS. DSPs help organisations comply with these laws by:

  • Continuously monitoring data handling practices.
  • Generating detailed audit trails.
  • Providing pre-configured compliance templates and reporting tools.

These features simplify regulatory audits and reduce the risk of non-compliance penalties.
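One common building block behind trustworthy audit trails is hash chaining: each record embeds the hash of its predecessor, so retroactive tampering breaks the chain. The JSON record shape below is an assumption for illustration; real platforms add signing, secure storage, and their own schemas.

```python
import hashlib
import json
from datetime import datetime, timezone

def audit_entry(user: str, action: str, resource: str, prev_hash: str = "") -> dict:
    """Build an append-only audit record chained to the previous entry's hash."""
    entry = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "action": action,
        "resource": resource,
        "prev": prev_hash,
    }
    payload = json.dumps(entry, sort_keys=True).encode()
    entry["hash"] = hashlib.sha256(payload).hexdigest()
    return entry

first = audit_entry("alice", "read", "db/customers")
second = audit_entry("alice", "export", "db/customers", prev_hash=first["hash"])
print(second["prev"] == first["hash"])  # True: the entries are chained
```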

Best data security platforms of 2025

Whether you’re a small business or a large enterprise, these tools will help you manage risks, secure databases, and protect sensitive information.

1. Velotix

Velotix is an AI-driven data security platform focused on policy automation and intelligent data access control. It simplifies compliance with stringent data regulations like GDPR, HIPAA, and CCPA, and helps organisations strike the right balance between accessibility and security.

Key features:

  • AI-powered access governance: Velotix uses machine learning to ensure users only access data they need to see, based on dynamic access policies.
  • Seamless integration: It integrates smoothly with existing infrastructures across cloud and on-premises environments.
  • Compliance automation: Simplifies meeting legal and regulatory requirements by automating compliance processes.
  • Scalability: Ideal for enterprises with complex data ecosystems, supporting hundreds of terabytes of sensitive data.

Velotix stands out for its ability to reduce the complexity of data governance, making it a must-have in today’s security-first corporate world.

2. NordLayer

NordLayer, from the creators of NordVPN, offers a secure network access solution tailored for businesses. While primarily a network security tool, it doubles as a robust data security platform by ensuring end-to-end encryption for your data in transit.

Key features:

  • Zero trust security: Implements a zero trust approach, meaning users and devices must be verified every time data access is requested.
  • AES-256 encryption: Protects data flows with military-grade encryption.
  • Cloud versatility: Supports hybrid and multi-cloud environments for maximum flexibility.
  • Rapid deployment: Easy to implement even for smaller teams, requiring minimal IT involvement.

NordLayer ensures secure, encrypted communications between your team and the cloud, offering peace of mind when managing sensitive data.

3. HashiCorp Vault

HashiCorp Vault is a leader in secrets management, encryption as a service, and identity-based access. Designed for developers, it simplifies access control without placing sensitive data at risk, making it well suited to modern application development.

Key features:

  • Secrets management: Protects sensitive credentials like API keys, tokens, and passwords.
  • Dynamic secrets: Automatically generates temporary, time-limited credentials for improved security.
  • Encryption as a service: Offers flexible tools for encrypting any data across multiple environments.
  • Audit logging: Monitors data access attempts for greater accountability and compliance.

With a strong focus on application-level security, HashiCorp Vault is ideal for organisations seeking granular control over sensitive operational data.

4. Imperva Database Risk & Compliance

Imperva is a pioneer in database security. Its Database Risk & Compliance solution combines analytics, automation, and real-time monitoring to protect sensitive data from breaches and insider threats.

Key features:

  • Database activity monitoring (DAM): Tracks database activity in real time to identify unusual patterns.
  • Vulnerability assessment: Scans databases for security weaknesses and provides actionable remediation steps.
  • Cloud and hybrid deployment: Supports flexible environments, ranging from on-premises deployments to modern cloud setups.
  • Audit preparation: Simplifies audit readiness with detailed reporting tools and predefined templates.

Imperva’s tools are trusted by enterprises to secure their most confidential databases, ensuring compliance and top-notch protection.

5. ESET

ESET, a well-known name in cybersecurity, offers an enterprise-grade security solution that includes powerful data encryption tools. Famous for its malware protection, ESET combines endpoint security with encryption to safeguard sensitive information.

Key features:

  • Endpoint encryption: Ensures data remains protected even if devices are lost or stolen.
  • Multi-platform support: Works across Windows, Mac, and Linux systems.
  • Proactive threat detection: Combines AI and machine learning to detect potential threats before they strike.
  • Ease of use: User-friendly dashboards enable intuitive management of security policies.

ESET provides an all-in-one solution for companies needing endpoint protection, encryption, and proactive threat management.

6. SQL Secure

Aimed at database administrators, SQL Secure delivers specialised tools to safeguard SQL Server environments. It allows for detailed role-based analysis, helping organisations improve their database security posture and prevent data leaks.

Key features:

  • Role analysis: Identifies and mitigates excessive or unauthorised permission assignments.
  • Dynamic data masking: Protects sensitive data by obscuring it in real time in applications and queries.
  • Customisable alerts: Notify teams of improper database access or policy violations immediately.
  • Regulatory compliance: Predefined policies make it easy to align with GDPR, HIPAA, PCI DSS, and other regulations.

SQL Secure is a tailored solution for businesses dependent on SQL databases, providing immediate insights and action plans for tighter security.

7. Acra

Acra is a modern, developer-friendly cryptographic tool engineered for data encryption and secure data lifecycle management. It brings cryptography closer to applications, ensuring deep-rooted data protection at every level.

Key features:

  • Application-level encryption: Empowers developers to integrate customised encryption policies directly into their apps.
  • Intrusion detection: Monitors for data leaks with a robust intrusion detection mechanism.
  • End-to-end data security: Protects data at rest, in transit, and in use, making it more versatile than traditional encryption tools.
  • Open source availability: Trusted by developers thanks to its open-source model, offering transparency and flexibility.

Acra is particularly popular with startups and tech-savvy enterprises needing a lightweight, developer-first approach to securing application data.

8. BigID

BigID focuses on privacy, data discovery, and compliance by using AI to identify sensitive data across structured and unstructured environments. Known for its data intelligence capabilities, BigID is one of the most comprehensive platforms for analysing and protecting enterprise data.

Key features:

  • Data discovery: Automatically classifies sensitive data like personally identifiable information (PII) and protected health information (PHI).
  • Privacy-by-design: Built to streamline compliance with global privacy laws like GDPR, CCPA, and more.
  • Risk management: Assess data risks and prioritise actions based on importance.
  • Integrations: Easily integrates with other security platforms and cloud providers for a unified approach.

BigID excels at uncovering hidden risks and ensuring compliance, making it an essential tool for data-driven enterprises.

9. DataSunrise Database Security

DataSunrise specialises in database firewall protection and intrusion detection for a variety of databases, including SQL-based platforms, NoSQL setups, and cloud-hosted solutions. It focuses on safeguarding sensitive data while providing robust real-time monitoring.

Key features:

  • Database firewall: Blocks unauthorised access attempts with role-specific policies.
  • Sensitive data discovery: Identifies risky data in your database for preventative action.
  • Audit reporting: Generates detailed investigative reports about database activity.
  • Cross-platform compatibility: Works with MySQL, PostgreSQL, Oracle, Amazon Aurora, Snowflake, and more.

DataSunrise is highly configurable and scalable, making it a solid choice for organisations running diverse database environments.

10. Covax Polymer

Covax Polymer is an innovative data security platform dedicated to governing sensitive data use in cloud-based collaboration tools like Slack, Microsoft Teams, and Google Workspace. It’s perfect for businesses that rely on SaaS applications for productivity.

Key features:

  • Real-time governance: Monitors and protects data transfers occurring across cloud collaboration tools.
  • Context-aware decisions: Evaluates interactions to identify potential risks, ensuring real-time security responses.
  • Data loss prevention (DLP): Prevents sensitive information from being shared outside approved networks.
  • Comprehensive reporting: Tracks and analyses data sharing trends, offering actionable insights for compliance.

Covax Polymer addresses the growing need for securing communications and shared data in collaborative workspaces.

(Image source: Unsplash)

ChatGPT Gov aims to modernise US government agencies
https://www.artificialintelligence-news.com/news/chatgpt-gov-aims-modernise-us-government-agencies/
Tue, 28 Jan 2025 16:21:26 +0000

The post ChatGPT Gov aims to modernise US government agencies appeared first on AI News.

OpenAI has launched ChatGPT Gov, a specially designed version of its AI chatbot tailored for use by US government agencies.

ChatGPT Gov aims to harness the potential of AI to enhance efficiency, productivity, and service delivery while safeguarding sensitive data and complying with stringent security requirements.

“We believe the US government’s adoption of artificial intelligence can boost efficiency and productivity and is crucial for maintaining and enhancing America’s global leadership in this technology,” explained OpenAI.

The company emphasised how its AI solutions present “enormous potential” for tackling complex challenges in the public sector, ranging from improving public health and infrastructure to bolstering national security.

By introducing ChatGPT Gov, OpenAI hopes to offer tools that “serve the national interest and the public good, aligned with democratic values,” while assisting policymakers in responsibly integrating AI to enhance services for the American people.

The role of ChatGPT Gov

Public sector organisations can deploy ChatGPT Gov within their own Microsoft Azure environments, either through Azure’s commercial cloud or the specialised Azure Government cloud.

This self-hosting capability ensures that agencies can meet strict security, privacy, and compliance standards, such as IL5, CJIS, ITAR, and FedRAMP High. 

OpenAI believes this infrastructure will not only help facilitate compliance with cybersecurity frameworks, but also speed up internal authorisation processes for handling non-public sensitive data.

The tailored version of ChatGPT incorporates many of the features found in the enterprise version, including:

  • The ability to save and share conversations within a secure government workspace.
  • Uploading text and image files for streamlined workflows.
  • Access to GPT-4o, OpenAI’s state-of-the-art model capable of advanced text interpretation, summarisation, coding, image analysis, and mathematics.
  • Customisable GPTs, which enable users to create and share specifically tailored models for their agency’s needs.
  • A built-in administrative console to help CIOs and IT departments manage users, groups, security protocols such as single sign-on (SSO), and more.

These features ensure that ChatGPT Gov is not merely a tool for innovation, but an infrastructure supportive of secure and efficient operations across US public-sector entities.

OpenAI says it’s actively working to achieve FedRAMP Moderate and High accreditations for its fully managed SaaS product, ChatGPT Enterprise, a step that would bolster trust in its AI offerings for government use.

Additionally, the company is exploring ways to expand ChatGPT Gov’s capabilities into Azure’s classified regions for even more secure environments.

“ChatGPT Gov reflects our commitment to helping US government agencies leverage OpenAI’s technology today,” the company said.

A better track record in government than most politicians

Since January 2024, ChatGPT has seen widespread adoption among US government agencies, with over 90,000 users across more than 3,500 federal, state, and local agencies having already sent over 18 million messages to support a variety of operational tasks.

Several notable agencies have highlighted how they are employing OpenAI’s AI tools for meaningful outcomes:

  • The Air Force Research Laboratory: The lab uses ChatGPT Enterprise for administrative purposes, including improving access to internal resources, basic coding assistance, and boosting AI education efforts.
  • Los Alamos National Laboratory: The laboratory leverages ChatGPT Enterprise for scientific research and innovation. This includes work within its Bioscience Division, which is evaluating ways GPT-4o can safely advance bioscientific research in laboratory settings.
  • State of Minnesota: Minnesota’s Enterprise Translations Office uses ChatGPT Team to provide faster, more accurate translation services to multilingual communities across the state. The integration has resulted in significant cost savings and reduced turnaround times.
  • Commonwealth of Pennsylvania: Employees in Pennsylvania’s pioneering AI pilot programme reported that ChatGPT Enterprise helped them reduce routine task times, such as analysing project requirements, by approximately 105 minutes per day on days they used the tool.

These early use cases demonstrate the transformative potential of AI applications across various levels of government.

Beyond delivering tangible improvements to government workflows, OpenAI seeks to foster public trust in artificial intelligence through collaboration and transparency. The company said it is committed to working closely with government agencies to align its tools with shared priorities and democratic values. 

“We look forward to collaborating with government agencies to enhance service delivery to the American people through AI,” OpenAI stated.

As other governments across the globe begin adopting similar technologies, America’s proactive approach may serve as a model for integrating AI into the public sector while safeguarding against risks.

Whether supporting administrative workflows, research initiatives, or language services, ChatGPT Gov stands as a testament to the growing role AI will play in shaping the future of effective governance.

(Photo by Dave Sherrill)

See also: Yiannis Antoniou, Lab49: OpenAI Operator kickstarts era of browser AI agents

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

President Biden issues first National Security Memorandum on AI
https://www.artificialintelligence-news.com/news/president-biden-issues-first-national-security-memorandum-ai/
Fri, 25 Oct 2024 14:45:42 +0000

The post President Biden issues first National Security Memorandum on AI appeared first on AI News.

President Biden has issued the US’ first-ever National Security Memorandum (NSM) on AI, addressing how the nation approaches the technology from a security perspective.

The memorandum, which builds upon Biden’s earlier executive order on AI, is founded on the premise that cutting-edge AI developments will substantially impact national security and foreign policy in the immediate future.

Security experts suggest the implications are already being felt. “AI already has implications for national security, as we know that more and more attackers are using AI to create higher volume and more complex attacks, especially in the social engineering and misinformation fronts,” says Melissa Ruzzi, Director of AI at AppOmni.

At its core, the NSM outlines three primary objectives: establishing US leadership in safe AI development, leveraging AI technologies for national security, and fostering international governance frameworks.

“Our competitors want to upend US AI leadership and have employed economic and technological espionage in efforts to steal US technology,” the memorandum states, elevating the protection of American AI innovations to a “top-tier intelligence priority.”

The document formally designates the AI Safety Institute as the primary governmental point of contact for the AI industry. This institute will be staffed with technical experts and will maintain close partnerships with national security agencies, including the intelligence community, Department of Defence, and Department of Energy.

“The actions listed in the memo are great starting points to get a good picture of the status quo and obtain enough information to make decisions based on data, instead of jumping to conclusions to make decisions based on vague assumptions,” Ruzzi explains.

However, Ruzzi cautions that “the data that needs to be collected on the actions is not trivial, and even with the data, assumptions and trade-offs will be necessary for final decision making. Making decisions after data gathering is where the big challenge will be.”

In a notable move to democratise AI research, the memorandum reinforces support for the National AI Research Resource pilot programme. This initiative aims to extend AI research capabilities beyond major tech firms to universities, civil society organisations, and small businesses.

The NSM introduces the Framework to Advance AI Governance and Risk Management in National Security (PDF), which establishes comprehensive guidelines for implementing AI in national security applications. These guidelines mandate rigorous risk assessment procedures and safeguards against privacy invasions, bias, discrimination, and human rights violations.

Security considerations feature prominently in the framework, with Ruzzi emphasising their importance: “Cybersecurity of AI is crucial – we know that if AI is misconfigured, it can pose risks similar to misconfigurations in SaaS applications that cause confidential data to be exposed.”

On the international front, the memorandum builds upon recent diplomatic achievements, including the G7’s International Code of Conduct on AI and agreements reached at the Bletchley and Seoul AI Safety Summits. Notably, 56 nations have endorsed the US-led Political Declaration on the Military Use of AI and Autonomy.

The Biden administration has also secured a diplomatic victory with the passage of the first UN General Assembly Resolution on AI, which garnered unanimous support, including co-sponsorship from China.

The memorandum emphasises the critical role of semiconductor manufacturing in AI development, connecting to Biden’s earlier CHIPS Act. It directs actions to enhance chip supply chain security and diversity, ensuring American leadership in advanced computing infrastructure.

This latest initiative forms part of the Biden-Harris Administration’s broader strategy for responsible innovation in the AI sector, reinforcing America’s commitment to maintaining technological leadership while upholding democratic values and human rights.

(Photo by Nils Huenerfuerst)

See also: EU AI Act: Early prep could give businesses competitive edge

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

Palantir and Microsoft partner to provide federal AI services
https://www.artificialintelligence-news.com/news/palantir-and-microsoft-partner-federal-ai-services/
Mon, 12 Aug 2024 10:15:42 +0000

The post Palantir and Microsoft partner to provide federal AI services appeared first on AI News.

Palantir, a data analytics company known for its work in the defence and intelligence sectors, has announced a significant partnership with Microsoft. The collaboration aims to deliver advanced services for classified networks utilised by US defence and intelligence agencies.

According to the announcement, Palantir is integrating Microsoft’s large language models into its AI platforms via the Azure OpenAI Service, with the integration running inside Microsoft’s government and classified cloud environments. As the first collaboration of its kind, the arrangement could reshape how AI is used in critical national security missions.

Palantir, whose name draws inspiration from the potentially misleading “seeing-stones” in J.R.R. Tolkien’s fictional works, specialises in processing and analysing vast quantities of data to assist governments and corporations with surveillance and decision-making tasks. While the precise nature of the services to be offered through this partnership remains somewhat ambiguous, it is clear that Palantir’s products will be integrated into Microsoft’s Azure cloud services. This development follows Azure’s previous incorporation of OpenAI’s GPT-4 technology into a “top secret” version of its software.

The company’s journey is notable. Co-founded by Peter Thiel and initially funded by In-Q-Tel, the CIA’s venture capital arm, Palantir has grown to serve a diverse clientele. Its roster includes government agencies such as Immigration and Customs Enforcement (ICE) and various police departments, as well as private sector giants like the pharmaceutical company Sanofi. Palantir has also become deeply involved in supporting Ukraine’s war efforts, with reports suggesting its software may be utilised in targeting decisions for military operations.

Although Palantir has served a large customer base for years, it recorded its first annual profit only in 2023. With the current surge of interest in AI, however, the company has been growing rapidly, particularly in the commercial sector. According to Bloomberg, Palantir’s CEO, Alex Karp, warned that the company’s “commercial business is exploding in a way we don’t know how to handle.”

Notably, the company’s annual filing states that it neither does business with nor on behalf of the Chinese Communist Party, nor does it plan to do so, indicating that Palantir weighs the geopolitical implications of its work when developing its customer base.

The announcement of this partnership has been well-received by investors, with Palantir’s share price surging more than 75 per cent in 2024 as of the time of writing. This dramatic increase reflects the market’s optimism about the potential of AI in national security applications and Palantir’s position at the forefront of this field.

Still, the partnership between Palantir and Microsoft raises significant questions about the role of AI in national security and surveillance. This is no surprise, as these are particularly sensitive areas, and the development of new technologies could potentially transform the sector forever.

More discussions and investigations are needed to understand the ethical implications of implementing these innovative tools. All things considered, the Palantir and Microsoft partnership is a significant event that will likely shape the future use of AI technologies and cloud computing in areas such as intelligence and defence.

(Photo by Katie Moum)

See also: Paige and Microsoft unveil next-gen AI models for cancer diagnosis

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

US clamps down on China-bound investments
https://www.artificialintelligence-news.com/news/us-clamps-down-china-bound-investments/
Thu, 27 Jun 2024 16:06:56 +0000

The post US clamps down on China-bound investments appeared first on AI News.

]]>
In a move that has further strained the already tense US-China relations, the Biden administration has advanced plans to restrict American investments in key Chinese technology sectors. This decision, announced by the US Treasury Department, has sparked a swift and sharp rebuke from Beijing, highlighting the deepening rift between the world’s two largest economies.

The proposed rules, focusing on curbing investments in AI, quantum computing, and semiconductors, represent the latest salvo in what many observers call a “tech cold war.” These restrictions aim to prevent China from gaining ground in technologies critical to national security, particularly those with potential military applications.

China’s Ministry of Commerce responded with “severe concern and resolute opposition,” accusing the US of politicising and weaponising trade and commerce issues. The ministry’s statement urges the US to “respect the rules of a market economy and the principle of fair competition,” calling for the proposed rules to be scrapped and for economic relations to be improved.

The Chinese government’s strong reaction underscores the significance of these restrictions. Beijing views them as an attempt to hinder China’s technological progress and economic development, a claim it has frequently levelled against Washington in recent years. The ministry went further, asserting that the US move would “pressure the normal development of China’s industry” and disrupt the “security and stability” of global supply chains.

This latest development is part of a broader pattern of increasing technological rivalry between the US and China. The trade dispute began in 2018 under the Trump administration and has already resulted in substantial tariffs on both sides. Additionally, the US has taken steps to restrict the activities of numerous Chinese tech firms within its borders and has encouraged global enterprises to limit their business in China.

US draws new battle lines in tech race with China

As Bloomberg puts it, the recently released Notice of Proposed Rulemaking (NPRM) is essentially one of several bureaucratic steps set in motion by an executive order issued last August. The proposed US rules are comprehensive in scope, covering various types of investments, including equity acquisitions, certain debt financing, joint ventures, and even some limited partner investments in non-US pooled investment funds. 

However, the proposal includes exemptions, such as investments in publicly traded companies and full ownership buyouts, possibly to balance national security concerns with maintaining some level of economic engagement. The focus on AI in these restrictions is particularly noteworthy. 

The US administration has expressed concerns about China developing AI applications for weapons targeting and mass surveillance, highlighting the dual-use nature of this technology and the ethical considerations surrounding its development. This emphasis on AI reflects its growing importance in future technological and economic competitiveness.

The price of this tech tug-of-war

The potential impact of these rules extends far beyond the immediate US-China relationship. They could lead to a further decoupling of the US and Chinese tech ecosystems, potentially accelerating China’s efforts to achieve technological self-sufficiency. Moreover, these restrictions could have ripple effects on international collaborations in scientific research and technological development, potentially slowing progress across the board.

From a geopolitical perspective, this move will likely further complicate US-China relations, which are already strained by trade disputes and human rights concerns. It may also prompt other countries to reassess their policies regarding tech investments and knowledge sharing with China.

The challenge for the Biden administration will be to effectively protect US national security interests without stifling innovation or causing undue economic harm. China’s assertion of its right to take countermeasures adds another layer of uncertainty to an already complex situation. How Beijing responds could have significant implications for global trade and technology development.

(Photo by Chenyu Guan)

See also: US introduces new AI chip export restrictions

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post US clamps down on China-bound investments appeared first on AI News.

]]>
https://www.artificialintelligence-news.com/news/us-clamps-down-china-bound-investments/feed/ 0
SAS aims to make AI accessible regardless of skill set with packaged AI models https://www.artificialintelligence-news.com/news/sas-aims-to-make-ai-accessible-regardless-of-skill-set-with-packaged-ai-models/ https://www.artificialintelligence-news.com/news/sas-aims-to-make-ai-accessible-regardless-of-skill-set-with-packaged-ai-models/#respond Wed, 17 Apr 2024 23:37:00 +0000 https://www.artificialintelligence-news.com/?p=14696 SAS, a specialist in data and AI solutions, has unveiled what it describes as a “game-changing approach” for organisations to tackle business challenges head-on. Introducing lightweight, industry-specific AI models for individual licence, SAS hopes to equip organisations with readily deployable AI technology to productionise real-world use cases with unparalleled efficiency. Chandana Gopal, research director, Future […]

The post SAS aims to make AI accessible regardless of skill set with packaged AI models appeared first on AI News.

]]>
SAS, a specialist in data and AI solutions, has unveiled what it describes as a “game-changing approach” for organisations to tackle business challenges head-on.

Introducing lightweight, industry-specific AI models for individual licence, SAS hopes to equip organisations with readily deployable AI technology to productionise real-world use cases with unparalleled efficiency.

Chandana Gopal, research director, Future of Intelligence, IDC, said: “SAS is evolving its portfolio to meet wider user needs and capture market share with innovative new offerings,

“An area that is ripe for SAS is productising models built on SAS’ core assets, talent and IP from its wealth of experience working with customers to solve industry problems.”

In today’s market, the consumption of models is primarily focused on large language models (LLMs) for generative AI. In reality, LLMs are a very small part of the modelling needs of real-world production deployments of AI and decision making for businesses. With the new offering, SAS is moving beyond LLMs and delivering industry-proven deterministic AI models for use cases spanning fraud detection, supply chain optimisation, entity management, document conversation, health care payment integrity, and more.

Unlike traditional AI implementations that can be cumbersome and time-consuming, SAS’ industry-specific models are engineered for quick integration, enabling organisations to operationalise trustworthy AI technology and accelerate the realisation of tangible benefits and trusted results.

Expanding market footprint

Organisations are facing pressure to compete effectively and are looking to AI to gain an edge. At the same time, staffing data science teams has never been more challenging due to AI skills shortages. Consequently, businesses are demanding agility in using AI to solve problems and require flexible AI solutions to quickly drive business outcomes. SAS’ easy-to-use, yet powerful models tuned for the enterprise enable organisations to benefit from a half-century of SAS’ leadership across industries.

Delivering industry models as packaged offerings is one outcome of SAS’ commitment of $1 billion to AI-powered industry solutions. As outlined in the May 2023 announcement, the investment in AI builds on SAS’ decades-long focus on providing packaged solutions to address industry challenges in banking, government, health care and more.

Udo Sglavo, VP for AI and Analytics, SAS, said: “Models are the perfect complement to our existing solutions and SAS Viya platform offerings and cater to diverse business needs across various audiences, ensuring that innovation reaches every corner of our ecosystem. 

“By tailoring our approach to understanding specific industry needs, our frameworks empower businesses to flourish in their distinctive environments.”

Bringing AI to the masses

SAS is democratising AI by offering out-of-the-box, lightweight AI models – making AI accessible regardless of skill set – starting with an AI assistant for warehouse space optimisation. Leveraging technology like large language models, these assistants cater to non-technical users, seamlessly translating interactions into optimised workflows and aiding faster planning decisions.

Sglavo said: “SAS Models provide organisations with flexible, timely and accessible AI that aligns with industry challenges.

“Whether you’re embarking on your AI journey or seeking to accelerate the expansion of AI across your enterprise, SAS offers unparalleled depth and breadth in addressing your business’s unique needs.”

The first SAS Models are expected to be generally available later this year.

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post SAS aims to make AI accessible regardless of skill set with packaged AI models appeared first on AI News.

]]>
https://www.artificialintelligence-news.com/news/sas-aims-to-make-ai-accessible-regardless-of-skill-set-with-packaged-ai-models/feed/ 0
Stanhope raises £2.3m for AI that teaches machines to ‘make human-like decisions’ https://www.artificialintelligence-news.com/news/stanhope-raises-2-3m-for-ai-that-teaches-machines-to-make-human-like-decisions/ https://www.artificialintelligence-news.com/news/stanhope-raises-2-3m-for-ai-that-teaches-machines-to-make-human-like-decisions/#respond Mon, 25 Mar 2024 10:40:00 +0000 https://www.artificialintelligence-news.com/?p=14604 Stanhope AI – a company applying decades of neuroscience research to teach machines how to make human-like decisions in the real world – has raised £2.3m in seed funding led by the UCL Technology Fund. Creator Fund also participated, along with, MMC Ventures, Moonfire Ventures and Rockmount Capital and leading angel investors.  Stanhope AI was […]

The post Stanhope raises £2.3m for AI that teaches machines to ‘make human-like decisions’ appeared first on AI News.

]]>
Stanhope AI – a company applying decades of neuroscience research to teach machines how to make human-like decisions in the real world – has raised £2.3m in seed funding led by the UCL Technology Fund.

Creator Fund also participated, along with MMC Ventures, Moonfire Ventures, Rockmount Capital, and leading angel investors.

Stanhope AI was founded as a spinout from University College London, supported by UCL Business, by three of the most eminent names in neuroscience and AI research – CEO Professor Rosalyn Moran (former Deputy Director of King’s Institute for Artificial Intelligence), Director Karl Friston, Professor at the UCL Queen Square Institute of Neurology and Technical Advisor Dr Biswa Sengupta (MD of AI and Cloud products at JP Morgan Chase). 

By using key neuroscience principles and applying them to AI and mathematics, Stanhope AI is at the forefront of the new generation of AI technology known as ‘agentic’ AI.  The team has built algorithms that, like the human brain, are always trying to guess what will happen next; learning from any discrepancies between predicted and actual events to continuously update their “internal models of the world.” Instead of training vast LLMs to make decisions based on seen data, Stanhope agentic AI’s models are in charge of their own learning. They autonomously decode their environments and rebuild and refine their “world models” using real-time data, continuously fed to them via onboard sensors.  

The rise of agentic AI

This approach, and Stanhope AI’s technology, are based on the neuroscience principle of Active Inference – the idea that our brains, in order to minimise free energy, are constantly making predictions about incoming sensory data around us. As this data changes, our brains adapt and update our predictions in response to rebuild and refine our world view. 

This is very different to the traditional machine learning methods used to train today’s AI systems such as LLMs. Today’s models can only operate within the realms of the training they are given, and can only make best-guess decisions based on the information they have. They can’t learn on the go. They require extreme amounts of processing power and energy to train and run, as well as vast amounts of seen data.  

By contrast, Stanhope AI’s Active Inference models are truly autonomous. They can constantly rebuild and refine their predictions. Uncertainty is minimised by default, which removes the risk of hallucinations about what the AI thinks is true, and this moves Stanhope’s unique models towards reasoning and human-like decision-making. What’s more, by drastically reducing the size and energy required to run the models and the machines, Stanhope AI’s models can operate on small devices such as drones and similar.  
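The prediction-update loop described above can be illustrated with a toy sketch. This is purely illustrative and is not Stanhope AI’s implementation: an agent holds a belief (its “internal model”), compares it against each incoming sensor reading, and nudges the belief in proportion to the prediction error.

```python
# Toy prediction-error loop in the spirit of Active Inference.
# Illustrative only -- Stanhope AI's actual models are far more sophisticated.

def update_belief(belief, observation, learning_rate=0.3):
    """Move the belief toward the observation in proportion to the
    prediction error (observation minus current belief)."""
    prediction_error = observation - belief
    return belief + learning_rate * prediction_error

def run_agent(observations, initial_belief=0.0):
    """Feed a stream of sensor readings to the agent and record how its
    internal model converges on the underlying signal."""
    belief = initial_belief
    history = []
    for obs in observations:
        belief = update_belief(belief, obs)
        history.append(belief)
    return history

# A noisy sensor stream settling around 10.0: the belief tracks the signal,
# with surprise (prediction error) shrinking at each step.
readings = [9.8, 10.2, 10.1, 9.9, 10.0, 10.0, 10.0, 10.0]
beliefs = run_agent(readings)
```

The key difference from batch-trained models is visible even in this sketch: learning happens continuously from the live data stream rather than from a fixed training set.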

“The most all-encompassing idea since natural selection”

Stanhope AI’s approach is possible because of its founding team’s extensive research into the neuroscience principles of Active Inference, as well as free energy. Indeed, Director Professor Friston – a world-renowned neuroscientist at UCL whose work has been cited twice as many times as Albert Einstein’s – is the inventor of the Free Energy Principle.

Friston’s theory centres on how our brains minimise surprise and uncertainty. It explains that all living things are driven to minimise free energy, and thus the energy needed to predict and perceive the world. Such is its impact that the Free Energy Principle has been described as the “most all-encompassing idea since the theory of natural selection.” Active Inference sits within this theory to explain the process our brains use to minimise this energy. This idea infuses Stanhope AI’s work, led by Professor Moran, a specialist in Active Inference and its application through AI, and Dr Biswa Sengupta, whose doctoral research at the University of Cambridge was in dynamical systems, optimisation and energy efficiency.

Real-world application

In the immediate term, the technology is being tested with delivery drones and autonomous machines used by partners including Germany’s Federal Agency for Disruptive Innovation and the Royal Navy. In the long term, the technology holds huge promise in the realms of manufacturing, industrial robotics and embodied AI. The investment will be used to further the company’s development of its agentic AI models and the practical application of its research.  

Professor Rosalyn Moran, CEO and co-founder of Stanhope AI, said: “Our mission at Stanhope AI is to bridge the gap between neuroscience and artificial intelligence, creating a new generation of AI systems that can think, adapt, and decide like humans. We believe this technology will transform the capabilities of AI and robotics and make them more impactful in real-world scenarios. We trust the math and we’re delighted to have the backing of investors like UCL Technology Fund who deeply understand the science behind this technology and their support will be significant on our journey to revolutionise AI technology.”

David Grimm, partner UCL Technology Fund, said: “AI startups may be some of the hottest investments right now but few have the calibre and deep scientific and technical know-how as the Stanhope AI team. This is emblematic of their unique approach, combining neuroscience insights with advanced AI, which presents a groundbreaking opportunity to advance the field and address some of the most challenging problems in AI today. We can’t wait to see what this team achieves.” 

Marina Santilli, associate director UCL Business, added: “The promise offered by Stanhope AI’s approach to artificial intelligence is hugely exciting, providing hope for powerful yet energy-light models. UCLB is delighted to have been able to support the formation of a company built on decades of fundamental research at UCL led by Professor Friston, developing the Free Energy Principle.”

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post Stanhope raises £2.3m for AI that teaches machines to ‘make human-like decisions’ appeared first on AI News.

]]>
https://www.artificialintelligence-news.com/news/stanhope-raises-2-3m-for-ai-that-teaches-machines-to-make-human-like-decisions/feed/ 0
AUKUS trial advances AI for military operations  https://www.artificialintelligence-news.com/news/aukus-trial-advances-ai-for-military-operations/ https://www.artificialintelligence-news.com/news/aukus-trial-advances-ai-for-military-operations/#respond Mon, 05 Feb 2024 16:29:13 +0000 https://www.artificialintelligence-news.com/?p=14324 The UK armed forces and Defence Science and Technology Laboratory (Dstl) recently collaborated with the militaries of Australia and the US as part of the AUKUS partnership in a landmark trial focused on AI and autonomous systems.  The trial, called Trusted Operation of Robotic Vehicles in Contested Environments (TORVICE), was held in Australia under the […]

The post AUKUS trial advances AI for military operations  appeared first on AI News.

]]>
The UK armed forces and Defence Science and Technology Laboratory (Dstl) recently collaborated with the militaries of Australia and the US as part of the AUKUS partnership in a landmark trial focused on AI and autonomous systems. 

The trial, called Trusted Operation of Robotic Vehicles in Contested Environments (TORVICE), was held in Australia under the AUKUS partnership formed last year between the three countries. It aimed to test robotic vehicles and sensors in situations involving electronic attacks, GPS disruption, and other threats to evaluate the resilience of autonomous systems expected to play a major role in future military operations.

Understanding how to ensure these AI systems can operate reliably in the face of modern electronic warfare and cyber threats will be critical before the technology can be more widely adopted.  

The TORVICE trial featured US and British autonomous vehicles carrying out reconnaissance missions while Australian units simulated battlefield electronic attacks on their systems. Analysis of the performance data will help strengthen the protections and safeguards needed to prevent system failures or disruptions.

Guy Powell, Dstl’s technical authority for the trial, said: “The TORVICE trial aims to understand the capabilities of robotic and autonomous systems to operate in contested environments. We need to understand how robust these systems are when subject to attack.

“Robotic and autonomous systems are a transformational capability that we are introducing to armies across all three nations.” 

This builds on the first AUKUS autonomous systems trial, held in April 2023 in the UK. It also represents a step forward following the AUKUS defence ministers’ December announcement that Resilient and Autonomous Artificial Intelligence Technologies (RAAIT) would be integrated into the three countries’ military forces beginning in 2024.

Dstl military advisor Lt Col Russ Atherton says that successfully harnessing AI and autonomy promises to “be an absolute game-changer” that reduces the risk to soldiers. The technology could carry out key tasks like sensor operation and logistics over wider areas.

“The ability to deploy different payloads such as sensors and logistics across a larger battlespace will give commanders greater options than currently exist,” explained Lt Col Atherton.

By collaborating, the AUKUS allies aim to accelerate development in this crucial new area of warfare, improving interoperability between their forces, maximising their expertise, and strengthening deterrence in the Indo-Pacific region.

As AUKUS continues to deepen cooperation on cutting-edge military technologies, this collaborative effort will significantly enhance military capabilities while reducing risks for warfighters.

(Image Credit: Dstl)

See also: Experts from 30 nations will contribute to global AI safety report

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with Digital Transformation Week and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post AUKUS trial advances AI for military operations  appeared first on AI News.

]]>
https://www.artificialintelligence-news.com/news/aukus-trial-advances-ai-for-military-operations/feed/ 0
Palantir demos how AI can be used in the military https://www.artificialintelligence-news.com/news/palantir-demos-how-ai-can-used-military/ https://www.artificialintelligence-news.com/news/palantir-demos-how-ai-can-used-military/#respond Fri, 28 Apr 2023 13:29:50 +0000 https://www.artificialintelligence-news.com/?p=12995 Palantir has demonstrated how AI can be used for national defense and other military purposes. The use of AI in the military is highly controversial. In this context, Large Language Models (LLMs) and algorithms must be implemented as ethically as possible. Palantir believes that’s where its AI Platform (AIP) comes in. AIP offers cutting-edge AI […]

The post Palantir demos how AI can be used in the military appeared first on AI News.

]]>
Palantir has demonstrated how AI can be used for national defense and other military purposes.

The use of AI in the military is highly controversial. In this context, Large Language Models (LLMs) and algorithms must be implemented as ethically as possible.

Palantir believes that’s where its AI Platform (AIP) comes in. AIP offers cutting-edge AI capabilities and claims to ensure that the use of LLMs and AI in the military context is guided by ethical principles.

AIP is able to deploy LLMs and AI across any network, from classified networks to devices on the tactical edge. AIP connects highly sensitive and classified intelligence data to create a real-time representation of the environment.

The solution’s security features let operators define what LLMs and AI can and cannot see, and what they can and cannot do, via safe AI and handoff functions. This control and governance are crucial for mitigating the significant legal, regulatory, and ethical risks posed by LLMs and AI in sensitive and classified settings.

AIP also implements guardrails to control, govern, and increase trust. As operators and AI take action on the platform, AIP generates a secure digital record of operations. These capabilities are essential for responsible, effective, and compliant deployment of AI in the military.

In a demo showcasing AIP, a military operator responsible for monitoring activity within Eastern Europe receives an alert that military equipment is amassed in a field 30km from friendly forces.

AIP leverages large language models to allow operators to quickly ask questions such as:

  • What enemy units are in the region?
  • Task new imagery for this location at a resolution of one metre or higher
  • Generate three courses of action to target this enemy equipment
  • Analyse the battlefield, considering a Stryker vehicle and a platoon-size unit
  • How many Javelin missiles does Team Omega have?
  • Assign jammers to each of the validated high-priority communications targets
  • Summarise the operational plan

As the operator poses questions, the LLM is using real-time information integrated from across public and classified sources. Data is automatically tagged and protected by classification markings, and AIP enforces which parts of the organisation the LLM has access to while respecting an individual’s permissions, role, and need to know.
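A mechanism like the access enforcement described above can be sketched in a few lines. The names, clearance levels, and record structure below are invented for illustration and do not reflect Palantir’s actual API: the idea is simply that records are filtered against the operator’s clearance and need-to-know tags before any data reaches the language model.

```python
# Hypothetical sketch of classification-aware filtering for LLM queries.
# All identifiers here are invented; this is not Palantir's implementation.

CLEARANCE_ORDER = ["UNCLASSIFIED", "SECRET", "TOP SECRET"]

def visible_records(records, operator_clearance, operator_roles):
    """Return only the records the operator (and hence the LLM acting on
    the operator's behalf) is permitted to see: classification at or below
    the operator's clearance, and a matching need-to-know role if one is
    required."""
    max_level = CLEARANCE_ORDER.index(operator_clearance)
    return [
        r for r in records
        if CLEARANCE_ORDER.index(r["classification"]) <= max_level
        and (not r["need_to_know"] or r["need_to_know"] & operator_roles)
    ]

records = [
    {"id": 1, "classification": "UNCLASSIFIED", "need_to_know": set()},
    {"id": 2, "classification": "SECRET", "need_to_know": {"intel"}},
    {"id": 3, "classification": "TOP SECRET", "need_to_know": {"targeting"}},
]

# A SECRET-cleared intelligence operator sees records 1 and 2 only.
allowed = visible_records(records, "SECRET", {"intel"})
```

Filtering before the model ever sees the data, rather than trusting the model to withhold it, is what makes this kind of guardrail auditable.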

Every response from AIP retains links back to the underlying data records to enable transparency for the user who can investigate as necessary.

AIP unleashes the power of large language models and cutting-edge AI for defense and military organisations while aiming to do so with the appropriate guardrails and high levels of ethics and transparency that are required for such sensitive applications.


(Image Credit: Palantir)

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The event is co-located with Digital Transformation Week.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post Palantir demos how AI can be used in the military appeared first on AI News.

]]>
https://www.artificialintelligence-news.com/news/palantir-demos-how-ai-can-used-military/feed/ 0
FBI director warns about Beijing’s AI program https://www.artificialintelligence-news.com/news/fbi-director-warns-beijing-ai-program/ https://www.artificialintelligence-news.com/news/fbi-director-warns-beijing-ai-program/#respond Mon, 23 Jan 2023 14:26:40 +0000 https://www.artificialintelligence-news.com/?p=12644 FBI Director Christopher Wray has warned about the national security threat posed by Beijing’s AI program. During a panel at the World Economic Forum, Wray explained that Beijing’s AI program “is not constrained by the rule of law”. Wray says Beijing has “a bigger hacking program than any other nation” and will use machine learning […]

The post FBI director warns about Beijing’s AI program appeared first on AI News.

]]>
FBI Director Christopher Wray has warned about the national security threat posed by Beijing’s AI program.

During a panel at the World Economic Forum, Wray explained that Beijing’s AI program “is not constrained by the rule of law”.

Wray says Beijing has “a bigger hacking program than any other nation” and will use machine learning to further boost the capabilities of its state-sponsored hackers.

Much like nuclear expertise, AI can be used to benefit the world or harm it.

“I have the same reaction every time,” Wray explained. “I think, ‘Wow, we can do that.’ And then, ‘Oh god, they can do that.’”

Beijing is often accused of influencing other countries through its infrastructure investments. Washington largely views China’s expanding economic influence and military might as America’s main long-term security challenge.

Wray says that Beijing’s AI program “is built on top of the massive troves of intellectual property and sensitive data that they’ve stolen over the years.”

Furthermore, it will be used “to advance that same intellectual property theft, to advance the repression that occurs not just back home in mainland China but increasingly as a product they export around the world.”

Cloudflare CEO Matthew Prince spoke on the same panel and offered a more positive take: “The thing that makes me optimistic in this space: there are more good guys than bad guys.”

Prince acknowledges that whoever has the most data will win the AI race. Western data collection protections have historically been much stricter than in China.

“In a world where all these technologies are available to both the good guys and the bad guys, the good guys are constrained by the rule of law and international norms,” Wray added. “The bad guys aren’t, which you could argue gives them a competitive advantage.”

Prince and Wray say it’s the cooperation of the “good guys” that gives them the best chance at staying a step ahead of those wishing to cause harm.

“When we’re all working together, they’re no match,” concludes Wray.

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The event is co-located with the Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post FBI director warns about Beijing’s AI program appeared first on AI News.

]]>
https://www.artificialintelligence-news.com/news/fbi-director-warns-beijing-ai-program/feed/ 0
Google to speed up AI releases in response to ChatGPT https://www.artificialintelligence-news.com/news/google-speed-up-ai-releases-in-response-chatgpt/ https://www.artificialintelligence-news.com/news/google-speed-up-ai-releases-in-response-chatgpt/#respond Fri, 20 Jan 2023 17:17:36 +0000 https://www.artificialintelligence-news.com/?p=12635 Google is reportedly set to speed up its release of AI solutions in response to the launch of ChatGPT. The New York Times claims ChatGPT set off alarm bells at Google. At the invite of Google CEO Sundar Pichai, the company’s founders – Larry Page and Sergey Brin – returned for a series of meetings […]

The post Google to speed up AI releases in response to ChatGPT appeared first on AI News.

]]>
Google is reportedly set to speed up its release of AI solutions in response to the launch of ChatGPT.

The New York Times claims ChatGPT set off alarm bells at Google. At the invite of Google CEO Sundar Pichai, the company’s founders – Larry Page and Sergey Brin – returned for a series of meetings to review Google’s AI product strategy.

Google is one of the biggest investors in AI and has some of the most talented minds in the industry. As a result, the company is scrutinised more than most when it comes to any AI developments.

In 2020, leading AI ethics researcher Timnit Gebru was fired by Google. Gebru claims she was fired over an unpublished paper and sending an email critical of the company’s practices. Numerous other AI experts at Google left following her firing.

Just two years earlier, over 4,000 Googlers signed a petition demanding that Google cease its plans to develop AI for the US military. Google withdrew from the contract but not before at least a dozen employees resigned.

With the company in the spotlight, Google has allegedly been ultra-cautious in how it develops and deploys AI.

According to a CNBC report, Pichai and Google AI Chief Jeff Dean were asked in a meeting whether ChatGPT represented a “missed opportunity” for the company. Pichai and Dean said that Google’s own models were just as capable but the company had to move “more conservatively than a small startup” because of the “reputational risk” it poses.

Microsoft has invested so heavily in OpenAI that it’s hard to consider the company a small startup anymore. The two companies have established a deep partnership and Microsoft has begun integrating OpenAI’s technologies into its own products.

Earlier this month, AI News reported that Microsoft and OpenAI are set to integrate technology from OpenAI in Bing to challenge Google’s search dominance. That appears to have been what really set off the alarm bells at Google.

Google now appears to be speeding up the reveal and deployment of its own AI solutions. To that end, the company is reportedly working to speed up the review process which checks if it’s operating ethically.

One of the first AI solutions set to debut sounds very similar to what Microsoft and OpenAI have planned for Bing.

A demo of a chatbot-enhanced Google Search is expected at the company’s annual I/O developer conference in May. The demo will prioritise “getting facts right, ensuring safety and getting rid of misinformation.”

Other AI-powered product launches expected to be shown include an image generator, a set of tools for enterprises to develop their own AI prototypes within a browser window, and an app for testing such prototypes.

Google is also said to be working on a rival to GitHub Copilot, a coding assistant powered by OpenAI’s technology. Google’s alternative, called PaLM-Coder 2, will reportedly include a version for building smartphone apps, called Colab, that will be integrated into Android Studio.

Overall, Google is set to unveil more than 20 AI-powered projects this year. The announcements should calm investors who’ve criticised Google’s slow AI developments in recent years, but ethicists will be concerned about the company prioritising speed over safety.

(Photo by Mitchell Luo on Unsplash)

Relevant: OpenAI CEO: People are ‘begging to be disappointed’ about GPT-4

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post Google to speed up AI releases in response to ChatGPT appeared first on AI News.

]]>
https://www.artificialintelligence-news.com/news/google-speed-up-ai-releases-in-response-chatgpt/feed/ 0
US introduces new AI chip export restrictions https://www.artificialintelligence-news.com/news/us-introduces-new-ai-chip-export-restrictions/ https://www.artificialintelligence-news.com/news/us-introduces-new-ai-chip-export-restrictions/#respond Thu, 01 Sep 2022 16:01:15 +0000 https://www.artificialintelligence-news.com/?p=12228 NVIDIA has revealed that it’s subject to new laws restricting the export of AI chips to China and Russia. In an SEC filing, NVIDIA says the US government has informed the chipmaker of a new license requirement that impacts two of its GPUs designed to speed up machine learning tasks: the current A100, and the […]

The post US introduces new AI chip export restrictions appeared first on AI News.

]]>
NVIDIA has revealed that it’s subject to new US export restrictions on the sale of AI chips to China and Russia.

In an SEC filing, NVIDIA says the US government has informed the chipmaker of a new license requirement that impacts two of its GPUs designed to speed up machine learning tasks: the current A100, and the upcoming H100.

“The license requirement also includes any future NVIDIA integrated circuit achieving both peak performance and chip-to-chip I/O performance equal to or greater than thresholds that are roughly equivalent to the A100, as well as any system that includes those circuits,” adds NVIDIA.

The US government has reportedly told NVIDIA that the new rules are aimed at addressing the risk of the affected products being used for military purposes.

“While we are not in a position to outline specific policy changes at this time, we are taking a comprehensive approach to implement additional actions necessary related to technologies, end-uses, and end-users to protect US national security and foreign policy interests,” said a US Department of Commerce spokesperson.

China is a large market for NVIDIA and the new rules could affect around $400 million in quarterly sales.

AMD has also been told the new rules will impact its similar products, including the MI200.

As of writing, NVIDIA’s shares were down 11.45 percent from the market open and AMD’s shares were down 6.81 percent. However, it’s worth noting that it was another red day for the wider stock market.

(Photo by Wesley Tingey on Unsplash)

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post US introduces new AI chip export restrictions appeared first on AI News.

]]>
https://www.artificialintelligence-news.com/news/us-introduces-new-ai-chip-export-restrictions/feed/ 0