virtual assistant Archives - AI News
https://www.artificialintelligence-news.com/news/tag/virtual-assistant/
Last updated: Thu, 24 Apr 2025

Amazon Nova Act: A step towards smarter, web-native AI agents
https://www.artificialintelligence-news.com/news/amazon-nova-act-step-towards-smarter-web-native-ai-agents/
Tue, 01 Apr 2025

Amazon has introduced Nova Act, an advanced AI model engineered for smarter agents that can execute tasks within web browsers.

While large language models popularised the concept of “agents” as tools that answer queries or retrieve information via methods such as Retrieval-Augmented Generation (RAG), Amazon envisions something more robust. The company defines agents not just as responders but as entities capable of performing tangible, multi-step tasks in diverse digital and physical environments.

“Our dream is for agents to perform wide-ranging, complex, multi-step tasks like organising a wedding or handling complex IT tasks to increase business productivity,” said Amazon.

Current market offerings often fall short, with many agents requiring continuous human supervision and their functionality dependent on comprehensive API integration—something not feasible for all tasks. Nova Act is Amazon’s answer to these limitations.

Alongside the model, Amazon is releasing a research preview of the Amazon Nova Act SDK. Using the SDK, developers can create agents capable of automating web tasks like submitting out-of-office notifications, scheduling calendar holds, or enabling automatic email replies.

The SDK aims to break down complex workflows into dependable “atomic commands” such as searching, checking out, or interacting with specific interface elements like dropdowns or popups. Detailed instructions can be added to refine these commands, allowing developers to, for instance, instruct an agent to bypass an insurance upsell during checkout.
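Amazon has not published the full SDK surface in this article, but the decomposition idea can be sketched in a few lines of Python. The `Agent` class and command names below are illustrative stand-ins, not the real Nova Act API:

```python
# Illustrative sketch of decomposing a web workflow into "atomic commands".
# The Agent class and command names are hypothetical, not the Nova Act SDK.

class Agent:
    def __init__(self):
        self.log = []  # every atomic step is recorded for later review

    def act(self, command, detail=None):
        """Execute one atomic command, optionally refined with extra detail."""
        step = command if detail is None else f"{command} ({detail})"
        self.log.append(step)
        return step

def checkout_workflow(agent):
    # A complex workflow expressed as a sequence of dependable atomic commands.
    agent.act("search", "wool socks, size M")
    agent.act("select_result", "first organic result")
    agent.act("add_to_cart")
    # Natural-language detail refines a command, e.g. skipping an upsell:
    agent.act("checkout", "decline any insurance or warranty upsell")
    return agent.log

steps = checkout_workflow(Agent())
print(steps)
```

The point of the decomposition is that each atomic command can be tested and retried independently, which is what makes the overall workflow dependable.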

To further enhance accuracy, the SDK supports browser manipulation via Playwright, API calls, Python integrations, and parallel threading to overcome web page load delays.
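The parallel-threading point is general enough to illustrate with the standard library alone: overlapping slow page loads in a thread pool hides most of the waiting. The `load_page` stub below simulates a browser fetch:

```python
import concurrent.futures
import time

def load_page(url):
    """Stand-in for a slow page load (a real agent would drive a browser)."""
    time.sleep(0.1)  # simulated network delay
    return f"contents of {url}"

urls = ["https://example.com/a", "https://example.com/b", "https://example.com/c"]

start = time.monotonic()
with concurrent.futures.ThreadPoolExecutor(max_workers=3) as pool:
    pages = list(pool.map(load_page, urls))
elapsed = time.monotonic() - start

# Three 0.1 s loads overlap instead of running back-to-back (~0.1 s, not 0.3 s).
print(len(pages), f"elapsed ~{elapsed:.2f}s")
```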

Nova Act: Exceptional performance on benchmarks

Unlike other generative models, which often manage only middling accuracy on complex tasks, Nova Act prioritises reliability. Amazon highlights scores of over 90% on internal evaluations of specific capabilities that typically challenge competitors.

Nova Act achieved a near-perfect 0.939 on the ScreenSpot Web Text benchmark, which measures how well an agent follows natural language instructions for text-based interactions, such as adjusting font sizes. Competing models such as Claude 3.7 Sonnet (0.900) and OpenAI’s CUA (0.883) trail by significant margins.

Similarly, Nova Act scored 0.879 in the ScreenSpot Web Icon benchmark, which tests interactions with visual elements like rating stars or icons. While the GroundUI Web test, designed to assess an AI’s proficiency in navigating various user interface elements, showed Nova Act slightly trailing competitors, Amazon sees this as an area ripe for improvement as the model evolves.

Amazon stresses its focus on delivering practical reliability. Once an agent built using Nova Act functions as expected, developers can deploy it headlessly, integrate it as an API, or even schedule it to run tasks asynchronously. In one demonstrated use case, an agent automatically orders a salad for delivery every Tuesday evening without requiring ongoing user intervention.
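Scheduling a recurring task like the Tuesday salad order ultimately reduces to computing the next run time. A minimal sketch in plain Python (the 18:00 "evening" cutoff is an assumption for illustration):

```python
from datetime import datetime, timedelta

def next_tuesday_evening(now):
    """Next Tuesday at 18:00 (weekday(): Monday=0, Tuesday=1, ...)."""
    days_ahead = (1 - now.weekday()) % 7
    candidate = (now + timedelta(days=days_ahead)).replace(
        hour=18, minute=0, second=0, microsecond=0)
    if candidate <= now:          # already past Tuesday 18:00 this week
        candidate += timedelta(days=7)
    return candidate

# 1 April 2025 is a Tuesday; at noon, the next run is that same evening.
run_at = next_tuesday_evening(datetime(2025, 4, 1, 12, 0))
print(run_at)  # 2025-04-01 18:00:00
```

A deployed agent would hand this timestamp to whatever scheduler runs it headlessly; the date arithmetic is the only interesting part.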

Amazon sets out its vision for scalable and smart AI agents

One of Nova Act’s standout features is its ability to transfer its user interface understanding to new environments with minimal additional training. Amazon shared an instance where Nova Act performed admirably in browser-based games, even though its training had not included video game experiences. This adaptability positions Nova Act as a versatile agent for diverse applications.

This capability is already being leveraged in Amazon’s own ecosystem. Within Alexa+, Nova Act enables self-directed web navigation to complete tasks for users, even when API access is not comprehensive enough. This represents a step towards smarter AI assistants that can function independently, harnessing their skills in more dynamic ways.

Amazon is clear that Nova Act represents the first stage in a broader mission to craft intelligent, reliable AI agents capable of handling increasingly complex, multi-step tasks. 

Expanding beyond simple instructions, Amazon’s focus is on training agents through reinforcement learning across varied, real-world scenarios rather than overly simplistic demonstrations. This foundational model serves as a checkpoint in a long-term training curriculum for Nova models, indicating the company’s ambition to reshape the AI agent landscape.

“The most valuable use cases for agents have yet to be built,” Amazon noted. “The best developers and designers will discover them. This research preview of our Nova Act SDK enables us to iterate alongside these builders through rapid prototyping and iterative feedback.”

Nova Act is a step towards making AI agents truly useful for complex, digital tasks. From rethinking benchmarks to emphasising reliability, its design philosophy is centred around empowering developers to move beyond what’s possible with current-generation tools. 

See also: Anthropic provides insights into the ‘AI biology’ of Claude

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

Opera introduces browser-integrated AI agent
https://www.artificialintelligence-news.com/news/opera-introduces-browser-integrated-ai-agent/
Mon, 03 Mar 2025

Opera has introduced “Browser Operator,” a native AI agent designed to perform tasks for users directly within the browser.

Rather than acting as a separate tool, Browser Operator is an extension of the browser itself—designed to empower users by automating repetitive tasks like purchasing products, completing online forms, and gathering web content.

Unlike server-based AI integrations, which require sensitive data to be sent to third-party servers, Browser Operator processes tasks locally within the Opera browser.

Opera’s demonstration video showcases how Browser Operator can streamline an everyday task like buying socks. Instead of manually scrolling through product pages or filling out payment forms, users could delegate the entire process to Browser Operator—allowing them to shift focus to activities that matter more to them, such as spending time with loved ones.

Harnessing natural language processing powered by Opera’s AI Composer Engine, Browser Operator interprets written instructions from users and executes corresponding tasks within the browser. All operations occur locally on a user’s device, leveraging the browser’s own infrastructure to safely and swiftly complete commands.  

If Browser Operator encounters a sensitive step in the process, such as entering payment details or approving an order, it pauses and requests the user’s input. You also have the freedom to intervene and take control of the process at any time.  
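That pause-and-ask behaviour can be sketched as a simple control loop. Everything below (the step names, the approval callback) is hypothetical, not Opera's implementation:

```python
# Hypothetical sketch of an agent that defers sensitive steps to the user.
SENSITIVE = {"enter_payment_details", "approve_order"}

def run_task(steps, ask_user):
    """Execute steps, pausing on sensitive ones until the user approves."""
    executed = []
    for step in steps:
        if step in SENSITIVE and not ask_user(step):
            executed.append(f"skipped:{step}")
            continue
        executed.append(f"done:{step}")
    return executed

steps = ["open_shop", "add_socks_to_cart", "enter_payment_details", "approve_order"]
# A lambda stands in for a real user prompt: approve payment, decline the order.
result = run_task(steps, ask_user=lambda s: s == "enter_payment_details")
print(result)
```

Because every step lands in the returned log, the user can review exactly what the agent did, which mirrors the transparency Opera describes.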

Every step Browser Operator takes is transparent and fully reviewable, providing users a clear understanding of how tasks are being executed. If mistakes occur – like placing an incorrect order – you can further instruct the AI agent to make amends, such as cancelling the order or adjusting a form.

The key differentiators: Privacy, performance, and precision  

What sets Browser Operator apart from other AI-integrated tools is its localised, privacy-first architecture. Unlike competitors that depend on screenshots or video recordings to understand webpage content, Opera’s approach uses the Document Object Model (DOM) Tree and browser layout data—a textual representation of the webpage.  

This difference offers several key advantages:

  • Faster task completion: Browser Operator doesn’t need to “see” and interpret pixels on the screen or emulate mouse movements. Instead, it accesses web page elements directly, avoiding unnecessary overhead and allowing it to process pages holistically without scrolling.
  • Enhanced privacy: With all operations conducted on the browser itself, user data – including logins, cookies, and browsing history – remains secure on the local device. No screenshots, keystrokes, or personal information are sent to Opera’s servers.
  • Easier interaction with page elements: The AI can engage with elements hidden from the user’s view, such as behind cookie popups or verification dialogs, enabling seamless access to web page content.
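The DOM-based approach can be illustrated with Python's standard-library HTML parser: because the agent reads page structure as text, an element hidden behind a popup is just as reachable as a visible one. The page snippet is invented for illustration and this is not Opera's engine:

```python
from html.parser import HTMLParser

class ButtonFinder(HTMLParser):
    """Collect every <button> on the page from its markup, not its pixels."""
    def __init__(self):
        super().__init__()
        self.buttons = []

    def handle_starttag(self, tag, attrs):
        if tag == "button":
            self.buttons.append(dict(attrs))

page = """
<div class="cookie-popup">We use cookies</div>
<button id="accept-cookies">Accept</button>
<button id="add-to-cart" style="display:none">Add to cart</button>
"""

finder = ButtonFinder()
finder.feed(page)
ids = [b.get("id") for b in finder.buttons]
# The visually hidden add-to-cart button is found without rendering anything.
print(ids)
```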

By enabling the browser to autonomously perform tasks, Opera is taking a significant step forward in making browsers “agentic”—not just tools for accessing the internet, but assistants that actively enhance productivity.  

See also: You.com ARI: Professional-grade AI research agent for businesses

NVIDIA and Meta CEOs: Every business will ‘have an AI’
https://www.artificialintelligence-news.com/news/nvidia-and-meta-ceo-every-business-will-have-an-ai/
Tue, 30 Jul 2024

In a fireside chat at SIGGRAPH 2024, NVIDIA founder and CEO Jensen Huang and Meta founder and CEO Mark Zuckerberg shared their insights on the potential of open source AI and virtual assistants.

The conversation began with Zuckerberg announcing the launch of AI Studio, a new platform designed to democratise AI creation. This tool allows users to create, share, and discover AI characters, potentially opening up AI development to millions of creators and small businesses.

Huang emphasised the ubiquity of AI in the future, stating, “Every single restaurant, every single website will probably, in the future, have these AIs …”

Zuckerberg concurred, adding, “…just like every business has an email address and a website and a social media account, I think, in the future, every business is going to have an AI.”

This vision aligns with NVIDIA’s recent developments showcased at SIGGRAPH. The company previewed “James,” an interactive digital human based on the NVIDIA ACE (Avatar Cloud Engine) reference design. James – a virtual assistant capable of providing contextually accurate responses – demonstrates the potential for businesses to create custom, hyperrealistic avatars for customer interactions.

The discussion highlighted Meta’s significant contributions to AI development. Huang praised Meta’s work, saying, “You guys have done amazing AI work,” and cited advancements in computer vision, language models, and real-time translation. He also acknowledged the widespread use of PyTorch, an open-source machine learning framework developed by Meta.

Both CEOs stressed the importance of open source in advancing AI. Meta has positioned itself as a leader in this field, implementing AI across its platforms and releasing open-source models like Llama 3.1. This latest model, with 405 billion parameters, required training on over 16,000 NVIDIA H100 GPUs, representing a substantial investment in resources.
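For a rough sense of that scale, the common ~6·N·D rule of thumb for transformer training FLOPs can be applied. The ~15-trillion-token figure is Meta's reported training count for Llama 3.1, not a number from this article, so treat the result as a back-of-envelope estimate:

```python
# Back-of-envelope training compute using the standard ~6*N*D FLOPs estimate.
# Token count is an assumption (~15T reported for Llama 3.1), not from the text.
params = 405e9          # 405 billion parameters
tokens = 15e12          # ~15 trillion training tokens (assumed)
total_flops = 6 * params * tokens
print(f"{total_flops:.1e} FLOPs")   # on the order of 10^25
```

That order of magnitude is what makes the 16,000-GPU cluster, and the investment behind it, unsurprising.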

Zuckerberg shared his vision for more integrated AI models, saying, “I kind of dream of one day like you can almost imagine all of Facebook or Instagram being like a single AI model that has unified all these different content types and systems together.” He believes that collaboration is crucial for further advancements in AI.

The conversation touched on the potential of AI to enhance human productivity. Huang described a future where AI could generate images in real-time as users type, allowing for fluid collaboration between humans and AI assistants. This concept is reflected in NVIDIA’s latest advancements to the NVIDIA Maxine AI platform, including Maxine 3D and Audio2Face-2D, which aim to create immersive telepresence experiences.

Looking ahead, Zuckerberg expressed enthusiasm about combining AI with augmented reality eyewear, mentioning Meta’s collaboration with eyewear maker Luxottica. He envisions this technology transforming education, entertainment, and work.

Huang discussed the evolution of AI interactions, moving beyond turn-based conversations to more complex, multi-option simulations. “Today’s AI is kind of turn-based. You say something, it says something back to you,” Huang explained. “In the future, AI could contemplate multiple options, or come up with a tree of options and simulate outcomes, making it much more powerful.”
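Huang's "tree of options" idea can be caricatured as a one-level lookahead search: enumerate candidate actions, simulate each outcome, and pick the best. The options and scores below are invented purely for illustration:

```python
# Toy version of "contemplate multiple options and simulate outcomes".
def simulate(option):
    """Stand-in for simulating an option's outcome; scores are made up."""
    outcomes = {"clarify": 0.6, "answer_directly": 0.8, "escalate": 0.3}
    return outcomes[option]

def choose(options):
    tree = {opt: simulate(opt) for opt in options}   # one-level lookahead
    best = max(tree, key=tree.get)
    return best, tree

best, tree = choose(["clarify", "answer_directly", "escalate"])
print(best)  # answer_directly
```

A real system would expand deeper branches and simulate multi-step outcomes, but the contrast with a turn-based reply, which considers only one continuation, is already visible here.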

The importance of this evolution is evident in the adoption of NVIDIA’s technologies by companies across industries. HTC, Looking Glass, Reply, and UneeQ are among the latest firms using NVIDIA ACE and Maxine for applications ranging from customer service agents to telepresence experiences in entertainment, retail, and hospitality.

As AI continues to evolve and integrate into various aspects of our lives, the insights shared by these industry leaders provide a glimpse into a future where AI assistants are as commonplace as websites and social media accounts.

The developments showcased at SIGGRAPH 2024 by both NVIDIA and other companies demonstrate that this future is rapidly approaching, with digital humans becoming increasingly sophisticated and capable of natural, engaging interactions.

See also: Amazon strives to outpace Nvidia with cheaper, faster AI chips

Google ushers in the “Gemini era” with AI advancements
https://www.artificialintelligence-news.com/news/google-ushers-in-gemini-era-ai-advancements/
Wed, 15 May 2024

Google has unveiled a series of updates to its AI offerings, including the introduction of Gemini 1.5 Flash, enhancements to Gemini 1.5 Pro, and progress on Project Astra, its vision for the future of AI assistants.

Gemini 1.5 Flash is a new addition to Google’s family of models, designed to be faster and more efficient to serve at scale. While lighter-weight than the 1.5 Pro, it retains the ability for multimodal reasoning across vast amounts of information and features the breakthrough long context window of one million tokens.

“1.5 Flash excels at summarisation, chat applications, image and video captioning, data extraction from long documents and tables, and more,” explained Demis Hassabis, CEO of Google DeepMind. “This is because it’s been trained by 1.5 Pro through a process called ‘distillation,’ where the most essential knowledge and skills from a larger model are transferred to a smaller, more efficient model.”
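Distillation itself is a well-documented technique: the student is trained to match the teacher's temperature-softened output distribution rather than hard labels. A minimal sketch with toy logits (these are not Gemini's internals):

```python
import math

def softmax(logits, temperature=1.0):
    """Convert logits to probabilities; higher temperature softens them."""
    exps = [math.exp(l / temperature) for l in logits]
    total = sum(exps)
    return [e / total for e in exps]

def kl_divergence(p, q):
    """KL(p || q): how far the student's distribution q is from the teacher's p."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

teacher_logits = [2.0, 1.0, 0.1]
student_logits = [1.8, 1.1, 0.2]

# Softening both distributions exposes the teacher's knowledge about
# *relative* class similarities, not just its top choice.
T = 2.0
loss = kl_divergence(softmax(teacher_logits, T), softmax(student_logits, T))
print(f"distillation loss: {loss:.6f}")
```

Minimising this divergence over many examples is what transfers "the most essential knowledge and skills" from the larger model to the smaller one.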

Meanwhile, Google has significantly improved the capabilities of its Gemini 1.5 Pro model, extending its context window to a groundbreaking two million tokens. Enhancements have been made to its code generation, logical reasoning, multi-turn conversation, and audio and image understanding capabilities.

The company has also integrated Gemini 1.5 Pro into Google products, including the Gemini Advanced and Workspace apps. Additionally, Gemini Nano now understands multimodal inputs, expanding beyond text-only to include images.

Google announced its next generation of open models, Gemma 2, designed for breakthrough performance and efficiency. The Gemma family is also expanding with PaliGemma, the company’s first vision-language model inspired by PaLI-3.

Finally, Google shared progress on Project Astra (advanced seeing and talking responsive agent), its vision for the future of AI assistants. The company has developed prototype agents that can process information faster, understand context better, and respond quickly in conversation.

“We’ve always wanted to build a universal agent that will be useful in everyday life. Project Astra shows multimodal understanding and real-time conversational capabilities,” explained Google CEO Sundar Pichai.

“With technology like this, it’s easy to envision a future where people could have an expert AI assistant by their side, through a phone or glasses.”

Google says that some of these capabilities will be coming to its products later this year. Developers can find all of the Gemini-related announcements they need here.

See also: GPT-4o delivers human-like AI interaction with text, audio, and vision integration

OpenAI launches GPT Store for custom AI assistants
https://www.artificialintelligence-news.com/news/openai-launches-gpt-store-custom-ai-assistants/
Thu, 11 Jan 2024

OpenAI has launched its new GPT Store providing users with access to custom AI assistants.

Since the announcement of custom ‘GPTs’ two months ago, OpenAI says users have already created over three million custom assistants. Builders can now share their creations in the dedicated store.

The store features assistants focused on a wide range of topics including art, research, programming, education, lifestyle, and more. OpenAI is highlighting assistants it deems most useful, including:

  • Personal trail recommendations from AllTrails
  • Searching academic papers with Consensus
  • Expanding coding skills via Khan Academy’s Code Tutor
  • Designing presentations with Canva
  • Book recommendations from Books
  • Maths help from CK-12 Flexi

OpenAI says making an assistant is simple and requires no coding knowledge. To share one, builders currently need to make it accessible to ‘Anyone with the link’ and verify their profile.

OpenAI introduced new usage policies and brand guidelines to ensure compliance. A review system combines human and automated checking before assistants are listed. Users can also flag concerning content.  

From Q1 2024, OpenAI will pay qualifying US-based builders for user engagement with their assistants. More details on exact payment criteria will be shared closer to launch.

For enterprise users, OpenAI announced ChatGPT Team plans for teams of all sizes. These provide access to a private store section containing company-specific assistants published securely to their workspace.

ChatGPT Enterprise customers will soon get admin controls for internal sharing and selecting which external assistants can be used by employees. As with all ChatGPT Team and Enterprise content, conversations are not used to improve OpenAI’s models.

Few apps have ever achieved the adoption rate of ChatGPT. OpenAI will be hoping its new store and revenue opportunities build upon this momentum by incentivising builders to create assistants that provide value to consumers and enterprises alike.

(Image Credit: OpenAI)

See also: OpenAI: Copyrighted data ‘impossible’ to avoid for AI training

IRS expands voice bot options for faster service
https://www.artificialintelligence-news.com/news/irs-expands-voice-bot-options-for-faster-service/
Tue, 21 Jun 2022

The US Internal Revenue Service has unveiled expanded voice bot options to help eligible taxpayers easily verify their identity to set up or modify a payment plan while avoiding long wait times.

“This is part of a wider effort at the IRS to help improve the experience of taxpayers,” said IRS commissioner Chuck Rettig. “We continue to look for ways to better assist taxpayers, and that includes helping people avoid waiting on hold or having to make a second phone call to get what they need. The expanded voice bots are another example of how technology can help the IRS provide better service to taxpayers.”

Voice bots run on software powered by artificial intelligence, which enables a caller to navigate an interactive voice response system. The IRS has been using voice bots on numerous toll-free lines since January, enabling taxpayers with simple payment or notice questions to get what they need quickly and avoid waiting. Taxpayers can always speak with an English- or Spanish-speaking IRS telephone representative if needed.

Eligible taxpayers who call the Automated Collection System (ACS) and Accounts Management toll-free lines and want to discuss payment plan options can authenticate or verify their identities through a personal identification number (PIN) creation process. Setting up a PIN is easy: Taxpayers will need their most recent IRS bill and some basic personal information to complete the process.

“To date, the voice bots have answered over three million calls. As we add more functions for taxpayers to resolve their issues, I anticipate many more taxpayers getting the service they need quickly and easily,” said Darren Guillot, IRS deputy commissioner of Small Business/Self Employed Collection & Operations Support.

Additional voice bot service enhancements are planned in 2022 that will allow authenticated individuals (taxpayers with established or newly created PINs) to get:

  • Account and return transcripts.
  • Payment history.
  • Current balance owed.

In addition to the payment lines, voice bots help people who call the Economic Impact Payment (EIP) toll-free line with general procedural responses to frequently asked questions. The IRS also added voice bots for the Advance Child Tax Credit toll-free line in February to provide similar assistance to callers who need help reconciling the credits on their 2021 tax return.

Why AI needs human intervention
https://www.artificialintelligence-news.com/news/why-ai-needs-human-intervention/
Wed, 19 Jan 2022

In today’s tight labour market and hybrid work environment, organisations are increasingly turning to AI to support various functions within their business, from delivering more personalised experiences to improving operations and productivity to helping organisations make better and faster decisions. That is why the worldwide market for AI software, hardware, and services is expected to surpass $500 billion by 2024, according to IDC.

Yet, many enterprises aren’t ready to have their AI systems run independently and entirely without human intervention – nor should they do so. 

In many instances, enterprises simply don’t have sufficient expertise in the systems they use, as AI technologies are extraordinarily complex. In other instances, rudimentary AI is built into enterprise software; these built-in features can be fairly static and remove the control over data parameters that most organisations need. But even the most AI-savvy organisations keep humans in the equation to avoid risks and reap the maximum benefits of AI.

AI Checks and Balances

There are clear ethical, regulatory, and reputational reasons to keep humans in the loop. Inaccurate data can be introduced over time, leading to poor decisions or, in some cases, dire circumstances. Biases can also creep into the system, whether introduced while training the AI model, as a result of changes in the training environment, or due to trending bias, where the AI system reacts to recent activities more than previous ones. Moreover, AI is often incapable of understanding the subtleties of a moral decision.

Take healthcare for instance. The industry perfectly illustrates how AI and humans can work together to improve outcomes or cause great harm if humans are not fully engaged in the decision-making process. For example, in diagnosing or recommending a care plan for a patient, AI is ideal for making the recommendation to the doctor, who then evaluates if that recommendation is sound and then gives the counsel to the patient.

Having a way for people to continually monitor AI responses and accuracy will avoid flaws that could lead to harm or catastrophe, while providing a means of ongoing training so the models steadily improve. That’s why IDC expects more than 70% of G2000 companies to have formal programmes to monitor their digital trustworthiness by 2022.

Models for Human-AI Collaboration

Human-in-the-Loop (HitL) Reinforcement Learning and Conversational AI are two examples of how human intervention supports AI systems in making better decisions.

HitL allows AI systems to leverage machine learning to learn by observing humans dealing with real-life work and use cases. HitL models are like traditional AI models except they are continuously self-developing and improving based on human feedback while, in some cases, augmenting human interactions. It provides a controlled environment that limits the inherent risk of biases—such as the bandwagon effect—that can have devastating consequences, especially in crucial decision-making processes.
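A toy version of that loop, with a human correcting the model's mistakes and the model updating its decision boundary in response. The defect-inspection framing and all numbers are invented for illustration:

```python
# Minimal human-in-the-loop sketch: the model proposes a label, a human
# corrects it when wrong, and the model updates from that feedback.
def make_model():
    threshold = {"value": 5.0}   # the model "parameter" being tuned

    def predict(x):
        return "defect" if x > threshold["value"] else "ok"

    def feedback(x, human_label):
        # Nudge the decision boundary toward the human's judgment.
        if human_label == "defect" and x <= threshold["value"]:
            threshold["value"] = x - 0.5
        elif human_label == "ok" and x > threshold["value"]:
            threshold["value"] = x + 0.5

    return predict, feedback

predict, feedback = make_model()
print(predict(4.0))           # "ok" under the initial threshold
feedback(4.0, "defect")       # a human inspector overrules the model
print(predict(4.0))           # now "defect": the model learned from the loop
```

Real HitL systems update far richer models than a single threshold, but the shape of the loop, predict, get corrected, improve, is the same.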

We can see the value of the HitL model in industries that manufacture critical parts for vehicles or aircraft, where equipment must be up to standard. In situations like this, machine learning increases the speed and accuracy of inspections, while human oversight provides added assurance that parts are safe and secure for passengers.

Conversational AI, on the other hand, provides near-human-like communication. It can offload simpler problems from employees while knowing when to escalate more complex issues to humans. Contact centres provide a prime example.

When a customer reaches out to a contact centre, they have the option to call, text, or chat virtually with a representative. The virtual agent listens, understands the needs of the customer, and engages in a back-and-forth conversation. It uses machine learning to decide what needs to be done based on what it has learned from prior experience. Most AI systems within contact centres generate speech to communicate with the customer and mimic the feeling of a human typing or talking.

For most situations, a virtual agent is enough to service customers and resolve their problems. However, there are cases where the AI can stop typing or talking and make a seamless transfer to a live representative, who takes over the call or chat. Even then, the AI system can shift from automation to augmentation by continuing to listen to the conversation and providing recommendations that aid the live representative’s decisions.
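
A minimal sketch of this kind of handover logic might look like the following. Every intent, message, and function name here is hypothetical, chosen only to show the routing decision:

```python
# Illustrative bot-to-human handover sketch (all intents and replies are
# hypothetical): the virtual agent answers requests it recognises and
# transfers the conversation to a live representative when it cannot.

KNOWN_INTENTS = {
    "reset_password": "I've sent a reset link to your email.",
    "opening_hours": "We're open 9am to 5pm, Monday to Friday.",
}

def detect_intent(message):
    """Stand-in for an NLU model: map a message to an intent, or None."""
    if "password" in message:
        return "reset_password"
    if "open" in message:
        return "opening_hours"
    return None  # the model does not recognise this request

def handle(message):
    intent = detect_intent(message)
    if intent in KNOWN_INTENTS:
        return ("bot", KNOWN_INTENTS[intent])
    # Seamless transfer: a human takes over, while the bot can keep
    # listening and suggesting responses (automation -> augmentation).
    return ("human", "Transferring you to a live representative...")

print(handle("I forgot my password"))
print(handle("I want to dispute a charge on my bill"))
```

In practice the `None` branch would also fire on low NLU confidence, mirroring the escalation behaviour the article describes.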

Going beyond conversational AI with cognitive AI, these systems can learn to understand the emotional state of the other party, handle complex dialogue, provide real-time translation and even adjust based on the behaviour of the other person, taking human assistance to the next level of sophistication.

Blending Automation and Human Interaction Leads to Augmented Intelligence

AI is best applied when it is both monitored by and augments people. When that happens, people move up the skills continuum, taking on more complex challenges, while the AI continually learns, improves, and is kept in check, avoiding potentially harmful effects. Using models like HitL, conversational AI, and cognitive AI in collaboration with real people who possess expertise, ingenuity, empathy and moral judgment ultimately leads to augmented intelligence and more positive outcomes.

(Photo by Arteum.ro on Unsplash)

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo. The next events in the series will be held in Santa Clara on 11-12 May 2022, Amsterdam on 20-21 September 2022, and London on 1-2 December 2022.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post Why AI needs human intervention appeared first on AI News.

]]>
https://www.artificialintelligence-news.com/news/why-ai-needs-human-intervention/feed/ 0
Stefano Somenzi, Athics: On no-code AI and deploying conversational bots https://www.artificialintelligence-news.com/news/stefano-somenzi-athics-no-code-ai-deploying-conversational-bots/ https://www.artificialintelligence-news.com/news/stefano-somenzi-athics-no-code-ai-deploying-conversational-bots/#respond Fri, 12 Nov 2021 16:47:39 +0000 https://artificialintelligence-news.com/?p=11369 No-code AI solutions are helping more businesses to get started on their AI journeys than ever. Athics, through its Crafter.ai platform for deploying conversational bots, knows a thing or two about the topic. AI News caught up with Stefano Somenzi, CTO at Athics, to get his thoughts on no-code AI and the development of virtual […]

The post Stefano Somenzi, Athics: On no-code AI and deploying conversational bots appeared first on AI News.

]]>
No-code AI solutions are helping more businesses to get started on their AI journeys than ever. Athics, through its Crafter.ai platform for deploying conversational bots, knows a thing or two about the topic.

AI News caught up with Stefano Somenzi, CTO at Athics, to get his thoughts on no-code AI and the development of virtual agents.

AI News: Do you think “no-code” will help more businesses to begin their AI journeys?

Stefano Somenzi: The real advantage of “no code” is not just the reduced effort required for businesses to get things done; it is also centred on changing the role of the user who builds the AI solution. In our case, a conversational AI agent.

“No code” means that the AI solution is built not by a data scientist but by the process owner. The process owner is best-suited to know what the AI solution should deliver and how. But, if you need coding, this means that the process owner needs to translate his/her requirements into a data scientist’s language.

This requires much more time and is affected by the “lost in translation” syndrome that hinders many IT projects. That’s why “no code” will play a major role in helping companies approach AI.

AN: Research from PwC found that 71 percent of US consumers would rather interact with a human than a chatbot or some other automated process. How can businesses be confident that bots created through your Crafter.ai platform will improve the customer experience rather than worsen it?

SS: Even the most advanced conversational AI agents, like ours, are not suited to replace a direct consumer-to-human interaction if what the consumer is looking for is the empathy that today only a human is able to show during a conversation.

At the same time, inefficiencies, errors, and lack of speed are among the most frequent causes for consumer dissatisfaction that hamper customer service performances.

Advanced conversational AI agents are the right tool to reduce these inefficiencies and errors while delivering strong customer service performances at light speed.

AN: What kind of real-time feedback is provided to your clients about their customers’ behaviour?

SS: Recognising the importance of a hybrid environment, where human and machine interaction are wisely mixed to leverage the best of both worlds, our Crafter.ai platform has been designed from the ground up with a module that manages the handover of the conversations between the bot and the call centre agents.

During a conversation, a platform user – with the right authorisation levels – can access an insights dashboard to check the key performance indicators that have been identified for the bot.

This is also true during the handover when agents and their supervisors receive real-time information on the customer behaviour during the company site navigation. Such information includes – and is not limited to – visited pages, form field contents, and clicked CTAs, and can be complemented with data collected from the company CRM.

AN: Europe is home to some of the strictest data regulations in the world. As a European organisation, do you think such regulations are too strict, not strict enough, or about right?

SS: We think that any company that wants to gain the trust of their customers should do their best to go beyond the strict regulations requirements.

AN: As conversational AIs progress to human-like levels, should it always be made clear that a person is speaking to an AI bot?

SS: Yes, a bot should always make clear that it is not human. In the end, this can help people realise how well bots can perform.

AN: What’s next for Athics?

SS: We have a solid roadmap for Crafter.ai with many new features and improvements that we bring every three months to our platform.

Our sole focus is on advanced conversational AI agents. We are currently working to add more and more domain-specific capabilities to our bots.

Advanced profiling is a great area of interest where, thanks to our collaboration with universities and international research centres, we expect to deliver truly innovative solutions to our customers.

AN: Athics is sponsoring and exhibiting at this year’s AI & Big Data Expo Europe. What can attendees expect from your presence at the event? 

SS: Conversational AI agents allow businesses to obtain a balance between optimising resources and giving a top-class customer experience. Although there is no doubt regarding the benefits of adopting virtual agents, the successful integration across a company’s conversational streams needs to be correctly assessed, planned, and executed in order to leverage the full potential.

Athics will be at stand number 280 to welcome attending companies and give an overview of the advantages of integrating a conversational agent, explain how to choose the right product, and how to create a conversational vision that can scale and address organisational goals.

(Photo by Jason Leung on Unsplash)

Athics will be sharing their invaluable insights during this year’s AI & Big Data Expo Global which runs from 23-24 November 2021. Athics’ booth number is 280. Find out more about the event here.

The post Stefano Somenzi, Athics: On no-code AI and deploying conversational bots appeared first on AI News.

]]>
https://www.artificialintelligence-news.com/news/stefano-somenzi-athics-no-code-ai-deploying-conversational-bots/feed/ 0
Hi Auto brings conversational AI to drive-thrus using Intel technology https://www.artificialintelligence-news.com/news/hi-auto-conversational-ai-drive-thrus-intel-technology/ https://www.artificialintelligence-news.com/news/hi-auto-conversational-ai-drive-thrus-intel-technology/#respond Thu, 20 May 2021 14:34:08 +0000 http://artificialintelligence-news.com/?p=10583 Hi Auto is increasing the efficiency of drive-thrus with a conversational AI system powered by Intel technologies. Drive-thru usage has rocketed over the past year with many indoor restaurants closed due to pandemic-induced restrictions. In fact, research suggests that drive-thru orders in the US alone increased by 22 percent in 2020. Long queues at drive-thrus […]

The post Hi Auto brings conversational AI to drive-thrus using Intel technology appeared first on AI News.

]]>
Hi Auto is increasing the efficiency of drive-thrus with a conversational AI system powered by Intel technologies.

Drive-thru usage has rocketed over the past year with many indoor restaurants closed due to pandemic-induced restrictions. In fact, research suggests that drive-thru orders in the US alone increased by 22 percent in 2020.

Long queues at drive-thrus have therefore become part of the “new normal” and fast food is no longer the convenient alternative to cooking after a long day of Zoom calls.

Israel-based Hi Auto has created a conversational AI system that greets drive-thru guests, answers their questions, suggests menu items, and enters their orders into the point-of-sale system. If an unrelated question is asked – or the customer orders something that is not on the standard menu – the AI system automatically switches over to a human employee.

The first restaurant to trial the system is Lee’s Famous Recipe Chicken in Ohio.

Chuck Doran, Owner and Operator at Lee’s Famous Recipe Chicken, said:

“The automated AI drive-thru has impacted my business in a simple way. We don’t have customers waiting anymore. We greet them as soon as they get to the board and the order is taken correctly.

It’s amazing to see the level of accuracy with the voice recognition technology, which helps speed up service. It can even suggest additional items based on the order, which helps us increase our sales.

If a person is running the drive-thru, they may suggest a sale in one out of 20 orders. With Hi Auto, it happens in every transaction where it’s feasible. So, we see improvements in our average check, service time, consistency, and customer service.

And, because the cashier is now less stressed, she can focus on customer service as well. A less-burdened employee will be a happier employee and we want happy employees interacting with our customers.”

By reducing the number of staff needed for customer service, more employees can be put to work on fulfilling orders to serve as many people as possible. A recent survey of small businesses found that 42 percent have job openings that can’t be filled so ensuring that every worker is optimally utilised is critical.

Roy Baharav, CEO and Co-Founder at Hi Auto, commented:

“At Lee’s, we met a team that puts its heart and soul into serving its customers.

We operationalised our AI system based on what we learned from the owners, general managers, and employees. They have embraced the solution and within a short time began reaping the benefits.

We are now applying the process and lessons learned at Lee’s at additional customer sites.”

Hi Auto’s solution runs on Intel Xeon processors in the cloud and Intel NUC.

Joe Jensen, VP in the Internet of Things Group and GM of Retail, Banking, Hospitality and Education at Intel, said:

“We’re increasingly seeing restaurants interested in leveraging AI to deliver actionable data and personalise customer experiences.

With Hi Auto’s solution powered by Intel technology, quick-service restaurants can help their employees be more productive while increasing customer satisfaction and, ultimately, their bottom line.”

Lee’s Famous Recipe Chicken plans to roll out Hi Auto’s solution at more of its branches. A video of the conversational AI system in action can be viewed here.

Going forward, Hi Auto plans to add Spanish language support and continue optimising its conversational AI solution. The company says pilots are already underway with some of the largest quick-service restaurants.

(Image Credit: Lee’s Famous Recipe Chicken)

Interested in hearing industry leaders discuss subjects like this? Attend the co-located 5G Expo, IoT Tech Expo, Blockchain Expo, AI & Big Data Expo, and Cyber Security & Cloud Expo World Series with upcoming events in Silicon Valley, London, and Amsterdam.

The post Hi Auto brings conversational AI to drive-thrus using Intel technology appeared first on AI News.

]]>
https://www.artificialintelligence-news.com/news/hi-auto-conversational-ai-drive-thrus-intel-technology/feed/ 0
Google’s AI reservation service Duplex is now available in 49 states https://www.artificialintelligence-news.com/news/google-ai-reservation-service-duplex-now-available-49-states/ https://www.artificialintelligence-news.com/news/google-ai-reservation-service-duplex-now-available-49-states/#respond Thu, 01 Apr 2021 14:05:35 +0000 http://artificialintelligence-news.com/?p=10431 Google has expanded its controversial AI reservation service Duplex to 49 states across the US. Duplex will have to comply with privacy regulations which vary between states and – when it expands further outside the US – their national laws too. Google says the rollout delay in the US was due to awaiting changes in […]

The post Google’s AI reservation service Duplex is now available in 49 states appeared first on AI News.

]]>
Google has expanded its controversial AI reservation service Duplex to 49 states across the US.

Duplex will have to comply with privacy regulations which vary between states and – when it expands further outside the US – their national laws too.

Google says the rollout delay in the US was due to awaiting changes in legislation or the need to add features on a per-state basis. Some states, for example, require a call-back number.

The reservation service caused both awe and fear when it was announced in May 2018 for sounding eerily human-like – complete with the “ums” and “ahs” we often fail to avoid – raising concerns it could be used for automating criminal activities such as fraud.

Many have since called for AI bots to identify themselves as such before speaking to a human, something which Duplex now does.

Duplex will eventually be able to undertake time-consuming and mundane tasks fully automatically, such as booking hairdresser appointments or table reservations at restaurants. For now, however, it’s a bit hit-and-miss.

Google confirmed in 2019 that around 25 percent of calls made by Duplex are actually conducted by humans. A further 19 percent of calls initiated by Duplex had to be completed by us mere mortals.

The final state Duplex is yet to launch in is Louisiana. The local laws preventing Duplex’s launch in the state are unspecified.

You can find the current US states and international countries Duplex has launched in here.

(Photo by Luke Michael on Unsplash)

Interested in hearing industry leaders discuss subjects like this? Attend the co-located 5G Expo, IoT Tech Expo, Blockchain Expo, AI & Big Data Expo, and Cyber Security & Cloud Expo World Series with upcoming events in Silicon Valley, London, and Amsterdam.

The post Google’s AI reservation service Duplex is now available in 49 states appeared first on AI News.

]]>
https://www.artificialintelligence-news.com/news/google-ai-reservation-service-duplex-now-available-49-states/feed/ 0
IBM study highlights rapid uptake and satisfaction with AI chatbots https://www.artificialintelligence-news.com/news/ibm-study-uptake-satisfaction-ai-chatbots/ https://www.artificialintelligence-news.com/news/ibm-study-uptake-satisfaction-ai-chatbots/#respond Tue, 27 Oct 2020 11:03:20 +0000 http://artificialintelligence-news.com/?p=9975 A study by IBM released this week highlights the rapid uptake of AI chatbots in addition to increasing customer satisfaction. Most of us are hardwired to hate not speaking directly to a human when we have a problem—following years of irritating voicemail systems. However, perhaps the only thing worse is being on hold for an […]

The post IBM study highlights rapid uptake and satisfaction with AI chatbots appeared first on AI News.

]]>
A study by IBM released this week highlights the rapid uptake of AI chatbots in addition to increasing customer satisfaction.

Most of us are hardwired to hate not speaking directly to a human when we have a problem—following years of irritating voicemail systems. However, perhaps the only thing worse is being on hold for an uncertain amount of time due to overwhelmed call centres.

Chatbots have come a long way and can now quickly handle most queries within minutes. Where a human is required, the reduced demand through using virtual agent technology (VAT) means customers can get the assistance they need more quickly.

The COVID-19 pandemic has greatly increased the adoption of VAT as businesses seek to maintain customer service through such a challenging time.

According to IBM’s study, 99 percent of organisations reported increased customer satisfaction by integrating virtual agents. Human agents also report increased satisfaction and IBM says those “who feel valued and empowered with the proper tools and support are more likely to deliver a better experience to customers.”

68 percent of leaders cite improving the human agent experience as being among their key reasons for adopting VAT. There’s also an economic incentive, with the cost of replacing a dissatisfied agent who leaves a business estimated at as much as 33 percent of the exiting employee’s salary.

IBM claims that VAT performance in the past has only been studied through individual case studies. The company set out, alongside Oxford Economics, to change that by surveying 1,005 respondents from companies using VAT daily.

Businesses wondering whether virtual assistants are worth the investment may be interested to know that 96 percent of the respondents “exceeded, achieved, or expect to achieve” their anticipated return.

On average, companies which have implemented VAT have increased their revenue by three percent.

IBM is one of the leading providers of chatbots through its Watson Assistant solution. While there’s little reason to doubt the claims made in the report, it’s worth keeping in mind that it’s not entirely unbiased.

Watson Assistant has gone from strength to strength and appears to have been among the few things to benefit from the pandemic. Between February and August, Watson Assistant usage increased by 65 percent.

You can download a full copy of IBM’s report here.

(Photo by Volodymyr Hryshchenko on Unsplash)

Interested in hearing industry leaders discuss subjects like this? Attend the co-located 5G Expo, IoT Tech Expo, Blockchain Expo, AI & Big Data Expo, and Cyber Security & Cloud Expo World Series with upcoming events in Silicon Valley, London, and Amsterdam.

The post IBM study highlights rapid uptake and satisfaction with AI chatbots appeared first on AI News.

]]>
https://www.artificialintelligence-news.com/news/ibm-study-uptake-satisfaction-ai-chatbots/feed/ 0
The BBC’s virtual assistant is now available for testing in the UK https://www.artificialintelligence-news.com/news/bbc-virtual-assistant-tested-in-uk/ https://www.artificialintelligence-news.com/news/bbc-virtual-assistant-tested-in-uk/#respond Wed, 03 Jun 2020 15:49:57 +0000 http://artificialintelligence-news.com/?p=9668 A virtual assistant from the BBC which aims to cater for Britain’s many dialects is now available for testing. Even as a Brit, it can often feel like a translation app is needed between Bristolian, Geordie, Mancunian, Brummie, Scottish, Irish, or any of the other regional dialects in the country. For a geographically small country, […]

The post The BBC’s virtual assistant is now available for testing in the UK appeared first on AI News.

]]>
A virtual assistant from the BBC which aims to cater for Britain’s many dialects is now available for testing.

Even as a Brit, it can often feel like a translation app is needed between Bristolian, Geordie, Mancunian, Brummie, Scottish, Irish, or any of the other regional dialects in the country. For a geographically small country, we’re a diverse bunch – and US-made voice assistants often struggle with even the slightest accent.

The BBC thinks it can do a better job than the incumbents and first announced its plans to build a voice assistant called ‘Beeb’ in August last year.

Beeb is being trained with the help of BBC staff from around the country. As a public service, the institution aims to offer as wide a representation as possible, which is reflected in its employees.

The broadcaster also believes that Beeb addresses public concerns about voice assistants; primarily that they collect vast amounts of data for commercial purposes. As a taxpayer-funded service, the BBC does not rely on things like advertising.

“People know and trust the BBC,” a spokesperson told The Guardian last year, “so it will use its role as public service innovator in technology to ensure everyone – not just the tech-elite – can benefit from accessing content and new experiences in this new way.”

An early version of Beeb is now available for testing by UK participants of the Windows Insider program. Microsoft is heavily involved in the Beeb assistant as the company’s Azure AI services are being used by the BBC.

The first version of Beeb lets users perform the usual virtual assistant tasks, such as getting weather updates and the news, accessing radio and podcasts, and even grabbing a few jokes from BBC Comedy writers and facts from QI host Sandi Toksvig.

According to the broadcaster, Beeb won’t launch on dedicated hardware; instead, it is designed to eventually be built into smart speakers, TVs, and mobiles.

While it still has a long way to go to match the capabilities of Google Assistant, Alexa, Siri, and others, Beeb may offer a compelling alternative for accent-heavy Brits who struggle with American voice assistants.

Grab the Beeb app from the Microsoft Store here.

Interested in hearing industry leaders discuss subjects like this? Attend the co-located 5G Expo, IoT Tech Expo, Blockchain Expo, AI & Big Data Expo, and Cyber Security & Cloud Expo World Series with upcoming events in Silicon Valley, London, and Amsterdam.

The post The BBC’s virtual assistant is now available for testing in the UK appeared first on AI News.

]]>
https://www.artificialintelligence-news.com/news/bbc-virtual-assistant-tested-in-uk/feed/ 0