Nvidia Introduces NeMo Guardrails to Curb ChatGPT's Inaccuracies
Nvidia, one of the leading companies in GPU technology, has recently introduced an open-source software toolkit named ‘NeMo Guardrails’. The software is designed to improve the accuracy and safety of AI-based chatbots, primarily aiming to keep them on topic, prevent data leaks, and curb the generation of false information. Through NeMo Guardrails, Nvidia aims to address crucial issues that currently hamper the adoption of AI technology.
Several AI tools, including ChatGPT, Google Bard, and Bing Chat, are capable of responding to an extensive range of queries; however, their responses cannot be considered universally trustworthy. OpenAI's ChatGPT has repeatedly given inaccurate answers in tests, failing to perform basic computations and generating irrelevant or arbitrary content. Nvidia has acknowledged this problem and is now introducing NeMo Guardrails to counter it.
NeMo Guardrails allows developers to ensure that language models comply with required standards by instating topical, safety, and security guidelines. According to the company, the software's topical rails aim to ‘prevent apps from veering off into undesired areas,' while its safety rails ‘ensure apps respond with accurate and appropriate information.' Its security rails work by preventing the tools from connecting to unsafe third-party apps that may be suspected of compromising private information. Additionally, NeMo Guardrails can deploy a second large language model (LLM) to fact-check the answers of the primary LLM. The second LLM validates the answers, and if they fail to match or are deemed erroneous, the response is not sent to the user.
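The dual-model fact-checking flow described above can be sketched in a few lines of Python. This is a conceptual illustration only, not the NeMo Guardrails API: the function names (`query_primary_llm`, `query_checker_llm`, `guarded_response`) are hypothetical stand-ins for calls to real models.

```python
# Conceptual sketch of the second-LLM fact-checking pattern.
# All function names here are hypothetical stand-ins, not NeMo Guardrails APIs.

def query_primary_llm(prompt: str) -> str:
    """Stand-in for the primary LLM that drafts a response."""
    return "Paris is the capital of France."

def query_checker_llm(prompt: str, draft: str) -> bool:
    """Stand-in for the second LLM that verifies the draft answer.
    A real rail would re-query a model; this toy check just scans the text."""
    return "Paris" in draft

def guarded_response(prompt: str) -> str:
    draft = query_primary_llm(prompt)
    if query_checker_llm(prompt, draft):
        return draft
    # If the checker disagrees, the draft is withheld from the user.
    return "I'm not confident in that answer."
```

In a production rail, both functions would call actual models, and the fallback branch would trigger a retry or a refusal rather than a canned string.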
The software has been incorporated into the NVIDIA NeMo framework, which includes the features necessary to train and tune a language model. Because NeMo Guardrails is open source, it can be used by any enterprise app developer looking to add extra safeguards to a chatbot. Programmers can create custom rules for their AI model using Colang, the framework's modeling language, activating as many guardrails as they see fit.
While NeMo Guardrails provides developers with a framework, some experts argue that it may not go far enough. A recent complaint to the Federal Trade Commission (FTC) by the Center for AI and Digital Policy (CAIDP) raised concerns regarding "bias" and "deception" in AI-generated content. Nonetheless, the introduction of NeMo Guardrails represents a significant step towards regulating chatbot technology and ensuring its accuracy and safety.
Despite concerns from the industry, Nvidia is unlikely to slow its focus on AI technology. Still, it is important for businesses to regulate and improve the accuracy of AI-generated content, and NeMo Guardrails is a move in the right direction.