Guardrails AI is Solving the LLM Reliability Problem for AI Developers With $7.5 Million in Seed Funding

SAN FRANCISCO, Feb. 15, 2024 (GLOBE NEWSWIRE) -- Today Guardrails AI, the open and trusted AI assurance company, formally launched during the opening keynote at the AI in Production conference. The company also introduced Guardrails Hub, an open source product that lets developers build, contribute, share and re-use advanced validation techniques, known as validators. These validators can be used with Guardrails, the company’s popular open source product that acts as a critical reliability layer for building AI applications, ensuring they adhere to specified guidelines and norms.

As with any groundbreaking technology, the rise of GenAI introduces a new set of challenges. GenAI is unlocking new workflows in which AI systems augment humans in ways that have never been possible before. The most basic assumptions made by humans need to be explicitly enforced: a healthcare insurance agent must never give medical advice, an attorney must never cite made-up cases, and a customer support agent must know not to recommend a competitor. The only way to enforce these basic assumptions is to carefully validate outputs and reduce the risk of unwanted actions.

Torsten Volk, managing research director at Enterprise Management Associates, said: “It can be ‘magical’ to observe how adding a few API calls to an LLM dramatically improves the usefulness of an application. Suddenly, users receive automated help, advice or contextualized information that enables them to make better decisions and implement them faster. This can be completely transformative for many business processes, but as we come to rely on all of this LLM-generated goodness, we need to remember that not even the creators of any specific LLM can exactly predict how their own model will respond in a certain situation.
To make things more interesting, these responses can change significantly in response to only minor changes to the underlying LLM. Guardrails AI provides a governance layer that aims to address exactly these issues to minimize the risk that comes from the increased reliance of applications on decisions made ‘under the cover’ by LLMs, often without app developers completely understanding the risk of inconsistent or flat-out harmful results. As we haven’t even scratched the tip of the iceberg in terms of LLM use cases, the problem addressed by Guardrails AI will grow exponentially very soon.”

Shrey Shahi, technical advisor to the CEO at customer Robinhood, describes the critical role AI safety with Guardrails plays in their AI adoption journey: “As a leader in the financial services industry, we are committed to our safety-first value, and it is at the forefront of all of our initiatives, including AI. As the technological landscape goes through this monumental shift, we are committed to integrating AI in a responsible and forward-thinking way to better serve our customers.”

Developers have been leveraging Guardrails to gain the assurance needed to confidently deploy their AI applications. Guardrails is downloaded more than 10,000 times monthly and has earned more than 2,800 GitHub stars since its release last year. Guardrails’ safety layer surrounds the AI application and is designed to enhance the reliability and integrity of AI applications via validation and correction mechanisms. These validators, which can be simple rules or more advanced AI checks, can be defined by the user. Use cases include:
“With the launch of Guardrails in 2023, we made a foundational commitment that responsible AI development must be transparent and involve multiple stakeholders. As we navigate the evolving landscape of AI risks, Guardrails Hub will serve as an open and collaborative platform, accelerating the discovery and adoption of groundbreaking tools and methodologies for safely adopting GenAI technologies,” said Shreya Rajpal, co-founder and CEO of Guardrails AI. She has been working in AI for a decade and has built AI systems for high-stakes applications, including self-driving cars at Drive.ai and autonomous systems at Apple.

Guardrails Hub facilitates the creation, sharing and implementation of validators. The hub already offers 50 pre-built validators, including many contributed by a growing community of individuals and organizations. By combining validators together like building blocks into guards, developers can explicitly enforce the correctness guarantees and risk boundaries that are essential to them. With Guardrails Hub, developers can:
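The building-block model described above — user-defined validators combined into a guard that checks every LLM output — can be sketched in a few lines of plain Python. This is a conceptual illustration only; the names and structure here are invented for this sketch and are not the actual Guardrails API.

```python
# Conceptual sketch of composing validators into a guard.
# All names here are illustrative, not the real Guardrails API.

from dataclasses import dataclass
from typing import Callable, List

# A validator is any rule that inspects an LLM output and
# reports whether it passes.
Validator = Callable[[str], bool]

def no_competitor_mentions(text: str) -> bool:
    """Simple rule: the output must not name a competitor."""
    competitors = {"acme corp", "globex"}  # hypothetical names
    return not any(c in text.lower() for c in competitors)

def no_medical_advice(text: str) -> bool:
    """Simple rule: flag phrases that read like medical advice."""
    banned = ("you should take", "recommended dosage")
    return not any(p in text.lower() for p in banned)

@dataclass
class Guard:
    """Combines validators like building blocks; an output
    passes the guard only if every validator passes."""
    validators: List[Validator]

    def validate(self, llm_output: str) -> bool:
        return all(v(llm_output) for v in self.validators)

guard = Guard([no_competitor_mentions, no_medical_advice])
assert guard.validate("Our support team can help with that.")
assert not guard.validate("You should take Globex's product instead.")
```

In the real library, the "more advanced AI checks" mentioned above would replace these string rules with model-based validators, but the composition principle is the same.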
Guardrails AI Raises $7.5 Million in Seed Funding

“With Guardrails AI, we see not just a company but a movement towards securing AI's future in enterprise. Their commitment to open source and collaborative innovation in AI risk management will ensure that the evolution towards safe and reliable AI applications is accessible to all, not just a select few,” said Apoorva Pandhi, managing partner at Zetta Venture Partners.

About Guardrails AI

Media and Analyst Contact: