
Guardrails AI is Solving the LLM Reliability Problem for AI Developers With $7.5 Million in Seed Funding
[February 15, 2024]

SAN FRANCISCO, Feb. 15, 2024 (GLOBE NEWSWIRE) -- Today Guardrails AI, the open and trusted AI assurance company, formally launched during the opening keynote at the AI in Production conference. The company also introduced Guardrails Hub, an open source product that lets developers build, contribute, share and re-use advanced validation techniques, known as validators. These validators can be used with Guardrails, the company’s popular open source product that acts as a critical reliability layer for AI applications, ensuring they adhere to specified guidelines and norms.

As with any groundbreaking technology, the rise of GenAI introduces a new set of challenges. GenAI is unlocking new workflows that let AI systems augment humans in ways that have never been possible before. The most basic assumptions made by humans need to be explicitly enforced: a healthcare insurance agent never giving medical advice, an attorney never citing made-up cases, and a customer support agent knowing not to recommend a competitor. The only way to enforce these basic assumptions is to carefully validate outputs and reduce the risk of unwanted actions.

Torsten Volk, managing research director at Enterprise Management Associates, said: “It can be ‘magical’ to observe how adding a few API calls to an LLM dramatically improves the usefulness of an application. Suddenly, users receive automated help, advice or contextualized information that enables them to make better decisions and implement them faster. This can be completely transformative for many business processes, but as we come to rely on all of this LLM-generated goodness, we need to remember that not even the creators of any specific LLM can exactly predict how their own model will respond in a certain situation. To make things more interesting, these responses can change significantly in response to only minor changes to the underlying LLM. Guardrails AI provides a governance layer that aims to address exactly these issues to minimize the risk that comes from the increased reliance of applications on decisions made ‘under the cover’ by LLMs, often without app developers completely understanding the risk of inconsistent or flat-out harmful results. As we haven’t even scratched the tip of the iceberg in terms of LLM use cases, the problem addressed by Guardrails AI will grow exponentially very soon.”

Shrey Shahi, technical advisor to the CEO at customer Robinhood, describes the critical role that AI safety with Guardrails plays in their AI adoption journey: “As a leader in the financial services industry, we are committed to our safety first value, and it is at the forefront of all of our initiatives, including AI. As the technological landscape goes through this monumental shift, we are committed to integrating AI in a responsible and forward-thinking way to better serve our customers.”

Developers have been leveraging Guardrails to gain the assurance needed to confidently deploy their AI applications. Guardrails is downloaded more than 10,000 times monthly and has earned more than 2,800 GitHub stars since its release last year. Guardrails’ safety layer surrounds the AI application and is designed to enhance reliability and integrity through validation and correction mechanisms. Validators can be defined by the user and can range from simple rules to more advanced AI checks; a minimal sketch of a rule-based validator follows the list below. Use cases include:

  • Reducing hallucinations by confirming the factuality of AI information extraction
  • Ensuring chatbot communications behave as expected, such as staying on brand and on message
  • Enforcing policies and regulations in AI-automated workflows
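
The sketch below illustrates what the simplest case, a user-defined rule-based validator, might look like. It assumes the register_validator/Validator pattern documented for the open source Guardrails library; the import paths, the validator name and the marker phrases are illustrative assumptions rather than part of this announcement.

    # A minimal sketch of a user-defined, rule-based validator. Import paths,
    # the validator name and the marker phrases are illustrative assumptions
    # based on the open source Guardrails library's documented pattern.
    from typing import Any, Dict

    from guardrails.validators import (
        FailResult,
        PassResult,
        ValidationResult,
        Validator,
        register_validator,
    )

    # Illustrative phrases that suggest an LLM response is giving medical advice.
    MEDICAL_ADVICE_MARKERS = {"dosage", "you should take", "diagnosis"}

    @register_validator(name="no-medical-advice", data_type="string")
    class NoMedicalAdvice(Validator):
        """Fail validation when an LLM response appears to give medical advice."""

        def validate(self, value: Any, metadata: Dict) -> ValidationResult:
            hits = [m for m in MEDICAL_ADVICE_MARKERS if m in str(value).lower()]
            if hits:
                return FailResult(
                    error_message=f"Response contains medical-advice markers: {hits}"
                )
            return PassResult()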

“With the launch of Guardrails in 2023, we made a foundational commitment that responsible AI development must be transparent and involve multiple stakeholders. As we navigate the evolving landscape of AI risks, Guardrails Hub will serve as an open and collaborative platform, accelerating the discovery and adoption of groundbreaking tools and methodologies for safely adopting GenAI technologies,” said Shreya Rajpal, co-founder and CEO of Guardrails AI. She has been working in AI for a decade and has built AI systems for high-stakes applications, including self-driving cars at Drive.ai and autonomous systems at Apple.


Guardrails Hub facilitates the creation, sharing and implementation of validators. The hub already has 50 pre-built validators, including many contributed by a growing community of individuals and organizations. By combining validators like building blocks into guards, developers can explicitly enforce the correctness guarantees and risk boundaries that are essential to them. With Guardrails Hub, developers can:

  • Build validators: developers can create advanced validation techniques tailored to specific safety, compliance and performance requirements of AI applications. These validators can range from simple rule-based checks to more complex machine learning algorithms designed to detect and mitigate potential biases, inaccuracies and non-compliance with regulatory standards
  • Contribute and collaborate: once developed, these validators can be contributed back to the hub repository where they become accessible to other developers. This collaborative approach leverages the collective expertise of the community to address a wide array of challenges in AI reliability
  • Re-use validators: developers can browse the hub to find and implement pre-built validators that suit their project's needs. The reuse of validators accelerates the development process, ensuring AI applications meet the necessary safety and reliability standards without requiring developers to reinvent the wheel for common issues
  • Combine validators into guards: validators can be combined like building blocks to form comprehensive reliability layers, or 'guards', around AI applications. This modular approach lets developers tailor the reliability measures to the specific risks and requirements of their application, enhancing flexibility and effectiveness (see the sketch after this list)
  • Enforce correctness guarantees and risk boundaries: through the implementation of these guards, developers can programmatically enforce the desired levels of reliability, compliance and performance. This ensures that AI applications operate within defined ethical and regulatory boundaries, significantly reducing the risk of unintended consequences
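
As a concrete illustration of that workflow, the sketch below assumes the hub install CLI and the Guard API of the open source Guardrails library; the specific validators, their arguments and the sample text are illustrative assumptions.

    # A sketch of combining hub validators into a single guard. Validator names,
    # arguments and the sample text are illustrative assumptions.
    #
    # Install validators from Guardrails Hub via the CLI first, e.g.:
    #   guardrails hub install hub://guardrails/toxic_language
    #   guardrails hub install hub://guardrails/competitor_check
    from guardrails import Guard
    from guardrails.hub import CompetitorCheck, ToxicLanguage

    # Stack validators like building blocks into one reliability layer (a "guard").
    guard = Guard().use_many(
        ToxicLanguage(threshold=0.5, validation_method="sentence", on_fail="exception"),
        CompetitorCheck(competitors=["Acme Corp", "Globex"], on_fail="fix"),
    )

    # Validate LLM output before it reaches the end user; a failing validator
    # raises an exception or repairs the output depending on its on_fail policy.
    outcome = guard.validate("Draft reply from the customer support chatbot ...")
    print(outcome.validated_output)

Validators installed from the hub and user-defined validators like the one sketched earlier share the same enforcement path, so a single guard can mix both.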

Guardrails AI Raises $7.5 Million in Seed Funding
Today Guardrails AI also announced that it closed a $7.5 million seed funding round led by Zetta Venture Partners. Bloomberg Beta, Pear VC, Factory and GitHub Fund also participated in the round, as did AI angels including Ian Goodfellow from DeepMind, Logan Kilpatrick from OpenAI and Lip-bu Tan. The funding will be used to expand the company's engineering and product teams and to continue advancing its products.

"With Guardrails AI, we see not just a company but a movement towards securing AI's future in enterprise. Their commitment to open source and collaborative innovation in AI risk management will ensure that the evolution towards safe and reliable AI applications is accessible to all, not just a select few,” said Apoorva Pandhi, managing partner at Zetta Venture Partners.

About Guardrails AI
Guardrails AI empowers companies to harness the full potential of foundation models reliably and with confidence by building tools to measure, monitor and mitigate AI risks. By seamlessly integrating into the AI development lifecycle, Guardrails AI's breakthrough approach boosts system reliability and accuracy and provides developers with improved oversight. Guardrails AI is headquartered in San Francisco, CA. For more information, please visit www.guardrailsai.com.

Media and Analyst Contact:
Amber Rowland
[email protected]
+1-650-814-4560

