
Add LLM Guardrails #6878

Open · rafaelsandroni opened this issue Feb 28, 2025 · 1 comment
Labels
enhancement New feature or request

Comments

rafaelsandroni commented Feb 28, 2025

Feature Request

API-based options:

Open source models from HuggingFace:

Motivation

Some specific use cases require security controls, whether it's a RAG agent or an AI agent. Typically, I conduct red teaming and evals to ensure my product is secure, but absolute certainty is never guaranteed. The emerging pattern is to implement guardrails and specialized SLMs that block prompt attacks at the input level and data leakage at the output level, as in the sketch below.
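For illustration, here is a minimal sketch of that input/output guardrail pattern: an open-source HuggingFace classifier screens the incoming prompt for injection attempts, and a naive regex check screens the response for obvious PII. The model id, label name, threshold, and regexes are assumptions chosen for the example, not a recommendation of a specific component.

```python
# Minimal sketch of the input/output guardrail pattern described above.
# The model id and label names are placeholders; substitute whichever
# prompt-injection classifier you actually deploy.
import re
from transformers import pipeline

# Input guardrail: small classifier that flags prompt-injection attempts.
injection_detector = pipeline(
    "text-classification",
    model="protectai/deberta-v3-base-prompt-injection-v2",  # example model id
)

def check_input(user_prompt: str, threshold: float = 0.9) -> bool:
    """Return True if the prompt looks safe to forward to the LLM."""
    result = injection_detector(user_prompt)[0]
    is_injection = result["label"] == "INJECTION" and result["score"] >= threshold
    return not is_injection

# Output guardrail: naive PII screen on the model response (regex only;
# a real deployment would use a dedicated PII model or service).
PII_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),        # US SSN-like pattern
    re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),  # email address
]

def check_output(llm_response: str) -> bool:
    """Return True if the response contains no obvious PII."""
    return not any(p.search(llm_response) for p in PII_PATTERNS)
```

In practice both checks would wrap the LLM call: reject or rewrite the request when `check_input` fails, and redact or block the response when `check_output` fails.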

Your Contribution

No response

rafaelsandroni added the enhancement label on Feb 28, 2025
rogeriochaves (Contributor) commented Feb 28, 2025

Hey @rafaelsandroni, we recently included LangWatch (https://langwatch.ai/) as a Langflow component:

#4722

It has prompt injection detection, PII detection, and other guardrails, as well as evaluators.

Also, here is a prompt template using the loop component to run an evaluation through a set of examples; it could be useful for red teaming too:

#6761
