Evaluators
In short, evaluators are small software we run that you can configure which evaluate specific aspects of a payload. A payload is your user's input or your LLM's output. Some evaluators focus on readability, others on JSON structure if you expect JSON from your model, or even safety by checking for prompt injection and jailbreaks.
To get started, you first pick the evaluator of your choice and create an instance. An instance is essentially a configured evaluator. Nearly all evaluators are configurable so you can run multiple instances of the same evaluator with slightly different settings.
In your team, you can list all of Modelmetry evaluators by going to Evaluators
in the sidebar. The list is searchable and filterable.
View details of an evaluator
Go to
Evaluators
Locate the evaluator you want to see the details of and click the
Details
button
The details page of an evaluator shows you some general information as well as the findings generated by the evaluator, as well as its configuration settings.
In the Instances
section, you can see a list of your current instances for that evaluator.

azure.prompt-shields.v1
Test an evaluator
Go to
Evaluators
Locate the evaluator you want to see the details of and click the
Test
buttonFill in the configuration, request, and grading data
If the evaluator uses an LLM, attach the needed secret to it


Last updated