Modelmetry Documentation
Back to site
  • Documentation
  • Getting Started
    • Concepts
  • Platform
    • Evaluators
    • Automations
    • Analytics
    • API Keys
    • Secrets
    • Team
      • Memberships
      • Invitations
      • Roles & Permissions
  • Evaluators
    • List of Evaluators
    • Prompt Shields (Azure)
    • DLP PII Detector (Google)
    • Text Moderation (Google)
    • Boolean LLM-as-Judge (Modelmetry)
    • Competitor Blocklist (Modelmetry)
    • Emotion Analysis (Modelmetry)
    • HTTP Request (Modelmetry)
    • Language Detector (Modelmetry)
    • Score LLM-as-Judge (Modelmetry)
    • Word Counter (Modelmetry)
  • Developers
    • Modelmetry JS SDK
    • Modelmetry Python SDK
Powered by GitBook
On this page
  • View details of an evaluator
  • Test an evaluator
  1. Platform

Evaluators

PreviousConceptsNextAutomations

Last updated 5 months ago

In short, evaluators are small software we run that you can configure which evaluate specific aspects of a payload. A payload is your user's input or your LLM's output. Some evaluators focus on readability, others on JSON structure if you expect JSON from your model, or even safety by checking for prompt injection and jailbreaks.

To get started, you first pick the evaluator of your choice and create an . An instance is essentially a configured evaluator. Nearly all evaluators are configurable so you can run multiple instances of the same evaluator with slightly different settings.

In your team, you can list all of Modelmetry evaluators by going to Evaluators in the sidebar. The list is searchable and filterable.

View details of an evaluator

  1. Go to Evaluators

  2. Locate the evaluator you want to see the details of and click the Details button

The details page of an evaluator shows you some general information as well as the findings generated by the evaluator, as well as its configuration settings.

In the Instances section, you can see a list of your current instances for that evaluator.

Test an evaluator

  1. Go to Evaluators

  2. Locate the evaluator you want to see the details of and click the Test button

  3. Fill in the configuration, request, and grading data

    1. If the evaluator uses an LLM, attach the needed secret to it

instance
View a list of all Modelmetry evaluators in the documentation.
Example of the details screen for azure.prompt-shields.v1
Example of test data
Example of a response when the test is executed