Microsoft.Extensions.AI.Evaluation.Quality Namespace

Contains evaluator classes that assess the quality of large language model (LLM) responses in an app according to various metrics.

Classes

CoherenceEvaluator

An IEvaluator that evaluates the 'Coherence' of a response produced by an AI model.

CompletenessEvaluator

An IEvaluator that evaluates the 'Completeness' of a response produced by an AI model.

CompletenessEvaluatorContext

Contextual information that the CompletenessEvaluator uses to evaluate the 'Completeness' of a response.

EquivalenceEvaluator

An IEvaluator that evaluates the 'Equivalence' of a response produced by an AI model against a reference response supplied via EquivalenceEvaluatorContext.GroundTruth.

EquivalenceEvaluatorContext

Contextual information that the EquivalenceEvaluator uses to evaluate the 'Equivalence' of a response.
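As a hedged sketch of how a context type like this is supplied to its evaluator (the constructor parameter name and the EvaluateAsync overload shown are assumptions and should be confirmed against the API reference):

```csharp
using Microsoft.Extensions.AI;
using Microsoft.Extensions.AI.Evaluation;
using Microsoft.Extensions.AI.Evaluation.Quality;

// Assumes an already-configured IChatClient named chatClient, and a
// previously obtained ChatResponse named response.
IEvaluator equivalence = new EquivalenceEvaluator();

// The context carries the reference answer ("ground truth") that the
// model's response is compared against.
var context = new EquivalenceEvaluatorContext(
    groundTruth: "Paris is the capital of France.");

EvaluationResult result = await equivalence.EvaluateAsync(
    new[] { new ChatMessage(ChatRole.User, "What is the capital of France?") },
    response,
    new ChatConfiguration(chatClient),
    additionalContext: new[] { context });
```

The same pattern applies to the other context types in this namespace (CompletenessEvaluatorContext, GroundednessEvaluatorContext, RetrievalEvaluatorContext): each is passed to its evaluator through the additional-context parameter of EvaluateAsync.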

FluencyEvaluator

An IEvaluator that evaluates the 'Fluency' of a response produced by an AI model.

GroundednessEvaluator

An IEvaluator that evaluates the 'Groundedness' of a response produced by an AI model.

GroundednessEvaluatorContext

Contextual information that the GroundednessEvaluator uses to evaluate the 'Groundedness' of a response.

RelevanceEvaluator

An IEvaluator that evaluates the 'Relevance' of a response produced by an AI model.

RelevanceTruthAndCompletenessEvaluator

An IEvaluator that evaluates the 'Relevance', 'Truth' and 'Completeness' of a response produced by an AI model.

RetrievalEvaluator

An IEvaluator that evaluates an AI system's performance in retrieving information for additional context in response to a user request (for example, in a Retrieval Augmented Generation (RAG) scenario).

RetrievalEvaluatorContext

Contextual information that the RetrievalEvaluator uses to evaluate an AI system's performance in retrieving information for additional context.
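A minimal sketch of the overall usage pattern shared by the evaluators in this namespace, assuming an already-configured IChatClient named chatClient; the metric-name property and exact EvaluateAsync overload are assumptions to verify against the API reference:

```csharp
using Microsoft.Extensions.AI;
using Microsoft.Extensions.AI.Evaluation;
using Microsoft.Extensions.AI.Evaluation.Quality;

// Each quality evaluator implements IEvaluator and uses an LLM
// (via ChatConfiguration) to score a response.
IEvaluator coherence = new CoherenceEvaluator();

var messages = new[] { new ChatMessage(ChatRole.User, "Explain RAG briefly.") };
ChatResponse response = await chatClient.GetResponseAsync(messages);

EvaluationResult result = await coherence.EvaluateAsync(
    messages,
    response,
    new ChatConfiguration(chatClient));

// Evaluators report one or more named metrics; Coherence is typically
// a numeric score (assumed range 1-5).
NumericMetric metric =
    result.Get<NumericMetric>(CoherenceEvaluator.CoherenceMetricName);
```

Swapping CoherenceEvaluator for FluencyEvaluator, RelevanceEvaluator, or the other evaluators above follows the same shape; evaluators that require extra inputs additionally take their corresponding context object.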