Microsoft.Extensions.AI.Evaluation.Safety Namespace

Classes

CodeVulnerabilityEvaluator

An IEvaluator that utilizes the Azure AI Foundry Evaluation service to evaluate code completion responses produced by an AI model for the presence of vulnerable code.

ContentHarmEvaluator

An IEvaluator that utilizes the Azure AI Foundry Evaluation service to evaluate responses produced by an AI model for the presence of a variety of harmful content, such as violence and hate speech.
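
The following is a minimal usage sketch rather than an authoritative sample: the conversation content and Azure AI Foundry project values are placeholders, and the assumption that each harm category is reported as a NumericMetric on the service's 0 (safest) to 7 (most harmful) severity scale should be confirmed against the API reference for the package version in use.

C#

using System;
using System.Collections.Generic;
using System.Linq;
using Azure.Identity;
using Microsoft.Extensions.AI;
using Microsoft.Extensions.AI.Evaluation;
using Microsoft.Extensions.AI.Evaluation.Safety;

// Connects the evaluator to an Azure AI Foundry project; all project values
// are placeholders (see ContentSafetyServiceConfiguration below).
ChatConfiguration chatConfiguration = new ContentSafetyServiceConfiguration(
    credential: new DefaultAzureCredential(),
    subscriptionId: "<subscription-id>",
    resourceGroupName: "<resource-group>",
    projectName: "<project-name>").ToChatConfiguration();

// The conversation to evaluate; the content here is made up.
List<ChatMessage> messages =
    new() { new ChatMessage(ChatRole.User, "Tell me about dogs.") };
ChatResponse response =
    new(new ChatMessage(ChatRole.Assistant, "Dogs are loyal companions."));

// ContentHarmEvaluator reports metrics for several harm categories in one call.
IEvaluator evaluator = new ContentHarmEvaluator();

EvaluationResult result = await evaluator.EvaluateAsync(
    messages, response, chatConfiguration);

// Each harm category surfaces as a NumericMetric (assumed 0-7 severity scale).
foreach (NumericMetric metric in result.Metrics.Values.OfType<NumericMetric>())
{
    Console.WriteLine($"{metric.Name}: {metric.Value}");
}

The single-category evaluators listed below (HateAndUnfairnessEvaluator, SelfHarmEvaluator, SexualEvaluator, and ViolenceEvaluator), as well as CodeVulnerabilityEvaluator, IndirectAttackEvaluator, and ProtectedMaterialEvaluator, follow this same invocation pattern.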

ContentSafetyEvaluator

An abstract base class that can be used to implement IEvaluators that utilize the Azure AI Foundry Evaluation service to evaluate responses produced by an AI model for the presence of a variety of unsafe content, such as protected material, vulnerable code, and harmful content.

ContentSafetyServiceConfiguration

Specifies configuration parameters, such as the Azure AI project and the credentials to use, when a ContentSafetyEvaluator communicates with the Azure AI Foundry Evaluation service to perform evaluations.

ContentSafetyServiceConfigurationExtensions

Extension methods for ContentSafetyServiceConfiguration.
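
The sketch below shows the intended pairing of these two types: construct a ContentSafetyServiceConfiguration and convert it with the ToChatConfiguration extension method. The constructor parameter names shown (credential, subscriptionId, resourceGroupName, projectName) follow the hub-based project constructor and, like the placeholder values, should be checked against the package version in use.

C#

using Azure.Identity;
using Microsoft.Extensions.AI.Evaluation;
using Microsoft.Extensions.AI.Evaluation.Safety;

// Identifies the Azure AI Foundry project that performs the evaluations.
ContentSafetyServiceConfiguration serviceConfiguration = new(
    credential: new DefaultAzureCredential(),
    subscriptionId: "<subscription-id>",
    resourceGroupName: "<resource-group>",
    projectName: "<project-name>");

// ToChatConfiguration (from ContentSafetyServiceConfigurationExtensions)
// wraps the service configuration in a ChatConfiguration that can be passed
// to any evaluator in this namespace.
ChatConfiguration chatConfiguration = serviceConfiguration.ToChatConfiguration();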

GroundednessProEvaluator

An IEvaluator that utilizes the Azure AI Foundry Evaluation service to evaluate the groundedness of responses produced by an AI model.

GroundednessProEvaluatorContext

Contextual information that the GroundednessProEvaluator uses to evaluate the groundedness of a response.
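
A sketch of supplying this context follows; the grounding text and conversation are invented, and the single-string GroundednessProEvaluatorContext constructor and the GroundednessProMetricName constant are assumptions to verify against the API reference.

C#

using System;
using System.Collections.Generic;
using Azure.Identity;
using Microsoft.Extensions.AI;
using Microsoft.Extensions.AI.Evaluation;
using Microsoft.Extensions.AI.Evaluation.Safety;

// Created exactly as in the configuration sketch above.
ChatConfiguration chatConfiguration = new ContentSafetyServiceConfiguration(
    credential: new DefaultAzureCredential(),
    subscriptionId: "<subscription-id>",
    resourceGroupName: "<resource-group>",
    projectName: "<project-name>").ToChatConfiguration();

// The grounding source that the response is checked against.
string groundingContext =
    "Contoso's return policy allows returns within 30 days of purchase.";

List<ChatMessage> messages =
    new() { new ChatMessage(ChatRole.User, "What is Contoso's return policy?") };
ChatResponse response =
    new(new ChatMessage(ChatRole.Assistant, "Items can be returned within 30 days."));

GroundednessProEvaluator evaluator = new();

// The grounding context travels to the service via the additionalContext parameter.
EvaluationResult result = await evaluator.EvaluateAsync(
    messages,
    response,
    chatConfiguration,
    additionalContext: new[] { new GroundednessProEvaluatorContext(groundingContext) });

NumericMetric groundedness =
    result.Get<NumericMetric>(GroundednessProEvaluator.GroundednessProMetricName);
Console.WriteLine($"{groundedness.Name}: {groundedness.Value}");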

HateAndUnfairnessEvaluator

An IEvaluator that utilizes the Azure AI Foundry Evaluation service to evaluate responses produced by an AI model for the presence of content that is hateful or unfair.

IndirectAttackEvaluator

An IEvaluator that utilizes the Azure AI Foundry Evaluation service to evaluate responses produced by an AI model for the presence of indirect attacks such as manipulated content, intrusion, and information gathering.

ProtectedMaterialEvaluator

An IEvaluator that utilizes the Azure AI Foundry Evaluation service to evaluate responses produced by an AI model for the presence of protected material.

SelfHarmEvaluator

An IEvaluator that utilizes the Azure AI Foundry Evaluation service to evaluate responses produced by an AI model for the presence of content that indicates self-harm.

SexualEvaluator

An IEvaluator that utilizes the Azure AI Foundry Evaluation service to evaluate responses produced by an AI model for the presence of sexual content.

UngroundedAttributesEvaluator

An IEvaluator that utilizes the Azure AI Foundry Evaluation service to evaluate responses produced by an AI model for the presence of content that indicates ungrounded inference of human attributes.

UngroundedAttributesEvaluatorContext

Contextual information that the UngroundedAttributesEvaluator uses to evaluate whether a response contains ungrounded inference of human attributes.
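
As with GroundednessProEvaluator, the grounding text is passed through additionalContext; a sketch follows, with the same caveats (invented content, placeholder project values, and the assumption that the outcome is reported as a BooleanMetric named by UngroundedAttributesMetricName).

C#

using System;
using System.Collections.Generic;
using Azure.Identity;
using Microsoft.Extensions.AI;
using Microsoft.Extensions.AI.Evaluation;
using Microsoft.Extensions.AI.Evaluation.Safety;

// Created exactly as in the configuration sketch above.
ChatConfiguration chatConfiguration = new ContentSafetyServiceConfiguration(
    credential: new DefaultAzureCredential(),
    subscriptionId: "<subscription-id>",
    resourceGroupName: "<resource-group>",
    projectName: "<project-name>").ToChatConfiguration();

// The grounding source says nothing about the speaker's emotional state,
// so a response asserting one would be an ungrounded inference.
string groundingContext = "Speaker A presented the quarterly sales figures.";

List<ChatMessage> messages =
    new() { new ChatMessage(ChatRole.User, "How did Speaker A feel during the talk?") };
ChatResponse response =
    new(new ChatMessage(ChatRole.Assistant, "Speaker A seemed anxious throughout."));

UngroundedAttributesEvaluator evaluator = new();

EvaluationResult result = await evaluator.EvaluateAsync(
    messages,
    response,
    chatConfiguration,
    additionalContext: new[] { new UngroundedAttributesEvaluatorContext(groundingContext) });

BooleanMetric ungrounded =
    result.Get<BooleanMetric>(UngroundedAttributesEvaluator.UngroundedAttributesMetricName);
Console.WriteLine($"{ungrounded.Name}: {ungrounded.Value}");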

ViolenceEvaluator

An IEvaluator that utilizes the Azure AI Foundry Evaluation service to evaluate responses produced by an AI model for the presence of violent content.