# Microsoft.Extensions.AI.Evaluation.Safety Namespace
> **Important**
> Some information relates to prerelease product that may be substantially modified before it's released. Microsoft makes no warranties, express or implied, with respect to the information provided here.
## Classes
| Class | Description |
| --- | --- |
| CodeVulnerabilityEvaluator | An IEvaluator that utilizes the Azure AI Foundry Evaluation service to evaluate code completion responses produced by an AI model for the presence of vulnerable code. |
| ContentHarmEvaluator | An IEvaluator that utilizes the Azure AI Foundry Evaluation service to evaluate responses produced by an AI model for the presence of a variety of harmful content, such as violence and hate speech. |
| ContentSafetyEvaluator | An IEvaluator that utilizes the Azure AI Foundry Evaluation service to evaluate responses produced by an AI model for the presence of unsafe content. |
| ContentSafetyServiceConfiguration | Specifies configuration parameters, such as the Azure AI project and the credentials to use, when a ContentSafetyEvaluator communicates with the Azure AI Foundry Evaluation service to perform evaluations. (See the first usage sketch following this table.) |
| ContentSafetyServiceConfigurationExtensions | Extension methods for ContentSafetyServiceConfiguration. |
| GroundednessProEvaluator | An IEvaluator that utilizes the Azure AI Foundry Evaluation service to evaluate the groundedness of responses produced by an AI model. |
| GroundednessProEvaluatorContext | Contextual information that the GroundednessProEvaluator uses to evaluate the groundedness of a response. (See the second usage sketch following this table.) |
| HateAndUnfairnessEvaluator | An IEvaluator that utilizes the Azure AI Foundry Evaluation service to evaluate responses produced by an AI model for the presence of content that is hateful or unfair. |
| IndirectAttackEvaluator | An IEvaluator that utilizes the Azure AI Foundry Evaluation service to evaluate responses produced by an AI model for the presence of indirect attacks such as manipulated content, intrusion, and information gathering. |
| ProtectedMaterialEvaluator | An IEvaluator that utilizes the Azure AI Foundry Evaluation service to evaluate responses produced by an AI model for the presence of protected material. |
| SelfHarmEvaluator | An IEvaluator that utilizes the Azure AI Foundry Evaluation service to evaluate responses produced by an AI model for the presence of content that indicates self-harm. |
| SexualEvaluator | An IEvaluator that utilizes the Azure AI Foundry Evaluation service to evaluate responses produced by an AI model for the presence of sexual content. |
| UngroundedAttributesEvaluator | An IEvaluator that utilizes the Azure AI Foundry Evaluation service to evaluate responses produced by an AI model for the presence of content that indicates ungrounded inference of human attributes. |
| UngroundedAttributesEvaluatorContext | Contextual information that the UngroundedAttributesEvaluator uses to evaluate whether a response is ungrounded. (See the third usage sketch following this table.) |
| ViolenceEvaluator | An IEvaluator that utilizes the Azure AI Foundry Evaluation service to evaluate responses produced by an AI model for the presence of violent content. |
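The sketch below shows one way to wire a safety evaluator to the Azure AI Foundry Evaluation service: build a ContentSafetyServiceConfiguration, convert it to a ChatConfiguration with the ToChatConfiguration() extension from ContentSafetyServiceConfigurationExtensions, and pass that configuration to an evaluator's EvaluateAsync call. The subscription, resource group, and project values are placeholders, and the exact constructor parameters may vary between package versions, so treat this as a sketch rather than a drop-in implementation.

```csharp
using Azure.Identity;
using Microsoft.Extensions.AI;
using Microsoft.Extensions.AI.Evaluation;
using Microsoft.Extensions.AI.Evaluation.Safety;

// Placeholder Azure AI Foundry project details; substitute your own values.
var serviceConfiguration = new ContentSafetyServiceConfiguration(
    credential: new DefaultAzureCredential(),
    subscriptionId: "<subscription-id>",
    resourceGroupName: "<resource-group-name>",
    projectName: "<project-name>");

// ToChatConfiguration() produces the ChatConfiguration that the safety
// evaluators use to communicate with the Azure AI Foundry Evaluation service.
ChatConfiguration chatConfiguration = serviceConfiguration.ToChatConfiguration();

IEvaluator violenceEvaluator = new ViolenceEvaluator();

// The conversation to evaluate: the user's messages plus the model's response.
var messages = new List<ChatMessage>
{
    new(ChatRole.User, "Describe the final scene of the movie.")
};
var response = new ChatResponse(
    new ChatMessage(ChatRole.Assistant, "The heroes part ways at the harbor."));

EvaluationResult result = await violenceEvaluator.EvaluateAsync(
    messages, response, chatConfiguration);

// Each returned metric carries a name and an interpretation of its score.
foreach (EvaluationMetric metric in result.Metrics.Values)
{
    Console.WriteLine($"{metric.Name}: {metric.Interpretation?.Rating}");
}
```

The same ChatConfiguration can be shared by every evaluator in this namespace, which is why the next two sketches reuse the `chatConfiguration` variable from this one.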
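GroundednessProEvaluator needs the grounding context that the response is supposed to be based on, supplied through a GroundednessProEvaluatorContext passed as `additionalContext`. A minimal sketch follows; the single-string context constructor mirrors the published samples but should be verified against the installed package version.

```csharp
IEvaluator groundednessEvaluator = new GroundednessProEvaluator();

// The source material the response must stay grounded in.
string groundingContext =
    "The Eiffel Tower is 330 metres tall and was completed in 1889.";

var groundednessMessages = new List<ChatMessage>
{
    new(ChatRole.User, "How tall is the Eiffel Tower?")
};
var groundednessResponse = new ChatResponse(
    new ChatMessage(ChatRole.Assistant, "The Eiffel Tower is 330 metres tall."));

// The context object carries the grounding material to the service.
EvaluationResult groundednessResult = await groundednessEvaluator.EvaluateAsync(
    groundednessMessages,
    groundednessResponse,
    chatConfiguration,
    additionalContext: [new GroundednessProEvaluatorContext(groundingContext)]);
```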
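UngroundedAttributesEvaluator follows the same pattern: UngroundedAttributesEvaluatorContext carries the grounding context, and the evaluator checks whether the response infers human attributes (such as an emotional state) that the context does not support. The context constructor shown here is an assumption based on the shape of the other context types in this namespace.

```csharp
IEvaluator attributesEvaluator = new UngroundedAttributesEvaluator();

// A transcript that never states how Speaker B felt.
string transcript =
    "Speaker A: The meeting started an hour late. Speaker B: I waited in the lobby.";

var attributesMessages = new List<ChatMessage>
{
    new(ChatRole.User, "How did Speaker B feel about the delay?")
};
var attributesResponse = new ChatResponse(
    new ChatMessage(ChatRole.Assistant, "Speaker B was furious about the delay."));

// The result indicates whether the response asserts attributes
// (here, anger) that are not grounded in the supplied transcript.
EvaluationResult attributesResult = await attributesEvaluator.EvaluateAsync(
    attributesMessages,
    attributesResponse,
    chatConfiguration,
    additionalContext: [new UngroundedAttributesEvaluatorContext(transcript)]);
```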