SelfHarmEvaluator Class

Definition

An IEvaluator that utilizes the Azure AI Foundry Evaluation service to evaluate responses produced by an AI model for the presence of content that indicates self-harm.

public ref class SelfHarmEvaluator sealed : Microsoft::Extensions::AI::Evaluation::Safety::ContentHarmEvaluator
public sealed class SelfHarmEvaluator : Microsoft.Extensions.AI.Evaluation.Safety.ContentHarmEvaluator
type SelfHarmEvaluator = class
    inherit ContentHarmEvaluator
Public NotInheritable Class SelfHarmEvaluator
Inherits ContentHarmEvaluator
Inheritance

Object → ContentSafetyEvaluator → ContentHarmEvaluator → SelfHarmEvaluator

Remarks

SelfHarmEvaluator returns a NumericMetric with a value between 0 and 7, where 0 indicates an excellent score and 7 indicates a poor score.

Note that SelfHarmEvaluator can detect harmful content in both image-based and text-based responses. Supported image file formats include JPG/JPEG, PNG, and GIF. Other modalities, such as audio and video, are currently not supported.
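The following sketch shows one way to run the evaluator end to end. It assumes a ChatConfiguration wired to the Azure AI Foundry Evaluation service via a ContentSafetyServiceConfiguration; the subscription, resource group, and project values are placeholders, and the exact configuration parameters may differ in your environment.

using Azure.Identity;
using Microsoft.Extensions.AI;
using Microsoft.Extensions.AI.Evaluation;
using Microsoft.Extensions.AI.Evaluation.Safety;

// Assumption: ContentSafetyServiceConfiguration connects safety evaluators
// to an Azure AI Foundry project; replace the placeholder values below.
ChatConfiguration chatConfiguration =
    new ContentSafetyServiceConfiguration(
        credential: new DefaultAzureCredential(),
        subscriptionId: "<subscription-id>",
        resourceGroupName: "<resource-group>",
        projectName: "<project-name>")
    .ToChatConfiguration();

IEvaluator evaluator = new SelfHarmEvaluator();

var messages = new[] { new ChatMessage(ChatRole.User, "How do I stay motivated?") };
var modelResponse = new ChatResponse(
    new ChatMessage(ChatRole.Assistant, "Try setting small, achievable goals."));

EvaluationResult result =
    await evaluator.EvaluateAsync(messages, modelResponse, chatConfiguration);

// The evaluator reports a NumericMetric scored 0 (excellent) to 7 (poor).
NumericMetric metric = result.Get<NumericMetric>(SelfHarmEvaluator.SelfHarmMetricName);
Console.WriteLine($"{metric.Name}: {metric.Value}");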

Constructors

SelfHarmEvaluator()

Initializes a new instance of the SelfHarmEvaluator class.

Properties

EvaluationMetricNames

Gets the Names of the EvaluationMetrics produced by this IEvaluator.

(Inherited from ContentSafetyEvaluator)
SelfHarmMetricName

Gets the Name of the NumericMetric returned by SelfHarmEvaluator.
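As a small illustration of how these properties fit together: the name exposed by SelfHarmMetricName appears in EvaluationMetricNames, and it is the key under which the evaluator's NumericMetric is stored in an EvaluationResult. A minimal sketch:

using Microsoft.Extensions.AI.Evaluation;
using Microsoft.Extensions.AI.Evaluation.Safety;

IEvaluator evaluator = new SelfHarmEvaluator();

// EvaluationMetricNames lists every metric name this evaluator produces;
// for SelfHarmEvaluator this includes the self-harm metric name.
foreach (string name in evaluator.EvaluationMetricNames)
{
    Console.WriteLine(name);
}

// The same name is the lookup key for the metric in an EvaluationResult:
// result.Get<NumericMetric>(SelfHarmEvaluator.SelfHarmMetricName)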

Methods

EvaluateAsync(IEnumerable<ChatMessage>, ChatResponse, ChatConfiguration, IEnumerable<EvaluationContext>, CancellationToken)

Evaluates the supplied modelResponse using the Azure AI Foundry Evaluation service and returns an EvaluationResult containing one or more EvaluationMetrics.

(Inherited from ContentHarmEvaluator)
EvaluateContentSafetyAsync(IChatClient, IEnumerable<ChatMessage>, ChatResponse, IEnumerable<EvaluationContext>, String, Boolean, CancellationToken)

Evaluates the supplied modelResponse using the Azure AI Foundry Evaluation service and returns an EvaluationResult containing one or more EvaluationMetrics.

(Inherited from ContentSafetyEvaluator)
FilterAdditionalContext(IEnumerable<EvaluationContext>)

Filters the EvaluationContexts supplied by the caller via additionalContext down to just the EvaluationContexts that are relevant to the evaluation being performed by this ContentSafetyEvaluator.

(Inherited from ContentSafetyEvaluator)
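A minimal sketch of calling the inherited EvaluateAsync method directly, assuming the chatConfiguration created in the Remarks sketch above; the conversation content is illustrative:

using Microsoft.Extensions.AI;
using Microsoft.Extensions.AI.Evaluation;
using Microsoft.Extensions.AI.Evaluation.Safety;

IEvaluator evaluator = new SelfHarmEvaluator();

var messages = new List<ChatMessage>
{
    new(ChatRole.User, "I've been feeling really low lately.")
};

var modelResponse = new ChatResponse(
    new ChatMessage(ChatRole.Assistant,
        "I'm sorry you're feeling this way. Consider talking to someone you trust."));

// Bound the remote service call with a timeout via the cancellation token.
using var cts = new CancellationTokenSource(TimeSpan.FromSeconds(30));

EvaluationResult result = await evaluator.EvaluateAsync(
    messages,
    modelResponse,
    chatConfiguration, // assumed: created as in the Remarks sketch above
    additionalContext: null,
    cancellationToken: cts.Token);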

Extension Methods

EvaluateAsync(IEvaluator, ChatMessage, ChatMessage, ChatConfiguration, IEnumerable<EvaluationContext>, CancellationToken)

Evaluates the supplied modelResponse and returns an EvaluationResult containing one or more EvaluationMetrics.

EvaluateAsync(IEvaluator, ChatMessage, ChatResponse, ChatConfiguration, IEnumerable<EvaluationContext>, CancellationToken)

Evaluates the supplied modelResponse and returns an EvaluationResult containing one or more EvaluationMetrics.

EvaluateAsync(IEvaluator, ChatMessage, ChatConfiguration, IEnumerable<EvaluationContext>, CancellationToken)

Evaluates the supplied modelResponse and returns an EvaluationResult containing one or more EvaluationMetrics.

EvaluateAsync(IEvaluator, ChatResponse, ChatConfiguration, IEnumerable<EvaluationContext>, CancellationToken)

Evaluates the supplied modelResponse and returns an EvaluationResult containing one or more EvaluationMetrics.

EvaluateAsync(IEvaluator, String, ChatConfiguration, IEnumerable<EvaluationContext>, CancellationToken)

Evaluates the supplied modelResponse and returns an EvaluationResult containing one or more EvaluationMetrics.

EvaluateAsync(IEvaluator, String, String, ChatConfiguration, IEnumerable<EvaluationContext>, CancellationToken)

Evaluates the supplied modelResponse and returns an EvaluationResult containing one or more EvaluationMetrics.
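For quick checks, the string-based overloads avoid constructing ChatMessage instances by hand. A sketch, again assuming the evaluator and chatConfiguration from the earlier examples:

// The extension wraps the two strings into a user message and a model
// response before invoking the evaluator.
EvaluationResult result = await evaluator.EvaluateAsync(
    "How can I cope with stress?",
    "Regular sleep, exercise, and talking to someone you trust can all help.",
    chatConfiguration);

NumericMetric metric = result.Get<NumericMetric>(SelfHarmEvaluator.SelfHarmMetricName);
Console.WriteLine($"Self-harm score: {metric.Value} (0 = excellent, 7 = poor)");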

Applies to