Share via


FAQ for the conversational agent creation experience

These frequently asked questions (FAQ) describe the AI impact of the conversational agent creation experience in Copilot Studio.

What is the conversational agent creation experience?

The conversational agent creation experience is used to help you create a custom agent or Copilot agent. Through the conversation, it predicts the agent's name, description, and instructions, it supports adding knowledge sources, and generates suggested prompts (for Copilot agents).

What are the capabilities of the conversational agent creation experience?

You can get started quickly with a custom agent or Copilot agent configuration through a natural language interface. The system updates the configuration based on your input during the conversation.

What is the conversational agent creation experience's intended use?

You can use this experience to begin your initial agent configuration.

How was the conversational agent creation experience evaluated, and what metrics are used to measure performance?

We evaluated the system for accuracy of how well the predicted configuration represented the requests through the conversation to ensure quality. We also tested to ensure the system doesn't produce harmful or malicious content.

What are the limitations of the conversational agent creation experience, and how can users minimize the impact of limitations when using it?

  • The conversational agent creation experience only supports certain languages. You might be able to use other languages, but the answers generated might be inconsistent or unexpected.

  • This experience can only be used to configure:

    • The name
    • The description
    • Instructions that create the agent
    • A subset of the supported knowledge source types
    • Suggested prompts for Copilot agents
  • See the Responsible AI FAQ for generative answers for other considerations and limitations when using generative answers in agents you create with this feature.

What operational factors and settings allow for effective and responsible use of the conversational agent creation experience?

You can use natural language to converse with the system over chat, or you can directly edit the configuration manually. If you edit manually, the system might update your agent's configuration with additional information as you continue the conversation.

What protections are in place within Copilot Studio for responsible AI?

Generative answers include various protections to ensure admins, makers, and users enjoy a safe, compliant experience. Admins have full control over the features in their tenant and can always turn off the ability to publish agents with generative answers in your organization. Makers can add custom instructions to influence the types of responses their agents return. For more information about best practices for writing custom instructions, see Use prompt modification to provide custom instructions to your agent.

Makers can also limit the knowledge sources that agents can use to answer questions. To enable agents to answer questions outside the scope of their configured knowledge sources, makers can turn on AI General Knowledge feature. To limit agents to only answer questions to the scope of their configured knowledge sources, makers should turn off this feature.

Copilot Studio also applies content moderation policies on all generative AI requests to protect admins, makers, and users against offensive or harmful content. These content moderation policies also extend to malicious attempts at jailbreaking, prompt injection, prompt exfiltration, and copyright infringement. All content is checked twice: first during user input and again when the agent is about to respond. If the system finds harmful, offensive, or malicious content, it prevents your agent from responding.

Finally, it's a best practice to communicate to users that the agent uses artificial intelligence, therefore the following default message informs users: "Just so you are aware, I sometimes use AI to answer your questions."