Use Agent Bricks: Knowledge Assistant to create a high-quality chatbot over your documents

Important

This feature is in Beta.

This page describes how to use Agent Bricks: Knowledge Assistant to create a question-and-answer chatbot over your documents and improve its quality based on natural language feedback from your subject matter experts.

Agent Bricks provides a simple, no-code approach to build and optimize ___domain-specific, high-quality AI agent systems for common AI use cases.

What is Agent Bricks: Knowledge Assistant?

Use Agent Bricks: Knowledge Assistant to create a chatbot that answers questions about your documents with high-quality, cited responses. Knowledge Assistant uses advanced AI and follows a retrieval-augmented generation (RAG) approach to deliver accurate, reliable answers based on the ___domain-specialized knowledge you provide it.

Agent Bricks: Knowledge Assistant is ideal for supporting the following use cases:

  • Answer user questions based on product documentation.
  • Answer employee questions related to HR policies.
  • Answer customer inquiries based on support knowledge bases.

Knowledge Assistant enables you to improve the chat agent's quality and adjust its behavior based on natural language feedback from your subject matter experts. Provide questions for a labeling session and send the session to experts to review in the Review App. Their responses provide labeled data that helps optimize the agent's performance.

Agent Bricks: Knowledge Assistant creates an end-to-end RAG agent endpoint that you can use downstream for your applications. For example, the image below shows how you can interact with the endpoint by chatting with it in AI Playground. Ask the agent questions related to your documents, and the agent will answer with citations.

Knowledge Assistant endpoint in Playground.
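
Because the agent is exposed as a standard model serving endpoint, you can also query it programmatically from your applications. The following is a minimal sketch using the OpenAI-compatible Python client; the workspace URL and endpoint name are placeholders for your own values.

```python
import os
from openai import OpenAI

# The Knowledge Assistant endpoint behaves like any Databricks model
# serving endpoint. The workspace URL and endpoint name are placeholders.
client = OpenAI(
    api_key=os.environ["DATABRICKS_TOKEN"],
    base_url="https://<your-workspace>/serving-endpoints",
)

response = client.chat.completions.create(
    model="ka-1a2b3c4d-endpoint",  # hypothetical endpoint name
    messages=[{"role": "user", "content": "How do I reset my password?"}],
)
print(response.choices[0].message.content)  # answer text, including citations
```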

Requirements

  • A serverless-supported workspace.
  • Input data ready for the agent to use. You can provide either:
    • Files in a Unity Catalog volume or volume directory. Supported file types are txt, pdf, md, ppt/pptx, and doc/docx.
    • A vector search index.
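
For example, if your source documents are local files, the following is a minimal sketch of staging them in a Unity Catalog volume with the Databricks Python SDK. The volume path and file name are hypothetical.

```python
from databricks.sdk import WorkspaceClient

w = WorkspaceClient()  # authenticates from the environment or ~/.databrickscfg

# Hypothetical volume path; replace with your own catalog/schema/volume.
volume_dir = "/Volumes/main/docs/knowledge_files"

# Supported file types: txt, pdf, md, ppt/pptx, and doc/docx.
# Databricks recommends keeping each file under 32 MB.
with open("product_guide.pdf", "rb") as f:
    w.files.upload(f"{volume_dir}/product_guide.pdf", f, overwrite=True)
```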

Create a knowledge assistant agent

In the left navigation pane of your workspace, click Agents, then click Knowledge Assistant.

Step 1: Configure your agent

On the Configure tab, configure your agent and provide knowledge sources for it to use to answer questions.

Configure knowledge assistant.

  1. In the Name field, enter a name for your agent.

  2. In the Description field, describe what your agent can do.

  3. In the Schema field, select the Unity Catalog catalog and schema to save your evaluation datasets.

  4. In the Knowledge source panel, add your knowledge source. You can choose to provide either Unity Catalog files or a vector search index.

    UC Files

    For UC files, the following file types are supported: txt, pdf, md, ppt/pptx, and doc/docx. Databricks recommends using files smaller than 32 MB.

    Add UC files.

    1. Under Type, select UC Files.
    2. In the Source field, select the Unity Catalog volume or volume directory that contains your files.
    3. In the Name field, enter a name for your knowledge source.
    4. Under Describe the content, describe what content the knowledge source contains to help the agent understand when to use this data source.

    Vector Search Index

    Add vector search index.

    1. Under Type, select Vector Search Index.
    2. In the Source field, select the vector search index you want to provide the agent.
    3. In the Doc URI Column field, select the column that contains a link or reference to where the information came from. The agent uses this in its citations.
    4. In the Text Column field, specify the column that contains the raw text you want the agent to retrieve.
    5. In the Name field, enter a name for your knowledge source.
    6. Under Describe the content, describe what content the knowledge source contains to help the agent understand when to use this data source.
  5. (Optional) If you would like to add more knowledge sources, click Add knowledge source. You can provide up to 10 knowledge sources.

  6. (Optional) In the Instructions field, specify guidelines for how the agent should respond.

    Add instructions.

  7. Click Create Agent.

It can take up to a few hours to create your agent and sync the knowledge sources you provided. The right side panel will update with links to the deployed agent, experiment, and synced knowledge sources.

Updated right panel when agent is ready.
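
To check readiness programmatically instead of watching the panel, a minimal sketch with the Databricks Python SDK follows; the endpoint name is hypothetical, so copy the real one from the right side panel.

```python
from databricks.sdk import WorkspaceClient

w = WorkspaceClient()

# Hypothetical endpoint name; use the one linked in the right side panel.
endpoint = w.serving_endpoints.get("ka-1a2b3c4d-endpoint")
print(endpoint.state.ready)          # READY once the agent is deployed
print(endpoint.state.config_update)  # NOT_UPDATING once syncing has finished
```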

Step 2: Test your agent

After your agent has finished building, you can test it by trying it out in AI Playground. The agent should respond with citations for questions related to its knowledge sources.

  1. Under Deployed agent in the right side panel, click Try in Playground. This opens up AI Playground with your agent endpoint connected. Here, you can chat with your agent and review its responses.

    Try the agent in AI Playground.

  2. If you have AI assistive features enabled, you can enable AI Judge and Synthetic question generation to help you evaluate your agent.

  3. Enter a question for your agent.

  4. Evaluate its response:

    Test the agent and evaluate its response in AI Playground.

    1. Click View thoughts to see how your agent approached the question.
    2. Click the box under Sources to see which files the agent cited. This opens the file in a side panel for you to review.
    3. The AI Judge can help quickly evaluate the response for groundedness, safety, and relevance.
    4. Review Suggested questions for additional questions to ask your agent.
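
You can also run similar judge-based checks programmatically with Mosaic AI Agent Evaluation. The following is a minimal sketch, assuming the databricks-agents package is installed in a Databricks notebook; the endpoint name and questions are hypothetical.

```python
import mlflow
import pandas as pd

# Hypothetical evaluation questions; "request" is the input column that
# Agent Evaluation expects.
eval_df = pd.DataFrame(
    {
        "request": [
            "How do I reset my password?",
            "Which file types does the assistant support?",
        ]
    }
)

# Built-in LLM judges score each response for qualities such as
# groundedness, relevance, and safety, mirroring the AI Judge in Playground.
results = mlflow.evaluate(
    data=eval_df,
    model="endpoints:/ka-1a2b3c4d-endpoint",  # hypothetical endpoint name
    model_type="databricks-agent",
)
print(results.metrics)
```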

If you're satisfied with your agent's performance, continue using the agent as-is. To improve its responses, continue to Step 3.

Step 3: Improve quality

Agent Bricks: Knowledge Assistant can adjust the agent's behavior based on natural language feedback. Gather human feedback through a labeling session to collect labeled data, and Agent Bricks retrains and optimizes the agent from that data.

In the Improve quality tab, add questions and start a labeling session.

  1. Add questions to include in your labeling session:

    1. Click + Add to add a question.
    2. In the Add a question modal, enter your question.
    3. Click Add. The question should appear in the UI.
    4. Repeat until you’ve added all the questions you want to evaluate.
    5. To delete a question, click the kebab menu, then Delete.

    Databricks recommends adding at least 20 questions for a labeling session to ensure enough labeled data is collected.

    Add questions for labeling session.

  2. After you’ve finished adding your questions, send the questions to experts for review to help you build a high-quality labeled dataset. On the right, click Start labeling session.

    When your labeling session is ready, the UI will update as shown below.

    Active labeling session.

  3. Share the review app with experts to gather feedback.

    To learn more about labeling sessions and the review app, see Use the review app for human reviews of a gen AI app (legacy).

    Note

    In order for experts to access the labeling session, you need to grant them the following permissions:

    • CAN QUERY permission to the endpoint
    • EDIT permission to the experiment
    • USE CATALOG, USE SCHEMA, and SELECT permissions to the schema
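
    For the Unity Catalog permissions, the following is a minimal sketch using SQL from a Databricks notebook. The catalog, schema, and group names are hypothetical; grant the endpoint and experiment permissions separately, for example from each asset's Permissions UI.

    ```python
    # Grant the Unity Catalog privileges reviewers need on the evaluation schema.
    # "main", "agent_eval", and "sme-reviewers" are hypothetical names.
    spark.sql("GRANT USE CATALOG ON CATALOG main TO `sme-reviewers`")
    spark.sql("GRANT USE SCHEMA ON SCHEMA main.agent_eval TO `sme-reviewers`")
    spark.sql("GRANT SELECT ON SCHEMA main.agent_eval TO `sme-reviewers`")
    ```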
  4. To label the data yourself, click Open labeling session.

    This opens the review app in a new tab. As a reviewer:

    1. Click Start review. For each question, you see the question and the agent's response.

    2. On the left side, review the question and answer. You can click View thoughts to see how the agent is thinking about the question.

    3. On the right side, under Expectations, review any existing guidelines and add more as you see fit.

      1. To add a guideline, click + Add input.
      2. Enter the guideline in the text box that appears.
      3. Click Save.
    4. Under Feedback, enter your feedback, then click Save.

    5. When you're done reviewing a question, click Next unreviewed > in the top right to move on to the next one.

    6. When you're done reviewing all questions, exit the review app.

      Review questions and answers in labeling session.

  5. When your reviewers are done with their labeling sessions, return to your agent’s Improve quality tab.

  6. Click Merge to merge feedback from the experts into your labeled dataset. The table of questions on the right side will update with the merged feedback.

    Merged feedback from labeling session.

  7. Review the feedback records.

  8. Test the agent again in AI Playground to see its improved performance. If needed, start another labeling session to gather more labeled data.
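
Because the evaluation dataset is saved to the Unity Catalog schema you selected in Step 1, you can also inspect the merged feedback records directly. The table name below is hypothetical; check that schema for the dataset table Agent Bricks created.

```python
# Hypothetical table name; look in the catalog and schema you chose in
# Step 1 for the evaluation dataset that Agent Bricks created there.
feedback_df = spark.table("main.agent_eval.knowledge_assistant_eval_dataset")
display(feedback_df.limit(10))  # review merged questions and feedback in a notebook
```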

Limitations

  • Databricks recommends using files smaller than 32 MB for your source documents.
  • Workspaces that use Azure Private Link, including storage behind PrivateLink, are not supported.
  • Unity Catalog tables are not supported.