
Building smarter Copilots with Copilot Studio and Azure OpenAI integration


When designing and publishing a custom copilot (or chatbot), the ability to quickly and accurately present information is crucial. Copilot Studio is a powerful tool that helps makers build their own copilots and lets users find answers in documents and websites thanks to the generative answers feature.

However, sometimes the information we need is not readily available in these sources. That is where the integration of Copilot Studio and Azure OpenAI comes in. By leveraging the power of Azure OpenAI, Copilot Studio can now gather data from external sources, such as databases, to provide even more comprehensive and accurate responses. In this blog post, we will show step by step how to configure your Azure OpenAI model with your data, and how to use it from within your custom copilot.

1. Create your model in Azure OpenAI

First of all, you need to create and deploy an Azure OpenAI Service resource. Remember that, currently, in order to use Azure OpenAI in your Azure subscription you need to submit an application by completing this form.

Once the Azure OpenAI Service resource is created, we can start creating our deployments using Azure AI Studio (Preview). In order to use our own data with the deployment, we must select the GPT-4 or GPT-4-32k model and complete some other required fields, such as the model version or the tokens-per-minute rate limit:

At this point, we are ready to test our model using, for instance, the Chat playground. In this example, we asked our model to write a song about Power Platform and licensing:

Of course, we recommend learning the basics of prompt engineering to get the most out of the model.
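The deployment can also be called programmatically, outside the playground. Below is a minimal sketch using the openai Python package (v1+); the endpoint, the API key and the northwind-model deployment name are placeholders for your own values:

```python
from openai import AzureOpenAI

# Placeholder endpoint, key and deployment name -- replace with your own values.
client = AzureOpenAI(
    azure_endpoint="https://<your-resource>.openai.azure.com",
    api_key="<your-api-key>",
    api_version="2023-06-01-preview",
)

response = client.chat.completions.create(
    model="northwind-model",  # the deployment name created in Azure AI Studio
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Write a short song about Power Platform and licensing."},
    ],
    temperature=0.7,
    max_tokens=400,
)

print(response.choices[0].message.content)
```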

2. Adding custom data to your model

Wouldn't it be great to ground the generated results in our own data? Using our own data, stored in files, websites and databases, provides a solid foundation for the results generated by the model. It also ensures that the model is grounded in the specific context and information relevant to our organization, leading to more accurate and reliable results and, ultimately, helping us make better-informed decisions and drive innovation.

2.1. Import data

In our case we want to use the Customers table in the Northwind database, which is stored in an Azure SQL database instance. In order to use it from within our model, we also need to:

  1. Create an Azure AI Search instance to ingest the data stored in the database and provide search results. You can find an excellent step-by-step tutorial on how to create it here.
  2. Connect our data to the Azure AI Search instance:
    • On the Overview page, click on the Import data button in the Connect your data section.
    • Select Azure SQL Database as the data source, and create or choose an existing connection to the Northwind database, entering the credentials and testing the connection. If everything is correct, you will be able to select which data you want to import (in our case, the Customers table).
  3. Customize the search index (we have omitted the Add cognitive skills (optional) section, as we are not going to use it). Basically, we have to define which fields should be added to the index and whether they are retrievable, filterable, sortable and/or searchable.

In this case we want to return all fields in the search results (retrievable), but we only want to filter results by CompanyName, ContactName, City and Country (filterable).

  4. Finally, we have to configure the indexer: how often do we want to extract data and populate the previously created index? We can define an hourly, daily or custom schedule, but in any case, it would be ideal to have a column that can be used to determine whether a row has changed since the last indexer run (like a rowversion column).

The import data configuration is done and now we only need to wait for the indexer to run. If it succeeds in populating the index, we should see a screen like the following:

The indexer execution was successful and it crawled 91 documents (the number of rows in the Customers table).
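For reference, the index the wizard produces is roughly equivalent to the following azure-search-documents sketch. The field list comes from the Northwind Customers table; the key field and the exact attribute choices are assumptions based on the selections described above:

```python
from azure.core.credentials import AzureKeyCredential
from azure.search.documents.indexes import SearchIndexClient
from azure.search.documents.indexes.models import (
    SearchFieldDataType,
    SearchIndex,
    SearchableField,
    SimpleField,
)

# Placeholder endpoint and admin key -- replace with your Azure AI Search service values.
index_client = SearchIndexClient(
    endpoint="https://<your-search-service>.search.windows.net",
    credential=AzureKeyCredential("<admin-api-key>"),
)

# All fields are retrievable by default; only the four fields below are marked filterable.
fields = [
    SimpleField(name="CustomerID", type=SearchFieldDataType.String, key=True),
    SearchableField(name="CompanyName", type=SearchFieldDataType.String, filterable=True),
    SearchableField(name="ContactName", type=SearchFieldDataType.String, filterable=True),
    SearchableField(name="Address", type=SearchFieldDataType.String),
    SearchableField(name="City", type=SearchFieldDataType.String, filterable=True),
    SearchableField(name="Country", type=SearchFieldDataType.String, filterable=True),
    SimpleField(name="Phone", type=SearchFieldDataType.String),
]

index_client.create_or_update_index(
    SearchIndex(name="northwind-customers-index", fields=fields)
)
```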

2.2. Add data to the model

In the Chat playground we need to add a data source before we start asking the model questions. As mentioned in previous paragraphs, you can use different data sources: documents (stored in Azure Blob Storage or uploaded to the model itself), websites, and databases (stored in Azure Cosmos DB for MongoDB vCore or Azure AI Search). In our case we will do the following:

  1. Select Azure AI Search as the data source:
  2. Select the Azure AI Search service and the Azure AI Search index that we want to use.

You can’t use the Azure AI Search service free tier to add data to your model. Moreover, it is not possible to change the tier after the resource has been created, so choose it carefully when creating it.

  3. Select the search type, which in this case can only be keyword. We could also create a semantic ranker in the Azure AI Search service and use it to perform semantic search instead.
  4. Click on the Save and close button to finish the configuration.
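For completeness, the same grounding can also be exercised outside the playground: the chat completions API accepts an Azure AI Search data source in the request. The sketch below assumes an API version that supports the data_sources field (for example 2024-02-01; earlier previews used a slightly different extensions payload), and the question is only an illustrative example:

```python
from openai import AzureOpenAI

# Placeholder endpoints and keys -- replace with your own values.
client = AzureOpenAI(
    azure_endpoint="https://<your-resource>.openai.azure.com",
    api_key="<your-openai-api-key>",
    api_version="2024-02-01",
)

response = client.chat.completions.create(
    model="northwind-model",
    messages=[
        {"role": "user", "content": "What is the address of the customer Around the Horn?"}
    ],
    extra_body={
        "data_sources": [
            {
                "type": "azure_search",
                "parameters": {
                    "endpoint": "https://<your-search-service>.search.windows.net",
                    "index_name": "northwind-customers-index",
                    "authentication": {"type": "api_key", "key": "<search-api-key>"},
                },
            }
        ]
    },
)

print(response.choices[0].message.content)
```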

Now we are ready to test the model with our own data:

Can you see something interesting in this screen capture? The address field contains not only the address, but also the city and the country, which are stored in different columns in the Customers table. Generative AI comes into play!

Anyway, let’s check whether the information is correct or not in SQL Server:

We can see that results are the same (obviously)!
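If you prefer verifying it from code instead of SQL Server Management Studio, a quick pyodbc query against the Customers table does the same job; the connection string and the customer name below are just placeholders:

```python
import pyodbc

# Placeholder connection string -- point it at the Azure SQL database hosting Northwind.
conn = pyodbc.connect(
    "Driver={ODBC Driver 18 for SQL Server};"
    "Server=tcp:<your-server>.database.windows.net,1433;"
    "Database=Northwind;Uid=<user>;Pwd=<password>;Encrypt=yes;"
)

cursor = conn.cursor()
cursor.execute(
    "SELECT CompanyName, Address, City, Country FROM Customers WHERE CompanyName = ?",
    "Around the Horn",  # example customer
)
print(cursor.fetchone())
```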

3. Using your model in Copilot Studio

3.1. Create a new copilot

Now we are ready to deploy our model as a new copilot in Copilot Studio, a feature that is currently in preview.

A pop-up screen will be shown, asking us to agree to connect our Azure OpenAI subscription with our Copilot Studio tenant; data will therefore be shared between both services, which may lead to it being processed outside our Copilot Studio tenant’s geographic region.

If we continue in Copilot Studio, we can start working on the copilot. Basically, we have to create a new copilot and then enable the option to boost conversations with generative answers.

3.2. Add the connection to Azure OpenAI model

Edit the Conversational boosting system topic, which will be triggered when the copilot cannot match the user’s intent to any topic.

In the Create generative answers node we can select different data sources, such as a public website, a SharePoint site, or an Azure OpenAI service on our data. That is great news: if we select the latter, an existing connection will be shown to connect to it:

3.3. Configure the connection to the Azure OpenAI model

Once we select the connection, we need to configure the connection properties, where we mainly specify the maximum number of tokens to use in a response and how much risk the model takes when giving an answer (using the Temperature and Top P parameters). In this case, we used the following values in the General tab:

  • Deployment: northwind-model (name of the deployment we created in Azure OpenAI)
  • API version: 2023-06-01-preview
  • Maximum tokens in response: 800
  • Temperature: 0
  • Top P: 1

Please be careful when adding those parameters: Maximum tokens in response, Temperature and Top P are numbers, so add them as numbers, not strings!

In the model data tab we need to configure the index and data columns that we want to use in the results:

  • Index name: northwind-customers-index (name of the index we created on the Azure AI Search service)
  • Title: CompanyName
  • Content data: ContactName, Address, City, Country, Phone

We could also use URL and filename fields if we had links to websites or files in any of the columns within the Customers table.

3.4. Test the copilot

Once we save the changes, we can test our copilot, even before publishing it.

We get the same result as when we tested the model in Azure OpenAI, and thanks to the Copilot Studio designer we can see which topic was triggered and which nodes were executed.

When creating the model in Azure OpenAI, remember to use the GPT-4-32k model; otherwise, the Copilot Studio integration will not work!

Summary

The integration of Copilot Studio with Azure OpenAI on your own data stands as a remarkable feature with immense potential, allowing for personalized data grounding.

However, its effectiveness is contingent on further refinement. Enhancements are needed, particularly around factors such as the choice of Azure OpenAI model, the tier of the Azure AI Search service, and the connection properties within Copilot Studio.

Strengthening these aspects would undoubtedly elevate the functionality and usability of the integration, ensuring a more seamless and tailored experience for users and makers.

Tags: Azure, Azure OpenAI, Chatbot, Copilot, Copilot Studio, Power Virtual Agents
