About
Copilot for Power BI is a feature introduced by Microsoft to provide users with AI-powered assistance while creating reports and dashboards in Power BI Desktop and the Power BI service. It uses natural language processing (NLP) to understand user queries and provide relevant suggestions and guidance in real time as users build their data visualizations.
This feature is designed to make it easier for users, especially those who may not have extensive experience with data analytics or Power BI, to create effective and insightful reports. Copilot can suggest visualizations, recommend data transformations, and offer explanations or insights into the data being analyzed.
This feature needs to be enabled in the tenant settings of the Admin Portal, either for the entire organization or scoped to specific security (AD) groups.
Pricing/Consumption
Requests to Copilot consume Fabric Capacity Units (CUs). Copilot usage is measured by the number of tokens processed; tokens can be thought of as pieces of words, and as a reference, 1,000 tokens represent roughly 750 words. Fabric Copilot cost is calculated per 1,000 tokens, with input and output tokens consumed at different rates. The table below shows how many CUs are consumed by Copilot usage.
| Operation in Metrics App | Description | Operation Unit of Measure | Consumption rate |
| --- | --- | --- | --- |
| Copilot in Fabric | The input prompt | Per 1,000 tokens | 400 CU seconds |
| Copilot in Fabric | The output completion | Per 1,000 tokens | 1,200 CU seconds |
Capacity Units (CUs) in pricing refer to the computing resources allocated to your Fabric/Power BI capacity in the cloud. They are a measure of the performance and scalability of your Power BI environment.
If you’re using Copilot for Power BI and your request involves 500 input tokens and 100 output tokens, you’ll be charged a total of (500 × 400 + 100 × 1,200) / 1,000 = 320 CU seconds in Fabric.
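As a quick sanity check of this arithmetic, here is a minimal sketch that computes the CU seconds for a request from its token counts, assuming only the consumption rates listed in the table above (the function and constant names are illustrative, not part of any Fabric API):

```python
# Rates taken from the Copilot consumption table above.
INPUT_CU_SECONDS_PER_1K_TOKENS = 400    # input prompt
OUTPUT_CU_SECONDS_PER_1K_TOKENS = 1200  # output completion

def copilot_cu_seconds(input_tokens: int, output_tokens: int) -> float:
    """Total Fabric CU seconds consumed by a single Copilot request."""
    return (input_tokens * INPUT_CU_SECONDS_PER_1K_TOKENS
            + output_tokens * OUTPUT_CU_SECONDS_PER_1K_TOKENS) / 1000

print(copilot_cu_seconds(500, 100))  # 320.0 -- matches the worked example above
```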
The cost of Fabric Capacity Units can vary depending on the region. Regardless of the consumption region where GPU capacity is utilized, customers are billed based on the Fabric Capacity Units pricing in their billing region.
For example, if a customer’s requests are mapped from region 1 to region 2, with region 1 being the billing region and region 2 being the consumption region, the customer is charged based on the pricing in region 1.
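To make the billing-region rule concrete, the sketch below converts CU seconds into an approximate charge at an assumed pay-as-you-go price per CU hour for the billing region; the rate used is a hypothetical figure for illustration, not an actual regional price:

```python
# Hypothetical pay-as-you-go rate for the *billing* region (region 1 in the
# example above); the consumption region's rate does not affect the charge.
BILLING_REGION_PRICE_PER_CU_HOUR = 0.20  # illustrative USD figure only

def estimated_cost_usd(cu_seconds: float) -> float:
    """Approximate cost of the consumed CU seconds at the billing region's rate."""
    return (cu_seconds / 3600) * BILLING_REGION_PRICE_PER_CU_HOUR

# Using the 320 CU seconds from the earlier token example:
print(round(estimated_cost_usd(320), 4))  # ~0.0178 USD at the assumed rate
```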
Data Security
Data such as prompts, grounding data included in prompts, and AI output is processed and temporarily stored by Microsoft and may be reviewed by Microsoft employees for abuse monitoring.
To generate a response, Copilot uses:
The user's prompt or input and, when appropriate,
Additional data that is retrieved through the grounding process.
This information is sent to Azure OpenAI Service, where it's processed and an output is generated. Therefore, data processed by Azure OpenAI can include:
The user's prompt or input.
Grounding data.
The AI response or output.
Grounding data may include a combination of dataset schema, specific data points, and other information relevant to the user's current task. Review each experience section for details on what data is accessible to Copilot features in that scenario.
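For illustration only, the sketch below shows how a user prompt and grounding data (here, a simplified dataset schema) could be combined into a single Azure OpenAI chat request. This is not Power BI's internal implementation; the endpoint, deployment name, and schema are placeholders.

```python
from openai import AzureOpenAI  # openai>=1.x

# Placeholders -- not real endpoints or keys.
client = AzureOpenAI(
    azure_endpoint="https://<your-resource>.openai.azure.com",
    api_key="<your-api-key>",
    api_version="2024-02-01",
)

# Grounding data: a simplified schema for the dataset the user is working with.
grounding_data = "Tables: Sales(OrderDate, Region, Revenue); Product(ProductName, Category)"
user_prompt = "Summarize revenue by region for the last quarter."

response = client.chat.completions.create(
    model="<your-gpt-deployment>",  # Azure OpenAI deployment name (placeholder)
    messages=[
        {"role": "system", "content": f"Dataset schema:\n{grounding_data}"},
        {"role": "user", "content": user_prompt},
    ],
)
print(response.choices[0].message.content)
```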
Interactions with Copilot are specific to each user. This means that Copilot can only access data that the current user has permission to access, and its outputs are only visible to that user unless that user shares the output with others, such as sharing a generated Power BI report or generated code. Copilot doesn't use data from other users in the same tenant or other tenants.
Copilot uses Azure OpenAI—not OpenAI's publicly available services—to process all data, including user inputs, grounding data, and Copilot outputs. Copilot currently uses a combination of GPT models, including GPT 3.5. Microsoft hosts the OpenAI models in Microsoft's Azure environment and the Service doesn't interact with any services by OpenAI (for example, ChatGPT or the OpenAI API). Your data isn't used to train models and isn't available to other customers.
Copilot in Power BI Service for Consumers
New Report/Page
Create a blank report by picking any published semantic model. In the Copilot pane, enter a prompt that best describes the business need the report should fulfill; for example, "Create a one-page sales overview showing revenue by region and the top products by profit."
The semantic model you intend to use for a new report should be built following modeling best practices to get the best out of Copilot. The table, column, and measure names in the semantic model should be business-friendly and descriptive so that Azure OpenAI can interpret them correctly.
Summarize Visuals
You can summarize report visuals into bullet points or generate a high-level gist of the contents for quick reference. In most cases, these quick insights generated on top of existing reports can be used for executive reporting and presentations.
Copilot in Power BI Desktop for Developers
If you don't see Copilot in the ribbon in Power BI Desktop, you have to enable it explicitly from the Options dialog (under Preview features).
EU Data Boundary
Customers can configure their service to be in scope for the EU Data Boundary by provisioning their tenant and all Microsoft Fabric capacities in an EU datacenter location. Customer Data and pseudonymized personal data are stored and processed in the EU Data Boundary, aside from specific residual transfers documented in "Services that transfer a subset of Customer Data or pseudonymized personal data out of the EU Data Boundary on an ongoing basis."
Fabric also enables the option to select an Azure region where Customer Data is stored when creating new Microsoft Fabric capacity. The default option listed is your tenant home region. If you select that region, all associated data, including Customer Data, is stored in that Geo. If you select a different region, some Customer Data is still stored in the home Geo. By selecting a region in the EU, Customer Data will be stored in the EU Data Boundary.
MS Fabric Capacity Metrics
Starting from February 2024, you can view the total capacity usage for Copilot under the operation name “Copilot in Fabric” in your Fabric Capacity Metrics App.