Building a Simple Decision-Support Workflow Using Microsoft Fabric

By Syed Hussnain Sherazi | April 8, 2025 | Microsoft Fabric | Decision Workflow | Power BI

A practical walkthrough from raw data to a dashboard your team can use

Many people have heard about Microsoft Fabric and know the main product names: Lakehouse, OneLake, Dataflow Gen2, Shortcuts, and Power BI. What is less obvious is how those pieces fit into a normal working analytics process.

This article walks through a simple retail decision-support workflow in Microsoft Fabric. The aim is to show the flow from raw data to a dashboard, using a scenario that is realistic enough to adapt to your own environment.

What We Are Building

Imagine you work for a retail company. Your team needs to track sales performance across regions, understand which products are underperforming, and get early warnings when a store's revenue is trending below target.

Today, the data lives in a SQL Server database in a datacentre. Someone exports it to Excel every Monday morning. The file gets emailed around. Several people open different versions, and nobody is fully confident in the numbers.

The goal is to replace that manual workflow with one that runs automatically, uses a single source of truth, and gives decision-makers what they need without spreadsheet handoffs.

Microsoft Fabric: A Quick Context

Microsoft Fabric is an all-in-one analytics platform. It brings data engineering, data warehousing, data science, real-time analytics, and business intelligence into one environment. All of it sits on top of OneLake, a unified storage layer that reduces the need to copy data between tools.

You can think of Fabric as Microsoft bringing Azure Data Factory, Azure Synapse, and Power BI closer together under a simpler shared experience.

For this retail use case, we will use:

  • Dataflow Gen2 for ingestion
  • Lakehouse for storage
  • Notebooks / SQL Analytics Endpoint for transformation
  • Semantic Model for business logic
  • Power BI for the dashboard

The Workflow

Microsoft Fabric decision-support workflow
flowchart LR
  subgraph Fabric["Microsoft Fabric workspace"]
    WS["Workspace"]
    LH["Lakehouse"]
    DF["Dataflow Gen2"]
    GOLD["Gold layer tables"]
    SM["Semantic model"]
    PBI["Power BI report"]
  end
  subgraph Business["Business workflow"]
    QUESTION["Business question"]
    REVIEW["Decision review"]
    ACTION["Action and follow-up"]
  end
  QUESTION -->|"defines required data"| WS
  WS --> DF
  DF -->|"ingest to bronze"| LH
  LH -->|"clean and shape"| GOLD
  GOLD --> SM -->|"measures and relationships"| PBI
  PBI --> REVIEW --> ACTION
  SECURITY["Access, sensitivity labels, refresh ownership"] -.-> WS
  SECURITY -.-> SM
  SECURITY -.-> PBI

The workflow has five main steps: create the workspace and Lakehouse, ingest the source data, transform it into business-ready tables, define the semantic model, and publish the dashboard.

Step 1: Create a Workspace and Lakehouse

Everything in Fabric lives inside a Workspace. This is your project space. Create one, give it a clear name such as "Retail Analytics Sales Performance", and assign the right people to it.

Inside the workspace, create a Lakehouse. This will be the central storage and processing environment. A clear name such as `lh_sales_retail` will help the team recognise its purpose.

When you create the Lakehouse, Fabric automatically provides:

  • A Files section for raw files
  • A Tables section for structured Delta tables
  • A SQL Analytics Endpoint for querying with SQL

You do not need to configure these manually. Fabric creates the basic structure for you.
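
Once data starts landing in the Lakehouse (that happens in Step 2), the SQL analytics endpoint lets you explore it with plain T-SQL straight away. A minimal example, assuming the bronze tables created in the next step:

-- Preview the most recently ingested transactions
SELECT TOP 10 *
FROM bronze.sales_transactions
ORDER BY transaction_date DESC;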

Step 2: Ingest Data Using Dataflow Gen2

Next, the data needs to move from SQL Server into the Lakehouse.

Dataflow Gen2 is Fabric's visual data transformation tool. It feels familiar to anyone who has used Power Query, but it is designed for repeatable pipelines that run on a schedule and write to Fabric destinations.

Inside your workspace, create a new Dataflow Gen2. Connect it to the SQL Server data source. For this scenario, we are pulling three tables: `sales_transactions`, `products`, and `stores`.

Configure the important parts carefully:

  • Incremental refresh: Do not pull the full table every time. Set up the dataflow to pull only rows where the transaction date is newer than the last successful run. This keeps the process fast and cost-efficient.
  • Destination: Point each query to a Delta table in the Lakehouse. This becomes the Bronze layer: raw data as it came from the source, without transformations applied.

Once the dataflow is configured, schedule it to run every night at 2am. From that point, the Lakehouse receives fresh data every morning without anyone exporting a spreadsheet.
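
To make the incremental logic concrete, this is roughly the query shape the dataflow pushes to the source on each run. The @last_refresh parameter is illustrative only; when incremental refresh is configured on transaction_date, Dataflow Gen2 manages the watermark for you:

-- Sketch of the incremental pull: only rows newer than the last successful run
SELECT
    transaction_id,
    transaction_date,
    store_id,
    product_id,
    quantity_sold,
    unit_price
FROM dbo.sales_transactions
WHERE transaction_date > @last_refresh;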

Step 3: Transform the Data Into the Gold Layer

Raw data is not ready for business users. It needs to be cleaned, joined, and shaped into structures the dashboard can use.

Fabric gives you two common options: Notebooks for Python or Spark SQL, and a Warehouse for T-SQL. For a reporting workflow like this one, SQL is often simpler to read and maintain. One caveat: the SQL analytics endpoint on a Lakehouse is read-only, so statements that create or write tables need to run from a notebook (or against a Warehouse).

Running in a notebook cell against the Lakehouse, a Spark SQL transformation looks like this:

-- Create a clean sales fact table
CREATE TABLE gold.fact_sales AS
SELECT
    t.transaction_id,
    t.transaction_date,
    t.store_id,
    t.product_id,
    t.quantity_sold,
    t.unit_price,
    t.quantity_sold * t.unit_price AS revenue,
    p.product_name,
    p.category,
    p.cost_price,
    (t.unit_price - p.cost_price) * t.quantity_sold AS gross_margin,
    s.store_name,
    s.region
FROM bronze.sales_transactions t
LEFT JOIN bronze.products p ON t.product_id = p.product_id
LEFT JOIN bronze.stores s ON t.store_id = s.store_id
WHERE t.transaction_date IS NOT NULL
AND t.unit_price > 0;
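
After the table is built, a quick sanity check helps catch join problems early. Because the joins are LEFT JOINs, transactions with no matching product or store pass through silently; a sketch using the columns above:

-- Count fact rows that failed to match a product or store
SELECT
    SUM(CASE WHEN product_name IS NULL THEN 1 ELSE 0 END) AS missing_products,
    SUM(CASE WHEN store_name IS NULL THEN 1 ELSE 0 END) AS missing_stores,
    COUNT(*) AS total_rows
FROM gold.fact_sales;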

This creates a clean table with the fields the dashboard needs. Users do not need to join several tables at report time, and there is less room for ambiguity about product names, regions, or margin calculations.

You can also add pre-calculated aggregations:

-- Daily regional summary for trend analysis
CREATE TABLE gold.daily_regional_summary AS
SELECT
    transaction_date,
    region,
    store_name,
    COUNT(DISTINCT transaction_id) AS transaction_count,
    SUM(revenue) AS total_revenue,
    SUM(gross_margin) AS total_margin,
    SUM(gross_margin) / NULLIF(SUM(revenue), 0) AS margin_pct
FROM gold.fact_sales
GROUP BY transaction_date, region, store_name;
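
This summary table feeds the dashboard's trend visuals, and it also makes ad hoc questions cheap to answer. For example, a rough look at the weakest stores by margin over the most recent week of data (a sketch, queried through the SQL analytics endpoint):

-- Bottom five stores by margin over the last 7 days of data
SELECT TOP 5
    store_name,
    region,
    SUM(total_revenue) AS revenue_7d,
    SUM(total_margin) / NULLIF(SUM(total_revenue), 0) AS margin_pct_7d
FROM gold.daily_regional_summary
WHERE transaction_date >= DATEADD(DAY, -7,
      (SELECT MAX(transaction_date) FROM gold.daily_regional_summary))
GROUP BY store_name, region
ORDER BY margin_pct_7d ASC;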

Wrap these transformations inside a Notebook or Data Pipeline and schedule them to run after Dataflow Gen2 completes. Each morning, the gold layer is refreshed with processed data from the previous day.

Step 4: Build the Semantic Model

The Semantic Model is the bridge between the data and the business. It is where you define measures, KPIs, and business logic once, so every report uses the same definitions.

In Power BI Desktop or directly in Fabric, connect to the Lakehouse gold tables and define key measures using DAX:

Total Revenue = SUM(fact_sales[revenue])

Revenue vs Target % = 
DIVIDE([Total Revenue], [Monthly Revenue Target], 0) * 100

Revenue Trend (7D) = 
CALCULATE([Total Revenue], DATESINPERIOD('Date'[Date], LASTDATE('Date'[Date]), -7, DAY))

Stores Below Target =
CALCULATE(
    DISTINCTCOUNT(fact_sales[store_id]),
    FILTER(
        VALUES(fact_sales[store_id]),
        [Revenue vs Target %] < 90
    )
)

These measures are defined centrally and reused everywhere. That prevents each analyst from creating a slightly different version of "revenue" or "stores below target." Note that two of them depend on things the model must also contain: the 7-day trend measure assumes a 'Date' dimension table related to fact_sales, and [Monthly Revenue Target] is a measure you would define separately from your own target or planning data.

Step 5: Build the Dashboard

With the semantic model published to Fabric, the Power BI dashboard becomes much quicker to build.

For this regional sales performance dashboard, I would include:

  • KPI cards at the top: Total Revenue, Margin %, Transaction Count, Stores Below Target
  • Map visual: Revenue by region, colour-coded by performance versus target
  • Line chart: Daily revenue trend for the last 30 days, with a target line overlay
  • Table: Store-level breakdown sorted by performance versus target, so managers can see which stores need attention
  • Conditional formatting: Red highlights for stores below 90% of target

The dashboard refreshes automatically every morning. There are no emailed exports, no duplicate files, and no debate about which spreadsheet is current.

What This Workflow Achieves

Fabric workflow outputs
flowchart LR
  subgraph DataProduct["Reusable data product"]
    GOLD["Gold tables"]
    MODEL["Semantic model"]
    MEASURES["Reusable measures"]
  end
  subgraph DecisionSurface["Decision surface"]
    DASH["Dashboard"]
    ALERTS["Exceptions and alerts"]
    MEETING["Review meeting"]
  end
  GOLD --> MODEL --> MEASURES --> DASH
  MEASURES --> ALERTS
  DASH --> MEETING
  ALERTS --> MEETING
  OWNER["Business owner and data owner"] -.-> MODEL
  OWNER -.-> MEETING

Before this workflow, a regional manager waits until Monday morning, opens an Excel file that may not be the latest version, and tries to understand last week's performance. By the time a decision is made, the data may already be a week old.

After this workflow, the same manager opens a dashboard on a laptop or phone, sees how each store performed yesterday, identifies stores trending below target, and can intervene before the week is over.

That is the difference between reactive and proactive decision-making. The value comes from a workflow that runs consistently once it has been set up.

Common Mistakes to Avoid

These are the mistakes that often trip teams up when building this type of workflow:

Skipping the bronze layer. It is tempting to transform data as you ingest it. Avoid that. Land raw data first so you can reprocess it later without returning to the source system.

Putting business logic in the dashboard. Calculated columns inside a Power BI report are easy to duplicate and hard to reuse. Keep important business logic in the semantic model.

Not setting up monitoring. Add simple data quality checks that alert the team if daily row counts drop unexpectedly. You want to know about a pipeline issue before business users notice it.
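
A minimal version of that check can be a scheduled query. Here is a sketch that flags the latest load when its row count falls below half of the overall daily average; the threshold and how you surface the alert (pipeline failure, Teams message, email) are up to you:

-- Flag the latest day's load if its volume looks abnormally low
WITH daily_counts AS (
    SELECT transaction_date, COUNT(*) AS rows_loaded
    FROM gold.fact_sales
    GROUP BY transaction_date
),
latest AS (
    SELECT TOP 1 transaction_date, rows_loaded
    FROM daily_counts
    ORDER BY transaction_date DESC
)
SELECT
    l.transaction_date,
    l.rows_loaded,
    CASE WHEN l.rows_loaded < 0.5 * (SELECT AVG(CAST(rows_loaded AS FLOAT)) FROM daily_counts)
         THEN 'ALERT: row count dropped'
         ELSE 'OK'
    END AS status
FROM latest AS l;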

Over-engineering from day one. Start with the simplest useful version: three tables, one gold view, one dashboard. Prove the value, then add complexity.

The Main Lesson

Microsoft Fabric makes this kind of workflow more accessible than it used to be. A capable data practitioner can build a useful first version without needing a large team across several specialist disciplines.

The important part is not the platform by itself. The important part is what happens when decision-makers stop passing spreadsheets around and start working from the same trusted, current view of the business.

That is when the quality of decisions begins to improve.

Next in this series: A step-by-step guide to planning a migration from on-premises SQL Server to Microsoft Fabric, covering assessment, planning, and go-live without disrupting the business.
