The challenge is to join each transaction to the promotion active on its order date, and find out how many orders were placed outside of the promotional period.
Disclaimer: This solution is based on the DAX solution by Aaron Perry, an instructor at Maven Analytics.
For this challenge, my initial impulse was to unpivot the promotions table so that the dates all fall under one column. Once we have them under one column, we can build a table that contains the correct promotion for each date, then perform a left join onto the orders table.
Suggested Solution Example:
Promotions table

| promo_id | promo_name   | Dates      |
|----------|--------------|------------|
| BF_2023  | Black Friday | 24/11/2023 |
| BF_2023  | Black Friday | 25/11/2023 |
| BF_2023  | Black Friday | 26/11/2023 |
| BF_2023  | Black Friday | 27/11/2023 |
| BF_2023  | Black Friday | 28/11/2023 |
| BF_2023  | Black Friday | 29/11/2023 |
Orders table after left join

| order_id | order_date | order_quantity | promo_id |
|----------|------------|----------------|----------|
| 610      | 24-Nov-23  | 2              | BF_2023  |
| 611      | 25-Nov-23  | 5              | BF_2023  |
| 612      | 25-Nov-23  | 3              | BF_2023  |
| 613      | 27-Nov-23  | 1              | BF_2023  |
| 614      | 28-Nov-23  | 2              | BF_2023  |
| 615      | 29-Nov-23  | 5              | BF_2023  |
| 616      | 29-Nov-23  | 2              | BF_2023  |
However, I feel the real challenge here is how to accurately and dynamically expand the promotions table into one row per promotion date.
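For illustration, that expansion could be written in DAX along the lines of the sketch below. This is only a sketch: it assumes the promotions table has start_date and end_date columns (as used in the calculated column further down), and it simply pairs each promotion with every calendar date in its window. The left join onto the orders table would still be a separate step.

EVALUATE
GENERATE(
    'promotions',
    SELECTCOLUMNS(
        CALENDAR( 'promotions'[start_date], 'promotions'[end_date] ),    -- one row per date in the promotion window
        "Dates", [Date]
    )
)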
This approach, although it may work, is still an SQL way of thinking. The better approach is to think in DAX: create a new column that checks, for each row, whether the order date falls within a promotion's date range and, if so, returns that promotion's ID.
DEFINE
    COLUMN 'orders'[promo_ids] =
        VAR d = 'orders'[order_date]
        RETURN
            CALCULATE(
                MAX( 'promotions'[promo_id] ),
                FILTER(
                    ALL( 'promotions' ),
                    'promotions'[start_date] <= d && d <= 'promotions'[end_date]
                )
            )

EVALUATE
    { COUNTROWS( FILTER( 'orders', ISBLANK( 'orders'[promo_ids] ) ) ) }
First, the order date being evaluated in the row context is saved in the variable d. We then remove all of the filters from the promotions table using ALL(), and explicitly filter the promotions table again on the condition that the order date falls between the start date and the end date. When this is true, the filter returns a one-row table (since promotions do not overlap). Thereafter we extract the single value using MAX() and wrap it in CALCULATE() in order to force the filter.
Once we have the column, we then query the orders table for blank promo ids.
This is my DAX Query solution to Maven Analytics Data Drill #3 – Rolling Up, Looking Back.
The challenge is to find the difference in MoM sales values from April to May for the Astoria location.
More information to the challenge can be found on their website.
For this Data Drill, the request and solution are relatively simple in nature, as any BI analyst, in any capacity, will encounter some form of month-on-month time intelligence request.
The solution is as follows:
First, we define our measures: Total Sales, PM Total Sales and Abs_Sales (despite the name, just a difference in values, not the mathematical absolute of anything).
Total Sales is just the sum of sales.
PM Total Sales (Previous Month Total Sales) uses the CALCULATE() and DATEADD() functions. The expression to be evaluated is Total Sales, but shifted back exactly one month for each date in the table, e.g. 15-Feb-25 becomes 15-Jan-25.
Abs_Sales (Absolute Sales) finds the difference between the current sales and the previous month sales.
/*-------------Maven Analytics - Data Drills-------------*/
/*----Data Drills - 3 - Rolling Up, Looking Back----*/
DEFINE
    MEASURE Box[Total Sales] =
        SUM( Coffee[sales] )
    MEASURE Box[PM Total Sales] =
        CALCULATE( [Total Sales], DATEADD( Calendar[Date], -1, MONTH ) )
    MEASURE Box[Abs_Sales] =
        [Total Sales] - [PM Total Sales]
Thereafter, we query the data, grouping by Year-Month and store.
To minimise the number of rows, we filter the output query to get only the months in question, as well as the store.
EVALUATE
    CALCULATETABLE(
        SUMMARIZECOLUMNS(
            Calendar[Year-Month],
            Coffee[store],
            "Total Sales", [Total Sales],
            "PM Total Sales", [PM Total Sales],
            "Abs Sales", [Abs_Sales]
        ),
        Calendar[Date] >= DATE( 2023, 4, 1 ),
        Calendar[Date] < DATE( 2023, 6, 1 ),
        Coffee[store] = "Astoria"
    )
ORDER BY Calendar[Year-Month], Coffee[store]
This is my DAX Query solution to Maven Analytics Data Drill #7 – Turning Bullish.
The challenge is to find the Golden Crosses in the dataset, and submit the most recent one, as a solution.
More information to the challenge can be found on their website.
Disclaimer: I did use Gen-AI (Microsoft Co-Pilot) in part of this solution. Why? Because DAX is just hard sometimes 😜🤷🏾♂️
I have solved this as a DAX query and will then visualise it in Power BI.
According to the instructions, we need to create a table. The table will have columns from the original dataset. It will also include additional columns for the evaluation.
Original Table Columns
- Date
- Closing Price

New Table Columns
- Date
- Closing Price
- 50-day moving average
- 200-day moving average
- Golden Cross
50-day moving average
First, let's create a measure that refers to the Closing Price, so that we can easily use it in other measures.
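As a minimal sketch (assuming the fact table is called Stock with date and close columns; the actual table and column names in the dataset may differ):

MEASURE Stock[Closing Price] =
    SUM( Stock[close] )    -- with one row per trading day, SUM simply returns that day's closing price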
Then we create a measure that calculates the 50-Day Moving Average (MVA).
My initial approach involved using the time intelligence functions: I used DATEADD() to shift back 50 days for each date in the table and then calculated the average.
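The measure looked roughly like this (a hypothetical reconstruction using the same assumed Stock[date] and Stock[close] names; the original code may differ in detail):

MEASURE Stock[50-Day MVA] =
    VAR LastDate = MAX( Stock[date] )                                            -- the date currently being evaluated
    VAR FirstDate =
        CALCULATE( MAX( Stock[date] ), DATEADD( Stock[date], -50, DAY ) )        -- the date 50 days earlier
    RETURN
        CALCULATE(
            AVERAGE( Stock[close] ),
            FILTER( ALL( Stock ), Stock[date] >= FirstDate && Stock[date] <= LastDate )
        )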
The above code gets the first date and the last date relative to the filter context, removes all the filters, and then calculates the average of the closing price over the dates in between.
Sample result:
This seems correct on the surface, as it tracks the closing price, so it could be the correct solution. However, it is not.
If we take the sum of the closing prices for the past 50 days, inclusive of 31/10/2025, and divide by 50, we get 662.172.
Average for the past 50 days, all dates inclusive, for the 31st.
But why on earth is this happening? It should be working, right?! Oh no, I just got a message from my manager; she says she needs the report ready for her meeting later this week. 😫
Calm down, not to worry, this can be explained.
The problem is that we used DATEADD() to get the required dates. The nature of DATEADD() is to return a date at the row level. This would be sufficient for most use cases; however, this dataset happens to have missing dates. Therefore the expression is being evaluated against a date table with blanks in the closing price. This is equivalent to the spreadsheet demonstration below.
If we go back 50 days from 31/10/25, our start date would be 11/09/25. However, since we have set the expression to less than or equal to 31/10/25, we get a count of 51 dates (both endpoints inclusive).
Simulation of using DATEADD() with an offset of 50 days and a less-than-or-equal-to end date.
We can still see that the average doesn't match the incorrect answer we saw previously (667.65). This is because the DAX engine removes the blank rows from the dataset before calculating the average. This is okay: blanks are empty values, which are not the same as a 0 value (a 0 would be included in the count, I believe). However, this is not what we want to happen for this use case.
Without blanks.
Debugging the incorrect value.
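As a quick sanity check on the blank-versus-zero point, a throwaway DAX query with made-up values shows the difference:

EVALUATE
ROW(
    "Avg with blank", AVERAGEX( { 10, BLANK(), 20 }, [Value] ),    -- 15: the blank is excluded from the count
    "Avg with zero",  AVERAGEX( { 10, 0, 20 }, [Value] )           -- 10: the zero is counted
)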
Now that we know what is happening, we can proceed to correct the measure. The solution is to not use the DATEADD() function at all. This is where AI can come in and assist by looking for an alternative. As a DAX developer, this is ideally how you use Gen-AI in your workflow: in moderation, and only once you understand the inner workings of the problem. Co-Pilot came up with the following solution:
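In sketch form (again using the assumed Stock[date] and Stock[close] names), the measure looks like this:

MEASURE Stock[50-Day MVA] =
    VAR LastDate = MAX( Stock[date] )                                  -- the date currently being evaluated
    VAR Last50 =
        TOPN(
            50,
            FILTER( ALL( Stock ), Stock[date] <= LastDate ),           -- all available rows up to and including that date
            Stock[date], DESC                                          -- most recent first, i.e. the last 50 entries
        )
    RETURN
        AVERAGEX( Last50, Stock[close] )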
Using TOPN() as a table filter is clever, as we now only consider the available dates when calculating the average. What matters is that we get the correct count (50) and the correct sum (33,108.6).
The last-date variable stores the maximum date of the table relative to the filter context (which, in this case, is just the exact date being evaluated, e.g. 31/10/25). We remove all other filters using ALL(), then use FILTER() to keep only the dates up to that max date. We then use TOPN() to take the first 50 rows sorted in descending date order, a.k.a. the last 50 entries.
Finally, we return the average closing price using AVERAGEX(), which iterates over our new table.
Correct value.
Using this same logic we can find the 200-day moving average as well.
The only difference here is that the first parameter of the TOPN() function changes to 200 (along with the variable names).
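Under the same naming assumptions, the 200-day version is simply:

MEASURE Stock[200-Day MVA] =
    VAR LastDate = MAX( Stock[date] )
    VAR Last200 =
        TOPN( 200, FILTER( ALL( Stock ), Stock[date] <= LastDate ), Stock[date], DESC )
    RETURN
        AVERAGEX( Last200, Stock[close] )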
Sample:
Let’s check this in Excel:
Seems to be correct.
Golden Cross
Now, to evaluate the golden cross in a tabular manner, we need to check two things:
1. Whether today's (i.e. the current date's) 50-day MVA is greater than today's 200-day MVA.
2. Whether yesterday's 50-day MVA was less than yesterday's 200-day MVA.
For the first part, we can already compute this with our defined measures. However, for the second part, we still need to create the measures for the previous day.
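For example, the previous-day counterpart of the 50-day measure could look something like this (a sketch under the same naming assumptions, stepping back to the prior available trading date rather than the prior calendar date):

MEASURE Stock[PD 50-Day MVA] =
    VAR CurrentDate = MAX( Stock[date] )
    VAR PrevDate =
        CALCULATE( MAX( Stock[date] ), FILTER( ALL( Stock ), Stock[date] < CurrentDate ) )   -- last available date before the current one
    RETURN
        CALCULATE( [50-Day MVA], FILTER( ALL( Stock ), Stock[date] = PrevDate ) )            -- re-evaluate the 50-day MVA as of that date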