The above produces the correct data, but as I said, the load times are immense.
The purpose of the dashboard is to calculate some basic HR data, such as headcount, turnover, open requisitions, etc.
Here is my current data model:
The headcount data works fine (although load times are around 1.5 minutes due to creating unique date fields for each record; maybe there was a better way to do this?)
The main issue centers around integrating the OpenReqs table into the data model. I'm unable to map the OpenReqs table directly to both the Calendar and the Hierarchy, as that would create a loop. I attempted to circumvent the issue by loading the OpenReqs table twice, once to connect it to the Calendar and once to connect it to my hierarchy tables.
I'm using the Hierarchy and HierarchyBelongsTo functions to create tree-views of my desired data:
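A simplified sketch of that part of the load script (table and field names here are illustrative, not my exact script):

```
// Illustrative sketch: expand the adjacent-nodes table into a hierarchy
Hierarchy (NodeID, ParentID, NodeName, ParentName, NodeName, PathName, '/', Depth)
LOAD
    EmployeeID   as NodeID,
    ManagerID    as ParentID,
    EmployeeName as NodeName
FROM [lib://HR/Employees.qvd] (qvd);

// HierarchyBelongsTo links every node to all of its ancestors,
// which is what drives the tree-view selections
BelongsTo:
HierarchyBelongsTo (NodeID, ParentID, NodeName, TreeID, TreeName, Depth)
LOAD
    EmployeeID   as NodeID,
    ManagerID    as ParentID,
    EmployeeName as NodeName
FROM [lib://HR/Employees.qvd] (qvd);
```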
I have a feeling that creating a _DateField value for each individual record (resulting in ~2 million records) is part of the problem. ☹️
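For context, the per-record date generation is roughly along these lines (a simplified sketch with assumed field names, not my actual script):

```
// Sketch: one row per employee per day between hire and leave date.
// This interval expansion is what blows the table up to ~2M records.
Headcount:
LOAD
    EmployeeID,
    Date(HireDate + IterNo() - 1) as _DateField
FROM [lib://HR/Employees.qvd] (qvd)
WHILE HireDate + IterNo() - 1 <= Alt(LeaveDate, Today());
```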
Any insight would be appreciated, thank you for taking the time to look at my issue! 🙂
improves your performance enough (avoiding the text() & ... might also be useful, but that's only a second step).
If not, you will need to change the data model. Without knowing your data and all of your requirements it's difficult to say what might be necessary, but I think I would start by concatenating OpenReqs and Headcount (and/or respectively OpenReqs2) into a single table, adjusting the field names and adding a source field to be able to distinguish between the different sources.
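As a sketch of what I mean (the key and field names are just assumptions to show the pattern):

```
// Sketch: merge both fact tables into one, aligning the key names
// and tagging each row with its source
Facts:
LOAD
    EmployeeID   as %PersonKey,
    _DateField   as %DateKey,
    'Headcount'  as Source
FROM [lib://HR/Headcount.qvd] (qvd);

Concatenate (Facts)
LOAD
    HiringManagerID as %PersonKey,
    OpenedDate      as %DateKey,
    'OpenReqs'      as Source
FROM [lib://HR/OpenReqs.qvd] (qvd);
```

With a single fact table there is only one link to the Calendar and one to the Hierarchy, so the loop and the duplicated OpenReqs load both disappear.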
A further option is to change count(if(condition, Field)) into if(condition, count(Field)),
which is often interchangeable and much more performant, but the two approaches are not absolutely identical. With count(if(condition, Field)) the condition is checked for every single record, which makes such an approach quite slow in general, especially when it's applied to rather large datasets. In your case the calculation also needs to hop over multiple tables, so the virtual table which is created in the background, and on which the calculation is performed, could become quite large; and even though most of the calculations are multithreaded, the creation of those virtual tables is a single-threaded process.
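To illustrate the difference with assumed field names (note the second form only works when the condition refers to a chart dimension, so it can be evaluated once per cell instead of once per record):

```
// Record-level if(): the condition is evaluated for every row
// of the underlying (possibly multi-table) virtual table
count(if(ReqStatus = 'Open', ReqID))

// Restructured: the aggregation runs first and the condition
// is checked once per dimension cell
if(ReqStatus = 'Open', count(ReqID))
```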
Hence my recommendation to consider a change to the data model.