This feature would split the storage of an imported table between in-memory (cache) and disk. The dataset designer would be able to set a policy, similar to an incremental refresh policy, defining which rows (partitions) of the fact table are stored in memory. For example, the model could always keep the last two years of data in memory, while data older than two years would be stored on disk. Since most report queries only access recent data, that is where the most expensive, highest-performing memory resources should be allocated.
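Purely as a rough illustration (this is not an existing Power BI or Analysis Services API), the policy could look something like the Python sketch below; the names `HybridStoragePolicy` and `in_memory_period_days` are invented for the example:

```python
from dataclasses import dataclass
from datetime import date, timedelta
from typing import Optional


@dataclass
class HybridStoragePolicy:
    """Hypothetical policy: recent partitions stay in memory, older ones go to disk."""
    in_memory_period_days: int  # e.g. roughly two years of recent data kept in memory

    def storage_mode(self, partition_end_date: date, today: Optional[date] = None) -> str:
        """Return where a partition ending on `partition_end_date` should live."""
        today = today or date.today()
        cutoff = today - timedelta(days=self.in_memory_period_days)
        return "memory" if partition_end_date >= cutoff else "disk"


policy = HybridStoragePolicy(in_memory_period_days=730)  # keep the last two years hot
print(policy.storage_mode(date.today()))        # recent partition -> "memory"
print(policy.storage_mode(date(2019, 12, 31)))  # older partition  -> "disk"
```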
The data stored on disk could still be compressed and held in a columnar format, similar to a clustered columnstore index in SQL database technologies, so that the relatively few queries against this data could still deliver acceptable performance. Additionally, there could be a licensing option for organizations to purchase fast NVMe solid-state drives as part of their Premium capacity nodes, supplementing the vCPUs and RAM of those nodes; these fast drives could then be used for this feature.
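As an analogy only (not how the engine actually persists data), the on-disk tier could behave like a compressed columnar file. The sketch below assumes pandas and pyarrow are available and uses a Parquet file with zstd compression to illustrate the idea; the file name sales_2021.parquet is made up:

```python
import pandas as pd
import pyarrow as pa
import pyarrow.parquet as pq

# Hot partition: recent rows kept in memory.
hot_partition = pd.DataFrame({"order_date": ["2025-01-05"], "amount": [120.0]})

# Cold partition: older rows written to a compressed, columnar file on disk
# (analogous to a clustered columnstore index).
cold_partition = pd.DataFrame({"order_date": ["2021-03-17"], "amount": [75.5]})
pq.write_table(pa.Table.from_pandas(cold_partition),
               "sales_2021.parquet", compression="zstd")

# A query against older data reads only the needed column back from disk;
# column pruning and compression keep the scan reasonably fast.
older_sales = pq.read_table("sales_2021.parquet", columns=["amount"]).to_pandas()
print(older_sales["amount"].sum())
```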
This feature would address scenarios in which queries against the source system would otherwise perform poorly, and it would avoid the additional complexity of models with aggregation tables and their relationships.