
Vee Collins on 31 Jan 2025 07:56:06

RE:

This would be super helpful, otherwise I need to create two charts to give the necessary granularity choices.


on 31 Jan 2025 06:43:56

RE:

Dataverse has a limit on the number of requests and will return a 429 Too Many Requests error if too many requests arrive within a period of time, as documented by Microsoft: Service protection API limits (Microsoft Dataverse) - Power Apps | Microsoft Learn. Currently, we have found that there are no configurations available to users/administrators for limiting the frequency of API calls made to Dataverse, as this is set up at the service level.
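Since the limits are enforced service-side and are not configurable, the usual client-side mitigation is to honor the Retry-After header that accompanies a 429 response. A minimal sketch of that pattern (request_with_retry and the send callable are illustrative names, not part of any Dataverse SDK):

```python
import time

def request_with_retry(send, max_retries=3):
    """Call send() -> (status, headers). On HTTP 429, back off for the
    Retry-After interval the service requests, then retry, up to
    max_retries additional attempts."""
    for attempt in range(max_retries + 1):
        status, headers = send()
        if status != 429:
            return status
        # Service protection limits tell the client how long to wait.
        delay = float(headers.get("Retry-After", 1))
        if attempt < max_retries:
            time.sleep(delay)
    return status
```

The same shape works with any HTTP client; the key point is that the wait time comes from the server's Retry-After value rather than a fixed client-side throttle.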


Vipul Pal on 31 Jan 2025 05:15:56

RE:

Along with loan management there should be fixed deposit management as well; this has been a frequent ask from a lot of customers/prospects I've worked with.


Vathsala HM on 31 Jan 2025 04:59:13

RE:

We are aware that there are service limits for the Dataverse API based on this documentation. Our understanding is that these limits exist to restrict the service when too many API requests hit Dataverse. Can we check whether these limits are configurable in any way, or whether there are similar settings we could use to limit API calls to Dataverse in our environment?


Tom Augustsson on 31 Jan 2025 04:48:48

RE:

"...finally inner padding controls the gap between the bars in proportion to the bar size. "When looking at the Line and Clustered column chart, I cannot see this control setting in the X-axis format settings.


Nick Sturgess on 30 Jan 2025 23:36:13

RE:

Three years later, this rudimentary functionality is still not implemented.


Tom Dockstader on 30 Jan 2025 20:59:20

RE:

That menu item does not clean up the WHSLicensePlateLabel records; it cleans up the InventDim records related to the license plate numbers (not the labels). It executes the code below. I think the idea from Suf still needs some consideration.

    WHSLicensePlate whsLicensePlate;

    delete_from inventDimLPCleanupTask
        where inventDimLPCleanupTask.SessionId == _sessionId
        exists join whsLicensePlate
        where whsLicensePlate.LicensePlateParent == inventDimLPCleanupTask.LicensePlateId;


Shyam Prasad on 30 Jan 2025 18:40:17

RE:

In Spark Streaming jobs running on Fabric capacity, there are currently no built-in metrics, graphs, or dashboards to display executor memory details such as free memory, consumed memory, and total memory. Providing these insights would help end users make informed decisions about scaling their environment type or workload based on resource utilization.
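Until such dashboards exist, the underlying figures are available from Apache Spark's standard monitoring REST API at /api/v1/applications/&lt;app-id&gt;/executors, which returns per-executor fields including memoryUsed and maxMemory (assuming the Fabric runtime exposes that endpoint, which is an assumption here). A sketch that summarizes the payload into the free/used/total numbers the comment asks for:

```python
def summarize_executor_memory(executors):
    """Given the JSON list returned by Spark's REST endpoint
    /api/v1/applications/<app-id>/executors, return per-executor
    free/used/total storage memory in bytes (the driver row included)."""
    summary = {}
    for ex in executors:
        total = ex["maxMemory"]   # storage memory available to this executor
        used = ex["memoryUsed"]   # storage memory currently in use
        summary[ex["id"]] = {"used": used, "free": total - used, "total": total}
    return summary
```

Feeding the function the deserialized response of the executors endpoint yields a dict keyed by executor id, which could back a simple chart or alert until first-class metrics arrive.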


Paul Jacquemain on 30 Jan 2025 18:39:38

RE:

This is a much-needed feature we have been waiting for. DAX queries can often perform 10x or better than MDX. We have resorted to making users run queries in Power BI reports and export them manually, because the MDX from Excel to get the same data can take so long in some cases. Yes, there are things we can do with the model and performance tuning, but when the result is the exact same data yet takes 10x or more time using MDX, something needs to be done.


Jake O'Malley on 30 Jan 2025 17:36:04

RE:

Completely agree. This also applies to Power Apps Gen2 dataflows. I can move dataflows to downstream environments via solutions, and I generally use Power Automate to orchestrate the dataflows for the initial data migration into my downstream environments.

The issue? I have to publish each dataflow individually in my target environment before I can run my orchestration Power Automate flow (which essentially just calls each dataflow in the order I want them to run, sometimes with other Power Automate flows running in between for data transformation). If I don't publish (and consequently run) each dataflow, the orchestration flow fails because it can't run an unpublished dataflow. Publishing also runs each dataflow, so I have to go back through and bulk delete all the data created when I publish the flows; only then can I run my orchestration flow.

Allowing us to publish Gen2 dataflows without running them would be a massive improvement, in my opinion. Also, a fix for having to re-establish dataflow connections every time a solution is imported would be huge as well!