Support for developing Python/R module files in Spark


Very excited about Fabric in general, and about the support for data scientists with Delta as the underlying interoperability layer.


However, it is currently impossible to build a sophisticated Fabric Spark pipeline with more than 2 or 3 Python files. The current set-up really su...
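Until first-class module support exists, one common workaround is to package shared .py files into a zip archive that Spark can ship to executors. The sketch below builds such an archive and imports from it using only the standard library; the module name `mylib` and its contents are hypothetical, and in a real Spark notebook you would hand the archive to `spark.sparkContext.addPyFile` instead of editing `sys.path` directly.

```python
import importlib
import os
import sys
import tempfile
import zipfile

# Build a zip archive containing a small helper package, the way one
# might bundle shared .py files for a Spark job. "mylib" is hypothetical.
tmpdir = tempfile.mkdtemp()
zip_path = os.path.join(tmpdir, "mylib.zip")
with zipfile.ZipFile(zip_path, "w") as zf:
    zf.writestr("mylib/__init__.py", "")
    zf.writestr("mylib/util.py", "def double(x):\n    return 2 * x\n")

# Python can import directly from a zip on sys.path; in a Spark notebook
# you would call spark.sparkContext.addPyFile(zip_path) so that executors
# receive the same archive.
sys.path.insert(0, zip_path)
util = importlib.import_module("mylib.util")
print(util.double(21))  # → 42
```

This keeps multi-file code in one distributable artifact, though it is clearly a stopgap compared with proper module-file support in the workspace.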

Read more...
0 Comments
STATUS DETAILS
Needs Votes

REST API and python SDK for Spark jobs


Today, Spark jobs can only be managed via the GUI (Create an Apache Spark job definition - Microsoft Fabric |...
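To make the ask concrete, a programmatic submission might look something like the sketch below. The endpoint path, IDs, and payload schema here are placeholders illustrating the desired shape, not a documented Fabric API, and token acquisition (Azure AD) is omitted; the request is built but never sent.

```python
import json
import urllib.request

# Hypothetical workspace and job-definition item IDs.
workspace_id = "00000000-0000-0000-0000-000000000000"
item_id = "11111111-1111-1111-1111-111111111111"

# Placeholder endpoint shape for triggering a Spark job definition run;
# the real API surface is exactly what this idea is asking for.
url = (
    "https://api.fabric.microsoft.com/v1/workspaces/"
    f"{workspace_id}/items/{item_id}/jobs/instances"
)
payload = json.dumps({"executionData": {}}).encode("utf-8")
req = urllib.request.Request(
    url,
    data=payload,
    method="POST",
    headers={
        "Authorization": "Bearer <token>",  # placeholder token
        "Content-Type": "application/json",
    },
)
# urllib.request.urlopen(req) would submit it; here we only inspect it.
print(req.get_method(), req.full_url)
```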

Read more...
0 Comments
STATUS DETAILS
Planned

Enable full access to Warehouse tables from Spark


Fabric is a game changer because it brings the traditional warehouse approach and the Spark approach together under OneLake. This means our data scientists can access all the warehouse tables we have built over the years.


While this is a major step, the interface from the Data ...

Read more...
0 Comments
STATUS DETAILS
Needs Votes

Support Workspace introspection in Notebook


In a Spark notebook, I would like to be able to call upon a Python module to introspect what is inside the workspace. For example, within a Python notebook I would like to be able to list the warehouses and lakehouses in the same workspace, as well as their OneLake path. It would be a bonus if...
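As a sketch of the kind of helper this idea asks for: given item metadata of the sort a workspace-listing API might return, filter to warehouses and lakehouses and derive an OneLake path. The item dicts and the exact `abfss://` path pattern below are assumptions for illustration, not a documented Fabric API.

```python
def onelake_paths(workspace_name, items, kinds=("Lakehouse", "Warehouse")):
    """Return {item name: assumed OneLake abfss path} for the given item kinds."""
    base = f"abfss://{workspace_name}@onelake.dfs.fabric.microsoft.com"
    return {
        item["displayName"]: f"{base}/{item['displayName']}.{item['type']}"
        for item in items
        if item["type"] in kinds
    }

# Hypothetical listing, shaped like a typical items-API response.
items = [
    {"displayName": "SalesLakehouse", "type": "Lakehouse"},
    {"displayName": "FinanceWarehouse", "type": "Warehouse"},
    {"displayName": "DailyReport", "type": "Report"},
]
print(onelake_paths("Analytics", items))
```

Having this available as a built-in module inside the notebook, rather than hand-rolled, is the substance of the request.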

Read more...
0 Comments
STATUS DETAILS
Planned

Support temporal tables / expose Delta logs in warehouse


It is great that we have Delta logs, which theoretically allow some sort of time travel on a table. However, the warehouse Delta table does not expose the Delta logs. This means we have no way to see which records have been deleted from load to load.


If we have Delta tables, ...
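The Delta log itself makes this feasible: each commit under `_delta_log/` is a file of newline-delimited JSON actions, and `remove` actions record which data files were dropped between loads. The sketch below parses such a log directory with the standard library; the sample commit files are fabricated for illustration, while the add/remove action structure follows the Delta transaction log format.

```python
import json
import os
import tempfile

def removed_files(log_dir):
    """Collect data-file paths removed across all commits in a _delta_log dir."""
    removed = []
    for name in sorted(os.listdir(log_dir)):
        if not name.endswith(".json"):
            continue
        with open(os.path.join(log_dir, name)) as fh:
            for line in fh:
                action = json.loads(line)
                if "remove" in action:
                    removed.append(action["remove"]["path"])
    return removed

# Fabricated two-commit log: one load adds a file, the next removes it.
log_dir = tempfile.mkdtemp()
with open(os.path.join(log_dir, "00000000000000000001.json"), "w") as fh:
    fh.write(json.dumps({"add": {"path": "part-000.parquet"}}) + "\n")
with open(os.path.join(log_dir, "00000000000000000002.json"), "w") as fh:
    fh.write(json.dumps({"remove": {"path": "part-000.parquet"}}) + "\n")

print(removed_files(log_dir))  # → ['part-000.parquet']
```

If the warehouse exposed its `_delta_log` directory, exactly this kind of load-to-load deletion audit would become possible.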

Read more...
0 Comments
STATUS DETAILS
Planned