Toby Riley on 05 Jun 2023 10:42:23
Name column mapping mode is not supported in SQL endpoints for the Lakehouse, so tables with otherwise-unsupported column names (spaces, special characters, capitalization differences) do not work.
To re-create the behaviour:
1. Create a table in Databricks with column mapping enabled.
%python
spark.conf.set("spark.databricks.delta.properties.defaults.minWriterVersion", 5)
spark.conf.set("spark.databricks.delta.properties.defaults.minReaderVersion", 2)
spark.conf.set("spark.databricks.delta.properties.defaults.columnMapping.mode", "name")
2. Add shortcut tables from Databricks to Synapse Lakehouse. The data shows correctly.
3. Go to the SQL endpoint. Tables fail to load, with the error "Corrective Action: Recreate the table without column mapping property."
4. This has a knock-on effect: data fails to feed into downstream datasets and Power BI.
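The conf settings in step 1 can be wrapped in a small helper for reuse across notebooks. This is a sketch only: the function name and the duck-typed conf argument are my own, not from the original post; in a notebook you would pass spark.conf.

```python
# Delta defaults that enable name-based column mapping for newly created tables.
# These mirror the notebook cell in step 1; name mapping requires writer
# version 5 and reader version 2 or higher.
DELTA_DEFAULTS = {
    "spark.databricks.delta.properties.defaults.minWriterVersion": "5",
    "spark.databricks.delta.properties.defaults.minReaderVersion": "2",
    "spark.databricks.delta.properties.defaults.columnMapping.mode": "name",
}

def apply_delta_defaults(conf, defaults=DELTA_DEFAULTS):
    """Apply each default to a Spark conf-like object (anything with a .set method)."""
    for key, value in defaults.items():
        conf.set(key, value)

# In a notebook: apply_delta_defaults(spark.conf)
# Any table created afterwards with saveAsTable() will use column mapping mode "name",
# which is what triggers the SQL endpoint sync failure described above.
```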
Charles Webb (administrator)
We've shipped this idea.
- Comments (10)
RE: Support Name Column Mapping Mode in SQL Endpoints for Lakehouse
https://learn.microsoft.com/en-us/fabric/release-plan/data-engineering
https://learn.microsoft.com/en-us/fabric/release-plan/data-warehouse
https://learn.microsoft.com/en-us/fabric/release-plan/data-factory
Can someone tell me which one of these roadmaps states that this is being worked on, because I do not see any evidence of it. And what blog has these updates?
RE: Support Name Column Mapping Mode in SQL Endpoints for Lakehouse
I get issues when trying to rename or drop columns in my Lakehouse table. In order to use ALTER TABLE ... ALTER COLUMN, I need to enable column name mapping. However, after I enable column name mapping, the table stops syncing properly to the SQL analytics endpoint.
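For reference, the sequence that produces this catch-22 looks roughly like the following (the table and column names are hypothetical; Delta requires column mapping before RENAME or DROP COLUMN will succeed):

```python
# Hypothetical Spark SQL sequence illustrating the problem described above.
# Column mapping must be enabled before a column rename works on a Delta table,
# but enabling it is exactly what breaks the SQL analytics endpoint sync.
statements = [
    # Step 1: upgrade the table to name-based column mapping
    # (needs reader version 2 / writer version 5 or higher).
    "ALTER TABLE my_table SET TBLPROPERTIES ("
    "'delta.columnMapping.mode' = 'name', "
    "'delta.minReaderVersion' = '2', "
    "'delta.minWriterVersion' = '5')",
    # Step 2: only now does the rename succeed in Spark --
    # and the table stops syncing to the SQL endpoint.
    "ALTER TABLE my_table RENAME COLUMN old_name TO new_name",
]
# In a notebook you would run:
# for stmt in statements:
#     spark.sql(stmt)
```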
RE: Support Name Column Mapping Mode in SQL Endpoints for Lakehouse
Telling us vaguely that this is on your roadmap does not solve any problems. Any news on when this will actually be fixed? Again, it works fine in native Spark and in Databricks.
RE: Support Name Column Mapping Mode in SQL Endpoints for Lakehouse
I second, third, and fourth all the complaints in this idea post. I have a whole infrastructure with column names that contain special characters like spaces, alongside pre-existing underscores that are distinct from those spaces, and we would have to rename all of our columns because Fabric does not support this simple feature that already works in Databricks.
RE: Support Name Column Mapping Mode in SQL Endpoints for Lakehouse
Even internal Delta tables created using Fabric notebooks in the Lakehouse do not appear in the SQL endpoint:
df.write.format("delta").option('delta.columnMapping.mode', 'name').saveAsTable("TABLE_NAME")
Can you please share an ETA for when we can expect this feature to be available? This poses severe limitations on integrating data from different sources.
RE: Support Name Column Mapping Mode in SQL Endpoints for Lakehouse
I don't see this called out in the roadmap. Which item does this connect to?
https://learn.microsoft.com/en-us/fabric/release-plan/data-engineering
RE: Support Name Column Mapping Mode in SQL Endpoints for Lakehouse
More details in case others are looking for this same issue:
(1) The tables in question were created from a Python notebook (PySpark) with the columnMapping mode set to "name".
(2) Tables load successfully and show up properly in the explorer window on the left pane in the same window where the notebook executed.
(3) Switching over to the Lakehouse SQL query window, we get error popups that tables could not be loaded. The popups have this message for each table that was loaded by the notebook:
Table uses column mapping which is not supported.
Warehouse: Product
Error Code:
Subcode: 0
Exception Type:
Sync Error Time: Wed May 22 2024 17:58:55 GMT-0500 (Central Daylight Time)
Hresult: -2147467259
Table Sync State: Failure
Sql Sync State: Failure
Last Sync Time:
Corrective Action: Recreate the table without column mapping property.
RE: Support Name Column Mapping Mode in SQL Endpoints for Lakehouse
This is super annoying... We're trying to port tables using a process we used on Databricks, and it's not working here in Fabric because this feature is not supported. There are columns, for example, that have spaces or international characters. Workarounds that replace them with underscores or something of that nature are costly and have downstream consequences for other processes, and they prevent basic queries from being portable across systems, including the source SQL Server where this data originated.
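For anyone forced into the rename workaround in the meantime, the sanitization step can at least be centralized. A minimal sketch, assuming the convention of collapsing unsupported characters into underscores (the function name and the exact character rules are my own, not from this thread):

```python
import re

def sanitize_column_name(name: str) -> str:
    """Replace characters the SQL endpoint rejects (spaces, special characters)
    with underscores, collapsing runs of them. Note the collision risk the
    comments above describe: distinct names like 'Order Date' and 'Order_Date'
    can map to the same sanitized result."""
    return re.sub(r"[^0-9A-Za-z_]+", "_", name).strip("_")

# In PySpark you might apply it to a whole DataFrame before writing:
# df = df.toDF(*[sanitize_column_name(c) for c in df.columns])
```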
RE: Support Name Column Mapping Mode in SQL Endpoints for Lakehouse
Would this also enable "table name mapping"? Apologies, I know little about Spark, but I'm currently having to build a warehouse on top of a lakehouse to serve as the "presentation layer" to clean up table and column names for use in semantic models via Direct Lake mode. It would be much, MUCH nicer if we could simply use "pretty" names with spaces and special characters directly in the Lakehouse for both columns AND tables.
Thanks!
Scott
RE: Support Name Column Mapping Mode in SQL Endpoints for Lakehouse
This is on our roadmap!