Frithjof Aarrestad Vassbø on 02 Aug 2024 05:46:05
I wish there were an easier and more efficient way to make changes to existing columns in a Fabric Lakehouse table.
Currently, if we want to alter or drop existing columns in a table, the best option for me is to use overwriteSchema. However, this requires rewriting all the data in the table (a full overwrite), which is not efficient when the table holds a large volume of data.
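A minimal sketch of what I mean (PySpark in a Fabric notebook); the table and column names here are just placeholders for illustration:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Read the existing table and drop the column we no longer want.
df = spark.read.table("my_lakehouse_table").drop("obsolete_column")

# Overwriting with overwriteSchema changes the schema, but it rewrites
# every row in the table just to remove one column.
(df.write
   .format("delta")
   .mode("overwrite")
   .option("overwriteSchema", "true")
   .saveAsTable("my_lakehouse_table"))
```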
Another alternative would be to use %%sql or spark.sql() with ALTER TABLE ... DROP COLUMN / RENAME COLUMN / ALTER COLUMN. However, this requires enabling column mapping mode ('name' mode) on the table, and doing so seems to break the sync between the Lakehouse table and the SQL Analytics Endpoint and Power BI.
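For reference, this is roughly the route I'm describing (again with placeholder table and column names); the SET TBLPROPERTIES step enabling column mapping is what appears to break the SQL Analytics Endpoint / Power BI sync:

```python
# Enable column mapping ('name' mode), which Delta requires before
# DROP COLUMN / RENAME COLUMN are allowed on the table.
spark.sql("""
    ALTER TABLE my_lakehouse_table SET TBLPROPERTIES (
        'delta.columnMapping.mode' = 'name',
        'delta.minReaderVersion' = '2',
        'delta.minWriterVersion' = '5'
    )
""")

# Metadata-only column changes, no rewrite of the underlying data.
spark.sql("ALTER TABLE my_lakehouse_table DROP COLUMN obsolete_column")
spark.sql("ALTER TABLE my_lakehouse_table RENAME COLUMN old_name TO new_name")
```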
I'm hoping it can be made possible to use these ALTER TABLE features (or some other equivalent feature) without breaking the sync with the SQL Analytics Endpoint and Power BI.