Complex data integration projects often have dependencies, which makes dependency handling an important aspect of job scheduling. You can now create dependent pipelines in your Azure Data Factory by adding dependencies among the tumbling window triggers in your pipelines. By creating a dependency, you guarantee that a trigger runs only after the trigger it depends on has executed successfully in your data factory.
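As a sketch, a dependency is declared in the downstream trigger's definition. The trigger and pipeline names below are hypothetical, and the exact schema may vary slightly by API version:

```json
{
  "name": "DownstreamTrigger",
  "properties": {
    "type": "TumblingWindowTrigger",
    "typeProperties": {
      "frequency": "Hour",
      "interval": 1,
      "startTime": "2019-06-01T00:00:00Z",
      "dependsOn": [
        {
          "type": "TumblingWindowTriggerDependencyReference",
          "referenceTrigger": {
            "referenceName": "UpstreamTrigger",
            "type": "TriggerReference"
          },
          "offset": "-01:00:00",
          "size": "01:00:00"
        }
      ]
    },
    "pipeline": {
      "pipelineReference": {
        "referenceName": "DownstreamPipeline",
        "type": "PipelineReference"
      }
    }
  }
}
```

The optional `offset` and `size` fields control which upstream window(s) the dependency is evaluated against; when omitted, the dependency defaults to the matching window of the upstream trigger.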
Gantt views are now available for monitoring data factory pipelines. Use them to quickly visualize your pipeline and activity runs. You can view the Gantt chart per pipeline, or group runs by the annotations or tags you created on your pipelines.
The Azure Data Factory team has added parameter support to the Mapping Data Flows public preview feature, allowing you to build configurable data transformation logic in a code-free design environment. If your requirements involve logic based on frequently changing attributes such as time, date, location, price, or cost, you can design the transformation logic once and parameterize those values inside your Data Flows.
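As a rough sketch, a parameter value can be supplied from the pipeline's Execute Data Flow activity; the names here (`TransformOrders`, `minPrice`) are hypothetical and the exact reference syntax may differ:

```json
{
  "name": "RunTransform",
  "type": "ExecuteDataFlow",
  "typeProperties": {
    "dataflow": {
      "referenceName": "TransformOrders",
      "type": "DataFlowReference",
      "parameters": {
        "minPrice": { "value": "100" }
      }
    }
  }
}
```

Inside the data flow itself, the parameter can then be referenced in expressions, for example in a filter condition such as `price > $minPrice`.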
Azure Data Factory has upgraded the Teradata connector with new features and enhancements. More specifically:
The Teradata connector now ships with a built-in driver, which saves you from having to install the driver manually before getting started.
You can now use the copy activity to ingest data from Teradata with out-of-the-box parallel copy to boost performance. With hash partition and dynamic range partition support, Data Factory can run parallel queries against your Teradata source to load data by partition concurrently for better performance.
The update also addresses issues, such as connection and query timeouts, that some customers hit earlier.
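As a hedged sketch, the parallel copy is configured on the copy activity's source; the partition column name below is hypothetical:

```json
"source": {
  "type": "TeradataSource",
  "partitionOption": "Hash",
  "partitionSettings": {
    "partitionColumnName": "customer_id"
  }
}
```

Setting `partitionOption` to `DynamicRange` is the alternative; in that mode the partition settings additionally take upper and lower bounds for the partition column.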
ADF now further enriches its PolyBase integration to support:
Loading data from Azure Data Lake Storage Gen2 with account key or managed identity authentication;
Loading data from Azure Blob storage configured with a VNet service endpoint, either as the original source or as the staging store; in this case, ADF automatically switches to the abfss:// scheme under the hood to create the external data source required by PolyBase.
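For illustration, a staged PolyBase load might be configured in a copy activity's typeProperties like the fragment below; the linked service name and staging path are hypothetical:

```json
"sink": {
  "type": "SqlDWSink",
  "allowPolyBase": true
},
"enableStaging": true,
"stagingSettings": {
  "linkedServiceName": {
    "referenceName": "StagingBlobStore",
    "type": "LinkedServiceReference"
  },
  "path": "stagingcontainer/path"
}
```

When the staging store is an ADLS Gen2 or VNet-enabled Blob account, the abfss:// switch described above happens automatically; no change to this configuration is needed.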
The Azure Data Factory copy activity now supports built-in data partitioning to ingest data from an Oracle database performantly. With physical partition and dynamic range partition support, Data Factory can run parallel queries against your Oracle source to load data by partition concurrently and achieve better performance.
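As a sketch, the dynamic range variant is configured on the copy activity's Oracle source; the column name and bounds below are hypothetical:

```json
"source": {
  "type": "OracleSource",
  "partitionOption": "DynamicRange",
  "partitionSettings": {
    "partitionColumnName": "ORDER_ID",
    "partitionLowerBound": "1",
    "partitionUpperBound": "100000"
  }
}
```

Alternatively, setting `partitionOption` to use the table's physical partitions lets Data Factory parallelize across the partitions Oracle already maintains, without specifying a range.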
Azure Data Factory lets you copy data from Azure Data Lake Storage (ADLS) Gen1 to Gen2 easily and performantly. To address a common ask, you can now also choose to preserve the access control lists (ACLs) set on ADLS Gen1 files and directories during the copy, so the same access control applies after you upgrade from Gen1 to Gen2.
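A minimal sketch of such a copy activity follows; the activity name is hypothetical, and the exact set of accepted `preserve` values may differ by service version:

```json
{
  "name": "CopyGen1ToGen2",
  "type": "Copy",
  "typeProperties": {
    "source": {
      "type": "AzureDataLakeStoreSource",
      "recursive": true
    },
    "sink": {
      "type": "AzureBlobFSSink"
    },
    "preserve": ["ACL", "owner", "group"]
  }
}
```

The `preserve` setting is the key addition: without it, files land in Gen2 with default permissions rather than the ACLs carried over from Gen1.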