
Qlik Product Innovation Blog



About 2,000 organizations use Attunity solutions to automatically replicate and transform data for analytics.  The biggest hands-on beneficiaries are architects and DBAs.  They can create and manage pipelines that copy live transactions from SAP, mainframe or Oracle systems to targets such as cloud-based data warehouses and data lakes, then refine that data for analytics – without waiting on ETL developers to hand-script everything.  (To explore the automated process and GUI, check out this video by my colleague Mike Tarallo.)

A big part of the value is integrating with many platforms.  Things change – you might start your analytics on AWS, then try certain workloads on Azure and Google.  Your company might acquire a business unit that runs production data on SQL Server rather than Oracle.  Your CIO might institute a data modernization effort that requires you to offload analytics queries from your DB2 z/OS systems to Kafka.  Whatever direction you take, it helps to have a consistent, automated data pipeline process.


Here at Attunity, now part of Qlik, we cast an ever-widening net with our pipeline automation.  Attunity version 6.5 extends our leadership in enterprise data integration with enhanced platform support, performance and security capabilities, and additional platform configuration options for greater flexibility.


Platform Integration Leadership

Attunity now integrates with new endpoints that organizations are embracing for both operations and analytics.  We have added support for Salesforce, MongoDB and Google Cloud SQL as sources, and Google BigQuery, Azure Databricks (beta) and Azure Data Lake Storage (ADLS) Gen2 as targets.

Attunity Data Integration: Any Source to Any Target



Here are a few example use cases that show what this looks like in action.


  • Next-Generation Data Lakes: Enterprises are shifting from complex, Hadoop-based data lakes on premises to simpler, more scalable and more efficient data lakes based on cloud-native data stores and Apache Spark. Databricks is a prime target.  With Attunity 6.5, you can now automatically create an operational data store in the Databricks Delta lake, then load, merge and format data in it from Salesforce and other sources – mainframe production systems, SAP SRM applications, you name it.  Source data and schema updates are automatically propagated into the ODS.  So your analytics are based on a current, accurate view of the business, spanning sales trends, customer activity, supply chain updates and more.


  • Multi-Cloud Initiatives: Many organizations are adopting a second or even third cloud service provider (CSP) to control cost, test one CSP’s advanced analytics tools, or reduce exposure to a CSP that is also a competitor. We continue to broaden our portfolio to flexibly support these decisions.  With Attunity 6.5, you can migrate your data warehouse from Amazon Redshift to Google BigQuery, or migrate your database from Google Cloud SQL to Azure SQL DB.



We continue to reduce latency and increase throughput via deep integration with the most common enterprise platforms.  Here is a look at the performance enhancements of Attunity Replicate 6.5.

  • Kafka Targets:  We have significantly improved performance when using the Avro format to encode messages produced by the Kafka target.
  • Log Stream Targets:  Replicate users can now adjust data compression levels with an intuitive slider.  Higher compression levels conserve disk space, while lower compression reduces latency.
  • SQL Server Sources:  Replicate has reduced latency by reading SQL Server Large Objects (LOBs) directly from transaction logs instead of performing a lookup.
  • Extended Parallel Load Support:  This throughput-enhancing feature has been extended to the following endpoints:
    • Sources: Teradata and Amazon RDS for SQL Server
    • Targets: Azure SQL Data Warehouse, and Snowflake on AWS and Azure
  • Target Change Processing:  Replicate can now increase throughput by applying batched changes to multiple tables concurrently on Amazon Redshift and Microsoft Azure SQL Data Warehouse targets, in addition to previously supported targets.
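The concurrent change-apply idea in the last bullet can be sketched in a few lines of Python.  This is purely illustrative (the table names and batch contents are hypothetical, and Replicate's internal apply logic is not public); it simply shows per-table batches dispatched to parallel workers instead of a serial loop:

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical batches of captured changes, grouped by target table.
batches = {
    "orders":    [("INSERT", 1), ("UPDATE", 2)],
    "customers": [("UPDATE", 7)],
    "shipments": [("DELETE", 3), ("INSERT", 4)],
}

def apply_batch(table, changes):
    # In a real pipeline this would issue one bulk statement per table
    # against the warehouse; here we just count the applied changes.
    return table, len(changes)

# Apply each table's batch on its own worker, rather than one table at a time.
with ThreadPoolExecutor(max_workers=len(batches)) as pool:
    results = dict(pool.map(lambda kv: apply_batch(*kv), batches.items()))

print(results)  # {'orders': 2, 'customers': 1, 'shipments': 2}
```

Because changes to different tables are independent of one another, fanning them out this way raises throughput without reordering changes within any single table.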
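The log stream compression tradeoff mentioned above (disk space versus latency) is easy to see with Python's standard zlib library.  A generic illustration only – Replicate's own codec and slider levels are internal to the product:

```python
import time
import zlib

# Mock log-stream payload: a repetitive change-record pattern.
payload = (b"change-record:" + bytes(range(256))) * 2000

# Higher levels spend more CPU time to squeeze out a smaller result.
for level in (1, 6, 9):
    start = time.perf_counter()
    compressed = zlib.compress(payload, level)
    elapsed_ms = (time.perf_counter() - start) * 1000
    print(f"level {level}: {len(compressed):>7} bytes in {elapsed_ms:.2f} ms")
```

Running this shows smaller output but longer compress times as the level rises – the same tradeoff the slider exposes.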



We have also improved our management flexibility, pushing the envelope with new configuration options for some of our most popular sources:

  • SQL Server:  Replicate now supports dynamic data masking, as well as non-sysadmin users configured with the Always On option.
  • Oracle:  Reset log operations are now captured by default, and Replicate can detect open transactions on all RAC nodes rather than just the primary node.
  • SAP HANA:  Users can now:
    • Replicate from both single and multi-tenant architectures on a single host.
    • Replicate partitioned tables and header columns with log-based CDC.
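Log-based CDC, mentioned in the last bullet, reads committed changes from the database's transaction log rather than repeatedly querying source tables.  A toy sketch of the pattern, with entirely hypothetical log entries (not Replicate's implementation):

```python
# Hypothetical change log: (lsn, operation, key, value).
log = [
    (101, "INSERT", "A", 1),
    (102, "UPDATE", "A", 2),
    (103, "DELETE", "A", None),
    (104, "INSERT", "B", 5),
]

target = {}
checkpoint = 100  # last log sequence number already applied

for lsn, op, key, value in log:
    if lsn <= checkpoint:
        continue  # already replicated; restarts are idempotent
    if op == "DELETE":
        target.pop(key, None)
    else:  # INSERT and UPDATE both upsert on the target
        target[key] = value
    checkpoint = lsn  # advance the checkpoint after each applied change

print(target, checkpoint)  # {'B': 5} 104
```

Tracking a checkpoint against the log is what lets a CDC pipeline resume after interruption without missing or double-applying changes.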



Finally, Attunity 6.5 hardens security controls on a number of fronts to ensure data integrity and support compliance efforts across all platforms.  For example, Attunity Replicate now includes a Secure Sockets Layer (SSL) option to encrypt client-server connections for email notifications.  This SSL capability also verifies the “peer” and “host” fields by validating server certificates.  We have also added support for non-privileged accounts on Windows, enabling users to run Replicate with restricted capabilities compared with the default account.  And new installations now require strong passwords.
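The peer and host verification described above corresponds to the standard TLS certificate checks.  In Python's stdlib ssl module, for instance, the equivalent client-side settings look like this (a generic illustration of the checks, not Replicate configuration; the SMTP host name is hypothetical):

```python
import ssl

# A default client context enables both checks the SSL option describes:
#  - verify_mode=CERT_REQUIRED validates the server ("peer") certificate chain
#  - check_hostname=True matches the certificate against the expected host name
context = ssl.create_default_context()

assert context.verify_mode == ssl.CERT_REQUIRED
assert context.check_hostname is True

# Wrapping a socket with server_hostname set triggers host-name verification,
# e.g.: context.wrap_socket(raw_sock, server_hostname="smtp.example.com")
```

Skipping either check would leave the encrypted channel open to man-in-the-middle substitution, which is why both are verified together.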


To download the software and try it for yourself, check out the Attunity Replicate Test Drive.


Article by Kevin Petrie

Product Marketing Director


Check out this video to see it in action: