Hello,
I am looking for some help related to transformations. I want to add a column to the target data, say "transformid", and populate it with an auto-incrementing value, so that for each insert in the source data the target data counts 1, 2, 3, and so on.
Can anyone let me know whether this is possible, and if so, how do I build the expression?
Thank you,
Dendy
If all rows are to be unique then you could use RowNo() to get 1 for the first row, 2 for the second, and so on.
LOAD
RowNo() as IncrementID,
FieldA,
FieldB,
Etc
FROM [Source Data];
Edit: I accidentally wrote RecNo() in my original script sample; I've changed it to RowNo(). RecNo() gives you the row number of the source table, not the target table.
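The difference between the two functions is easiest to see in a small script sketch (table and field names here are illustrative, and the WHERE clause exists only to make the two numbers diverge):

```
Src:
LOAD * INLINE [
FieldA
a
b
c
];

Target:
LOAD
    RecNo() as SourceRowNumber,  // row number read from the source table
    RowNo() as IncrementID,      // row number written to the target table
    FieldA
RESIDENT Src
WHERE FieldA <> 'b';             // skipping a row makes the values differ
```

Here row 'c' is the third record read (RecNo = 3) but only the second record loaded (RowNo = 2), which is why RowNo() is the one to use for a target-side sequence.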
Hello Dendy,
Are you talking about Qlik Replicate transformation?
It's not difficult in Replicate, but I think the best approach is to use the database's own auto-increment functionality (auto-increment columns in SQL Server or MySQL, sequences in Oracle, etc.). The advantages of using the DB's functions rather than a Qlik Replicate transformation are:
1. The task definition is simpler and easier to maintain;
2. Better performance (you need not do the computation in Replicate, and there is no unnecessary network traffic, especially for a large volume of changes).
I'm not sure whether your target DB has such a capability. If not, we can try to find other options.
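For reference, the auto-increment syntax differs per database. A sketch of the target-side DDL John describes (table and column names are illustrative):

```sql
-- SQL Server: IDENTITY column
CREATE TABLE target_table (
    transformid BIGINT IDENTITY(1,1) NOT NULL,
    field_a     VARCHAR(100),
    field_b     VARCHAR(100)
);

-- MySQL: AUTO_INCREMENT column (must be a key)
CREATE TABLE target_table (
    transformid BIGINT NOT NULL AUTO_INCREMENT,
    field_a     VARCHAR(100),
    field_b     VARCHAR(100),
    PRIMARY KEY (transformid)
);

-- Oracle 12c and later: identity column
-- (older versions use a SEQUENCE, typically combined with a trigger)
CREATE TABLE target_table (
    transformid NUMBER GENERATED BY DEFAULT AS IDENTITY,
    field_a     VARCHAR2(100),
    field_b     VARCHAR2(100)
);
```

With any of these, each row Replicate inserts gets its number assigned by the database itself, so no expression is needed in the Replicate task.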
thanks,
John WANG.
I am sorry, I missed the detail that you were posting in the Qlik Replicate section. My suggestion above might not be a workable solution for you. It can, however, be useful when working with the QlikView/Qlik Sense script.
Hi john,
So can we generate a sequence number within Qlik replicate?
If yes, then can you show us how?
Thanks,
Harsh
Hello @harsh2 ,
Personally, I do not think it is a good approach to generate such a sequence number in Qlik Replicate. It is much better to do it in the target DB. The advantages are: it is easier (no complex expression in Replicate); faster (the values need not be transferred over the network); and more robust (it avoids duplicate numbers in case of a network error or other exception).
Regards,
John.
Hi @john_wang
I am interested in this post. We tried adding an identity column on Azure SQL Server, configured as RECORD_ID BIGINT IDENTITY(1,1) NOT NULL, and it works as expected.
However, for a target table in Azure Databricks Lakehouse (Delta), with the column configured as RECORD_ID NOT NULL GENERATED BY DEFAULT AS IDENTITY (START WITH 1 INCREMENT BY 1), the task fails with the error message "[DELTA_NOT_NULL_CONSTRAINT_VIOLATED] NOT NULL constraint violation for column: RECORD_ID."
We have raised a ticket with Qlik Support, but the issue is still under analysis in the lab.
Can you help speed this up? This is a production issue.
Hello Anto, @AgusBudianto
Thanks for the update. I'm pasting the update from your other article.
It seems that using the current timestamp is the best approach to identify the latest updated rows. In Databricks, a column defined as GENERATED BY DEFAULT AS IDENTITY cannot be directly updated for existing data, and it is not populated when records are loaded into a table via the LOAD method. It is populated through INSERT operations, but Qlik Replicate does not support transactional apply mode for the Databricks endpoint.
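The distinction can be sketched in Databricks SQL (the table name is illustrative, and the behavior shown is assumed per the explanation above, not a definitive statement of the endpoint's internals):

```sql
-- Delta table with an identity column, as in the failing configuration
CREATE TABLE demo_target (
    RECORD_ID BIGINT GENERATED BY DEFAULT AS IDENTITY (START WITH 1 INCREMENT BY 1),
    PAYLOAD   STRING
) USING DELTA;

-- Plain INSERT: RECORD_ID is generated automatically by Delta
INSERT INTO demo_target (PAYLOAD) VALUES ('row one');

-- Bulk/staged load paths write the column list explicitly; if the loader
-- supplies no value for RECORD_ID, no identity value is generated and the
-- NOT NULL constraint fails, producing DELTA_NOT_NULL_CONSTRAINT_VIOLATED.
```

This is why the same identity approach that works on Azure SQL Server does not carry over to the Databricks (Delta) target when Replicate uses bulk apply.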
Regards,
John.
Thank you @john_wang for convincing us that using the current timestamp is the best approach.