<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>idea Re: Support tomb-stoning methodology for Kafka target endpoints in Suggest an Idea</title>
    <link>https://community.qlik.com/t5/Suggest-an-Idea/Support-tomb-stoning-methodology-for-Kafka-target-endpoints/idc-p/1797195#M5686</link>
    <description>&lt;P&gt;I vote for this also!&lt;/P&gt;</description>
    <pubDate>Tue, 06 Apr 2021 13:17:50 GMT</pubDate>
    <dc:creator>pgalluzzo</dc:creator>
    <dc:date>2021-04-06T13:17:50Z</dc:date>
    <item>
      <title>Support tomb-stoning methodology for Kafka target endpoints</title>
      <link>https://community.qlik.com/t5/Suggest-an-Idea/Support-tomb-stoning-methodology-for-Kafka-target-endpoints/idi-p/1720101</link>
      <description>&lt;P&gt;When there is a change to PK fields on the source system, Replicate sends to Kafka the previous PK data as part of the envelope for the update message. However, the update message has the new PK data as its key, so it can go to a different Kafka partition than the old message. This can impact downstream consumers, since there are no ordering guarantees for messages on different partitions.&lt;BR /&gt;&lt;BR /&gt;The idea is to send to Kafka a tombstone record for keys that no longer exist on the source system, rather than only adding the previous PK as metadata (the “beforeData” field) in the envelope of a message with a completely different key.&lt;/P&gt;&lt;P&gt;Here is a use case: suppose there is an Email table in the source database whose primary key fields are PersonId and EmailType. Suppose there is a record where PersonId=1 and EmailType=Office. Finally, suppose someone were to UPDATE that record to have PersonId=1 and EmailType=Personal.&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;Current behavior:&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;Partition A:&lt;BR /&gt;&lt;EM&gt;Message1: Key - { “personId”: 1, “emailType”: “office” } / Value – { “address”: something@domain.com , “beforeData”: null }&lt;/EM&gt;&lt;BR /&gt;&lt;BR /&gt;Partition B:&lt;BR /&gt;&lt;EM&gt;Message1: Key - { “personId”: 1, “emailType”: “personal” } / Value – { “address”: something@domain.com , “beforeData”: { “personId”: 1, “emailType”: “office” } }&lt;/EM&gt;&lt;/P&gt;&lt;P&gt;With the current behavior, the consumer is required to inspect the “beforeData” of each message and create a tombstone record itself, potentially processing the original key from the new partition (B). This adds complexity on the consuming side.&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;Tomb-stoning methodology:&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;Partition A:&lt;BR /&gt;&lt;EM&gt;Message1: Key - { “personId”: 1, “emailType”: “office” } / Value – { “address”: something@domain.com }&lt;/EM&gt;&lt;BR /&gt;&lt;EM&gt;Message2: Key - { “personId”: 1, “emailType”: “office” } / Value – null&lt;/EM&gt;&lt;BR /&gt;&lt;BR /&gt;Partition B:&lt;BR /&gt;&lt;EM&gt;Message1: Key - { “personId”: 1, “emailType”: “personal” } / Value – { “address”: something@domain.com }&lt;/EM&gt;&lt;/P&gt;&lt;P&gt;With the tomb-stoning methodology, Message2 would be the tombstone message for the key { “personId”: 1, “emailType”: “office” }, emitted by the producer and avoiding extra complexity on the consuming side.&lt;/P&gt;</description>
      <pubDate>Thu, 18 Jun 2020 19:29:22 GMT</pubDate>
      <guid>https://community.qlik.com/t5/Suggest-an-Idea/Support-tomb-stoning-methodology-for-Kafka-target-endpoints/idi-p/1720101</guid>
      <dc:creator>Jose_Pena</dc:creator>
      <dc:date>2020-06-18T19:29:22Z</dc:date>
    </item>
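The PK-change scenario described in this idea can be sketched in a few lines of Python. This is a hypothetical illustration of the requested producer-side behavior, not Replicate's actual implementation: on an update whose key changed, emit the new-key message plus a null-value tombstone for the old key, so log compaction can retire the stale record. The function name and event shape are invented for illustration.

```python
# Hypothetical sketch of the proposed tombstone behavior for a CDC update.
# Returns the (key, value) pairs a tombstone-aware producer would send.
import json

def messages_for_update(old_key, new_key, value):
    """Build messages for an update; add a tombstone if the PK changed."""
    out = [(json.dumps(new_key), json.dumps(value))]
    if old_key != new_key:
        # Tombstone: same key as the stale record, null body, so
        # compaction eventually removes it from its own partition.
        out.append((json.dumps(old_key), None))
    return out

msgs = messages_for_update(
    {"personId": 1, "emailType": "office"},
    {"personId": 1, "emailType": "personal"},
    {"address": "something@domain.com"},
)
```

Note that both messages carry serialized keys, so each lands on the partition determined by its own key; the tombstone goes to the old record's partition (A in the example above), which is exactly what the consumer cannot easily do from "beforeData" alone.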
    <item>
      <title>Re: Support tomb-stoning methodology for Kafka target endpoints - Status changed to: Open - Collecting Feedback</title>
      <link>https://community.qlik.com/t5/Suggest-an-Idea/Support-tomb-stoning-methodology-for-Kafka-target-endpoints/idc-p/1725382#M2766</link>
      <description />
      <pubDate>Mon, 06 Jul 2020 13:46:58 GMT</pubDate>
      <guid>https://community.qlik.com/t5/Suggest-an-Idea/Support-tomb-stoning-methodology-for-Kafka-target-endpoints/idc-p/1725382#M2766</guid>
      <dc:creator>Ola_Mayer</dc:creator>
      <dc:date>2020-07-06T13:46:58Z</dc:date>
    </item>
    <item>
      <title>Re: Support tomb-stoning methodology for Kafka target endpoints</title>
      <link>https://community.qlik.com/t5/Suggest-an-Idea/Support-tomb-stoning-methodology-for-Kafka-target-endpoints/idc-p/1797195#M5686</link>
      <description>&lt;P&gt;I vote for this also!&lt;/P&gt;</description>
      <pubDate>Tue, 06 Apr 2021 13:17:50 GMT</pubDate>
      <guid>https://community.qlik.com/t5/Suggest-an-Idea/Support-tomb-stoning-methodology-for-Kafka-target-endpoints/idc-p/1797195#M5686</guid>
      <dc:creator>pgalluzzo</dc:creator>
      <dc:date>2021-04-06T13:17:50Z</dc:date>
    </item>
    <item>
      <title>Re: Support tomb-stoning methodology for Kafka target endpoints</title>
      <link>https://community.qlik.com/t5/Suggest-an-Idea/Support-tomb-stoning-methodology-for-Kafka-target-endpoints/idc-p/1797205#M5689</link>
      <description>&lt;P&gt;I'd at least like to see deletes on the source endpoint replicated to Kafka configurably: either use Replicate's classic method for generating "delete" messages, or generate a true Kafka tombstone message where the key is provided but the message body is NULL. This configuration option should apply regardless of whether the source system PK was changed or the record was simply deleted.&lt;/P&gt;</description>
      <pubDate>Tue, 06 Apr 2021 13:31:33 GMT</pubDate>
      <guid>https://community.qlik.com/t5/Suggest-an-Idea/Support-tomb-stoning-methodology-for-Kafka-target-endpoints/idc-p/1797205#M5689</guid>
      <dc:creator>BradA</dc:creator>
      <dc:date>2021-04-06T13:31:33Z</dc:date>
    </item>
    <item>
      <title>Re: Support tomb-stoning methodology for Kafka target endpoints</title>
      <link>https://community.qlik.com/t5/Suggest-an-Idea/Support-tomb-stoning-methodology-for-Kafka-target-endpoints/idc-p/1825321#M6886</link>
      <description>&lt;P&gt;Any movement on this? This is a common feature in most other change data capture products, including several of your competitors.&lt;/P&gt;&lt;P&gt;Without this feature, data written into compacted Kafka topics violates the compaction contract whenever the topic is partitioned: a record will show as active in 2 partitions, even though it has really just had a primary key update.&lt;/P&gt;</description>
      <pubDate>Wed, 28 Jul 2021 21:24:10 GMT</pubDate>
      <guid>https://community.qlik.com/t5/Suggest-an-Idea/Support-tomb-stoning-methodology-for-Kafka-target-endpoints/idc-p/1825321#M6886</guid>
      <dc:creator>MichaelMockus</dc:creator>
      <dc:date>2021-07-28T21:24:10Z</dc:date>
    </item>
    <item>
      <title>Re: Support tomb-stoning methodology for Kafka target endpoints</title>
      <link>https://community.qlik.com/t5/Suggest-an-Idea/Support-tomb-stoning-methodology-for-Kafka-target-endpoints/idc-p/1839103#M7416</link>
      <description>&lt;P&gt;This is causing data integrity issues throughout our system. When will it be fixed?&lt;/P&gt;</description>
      <pubDate>Fri, 24 Sep 2021 17:23:21 GMT</pubDate>
      <guid>https://community.qlik.com/t5/Suggest-an-Idea/Support-tomb-stoning-methodology-for-Kafka-target-endpoints/idc-p/1839103#M7416</guid>
      <dc:creator>tkdrahn</dc:creator>
      <dc:date>2021-09-24T17:23:21Z</dc:date>
    </item>
    <item>
      <title>Re: Support tomb-stoning methodology for Kafka target endpoints</title>
      <link>https://community.qlik.com/t5/Suggest-an-Idea/Support-tomb-stoning-methodology-for-Kafka-target-endpoints/idc-p/1859801#M7808</link>
      <description>&lt;P&gt;Hi,&lt;/P&gt;&lt;P&gt;Thanks to all that commented and voted on this ideation. We hear you and are looking to address this as one of the higher priority items in our backlog. I can't provide an ETA yet, but I will when I have one.&lt;/P&gt;&lt;P&gt;Regards,&lt;/P&gt;&lt;P&gt;Tzachi&lt;/P&gt;</description>
      <pubDate>Tue, 16 Nov 2021 10:25:36 GMT</pubDate>
      <guid>https://community.qlik.com/t5/Suggest-an-Idea/Support-tomb-stoning-methodology-for-Kafka-target-endpoints/idc-p/1859801#M7808</guid>
      <dc:creator>Tzachi_Nissim</dc:creator>
      <dc:date>2021-11-16T10:25:36Z</dc:date>
    </item>
    <item>
      <title>Re: Support tomb-stoning methodology for Kafka target endpoints</title>
      <link>https://community.qlik.com/t5/Suggest-an-Idea/Support-tomb-stoning-methodology-for-Kafka-target-endpoints/idc-p/1926534#M9431</link>
      <description>&lt;P&gt;Hi folks,&lt;/P&gt;&lt;P&gt;This has made it up the priority list and we are planning to enable the Replicate Engine to support updates on PKs in the Nov 22 release, which will enable support of tombstoning.&lt;/P&gt;&lt;P&gt;--bobv--&lt;/P&gt;</description>
      <pubDate>Wed, 04 May 2022 21:08:46 GMT</pubDate>
      <guid>https://community.qlik.com/t5/Suggest-an-Idea/Support-tomb-stoning-methodology-for-Kafka-target-endpoints/idc-p/1926534#M9431</guid>
      <dc:creator>bobvecchione</dc:creator>
      <dc:date>2022-05-04T21:08:46Z</dc:date>
    </item>
    <item>
      <title>From now on, please track this idea from the Ideation por...</title>
      <link>https://community.qlik.com/t5/Suggest-an-Idea/Support-tomb-stoning-methodology-for-Kafka-target-endpoints/idc-p/2101565#M14804</link>
      <description>&lt;P&gt;From now on, please track this idea from the Ideation portal.&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;&lt;A title="Link to new idea" href="https://ideation.qlik.com/app/#/case/274680" target="_blank" rel="noopener"&gt;Link to new idea&lt;/A&gt;&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;Meghann&lt;/P&gt;&lt;P data-unlink="true"&gt;&lt;EM&gt;NOTE: Upon clicking this link, 2 tabs may open - please feel free to close the one with a login page. If you &lt;STRONG&gt;only&lt;/STRONG&gt; see 1 tab with the login page, please try clicking this link first: &lt;STRONG&gt;&lt;A title="Authenticate me!" href="#" target="_blank" rel="noopener"&gt;Authenticate me!&lt;/A&gt;&lt;/STRONG&gt;, then try the link above again. Ensure pop-up blocker is off.&lt;/EM&gt;&lt;/P&gt;</description>
      <pubDate>Wed, 02 Aug 2023 16:35:35 GMT</pubDate>
      <guid>https://community.qlik.com/t5/Suggest-an-Idea/Support-tomb-stoning-methodology-for-Kafka-target-endpoints/idc-p/2101565#M14804</guid>
      <dc:creator>Meghann_MacDonald</dc:creator>
      <dc:date>2023-08-02T16:35:35Z</dc:date>
    </item>
    <item>
      <title>Re: Support tomb-stoning methodology for Kafka target endpoints - Status changed to: Closed - Archived</title>
      <link>https://community.qlik.com/t5/Suggest-an-Idea/Support-tomb-stoning-methodology-for-Kafka-target-endpoints/idc-p/2101566#M14805</link>
      <description />
      <pubDate>Wed, 02 Aug 2023 16:35:36 GMT</pubDate>
      <guid>https://community.qlik.com/t5/Suggest-an-Idea/Support-tomb-stoning-methodology-for-Kafka-target-endpoints/idc-p/2101566#M14805</guid>
      <dc:creator>Ideation</dc:creator>
      <dc:date>2023-08-02T16:35:36Z</dc:date>
    </item>
  </channel>
</rss>

