<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: Reached Time out while waiting for acks from kafka in Qlik Replicate</title>
    <link>https://community.qlik.com/t5/Qlik-Replicate/Reached-Time-out-while-waiting-for-acks-from-kafka/m-p/1889957#M1814</link>
    <description>&lt;P&gt;I know nothing specific about loading to Kafka, but in another recent post, also by&amp;nbsp;&lt;a href="https://community.qlik.com/t5/user/viewprofilepage/user-id/152714"&gt;@RichJ&lt;/a&gt;, he mentions "batch.size and linger.ms for kafka target", which according to&amp;nbsp;&lt;a href="https://community.qlik.com/t5/user/viewprofilepage/user-id/113691"&gt;@lyka&lt;/a&gt;&amp;nbsp;can be set through the internal parameter rdkafkaProperties. That sounds relevant. What are those values for the test?&lt;/P&gt;
&lt;P&gt;In the task design there might be a relevant parameter under Full Load - Tuning: "Commit rate during full load". What is the selected value there (default 10000)? If it is NOT the default, you can find it in the exported task JSON as "max_transaction_size" under "target_settings". Maybe try reducing it to, for example, 1000 or 1234? Any guess as to why I might suggest 1234? Well, it is a value which does not occur in nature, so to speak, so if you use it you can quickly find it again with a grep/findstr over the JSON files. There are just too many '1000's out there &lt;span class="lia-unicode-emoji" title=":slightly_smiling_face:"&gt;🙂&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;hth,&lt;/P&gt;
&lt;P&gt;Hein&lt;/P&gt;</description>
    <pubDate>Mon, 07 Feb 2022 21:01:44 GMT</pubDate>
    <dc:creator>Heinvandenheuvel</dc:creator>
    <dc:date>2022-02-07T21:01:44Z</dc:date>
    <item>
      <title>Reached Time out while waiting for acks from kafka</title>
      <link>https://community.qlik.com/t5/Qlik-Replicate/Reached-Time-out-while-waiting-for-acks-from-kafka/m-p/1889941#M1813</link>
      <description>&lt;P&gt;I have a table with a CLOB column to replicate from an MS SQL db to Confluent Kafka in Azure. The CLOB size can be 1MB, but when I use "Limit LOB size to (KB)" = 32K or above, the replication fails with the following error:&lt;/P&gt;
&lt;PRE&gt;Task 'KAFKA_TGT_MS_SRC' encountered a fatal error (repository.c:5794)
00014612: 2022-02-07T11:06:56 [TARGET_LOAD ]E: Reached Time out while waiting for acks from kafka. [1020401] (queue_utils.c:158)
00014612: 2022-02-07T11:06:56 [TARGET_LOAD ]E: Handling End of table 'dbo'.'Source_data' loading failed by subtask 1 thread 1 [1020401] (endpointshell.c:2977)
00014612: 2022-02-07T11:06:56 [TARGET_LOAD ]E: Error executing data handler [1020401] (streamcomponent.c:1998)&lt;/PRE&gt;
&lt;P&gt;The error does not show up when using "Limit LOB size to (KB)" = 16K; however, the CLOB column data was then truncated, even with compression.&lt;/P&gt;
&lt;P&gt;Thanks for the help,&lt;/P&gt;
&lt;P&gt;Richard&lt;/P&gt;</description>
      <pubDate>Mon, 07 Feb 2022 20:10:46 GMT</pubDate>
      <guid>https://community.qlik.com/t5/Qlik-Replicate/Reached-Time-out-while-waiting-for-acks-from-kafka/m-p/1889941#M1813</guid>
      <dc:creator>RichJ</dc:creator>
      <dc:date>2022-02-07T20:10:46Z</dc:date>
    </item>
    <item>
      <title>Re: Reached Time out while waiting for acks from kafka</title>
      <link>https://community.qlik.com/t5/Qlik-Replicate/Reached-Time-out-while-waiting-for-acks-from-kafka/m-p/1889957#M1814</link>
      <description>&lt;P&gt;I know nothing specific about loading to Kafka, but in another recent post, also by&amp;nbsp;&lt;a href="https://community.qlik.com/t5/user/viewprofilepage/user-id/152714"&gt;@RichJ&lt;/a&gt;, he mentions "batch.size and linger.ms for kafka target", which according to&amp;nbsp;&lt;a href="https://community.qlik.com/t5/user/viewprofilepage/user-id/113691"&gt;@lyka&lt;/a&gt;&amp;nbsp;can be set through the internal parameter rdkafkaProperties. That sounds relevant. What are those values for the test?&lt;/P&gt;
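&lt;P&gt;A minimal sketch of what that could look like, assuming rdkafkaProperties is a straight pass-through of librdkafka producer settings given as a semicolon-separated list (the numbers are placeholders to experiment with, not recommendations):&lt;/P&gt;
&lt;PRE&gt;rdkafkaProperties = batch.size=1048576;linger.ms=100&lt;/PRE&gt;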
&lt;P&gt;In the task design there might be a relevant parameter under Full Load - Tuning: "Commit rate during full load". What is the selected value there (default 10000)? If it is NOT the default, you can find it in the exported task JSON as "max_transaction_size" under "target_settings". Maybe try reducing it to, for example, 1000 or 1234? Any guess as to why I might suggest 1234? Well, it is a value which does not occur in nature, so to speak, so if you use it you can quickly find it again with a grep/findstr over the JSON files. There are just too many '1000's out there &lt;span class="lia-unicode-emoji" title=":slightly_smiling_face:"&gt;🙂&lt;/span&gt;&lt;/P&gt;
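&lt;P&gt;A minimal sketch of how that shows up in the exported task JSON (surrounding keys omitted):&lt;/P&gt;
&lt;PRE&gt;"target_settings": {
  "max_transaction_size": 1234
}&lt;/PRE&gt;
&lt;P&gt;and of how the 1234 marker pays off afterwards, on Linux or Windows respectively (file names are just examples):&lt;/P&gt;
&lt;PRE&gt;grep -l 1234 task_*.json
findstr /m "1234" task_*.json&lt;/PRE&gt;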
&lt;P&gt;hth,&lt;/P&gt;
&lt;P&gt;Hein&lt;/P&gt;</description>
      <pubDate>Mon, 07 Feb 2022 21:01:44 GMT</pubDate>
      <guid>https://community.qlik.com/t5/Qlik-Replicate/Reached-Time-out-while-waiting-for-acks-from-kafka/m-p/1889957#M1814</guid>
      <dc:creator>Heinvandenheuvel</dc:creator>
      <dc:date>2022-02-07T21:01:44Z</dc:date>
    </item>
    <item>
      <title>Re: Reached Time out while waiting for acks from kafka</title>
      <link>https://community.qlik.com/t5/Qlik-Replicate/Reached-Time-out-while-waiting-for-acks-from-kafka/m-p/1892199#M1844</link>
      <description>&lt;P&gt;Hi&amp;nbsp;&lt;a href="https://community.qlik.com/t5/user/viewprofilepage/user-id/152714"&gt;@RichJ&lt;/a&gt;&amp;nbsp;,&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;It seems like the issue is with your Kafka broker's performance.&lt;/P&gt;
&lt;P&gt;Please add the internal parameters below to your Kafka endpoint:&lt;/P&gt;
&lt;P&gt;Set the internal parameters resultsWaitMaxTimes=1000 and resultsWaitTimeoutMS=20000.&lt;/P&gt;
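&lt;P&gt;A quick sketch of a follow-up check, assuming the exported task JSON includes the endpoint settings (the file name is just an example), to confirm the parameters were picked up:&lt;/P&gt;
&lt;PRE&gt;grep -E "resultsWaitMaxTimes|resultsWaitTimeoutMS" exported_task.json
findstr "resultsWaitMaxTimes resultsWaitTimeoutMS" exported_task.json&lt;/PRE&gt;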
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;NOTE: You can tune the values based on the timeout response.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Thanks,&lt;/P&gt;
&lt;P&gt;Swathi&lt;/P&gt;</description>
      <pubDate>Sat, 12 Feb 2022 03:37:00 GMT</pubDate>
      <guid>https://community.qlik.com/t5/Qlik-Replicate/Reached-Time-out-while-waiting-for-acks-from-kafka/m-p/1892199#M1844</guid>
      <dc:creator>SwathiPulagam</dc:creator>
      <dc:date>2022-02-12T03:37:00Z</dc:date>
    </item>
  </channel>
</rss>

