<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Upload column as JSON to target (store changes) in Qlik Replicate</title>
    <link>https://community.qlik.com/t5/Qlik-Replicate/Upload-column-as-json-to-target-store-changes/m-p/2541270#M15801</link>
    <description>&lt;P&gt;&lt;SPAN data-teams="true"&gt;I am trying to load data in Qlik Replicate from MongoDB to Azure Data Lake Storage (changes feed) as JSON or Parquet. I am loading the document part into the column _doc, but it keeps ending up as an NCLOB type that Databricks reads as string/text.&lt;BR /&gt;&lt;BR /&gt;Is there any way to save this document/column in a proper JSON format that is readable downstream by Databricks?&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-teams="true"&gt;I would appreciate any tips on settings I can try.&lt;BR /&gt;&lt;BR /&gt;A second aspect is to check the length of the column mentioned above, add that value as a new column, and write it to the target.&lt;BR /&gt;length($_doc) as a global rule or at the table level doesn't work.&lt;/SPAN&gt;&lt;/P&gt;</description>
    <pubDate>Wed, 21 Jan 2026 15:41:24 GMT</pubDate>
    <dc:creator>data_engineer_whop</dc:creator>
    <dc:date>2026-01-21T15:41:24Z</dc:date>
    <item>
      <title>Upload column as JSON to target (store changes)</title>
      <link>https://community.qlik.com/t5/Qlik-Replicate/Upload-column-as-json-to-target-store-changes/m-p/2541270#M15801</link>
      <description>&lt;P&gt;&lt;SPAN data-teams="true"&gt;I am trying to load data in Qlik Replicate from MongoDB to Azure Data Lake Storage (changes feed) as JSON or Parquet. I am loading the document part into the column _doc, but it keeps ending up as an NCLOB type that Databricks reads as string/text.&lt;BR /&gt;&lt;BR /&gt;Is there any way to save this document/column in a proper JSON format that is readable downstream by Databricks?&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-teams="true"&gt;I would appreciate any tips on settings I can try.&lt;BR /&gt;&lt;BR /&gt;A second aspect is to check the length of the column mentioned above, add that value as a new column, and write it to the target.&lt;BR /&gt;length($_doc) as a global rule or at the table level doesn't work.&lt;/SPAN&gt;&lt;/P&gt;</description>
      <pubDate>Wed, 21 Jan 2026 15:41:24 GMT</pubDate>
      <guid>https://community.qlik.com/t5/Qlik-Replicate/Upload-column-as-json-to-target-store-changes/m-p/2541270#M15801</guid>
      <dc:creator>data_engineer_whop</dc:creator>
      <dc:date>2026-01-21T15:41:24Z</dc:date>
    </item>
    <item>
      <title>Re: Upload column as JSON to target (store changes)</title>
      <link>https://community.qlik.com/t5/Qlik-Replicate/Upload-column-as-json-to-target-store-changes/m-p/2541303#M15802</link>
      <description>&lt;P&gt;Hi,&lt;BR /&gt;&lt;BR /&gt;Since MongoDB stores documents in a binary format (BSON), that is the reason you see the column stored as the NCLOB data type. I am not sure if it can be changed to JSON format. I suggest you open a Salesforce case and attach the task diagnostic package, the DDL of the table, and a description of the request, so we can investigate further and see if we can find a way to meet your requirement.&lt;/P&gt;
&lt;P&gt;Thanks &amp;amp; regards,&lt;BR /&gt;Orit&lt;/P&gt;</description>
      <pubDate>Thu, 22 Jan 2026 08:50:02 GMT</pubDate>
      <guid>https://community.qlik.com/t5/Qlik-Replicate/Upload-column-as-json-to-target-store-changes/m-p/2541303#M15802</guid>
      <dc:creator>OritA</dc:creator>
      <dc:date>2026-01-22T08:50:02Z</dc:date>
    </item>
  </channel>
</rss>

