Hi Community,
I'm facing an issue with my Talend job while trying to bulk load nearly 100 tables from MySQL to Snowflake. The problem is that when tDBOutput creates tables in Snowflake, some columns are automatically set as NOT NULL, even though I haven't explicitly defined them as such.
This is causing errors during data loading when NULL values are present in those columns.
Snowflake table query:
Job Design & Settings:
tDBInput (MySQL Source):
Using Dynamic Schema Type to handle all 100 tables.
No manual column definitions to avoid complexity.
tMap (Transformation Layer):
No transformations or modifications applied.
tDBOutput (Snowflake Target):
Advanced settings in tDBOutput:
Error:
Caused by: net.snowflake.client.loader.Loader$ConnectionError: State: COPY_INTO_TABLE, COPY INTO "TEST_20250313_105751_794_1" FROM '@~/TEST/INSERT/20250313_105751_794_2BVBLI_1' on_error='ABORT_STATEMENT' file_format=( field_optionally_enclosed_by='"' empty_field_as_null=true), NULL result in a non-nullable column
Challenges:
Since I'm dealing with 100+ tables, manually defining column constraints is not feasible. I need a way to ensure that all columns are created as NULLABLE by default in Snowflake.
Questions:
I would appreciate any guidance or best practices for handling this issue efficiently.
Thanks in advance!
Best regards,
Nikhil
Hello,
Pre-create target tables manually in Snowflake with all columns explicitly defined as NULLABLE before loading data.
In your Talend job, don't let tDBOutput auto-create the tables (for example, set "Action on table" to "None"); point it at the pre-created tables instead.
If you must create tables dynamically, run custom SQL after creation to relax the constraints before loading (Snowflake supports ALTER TABLE &lt;table&gt; ALTER COLUMN &lt;column&gt; DROP NOT NULL).
Consider handling column metadata in a tMap or routine to mark fields as NULLABLE before output.
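The ALTER TABLE approach can be scripted rather than done by hand for 100+ tables. Below is a minimal sketch that only builds the SQL statements; the table and column names are illustrative placeholders. In a real job you would discover the NOT NULL columns from Snowflake's INFORMATION_SCHEMA.COLUMNS (IS_NULLABLE = 'NO') and execute the statements through the Snowflake Python connector or a tDBRow component.

```python
def drop_not_null_statements(table: str, columns: list[str]) -> list[str]:
    """Build one ALTER statement per column to make it NULLABLE in Snowflake.

    Identifiers are double-quoted to match the quoted names Talend
    generates (as seen in the COPY INTO error above).
    """
    return [
        f'ALTER TABLE "{table}" ALTER COLUMN "{col}" DROP NOT NULL;'
        for col in columns
    ]

# Hypothetical metadata for one of the ~100 tables:
stmts = drop_not_null_statements("TEST_TABLE", ["ID", "NAME", "CREATED_AT"])
for s in stmts:
    print(s)
```

You could run the generated statements in a post-creation subjob (e.g., tDBRow) so every dynamically created table is relaxed to NULLABLE before the bulk load starts.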
Talend help on defining nullable values / outputting nulls in mappings
https://help.qlik.com/talend/en-US/components/8.0/tmap/outputting-blank-values-as-null-or-fixed-valu...
Thanks,
Gourav