Anonymous
Not applicable

tS3List Connectivity Issues

All,

I am attempting to read from Amazon S3 and am receiving the following error:

connecting to socket on port 3997
connected
The bucket you are attempting to access must be addressed using the specified endpoint. Please send all future requests to this endpoint.
disconnected

I can use the same credentials with CloudBerry and get access.

Any thoughts appreciated.
6 Replies
Anonymous
Not applicable
Author

Hi,
Could you post a screenshot of your component settings?
Also note that the Amazon S3 components have only been available since 5.4.0M3: https://jira.talendforge.org/browse/TDI-22143.
Best regards
Sabrina
Anonymous
Not applicable
Author

Thank you for replying.

Yes, I am using 5.4 and the S3 components.

Job screenshot attached.
[attachment: 0683p000009MBM6.gif]
Anonymous
Not applicable
Author

Could you post a screenshot of the tS3Connection component? That would help us a lot.
Anonymous
Not applicable
Author

Thanks for looking.

Screenshot attached; it does not show much, as I have obviously had to remove the keys.

Regards.
[attachment: 0683p000009MBIB.gif]
Anonymous
Not applicable
Author

Hi All,
I can replicate this error message, and to me it is not a bug. You get this behaviour when you put bucketName + subfolder in the bucket name field, i.e. the bucket name plus the subfolder that holds the files to process.
To avoid the error, use the key prefix field to specify the file to process, or put the subfolder name that contains your files there.
To me this is a design question rather than a bug, and there are several ways to get past this error message. See the screenshots of the configuration I use in my tS3List component: list all the files in a folder, or filter on one file. A sketch of the same idea in plain code follows below.
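For reference, here is a minimal sketch of the same bucket/prefix split using the AWS SDK for Java; the credentials, bucket name, and prefix are placeholders, and this only illustrates the behaviour, not the component's internals:

import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.services.s3.AmazonS3Client;
import com.amazonaws.services.s3.model.ListObjectsRequest;
import com.amazonaws.services.s3.model.S3ObjectSummary;

public class S3PrefixListing {
    public static void main(String[] args) {
        // Placeholder credentials -- substitute your own keys.
        AmazonS3Client s3 = new AmazonS3Client(
                new BasicAWSCredentials("ACCESS_KEY", "SECRET_KEY"));

        // Passing "my-bucket/my-subfolder" as the bucket name is invalid and
        // is what triggers the "specified endpoint" redirect error above.
        // The bucket name and the key prefix are separate parameters:
        ListObjectsRequest request = new ListObjectsRequest()
                .withBucketName("my-bucket")    // bucket name only
                .withPrefix("my-subfolder/");   // subfolder goes in the prefix

        for (S3ObjectSummary summary : s3.listObjects(request).getObjectSummaries()) {
            System.out.println(summary.getKey());
        }
    }
}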

Thanks,
[attachments: 0683p000009MBUO.png, 0683p000009MBRu.png]
Anonymous
Not applicable
Author

Has anyone resolved this? It appears that Amazon S3 limits the "GET Bucket" (list) operation to 1,000 keys per request. It's unclear (at least in the link below) whether the "max-keys" parameter can be set to a value greater than 1,000. Another option would be to set a "marker", which would start the next request at the 1,001st object in alphabetical order. A sketch of that approach follows.
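If it helps, marker-based paging in the AWS SDK for Java looks roughly like the sketch below: keep issuing list requests, feeding each response's next marker into the following request, until the listing is no longer truncated. Credentials and bucket name are placeholders:

import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.services.s3.AmazonS3Client;
import com.amazonaws.services.s3.model.ListObjectsRequest;
import com.amazonaws.services.s3.model.ObjectListing;
import com.amazonaws.services.s3.model.S3ObjectSummary;

public class S3PagedListing {
    public static void main(String[] args) {
        // Placeholder credentials.
        AmazonS3Client s3 = new AmazonS3Client(
                new BasicAWSCredentials("ACCESS_KEY", "SECRET_KEY"));

        ListObjectsRequest request = new ListObjectsRequest()
                .withBucketName("my-bucket"); // placeholder bucket

        ObjectListing listing;
        do {
            listing = s3.listObjects(request);
            for (S3ObjectSummary summary : listing.getObjectSummaries()) {
                System.out.println(summary.getKey());
            }
            // S3 returns at most 1,000 keys per response; the next marker
            // tells the following request where the previous page ended.
            request.setMarker(listing.getNextMarker());
        } while (listing.isTruncated());
    }
}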