Anonymous
Not applicable

Import external component

Hi,

I need to install a custom component downloaded from the internet into my Open Studio. How do I do it?

1 Solution

Accepted Solutions
Anonymous
Not applicable
Author

Create a component directory and unzip the component there.

In Talend, go to Window > Preferences > Talend > Components, point to the component directory, then click Apply.
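For anyone who wants to script those two file-system steps, here is a minimal sketch (Python; the archive name and target directory are hypothetical placeholders, not anything Talend-specific):

import zipfile
from pathlib import Path

# Hypothetical locations -- adjust to wherever you downloaded the archive
# and wherever you want your custom components to live.
archive = Path.home() / "Downloads" / "tYourComponent.zip"
component_dir = Path.home() / "talend_custom_components"

component_dir.mkdir(parents=True, exist_ok=True)
with zipfile.ZipFile(archive) as zf:
    zf.extractall(component_dir)  # the component should land in its own subfolder

# Then point Window > Preferences > Talend > Components at component_dir and Apply.
print("Unzipped into:", component_dir)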


10 Replies
Jesperrekuh
Creator III

In Talend: Window > Preferences, then search for custom components.
Restart Talend (reload), otherwise you won't find the component.
Really easy!
Anonymous
Not applicable
Author

No. In the Components section I pointed to the location where the component's zip archive is, and restarted, but I am still facing the error.
Jesperrekuh
Creator III

I always unzip the component.
<select components directory>, then place/unzip the archive there:
/yourcustomcomponentsfolder/tYourDownloadedComponent
Make sure your Talend version is able to load the component (check the component's supported versions).
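If the component still does not show up, a quick sanity check of the layout on disk can help. This sketch assumes the usual convention of one folder per component containing a <componentName>_java.xml descriptor (an assumption about typical custom components, not something every download guarantees):

from pathlib import Path

component_dir = Path.home() / "talend_custom_components"  # hypothetical path from the sketch above
for folder in sorted(component_dir.iterdir()):
    if folder.is_dir():
        descriptor = folder / f"{folder.name}_java.xml"
        print(folder.name, "->", "descriptor found" if descriptor.exists() else "descriptor MISSING")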

Anonymous
Not applicable
Author

Hello,

Please refer to the online documentation: TalendHelpCenter: How to install and update a custom component.

Best regards

Sabrina

Anonymous
Not applicable
Author

I have reloaded my Open Studio, but I still don't see the component.

Anonymous
Not applicable
Author

The same thing is happening to me with TOS DI version 7.1.1.

I have tried three different approaches:

1) Loading the files via Preferences -> Talend -> Components -> pointing to the folder with the unzipped files -> Apply

2) Open Exchange -> find tSshTunnel -> install (it appears as installed)

3) Clearing the cache as indicated in https://help.talend.com/reader/u7qYt~8TglUIu~EtaRHbiw/U9xXXPdhtpghDCnaNcNY4Q

Nothing happens and no error is thrown.

Anonymous
Not applicable
Author

Solved the issue by simply copying the component files to the official components folder in <Talend Open Studio folder>/plugins/org.talend.designer.components.localprovider_xxxx/components/tSshTunnel

I then hit CTRL + SHIFT + F3 to reload the components, and the component is now shown under the Components -> Internet section.

Somehow the 'additional components' folder indicated in Preferences is being ignored...
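If you want to script that workaround, here is a rough sketch (Python; the Studio install path is hypothetical, and the version suffix on the localprovider plugin folder, shown as xxxx above, is resolved with a glob rather than hard-coded):

import shutil
from pathlib import Path

studio = Path("/opt/TOS_DI")  # hypothetical install location
src = Path.home() / "talend_custom_components" / "tSshTunnel"

# Resolve the versioned localprovider folder instead of hard-coding the suffix.
provider = next((studio / "plugins").glob("org.talend.designer.components.localprovider_*"))
dst = provider / "components" / "tSshTunnel"

shutil.copytree(src, dst, dirs_exist_ok=True)
print("Copied to", dst, "- now press CTRL + SHIFT + F3 in the Studio to reload the palette.")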


Anonymous
Not applicable
Author

Thanks a lot, it is working now.
And I'm having another issue, please see the details below.
Note: I'm actually using an AWS cluster and trying to transfer data from my local MySQL database into HDFS.

I am unable to transfer data from the MySQL database to HDFS using the tSqoopImport component; below is the error I'm getting:

Starting job SqoopImport at 14:06 15/03/2019.

[statistics] connecting to socket on port 3568
[statistics] connected
[WARN ]: org.apache.hadoop.util.NativeCodeLoader - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
[WARN ]: org.apache.sqoop.ConnFactory - $SQOOP_CONF_DIR has not been set in the environment. Cannot check for additional configuration.
Loading class `com.mysql.jdbc.Driver'. This is deprecated. The new driver class is `com.mysql.cj.jdbc.Driver'. The driver is automatically registered via the SPI and manual loading of the driver class is generally unnecessary.
Fri Mar 15 14:06:54 GMT 2019 WARN: Establishing SSL connection without server's identity verification is not recommended. According to MySQL 5.5.45+, 5.6.26+ and 5.7.6+ requirements SSL connection must be established by default if explicit option isn't set. For compliance with existing applications not using SSL the verifyServerCertificate property is set to 'false'. You need either to explicitly disable SSL by setting useSSL=false, or set useSSL=true and provide truststore for server certificate verification.
Skipping table: ContactDetails
Skipping table: CustomerS3Metadata
Skipping table: Customers
Note: /tmp/sqoop-cbukasa/compile/44c3c4b58b55312abbb0ab52ce659068/Customers_Sqoop.java uses or overrides a deprecated API.
Note: Recompile with -Xlint:deprecation for details.
[WARN ]: org.apache.sqoop.manager.MySQLManager - It looks like you are importing from mysql.
[WARN ]: org.apache.sqoop.manager.MySQLManager - This transfer can be faster! Use the --direct
[WARN ]: org.apache.sqoop.manager.MySQLManager - option to exercise a MySQL-specific fast path.
[WARN ]: org.apache.sqoop.mapreduce.JobBase - SQOOP_HOME is unset. May not be able to find all job dependencies.
Exception in component tSqoopImportAllTables_1 (SqoopImport)
java.lang.Exception: The Sqoop import job has failed. Please check the logs.
    at local_project.sqoopimport_0_1.SqoopImport.tSqoopImportAllTables_1Process(SqoopImport.java:616)
    at local_project.sqoopimport_0_1.SqoopImport.runJobInTOS(SqoopImport.java:950)
    at local_project.sqoopimport_0_1.SqoopImport.main(SqoopImport.java:776)
[WARN ]: org.apache.hadoop.hdfs.DFSClient - DataStreamer Exception
org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /user/charlene/.staging/job_1552636104840_0003/libjars/sqoop-1.4.6-6.0.0.jar could only be replicated to 0 nodes instead of minReplication (=1). There are 2 datanode(s) running and 2 node(s) are excluded in this operation.
    at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:1580)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getNewBlockTargets(FSNamesystem.java:3107)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:3031)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:725)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:492)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2049)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2045)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2045)

    at org.apache.hadoop.ipc.Client.call(Client.java:1475)
    at org.apache.hadoop.ipc.Client.call(Client.java:1412)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
    at com.sun.proxy.$Proxy8.addBlock(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:418)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:191)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
    at com.sun.proxy.$Proxy9.addBlock(Unknown Source)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.locateFollowingBlock(DFSOutputStream.java:1455)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1251)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:448)
[ERROR]: org.apache.sqoop.tool.ImportAllTablesTool - Encountered IOException running import job: org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /user/charlene/.staging/job_1552636104840_0003/libjars/sqoop-1.4.6-6.0.0.jar could only be replicated to 0 nodes instead of minReplication (=1). There are 2 datanode(s) running and 2 node(s) are excluded in this operation.
    at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:1580)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getNewBlockTargets(FSNamesystem.java:3107)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:3031)
    at org.apache.hadoop.hdfs.server.