<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: Issue running kafka with spark streaming in Talend Studio</title>
    <link>https://community.qlik.com/t5/Talend-Studio/Issue-running-kafka-with-spark-streaming/m-p/2248788#M33536</link>
    <description>Please find below screenshots of the job design, the job properties, and the tKafkaInput properties. 
&lt;BR /&gt; 
&lt;span class="lia-inline-image-display-wrapper" image-alt="0683p000009MBWU.jpg"&gt;&lt;img src="https://community.qlik.com/t5/image/serverpage/image-id/152589i23C7F7CA8500B030/image-size/large?v=v2&amp;amp;px=999" role="button" title="0683p000009MBWU.jpg" alt="0683p000009MBWU.jpg" /&gt;&lt;/span&gt; 
&lt;span class="lia-inline-image-display-wrapper" image-alt="0683p000009MBWZ.jpg"&gt;&lt;img src="https://community.qlik.com/t5/image/serverpage/image-id/136616i47D1CE7E13EF280A/image-size/large?v=v2&amp;amp;px=999" role="button" title="0683p000009MBWZ.jpg" alt="0683p000009MBWZ.jpg" /&gt;&lt;/span&gt;</description>
    <pubDate>Mon, 12 Sep 2016 09:40:35 GMT</pubDate>
    <dc:creator>Anonymous</dc:creator>
    <dc:date>2016-09-12T09:40:35Z</dc:date>
    <item>
      <title>Issue running kafka with spark streaming</title>
      <link>https://community.qlik.com/t5/Talend-Studio/Issue-running-kafka-with-spark-streaming/m-p/2248786#M33534</link>
      <description>Hello,
&lt;BR /&gt;I am running Talend Real Time for Big Data, and I have tried a simple Spark streaming job to read messages from a Kafka topic and write them to a file.
&lt;BR /&gt;My job contains a tHDFSConfiguration, and a tKafkaInput linked to a tFileOutputJSON.
&lt;BR /&gt;When I build the job and launch it in my MapR VM, I'm stuck with the following error, which aborts the job:&amp;nbsp;
&lt;BR /&gt;
&lt;PRE&gt;java.lang.ClassCastException: kafka.cluster.BrokerEndPoint cannot be cast to kafka.cluster.Broker&lt;BR /&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; at org.apache.spark.streaming.kafka.KafkaCluster$$anonfun$2$$anonfun$3$$anonfun$apply$6$$anonfun$apply$7.apply(KafkaCluster.scala:90)&lt;BR /&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; at scala.Option.map(Option.scala:145)&lt;BR /&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; at org.apache.spark.streaming.kafka.KafkaCluster$$anonfun$2$$anonfun$3$$anonfun$apply$6.apply(KafkaCluster.scala:90)&lt;BR /&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; at org.apache.spark.streaming.kafka.KafkaCluster$$anonfun$2$$anonfun$3$$anonfun$apply$6.apply(KafkaCluster.scala:87)&lt;BR /&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; at scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:251)&lt;BR /&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; at scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:251)&lt;BR /&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)&lt;BR /&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; at scala.collection.mutable.WrappedArray.foreach(WrappedArray.scala:34)&lt;BR /&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; at scala.collection.TraversableLike$class.flatMap(TraversableLike.scala:251)&lt;BR /&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; at scala.collection.AbstractTraversable.flatMap(Traversable.scala:105)&lt;BR /&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; at org.apache.spark.streaming.kafka.KafkaCluster$$anonfun$2$$anonfun$3.apply(KafkaCluster.scala:87)&lt;BR /&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; at org.apache.spark.streaming.kafka.KafkaCluster$$anonfun$2$$anonfun$3.apply(KafkaCluster.scala:86)&lt;BR /&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; at 
scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:251)&lt;BR /&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; at scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:251)&lt;BR /&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; at scala.collection.immutable.Set$Set1.foreach(Set.scala:74)&lt;BR /&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; at scala.collection.TraversableLike$class.flatMap(TraversableLike.scala:251)&lt;BR /&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; at scala.collection.AbstractTraversable.flatMap(Traversable.scala:105)&lt;BR /&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; at org.apache.spark.streaming.kafka.KafkaCluster$$anonfun$2.apply(KafkaCluster.scala:86)&lt;BR /&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; at org.apache.spark.streaming.kafka.KafkaCluster$$anonfun$2.apply(KafkaCluster.scala:85)&lt;BR /&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; at scala.util.Either$RightProjection.flatMap(Either.scala:523)&lt;BR /&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; at org.apache.spark.streaming.kafka.KafkaCluster.findLeaders(KafkaCluster.scala:85)&lt;BR /&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; at org.apache.spark.streaming.kafka.KafkaCluster.getLeaderOffsets(KafkaCluster.scala:179)&lt;BR /&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; at org.apache.spark.streaming.kafka.KafkaCluster.getLeaderOffsets(KafkaCluster.scala:161)&lt;BR /&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; at org.apache.spark.streaming.kafka.KafkaCluster.getEarliestLeaderOffsets(KafkaCluster.scala:155)&lt;BR /&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; at org.apache.spark.streaming.kafka.KafkaUtils$$anonfun$8.apply(KafkaUtils.scala:411)&lt;BR /&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; at org.apache.spark.streaming.kafka.KafkaUtils$$anonfun$8.apply(KafkaUtils.scala:409)&lt;BR /&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; at scala.util.Either$RightProjection.flatMap(Either.scala:523)&lt;BR 
/&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; at org.apache.spark.streaming.kafka.KafkaUtils$.createDirectStream(KafkaUtils.scala:409)&lt;BR /&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; at org.apache.spark.streaming.kafka.KafkaUtils$.createDirectStream(KafkaUtils.scala:532)&lt;BR /&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; at org.apache.spark.streaming.kafka.KafkaUtils.createDirectStream(KafkaUtils.scala)&lt;BR /&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; at klf.test_kafka_0_1.test_kafka.tKafkaInput_1Process(test_kafka.java:539)&lt;BR /&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; at klf.test_kafka_0_1.test_kafka.run(test_kafka.java:910)&lt;BR /&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; at klf.test_kafka_0_1.test_kafka.runJobInTOS(test_kafka.java:863)&lt;BR /&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; at klf.test_kafka_0_1.test_kafka.main(test_kafka.java:744)&lt;BR /&gt;&lt;/PRE&gt;
&lt;BR /&gt;I have tried, without success, changing my pom.xml settings in the project properties according to the answer given here:&amp;nbsp;.
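&lt;BR /&gt;(For reference, this ClassCastException usually points to a Kafka client version mismatch: the spark-streaming-kafka integration for Spark 1.x was compiled against the Kafka 0.8.x API, which has kafka.cluster.Broker, while a newer Kafka client on the classpath exposes kafka.cluster.BrokerEndPoint instead. Below is a minimal pom.xml sketch that pins the 0.8.x client; the artifact versions here are assumptions and must match your actual Spark and Scala versions.)
&lt;BR /&gt;
&lt;PRE&gt;&amp;lt;!-- Assumption: Spark 1.6.x on Scala 2.10; adjust versions to your cluster --&amp;gt;
&amp;lt;dependency&amp;gt;
  &amp;lt;groupId&amp;gt;org.apache.spark&amp;lt;/groupId&amp;gt;
  &amp;lt;artifactId&amp;gt;spark-streaming-kafka_2.10&amp;lt;/artifactId&amp;gt;
  &amp;lt;version&amp;gt;1.6.1&amp;lt;/version&amp;gt;
&amp;lt;/dependency&amp;gt;
&amp;lt;!-- Pin the 0.8.x client so kafka.cluster.Broker is on the classpath --&amp;gt;
&amp;lt;dependency&amp;gt;
  &amp;lt;groupId&amp;gt;org.apache.kafka&amp;lt;/groupId&amp;gt;
  &amp;lt;artifactId&amp;gt;kafka_2.10&amp;lt;/artifactId&amp;gt;
  &amp;lt;version&amp;gt;0.8.2.2&amp;lt;/version&amp;gt;
&amp;lt;/dependency&amp;gt;&lt;/PRE&gt;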
&lt;BR /&gt;Has anyone encountered this problem, or does anyone have an idea of how to solve it?</description>
      <pubDate>Sat, 16 Nov 2024 10:24:08 GMT</pubDate>
      <guid>https://community.qlik.com/t5/Talend-Studio/Issue-running-kafka-with-spark-streaming/m-p/2248786#M33534</guid>
      <dc:creator>_AnonymousUser</dc:creator>
      <dc:date>2024-11-16T10:24:08Z</dc:date>
    </item>
    <item>
      <title>Re: Issue running kafka with spark streaming</title>
      <link>https://community.qlik.com/t5/Talend-Studio/Issue-running-kafka-with-spark-streaming/m-p/2248787#M33535</link>
      <description>Hi,
&lt;BR /&gt;Would you mind posting screenshots of your settings to the forum? That would help us address your issue.
&lt;BR /&gt;Best regards
&lt;BR /&gt;Sabrina</description>
      <pubDate>Mon, 12 Sep 2016 05:08:37 GMT</pubDate>
      <guid>https://community.qlik.com/t5/Talend-Studio/Issue-running-kafka-with-spark-streaming/m-p/2248787#M33535</guid>
      <dc:creator>Anonymous</dc:creator>
      <dc:date>2016-09-12T05:08:37Z</dc:date>
    </item>
    <item>
      <title>Re: Issue running kafka with spark streaming</title>
      <link>https://community.qlik.com/t5/Talend-Studio/Issue-running-kafka-with-spark-streaming/m-p/2248788#M33536</link>
      <description>Please find below screenshots of the job design, the job properties, and the tKafkaInput properties. 
&lt;BR /&gt; 
&lt;span class="lia-inline-image-display-wrapper" image-alt="0683p000009MBWU.jpg"&gt;&lt;img src="https://community.qlik.com/t5/image/serverpage/image-id/152589i23C7F7CA8500B030/image-size/large?v=v2&amp;amp;px=999" role="button" title="0683p000009MBWU.jpg" alt="0683p000009MBWU.jpg" /&gt;&lt;/span&gt; 
&lt;span class="lia-inline-image-display-wrapper" image-alt="0683p000009MBWZ.jpg"&gt;&lt;img src="https://community.qlik.com/t5/image/serverpage/image-id/136616i47D1CE7E13EF280A/image-size/large?v=v2&amp;amp;px=999" role="button" title="0683p000009MBWZ.jpg" alt="0683p000009MBWZ.jpg" /&gt;&lt;/span&gt;</description>
      <pubDate>Mon, 12 Sep 2016 09:40:35 GMT</pubDate>
      <guid>https://community.qlik.com/t5/Talend-Studio/Issue-running-kafka-with-spark-streaming/m-p/2248788#M33536</guid>
      <dc:creator>Anonymous</dc:creator>
      <dc:date>2016-09-12T09:40:35Z</dc:date>
    </item>
    <item>
      <title>Re: Issue running kafka with spark streaming</title>
      <link>https://community.qlik.com/t5/Talend-Studio/Issue-running-kafka-with-spark-streaming/m-p/2248789#M33537</link>
      <description>Did you download the MapR sandbox from the Talend website or the MapR website?
&lt;BR /&gt;What version of MapR are you using?&amp;nbsp;
&lt;BR /&gt;Keep in mind that MapR's implementation of Kafka differs from that of other Hadoop vendors, so the Kafka input and output components may not work. Support for MapR Streams is on the roadmap:
&lt;BR /&gt;
&lt;A href="https://jira.talendforge.org/browse/TBD-3927" rel="nofollow noopener noreferrer"&gt;https://jira.talendforge.org/browse/TBD-3927&lt;/A&gt;</description>
      <pubDate>Mon, 12 Sep 2016 16:49:53 GMT</pubDate>
      <guid>https://community.qlik.com/t5/Talend-Studio/Issue-running-kafka-with-spark-streaming/m-p/2248789#M33537</guid>
      <dc:creator>Anonymous</dc:creator>
      <dc:date>2016-09-12T16:49:53Z</dc:date>
    </item>
  </channel>
</rss>

