Hi,
We have a Talend server V6.3.1 running on Linux with Java JDK 1.8.0_171, where tSqlRow runs fine. In another environment we run the same setup on Sun Solaris, and there none of the tSqlRow components work. This is not an OOM; the exception we get is very confusing:
Checking ports...
Sending job 'SF_expo_ce_data' to server (acredit-etl21:8001)...
File transfer completed.
Deploying job 'SF_expo_ce_data' on server (158.137.74.35:8000)...
Running job 'SF_expo_ce_data'...
Starting job SF_expo_ce_data at 10:40 14/05/2018.
[statistics] connecting to socket on port 3954
[statistics] connected
[WARN ]: org.apache.hadoop.util.NativeCodeLoader - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
[WARN ]: org.apache.spark.SparkConf - In Spark 1.0 and later spark.local.dir will be overridden by the value set by the cluster manager (via SPARK_LOCAL_DIRS in mesos/standalone and LOCAL_DIRS in YARN).
[Stage 0 (0 + 0) / 2]
[Stage 0 (0 + 2) / 2]
[Stage 0 (0 + 2) / 2][Stage 1
(0 + 2) / 2]
[Stage 1 (0 + 2) / 2]
[Stage 2 (0 + 0) / 200]
[Stage 2 (0 + 8) / 200][thread 86 also had an error]
[thread 62 also had an error]
[thread 88 also had an error]
#
# A fatal error has been detected by the Java Runtime Environment:
#
# SIGBUS (0xa) at pc=0xffffffff6be8cb40, pid=5570, tid=0x0000000000000055
#
# JRE version: Java(TM) SE Runtime Environment (8.0_171-b11) (build 1.8.0_171-b11)
# Java VM: Java HotSpot(TM) 64-Bit Server VM (25.171-b11 mixed mode solaris-sparc compressed oops)
# Problematic frame:
# J 6141 C1 org.apache.spark.unsafe.bitset.BitSetMethods.isSet(Ljava/lang/Object;JI)Z (87 bytes) @ 0xffffffff6be8cb40 [0xffffffff6be8c9e0+0x160]
#
# Core dump written. Default location: /home/XXXX/talend/Talend-JobServer-20161216_1026-V6.3.1/TalendJobServersFiles/repository/SAMPLE2_SF_expo_ce_data_20180514_104022_CBvlX/SF_expo_ce_data/core or core.5570
#
# An error report file with more information is saved as:
# /home/XXXXX/talend/Talend-JobServer-20161216_1026-V6.3.1/TalendJobServersFiles/repository/SAMPLE2_SF_expo_ce_data_20180514_104022_CBvlX/SF_expo_ce_data/hs_err_pid5570.log
AHE@0x00000001002cfce0: 0xba000000 i2c: 0xffffffff6b83c920 c2i: 0xffffffff6b83c9c4 c2iUV: 0xffffffff6b83c93c
[thread 87 also had an error][thread 63 also had an error]
[thread 61 also had an error][thread 60 also had an error]
Compiled method (c1) 24578 6141 3 org.apache.spark.unsafe.bitset.BitSetMethods::isSet (87 bytes)
total in heap [0xffffffff6be8c850,0xffffffff6be8cd68] = 1304
relocation [0xffffffff6be8c978,0xffffffff6be8c9c8] = 80
main code [0xffffffff6be8c9e0,0xffffffff6be8cc40] = 608
stub code [0xffffffff6be8cc40,0xffffffff6be8cca8] = 104
oops [0xffffffff6be8cca8,0xffffffff6be8ccb0] = 8
metadata [0xffffffff6be8ccb0,0xffffffff6be8ccd0] = 32
scopes data [0xffffffff6be8ccd0,0xffffffff6be8cd20] = 80
scopes pcs [0xffffffff6be8cd20,0xffffffff6be8cd60] = 64
dependencies [0xffffffff6be8cd60,0xffffffff6be8cd68] = 8
Compiled method (c1) 24580 6138 3 org.apache.spark.sql.catalyst.expressions.UnsafeRow::isNullAt (18 bytes)
total in heap [0xffffffff6c501cd0,0xffffffff6c5021b8] = 1256
relocation [0xffffffff6c501df8,0xffffffff6c501e48] = 80
main code [0xffffffff6c501e60,0xffffffff6c502020] = 448
stub code [0xffffffff6c502020,0xffffffff6c502110] = 240
oops [0xffffffff6c502110,0xffffffff6c502118] = 8
metadata [0xffffffff6c502118,0xffffffff6c502138] = 32
scopes data [0xffffffff6c502138,0xffffffff6c502160] = 40
scopes pcs [0xffffffff6c502160,0xffffffff6c5021b0] = 80
dependencies [0xffffffff6c5021b0,0xffffffff6c5021b8] = 8
#
# If you would like to submit a bug report, please visit:
# http://bugreport.java.com/bugreport/crash.jsp
#
Job SF_expo_ce_data ended at 10:41 14/05/2018. [exit code=6]
Yes, we build the job locally and export it to Solaris.
It has to be an OS issue.
Not sure who to raise the bug with: Spark or Talend?
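For what it's worth, a SIGBUS (not SIGSEGV) in `org.apache.spark.unsafe.bitset.BitSetMethods.isSet` on `solaris-sparc` is consistent with an unaligned memory access: SPARC requires an 8-byte value to sit at an 8-byte-aligned address and traps with SIGBUS otherwise, while x86 (your working Linux box) silently tolerates unaligned reads. Spark's UnsafeRow layer reads longs at computed offsets via `sun.misc.Unsafe`, which is exactly the kind of access that behaves differently on the two architectures. Below is a minimal sketch of the difference; the addresses and values are illustrative, not taken from your crash, and it is an assumption that this alignment rule is the root cause here.

```java
import java.lang.reflect.Field;
import sun.misc.Unsafe;

public class AlignmentDemo {
    // SPARC enforces natural alignment: an N-byte load must use an
    // address divisible by N, or the CPU raises SIGBUS.
    static boolean isAligned(long address, int size) {
        return (address % size) == 0;
    }

    public static void main(String[] args) throws Exception {
        // Grab the Unsafe singleton by reflection (JDK 8 style).
        Field f = Unsafe.class.getDeclaredField("theUnsafe");
        f.setAccessible(true);
        Unsafe unsafe = (Unsafe) f.get(null);

        // Off-heap allocation; the returned base is 8-byte aligned.
        long addr = unsafe.allocateMemory(16);
        unsafe.putLong(addr, 0x1122334455667788L);

        // Aligned 8-byte read: legal on every platform.
        System.out.println(isAligned(addr, 8));          // true
        System.out.println(Long.toHexString(unsafe.getLong(addr)));

        // An 8-byte read at addr + 1 would be unaligned: x86 tolerates
        // it, SPARC delivers SIGBUS -- the same signal as in the
        // BitSetMethods.isSet crash above. We only check, not read.
        System.out.println(isAligned(addr + 1, 8));      // false

        unsafe.freeMemory(addr);
    }
}
```

If this is the cause, it would be a Spark platform bug rather than a Talend one (Spark would need to use aligned or byte-wise accesses on strict-alignment architectures), so Spark's issue tracker is probably the right place to start, with the `hs_err_pid5570.log` attached.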