Anonymous
Not applicable

The function POSIX.open() is not supported on Windows

Hi,

I've been trying to run a Spark job on Talend Data Fabric for the past few days, but it fails with an error every time.

I originally set out to build a simple classification Spark job, but it failed to run. I then simplified the job down to just a tFileInputDelimited and a tLogRow component, and it still produces the same error.
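
For reference, the simplified job is roughly equivalent to the hand-written sketch below. This is only my approximation of what Studio generates, not the generated code itself; the local master and input path are placeholders I made up, while the addFile("Default.properties") call is taken from the stack trace further down.

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;

public class MinimalJob {
    public static void main(String[] args) {
        // "local[*]" is a placeholder; the /C:/tmp paths in the trace suggest
        // the driver is running on the Windows machine.
        SparkConf conf = new SparkConf().setAppName("awa").setMaster("local[*]");
        JavaSparkContext sc = new JavaSparkContext(conf);
        // This is where the run dies: addFile() copies the context file into a
        // local temp dir and then chmods it through Hadoop's local filesystem.
        sc.addFile("Default.properties");
        // tFileInputDelimited -> tLogRow equivalent; "in.csv" is a placeholder.
        JavaRDD<String> lines = sc.textFile("in.csv");
        lines.collect().forEach(System.out::println);
        sc.stop();
    }
}

Here is the full console output from the run: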

[statistics] connecting to socket on port 3551
[statistics] connected
[WARN ]: org.apache.spark.SparkConf - In Spark 1.0 and later spark.local.dir will be overridden by the value set by the cluster manager (via SPARK_LOCAL_DIRS in mesos/standalone and LOCAL_DIRS in YARN).
java.lang.RuntimeException: /C:/tmp/spark-daaf35e8-507b-4e6c-abea-822492ee2b51/userFiles-85ae0f6c-7ab6-4339-abce-de14b8542220/Default.properties
	at org.apache.hadoop.fs.RawLocalFileSystem$DeprecatedRawLocalFileStatus.loadNativePermissionInfo(RawLocalFileSystem.java:717)
	at org.apache.hadoop.fs.RawLocalFileSystem$DeprecatedRawLocalFileStatus.loadPermissionInfo(RawLocalFileSystem.java:654)
	at org.apache.hadoop.fs.RawLocalFileSystem$DeprecatedRawLocalFileStatus.getPermission(RawLocalFileSystem.java:630)
	at org.apache.hadoop.fs.permission.ChmodParser.applyNewPermission(ChmodParser.java:49)
	at org.apache.hadoop.fs.FileUtil.chmod(FileUtil.java:912)
	at org.apache.hadoop.fs.FileUtil.chmod(FileUtil.java:889)
	at org.apache.hadoop.fs.FileUtil.chmod(FileUtil.java:863)
	at org.apache.spark.util.Utils$.fetchFile(Utils.scala:406)
	at org.apache.spark.SparkContext.addFile(SparkContext.scala:1386)
	at org.apache.spark.SparkContext.addFile(SparkContext.scala:1340)
	at org.apache.spark.api.java.JavaSparkContext.addFile(JavaSparkContext.scala:662)
[ERROR]: org.apache.hadoop.fs.FileSystem - Failed to fstat on: /C:/tmp/spark-daaf35e8-507b-4e6c-abea-822492ee2b51/userFiles-85ae0f6c-7ab6-4339-abce-de14b8542220/Default.properties
java.io.IOException: The function POSIX.open() is not supported on Windows
	at org.apache.hadoop.io.nativeio.NativeIO$POSIX.open(Native Method)
	at org.apache.hadoop.fs.RawLocalFileSystem$DeprecatedRawLocalFileStatus.loadNativePermissionInfo(RawLocalFileSystem.java:712)
	at org.apache.hadoop.fs.RawLocalFileSystem$DeprecatedRawLocalFileStatus.loadPermissionInfo(RawLocalFileSystem.java:654)
	at org.apache.hadoop.fs.RawLocalFileSystem$DeprecatedRawLocalFileStatus.getPermission(RawLocalFileSystem.java:630)
	at org.apache.hadoop.fs.permission.ChmodParser.applyNewPermission(ChmodParser.java:49)
	at org.apache.hadoop.fs.FileUtil.chmod(FileUtil.java:912)
	at org.apache.hadoop.fs.FileUtil.chmod(FileUtil.java:889)
	at org.apache.hadoop.fs.FileUtil.chmod(FileUtil.java:863)
	at org.apache.spark.util.Utils$.fetchFile(Utils.scala:406)
	at org.apache.spark.SparkContext.addFile(SparkContext.scala:1386)
	at org.apache.spark.SparkContext.addFile(SparkContext.scala:1340)
	at org.apache.spark.api.java.JavaSparkContext.addFile(JavaSparkContext.scala:662)
	at neww.awa_0_1.awa.setContext(awa.java:1452)
	at neww.awa_0_1.awa.run(awa.java:1148)
	at neww.awa_0_1.awa.runJobInTOS(awa.java:1122)
	at neww.awa_0_1.awa.main(awa.java:1007)
Caused by: java.io.IOException: The function POSIX.open() is not supported on Windows
	at org.apache.hadoop.io.nativeio.NativeIO$POSIX.open(Native Method)
	at org.apache.hadoop.fs.RawLocalFileSystem$DeprecatedRawLocalFileStatus.loadNativePermissionInfo(RawLocalFileSystem.java:712)
	... 14 more
Exception in thread "main" java.lang.RuntimeException: TalendJob: 'awa' - Failed with exit code: 1.
	at neww.awa_0_1.awa.main(awa.java:1017)
[ERROR]: neww.awa_0_1.awa - TalendJob: 'awa' - Failed with exit code: 1.

I'm running this on a multi-node MapR 5.2.0 cluster with Spark 2.11.

Any ideas on what's causing the problem? Some input on this would be helpful.

4 Replies
Anonymous
Not applicable
Author

Hello,

Would you mind posting screenshots of your current Spark job design to the forum? That would help us address your issue.
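
In the meantime, one thing worth ruling out, and this is only a guess on my part since the MapR client ships its own Hadoop libraries, is whether Hadoop on your Windows machine can find its native Windows helpers at all. The trace shows Hadoop taking a POSIX-only permission path on Windows, and for plain Apache Hadoop a common first step with NativeIO errors is to point hadoop.home.dir at a winutils installation before the SparkContext starts. The C:\hadoop path below is a placeholder:

// Hypothetical workaround sketch, to run before the SparkContext is created
// (for example in a tJava component, or as -Dhadoop.home.dir=C:\hadoop in the
// JVM arguments of the Run view). winutils.exe must exist in C:\hadoop\bin.
// Setting the HADOOP_HOME environment variable to C:\hadoop is an alternative.
System.setProperty("hadoop.home.dir", "C:\\hadoop");

If that changes nothing, it would point more strongly at the MapR client libraries themselves.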

Best regards

Sabrina

Anonymous
Not applicable
Author

Hi Sabrina,

Here's a screenshot of the Spark job I was trying to run:

[Screenshot attachment: 0683p000009Lr33.png]

Anonymous
Not applicable
Author

I have a client facing the same issue and would be interested in the solution.

Anonymous
Not applicable
Author

I'm having a similar issue and would be interested in a solution.

[WARN ]: org.apache.spark.SparkConf - In Spark 1.0 and later spark.local.dir will be overridden by the value set by the cluster manager (via SPARK_LOCAL_DIRS in mesos/standalone and LOCAL_DIRS in YARN).
[WARN ]: org.apache.spark.metrics.MetricsSystem - Using default name DAGScheduler for source because spark.app.id is not set.
java.lang.RuntimeException: /C:/tmp/spark-63b96b62-19b8-4943-b470-f982b0c4ab5e/userFiles-a1ac2087-729c-451a-bd56-5a012f084fb7/Default.properties
	at org.apache.hadoop.fs.RawLocalFileSystem$DeprecatedRawLocalFileStatus.loadNativePermissionInfo(RawLocalFileSystem.java:717)
	at org.apache.hadoop.fs.RawLocalFileSystem$DeprecatedRawLocalFileStatus.loadPermissionInfo(RawLocalFileSystem.java:654)
	at org.apache.hadoop.fs.RawLocalFileSystem$DeprecatedRawLocalFileStatus.getPermission(RawLocalFileSystem.java:630)
	at org.apache.hadoop.fs.permission.ChmodParser.applyNewPermission(ChmodParser.java:49)
	at org.apache.hadoop.fs.FileUtil.chmod(FileUtil.java:912)
	at org.apache.hadoop.fs.FileUtil.chmod(FileUtil.java:889)
	at org.apache.hadoop.fs.FileUtil.chmod(FileUtil.java:863)
	at org.apache.spark.util.Utils$.fetchFile(Utils.scala:381)
	at org.apache.spark.SparkContext.addFile(SparkContext.scala:1387)
	at org.apache.spark.SparkContext.addFile(SparkContext.scala:1341)
	at org.apache.spark.api.java.JavaSparkContext.addFile(JavaSparkContext.scala:671)
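
Judging by the different Utils.scala line numbers I'm on a different Spark version, but the failure is identical. For anyone trying to isolate this: every trace fails inside Hadoop's FileUtil.chmod() on the file that Spark's addFile() has just copied, so Talend and Spark can probably be taken out of the equation. Below is a minimal sketch that should hit the same code path when run on Windows with the same hadoop-common jar on the classpath; the "a+x" mode matches what Spark's fetchFile() applies, and the temp file name is arbitrary.

import java.io.File;
import org.apache.hadoop.fs.FileUtil;

public class ChmodRepro {
    public static void main(String[] args) throws Exception {
        File f = File.createTempFile("repro", ".properties");
        // A relative mode like "a+x" forces Hadoop to read the file's current
        // permissions first, i.e. the loadPermissionInfo() -> POSIX.open() path
        // from the traces above. On an affected setup this should throw the
        // same "POSIX.open() is not supported on Windows" IOException.
        FileUtil.chmod(f.getAbsolutePath(), "a+x");
        System.out.println("chmod succeeded; the native IO path looks fine here");
    }
}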