alitouka / spark_dbscan

DBSCAN clustering algorithm on top of Apache Spark

Spark job fails on Spark 1.6.0 after spark-submit

wanshi87 opened this issue · comments

Hi, after I submit the job in yarn-cluster mode, I run into the following problems:

1. Failed: union at DistanceAnalyzer.scala:89

details:
java.lang.AbstractMethodError
at org.apache.spark.Logging$class.log(Logging.scala:50)
at org.alitouka.spark.dbscan.util.debug.Clock.log(Clock.scala:5)
at org.apache.spark.Logging$class.logInfo(Logging.scala:58)
at org.alitouka.spark.dbscan.util.debug.Clock.logInfo(Clock.scala:5)
at org.alitouka.spark.dbscan.util.debug.Clock.logTimeSinceStart(Clock.scala:16)
at org.alitouka.spark.dbscan.spatial.PartitionIndex.populate(PartitionIndex.scala:53)
at org.alitouka.spark.dbscan.spatial.DistanceAnalyzer.countClosePointsWithinPartition(DistanceAnalyzer.scala:115)
at org.alitouka.spark.dbscan.spatial.DistanceAnalyzer$$anonfun$countClosePointsWithinEachBox$1.apply(DistanceAnalyzer.scala:103)
at org.alitouka.spark.dbscan.spatial.DistanceAnalyzer$$anonfun$countClosePointsWithinEachBox$1.apply(DistanceAnalyzer.scala:98)
at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsWithIndex$1$$anonfun$apply$22.apply(RDD.scala:745)
at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsWithIndex$1$$anonfun$apply$22.apply(RDD.scala:745)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
at org.apache.spark.rdd.UnionRDD.compute(UnionRDD.scala:87)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41)
at org.apache.spark.scheduler.Task.run(Task.scala:89)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:229)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)

2. Failed: subtract at DistanceAnalyzer.scala:35

ExecutorLostFailure (executor 2 exited caused by one of the running tasks) Reason: Container marked as failed: container_1496725696311_0713_02_000007 on host: cdhn2. Exit status: 50. Diagnostics: Exception from container-launch.
Container id: container_1496725696311_0713_02_000007
Exit code: 50
Stack trace: ExitCodeException exitCode=50:
at org.apache.hadoop.util.Shell.runCommand(Shell.java:601)
at org.apache.hadoop.util.Shell.run(Shell.java:504)
at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:786)
at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:213)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:302)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)

Container exited with a non-zero exit code 50

I would really appreciate it if anyone can help me!
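For context, a yarn-cluster submission generally looks like the sketch below. The driver class is taken from the project's README; the jar name and application arguments are placeholders, not taken from the original report.

```shell
# Hypothetical yarn-cluster submission; jar name and application
# arguments are placeholders, not from the original report.
spark-submit \
  --class org.alitouka.spark.dbscan.DbscanDriver \
  --master yarn \
  --deploy-mode cluster \
  spark_dbscan-assembly.jar \
  <application arguments>
```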

commented

I have the same problem. Have you resolved it?

commented

I just commented out the Clock and log calls in spatial/DistanceAnalyzer.countClosePointsWithinPartition and PartitionIndex.populate, and finally it worked!

@suai723 yes, it is caused by the logging code. Commenting out the related log calls works!
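Commenting the calls out works because the AbstractMethodError comes from Clock mixing in Spark's internal Logging trait, whose bytecode changed between Spark releases, so a jar built against an older Spark breaks at runtime on 1.6. A less invasive sketch, if you can rebuild the jar, is to make Clock self-contained so it no longer extends org.apache.spark.Logging. The names below mirror the stack trace; the implementation is an assumption, not the project's actual code.

```scala
// Sketch: a Clock that does its own timing instead of extending
// org.apache.spark.Logging, sidestepping the binary incompatibility
// behind the AbstractMethodError. Method names mirror the stack trace;
// the bodies are assumptions.
class Clock {
  private val startTime: Long = System.currentTimeMillis()

  // Milliseconds elapsed since this Clock was created.
  def elapsedMs: Long = System.currentTimeMillis() - startTime

  // Print instead of logInfo; executors forward stdout to container logs.
  def logTimeSinceStart(operation: String): Unit =
    println(s"$operation took $elapsedMs ms")
}
```

Rebuilding the assembly against the cluster's exact Spark version should also resolve the binary mismatch without touching the code.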

I don't get the error here, but my task spends too much time at this stage (union at DistanceAnalyzer.scala:89), so I had to kill it. Did you run into the same problem?