druid-io / tranquility

Tranquility helps you send real-time event streams to Druid and handles partitioning, replication, service discovery, and schema rollover, seamlessly and without downtime.

Freeze during Spark - Druid injection

erleichde opened this issue

Hello,

I am trying to read data from Kafka, do transformations with Spark Streaming, and write the resulting data to Druid.
I am able to create the dataframe that I want and write it to a text file.

But when I try to inject into Druid, the Spark job hangs indefinitely.

My code is as follows:

object MyDirectStreamDriver {
  def main(args: Array[String]) {

    val sc = new SparkContext()

    val ssc = new StreamingContext(sc, Minutes(5))

    val kafkaParams = Map[String, Object](
      "bootstrap.servers" -> "[$hadoopURL]:6667",
      "key.deserializer" -> classOf[StringDeserializer],
      "value.deserializer" -> classOf[StringDeserializer],
      "group.id" -> "use_a_separate_group_id_for_each_stream",
      "auto.offset.reset" -> "latest",
      "enable.auto.commit" -> (false: java.lang.Boolean)
    )

    val eventStream = KafkaUtils.createDirectStream[String, String](
      ssc,
      PreferConsistent,
      Subscribe[String, String](Array("events_test"), kafkaParams))

    val t = eventStream
      .map(record => record.value)
      .flatMap(_.split("(?<=\\}),(?=\\{)"))
      .map(JSON.parseRaw(_).getOrElse(new JSONObject(Map("" -> ""))).asInstanceOf[JSONObject])
      .map(x => (x.obj.getOrElse("OID", "").asInstanceOf[String], new DateTime(timeZone), x.obj.getOrElse("STATUS", "").asInstanceOf[Double].toInt))
      .map(x => MyEvent(x._1, x._2, x._3))

    t.foreachRDD(rdd => rdd.propagate(new MyEventBeamFactory))

    ssc.start
    ssc.awaitTermination
  }
}
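
For context, a minimal sketch of the imports this driver presumably relies on, assuming Spark 2.x with the spark-streaming-kafka-0-10 integration and the tranquility-spark module (the propagate call comes from the BeamRDD implicit):

import org.apache.kafka.common.serialization.StringDeserializer
import org.apache.spark.SparkContext
import org.apache.spark.streaming.{Minutes, StreamingContext}
import org.apache.spark.streaming.kafka010.KafkaUtils
import org.apache.spark.streaming.kafka010.LocationStrategies.PreferConsistent
import org.apache.spark.streaming.kafka010.ConsumerStrategies.Subscribe
import org.joda.time.DateTime
import scala.util.parsing.json.{JSON, JSONObject}
import com.metamx.tranquility.spark.BeamRDD._   // provides rdd.propagate(beamFactory)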

case class MyEvent(oid: String, optime: DateTime, status: Int) {

  @JsonValue
  def toMap: Map[String, Any] = Map(
    "optime_value" -> (optime.getMillis / 1000),
    "oid" -> oid,
    "status" -> status
  )
}

object MyEvent {
  implicit val myEventTimestamper = new Timestamper[MyEvent] {
    def timestamp(a: MyEvent) = a.optime
  }

  val Columns = Seq("oid", "optime", "status")

  def fromMap(d: Dict): MyEvent = {
    MyEvent(
      str(d("oid")),
      new DateTime(long(d("optime_value")) * 1000),
      int(d("status"))
    )
  }
}
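
As a side note, since toMap is annotated with @JsonValue, Jackson serializes each event as exactly that map. A quick local check of the payload keys, assuming jackson-module-scala is on the classpath (Tranquility pulls it in), might look like this:

import com.fasterxml.jackson.databind.ObjectMapper
import com.fasterxml.jackson.module.scala.DefaultScalaModule

val mapper = new ObjectMapper().registerModule(DefaultScalaModule)
// Prints {"optime_value":1514381100,"oid":"0010","status":1} for this sample event;
// note that no field is literally named "timestamp".
println(mapper.writeValueAsString(MyEvent("0010", new DateTime(1514381100L * 1000), 1)))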

class MyEventBeamFactory extends BeamFactory[MyEvent] {
  // Return a singleton, so the same connection is shared across all tasks in the same JVM.
  def makeBeam: Beam[MyEvent] = MyEventBeamFactory.BeamInstance

  object MyEventBeamFactory {
    val BeamInstance: Beam[MyEvent] = {
      // Tranquility uses ZooKeeper (through Curator framework) for coordination.
      val curator = CuratorFrameworkFactory.newClient(
        "{IP_0}:2181",
        new BoundedExponentialBackoffRetry(100, 3000, 5)
      )
      curator.start()

      val indexService = DruidEnvironment("druid/overlord") // Your overlord's druid.service, with slashes replaced by colons.
      val discoveryPath = "/druid/discovery"                // Your overlord's druid.discovery.curator.path
      val dataSource = "events"
      val dimensions = IndexedSeq("oid")
      val aggregators = Seq(new LongSumAggregatorFactory("status", "status"))

      // Expects simpleEvent.timestamp to return a Joda DateTime object.
      DruidBeams
        .builder((event: MyEvent) => event.optime)
        .curator(curator)
        .discoveryPath(discoveryPath)
        .location(DruidLocation(indexService, dataSource))
        .rollup(DruidRollup(SpecificDruidDimensions(dimensions), aggregators, QueryGranularities.MINUTE))
        .tuning(
          ClusteredBeamTuning(
            segmentGranularity = Granularity.HOUR,
            windowPeriod = new Period("PT10M"),
            partitions = 1,
            replicants = 1
          )
        )
        .buildBeam()
    }
  }
}
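
Likewise, a sketch of the imports the event and beam-factory classes above would need, with package names taken from the Tranquility examples (treat them as an approximation for your exact versions):

import com.fasterxml.jackson.annotation.JsonValue
import com.metamx.common.Granularity
import com.metamx.common.scala.untyped._   // Dict plus the str/long/int helpers used in fromMap
import com.metamx.tranquility.beam.{Beam, ClusteredBeamTuning}
import com.metamx.tranquility.druid.{DruidBeams, DruidEnvironment, DruidLocation, DruidRollup, SpecificDruidDimensions}
import com.metamx.tranquility.spark.BeamFactory
import com.metamx.tranquility.typeclass.Timestamper
import io.druid.granularity.QueryGranularities
import io.druid.query.aggregation.LongSumAggregatorFactory
import org.apache.curator.framework.CuratorFrameworkFactory
import org.apache.curator.retry.BoundedExponentialBackoffRetry
import org.joda.time.{DateTime, Period}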

I submit this as a fat JAR using:

spark-submit --class MyDirectStreamDriver --master yarn  --properties-file {$PATH}/kafka-streaming-conf {$PATH}/StructuredStreaming-1.0-SNAPSHOT-jar-with-dependencies.jar

Spark executes the map tasks and then freezes at the foreachRDD job. It does not write anything to Druid.

I would really appreciate it if you could help me.

@erleichde it would be helpful if you could also attach the complete stack trace and logs from when the job appears to be hanging.

I also copied the <index_realtime_events_2017-12-27T13:00:00.000Z_0_0> log from the Druid coordinator console:

2017-12-27T13:25:12,786 WARN [qtp1106488049-76] org.eclipse.jetty.servlet.ServletHandler - /druid/worker/v1/chat/firehose:druid:overlord:events-013-0000-0000/push-events
io.druid.java.util.common.parsers.ParseException: Unparseable timestamp found!
    at io.druid.data.input.impl.MapInputRowParser.parse(MapInputRowParser.java:74) ~[druid-api-0.10.1.2.6.3.0-235.jar:0.10.1.2.6.3.0-235]
    at io.druid.segment.realtime.firehose.EventReceiverFirehoseFactory$EventReceiverFirehose.addAll(EventReceiverFirehoseFactory.java:193) ~[druid-server-0.10.1.2.6.3.0-235.jar:0.10.1.2.6.3.0-235]
    at sun.reflect.GeneratedMethodAccessor16.invoke(Unknown Source) ~[?:?]
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_121]
    at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_121]
    at com.sun.jersey.spi.container.JavaMethodInvokerFactory$1.invoke(JavaMethodInvokerFactory.java:60) ~[jersey-server-1.19.3.jar:1.19.3]
    at com.sun.jersey.server.impl.model.method.dispatch.AbstractResourceMethodDispatchProvider$ResponseOutInvoker._dispatch(AbstractResourceMethodDispatchProvider.java:205) ~[jersey-server-1.19.3.jar:1.19.3]
    at com.sun.jersey.server.impl.model.method.dispatch.ResourceJavaMethodDispatcher.dispatch(ResourceJavaMethodDispatcher.java:75) ~[jersey-server-1.19.3.jar:1.19.3]
    at com.sun.jersey.server.impl.uri.rules.HttpMethodRule.accept(HttpMethodRule.java:302) ~[jersey-server-1.19.3.jar:1.19.3]
    at com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147) ~[jersey-server-1.19.3.jar:1.19.3]
    at com.sun.jersey.server.impl.uri.rules.SubLocatorRule.accept(SubLocatorRule.java:137) ~[jersey-server-1.19.3.jar:1.19.3]
    at com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147) ~[jersey-server-1.19.3.jar:1.19.3]
    at com.sun.jersey.server.impl.uri.rules.ResourceClassRule.accept(ResourceClassRule.java:108) ~[jersey-server-1.19.3.jar:1.19.3]
    at com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147) ~[jersey-server-1.19.3.jar:1.19.3]
    at com.sun.jersey.server.impl.uri.rules.RootResourceClassesRule.accept(RootResourceClassesRule.java:84) ~[jersey-server-1.19.3.jar:1.19.3]
    at com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1542) ~[jersey-server-1.19.3.jar:1.19.3]
    at com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1473) ~[jersey-server-1.19.3.jar:1.19.3]
    at com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1419) ~[jersey-server-1.19.3.jar:1.19.3]
    at com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1409) ~[jersey-server-1.19.3.jar:1.19.3]
    at com.sun.jersey.spi.container.servlet.WebComponent.service(WebComponent.java:409) ~[jersey-servlet-1.19.3.jar:1.19.3]
    at com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:558) ~[jersey-servlet-1.19.3.jar:1.19.3]
    at com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:733) ~[jersey-servlet-1.19.3.jar:1.19.3]
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:790) ~[javax.servlet-api-3.1.0.jar:3.1.0]
    at com.google.inject.servlet.ServletDefinition.doServiceImpl(ServletDefinition.java:286) ~[guice-servlet-4.1.0.jar:?]
    at com.google.inject.servlet.ServletDefinition.doService(ServletDefinition.java:276) ~[guice-servlet-4.1.0.jar:?]
    at com.google.inject.servlet.ServletDefinition.service(ServletDefinition.java:181) ~[guice-servlet-4.1.0.jar:?]
    at com.google.inject.servlet.ManagedServletPipeline.service(ManagedServletPipeline.java:91) ~[guice-servlet-4.1.0.jar:?]
    at com.google.inject.servlet.ManagedFilterPipeline.dispatch(ManagedFilterPipeline.java:120) ~[guice-servlet-4.1.0.jar:?]
    at com.google.inject.servlet.GuiceFilter.doFilter(GuiceFilter.java:135) ~[guice-servlet-4.1.0.jar:?]
    at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759) ~[jetty-servlet-9.3.19.v20170502.jar:9.3.19.v20170502]
    at io.druid.server.initialization.jetty.ResponseHeaderFilterHolder$ResponseHeaderFilter.doFilter(ResponseHeaderFilterHolder.java:100) ~[druid-server-0.10.1.2.6.3.0-235.jar:0.10.1.2.6.3.0-235]
    at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759) ~[jetty-servlet-9.3.19.v20170502.jar:9.3.19.v20170502]
    at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582) [jetty-servlet-9.3.19.v20170502.jar:9.3.19.v20170502]
    at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:224) [jetty-server-9.3.19.v20170502.jar:9.3.19.v20170502]
    at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180) [jetty-server-9.3.19.v20170502.jar:9.3.19.v20170502]
    at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:512) [jetty-servlet-9.3.19.v20170502.jar:9.3.19.v20170502]
    at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185) [jetty-server-9.3.19.v20170502.jar:9.3.19.v20170502]
    at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1112) [jetty-server-9.3.19.v20170502.jar:9.3.19.v20170502]
    at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141) [jetty-server-9.3.19.v20170502.jar:9.3.19.v20170502]
    at org.eclipse.jetty.server.handler.gzip.GzipHandler.handle(GzipHandler.java:493) [jetty-server-9.3.19.v20170502.jar:9.3.19.v20170502]
    at org.eclipse.jetty.server.handler.HandlerList.handle(HandlerList.java:52) [jetty-server-9.3.19.v20170502.jar:9.3.19.v20170502]
    at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134) [jetty-server-9.3.19.v20170502.jar:9.3.19.v20170502]
    at org.eclipse.jetty.server.Server.handle(Server.java:534) [jetty-server-9.3.19.v20170502.jar:9.3.19.v20170502]
    at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:320) [jetty-server-9.3.19.v20170502.jar:9.3.19.v20170502]
    at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:251) [jetty-server-9.3.19.v20170502.jar:9.3.19.v20170502]
    at org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:283) [jetty-io-9.3.19.v20170502.jar:9.3.19.v20170502]
    at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:108) [jetty-io-9.3.19.v20170502.jar:9.3.19.v20170502]
    at org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93) [jetty-io-9.3.19.v20170502.jar:9.3.19.v20170502]
    at org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.executeProduceConsume(ExecuteProduceConsume.java:303) [jetty-util-9.3.19.v20170502.jar:9.3.19.v20170502]
    at org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceConsume(ExecuteProduceConsume.java:148) [jetty-util-9.3.19.v20170502.jar:9.3.19.v20170502]
    at org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:136) [jetty-util-9.3.19.v20170502.jar:9.3.19.v20170502]
    at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:671) [jetty-util-9.3.19.v20170502.jar:9.3.19.v20170502]
    at org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:589) [jetty-util-9.3.19.v20170502.jar:9.3.19.v20170502]
    at java.lang.Thread.run(Thread.java:745) [?:1.8.0_121]
Caused by: java.lang.NullPointerException: Null timestamp in input: {optime_value=1514381100, oid=0010, status=1}
    at io.druid.data.input.impl.MapInputRowParser.parse(MapInputRowParser.java:66) ~[druid-api-0.10.1.2.6.3.0-235.jar:0.10.1.2.6.3.0-235]
    ... 53 more
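
The "Null timestamp in input: {optime_value=1514381100, oid=0010, status=1}" line suggests the indexing task's timestampSpec cannot find a timestamp column in the serialized map. A hedged sketch of one possible workaround, assuming the task uses Tranquility's default timestampSpec and that it looks for a column named "timestamp" (an assumption, not a confirmed fix), would be to emit such a column from the @JsonValue map:

// Hypothetical change to MyEvent.toMap; assumes the timestampSpec column is "timestamp".
@JsonValue
def toMap: Map[String, Any] = Map(
  "timestamp" -> optime.toString,              // ISO8601 string from the Joda DateTime
  "optime_value" -> (optime.getMillis / 1000),
  "oid" -> oid,
  "status" -> status
)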

Hi Nishant,

I copied the log for one container in Spark. It gives the same error in 3 containers and then the Spark job exits. Basically, it is

java.io.IOException: Failed to send request to task[index_realtime_events_2017-12-20T12:00:00.000Z_0_0]: 500 Internal Server Error

for all of them.
I suspect I am just setting some configuration property wrong, but I cannot figure out which one.
PS: I changed IP and hostname references in the logs to {$IP_n} etc.
******************************************************************************

Container: container_e58_1512485869804_4766_01_000002 on hadooptest3{$HOSTNAME}_45454
LogAggregationType: AGGREGATED
========================================================================================
LogType:stderr
LogLastModifiedTime:Wed Dec 20 14:51:19 +0200 2017
LogLength:99692
LogContents:
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:{{$PATH}/spark2-hdp-yarn-archive.tar.gz/slf4j-log4j12-1.7.16.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/hdp/2.6.3.0-235/hadoop/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
17/12/20 14:42:23 INFO CoarseGrainedExecutorBackend: Started daemon with process name: 22473@hadooptest3{$HOSTNAME}
17/12/20 14:42:23 INFO SignalUtils: Registered signal handler for TERM
17/12/20 14:42:23 INFO SignalUtils: Registered signal handler for HUP
17/12/20 14:42:23 INFO SignalUtils: Registered signal handler for INT
17/12/20 14:42:23 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
17/12/20 14:42:23 INFO SecurityManager: Changing view acls to: yarn,hdfs
17/12/20 14:42:23 INFO SecurityManager: Changing modify acls to: yarn,hdfs
17/12/20 14:42:23 INFO SecurityManager: Changing view acls groups to: 
17/12/20 14:42:23 INFO SecurityManager: Changing modify acls groups to: 
17/12/20 14:42:23 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users  with view permissions: Set(yarn, hdfs); groups with view permissions: Set(); users  with modify permissions: Set(yarn, hdfs); groups with modify permissions: Set()
17/12/20 14:42:24 INFO TransportClientFactory: Successfully created connection to /{$IP_1}:42635 after 70 ms (0 ms spent in bootstraps)
17/12/20 14:42:24 INFO SecurityManager: Changing view acls to: yarn,hdfs
17/12/20 14:42:24 INFO SecurityManager: Changing modify acls to: yarn,hdfs
17/12/20 14:42:24 INFO SecurityManager: Changing view acls groups to: 
17/12/20 14:42:24 INFO SecurityManager: Changing modify acls groups to: 
17/12/20 14:42:24 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users  with view permissions: Set(yarn, hdfs); groups with view permissions: Set(); users  with modify permissions: Set(yarn, hdfs); groups with modify permissions: Set()
17/12/20 14:42:24 INFO TransportClientFactory: Successfully created connection to /{$IP_1}:42635 after 1 ms (0 ms spent in bootstraps)
17/12/20 14:42:24 INFO DiskBlockManager: Created local directory at /data1/hadoop/yarn/local/usercache/hdfs/appcache/application_1512485869804_4766/blockmgr-a5bec2d5-e277-4529-a894-9c1686563428
17/12/20 14:42:24 INFO MemoryStore: MemoryStore started with capacity 366.3 MB
17/12/20 14:42:24 INFO CoarseGrainedExecutorBackend: Connecting to driver: spark://CoarseGrainedScheduler@{$IP}:42635
17/12/20 14:42:25 INFO CoarseGrainedExecutorBackend: Successfully registered with driver
17/12/20 14:42:25 INFO Executor: Starting executor ID 1 on host hadooptest3{$HOSTNAME}
17/12/20 14:42:25 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 41839.
17/12/20 14:42:25 INFO NettyBlockTransferService: Server created on hadooptest3{$HOSTNAME}:41839
17/12/20 14:42:25 INFO BlockManager: Using org.apache.spark.storage.RandomBlockReplicationPolicy for block replication policy
17/12/20 14:42:25 INFO BlockManagerMaster: Registering BlockManager BlockManagerId(1, {$URL}, 41839, None)
17/12/20 14:42:25 INFO BlockManagerMaster: Registered BlockManager BlockManagerId(1, {$URL}, 41839, None)
17/12/20 14:42:25 INFO BlockManager: Initialized BlockManager: BlockManagerId(1, {$URL}, 41839, None)
17/12/20 14:45:00 INFO CoarseGrainedExecutorBackend: Got assigned task 0
17/12/20 14:45:00 INFO Executor: Running task 0.0 in stage 0.0 (TID 0)
17/12/20 14:45:00 INFO Executor: Fetching spark://{$IP_1}:42635/jars/StructuredStreaming-1.0-SNAPSHOT-jar-with-dependencies.jar with timestamp 1513773733343
17/12/20 14:45:00 INFO TransportClientFactory: Successfully created connection to /{$IP_1}:42635 after 3 ms (0 ms spent in bootstraps)
17/12/20 14:45:00 INFO Utils: Fetching spark://{$IP_1}:42635/jars/StructuredStreaming-1.0-SNAPSHOT-jar-with-dependencies.jar to /data1/hadoop/yarn/local/usercache/hdfs/appcache/application_1512485869804_4766/spark-af83077f-9b92-4b46-816b-9e25c3396055/fetchFileTemp6003723702715899041.tmp
17/12/20 14:45:01 INFO Utils: Copying /data1/hadoop/yarn/local/usercache/hdfs/appcache/application_1512485869804_4766/spark-af83077f-9b92-4b46-816b-9e25c3396055/16117228611513773733343_cache to /data1/hadoop/yarn/local/usercache/hdfs/appcache/application_1512485869804_4766/container_e58_1512485869804_4766_01_000002/./StructuredStreaming-1.0-SNAPSHOT-jar-with-dependencies.jar
17/12/20 14:45:01 INFO Executor: Adding file:/data1/hadoop/yarn/local/usercache/hdfs/appcache/application_1512485869804_4766/container_e58_1512485869804_4766_01_000002/./StructuredStreaming-1.0-SNAPSHOT-jar-with-dependencies.jar to class loader
17/12/20 14:45:01 INFO TorrentBroadcast: Started reading broadcast variable 0
17/12/20 14:45:01 INFO TransportClientFactory: Successfully created connection to /{$IP_1}:45627 after 3 ms (0 ms spent in bootstraps)
17/12/20 14:45:01 INFO MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 2.5 KB, free 366.3 MB)
17/12/20 14:45:01 INFO TorrentBroadcast: Reading broadcast variable 0 took 198 ms
17/12/20 14:45:01 INFO MemoryStore: Block broadcast_0 stored as values in memory (estimated size 4.3 KB, free 366.3 MB)
17/12/20 14:45:01 INFO KafkaRDD: Beginning offset 2413 is the same as ending offset skipping events_test 0
17/12/20 14:45:01 INFO CuratorFrameworkImpl: Starting
17/12/20 14:45:01 INFO ZooKeeper: Client environment:zookeeper.version=3.4.6-235--1, built on 10/30/2017 01:54 GMT
17/12/20 14:45:01 INFO ZooKeeper: Client environment:host.name=hadooptest3{$HOSTNAME}
17/12/20 14:45:01 INFO ZooKeeper: Client environment:java.version=1.8.0_40
17/12/20 14:45:01 INFO ZooKeeper: Client environment:java.vendor=Oracle Corporation
17/12/20 14:45:01 INFO ZooKeeper: Client environment:java.home=/usr/jdk64/jdk1.8.0_40/jre
17/12/20 14:45:01 INFO ZooKeeper: Client environment:java.class.path=/data1/hadoop/yarn/local/usercache/hdfs/appcache/application_1512485869804_4766/container_e58_1512485869804_4766_01_000002:/data1/hadoop/yarn/local/usercache/hdfs/appcache/application_1512485869804_4766/container_e58_1512485869804_4766_01_000002/__spark_conf__:/data1/hadoop/yarn/local/usercache/hdfs/appcache/application_1512485869804_4766/container_e58_1512485869804_4766_01_000002/__spark_libs__/hk2-api-2.4.0-b34.jar:/data1/hadoop/yarn/local/usercache/hdfs/appcache/application_1512485869804_4766/container_e58_1512485869804_4766_01_000002/__spark_libs__/JavaEWAH-0.3.2.jar:/data1/hadoop/yarn/local/usercache/hdfs/appcache/application_1512485869804_4766/container_e58_1512485869804_4766_01_000002/__spark_libs__/datanucleus-api-jdo-3.2.6.jar:/data1/hadoop/yarn/local/usercache/hdfs/appcache/application_1512485869804_4766/container_e58_1512485869804_4766_01_000002/__spark_libs__/RoaringBitmap-0.5.11.jar:/data1/hadoop/yarn/local/usercache/hdfs/appcache/application_1512485869804_4766/container_e58_1512485869804_4766_01_000002/__spark_libs__/hk2-locator-2.4.0-b34.jar:/data1/hadoop/yarn/local/usercache/hdfs/appcache/application_1512485869804_4766/container_e58_1512485869804_4766_01_000002/__spark_libs__/ST4-4.0.4.jar:/data1/hadoop/yarn/local/usercache/hdfs/appcache/application_1512485869804_4766/container_e58_1512485869804_4766_01_000002/__spark_libs__/hk2-utils-2.4.0-b34.jar:/data1/hadoop/yarn/local/usercache/hdfs/appcache/application_1512485869804_4766/container_e58_1512485869804_4766_01_000002/__spark_libs__/activation-1.1.1.jar:/data1/hadoop/yarn/local/usercache/hdfs/appcache/application_1512485869804_4766/container_e58_1512485869804_4766_01_000002/__spark_libs__/datanucleus-core-3.2.10.jar:/data1/hadoop/yarn/local/usercache/hdfs/appcache/application_1512485869804_4766/container_e58_1512485869804_4766_01_000002/__spark_libs__/aircompressor-0.8.jar:/data1/hadoop/yarn/local/usercache/hdfs/appcache/application_1512485869804_4766/container_e58_1512485869804_4766_01_000002/__spark_libs__/httpcore-4.4.4.jar:/data1/hadoop/yarn/local/usercache/hdfs/appcache/application_1512485869804_4766/container_e58_1512485869804_4766_01_000002/__spark_libs__/antlr-2.7.7.jar:/data1/hadoop/yarn/local/usercache/hdfs/appcache/application_1512485869804_4766/container_e58_1512485869804_4766_01_000002/__spark_libs__/datanucleus-rdbms-3.2.9.jar:/data1/hadoop/yarn/local/usercache/hdfs/appcache/application_1512485869804_4766/container_e58_1512485869804_4766_01_000002/__spark_libs__/antlr-runtime-3.4.jar:/data1/hadoop/yarn/local/usercache/hdfs/appcache/application_1512485869804_4766/container_e58_1512485869804_4766_01_000002/__spark_libs__/derby-10.12.1.1.jar:/data1/hadoop/yarn/local/usercache/hdfs/appcache/application_1512485869804_4766/container_e58_1512485869804_4766_01_000002/__spark_libs__/antlr4-runtime-4.5.3.jar:/data1/hadoop/yarn/local/usercache/hdfs/appcache/application_1512485869804_4766/container_e58_1512485869804_4766_01_000002/__spark_libs__/httpclient-4.5.2.jar:/data1/hadoop/yarn/local/usercache/hdfs/appcache/application_1512485869804_4766/container_e58_1512485869804_4766_01_000002/__spark_libs__/aopalliance-1.0.jar:/data1/hadoop/yarn/local/usercache/hdfs/appcache/application_1512485869804_4766/container_e58_1512485869804_4766_01_000002/__spark_libs__/commons-crypto-1.0.0.jar:/data1/hadoop/yarn/local/usercache/hdfs/appcache/application_1512485869804_4766/container_e58_1512485869804_4766_01_000002/__spark_libs__/aopalliance-repackaged-2.4.0-b34.jar:/
data1/hadoop/yarn/local/usercache/hdfs/appcache/application_1512485869804_4766/container_e58_1512485869804_4766_01_000002/__spark_libs__/commons-dbcp-1.4.jar:/data1/hadoop/yarn/local/usercache/hdfs/appcache/application_1512485869804_4766/container_e58_1512485869804_4766_01_000002/__spark_libs__/jpam-1.1.jar:/data1/hadoop/yarn/local/usercache/hdfs/appcache/application_1512485869804_4766/container_e58_1512485869804_4766_01_000002/__spark_libs__/apache-log4j-extras-1.2.17.jar:/data1/hadoop/yarn/local/usercache/hdfs/appcache/application_1512485869804_4766/container_e58_1512485869804_4766_01_000002/__spark_libs__/eigenbase-properties-1.1.5.jar:/data1/hadoop/yarn/local/usercache/hdfs/appcache/application_1512485869804_4766/container_e58_1512485869804_4766_01_000002/__spark_libs__/apacheds-i18n-2.0.0-M15.jar:/data1/hadoop/yarn/local/usercache/hdfs/appcache/application_1512485869804_4766/container_e58_1512485869804_4766_01_000002/__spark_libs__/hadoop-mapreduce-client-app-2.7.3.2.6.3.0-235.jar:/data1/hadoop/yarn/local/usercache/hdfs/appcache/application_1512485869804_4766/container_e58_1512485869804_4766_01_000002/__spark_libs__/apacheds-kerberos-codec-2.0.0-M15.jar:/data1/hadoop/yarn/local/usercache/hdfs/appcache/application_1512485869804_4766/container_e58_1512485869804_4766_01_000002/__spark_libs__/gson-2.2.4.jar:/data1/hadoop/yarn/local/usercache/hdfs/appcache/application_1512485869804_4766/container_e58_1512485869804_4766_01_000002/__spark_libs__/api-asn1-api-1.0.0-M20.jar:/data1/hadoop/yarn/local/usercache/hdfs/appcache/application_1512485869804_4766/container_e58_1512485869804_4766_01_000002/__spark_libs__/guava-14.0.1.jar:/data1/hadoop/yarn/local/usercache/hdfs/appcache/application_1512485869804_4766/container_e58_1512485869804_4766_01_000002/__spark_libs__/api-util-1.0.0-M20.jar:/data1/hadoop/yarn/local/usercache/hdfs/appcache/application_1512485869804_4766/container_e58_1512485869804_4766_01_000002/__spark_libs__/guice-3.0.jar:/data1/hadoop/yarn/local/usercache/hdfs/appcache/application_1512485869804_4766/container_e58_1512485869804_4766_01_000002/__spark_libs__/jta-1.1.jar:/data1/hadoop/yarn/local/usercache/hdfs/appcache/application_1512485869804_4766/container_e58_1512485869804_4766_01_000002/__spark_libs__/arpack_combined_all-0.1.jar:/data1/hadoop/yarn/local/usercache/hdfs/appcache/application_1512485869804_4766/container_e58_1512485869804_4766_01_000002/__spark_libs__/ivy-2.4.0.jar:/data1/hadoop/yarn/local/usercache/hdfs/appcache/application_1512485869804_4766/container_e58_1512485869804_4766_01_000002/__spark_libs__/avro-1.7.7.jar:/data1/hadoop/yarn/local/usercache/hdfs/appcache/application_1512485869804_4766/container_e58_1512485869804_4766_01_000002/__spark_libs__/jackson-core-2.6.5.jar:/data1/hadoop/yarn/local/usercache/hdfs/appcache/application_1512485869804_4766/container_e58_1512485869804_4766_01_000002/__spark_libs__/avro-ipc-1.7.7.jar:/data1/hadoop/yarn/local/usercache/hdfs/appcache/application_1512485869804_4766/container_e58_1512485869804_4766_01_000002/__spark_libs__/commons-digester-1.8.jar:/data1/hadoop/yarn/local/usercache/hdfs/appcache/application_1512485869804_4766/container_e58_1512485869804_4766_01_000002/__spark_libs__/avro-mapred-1.7.7-hadoop2.jar:/data1/hadoop/yarn/local/usercache/hdfs/appcache/application_1512485869804_4766/container_e58_1512485869804_4766_01_000002/__spark_libs__/guice-servlet-3.0.jar:/data1/hadoop/yarn/local/usercache/hdfs/appcache/application_1512485869804_4766/container_e58_1512485869804_4766_01_000002/__spark_libs__/aws-java-sdk-core-1.10.6
.jar:/data1/hadoop/yarn/local/usercache/hdfs/appcache/application_1512485869804_4766/container_e58_1512485869804_4766_01_000002/__spark_libs__/hadoop-auth-2.7.3.2.6.3.0-235.jar:/data1/hadoop/yarn/local/usercache/hdfs/appcache/application_1512485869804_4766/container_e58_1512485869804_4766_01_000002/__spark_libs__/aws-java-sdk-kms-1.10.6.jar:/data1/hadoop/yarn/local/usercache/hdfs/appcache/application_1512485869804_4766/container_e58_1512485869804_4766_01_000002/__spark_libs__/hadoop-aws-2.7.3.2.6.3.0-235.jar:/data1/hadoop/yarn/local/usercache/hdfs/appcache/application_1512485869804_4766/container_e58_1512485869804_4766_01_000002/__spark_libs__/aws-java-sdk-s3-1.10.6.jar:/data1/hadoop/yarn/local/usercache/hdfs/appcache/application_1512485869804_4766/container_e58_1512485869804_4766_01_000002/__spark_libs__/commons-math3-3.4.1.jar:/data1/hadoop/yarn/local/usercache/hdfs/appcache/application_1512485869804_4766/container_e58_1512485869804_4766_01_000002/__spark_libs__/azure-keyvault-core-0.8.0.jar:/data1/hadoop/yarn/local/usercache/hdfs/appcache/application_1512485869804_4766/container_e58_1512485869804_4766_01_000002/__spark_libs__/hadoop-azure-2.7.3.2.6.3.0-235.jar:/data1/hadoop/yarn/local/usercache/hdfs/appcache/application_1512485869804_4766/container_e58_1512485869804_4766_01_000002/__spark_libs__/azure-storage-5.4.0.jar:/data1/hadoop/yarn/local/usercache/hdfs/appcache/application_1512485869804_4766/container_e58_1512485869804_4766_01_000002/__spark_libs__/jackson-annotations-2.6.5.jar:/data1/hadoop/yarn/local/usercache/hdfs/appcache/application_1512485869804_4766/container_e58_1512485869804_4766_01_000002/__spark_libs__/base64-2.3.8.jar:/data1/hadoop/yarn/local/usercache/hdfs/appcache/application_1512485869804_4766/container_e58_1512485869804_4766_01_000002/__spark_libs__/hadoop-client-2.7.3.2.6.3.0-235.jar:/data1/hadoop/yarn/local/usercache/hdfs/appcache/application_1512485869804_4766/container_e58_1512485869804_4766_01_000002/__spark_libs__/bcprov-jdk15on-1.51.jar:/data1/hadoop/yarn/local/usercache/hdfs/appcache/application_1512485869804_4766/container_e58_1512485869804_4766_01_000002/__spark_libs__/hadoop-common-2.7.3.2.6.3.0-235.jar:/data1/hadoop/yarn/local/usercache/hdfs/appcache/application_1512485869804_4766/container_e58_1512485869804_4766_01_000002/__spark_libs__/bonecp-0.8.0.RELEASE.jar:/data1/hadoop/yarn/local/usercache/hdfs/appcache/application_1512485869804_4766/container_e58_1512485869804_4766_01_000002/__spark_libs__/commons-net-2.2.jar:/data1/hadoop/yarn/local/usercache/hdfs/appcache/application_1512485869804_4766/container_e58_1512485869804_4766_01_000002/__spark_libs__/lz4-1.3.0.jar:/data1/hadoop/yarn/local/usercache/hdfs/appcache/application_1512485869804_4766/container_e58_1512485869804_4766_01_000002/__spark_libs__/breeze-macros_2.11-0.13.2.jar:/data1/hadoop/yarn/local/usercache/hdfs/appcache/application_1512485869804_4766/container_e58_1512485869804_4766_01_000002/__spark_libs__/hadoop-hdfs-2.7.3.2.6.3.0-235.jar:/data1/hadoop/yarn/local/usercache/hdfs/appcache/application_1512485869804_4766/container_e58_1512485869804_4766_01_000002/__spark_libs__/breeze_2.11-0.13.2.jar:/data1/hadoop/yarn/local/usercache/hdfs/appcache/application_1512485869804_4766/container_e58_1512485869804_4766_01_000002/__spark_libs__/commons-pool-1.5.4.jar:/data1/hadoop/yarn/local/usercache/hdfs/appcache/application_1512485869804_4766/container_e58_1512485869804_4766_01_000002/__spark_libs__/calcite-avatica-1.2.0-incubating.jar:/data1/hadoop/yarn/local/usercache/hdfs/appcache/application_1512485
869804_4766/container_e58_1512485869804_4766_01_000002/__spark_libs__/compress-lzf-1.0.3.jar:/data1/hadoop/yarn/local/usercache/hdfs/appcache/application_1512485869804_4766/container_e58_1512485869804_4766_01_000002/__spark_libs__/calcite-core-1.2.0-incubating.jar:/data1/hadoop/yarn/local/usercache/hdfs/appcache/application_1512485869804_4766/container_e58_1512485869804_4766_01_000002/__spark_libs__/core-1.1.2.jar:/data1/hadoop/yarn/local/usercache/hdfs/appcache/application_1512485869804_4766/container_e58_1512485869804_4766_01_000002/__spark_libs__/oro-2.0.8.jar:/data1/hadoop/yarn/local/usercache/hdfs/appcache/application_1512485869804_4766/container_e58_1512485869804_4766_01_000002/__spark_libs__/calcite-linq4j-1.2.0-incubating.jar:/data1/hadoop/yarn/local/usercache/hdfs/appcache/application_1512485869804_4766/container_e58_1512485869804_4766_01_000002/__spark_libs__/jackson-core-asl-1.9.13.jar:/data1/hadoop/yarn/local/usercache/hdfs/appcache/application_1512485869804_4766/container_e58_1512485869804_4766_01_000002/__spark_libs__/chill-java-0.8.0.jar:/data1/hadoop/yarn/local/usercache/hdfs/appcache/application_1512485869804_4766/container_e58_1512485869804_4766_01_000002/__spark_libs__/jackson-databind-2.6.5.jar:/data1/hadoop/yarn/local/usercache/hdfs/appcache/application_1512485869804_4766/container_e58_1512485869804_4766_01_000002/__spark_libs__/chill_2.11-0.8.0.jar:/data1/hadoop/yarn/local/usercache/hdfs/appcache/application_1512485869804_4766/container_e58_1512485869804_4766_01_000002/__spark_libs__/hadoop-yarn-api-2.7.3.2.6.3.0-235.jar:/data1/hadoop/yarn/local/usercache/hdfs/appcache/application_1512485869804_4766/container_e58_1512485869804_4766_01_000002/__spark_libs__/commons-beanutils-1.7.0.jar:/data1/hadoop/yarn/local/usercache/hdfs/appcache/application_1512485869804_4766/container_e58_1512485869804_4766_01_000002/__spark_libs__/curator-client-2.6.0.jar:/data1/hadoop/yarn/local/usercache/hdfs/appcache/application_1512485869804_4766/container_e58_1512485869804_4766_01_000002/__spark_libs__/commons-beanutils-core-1.8.0.jar:/data1/hadoop/yarn/local/usercache/hdfs/appcache/application_1512485869804_4766/container_e58_1512485869804_4766_01_000002/__spark_libs__/jackson-jaxrs-1.9.13.jar:/data1/hadoop/yarn/local/usercache/hdfs/appcache/application_1512485869804_4766/container_e58_1512485869804_4766_01_000002/__spark_libs__/commons-cli-1.2.jar:/data1/hadoop/yarn/local/usercache/hdfs/appcache/application_1512485869804_4766/container_e58_1512485869804_4766_01_000002/__spark_libs__/hive-beeline-1.21.2.2.6.3.0-235.jar:/data1/hadoop/yarn/local/usercache/hdfs/appcache/application_1512485869804_4766/container_e58_1512485869804_4766_01_000002/__spark_libs__/commons-codec-1.10.jar:/data1/hadoop/yarn/local/usercache/hdfs/appcache/application_1512485869804_4766/container_e58_1512485869804_4766_01_000002/__spark_libs__/curator-framework-2.6.0.jar:/data1/hadoop/yarn/local/usercache/hdfs/appcache/application_1512485869804_4766/container_e58_1512485869804_4766_01_000002/__spark_libs__/commons-collections-3.2.2.jar:/data1/hadoop/yarn/local/usercache/hdfs/appcache/application_1512485869804_4766/container_e58_1512485869804_4766_01_000002/__spark_libs__/hive-cli-1.21.2.2.6.3.0-235.jar:/data1/hadoop/yarn/local/usercache/hdfs/appcache/application_1512485869804_4766/container_e58_1512485869804_4766_01_000002/__spark_libs__/commons-compiler-3.0.0.jar:/data1/hadoop/yarn/local/usercache/hdfs/appcache/application_1512485869804_4766/container_e58_1512485869804_4766_01_000002/__spark_libs__/hive-exec-1.21.2.2.6.3.
0-235.jar:/data1/hadoop/yarn/local/usercache/hdfs/appcache/application_1512485869804_4766/container_e58_1512485869804_4766_01_000002/__spark_libs__/commons-compress-1.4.1.jar:/data1/hadoop/yarn/local/usercache/hdfs/appcache/application_1512485869804_4766/container_e58_1512485869804_4766_01_000002/__spark_libs__/curator-recipes-2.6.0.jar:/data1/hadoop/yarn/local/usercache/hdfs/appcache/application_1512485869804_4766/container_e58_1512485869804_4766_01_000002/__spark_libs__/commons-configuration-1.6.jar:/data1/hadoop/yarn/local/usercache/hdfs/appcache/application_1512485869804_4766/container_e58_1512485869804_4766_01_000002/__spark_libs__/commons-httpclient-3.1.jar:/data1/hadoop/yarn/local/usercache/hdfs/appcache/application_1512485869804_4766/container_e58_1512485869804_4766_01_000002/__spark_libs__/jackson-mapper-asl-1.9.13.jar:/data1/hadoop/yarn/local/usercache/hdfs/appcache/application_1512485869804_4766/container_e58_1512485869804_4766_01_000002/__spark_libs__/commons-io-2.4.jar:/data1/hadoop/yarn/local/usercache/hdfs/appcache/application_1512485869804_4766/container_e58_1512485869804_4766_01_000002/__spark_libs__/jackson-xc-1.9.13.jar:/data1/hadoop/yarn/local/usercache/hdfs/appcache/application_1512485869804_4766/container_e58_1512485869804_4766_01_000002/__spark_libs__/commons-lang-2.6.jar:/data1/hadoop/yarn/local/usercache/hdfs/appcache/application_1512485869804_4766/container_e58_1512485869804_4766_01_000002/__spark_libs__/hive-jdbc-1.21.2.2.6.3.0-235.jar:/data1/hadoop/yarn/local/usercache/hdfs/appcache/application_1512485869804_4766/container_e58_1512485869804_4766_01_000002/__spark_libs__/commons-lang3-3.5.jar:/data1/hadoop/yarn/local/usercache/hdfs/appcache/application_1512485869804_4766/container_e58_1512485869804_4766_01_000002/__spark_libs__/hive-metastore-1.21.2.2.6.3.0-235.jar:/data1/hadoop/yarn/local/usercache/hdfs/appcache/application_1512485869804_4766/container_e58_1512485869804_4766_01_000002/__spark_libs__/commons-logging-1.1.3.jar:/data1/hadoop/yarn/local/usercache/hdfs/appcache/application_1512485869804_4766/container_e58_1512485869804_4766_01_000002/__spark_libs__/java-xmlbuilder-1.0.jar:/data1/hadoop/yarn/local/usercache/hdfs/appcache/application_1512485869804_4766/container_e58_1512485869804_4766_01_000002/__spark_libs__/json4s-core_2.11-3.2.11.jar:/data1/hadoop/yarn/local/usercache/hdfs/appcache/application_1512485869804_4766/container_e58_1512485869804_4766_01_000002/__spark_libs__/hadoop-annotations-2.7.3.2.6.3.0-235.jar:/data1/hadoop/yarn/local/usercache/hdfs/appcache/application_1512485869804_4766/container_e58_1512485869804_4766_01_000002/__spark_libs__/scalap-2.11.8.jar:/data1/hadoop/yarn/local/usercache/hdfs/appcache/application_1512485869804_4766/container_e58_1512485869804_4766_01_000002/__spark_libs__/hadoop-mapreduce-client-common-2.7.3.2.6.3.0-235.jar:/data1/hadoop/yarn/local/usercache/hdfs/appcache/application_1512485869804_4766/container_e58_1512485869804_4766_01_000002/__spark_libs__/json4s-jackson_2.11-3.2.11.jar:/data1/hadoop/yarn/local/usercache/hdfs/appcache/application_1512485869804_4766/container_e58_1512485869804_4766_01_000002/__spark_libs__/xz-1.0.jar:/data1/hadoop/yarn/local/usercache/hdfs/appcache/application_1512485869804_4766/container_e58_1512485869804_4766_01_000002/__spark_libs__/hadoop-mapreduce-client-core-2.7.3.2.6.3.0-235.jar:/data1/hadoop/yarn/local/usercache/hdfs/appcache/application_1512485869804_4766/container_e58_1512485869804_4766_01_000002/__spark_libs__/parquet-common-1.8.2.jar:/data1/hadoop/yarn/local/usercache/hdfs/appc
ache/application_1512485869804_4766/container_e58_1512485869804_4766_01_000002/__spark_libs__/paranamer-2.6.jar:/data1/hadoop/yarn/local/usercache/hdfs/appcache/application_1512485869804_4766/container_e58_1512485869804_4766_01_000002/__spark_libs__/hadoop-mapreduce-client-jobclient-2.7.3.2.6.3.0-235.jar:/data1/hadoop/yarn/local/usercache/hdfs/appcache/application_1512485869804_4766/container_e58_1512485869804_4766_01_000002/__spark_libs__/json4s-ast_2.11-3.2.11.jar:/data1/hadoop/yarn/local/usercache/hdfs/appcache/application_1512485869804_4766/container_e58_1512485869804_4766_01_000002/__spark_libs__/parquet-column-1.8.2.jar:/data1/hadoop/yarn/local/usercache/hdfs/appcache/application_1512485869804_4766/container_e58_1512485869804_4766_01_000002/__spark_libs__/hadoop-mapreduce-client-shuffle-2.7.3.2.6.3.0-235.jar:/data1/hadoop/yarn/local/usercache/hdfs/appcache/application_1512485869804_4766/container_e58_1512485869804_4766_01_000002/__spark_libs__/jtransforms-2.4.0.jar:/data1/hadoop/yarn/local/usercache/hdfs/appcache/application_1512485869804_4766/container_e58_1512485869804_4766_01_000002/__spark_libs__/slf4j-api-1.7.16.jar:/data1/hadoop/yarn/local/usercache/hdfs/appcache/application_1512485869804_4766/container_e58_1512485869804_4766_01_000002/__spark_libs__/hadoop-openstack-2.7.3.2.6.3.0-235.jar:/data1/hadoop/yarn/local/usercache/hdfs/appcache/application_1512485869804_4766/container_e58_1512485869804_4766_01_000002/__spark_libs__/jul-to-slf4j-1.7.16.jar:/data1/hadoop/yarn/local/usercache/hdfs/appcache/application_1512485869804_4766/container_e58_1512485869804_4766_01_000002/__spark_libs__/hadoop-yarn-client-2.7.3.2.6.3.0-235.jar:/data1/hadoop/yarn/local/usercache/hdfs/appcache/application_1512485869804_4766/container_e58_1512485869804_4766_01_000002/__spark_libs__/kryo-shaded-3.0.3.jar:/data1/hadoop/yarn/local/usercache/hdfs/appcache/application_1512485869804_4766/container_e58_1512485869804_4766_01_000002/__spark_libs__/snappy-0.2.jar:/data1/hadoop/yarn/local/usercache/hdfs/appcache/application_1512485869804_4766/container_e58_1512485869804_4766_01_000002/__spark_libs__/hadoop-yarn-common-2.7.3.2.6.3.0-235.jar:/data1/hadoop/yarn/local/usercache/hdfs/appcache/application_1512485869804_4766/container_e58_1512485869804_4766_01_000002/__spark_libs__/leveldbjni-all-1.8.jar:/data1/hadoop/yarn/local/usercache/hdfs/appcache/application_1512485869804_4766/container_e58_1512485869804_4766_01_000002/__spark_libs__/hadoop-yarn-registry-2.7.3.2.6.3.0-235.jar:/data1/hadoop/yarn/local/usercache/hdfs/appcache/application_1512485869804_4766/container_e58_1512485869804_4766_01_000002/__spark_libs__/jsp-api-2.1.jar:/data1/hadoop/yarn/local/usercache/hdfs/appcache/application_1512485869804_4766/container_e58_1512485869804_4766_01_000002/__spark_libs__/parquet-encoding-1.8.2.jar:/data1/hadoop/yarn/local/usercache/hdfs/appcache/application_1512485869804_4766/container_e58_1512485869804_4766_01_000002/__spark_libs__/hadoop-yarn-server-common-2.7.3.2.6.3.0-235.jar:/data1/hadoop/yarn/local/usercache/hdfs/appcache/application_1512485869804_4766/container_e58_1512485869804_4766_01_000002/__spark_libs__/jsr305-1.3.9.jar:/data1/hadoop/yarn/local/usercache/hdfs/appcache/application_1512485869804_4766/container_e58_1512485869804_4766_01_000002/__spark_libs__/parquet-format-2.3.1.jar:/data1/hadoop/yarn/local/usercache/hdfs/appcache/application_1512485869804_4766/container_e58_1512485869804_4766_01_000002/__spark_libs__/hadoop-yarn-server-web-proxy-2.7.3.2.6.3.0-235.jar:/data1/hadoop/yarn/local/usercache/hdfs/appcac
he/application_1512485869804_4766/container_e58_1512485869804_4766_01_000002/__spark_libs__/log4j-1.2.17.jar:/data1/hadoop/yarn/local/usercache/hdfs/appcache/application_1512485869804_4766/container_e58_1512485869804_4766_01_000002/__spark_libs__/htrace-core-3.1.0-incubating.jar:/data1/hadoop/yarn/local/usercache/hdfs/appcache/application_1512485869804_4766/container_e58_1512485869804_4766_01_000002/__spark_libs__/machinist_2.11-0.6.1.jar:/data1/hadoop/yarn/local/usercache/hdfs/appcache/application_1512485869804_4766/container_e58_1512485869804_4766_01_000002/__spark_libs__/jackson-dataformat-cbor-2.6.5.jar:/data1/hadoop/yarn/local/usercache/hdfs/appcache/application_1512485869804_4766/container_e58_1512485869804_4766_01_000002/__spark_libs__/macro-compat_2.11-1.1.1.jar:/data1/hadoop/yarn/local/usercache/hdfs/appcache/application_1512485869804_4766/container_e58_1512485869804_4766_01_000002/__spark_libs__/jackson-module-paranamer-2.6.5.jar:/data1/hadoop/yarn/local/usercache/hdfs/appcache/application_1512485869804_4766/container_e58_1512485869804_4766_01_000002/__spark_libs__/mail-1.4.7.jar:/data1/hadoop/yarn/local/usercache/hdfs/appcache/application_1512485869804_4766/container_e58_1512485869804_4766_01_000002/__spark_libs__/jackson-module-scala_2.11-2.6.5.jar:/data1/hadoop/yarn/local/usercache/hdfs/appcache/application_1512485869804_4766/container_e58_1512485869804_4766_01_000002/__spark_libs__/janino-3.0.0.jar:/data1/hadoop/yarn/local/usercache/hdfs/appcache/application_1512485869804_4766/container_e58_1512485869804_4766_01_000002/__spark_libs__/metrics-json-3.1.2.jar:/data1/hadoop/yarn/local/usercache/hdfs/appcache/application_1512485869804_4766/container_e58_1512485869804_4766_01_000002/__spark_libs__/javassist-3.18.1-GA.jar:/data1/hadoop/yarn/local/usercache/hdfs/appcache/application_1512485869804_4766/container_e58_1512485869804_4766_01_000002/__spark_libs__/metrics-jvm-3.1.2.jar:/data1/hadoop/yarn/local/usercache/hdfs/appcache/application_1512485869804_4766/container_e58_1512485869804_4766_01_000002/__spark_libs__/javax.annotation-api-1.2.jar:/data1/hadoop/yarn/local/usercache/hdfs/appcache/application_1512485869804_4766/container_e58_1512485869804_4766_01_000002/__spark_libs__/parquet-hadoop-1.8.2.jar:/data1/hadoop/yarn/local/usercache/hdfs/appcache/application_1512485869804_4766/container_e58_1512485869804_4766_01_000002/__spark_libs__/javax.inject-1.jar:/data1/hadoop/yarn/local/usercache/hdfs/appcache/application_1512485869804_4766/container_e58_1512485869804_4766_01_000002/__spark_libs__/minlog-1.3.0.jar:/data1/hadoop/yarn/local/usercache/hdfs/appcache/application_1512485869804_4766/container_e58_1512485869804_4766_01_000002/__spark_libs__/javax.inject-2.4.0-b34.jar:/data1/hadoop/yarn/local/usercache/hdfs/appcache/application_1512485869804_4766/container_e58_1512485869804_4766_01_000002/__spark_libs__/mx4j-3.0.2.jar:/data1/hadoop/yarn/local/usercache/hdfs/appcache/application_1512485869804_4766/container_e58_1512485869804_4766_01_000002/__spark_libs__/javax.servlet-api-3.1.0.jar:/data1/hadoop/yarn/local/usercache/hdfs/appcache/application_1512485869804_4766/container_e58_1512485869804_4766_01_000002/__spark_libs__/netty-3.9.9.Final.jar:/data1/hadoop/yarn/local/usercache/hdfs/appcache/application_1512485869804_4766/container_e58_1512485869804_4766_01_000002/__spark_libs__/javax.ws.rs-api-2.0.1.jar:/data1/hadoop/yarn/local/usercache/hdfs/appcache/application_1512485869804_4766/container_e58_1512485869804_4766_01_000002/__spark_libs__/parquet-jackson-1.8.2.jar:/data1/hadoop/yarn/loc
al/usercache/hdfs/appcache/application_1512485869804_4766/container_e58_1512485869804_4766_01_000002/__spark_libs__/javolution-5.5.1.jar:/data1/hadoop/yarn/local/usercache/hdfs/appcache/application_1512485869804_4766/container_e58_1512485869804_4766_01_000002/__spark_libs__/pmml-model-1.2.15.jar:/data1/hadoop/yarn/local/usercache/hdfs/appcache/application_1512485869804_4766/container_e58_1512485869804_4766_01_000002/__spark_libs__/jaxb-api-2.2.2.jar:/data1/hadoop/yarn/local/usercache/hdfs/appcache/application_1512485869804_4766/container_e58_1512485869804_4766_01_000002/__spark_libs__/netty-all-4.0.43.Final.jar:/data1/hadoop/yarn/local/usercache/hdfs/appcache/application_1512485869804_4766/container_e58_1512485869804_4766_01_000002/__spark_libs__/jcip-annotations-1.0.jar:/data1/hadoop/yarn/local/usercache/hdfs/appcache/application_1512485869804_4766/container_e58_1512485869804_4766_01_000002/__spark_libs__/nimbus-jose-jwt-3.9.jar:/data1/hadoop/yarn/local/usercache/hdfs/appcache/application_1512485869804_4766/container_e58_1512485869804_4766_01_000002/__spark_libs__/jcl-over-slf4j-1.7.16.jar:/data1/hadoop/yarn/local/usercache/hdfs/appcache/application_1512485869804_4766/container_e58_1512485869804_4766_01_000002/__spark_libs__/pmml-schema-1.2.15.jar:/data1/hadoop/yarn/local/usercache/hdfs/appcache/application_1512485869804_4766/container_e58_1512485869804_4766_01_000002/__spark_libs__/jdo-api-3.0.1.jar:/data1/hadoop/yarn/local/usercache/hdfs/appcache/application_1512485869804_4766/container_e58_1512485869804_4766_01_000002/__spark_libs__/objenesis-2.1.jar:/data1/hadoop/yarn/local/usercache/hdfs/appcache/application_1512485869804_4766/container_e58_1512485869804_4766_01_000002/__spark_libs__/jersey-client-2.22.2.jar:/data1/hadoop/yarn/local/usercache/hdfs/appcache/application_1512485869804_4766/container_e58_1512485869804_4766_01_000002/__spark_libs__/okhttp-2.4.0.jar:/data1/hadoop/yarn/local/usercache/hdfs/appcache/application_1512485869804_4766/container_e58_1512485869804_4766_01_000002/__spark_libs__/jersey-common-2.22.2.jar:/data1/hadoop/yarn/local/usercache/hdfs/appcache/application_1512485869804_4766/container_e58_1512485869804_4766_01_000002/__spark_libs__/metrics-core-3.1.2.jar:/data1/hadoop/yarn/local/usercache/hdfs/appcache/application_1512485869804_4766/container_e58_1512485869804_4766_01_000002/__spark_libs__/jersey-container-servlet-2.22.2.jar:/data1/hadoop/yarn/local/usercache/hdfs/appcache/application_1512485869804_4766/container_e58_1512485869804_4766_01_000002/__spark_libs__/libfb303-0.9.3.jar:/data1/hadoop/yarn/local/usercache/hdfs/appcache/application_1512485869804_4766/container_e58_1512485869804_4766_01_000002/__spark_libs__/spire_2.11-0.13.0.jar:/data1/hadoop/yarn/local/usercache/hdfs/appcache/application_1512485869804_4766/container_e58_1512485869804_4766_01_000002/__spark_libs__/jersey-container-servlet-core-2.22.2.jar:/data1/hadoop/yarn/local/usercache/hdfs/appcache/application_1512485869804_4766/container_e58_1512485869804_4766_01_000002/__spark_libs__/okio-1.4.0.jar:/data1/hadoop/yarn/local/usercache/hdfs/appcache/application_1512485869804_4766/container_e58_1512485869804_4766_01_000002/__spark_libs__/jersey-guava-2.22.2.jar:/data1/hadoop/yarn/local/usercache/hdfs/appcache/application_1512485869804_4766/container_e58_1512485869804_4766_01_000002/__spark_libs__/opencsv-2.3.jar:/data1/hadoop/yarn/local/usercache/hdfs/appcache/application_1512485869804_4766/container_e58_1512485869804_4766_01_000002/__spark_libs__/jersey-media-jaxb-2.22.2.jar:/data1/hadoop/yarn/local/u
sercache/hdfs/appcache/application_1512485869804_4766/container_e58_1512485869804_4766_01_000002/__spark_libs__/orc-core-1.4.1-nohive.jar:/data1/hadoop/yarn/local/usercache/hdfs/appcache/application_1512485869804_4766/container_e58_1512485869804_4766_01_000002/__spark_libs__/jersey-server-2.22.2.jar:/data1/hadoop/yarn/local/usercache/hdfs/appcache/application_1512485869804_4766/container_e58_1512485869804_4766_01_000002/__spark_libs__/protobuf-java-2.5.0.jar:/data1/hadoop/yarn/local/usercache/hdfs/appcache/application_1512485869804_4766/container_e58_1512485869804_4766_01_000002/__spark_libs__/jets3t-0.9.3.jar:/data1/hadoop/yarn/local/usercache/hdfs/appcache/application_1512485869804_4766/container_e58_1512485869804_4766_01_000002/__spark_libs__/pyrolite-4.13.jar:/data1/hadoop/yarn/local/usercache/hdfs/appcache/application_1512485869804_4766/container_e58_1512485869804_4766_01_000002/__spark_libs__/jetty-6.1.26.hwx.jar:/data1/hadoop/yarn/local/usercache/hdfs/appcache/application_1512485869804_4766/container_e58_1512485869804_4766_01_000002/__spark_libs__/metrics-graphite-3.1.2.jar:/data1/hadoop/yarn/local/usercache/hdfs/appcache/application_1512485869804_4766/container_e58_1512485869804_4766_01_000002/__spark_libs__/jetty-sslengine-6.1.26.hwx.jar:/data1/hadoop/yarn/local/usercache/hdfs/appcache/application_1512485869804_4766/container_e58_1512485869804_4766_01_000002/__spark_libs__/orc-mapreduce-1.4.1-nohive.jar:/data1/hadoop/yarn/local/usercache/hdfs/appcache/application_1512485869804_4766/container_e58_1512485869804_4766_01_000002/__spark_libs__/jetty-util-6.1.26.hwx.jar:/data1/hadoop/yarn/local/usercache/hdfs/appcache/application_1512485869804_4766/container_e58_1512485869804_4766_01_000002/__spark_libs__/py4j-0.10.4.jar:/data1/hadoop/yarn/local/usercache/hdfs/appcache/application_1512485869804_4766/container_e58_1512485869804_4766_01_000002/__spark_libs__/jline-2.12.1.jar:/data1/hadoop/yarn/local/usercache/hdfs/appcache/application_1512485869804_4766/container_e58_1512485869804_4766_01_000002/__spark_libs__/scala-library-2.11.8.jar:/data1/hadoop/yarn/local/usercache/hdfs/appcache/application_1512485869804_4766/container_e58_1512485869804_4766_01_000002/__spark_libs__/joda-time-2.9.3.jar:/data1/hadoop/yarn/local/usercache/hdfs/appcache/application_1512485869804_4766/container_e58_1512485869804_4766_01_000002/__spark_libs__/scala-compiler-2.11.8.jar:/data1/hadoop/yarn/local/usercache/hdfs/appcache/application_1512485869804_4766/container_e58_1512485869804_4766_01_000002/__spark_libs__/jodd-core-3.5.2.jar:/data1/hadoop/yarn/local/usercache/hdfs/appcache/application_1512485869804_4766/container_e58_1512485869804_4766_01_000002/__spark_libs__/scala-reflect-2.11.8.jar:/data1/hadoop/yarn/local/usercache/hdfs/appcache/application_1512485869804_4766/container_e58_1512485869804_4766_01_000002/__spark_libs__/json-smart-1.1.1.jar:/data1/hadoop/yarn/local/usercache/hdfs/appcache/application_1512485869804_4766/container_e58_1512485869804_4766_01_000002/__spark_libs__/libthrift-0.9.3.jar:/data1/hadoop/yarn/local/usercache/hdfs/appcache/application_1512485869804_4766/container_e58_1512485869804_4766_01_000002/__spark_libs__/stax-api-1.0-2.jar:/data1/hadoop/yarn/local/usercache/hdfs/appcache/application_1512485869804_4766/container_e58_1512485869804_4766_01_000002/__spark_libs__/osgi-resource-locator-1.0.1.jar:/data1/hadoop/yarn/local/usercache/hdfs/appcache/application_1512485869804_4766/container_e58_1512485869804_4766_01_000002/__spark_libs__/parquet-hadoop-bundle-1.6.0.jar:/data1/hadoop/yarn/local/u
sercache/hdfs/appcache/application_1512485869804_4766/container_e58_1512485869804_4766_01_000002/__spark_libs__/scala-parser-combinators_2.11-1.0.4.jar:/data1/hadoop/yarn/local/usercache/hdfs/appcache/application_1512485869804_4766/container_e58_1512485869804_4766_01_000002/__spark_libs__/scala-xml_2.11-1.0.2.jar:/data1/hadoop/yarn/local/usercache/hdfs/appcache/application_1512485869804_4766/container_e58_1512485869804_4766_01_000002/__spark_libs__/shapeless_2.11-2.3.2.jar:/data1/hadoop/yarn/local/usercache/hdfs/appcache/application_1512485869804_4766/container_e58_1512485869804_4766_01_000002/__spark_libs__/slf4j-log4j12-1.7.16.jar:/data1/hadoop/yarn/local/usercache/hdfs/appcache/application_1512485869804_4766/container_e58_1512485869804_4766_01_000002/__spark_libs__/snappy-java-1.1.2.6.jar:/data1/hadoop/yarn/local/usercache/hdfs/appcache/application_1512485869804_4766/container_e58_1512485869804_4766_01_000002/__spark_libs__/spark-catalyst_2.11-2.2.0.2.6.3.0-235.jar:/data1/hadoop/yarn/local/usercache/hdfs/appcache/application_1512485869804_4766/container_e58_1512485869804_4766_01_000002/__spark_libs__/spark-cloud_2.11-2.2.0.2.6.3.0-235.jar:/data1/hadoop/yarn/local/usercache/hdfs/appcache/application_1512485869804_4766/container_e58_1512485869804_4766_01_000002/__spark_libs__/spark-core_2.11-2.2.0.2.6.3.0-235.jar:/data1/hadoop/yarn/local/usercache/hdfs/appcache/application_1512485869804_4766/container_e58_1512485869804_4766_01_000002/__spark_libs__/spark-graphx_2.11-2.2.0.2.6.3.0-235.jar:/data1/hadoop/yarn/local/usercache/hdfs/appcache/application_1512485869804_4766/container_e58_1512485869804_4766_01_000002/__spark_libs__/spark-hive-thriftserver_2.11-2.2.0.2.6.3.0-235.jar:/data1/hadoop/yarn/local/usercache/hdfs/appcache/application_1512485869804_4766/container_e58_1512485869804_4766_01_000002/__spark_libs__/spark-hive_2.11-2.2.0.2.6.3.0-235.jar:/data1/hadoop/yarn/local/usercache/hdfs/appcache/application_1512485869804_4766/container_e58_1512485869804_4766_01_000002/__spark_libs__/spark-launcher_2.11-2.2.0.2.6.3.0-235.jar:/data1/hadoop/yarn/local/usercache/hdfs/appcache/application_1512485869804_4766/container_e58_1512485869804_4766_01_000002/__spark_libs__/spark-mllib-local_2.11-2.2.0.2.6.3.0-235.jar:/data1/hadoop/yarn/local/usercache/hdfs/appcache/application_1512485869804_4766/container_e58_1512485869804_4766_01_000002/__spark_libs__/spark-mllib_2.11-2.2.0.2.6.3.0-235.jar:/data1/hadoop/yarn/local/usercache/hdfs/appcache/application_1512485869804_4766/container_e58_1512485869804_4766_01_000002/__spark_libs__/spark-network-common_2.11-2.2.0.2.6.3.0-235.jar:/data1/hadoop/yarn/local/usercache/hdfs/appcache/application_1512485869804_4766/container_e58_1512485869804_4766_01_000002/__spark_libs__/spark-network-shuffle_2.11-2.2.0.2.6.3.0-235.jar:/data1/hadoop/yarn/local/usercache/hdfs/appcache/application_1512485869804_4766/container_e58_1512485869804_4766_01_000002/__spark_libs__/spark-repl_2.11-2.2.0.2.6.3.0-235.jar:/data1/hadoop/yarn/local/usercache/hdfs/appcache/application_1512485869804_4766/container_e58_1512485869804_4766_01_000002/__spark_libs__/spark-sketch_2.11-2.2.0.2.6.3.0-235.jar:/data1/hadoop/yarn/local/usercache/hdfs/appcache/application_1512485869804_4766/container_e58_1512485869804_4766_01_000002/__spark_libs__/spark-sql_2.11-2.2.0.2.6.3.0-235.jar:/data1/hadoop/yarn/local/usercache/hdfs/appcache/application_1512485869804_4766/container_e58_1512485869804_4766_01_000002/__spark_libs__/spark-streaming_2.11-2.2.0.2.6.3.0-235.jar:/data1/hadoop/yarn/local/usercache/hdfs/appcache/appl
ication_1512485869804_4766/container_e58_1512485869804_4766_01_000002/__spark_libs__/spark-tags_2.11-2.2.0.2.6.3.0-235.jar:/data1/hadoop/yarn/local/usercache/hdfs/appcache/application_1512485869804_4766/container_e58_1512485869804_4766_01_000002/__spark_libs__/spark-unsafe_2.11-2.2.0.2.6.3.0-235.jar:/data1/hadoop/yarn/local/usercache/hdfs/appcache/application_1512485869804_4766/container_e58_1512485869804_4766_01_000002/__spark_libs__/spark-yarn_2.11-2.2.0.2.6.3.0-235.jar:/data1/hadoop/yarn/local/usercache/hdfs/appcache/application_1512485869804_4766/container_e58_1512485869804_4766_01_000002/__spark_libs__/spire-macros_2.11-0.13.0.jar:/data1/hadoop/yarn/local/usercache/hdfs/appcache/application_1512485869804_4766/container_e58_1512485869804_4766_01_000002/__spark_libs__/stax-api-1.0.1.jar:/data1/hadoop/yarn/local/usercache/hdfs/appcache/application_1512485869804_4766/container_e58_1512485869804_4766_01_000002/__spark_libs__/stream-2.7.0.jar:/data1/hadoop/yarn/local/usercache/hdfs/appcache/application_1512485869804_4766/container_e58_1512485869804_4766_01_000002/__spark_libs__/stringtemplate-3.2.1.jar:/data1/hadoop/yarn/local/usercache/hdfs/appcache/application_1512485869804_4766/container_e58_1512485869804_4766_01_000002/__spark_libs__/super-csv-2.2.0.jar:/data1/hadoop/yarn/local/usercache/hdfs/appcache/application_1512485869804_4766/container_e58_1512485869804_4766_01_000002/__spark_libs__/univocity-parsers-2.2.1.jar:/data1/hadoop/yarn/local/usercache/hdfs/appcache/application_1512485869804_4766/container_e58_1512485869804_4766_01_000002/__spark_libs__/validation-api-1.1.0.Final.jar:/data1/hadoop/yarn/local/usercache/hdfs/appcache/application_1512485869804_4766/container_e58_1512485869804_4766_01_000002/__spark_libs__/xbean-asm5-shaded-4.4.jar:/data1/hadoop/yarn/local/usercache/hdfs/appcache/application_1512485869804_4766/container_e58_1512485869804_4766_01_000002/__spark_libs__/xercesImpl-2.9.1.jar:/data1/hadoop/yarn/local/usercache/hdfs/appcache/application_1512485869804_4766/container_e58_1512485869804_4766_01_000002/__spark_libs__/xmlenc-0.52.jar:/data1/hadoop/yarn/local/usercache/hdfs/appcache/application_1512485869804_4766/container_e58_1512485869804_4766_01_000002/__spark_libs__/zookeeper-3.4.6.2.6.3.0-235.jar:/etc/hadoop/conf:/usr/hdp/2.6.3.0-235/hadoop/azure-data-lake-store-sdk-2.1.4.jar:/usr/hdp/2.6.3.0-235/hadoop/hadoop-annotations-2.7.3.2.6.3.0-235.jar:/usr/hdp/2.6.3.0-235/hadoop/hadoop-annotations.jar:/usr/hdp/2.6.3.0-235/hadoop/hadoop-auth-2.7.3.2.6.3.0-235.jar:/usr/hdp/2.6.3.0-235/hadoop/hadoop-auth.jar:/usr/hdp/2.6.3.0-235/hadoop/hadoop-aws-2.7.3.2.6.3.0-235.jar:/usr/hdp/2.6.3.0-235/hadoop/hadoop-aws.jar:/usr/hdp/2.6.3.0-235/hadoop/hadoop-azure-2.7.3.2.6.3.0-235.jar:/usr/hdp/2.6.3.0-235/hadoop/hadoop-azure-datalake-2.7.3.2.6.3.0-235.jar:/usr/hdp/2.6.3.0-235/hadoop/hadoop-azure-datalake.jar:/usr/hdp/2.6.3.0-235/hadoop/hadoop-azure.jar:/usr/hdp/2.6.3.0-235/hadoop/hadoop-common-2.7.3.2.6.3.0-235-tests.jar:/usr/hdp/2.6.3.0-235/hadoop/hadoop-common-2.7.3.2.6.3.0-235.jar:/usr/hdp/2.6.3.0-235/hadoop/hadoop-common-tests.jar:/usr/hdp/2.6.3.0-235/hadoop/hadoop-common.jar:/usr/hdp/2.6.3.0-235/hadoop/hadoop-nfs-2.7.3.2.6.3.0-235.jar:/usr/hdp/2.6.3.0-235/hadoop/hadoop-nfs.jar:/usr/hdp/2.6.3.0-235/hadoop/lib/ojdbc6.jar:/usr/hdp/2.6.3.0-235/hadoop/lib/jackson-core-2.2.3.jar:/usr/hdp/2.6.3.0-235/hadoop/lib/ranger-hdfs-plugin-shim-0.7.0.2.6.3.0-235.jar:/usr/hdp/2.6.3.0-235/hadoop/lib/jackson-core-asl-1.9.13.jar:/usr/hdp/2.6.3.0-235/hadoop/lib/ranger-plugin-classloader-0.7.0.2.6.3.0-235.jar
:/usr/hdp/2.6.3.0-235/hadoop/lib/jackson-databind-2.2.3.jar:/usr/hdp/2.6.3.0-235/hadoop/lib/ranger-yarn-plugin-shim-0.7.0.2.6.3.0-235.jar:/usr/hdp/2.6.3.0-235/hadoop/lib/stax-api-1.0-2.jar:/usr/hdp/2.6.3.0-235/hadoop/lib/activation-1.1.jar:/usr/hdp/2.6.3.0-235/hadoop/lib/jcip-annotations-1.0.jar:/usr/hdp/2.6.3.0-235/hadoop/lib/apacheds-i18n-2.0.0-M15.jar:/usr/hdp/2.6.3.0-235/hadoop/lib/jackson-jaxrs-1.9.13.jar:/usr/hdp/2.6.3.0-235/hadoop/lib/apacheds-kerberos-codec-2.0.0-M15.jar:/usr/hdp/2.6.3.0-235/hadoop/lib/jersey-core-1.9.jar:/usr/hdp/2.6.3.0-235/hadoop/lib/api-asn1-api-1.0.0-M20.jar:/usr/hdp/2.6.3.0-235/hadoop/lib/jersey-json-1.9.jar:/usr/hdp/2.6.3.0-235/hadoop/lib/api-util-1.0.0-M20.jar:/usr/hdp/2.6.3.0-235/hadoop/lib/asm-3.2.jar:/usr/hdp/2.6.3.0-235/hadoop/lib/xmlenc-0.52.jar:/usr/hdp/2.6.3.0-235/hadoop/lib/avro-1.7.4.jar:/usr/hdp/2.6.3.0-235/hadoop/lib/jersey-server-1.9.jar:/usr/hdp/2.6.3.0-235/hadoop/lib/aws-java-sdk-core-1.10.6.jar:/usr/hdp/2.6.3.0-235/hadoop/lib/jets3t-0.9.0.jar:/usr/hdp/2.6.3.0-235/hadoop/lib/aws-java-sdk-kms-1.10.6.jar:/usr/hdp/2.6.3.0-235/hadoop/lib/jsp-api-2.1.jar:/usr/hdp/2.6.3.0-235/hadoop/lib/aws-java-sdk-s3-1.10.6.jar:/usr/hdp/2.6.3.0-235/hadoop/lib/jackson-mapper-asl-1.9.13.jar:/usr/hdp/2.6.3.0-235/hadoop/lib/azure-keyvault-core-0.8.0.jar:/usr/hdp/2.6.3.0-235/hadoop/lib/jsr305-3.0.0.jar:/usr/hdp/2.6.3.0-235/hadoop/lib/azure-storage-5.4.0.jar:/usr/hdp/2.6.3.0-235/hadoop/lib/junit-4.11.jar:/usr/hdp/2.6.3.0-235/hadoop/lib/commons-beanutils-1.7.0.jar:/usr/hdp/2.6.3.0-235/hadoop/lib/jackson-xc-1.9.13.jar:/usr/hdp/2.6.3.0-235/hadoop/lib/commons-beanutils-core-1.8.0.jar:/usr/hdp/2.6.3.0-235/hadoop/lib/xz-1.0.jar:/usr/hdp/2.6.3.0-235/hadoop/lib/commons-cli-1.2.jar:/usr/hdp/2.6.3.0-235/hadoop/lib/log4j-1.2.17.jar:/usr/hdp/2.6.3.0-235/hadoop/lib/commons-codec-1.4.jar:/usr/hdp/2.6.3.0-235/hadoop/lib/java-xmlbuilder-0.4.jar:/usr/hdp/2.6.3.0-235/hadoop/lib/commons-collections-3.2.2.jar:/usr/hdp/2.6.3.0-235/hadoop/lib/mockito-all-1.8.5.jar:/usr/hdp/2.6.3.0-235/hadoop/lib/commons-compress-1.4.1.jar:/usr/hdp/2.6.3.0-235/hadoop/lib/commons-configuration-1.6.jar:/usr/hdp/2.6.3.0-235/hadoop/lib/nimbus-jose-jwt-3.9.jar:/usr/hdp/2.6.3.0-235/hadoop/lib/commons-digester-1.8.jar:/usr/hdp/2.6.3.0-235/hadoop/lib/commons-io-2.4.jar:/usr/hdp/2.6.3.0-235/hadoop/lib/commons-lang-2.6.jar:/usr/hdp/2.6.3.0-235/hadoop/lib/netty-3.6.2.Final.jar:/usr/hdp/2.6.3.0-235/hadoop/lib/commons-lang3-3.4.jar:/usr/hdp/2.6.3.0-235/hadoop/lib/paranamer-2.3.jar:/usr/hdp/2.6.3.0-235/hadoop/lib/commons-logging-1.1.3.jar:/usr/hdp/2.6.3.0-235/hadoop/lib/protobuf-java-2.5.0.jar:/usr/hdp/2.6.3.0-235/hadoop/lib/commons-math3-3.1.1.jar:/usr/hdp/2.6.3.0-235/hadoop/lib/commons-net-3.1.jar:/usr/hdp/2.6.3.0-235/hadoop/lib/servlet-api-2.5.jar:/usr/hdp/2.6.3.0-235/hadoop/lib/curator-client-2.7.1.jar:/usr/hdp/2.6.3.0-235/hadoop/lib/slf4j-api-1.7.10.jar:/usr/hdp/2.6.3.0-235/hadoop/lib/curator-framework-2.7.1.jar:/usr/hdp/2.6.3.0-235/hadoop/lib/snappy-java-1.0.4.1.jar:/usr/hdp/2.6.3.0-235/hadoop/lib/curator-recipes-2.7.1.jar:/usr/hdp/2.6.3.0-235/hadoop/lib/gson-2.2.4.jar:/usr/hdp/2.6.3.0-235/hadoop/lib/guava-11.0.2.jar:/usr/hdp/2.6.3.0-235/hadoop/lib/slf4j-log4j12-1.7.10.jar:/usr/hdp/2.6.3.0-235/hadoop/lib/hamcrest-core-1.3.jar:/usr/hdp/2.6.3.0-235/hadoop/lib/jaxb-api-2.2.2.jar:/usr/hdp/2.6.3.0-235/hadoop/lib/htrace-core-3.1.0-incubating.jar:/usr/hdp/2.6.3.0-235/hadoop/lib/httpclient-4.5.2.jar:/usr/hdp/2.6.3.0-235/hadoop/lib/httpcore-4.4.4.jar:/usr/hdp/2.6.3.0-235/hadoop/lib/jaxb-impl-2.2.3-1.jar:/usr/hdp/2.6.3.0-235/ha
doop/lib/jackson-annotations-2.2.3.jar:/usr/hdp/2.6.3.0-235/hadoop/lib/jettison-1.1.jar:/usr/hdp/2.6.3.0-235/hadoop/lib/jetty-6.1.26.hwx.jar:/usr/hdp/2.6.3.0-235/hadoop/lib/json-smart-1.1.1.jar:/usr/hdp/2.6.3.0-235/hadoop/lib/jetty-sslengine-6.1.26.hwx.jar:/usr/hdp/2.6.3.0-235/hadoop/lib/zookeeper-3.4.6.2.6.3.0-235.jar:/usr/hdp/2.6.3.0-235/hadoop/lib/jetty-util-6.1.26.hwx.jar:/usr/hdp/2.6.3.0-235/hadoop/lib/joda-time-2.9.4.jar:/usr/hdp/2.6.3.0-235/hadoop/lib/jsch-0.1.54.jar:/usr/hdp/current/hadoop-hdfs-client/hadoop-hdfs-2.7.3.2.6.3.0-235-tests.jar:/usr/hdp/current/hadoop-hdfs-client/hadoop-hdfs-2.7.3.2.6.3.0-235.jar:/usr/hdp/current/hadoop-hdfs-client/hadoop-hdfs-nfs-2.7.3.2.6.3.0-235.jar:/usr/hdp/current/hadoop-hdfs-client/hadoop-hdfs-nfs.jar:/usr/hdp/current/hadoop-hdfs-client/hadoop-hdfs-tests.jar:/usr/hdp/current/hadoop-hdfs-client/hadoop-hdfs.jar:/usr/hdp/current/hadoop-hdfs-client/lib/asm-3.2.jar:/usr/hdp/current/hadoop-hdfs-client/lib/commons-cli-1.2.jar:/usr/hdp/current/hadoop-hdfs-client/lib/commons-codec-1.4.jar:/usr/hdp/current/hadoop-hdfs-client/lib/commons-daemon-1.0.13.jar:/usr/hdp/current/hadoop-hdfs-client/lib/commons-io-2.4.jar:/usr/hdp/current/hadoop-hdfs-client/lib/commons-lang-2.6.jar:/usr/hdp/current/hadoop-hdfs-client/lib/commons-logging-1.1.3.jar:/usr/hdp/current/hadoop-hdfs-client/lib/guava-11.0.2.jar:/usr/hdp/current/hadoop-hdfs-client/lib/htrace-core-3.1.0-incubating.jar:/usr/hdp/current/hadoop-hdfs-client/lib/jackson-annotations-2.2.3.jar:/usr/hdp/current/hadoop-hdfs-client/lib/jackson-core-2.2.3.jar:/usr/hdp/current/hadoop-hdfs-client/lib/jackson-core-asl-1.9.13.jar:/usr/hdp/current/hadoop-hdfs-client/lib/jackson-databind-2.2.3.jar:/usr/hdp/current/hadoop-hdfs-client/lib/jackson-mapper-asl-1.9.13.jar:/usr/hdp/current/hadoop-hdfs-client/lib/jersey-core-1.9.jar:/usr/hdp/current/hadoop-hdfs-client/lib/jersey-server-1.9.jar:/usr/hdp/current/hadoop-hdfs-client/lib/jetty-6.1.26.hwx.jar:/usr/hdp/current/hadoop-hdfs-client/lib/jetty-util-6.1.26.hwx.jar:/usr/hdp/current/hadoop-hdfs-client/lib/jsr305-3.0.0.jar:/usr/hdp/current/hadoop-hdfs-client/lib/leveldbjni-all-1.8.jar:/usr/hdp/current/hadoop-hdfs-client/lib/log4j-1.2.17.jar:/usr/hdp/current/hadoop-hdfs-client/lib/netty-3.6.2.Final.jar:/usr/hdp/current/hadoop-hdfs-client/lib/netty-all-4.0.52.Final.jar:/usr/hdp/current/hadoop-hdfs-client/lib/okhttp-2.4.0.jar:/usr/hdp/current/hadoop-hdfs-client/lib/okio-1.4.0.jar:/usr/hdp/current/hadoop-hdfs-client/lib/protobuf-java-2.5.0.jar:/usr/hdp/current/hadoop-hdfs-client/lib/servlet-api-2.5.jar:/usr/hdp/current/hadoop-hdfs-client/lib/xercesImpl-2.9.1.jar:/usr/hdp/current/hadoop-hdfs-client/lib/xml-apis-1.3.04.jar:/usr/hdp/current/hadoop-hdfs-client/lib/xmlenc-0.52.jar:/usr/hdp/current/hadoop-yarn-client/hadoop-yarn-api-2.7.3.2.6.3.0-235.jar:/usr/hdp/current/hadoop-yarn-client/hadoop-yarn-api.jar:/usr/hdp/current/hadoop-yarn-client/hadoop-yarn-applications-distributedshell-2.7.3.2.6.3.0-235.jar:/usr/hdp/current/hadoop-yarn-client/hadoop-yarn-applications-distributedshell.jar:/usr/hdp/current/hadoop-yarn-client/hadoop-yarn-applications-unmanaged-am-launcher-2.7.3.2.6.3.0-235.jar:/usr/hdp/current/hadoop-yarn-client/hadoop-yarn-applications-unmanaged-am-launcher.jar:/usr/hdp/current/hadoop-yarn-client/hadoop-yarn-client-2.7.3.2.6.3.0-235.jar:/usr/hdp/current/hadoop-yarn-client/hadoop-yarn-client.jar:/usr/hdp/current/hadoop-yarn-client/hadoop-yarn-common-2.7.3.2.6.3.0-235.jar:/usr/hdp/current/hadoop-yarn-client/hadoop-yarn-common.jar:/usr/hdp/current/hadoop-yarn-client/hadoop-yarn-regi
stry-2.7.3.2.6.3.0-235.jar:/usr/hdp/current/hadoop-yarn-client/hadoop-yarn-registry.jar:/usr/hdp/current/hadoop-yarn-client/hadoop-yarn-server-applicationhistoryservice-2.7.3.2.6.3.0-235.jar:/usr/hdp/current/hadoop-yarn-client/hadoop-yarn-server-applicationhistoryservice.jar:/usr/hdp/current/hadoop-yarn-client/hadoop-yarn-server-common-2.7.3.2.6.3.0-235.jar:/usr/hdp/current/hadoop-yarn-client/hadoop-yarn-server-common.jar:/usr/hdp/current/hadoop-yarn-client/hadoop-yarn-server-nodemanager-2.7.3.2.6.3.0-235.jar:/usr/hdp/current/hadoop-yarn-client/hadoop-yarn-server-nodemanager.jar:/usr/hdp/current/hadoop-yarn-client/hadoop-yarn-server-resourcemanager-2.7.3.2.6.3.0-235.jar:/usr/hdp/current/hadoop-yarn-client/hadoop-yarn-server-resourcemanager.jar:/usr/hdp/current/hadoop-yarn-client/hadoop-yarn-server-sharedcachemanager-2.7.3.2.6.3.0-235.jar:/usr/hdp/current/hadoop-yarn-client/hadoop-yarn-server-sharedcachemanager.jar:/usr/hdp/current/hadoop-yarn-client/hadoop-yarn-server-tests-2.7.3.2.6.3.0-235.jar:/usr/hdp/current/hadoop-yarn-client/hadoop-yarn-server-tests.jar:/usr/hdp/current/hadoop-yarn-client/hadoop-yarn-server-timeline-pluginstorage-2.7.3.2.6.3.0-235.jar:/usr/hdp/current/hadoop-yarn-client/hadoop-yarn-server-timeline-pluginstorage.jar:/usr/hdp/current/hadoop-yarn-client/hadoop-yarn-server-web-proxy-2.7.3.2.6.3.0-235.jar:/usr/hdp/current/hadoop-yarn-client/hadoop-yarn-server-web-proxy.jar:/usr/hdp/current/hadoop-yarn-client/lib/activation-1.1.jar:/usr/hdp/current/hadoop-yarn-client/lib/aopalliance-1.0.jar:/usr/hdp/current/hadoop-yarn-client/lib/jersey-guice-1.9.jar:/usr/hdp/current/hadoop-yarn-client/lib/apacheds-i18n-2.0.0-M15.jar:/usr/hdp/current/hadoop-yarn-client/lib/javassist-3.18.1-GA.jar:/usr/hdp/current/hadoop-yarn-client/lib/apacheds-kerberos-codec-2.0.0-M15.jar:/usr/hdp/current/hadoop-yarn-client/lib/jersey-json-1.9.jar:/usr/hdp/current/hadoop-yarn-client/lib/api-asn1-api-1.0.0-M20.jar:/usr/hdp/current/hadoop-yarn-client/lib/jersey-server-1.9.jar:/usr/hdp/current/hadoop-yarn-client/lib/api-util-1.0.0-M20.jar:/usr/hdp/current/hadoop-yarn-client/lib/asm-3.2.jar:/usr/hdp/current/hadoop-yarn-client/lib/avro-1.7.4.jar:/usr/hdp/current/hadoop-yarn-client/lib/javax.inject-1.jar:/usr/hdp/current/hadoop-yarn-client/lib/azure-keyvault-core-0.8.0.jar:/usr/hdp/current/hadoop-yarn-client/lib/jets3t-0.9.0.jar:/usr/hdp/current/hadoop-yarn-client/lib/azure-storage-5.4.0.jar:/usr/hdp/current/hadoop-yarn-client/lib/log4j-1.2.17.jar:/usr/hdp/current/hadoop-yarn-client/lib/commons-beanutils-1.7.0.jar:/usr/hdp/current/hadoop-yarn-client/lib/jaxb-api-2.2.2.jar:/usr/hdp/current/hadoop-yarn-client/lib/commons-beanutils-core-1.8.0.jar:/usr/hdp/current/hadoop-yarn-client/lib/commons-cli-1.2.jar:/usr/hdp/current/hadoop-yarn-client/lib/leveldbjni-all-1.8.jar:/usr/hdp/current/hadoop-yarn-client/lib/commons-codec-1.4.jar:/usr/hdp/current/hadoop-yarn-client/lib/jaxb-impl-2.2.3-1.jar:/usr/hdp/current/hadoop-yarn-client/lib/commons-collections-3.2.2.jar:/usr/hdp/current/hadoop-yarn-client/lib/metrics-core-3.0.1.jar:/usr/hdp/current/hadoop-yarn-client/lib/commons-compress-1.4.1.jar:/usr/hdp/current/hadoop-yarn-client/lib/json-smart-1.1.1.jar:/usr/hdp/current/hadoop-yarn-client/lib/commons-configuration-1.6.jar:/usr/hdp/current/hadoop-yarn-client/lib/netty-3.6.2.Final.jar:/usr/hdp/current/hadoop-yarn-client/lib/commons-digester-1.8.jar:/usr/hdp/current/hadoop-yarn-client/lib/commons-io-2.4.jar:/usr/hdp/current/hadoop-yarn-client/lib/commons-lang-2.6.jar:/usr/hdp/current/hadoop-yarn-client/lib/nimbus-jose-jwt-3.9.
jar:/usr/hdp/current/hadoop-yarn-client/lib/commons-lang3-3.4.jar:/usr/hdp/current/hadoop-yarn-client/lib/objenesis-2.1.jar:/usr/hdp/current/hadoop-yarn-client/lib/commons-logging-1.1.3.jar:/usr/hdp/current/hadoop-yarn-client/lib/paranamer-2.3.jar:/usr/hdp/current/hadoop-yarn-client/lib/commons-math3-3.1.1.jar:/usr/hdp/current/hadoop-yarn-client/lib/commons-net-3.1.jar:/usr/hdp/current/hadoop-yarn-client/lib/protobuf-java-2.5.0.jar:/usr/hdp/current/hadoop-yarn-client/lib/curator-client-2.7.1.jar:/usr/hdp/current/hadoop-yarn-client/lib/servlet-api-2.5.jar:/usr/hdp/current/hadoop-yarn-client/lib/curator-framework-2.7.1.jar:/usr/hdp/current/hadoop-yarn-client/lib/snappy-java-1.0.4.1.jar:/usr/hdp/current/hadoop-yarn-client/lib/curator-recipes-2.7.1.jar:/usr/hdp/current/hadoop-yarn-client/lib/fst-2.24.jar:/usr/hdp/current/hadoop-yarn-client/lib/gson-2.2.4.jar:/usr/hdp/current/hadoop-yarn-client/lib/guava-11.0.2.jar:/usr/hdp/current/hadoop-yarn-client/lib/guice-3.0.jar:/usr/hdp/current/hadoop-yarn-client/lib/stax-api-1.0-2.jar:/usr/hdp/current/hadoop-yarn-client/lib/guice-servlet-3.0.jar:/usr/hdp/current/hadoop-yarn-client/lib/jcip-annotations-1.0.jar:/usr/hdp/current/hadoop-yarn-client/lib/htrace-core-3.1.0-incubating.jar:/usr/hdp/current/hadoop-yarn-client/lib/httpclient-4.5.2.jar:/usr/hdp/current/hadoop-yarn-client/lib/httpcore-4.4.4.jar:/usr/hdp/current/hadoop-yarn-client/lib/jersey-client-1.9.jar:/usr/hdp/current/hadoop-yarn-client/lib/jackson-annotations-2.2.3.jar:/usr/hdp/current/hadoop-yarn-client/lib/xmlenc-0.52.jar:/usr/hdp/current/hadoop-yarn-client/lib/jackson-core-2.2.3.jar:/usr/hdp/current/hadoop-yarn-client/lib/xz-1.0.jar:/usr/hdp/current/hadoop-yarn-client/lib/jackson-core-asl-1.9.13.jar:/usr/hdp/current/hadoop-yarn-client/lib/zookeeper-3.4.6.2.6.3.0-235.jar:/usr/hdp/current/hadoop-yarn-client/lib/jackson-databind-2.2.3.jar:/usr/hdp/current/hadoop-yarn-client/lib/zookeeper-3.4.6.2.6.3.0-235-tests.jar:/usr/hdp/current/hadoop-yarn-client/lib/jackson-jaxrs-1.9.13.jar:/usr/hdp/current/hadoop-yarn-client/lib/jersey-core-1.9.jar:/usr/hdp/current/hadoop-yarn-client/lib/jackson-mapper-asl-1.9.13.jar:/usr/hdp/current/hadoop-yarn-client/lib/jackson-xc-1.9.13.jar:/usr/hdp/current/hadoop-yarn-client/lib/java-xmlbuilder-0.4.jar:/usr/hdp/current/hadoop-yarn-client/lib/jettison-1.1.jar:/usr/hdp/current/hadoop-yarn-client/lib/jetty-6.1.26.hwx.jar:/usr/hdp/current/hadoop-yarn-client/lib/jsp-api-2.1.jar:/usr/hdp/current/hadoop-yarn-client/lib/jetty-sslengine-6.1.26.hwx.jar:/usr/hdp/current/hadoop-yarn-client/lib/jetty-util-6.1.26.hwx.jar:/usr/hdp/current/hadoop-yarn-client/lib/jsch-0.1.54.jar:/usr/hdp/current/hadoop-yarn-client/lib/jsr305-3.0.0.jar:/data1/hadoop/yarn/local/usercache/hdfs/appcache/application_1512485869804_4766/container_e58_1512485869804_4766_01_000002/mr-framework/hadoop/share/hadoop/mapreduce/*:/data1/hadoop/yarn/local/usercache/hdfs/appcache/application_1512485869804_4766/container_e58_1512485869804_4766_01_000002/mr-framework/hadoop/share/hadoop/mapreduce/lib/*:/data1/hadoop/yarn/local/usercache/hdfs/appcache/application_1512485869804_4766/container_e58_1512485869804_4766_01_000002/mr-framework/hadoop/share/hadoop/common/*:/data1/hadoop/yarn/local/usercache/hdfs/appcache/application_1512485869804_4766/container_e58_1512485869804_4766_01_000002/mr-framework/hadoop/share/hadoop/common/lib/*:/data1/hadoop/yarn/local/usercache/hdfs/appcache/application_1512485869804_4766/container_e58_1512485869804_4766_01_000002/mr-framework/hadoop/share/hadoop/yarn/*:/data1/hadoop/yarn/local/user
cache/hdfs/appcache/application_1512485869804_4766/container_e58_1512485869804_4766_01_000002/mr-framework/hadoop/share/hadoop/yarn/lib/*:/data1/hadoop/yarn/local/usercache/hdfs/appcache/application_1512485869804_4766/container_e58_1512485869804_4766_01_000002/mr-framework/hadoop/share/hadoop/hdfs/*:/data1/hadoop/yarn/local/usercache/hdfs/appcache/application_1512485869804_4766/container_e58_1512485869804_4766_01_000002/mr-framework/hadoop/share/hadoop/hdfs/lib/*:/data1/hadoop/yarn/local/usercache/hdfs/appcache/application_1512485869804_4766/container_e58_1512485869804_4766_01_000002/mr-framework/hadoop/share/hadoop/tools/lib/*:/usr/hdp/2.6.3.0-235/hadoop/lib/hadoop-lzo-0.6.0.2.6.3.0-235.jar:/etc/hadoop/conf/secure
17/12/20 14:45:01 INFO ZooKeeper: Client environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib
17/12/20 14:45:01 INFO ZooKeeper: Client environment:java.io.tmpdir=/data1/hadoop/yarn/local/usercache/hdfs/appcache/application_1512485869804_4766/container_e58_1512485869804_4766_01_000002/tmp
17/12/20 14:45:01 INFO ZooKeeper: Client environment:java.compiler=<NA>
17/12/20 14:45:01 INFO ZooKeeper: Client environment:os.name=Linux
17/12/20 14:45:01 INFO ZooKeeper: Client environment:os.arch=amd64
17/12/20 14:45:01 INFO ZooKeeper: Client environment:os.version=3.10.0-514.10.2.el7.x86_64
17/12/20 14:45:01 INFO ZooKeeper: Client environment:user.name=yarn
17/12/20 14:45:01 INFO ZooKeeper: Client environment:user.home=/home/yarn
17/12/20 14:45:01 INFO ZooKeeper: Client environment:user.dir=/data1/hadoop/yarn/local/usercache/hdfs/appcache/application_1512485869804_4766/container_e58_1512485869804_4766_01_000002
17/12/20 14:45:01 INFO ZooKeeper: Initiating client connection, connectString={$IP_2}:2181 sessionTimeout=60000 watcher=org.apache.curator.ConnectionState@4bc1a36b
17/12/20 14:45:01 INFO ClientCnxn: Opening socket connection to server {$IP_2}/{$IP_2}:2181. Will not attempt to authenticate using SASL (unknown error)
17/12/20 14:45:01 INFO ClientCnxn: Socket connection established, initiating session, client: /{$IP_3}:55826, server: {$IP_2}/{$IP_2}:2181
17/12/20 14:45:02 INFO ClientCnxn: Session establishment complete on server {$IP_2}/{$IP_2}:2181, sessionid = 0x25fa4ea159800c4, negotiated timeout = 40000
17/12/20 14:45:02 INFO ConnectionStateManager: State change: CONNECTED
17/12/20 14:45:02 INFO Version: HV000001: Hibernate Validator 5.1.3.Final
17/12/20 14:45:02 INFO JsonConfigurator: Loaded class[class io.druid.guice.ExtensionsConfig] from props[druid.extensions.] as [ExtensionsConfig{searchCurrentClassloader=true, directory='extensions', hadoopDependenciesDir='hadoop-dependencies', hadoopContainerDruidClasspath='null', loadList=null}]
17/12/20 14:45:03 INFO LoggingEmitter: Start: started [true]
17/12/20 14:45:04 INFO FinagleRegistry: Adding resolver for scheme[disco].
17/12/20 14:45:04 INFO Executor: Finished task 0.0 in stage 0.0 (TID 0). 1584 bytes result sent to driver
17/12/20 14:50:00 INFO CoarseGrainedExecutorBackend: Got assigned task 1
17/12/20 14:50:00 INFO Executor: Running task 0.0 in stage 1.0 (TID 1)
17/12/20 14:50:00 INFO TorrentBroadcast: Started reading broadcast variable 1
17/12/20 14:50:00 INFO MemoryStore: Block broadcast_1_piece0 stored as bytes in memory (estimated size 2.5 KB, free 366.3 MB)
17/12/20 14:50:00 INFO TorrentBroadcast: Reading broadcast variable 1 took 15 ms
17/12/20 14:50:00 INFO MemoryStore: Block broadcast_1 stored as values in memory (estimated size 4.3 KB, free 366.3 MB)
17/12/20 14:50:00 INFO KafkaRDD: Computing topic events_test, partition 0 offsets 2413 -> 2414
17/12/20 14:50:00 INFO CachedKafkaConsumer: Initializing cache 16 64 0.75
17/12/20 14:50:00 INFO CachedKafkaConsumer: Cache miss for CacheKey(spark-executor-use_a_separate_group_id_for_each_stream,events_test,0)
17/12/20 14:50:00 INFO ConsumerConfig: ConsumerConfig values: 
    metric.reporters = []
    metadata.max.age.ms = 300000
    partition.assignment.strategy = [org.apache.kafka.clients.consumer.RangeAssignor]
    reconnect.backoff.ms = 50
    sasl.kerberos.ticket.renew.window.factor = 0.8
    max.partition.fetch.bytes = 1048576
    bootstrap.servers = [{$URL2}:6667]
    ssl.keystore.type = JKS
    enable.auto.commit = false
    sasl.mechanism = GSSAPI
    interceptor.classes = null
    exclude.internal.topics = true
    ssl.truststore.password = null
    client.id = 
    ssl.endpoint.identification.algorithm = null
    max.poll.records = 2147483647
    check.crcs = true
    request.timeout.ms = 40000
    heartbeat.interval.ms = 3000
    auto.commit.interval.ms = 5000
    receive.buffer.bytes = 65536
    ssl.truststore.type = JKS
    ssl.truststore.location = null
    ssl.keystore.password = null
    fetch.min.bytes = 1
    send.buffer.bytes = 131072
    value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
    group.id = spark-executor-use_a_separate_group_id_for_each_stream
    retry.backoff.ms = 100
    sasl.kerberos.kinit.cmd = /usr/bin/kinit
    sasl.kerberos.service.name = null
    sasl.kerberos.ticket.renew.jitter = 0.05
    ssl.trustmanager.algorithm = PKIX
    ssl.key.password = null
    fetch.max.wait.ms = 500
    sasl.kerberos.min.time.before.relogin = 60000
    connections.max.idle.ms = 540000
    session.timeout.ms = 30000
    metrics.num.samples = 2
    key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
    ssl.protocol = TLS
    ssl.provider = null
    ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
    ssl.keystore.location = null
    ssl.cipher.suites = null
    security.protocol = PLAINTEXT
    ssl.keymanager.algorithm = SunX509
    metrics.sample.window.ms = 30000
    auto.offset.reset = none

17/12/20 14:50:00 INFO ConsumerConfig: ConsumerConfig values: 
    metric.reporters = []
    metadata.max.age.ms = 300000
    partition.assignment.strategy = [org.apache.kafka.clients.consumer.RangeAssignor]
    reconnect.backoff.ms = 50
    sasl.kerberos.ticket.renew.window.factor = 0.8
    max.partition.fetch.bytes = 1048576
    bootstrap.servers = [hadooptest191{$HOSTNAME}:6667]
    ssl.keystore.type = JKS
    enable.auto.commit = false
    sasl.mechanism = GSSAPI
    interceptor.classes = null
    exclude.internal.topics = true
    ssl.truststore.password = null
    client.id = consumer-1
    ssl.endpoint.identification.algorithm = null
    max.poll.records = 2147483647
    check.crcs = true
    request.timeout.ms = 40000
    heartbeat.interval.ms = 3000
    auto.commit.interval.ms = 5000
    receive.buffer.bytes = 65536
    ssl.truststore.type = JKS
    ssl.truststore.location = null
    ssl.keystore.password = null
    fetch.min.bytes = 1
    send.buffer.bytes = 131072
    value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
    group.id = spark-executor-use_a_separate_group_id_for_each_stream
    retry.backoff.ms = 100
    sasl.kerberos.kinit.cmd = /usr/bin/kinit
    sasl.kerberos.service.name = null
    sasl.kerberos.ticket.renew.jitter = 0.05
    ssl.trustmanager.algorithm = PKIX
    ssl.key.password = null
    fetch.max.wait.ms = 500
    sasl.kerberos.min.time.before.relogin = 60000
    connections.max.idle.ms = 540000
    session.timeout.ms = 30000
    metrics.num.samples = 2
    key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
    ssl.protocol = TLS
    ssl.provider = null
    ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
    ssl.keystore.location = null
    ssl.cipher.suites = null
    security.protocol = PLAINTEXT
    ssl.keymanager.algorithm = SunX509
    metrics.sample.window.ms = 30000
    auto.offset.reset = none

17/12/20 14:50:00 INFO AppInfoParser: Kafka version : 0.10.0.1
17/12/20 14:50:00 INFO AppInfoParser: Kafka commitId : a7a17cdec9eaa6c5
17/12/20 14:50:00 INFO CuratorFrameworkImpl: Starting
17/12/20 14:50:00 INFO ZooKeeper: Initiating client connection, connectString={$IP_2}:2181 sessionTimeout=60000 watcher=org.apache.curator.ConnectionState@243e188b
17/12/20 14:50:00 INFO ClientCnxn: Opening socket connection to server {$IP_2}/{$IP_2}:2181. Will not attempt to authenticate using SASL (unknown error)
17/12/20 14:50:00 INFO ClientCnxn: Socket connection established, initiating session, client: /{$IP_3}:56064, server: {$IP_2}/{$IP_2}:2181
17/12/20 14:50:00 INFO LoggingEmitter: Start: started [true]
17/12/20 14:50:00 INFO FinagleRegistry: Adding resolver for scheme[disco].
17/12/20 14:50:00 INFO ClientCnxn: Session establishment complete on server {$IP_2}/{$IP_2}:2181, sessionid = 0x25fa4ea159800c5, negotiated timeout = 40000
17/12/20 14:50:00 INFO ConnectionStateManager: State change: CONNECTED
17/12/20 14:50:00 INFO CachedKafkaConsumer: Initial fetch for spark-executor-use_a_separate_group_id_for_each_stream events_test 0 2413
17/12/20 14:50:00 INFO AbstractCoordinator: Discovered coordinator hadooptest191{$HOSTNAME}:6667 (id: 2147482646 rack: null) for group spark-executor-use_a_separate_group_id_for_each_stream.
17/12/20 14:50:01 INFO ClusteredBeam: Creating new merged beam for identifier[druid:overlord/events] timestamp[2017-12-20T12:00:00.000Z] (target = 1, actual = 0)
17/12/20 14:50:01 INFO control$: Creating druid indexing task (service = druid:overlord): {
  "type" : "index_realtime",
  "id" : "index_realtime_events_2017-12-20T12:00:00.000Z_0_0",
  "resource" : {
    "availabilityGroup" : "events-2017-12-20T12:00:00.000Z-0000",
    "requiredCapacity" : 1
  },
  "spec" : {
    "dataSchema" : {
      "dataSource" : "events",
      "parser" : {
        "type" : "map",
        "parseSpec" : {
          "format" : "json",
          "timestampSpec" : {
            "column" : "timestamp",
            "format" : "iso",
            "missingValue" : null
          },
          "dimensionsSpec" : {
            "dimensions" : [ "oid" ],
            "spatialDimensions" : [ ]
          }
        }
      },
      "metricsSpec" : [ {
        "type" : "longSum",
        "name" : "durum",
        "fieldName" : "durum"
      } ],
      "granularitySpec" : {
        "type" : "uniform",
        "segmentGranularity" : "HOUR",
        "queryGranularity" : {
          "type" : "duration",
          "duration" : 60000,
          "origin" : "1970-01-01T02:00:00.000+02:00"
        }
      }
    },
    "ioConfig" : {
      "type" : "realtime",
      "plumber" : null,
      "firehose" : {
        "type" : "clipped",
        "interval" : "2017-12-20T12:00:00.000Z/2017-12-20T13:00:00.000Z",
        "delegate" : {
          "type" : "timed",
          "shutoffTime" : "2017-12-20T13:15:00.000Z",
          "delegate" : {
            "type" : "receiver",
            "serviceName" : "firehose:druid:overlord:events-012-0000-0000",
            "bufferSize" : 100000
          }
        }
      }
    },
    "tuningConfig" : {
      "shardSpec" : {
        "type" : "linear",
        "partitionNum" : 0
      },
      "rejectionPolicy" : {
        "type" : "none"
      },
      "buildV9Directly" : false,
      "maxPendingPersists" : 0,
      "intermediatePersistPeriod" : "PT10M",
      "windowPeriod" : "PT10M",
      "type" : "realtime",
      "maxRowsInMemory" : 75000
    }
  }
}
17/12/20 14:50:01 INFO finagle: Finagle version 6.31.0 (rev=50d3bb0eea5ad3ed332111d707184c80fed6a506) built at 20151203-164135
17/12/20 14:50:02 INFO DiscoResolver: Updating instances for service[druid:overlord] to Set(ServiceInstance{name='druid:overlord', id='22e99668-3a03-42df-83d8-dc75f31bddfd', address='hadooptest{$HOSTNAME}', port=8090, sslPort=null, payload=null, registrationTimeUTC=1510663432591, serviceType=DYNAMIC, uriSpec=null})
17/12/20 14:50:02 INFO FinagleRegistry: Created client for service: disco!druid:overlord
17/12/20 14:50:03 INFO control$: Created druid indexing task with id: index_realtime_events_2017-12-20T12:00:00.000Z_0_0 (service = druid:overlord)
17/12/20 14:50:03 INFO DiscoResolver: Updating instances for service[firehose:druid:overlord:events-012-0000-0000] to Set()
17/12/20 14:50:03 INFO FinagleRegistry: Created client for service: disco!firehose:druid:overlord:events-012-0000-0000
17/12/20 14:50:03 INFO ClusteredBeam: Created beam: {"interval":"2017-12-20T12:00:00.000Z/2017-12-20T13:00:00.000Z","partition":0,"tasks":[{"id":"index_realtime_events_2017-12-20T12:00:00.000Z_0_0","firehoseId":"events-012-0000-0000"}],"timestamp":"2017-12-20T12:00:00.000Z"}
17/12/20 14:50:03 INFO DruidBeam: Closing Druid beam for datasource[events] interval[2017-12-20T12:00:00.000Z/2017-12-20T13:00:00.000Z] (tasks = index_realtime_events_2017-12-20T12:00:00.000Z_0_0)
17/12/20 14:50:03 INFO FinagleRegistry: Closing client for service: disco!firehose:druid:overlord:events-012-0000-0000
17/12/20 14:50:03 INFO DiscoResolver: No longer monitoring service[firehose:druid:overlord:events-012-0000-0000]
17/12/20 14:50:03 INFO ClusteredBeam: Writing new beam data to[/tranquility/beams/druid:overlord/events/data]: {"latestTime":"2017-12-20T12:00:00.000Z","latestCloseTime":"1970-01-01T00:00:00.000Z","beams":{"2017-12-20T12:00:00.000Z":[{"interval":"2017-12-20T12:00:00.000Z/2017-12-20T13:00:00.000Z","partition":0,"tasks":[{"id":"index_realtime_events_2017-12-20T12:00:00.000Z_0_0","firehoseId":"events-012-0000-0000"}],"timestamp":"2017-12-20T12:00:00.000Z"}]}}
17/12/20 14:50:03 INFO ClusteredBeam: Adding beams for identifier[druid:overlord/events] timestamp[2017-12-20T12:00:00.000Z]: List(Map(interval -> 2017-12-20T12:00:00.000Z/2017-12-20T13:00:00.000Z, partition -> 0, tasks -> ArraySeq(Map(id -> index_realtime_events_2017-12-20T12:00:00.000Z_0_0, firehoseId -> events-012-0000-0000)), timestamp -> 2017-12-20T12:00:00.000Z))
17/12/20 14:50:03 INFO DiscoResolver: Updating instances for service[firehose:druid:overlord:events-012-0000-0000] to Set()
17/12/20 14:50:03 INFO FinagleRegistry: Created client for service: disco!firehose:druid:overlord:events-012-0000-0000
17/12/20 14:50:03 WARN MapPartitioner: Cannot partition object of class[class eventsEvent] by time and dimensions. Consider implementing a Partitioner.
17/12/20 14:50:04 INFO ClusteredBeam: Merged beam already created for identifier[druid:overlord/events] timestamp[2017-12-20T12:00:00.000Z], with sufficient partitions (target = 1, actual = 1)
17/12/20 14:50:04 INFO ClusteredBeam: Merged beam already created for identifier[druid:overlord/events] timestamp[2017-12-20T12:00:00.000Z], with sufficient partitions (target = 1, actual = 1)
17/12/20 14:50:04 INFO ClusteredBeam: Merged beam already created for identifier[druid:overlord/events] timestamp[2017-12-20T12:00:00.000Z], with sufficient partitions (target = 1, actual = 1)
17/12/20 14:50:04 INFO ClusteredBeam: Merged beam already created for identifier[druid:overlord/events] timestamp[2017-12-20T12:00:00.000Z], with sufficient partitions (target = 1, actual = 1)
17/12/20 14:50:22 INFO DiscoResolver: Updating instances for service[firehose:druid:overlord:events-012-0000-0000] to Set(ServiceInstance{name='firehose:druid:overlord:events-012-0000-0000', id='0f6e0c8f-553f-45e6-a092-815b4cfb534f', address='hadooptest9{$HOSTNAME}', port=8100, sslPort=null, payload=null, registrationTimeUTC=1513774221961, serviceType=DYNAMIC, uriSpec=null})
17/12/20 14:51:07 WARN ClusteredBeam: Emitting alert: [anomaly] Failed to propagate events: druid:overlord/events
{
  "eventCount" : 1,
  "timestamp" : "2017-12-20T12:00:00.000Z",
  "beams" : "MergingPartitioningBeam(DruidBeam(interval = 2017-12-20T12:00:00.000Z/2017-12-20T13:00:00.000Z, partition = 0, tasks = [index_realtime_events_2017-12-20T12:00:00.000Z_0_0/events-012-0000-0000]))"
}
java.io.IOException: Failed to send request to task[index_realtime_events_2017-12-20T12:00:00.000Z_0_0]: 500 Internal Server Error
    at com.metamx.tranquility.druid.TaskClient$$anonfun$apply$2$$anonfun$apply$3.apply(TaskClient.scala:87)
    at com.metamx.tranquility.druid.TaskClient$$anonfun$apply$2$$anonfun$apply$3.apply(TaskClient.scala:73)
    at com.twitter.util.Future$$anonfun$map$1$$anonfun$apply$6.apply(Future.scala:950)
    at com.twitter.util.Try$.apply(Try.scala:13)
    at com.twitter.util.Future$.apply(Future.scala:97)
    at com.twitter.util.Future$$anonfun$map$1.apply(Future.scala:950)
    at com.twitter.util.Future$$anonfun$map$1.apply(Future.scala:949)
    at com.twitter.util.Promise$Transformer.liftedTree1$1(Promise.scala:112)
    at com.twitter.util.Promise$Transformer.k(Promise.scala:112)
    at com.twitter.util.Promise$Transformer.apply(Promise.scala:122)
    at com.twitter.util.Promise$Transformer.apply(Promise.scala:103)
    at com.twitter.util.Promise$$anon$1.run(Promise.scala:366)
    at com.twitter.concurrent.LocalScheduler$Activation.run(Scheduler.scala:178)
    at com.twitter.concurrent.LocalScheduler$Activation.submit(Scheduler.scala:136)
    at com.twitter.concurrent.LocalScheduler.submit(Scheduler.scala:207)
    at com.twitter.concurrent.Scheduler$.submit(Scheduler.scala:92)
    at com.twitter.util.Promise.runq(Promise.scala:350)
    at com.twitter.util.Promise.updateIfEmpty(Promise.scala:721)
    at com.twitter.util.Promise.update(Promise.scala:694)
    at com.twitter.util.Promise.setValue(Promise.scala:670)
    at com.twitter.concurrent.AsyncQueue.offer(AsyncQueue.scala:111)
    at com.twitter.finagle.netty3.transport.ChannelTransport.handleUpstream(ChannelTransport.scala:55)
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
    at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
    at org.jboss.netty.handler.codec.http.HttpContentDecoder.messageReceived(HttpContentDecoder.java:108)
    at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
    at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
    at org.jboss.netty.handler.codec.http.HttpChunkAggregator.messageReceived(HttpChunkAggregator.java:145)
    at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
    at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
    at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:296)
    at org.jboss.netty.handler.codec.frame.FrameDecoder.unfoldAndFireMessageReceived(FrameDecoder.java:459)
    at org.jboss.netty.handler.codec.replay.ReplayingDecoder.callDecode(ReplayingDecoder.java:536)
    at org.jboss.netty.handler.codec.replay.ReplayingDecoder.messageReceived(ReplayingDecoder.java:435)
    at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
    at org.jboss.netty.handler.codec.http.HttpClientCodec.handleUpstream(HttpClientCodec.java:92)
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
    at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
    at org.jboss.netty.channel.SimpleChannelHandler.messageReceived(SimpleChannelHandler.java:142)
    at com.twitter.finagle.netty3.channel.ChannelStatsHandler.messageReceived(ChannelStatsHandler.scala:78)
    at org.jboss.netty.channel.SimpleChannelHandler.handleUpstream(SimpleChannelHandler.java:88)
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
    at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
    at org.jboss.netty.channel.SimpleChannelHandler.messageReceived(SimpleChannelHandler.java:142)
    at com.twitter.finagle.netty3.channel.ChannelRequestStatsHandler.messageReceived(ChannelRequestStatsHandler.scala:35)
    at org.jboss.netty.channel.SimpleChannelHandler.handleUpstream(SimpleChannelHandler.java:88)
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:559)
    at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:268)
    at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:255)
    at org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:88)
    at org.jboss.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:108)
    at org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:337)
    at org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)
    at org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
    at org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
    at org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
17/12/20 14:51:07 INFO LoggingEmitter: Event [{"feed":"alerts","timestamp":"2017-12-20T14:51:07.534+02:00","service":"tranquility","host":"localhost","severity":"anomaly","description":"Failed to propagate events: druid:overlord/events","data":{"exceptionType":"java.io.IOException","exceptionStackTrace":"java.io.IOException: Failed to send request to task[index_realtime_events_2017-12-20T12:00:00.000Z_0_0]: 500 Internal Server Error\n\tat com.metamx.tranquility.druid.TaskClient$$anonfun$apply$2$$anonfun$apply$3.apply(TaskClient.scala:87)\n\tat com.metamx.tranquility.druid.TaskClient$$anonfun$apply$2$$anonfun$apply$3.apply(TaskClient.scala:73)\n\tat com.twitter.util.Future$$anonfun$map$1$$anonfun$apply$6.apply(Future.scala:950)\n\tat com.twitter.util.Try$.apply(Try.scala:13)\n\tat com.twitter.util.Future$.apply(Future.scala:97)\n\tat com.twitter.util.Future$$anonfun$map$1.apply(Future.scala:950)\n\tat com.twitter.util.Future$$anonfun$map$1.apply(Future.scala:949)\n\tat com.twitter.util.Promise$Transformer.liftedTree1$1(Promise.scala:112)\n\tat com.twitter.util.Promise$Transformer.k(Promise.scala:112)\n\tat com.twitter.util.Promise$Transformer.apply(Promise.scala:122)\n\tat com.twitter.util.Promise$Transformer.apply(Promise.scala:103)\n\tat com.twitter.util.Promise$$anon$1.run(Promise.scala:366)\n\tat com.twitter.concurrent.LocalScheduler$Activation.run(Scheduler.scala:178)\n\tat com.twitter.concurrent.LocalScheduler$Activation.submit(Scheduler.scala:136)\n\tat com.twitter.concurrent.LocalScheduler.submit(Scheduler.scala:207)\n\tat com.twitter.concurrent.Scheduler$.submit(Scheduler.scala:92)\n\tat com.twitter.util.Promise.runq(Promise.scala:350)\n\tat com.twitter.util.Promise.updateIfEmpty(Promise.scala:721)\n\tat com.twitter.util.Promise.update(Promise.scala:694)\n\tat com.twitter.util.Promise.setValue(Promise.scala:670)\n\tat com.twitter.concurrent.AsyncQueue.offer(AsyncQueue.scala:111)\n\tat com.twitter.finagle.netty3.transport.ChannelTransport.handleUpstream(ChannelTransport.scala:55)\n\tat org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)\n\tat org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)\n\tat org.jboss.netty.handler.codec.http.HttpContentDecoder.messageReceived(HttpContentDecoder.java:108)\n\tat org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)\n\tat org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)\n\tat org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)\n\tat org.jboss.netty.handler.codec.http.HttpChunkAggregator.messageReceived(HttpChunkAggregator.java:145)\n\tat org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)\n\tat org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)\n\tat org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)\n\tat org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:296)\n\tat org.jboss.netty.handler.codec.frame.FrameDecoder.unfoldAndFireMessageReceived(FrameDecoder.java:459)\n\tat org.jboss.netty.handler.codec.replay.ReplayingDecoder.callDecode(ReplayingDecoder.java:536)\n\tat org.jboss.netty.handler.codec.replay.ReplayingDecoder.messageReceived(ReplayingDecoder.java:435)\n\tat 
org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)\n\tat org.jboss.netty.handler.codec.http.HttpClientCodec.handleUpstream(HttpClientCodec.java:92)\n\tat org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)\n\tat org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)\n\tat org.jboss.netty.channel.SimpleChannelHandler.messageReceived(SimpleChannelHandler.java:142)\n\tat com.twitter.finagle.netty3.channel.ChannelStatsHandler.messageReceived(ChannelStatsHandler.scala:78)\n\tat org.jboss.netty.channel.SimpleChannelHandler.handleUpstream(SimpleChannelHandler.java:88)\n\tat org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)\n\tat org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)\n\tat org.jboss.netty.channel.SimpleChannelHandler.messageReceived(SimpleChannelHandler.java:142)\n\tat com.twitter.finagle.netty3.channel.ChannelRequestStatsHandler.messageReceived(ChannelRequestStatsHandler.scala:35)\n\tat org.jboss.netty.channel.SimpleChannelHandler.handleUpstream(SimpleChannelHandler.java:88)\n\tat org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)\n\tat org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:559)\n\tat org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:268)\n\tat org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:255)\n\tat org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:88)\n\tat org.jboss.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:108)\n\tat org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:337)\n\tat org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)\n\tat org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)\n\tat org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)\n\tat org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)\n\tat java.lang.Thread.run(Thread.java:745)\n","timestamp":"2017-12-20T12:00:00.000Z","beams":"MergingPartitioningBeam(DruidBeam(interval = 2017-12-20T12:00:00.000Z/2017-12-20T13:00:00.000Z, partition = 0, tasks = [index_realtime_events_2017-12-20T12:00:00.000Z_0_0/events-012-0000-0000]))","eventCount":1,"exceptionMessage":"Failed to send request to task[index_realtime_events_2017-12-20T12:00:00.000Z_0_0]: 500 Internal Server Error"}}]
17/12/20 14:51:13 WARN ClusteredBeam: Emitting alert: [anomaly] Failed to propagate events: druid:overlord/events
{
  "eventCount" : 1,
  "timestamp" : "2017-12-20T12:00:00.000Z",
  "beams" : "MergingPartitioningBeam(DruidBeam(interval = 2017-12-20T12:00:00.000Z/2017-12-20T13:00:00.000Z, partition = 0, tasks = [index_realtime_events_2017-12-20T12:00:00.000Z_0_0/events-012-0000-0000]))"
}
java.io.IOException: Failed to send request to task[index_realtime_events_2017-12-20T12:00:00.000Z_0_0]: 500 Internal Server Error
    at com.metamx.tranquility.druid.TaskClient$$anonfun$apply$2$$anonfun$apply$3.apply(TaskClient.scala:87)
    at com.metamx.tranquility.druid.TaskClient$$anonfun$apply$2$$anonfun$apply$3.apply(TaskClient.scala:73)
    at com.twitter.util.Future$$anonfun$map$1$$anonfun$apply$6.apply(Future.scala:950)
    at com.twitter.util.Try$.apply(Try.scala:13)
    at com.twitter.util.Future$.apply(Future.scala:97)
    at com.twitter.util.Future$$anonfun$map$1.apply(Future.scala:950)
    at com.twitter.util.Future$$anonfun$map$1.apply(Future.scala:949)
    at com.twitter.util.Promise$Transformer.liftedTree1$1(Promise.scala:112)
    at com.twitter.util.Promise$Transformer.k(Promise.scala:112)
    at com.twitter.util.Promise$Transformer.apply(Promise.scala:122)
    at com.twitter.util.Promise$Transformer.apply(Promise.scala:103)
    at com.twitter.util.Promise$$anon$1.run(Promise.scala:366)
    at com.twitter.concurrent.LocalScheduler$Activation.run(Scheduler.scala:178)
    at com.twitter.concurrent.LocalScheduler$Activation.submit(Scheduler.scala:136)
    at com.twitter.concurrent.LocalScheduler.submit(Scheduler.scala:207)
    at com.twitter.concurrent.Scheduler$.submit(Scheduler.scala:92)
    at com.twitter.util.Promise.runq(Promise.scala:350)
    at com.twitter.util.Promise.updateIfEmpty(Promise.scala:721)
    at com.twitter.util.Promise.update(Promise.scala:694)
    at com.twitter.util.Promise.setValue(Promise.scala:670)
    at com.twitter.concurrent.AsyncQueue.offer(AsyncQueue.scala:111)
    at com.twitter.finagle.netty3.transport.ChannelTransport.handleUpstream(ChannelTransport.scala:55)
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
    at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
    at org.jboss.netty.handler.codec.http.HttpContentDecoder.messageReceived(HttpContentDecoder.java:108)
    at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
    at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
    at org.jboss.netty.handler.codec.http.HttpChunkAggregator.messageReceived(HttpChunkAggregator.java:145)
    at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
    at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
    at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:296)
    at org.jboss.netty.handler.codec.frame.FrameDecoder.unfoldAndFireMessageReceived(FrameDecoder.java:459)
    at org.jboss.netty.handler.codec.replay.ReplayingDecoder.callDecode(ReplayingDecoder.java:536)
    at org.jboss.netty.handler.codec.replay.ReplayingDecoder.messageReceived(ReplayingDecoder.java:435)
    at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
    at org.jboss.netty.handler.codec.http.HttpClientCodec.handleUpstream(HttpClientCodec.java:92)
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
    at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
    at org.jboss.netty.channel.SimpleChannelHandler.messageReceived(SimpleChannelHandler.java:142)
    at com.twitter.finagle.netty3.channel.ChannelStatsHandler.messageReceived(ChannelStatsHandler.scala:78)
    at org.jboss.netty.channel.SimpleChannelHandler.handleUpstream(SimpleChannelHandler.java:88)
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
    at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
    at org.jboss.netty.channel.SimpleChannelHandler.messageReceived(SimpleChannelHandler.java:142)
    at com.twitter.finagle.netty3.channel.ChannelRequestStatsHandler.messageReceived(ChannelRequestStatsHandler.scala:35)
    at org.jboss.netty.channel.SimpleChannelHandler.handleUpstream(SimpleChannelHandler.java:88)
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:559)
    at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:268)
    at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:255)
    at org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:88)
    at org.jboss.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:108)
    at org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:337)
    at org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)
    at org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
    at org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
    at org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
17/12/20 14:51:13 INFO LoggingEmitter: Event [{"feed":"alerts","timestamp":"2017-12-20T14:51:13.399+02:00","service":"tranquility","host":"localhost","severity":"anomaly","description":"Failed to propagate events: druid:overlord/events","data":{"exceptionType":"java.io.IOException","exceptionStackTrace":"java.io.IOException: Failed to send request to task[index_realtime_events_2017-12-20T12:00:00.000Z_0_0]: 500 Internal Server Error\n\tat com.metamx.tranquility.druid.TaskClient$$anonfun$apply$2$$anonfun$apply$3.apply(TaskClient.scala:87)\n\tat com.metamx.tranquility.druid.TaskClient$$anonfun$apply$2$$anonfun$apply$3.apply(TaskClient.scala:73)\n\tat com.twitter.util.Future$$anonfun$map$1$$anonfun$apply$6.apply(Future.scala:950)\n\tat com.twitter.util.Try$.apply(Try.scala:13)\n\tat com.twitter.util.Future$.apply(Future.scala:97)\n\tat com.twitter.util.Future$$anonfun$map$1.apply(Future.scala:950)\n\tat com.twitter.util.Future$$anonfun$map$1.apply(Future.scala:949)\n\tat com.twitter.util.Promise$Transformer.liftedTree1$1(Promise.scala:112)\n\tat com.twitter.util.Promise$Transformer.k(Promise.scala:112)\n\tat com.twitter.util.Promise$Transformer.apply(Promise.scala:122)\n\tat com.twitter.util.Promise$Transformer.apply(Promise.scala:103)\n\tat com.twitter.util.Promise$$anon$1.run(Promise.scala:366)\n\tat com.twitter.concurrent.LocalScheduler$Activation.run(Scheduler.scala:178)\n\tat com.twitter.concurrent.LocalScheduler$Activation.submit(Scheduler.scala:136)\n\tat com.twitter.concurrent.LocalScheduler.submit(Scheduler.scala:207)\n\tat com.twitter.concurrent.Scheduler$.submit(Scheduler.scala:92)\n\tat com.twitter.util.Promise.runq(Promise.scala:350)\n\tat com.twitter.util.Promise.updateIfEmpty(Promise.scala:721)\n\tat com.twitter.util.Promise.update(Promise.scala:694)\n\tat com.twitter.util.Promise.setValue(Promise.scala:670)\n\tat com.twitter.concurrent.AsyncQueue.offer(AsyncQueue.scala:111)\n\tat com.twitter.finagle.netty3.transport.ChannelTransport.handleUpstream(ChannelTransport.scala:55)\n\tat org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)\n\tat org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)\n\tat org.jboss.netty.handler.codec.http.HttpContentDecoder.messageReceived(HttpContentDecoder.java:108)\n\tat org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)\n\tat org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)\n\tat org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)\n\tat org.jboss.netty.handler.codec.http.HttpChunkAggregator.messageReceived(HttpChunkAggregator.java:145)\n\tat org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)\n\tat org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)\n\tat org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)\n\tat org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:296)\n\tat org.jboss.netty.handler.codec.frame.FrameDecoder.unfoldAndFireMessageReceived(FrameDecoder.java:459)\n\tat org.jboss.netty.handler.codec.replay.ReplayingDecoder.callDecode(ReplayingDecoder.java:536)\n\tat org.jboss.netty.handler.codec.replay.ReplayingDecoder.messageReceived(ReplayingDecoder.java:435)\n\tat 
org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)\n\tat org.jboss.netty.handler.codec.http.HttpClientCodec.handleUpstream(HttpClientCodec.java:92)\n\tat org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)\n\tat org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)\n\tat org.jboss.netty.channel.SimpleChannelHandler.messageReceived(SimpleChannelHandler.java:142)\n\tat com.twitter.finagle.netty3.channel.ChannelStatsHandler.messageReceived(ChannelStatsHandler.scala:78)\n\tat org.jboss.netty.channel.SimpleChannelHandler.handleUpstream(SimpleChannelHandler.java:88)\n\tat org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)\n\tat org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)\n\tat org.jboss.netty.channel.SimpleChannelHandler.messageReceived(SimpleChannelHandler.java:142)\n\tat com.twitter.finagle.netty3.channel.ChannelRequestStatsHandler.messageReceived(ChannelRequestStatsHandler.scala:35)\n\tat org.jboss.netty.channel.SimpleChannelHandler.handleUpstream(SimpleChannelHandler.java:88)\n\tat org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)\n\tat org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:559)\n\tat org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:268)\n\tat org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:255)\n\tat org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:88)\n\tat org.jboss.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:108)\n\tat org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:337)\n\tat org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)\n\tat org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)\n\tat org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)\n\tat org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)\n\tat java.lang.Thread.run(Thread.java:745)\n","timestamp":"2017-12-20T12:00:00.000Z","beams":"MergingPartitioningBeam(DruidBeam(interval = 2017-12-20T12:00:00.000Z/2017-12-20T13:00:00.000Z, partition = 0, tasks = [index_realtime_events_2017-12-20T12:00:00.000Z_0_0/events-012-0000-0000]))","eventCount":1,"exceptionMessage":"Failed to send request to task[index_realtime_events_2017-12-20T12:00:00.000Z_0_0]: 500 Internal Server Error"}}]
17/12/20 14:51:19 ERROR CoarseGrainedExecutorBackend: RECEIVED SIGNAL TERM
17/12/20 14:51:19 INFO DiskBlockManager: Shutdown hook called
17/12/20 14:51:19 INFO ShutdownHookManager: Shutdown hook called
17/12/20 14:51:19 INFO ShutdownHookManager: Deleting directory /data1/hadoop/yarn/local/usercache/hdfs/appcache/application_1512485869804_4766/spark-af83077f-9b92-4b46-816b-9e25c3396055

End of LogType:stderr
***********************************************************************

I have tried a few things. Mainly, I changed the key of the timestamp column in the event map to "timestamp", like this:

case class MyEvent(time: DateTime, oid: String, status: Int) {

  @JsonValue
  def toMap: Map[String, Any] = Map(
    "timestamp" -> (time.getMillis / 1000),
    "oid" -> oid,
    "status" -> status
  )
}

object MyEvent {
  implicit val MyEventTimestamper = new Timestamper[MyEvent] {
    def timestamp(a: MyEvent) = a.time
  }

  val Columns = Seq("time", "oid", "status")

  def fromMap(d: Dict): MyEvent = {
    MyEvent(
      new DateTime(long(d("timestamp")) * 1000),
      str(d("oid")),
      int(d("status"))
    )
  }
}
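
As a side note: the generated task spec above declares a timestampSpec with column "timestamp" and format "iso", while this toMap emits epoch seconds. I am not sure that is the cause of anything, but a variant I could also try (purely a sketch, not something I have verified against Tranquility) would emit the timestamp as an ISO 8601 string instead:

import com.fasterxml.jackson.annotation.JsonValue
import org.joda.time.DateTime

case class MyEvent(time: DateTime, oid: String, status: Int) {
  @JsonValue
  def toMap: Map[String, Any] = Map(
    // Sketch only: Joda-Time DateTime#toString produces an ISO 8601 string,
    // which would match the "iso" timestampSpec shown in the task spec above.
    "timestamp" -> time.toString,
    "oid" -> oid,
    "status" -> status
  )
}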

I had to do that because event.timestamp in

DruidBeams
      .builder((event: MyEvent) => event.timestamp)
      .curator(curator)
      .discoveryPath(discoveryPath)
      .location(DruidLocation(indexService, dataSource))
      .rollup(DruidRollup(SpecificDruidDimensions(dimensions), aggregators, QueryGranularities.MINUTE))
      .tuning(
        ClusteredBeamTuning(
          segmentGranularity = Granularity.HOUR,
          windowPeriod = new Period("PT10M"),
          partitions = 1,
          replicants = 1
        )
      )
      .buildBeam()

does not compile, since there is no timestamp method on the MyEvent object.
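
For reference, one way to make that lambda compile without touching anything else (just a sketch; the accessor is my own addition, not part of Tranquility) would be to expose a timestamp accessor on the case class that delegates to the existing field:

import org.joda.time.DateTime

case class MyEvent(time: DateTime, oid: String, status: Int) {
  // Hypothetical accessor so that `.builder((event: MyEvent) => event.timestamp)` compiles;
  // it simply returns the existing `time` field.
  def timestamp: DateTime = time
}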

After I changed this, the Spark job finished successfully (as before) and handed the RDD to Druid, but no records were written to the Druid datasource. Here is the Druid indexing task log (index_realtime_events_druid_2017-12-28T13:00:00.000Z_0_0):

    2017-12-28T13:05:19,299 INFO [main] io.druid.indexing.worker.executor.ExecutorLifecycle - Running with task: {
  "type" : "index_realtime",
  "id" : "index_realtime_events_druid_2017-12-28T13:00:00.000Z_0_0",
  "resource" : {
    "availabilityGroup" : "events_druid-2017-12-28T13:00:00.000Z-0000",
    "requiredCapacity" : 1
  },
  "spec" : {
    "dataSchema" : {
      "dataSource" : "events_druid",
      "parser" : {
        "type" : "map",
        "parseSpec" : {
          "format" : "json",
          "timestampSpec" : {
            "column" : "timestamp",
            "format" : "iso",
            "missingValue" : null
          },
          "dimensionsSpec" : {
            "dimensions" : [ "oid" ],
            "spatialDimensions" : [ ]
          }
        }
      },
      "metricsSpec" : [ {
        "type" : "longSum",
        "name" : "status",
        "fieldName" : "status",
        "expression" : null
      } ],
      "granularitySpec" : {
        "type" : "uniform",
        "segmentGranularity" : "HOUR",
        "queryGranularity" : {
          "type" : "duration",
          "duration" : 60000,
          "origin" : "1970-01-01T00:00:00.000Z"
        },
        "rollup" : true,
        "intervals" : null
      }
    },
    "ioConfig" : {
      "type" : "realtime",
      "firehose" : {
        "type" : "clipped",
        "delegate" : {
          "type" : "timed",
          "delegate" : {
            "type" : "receiver",
            "serviceName" : "firehose:druid:overlord:events_druid-013-0000-0000",
            "bufferSize" : 100000
          },
          "shutoffTime" : "2017-12-28T14:15:00.000Z"
        },
        "interval" : "2017-12-28T13:00:00.000Z/2017-12-28T14:00:00.000Z"
      },
      "firehoseV2" : null
    },
    "tuningConfig" : {
      "type" : "realtime",
      "maxRowsInMemory" : 75000,
      "intermediatePersistPeriod" : "PT10M",
      "windowPeriod" : "PT10M",
      "basePersistDirectory" : "/tmp/1514466313873-0",
      "versioningPolicy" : {
        "type" : "intervalStart"
      },
      "rejectionPolicy" : {
        "type" : "none"
      },
      "maxPendingPersists" : 0,
      "shardSpec" : {
        "type" : "linear",
        "partitionNum" : 0
      },
      "indexSpec" : {
        "bitmap" : {
          "type" : "concise"
        },
        "dimensionCompression" : "lz4",
        "metricCompression" : "lz4",
        "longEncoding" : "longs"
      },
      "buildV9Directly" : true,
      "persistThreadPriority" : 0,
      "mergeThreadPriority" : 0,
      "reportParseExceptions" : false,
      "handoffConditionTimeout" : 0,
      "alertTimeout" : 0
    }
  },
  "context" : null,
  "groupId" : "index_realtime_events_druid",
  "dataSource" : "events_druid"
}
2017-12-28T13:05:19,312 INFO [main] io.druid.indexing.worker.executor.ExecutorLifecycle - Attempting to lock file[/apps/druid/tasks/index_realtime_events_druid_2017-12-28T13:00:00.000Z_0_0/lock].
2017-12-28T13:05:19,313 INFO [main] io.druid.indexing.worker.executor.ExecutorLifecycle - Acquired lock file[/apps/druid/tasks/index_realtime_events_druid_2017-12-28T13:00:00.000Z_0_0/lock] in 1ms.
2017-12-28T13:05:19,317 INFO [task-runner-0-priority-0] io.druid.indexing.overlord.ThreadPoolTaskRunner - Running task: index_realtime_events_druid_2017-12-28T13:00:00.000Z_0_0
2017-12-28T13:05:19,323 INFO [task-runner-0-priority-0] io.druid.indexing.overlord.TaskRunnerUtils - Task [index_realtime_events_druid_2017-12-28T13:00:00.000Z_0_0] location changed to [TaskLocation{host='hadooptest9.{host}', port=8100}].
2017-12-28T13:05:19,323 INFO [task-runner-0-priority-0] io.druid.indexing.overlord.TaskRunnerUtils - Task [index_realtime_events_druid_2017-12-28T13:00:00.000Z_0_0] status changed to [RUNNING].
2017-12-28T13:05:19,327 INFO [main] org.eclipse.jetty.server.Server - jetty-9.3.19.v20170502
2017-12-28T13:05:19,350 INFO [task-runner-0-priority-0] io.druid.segment.realtime.plumber.RealtimePlumber - Creating plumber using rejectionPolicy[io.druid.segment.realtime.plumber.NoopRejectionPolicyFactory$1@7925d517]
2017-12-28T13:05:19,351 INFO [task-runner-0-priority-0] io.druid.server.coordination.CuratorDataSegmentServerAnnouncer - Announcing self[DruidServerMetadata{name='hadooptest9.{host}:8100', host='hadooptest9.{host}:8100', maxSize=0, tier='_default_tier', type='realtime', priority='0'}] at [/druid/announcements/hadooptest9.{host}:8100]
2017-12-28T13:05:19,382 INFO [task-runner-0-priority-0] io.druid.segment.realtime.plumber.RealtimePlumber - Expect to run at [2017-12-28T14:10:00.000Z]
2017-12-28T13:05:19,392 INFO [task-runner-0-priority-0] io.druid.segment.realtime.plumber.RealtimePlumber - Starting merge and push.
2017-12-28T13:05:19,392 INFO [task-runner-0-priority-0] io.druid.segment.realtime.plumber.RealtimePlumber - Found [0] segments. Attempting to hand off segments that start before [1970-01-01T00:00:00.000Z].
2017-12-28T13:05:19,392 INFO [task-runner-0-priority-0] io.druid.segment.realtime.plumber.RealtimePlumber - Found [0] sinks to persist and merge
2017-12-28T13:05:19,451 INFO [task-runner-0-priority-0] io.druid.segment.realtime.firehose.EventReceiverFirehoseFactory - Connecting firehose: firehose:druid:overlord:events_druid-013-0000-0000
2017-12-28T13:05:19,453 INFO [task-runner-0-priority-0] io.druid.segment.realtime.firehose.EventReceiverFirehoseFactory - Found chathandler of class[io.druid.segment.realtime.firehose.ServiceAnnouncingChatHandlerProvider]
2017-12-28T13:05:19,453 INFO [task-runner-0-priority-0] io.druid.segment.realtime.firehose.ServiceAnnouncingChatHandlerProvider - Registering Eventhandler[firehose:druid:overlord:events_druid-013-0000-0000]
2017-12-28T13:05:19,454 INFO [task-runner-0-priority-0] io.druid.curator.discovery.CuratorServiceAnnouncer - Announcing service[DruidNode{serviceName='firehose:druid:overlord:events_druid-013-0000-0000', host='hadooptest9.{host}', port=8100}]
2017-12-28T13:05:19,502 INFO [main] com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory - Registering com.fasterxml.jackson.jaxrs.json.JacksonJsonProvider as a provider class
2017-12-28T13:05:19,502 INFO [main] com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory - Registering com.fasterxml.jackson.jaxrs.smile.JacksonSmileProvider as a provider class
2017-12-28T13:05:19,502 INFO [main] com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory - Registering io.druid.server.initialization.jetty.CustomExceptionMapper as a provider class
2017-12-28T13:05:19,502 INFO [main] com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory - Registering io.druid.server.StatusResource as a root resource class
2017-12-28T13:05:19,505 INFO [main] com.sun.jersey.server.impl.application.WebApplicationImpl - Initiating Jersey application, version 'Jersey: 1.19.3 10/24/2016 03:43 PM'
2017-12-28T13:05:19,515 INFO [task-runner-0-priority-0] io.druid.segment.realtime.firehose.ServiceAnnouncingChatHandlerProvider - Registering Eventhandler[events_druid-013-0000-0000]
2017-12-28T13:05:19,515 INFO [task-runner-0-priority-0] io.druid.curator.discovery.CuratorServiceAnnouncer - Announcing service[DruidNode{serviceName='events_druid-013-0000-0000', host='hadooptest9.{host}', port=8100}]
2017-12-28T13:05:19,529 WARN [task-runner-0-priority-0] org.apache.curator.utils.ZKPaths - The version of ZooKeeper being used doesn't support Container nodes. CreateMode.PERSISTENT will be used instead.
2017-12-28T13:05:19,535 INFO [task-runner-0-priority-0] io.druid.server.metrics.EventReceiverFirehoseRegister - Registering EventReceiverFirehoseMetric for service [firehose:druid:overlord:events_druid-013-0000-0000]
2017-12-28T13:05:19,536 INFO [task-runner-0-priority-0] io.druid.data.input.FirehoseFactory - Firehose created, will shut down at: 2017-12-28T14:15:00.000Z
2017-12-28T13:05:19,574 INFO [main] com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory - Binding io.druid.server.initialization.jetty.CustomExceptionMapper to GuiceManagedComponentProvider with the scope "Singleton"
2017-12-28T13:05:19,576 INFO [main] com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory - Binding com.fasterxml.jackson.jaxrs.json.JacksonJsonProvider to GuiceManagedComponentProvider with the scope "Singleton"
2017-12-28T13:05:19,583 INFO [main] com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory - Binding com.fasterxml.jackson.jaxrs.smile.JacksonSmileProvider to GuiceManagedComponentProvider with the scope "Singleton"
2017-12-28T13:05:19,845 INFO [main] com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory - Binding io.druid.server.http.security.StateResourceFilter to GuiceInstantiatedComponentProvider
2017-12-28T13:05:19,863 INFO [main] com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory - Binding io.druid.server.http.SegmentListerResource to GuiceInstantiatedComponentProvider
2017-12-28T13:05:19,874 INFO [main] com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory - Binding io.druid.server.QueryResource to GuiceInstantiatedComponentProvider
2017-12-28T13:05:19,876 INFO [main] com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory - Binding io.druid.segment.realtime.firehose.ChatHandlerResource to GuiceInstantiatedComponentProvider
2017-12-28T13:05:19,880 INFO [main] com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory - Binding io.druid.query.lookup.LookupListeningResource to GuiceInstantiatedComponentProvider
2017-12-28T13:05:19,882 INFO [main] com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory - Binding io.druid.query.lookup.LookupIntrospectionResource to GuiceInstantiatedComponentProvider
2017-12-28T13:05:19,883 INFO [main] com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory - Binding io.druid.server.StatusResource to GuiceManagedComponentProvider with the scope "Undefined"
2017-12-28T13:05:19,896 WARN [main] com.sun.jersey.spi.inject.Errors - The following warnings have been detected with resource and/or provider classes:
  WARNING: A HTTP GET method, public void io.druid.server.http.SegmentListerResource.getSegments(long,long,long,javax.servlet.http.HttpServletRequest) throws java.io.IOException, MUST return a non-void type.
2017-12-28T13:05:19,905 INFO [main] org.eclipse.jetty.server.handler.ContextHandler - Started o.e.j.s.ServletContextHandler@2fba0dac{/,null,AVAILABLE}
2017-12-28T13:05:19,914 INFO [main] org.eclipse.jetty.server.AbstractConnector - Started ServerConnector@25218a4d{HTTP/1.1,[http/1.1]}{0.0.0.0:8100}
2017-12-28T13:05:19,914 INFO [main] org.eclipse.jetty.server.Server - Started @6014ms
2017-12-28T13:05:19,915 INFO [main] io.druid.java.util.common.lifecycle.Lifecycle$AnnotationBasedHandler - Invoking start method[public void io.druid.server.listener.announcer.ListenerResourceAnnouncer.start()] on object[io.druid.query.lookup.LookupResourceListenerAnnouncer@426710f0].
2017-12-28T13:05:19,919 INFO [main] io.druid.server.listener.announcer.ListenerResourceAnnouncer - Announcing start time on [/druid/listeners/lookups/__default/hadooptest9.{host}:8100]
2017-12-28T13:05:20,517 WARN [task-runner-0-priority-0] io.druid.segment.realtime.firehose.PredicateFirehose - [0] InputRow(s) ignored as they do not satisfy the predicate

I have also written the result to a text file to make sure the data is arriving and correctly formatted. Here are a few lines from that file:

MyEvent(2017-12-28T16:10:00.387+03:00,0010,1)
MyEvent(2017-12-28T16:10:00.406+03:00,0030,1)
MyEvent(2017-12-28T16:10:00.417+03:00,0010,1)
MyEvent(2017-12-28T16:10:00.431+03:00,0010,1)
MyEvent(2017-12-28T16:10:00.448+03:00,0010,1)
MyEvent(2017-12-28T16:10:00.464+03:00,0030,1)    
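
(For reference, here is a sketch of how such a debug dump can be written from the transformed stream t; the actual call in my job may differ slightly, and the output path is illustrative only.)

    // Sketch only: write each non-empty micro-batch of the transformed stream to text
    // files so the events can be inspected. The path prefix is illustrative.
    t.foreachRDD { (rdd, batchTime) =>
      if (!rdd.isEmpty()) {
        rdd.saveAsTextFile(s"/tmp/myevent-debug/${batchTime.milliseconds}")
      }
    }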

Help is much appreciated. Thanks.

Hello,
This problem was solved by adding a timestampSpec to the DruidBeams builder, as follows:

DruidBeams
  .builder((event: MyEvent) => event.time)
  .curator(curator)
  .discoveryPath(discoveryPath)
  .location(DruidLocation(indexService, dataSource))
  .rollup(DruidRollup(SpecificDruidDimensions(dimensions), aggregators, QueryGranularities.MINUTE))
  .tuning(
    ClusteredBeamTuning(
      segmentGranularity = Granularity.HOUR,
      windowPeriod = new Period("PT10M"),
      partitions = 1,
      replicants = 1
    )
  )
  .timestampSpec(new TimestampSpec("timestamp", "posix", null))
  .buildBeam()
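
For anyone hitting the same problem: the task spec generated earlier (see the indexing task log above) declared the timestamp column with "format" : "iso", while toMap emits the timestamp as posix seconds (time.getMillis / 1000). Presumably those values could not be parsed as ISO timestamps, and with reportParseExceptions set to false in the tuning config the rows were dropped silently, which is why the task stayed empty. Overriding the spec with "posix" makes the two sides agree. The TimestampSpec used above is Druid's class; assuming the usual import for this Druid version:

    // Assumed import for the TimestampSpec constructor used above (adjust to your Druid version).
    import io.druid.data.input.impl.TimestampSpec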