hbutani / spark-druid-olap

Sparkline BI Accelerator provides fast ad-hoc query capability over Logical Cubes. This has been folded into our SNAP Platform (http://bit.ly/2oBJSpP), an integrated BI platform on Apache Spark.

Home Page: http://sparklinedata.com/


/ by zero error when running a query on the sample retail dataset

redlion99 opened this issue · comments

I get an error when I run "select count(*) from sp_demo_retail;" in beeline.

The error message is:

Error: java.lang.ArithmeticException: / by zero (state=,code=0)
java.sql.SQLException: java.lang.ArithmeticException: / by zero
at org.apache.hive.jdbc.HiveStatement.execute(HiveStatement.java:296)
at org.apache.hive.beeline.Commands.execute(Commands.java:848)
at org.apache.hive.beeline.Commands.sql(Commands.java:713)
at org.apache.hive.beeline.BeeLine.dispatch(BeeLine.java:973)
at org.apache.hive.beeline.BeeLine.execute(BeeLine.java:813)
at org.apache.hive.beeline.BeeLine.begin(BeeLine.java:771)
at org.apache.hive.beeline.BeeLine.mainWithInputRedirection(BeeLine.java:484)
at org.apache.hive.beeline.BeeLine.main(BeeLine.java:467)

Here is my DDL:

```sql
CREATE TABLE sp_demo_retail_base (
  invoiceno string,
  stockcode string,
  description string,
  quantity bigint,
  invoicedate string,
  unitprice double,
  customerid string,
  country string,
  count int
)
USING com.databricks.spark.csv
OPTIONS (path "/opt/retails.csv",
         header "false", delimiter ",");

CREATE TABLE sp_demo_retail
USING org.sparklinedata.druid
OPTIONS (
  sourceDataframe "sp_demo_retail_base",
  timeDimensionColumn "invoicedate",
  druidDatasource "retail",
  druidHost "10.25.2.91",
  zkQualifyDiscoveryNames "false",
  queryHistoricalServers "true",
  numSegmentsPerHistoricalQuery "1",
  columnMapping '{ }',
  functionalDependencies '[]',
  starSchema '{ "factTable" : "sp_demo_retail_base", "relations" : [] }');
```

0: jdbc:hive2://localhost:10000/> explain select * from sp_demo_retail limit 10;
Getting log thread is interrupted, since query is done!
+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+--+
| plan |
+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+--+
| == Physical Plan == |
| Limit 10 |
| +- ConvertToSafe |
| +- Project [invoiceno#9,stockcode#10,description#11,quantity#12L,invoicedate#13,unitprice#14,customerid#15,country#16,count#17] |
| +- Scan DruidRelationInfo(fullName = DruidRelationName(sp_demo_retail_base,10.25.2.91,retail), sourceDFName = sp_demo_retail_base, |
| timeDimensionCol = invoicedate, |
| options = DruidRelationOptions(1000000,100000,true,true,true,30000,true,/druid,true,false,1,None))[invoiceno#9,stockcode#10,description#11,quantity#12L,invoicedate#13,unitprice#14,customerid#15,country#16,count#17] |
+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+--+
7 rows selected (0.144 seconds)
0: jdbc:hive2://localhost:10000/> explain select count(1) from sp_demo_retail;
Getting log thread is interrupted, since query is done!
+-------------------------------------------+--+
| plan |
+-------------------------------------------+--+
| == Physical Plan == |
| java.lang.ArithmeticException: / by zero |
+-------------------------------------------+--+
2 rows selected (0.069 seconds)
0: jdbc:hive2://localhost:10000/>

Can you let us know the version of Spark you are using? We support 1.6.x.
Also, the DDL should be as follows (note that starSchema's factTable should name the Druid-backed table itself, not the base table):
```sql
CREATE TABLE sp_demo_retail
USING org.sparklinedata.druid
OPTIONS (
  sourceDataframe "sp_demo_retail_base",
  timeDimensionColumn "invoicedate",
  druidDatasource "retail",
  druidHost "10.25.2.91",
  zkQualifyDiscoveryNames "false",
  queryHistoricalServers "true",
  numSegmentsPerHistoricalQuery "1",
  columnMapping '{ }',
  functionalDependencies '[]',
  starSchema '{ "factTable" : "sp_demo_retail", "relations" : [] }');
```
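
If it helps to reproduce outside beeline, the same DDL can be issued from the Spark 1.6 shell; a minimal sketch, assuming the stock `sqlContext` that spark-shell provides, with the host and option values copied from the DDL above:

```scala
// Sketch: issuing the corrected DDL via sqlContext.sql in spark-shell
// (Spark 1.6), then running the failing aggregate query.
sqlContext.sql(
  """CREATE TABLE sp_demo_retail
    |USING org.sparklinedata.druid
    |OPTIONS (
    |  sourceDataframe "sp_demo_retail_base",
    |  timeDimensionColumn "invoicedate",
    |  druidDatasource "retail",
    |  druidHost "10.25.2.91",
    |  zkQualifyDiscoveryNames "false",
    |  queryHistoricalServers "true",
    |  numSegmentsPerHistoricalQuery "1",
    |  columnMapping '{ }',
    |  functionalDependencies '[]',
    |  starSchema '{ "factTable" : "sp_demo_retail", "relations" : [] }'
    |)""".stripMargin)

// Reproduce the failure; any exception stack prints directly in the shell.
sqlContext.sql("select count(*) from sp_demo_retail").show()
```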

I'm using Spark 1.6.0.

I also changed the DDL, but it still doesn't work.

0: jdbc:hive2://localhost:10000/> CREATE TABLE sp_demo_retail2
0: jdbc:hive2://localhost:10000/> USING org.sparklinedata.druid
0: jdbc:hive2://localhost:10000/> OPTIONS (
0: jdbc:hive2://localhost:10000/> sourceDataframe "sp_demo_retail_base",
0: jdbc:hive2://localhost:10000/> timeDimensionColumn "invoicedate",
0: jdbc:hive2://localhost:10000/> druidDatasource "retail",
0: jdbc:hive2://localhost:10000/> druidHost "10.25.2.91",
0: jdbc:hive2://localhost:10000/> zkQualifyDiscoveryNames "false",
0: jdbc:hive2://localhost:10000/> queryHistoricalServers "true",
0: jdbc:hive2://localhost:10000/> numSegmentsPerHistoricalQuery "1",
0: jdbc:hive2://localhost:10000/> columnMapping '{ } ',
0: jdbc:hive2://localhost:10000/> functionalDependencies '[] ',
0: jdbc:hive2://localhost:10000/> starSchema ' { "factTable" : "sp_demo_retail2", "relations" : [] } ');
+---------+--+
| Result |
+---------+--+
+---------+--+
No rows selected (0.207 seconds)
0: jdbc:hive2://localhost:10000/>
0: jdbc:hive2://localhost:10000/>
0: jdbc:hive2://localhost:10000/> explain select count(1) from sp_demo_retail2;
+-------------------------------------------+--+
| plan |
+-------------------------------------------+--+
| == Physical Plan == |
| java.lang.ArithmeticException: / by zero |
+-------------------------------------------+--+
2 rows selected (0.132 seconds)
0: jdbc:hive2://localhost:10000/> select count(1) from sp_demo_retail2;
Error: java.lang.ArithmeticException: / by zero (state=,code=0)

What version of the Sparklinedata jars are you using in this case?
Also, could you post the exception stack from the Thriftserver/Spark shell?

Thanks
John

The Sparklinedata jar I'm using is from http://repo1.maven.org/maven2/com/sparklinedata/accelerator_2.10/0.2.0/accelerator_2.10-0.2.0-assembly.jar

Here is the full stack trace from the Thriftserver:

16/08/22 18:38:46 INFO thriftserver.SparkExecuteStatementOperation: Running query 'CREATE TABLE sp_demo_retail2
USING org.sparklinedata.druid
OPTIONS (
sourceDataframe "sp_demo_retail_base",
timeDimensionColumn "invoicedate",
druidDatasource "retail",
druidHost "10.25.2.91",
zkQualifyDiscoveryNames "false",
queryHistoricalServers "true",
numSegmentsPerHistoricalQuery "1",
columnMapping '{ } ',
functionalDependencies '[] ',
starSchema ' { "factTable" : "sp_demo_retail2", "relations" : [] } ')' with dcb2301d-38f7-4f69-a53b-efb72ccfd125
16/08/22 18:38:46 INFO metastore.HiveMetaStore: 21: get_table : db=default tbl=sp_demo_retail2
16/08/22 18:38:46 INFO HiveMetaStore.audit: ugi=anonymous ip=unknown-ip-addr cmd=get_table : db=default tbl=sp_demo_retail2
16/08/22 18:38:46 INFO metastore.HiveMetaStore: 21: Opening raw store with implemenation class:org.apache.hadoop.hive.metastore.ObjectStore
16/08/22 18:38:46 INFO metastore.ObjectStore: ObjectStore, initialize called
16/08/22 18:38:46 INFO DataNucleus.Query: Reading in results for query "org.datanucleus.store.rdbms.query.SQLQuery@0" since the connection used is closing
16/08/22 18:38:46 INFO metastore.MetaStoreDirectSql: Using direct SQL, underlying DB is DERBY
16/08/22 18:38:46 INFO metastore.ObjectStore: Initialized ObjectStore
16/08/22 18:38:46 INFO metastore.HiveMetaStore: 21: get_table : db=default tbl=sp_demo_retail_base
16/08/22 18:38:46 INFO HiveMetaStore.audit: ugi=anonymous ip=unknown-ip-addr cmd=get_table : db=default tbl=sp_demo_retail_base
16/08/22 18:38:46 INFO metastore.HiveMetaStore: 21: get_table : db=default tbl=sp_demo_retail_base
16/08/22 18:38:46 INFO HiveMetaStore.audit: ugi=anonymous ip=unknown-ip-addr cmd=get_table : db=default tbl=sp_demo_retail_base
16/08/22 18:38:46 INFO metastore.HiveMetaStore: 21: get_table : db=default tbl=sp_demo_retail_base
16/08/22 18:38:46 INFO HiveMetaStore.audit: ugi=anonymous ip=unknown-ip-addr cmd=get_table : db=default tbl=sp_demo_retail_base
16/08/22 18:38:46 INFO metastore.HiveMetaStore: 21: get_table : db=default tbl=sp_demo_retail_base
16/08/22 18:38:46 INFO HiveMetaStore.audit: ugi=anonymous ip=unknown-ip-addr cmd=get_table : db=default tbl=sp_demo_retail_base
16/08/22 18:38:46 INFO druid.DefaultSource: invoiceno ->
description ->
country ->
customerid ->
stockcode ->
16/08/22 18:38:46 WARN sparklinedata.SparklineDataContext$$anon$1: Couldn't find corresponding Hive SerDe for data source provider org.sparklinedata.druid. Persisting data source relation sp_demo_retail2 into Hive metastore in Spark SQL specific format, which is NOT compatible with Hive.
16/08/22 18:38:46 WARN security.UserGroupInformation: No groups available for user anonymous
16/08/22 18:38:46 WARN security.UserGroupInformation: No groups available for user anonymous
16/08/22 18:38:46 INFO metastore.HiveMetaStore: 21: create_table: Table(tableName:sp_demo_retail2, dbName:default, owner:anonymous, createTime:1471862326, lastAccessTime:0, retention:0, sd:StorageDescriptor(cols:[FieldSchema(name:col, type:array<string>, comment:from deserializer)], location:null, inputFormat:org.apache.hadoop.mapred.SequenceFileInputFormat, outputFormat:org.apache.hadoop.hive.ql.io.HiveSequenceFileOutputFormat, compressed:false, numBuckets:-1, serdeInfo:SerDeInfo(name:null, serializationLib:org.apache.hadoop.hive.serde2.MetadataTypedColumnsetSerDe, parameters:{druidHost=10.25.2.91, queryHistoricalServers=true, functionalDependencies=[] , druidDatasource=retail, sourceDataframe=sp_demo_retail_base, numSegmentsPerHistoricalQuery=1, serialization.format=1, zkQualifyDiscoveryNames=false, starSchema= { "factTable" : "sp_demo_retail2", "relations" : [] } , timeDimensionColumn=invoicedate, columnMapping={ } }), bucketCols:[], sortCols:[], parameters:{}, skewedInfo:SkewedInfo(skewedColNames:[], skewedColValues:[], skewedColValueLocationMaps:{})), partitionKeys:[], parameters:{EXTERNAL=TRUE, spark.sql.sources.provider=org.sparklinedata.druid}, viewOriginalText:null, viewExpandedText:null, tableType:MANAGED_TABLE, privileges:PrincipalPrivilegeSet(userPrivileges:{}, groupPrivileges:null, rolePrivileges:null))
16/08/22 18:38:46 INFO HiveMetaStore.audit: ugi=anonymous ip=unknown-ip-addr cmd=create_table: Table(tableName:sp_demo_retail2, dbName:default, owner:anonymous, createTime:1471862326, lastAccessTime:0, retention:0, sd:StorageDescriptor(cols:[FieldSchema(name:col, type:array<string>, comment:from deserializer)], location:null, inputFormat:org.apache.hadoop.mapred.SequenceFileInputFormat, outputFormat:org.apache.hadoop.hive.ql.io.HiveSequenceFileOutputFormat, compressed:false, numBuckets:-1, serdeInfo:SerDeInfo(name:null, serializationLib:org.apache.hadoop.hive.serde2.MetadataTypedColumnsetSerDe, parameters:{druidHost=10.25.2.91, queryHistoricalServers=true, functionalDependencies=[] , druidDatasource=retail, sourceDataframe=sp_demo_retail_base, numSegmentsPerHistoricalQuery=1, serialization.format=1, zkQualifyDiscoveryNames=false, starSchema= { "factTable" : "sp_demo_retail2", "relations" : [] } , timeDimensionColumn=invoicedate, columnMapping={ } }), bucketCols:[], sortCols:[], parameters:{}, skewedInfo:SkewedInfo(skewedColNames:[], skewedColValues:[], skewedColValueLocationMaps:{})), partitionKeys:[], parameters:{EXTERNAL=TRUE, spark.sql.sources.provider=org.sparklinedata.druid}, viewOriginalText:null, viewExpandedText:null, tableType:MANAGED_TABLE, privileges:PrincipalPrivilegeSet(userPrivileges:{}, groupPrivileges:null, rolePrivileges:null))
16/08/22 18:38:46 INFO common.FileUtils: Creating directory if it doesn't exist: hdfs://namenode:8020/user/hive/warehouse/sp_demo_retail2
16/08/22 18:39:39 INFO thriftserver.SparkExecuteStatementOperation: Running query 'explain select count(1) from sp_demo_retail2' with f735a285-4bcd-4219-90aa-3d4479fd60e7
16/08/22 18:39:39 INFO parse.ParseDriver: Parsing command: explain select count(1) from sp_demo_retail2
16/08/22 18:39:39 INFO parse.ParseDriver: Parse Completed
16/08/22 18:39:39 INFO metastore.HiveMetaStore: 22: get_table : db=default tbl=sp_demo_retail2
16/08/22 18:39:39 INFO HiveMetaStore.audit: ugi=anonymous ip=unknown-ip-addr cmd=get_table : db=default tbl=sp_demo_retail2
16/08/22 18:39:39 INFO metastore.HiveMetaStore: 22: Opening raw store with implemenation class:org.apache.hadoop.hive.metastore.ObjectStore
16/08/22 18:39:39 INFO metastore.ObjectStore: ObjectStore, initialize called
16/08/22 18:39:39 INFO DataNucleus.Query: Reading in results for query "org.datanucleus.store.rdbms.query.SQLQuery@0" since the connection used is closing
16/08/22 18:39:39 INFO metastore.MetaStoreDirectSql: Using direct SQL, underlying DB is DERBY
16/08/22 18:39:39 INFO metastore.ObjectStore: Initialized ObjectStore
16/08/22 18:39:39 INFO metastore.HiveMetaStore: 22: get_table : db=default tbl=sp_demo_retail2
16/08/22 18:39:39 INFO HiveMetaStore.audit: ugi=anonymous ip=unknown-ip-addr cmd=get_table : db=default tbl=sp_demo_retail2
16/08/22 18:39:39 INFO metastore.HiveMetaStore: 22: get_table : db=default tbl=sp_demo_retail_base
16/08/22 18:39:39 INFO HiveMetaStore.audit: ugi=anonymous ip=unknown-ip-addr cmd=get_table : db=default tbl=sp_demo_retail_base
16/08/22 18:39:39 INFO metastore.HiveMetaStore: 22: get_table : db=default tbl=sp_demo_retail_base
16/08/22 18:39:39 INFO HiveMetaStore.audit: ugi=anonymous ip=unknown-ip-addr cmd=get_table : db=default tbl=sp_demo_retail_base
16/08/22 18:39:39 INFO metastore.HiveMetaStore: 22: get_table : db=default tbl=sp_demo_retail_base
16/08/22 18:39:39 INFO HiveMetaStore.audit: ugi=anonymous ip=unknown-ip-addr cmd=get_table : db=default tbl=sp_demo_retail_base
16/08/22 18:39:39 INFO druid.DefaultSource: invoiceno ->
description ->
country ->
customerid ->
stockcode ->
16/08/22 18:39:39 INFO metastore.HiveMetaStore: 22: get_table : db=default tbl=sp_demo_retail_base
16/08/22 18:39:39 INFO HiveMetaStore.audit: ugi=anonymous ip=unknown-ip-addr cmd=get_table : db=default tbl=sp_demo_retail_base
16/08/22 18:39:39 INFO thriftserver.SparkExecuteStatementOperation: Result Schema: List(plan#64)
16/08/22 18:39:59 INFO thriftserver.SparkExecuteStatementOperation: Running query 'select count(1) from sp_demo_retail2' with e4a8322e-4be7-4b48-a2b4-8e5fc16c04b8
16/08/22 18:39:59 INFO parse.ParseDriver: Parsing command: select count(1) from sp_demo_retail2
16/08/22 18:39:59 INFO parse.ParseDriver: Parse Completed
16/08/22 18:39:59 INFO metastore.HiveMetaStore: 23: get_table : db=default tbl=sp_demo_retail2
16/08/22 18:39:59 INFO HiveMetaStore.audit: ugi=anonymous ip=unknown-ip-addr cmd=get_table : db=default tbl=sp_demo_retail2
16/08/22 18:39:59 INFO metastore.HiveMetaStore: 23: Opening raw store with implemenation class:org.apache.hadoop.hive.metastore.ObjectStore
16/08/22 18:39:59 INFO metastore.ObjectStore: ObjectStore, initialize called
16/08/22 18:39:59 INFO DataNucleus.Query: Reading in results for query "org.datanucleus.store.rdbms.query.SQLQuery@0" since the connection used is closing
16/08/22 18:39:59 INFO metastore.MetaStoreDirectSql: Using direct SQL, underlying DB is DERBY
16/08/22 18:39:59 INFO metastore.ObjectStore: Initialized ObjectStore
16/08/22 18:39:59 ERROR thriftserver.SparkExecuteStatementOperation: Error executing query, currentState RUNNING,
java.lang.ArithmeticException: / by zero
at org.apache.spark.sql.sources.druid.DruidQueryCostModel$.computeMethod(DruidQueryCostModel.scala:528)
at org.apache.spark.sql.sources.druid.DruidQueryCostModel$.computeMethod(DruidQueryCostModel.scala:596)
at org.apache.spark.sql.sources.druid.DruidStrategy$$anonfun$1$$anonfun$apply$2.apply(DruidStrategy.scala:71)
at org.apache.spark.sql.sources.druid.DruidStrategy$$anonfun$1$$anonfun$apply$2.apply(DruidStrategy.scala:37)
at scala.Option.map(Option.scala:145)
at org.apache.spark.sql.sources.druid.DruidStrategy$$anonfun$1.apply(DruidStrategy.scala:37)
at org.apache.spark.sql.sources.druid.DruidStrategy$$anonfun$1.apply(DruidStrategy.scala:36)
at scala.collection.GenTraversableViewLike$FlatMapped$$anonfun$foreach$3.apply(GenTraversableViewLike.scala:90)
at scala.collection.GenTraversableViewLike$FlatMapped$$anonfun$foreach$3.apply(GenTraversableViewLike.scala:89)
at scala.collection.GenTraversableViewLike$FlatMapped$$anonfun$foreach$3$$anonfun$apply$1.apply(GenTraversableViewLike.scala:91)
at scala.collection.GenTraversableViewLike$FlatMapped$$anonfun$foreach$3$$anonfun$apply$1.apply(GenTraversableViewLike.scala:91)
at scala.collection.immutable.List.foreach(List.scala:318)
at scala.collection.GenTraversableViewLike$FlatMapped$$anonfun$foreach$3.apply(GenTraversableViewLike.scala:90)
at scala.collection.GenTraversableViewLike$FlatMapped$$anonfun$foreach$3.apply(GenTraversableViewLike.scala:89)
at scala.collection.GenTraversableViewLike$FlatMapped$$anonfun$foreach$3$$anonfun$apply$1.apply(GenTraversableViewLike.scala:91)
at scala.collection.immutable.List.foreach(List.scala:318)
at scala.collection.GenTraversableViewLike$FlatMapped$$anonfun$foreach$3.apply(GenTraversableViewLike.scala:90)
at scala.collection.GenTraversableViewLike$FlatMapped$$anonfun$foreach$3.apply(GenTraversableViewLike.scala:89)
at scala.collection.Iterator$class.foreach(Iterator.scala:727)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
at scala.collection.SeqLike$$anon$2.foreach(SeqLike.scala:635)
at scala.collection.GenTraversableViewLike$FlatMapped$class.foreach(GenTraversableViewLike.scala:89)
at scala.collection.SeqViewLike$$anon$4.foreach(SeqViewLike.scala:79)
at scala.collection.GenTraversableViewLike$FlatMapped$class.foreach(GenTraversableViewLike.scala:89)
at scala.collection.SeqViewLike$$anon$4.foreach(SeqViewLike.scala:79)
at scala.collection.GenTraversableViewLike$FlatMapped$$anonfun$foreach$3.apply(GenTraversableViewLike.scala:90)
at scala.collection.GenTraversableViewLike$FlatMapped$$anonfun$foreach$3.apply(GenTraversableViewLike.scala:89)
at scala.collection.Iterator$class.foreach(Iterator.scala:727)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
at scala.collection.SeqLike$$anon$2.foreach(SeqLike.scala:635)
at scala.collection.GenTraversableViewLike$FlatMapped$class.foreach(GenTraversableViewLike.scala:89)
at scala.collection.SeqViewLike$$anon$4.foreach(SeqViewLike.scala:79)
at scala.collection.GenTraversableViewLike$FlatMapped$class.foreach(GenTraversableViewLike.scala:89)
at scala.collection.SeqViewLike$$anon$4.foreach(SeqViewLike.scala:79)
at scala.collection.generic.Growable$class.$plus$plus$eq(Growable.scala:48)
at scala.collection.mutable.ListBuffer.$plus$plus$eq(ListBuffer.scala:176)
at scala.collection.mutable.ListBuffer.$plus$plus$eq(ListBuffer.scala:45)
at scala.collection.TraversableLike$class.to(TraversableLike.scala:629)
at scala.collection.SeqViewLike$AbstractTransformed.to(SeqViewLike.scala:43)
at scala.collection.TraversableOnce$class.toList(TraversableOnce.scala:257)
at scala.collection.SeqViewLike$AbstractTransformed.toList(SeqViewLike.scala:43)
at org.apache.spark.sql.sources.druid.DruidStrategy.apply(DruidStrategy.scala:119)
at org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun$1.apply(QueryPlanner.scala:58)
at org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun$1.apply(QueryPlanner.scala:58)
at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371)
at org.apache.spark.sql.catalyst.planning.QueryPlanner.plan(QueryPlanner.scala:59)
at org.apache.spark.sql.execution.QueryExecution.sparkPlan$lzycompute(QueryExecution.scala:47)
at org.apache.spark.sql.execution.QueryExecution.sparkPlan(QueryExecution.scala:45)
at org.apache.spark.sql.execution.QueryExecution.executedPlan$lzycompute(QueryExecution.scala:52)
at org.apache.spark.sql.execution.QueryExecution.executedPlan(QueryExecution.scala:52)
at org.apache.spark.sql.DataFrame.withCallback(DataFrame.scala:2134)
at org.apache.spark.sql.DataFrame.org$apache$spark$sql$DataFrame$$collect(DataFrame.scala:1542)
at org.apache.spark.sql.DataFrame.collect(DataFrame.scala:1519)
at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation.org$apache$spark$sql$hive$thriftserver$SparkExecuteStatementOperation$$execute(SparkExecuteStatementOperation.scala:226)
at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation$$anon$1$$anon$2.run(SparkExecuteStatementOperation.scala:154)
at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation$$anon$1$$anon$2.run(SparkExecuteStatementOperation.scala:151)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation$$anon$1.run(SparkExecuteStatementOperation.scala:164)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
16/08/22 18:39:59 ERROR thriftserver.SparkExecuteStatementOperation: Error running hive query:
org.apache.hive.service.cli.HiveSQLException: java.lang.ArithmeticException: / by zero
at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation.org$apache$spark$sql$hive$thriftserver$SparkExecuteStatementOperation$$execute(SparkExecuteStatementOperation.scala:246)
at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation$$anon$1$$anon$2.run(SparkExecuteStatementOperation.scala:154)
at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation$$anon$1$$anon$2.run(SparkExecuteStatementOperation.scala:151)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation$$anon$1.run(SparkExecuteStatementOperation.scala:164)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)

This issue doesn't occur in 0.2.1, so I'd like to close it.