srikalyc / Sql4D

SQL interface to Druid.


Got Task Failed error

certxg opened this issue · comments

Tried out your 3.1.1 version of Sql4DClient with the sample data, using the simple INSERTs. The .csv file was successfully created, and the MySQL druid_tasks table registered and showed the tasks, but the client returned a 'Task Failed' message. I wasn't able to pinpoint the cause. I only ran ZooKeeper and the Overlord server; should I run the Broker, Coordinator, and Historical servers too? Any pointers would be appreciated.

Go through this blog: http://druidwithsql.tumblr.com/post/108054375927/simple-insert-deleting-data-drop-table-in-druid and let me know if you still face issues. BTW, you can use the trace=true command to enable debugging in the Sql4DClient. If you want to see exactly what happened with your task, use the Overlord console.
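If the console is inconvenient, the Overlord's task status can also be fetched over HTTP. A minimal sketch, assuming the standard Druid indexing-service endpoint /druid/indexer/v1/task/<taskId>/status; the host, port, and task ID below are placeholders you would replace with your own:

```python
import json
import urllib.request

def task_status_url(overlord_host, overlord_port, task_id):
    """Build the Overlord task-status URL (standard Druid indexing-service API)."""
    return "http://%s:%d/druid/indexer/v1/task/%s/status" % (
        overlord_host, overlord_port, task_id)

def fetch_task_status(overlord_host, overlord_port, task_id):
    """Ask the Overlord for a task's status; returns the parsed JSON payload."""
    url = task_status_url(overlord_host, overlord_port, task_id)
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)

if __name__ == "__main__":
    # Placeholder values -- substitute your Overlord host/port and a real task ID.
    print(fetch_task_status("localhost", 8087, "index_adf_..."))
```

The status payload tells you whether the task is RUNNING, SUCCESS, or FAILED; for the actual failure reason you still need the task log from the Overlord console or the task's log file.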

Thank you. This is the log I got with 3.1.0 when trace=true, along with the error:
Http org.apache.http.client.HttpResponseException: Not Found

nutch@ubuntu:~/Sql4D-old$ java -jar lib/Sql4DClient-3.1.0.jar -bh localhost -bp 8080 -ch localhost -cp 8082 -oh localhost -op 8083 -i 50

trace=true;
INSERT INTO adf (timestamp, provider, title, uuid, DOUBLE_SUM(click) AS click) VALUES ('2014-11-01 00:00:00', 'siri', 'FE', 'adb-dfgdf-32de1', '5') WHERE interval BETWEEN '2014-11-01' AND '2014-11-02' BREAK BY 'day';
Written to /tmp/022735a0-a9c8-4533-8446-ea07be4a5611.csv
{
    "granularitySpec": {
        "gran": "day",
        "intervals": ["2014-11-01/2014-11-02"],
        "type": "uniform"
    },
    "dataSource": "adf",
    "firehose": {
        "baseDir": "/tmp",
        "type": "local",
        "parser": {
            "timestampSpec": {
                "column": "timestamp",
                "format": "yyyy-MM-dd HH:mm:ss"
            },
            "data": {
                "columns": [
                    "timestamp",
                    "provider",
                    "title",
                    "uuid",
                    "click"
                ],
                "dimensions": [
                    "provider",
                    "title",
                    "uuid"
                ],
                "format": "csv"
            }
        },
        "filter": "022735a0-a9c8-4533-8446-ea07be4a5611.csv"
    },
    "type": "index",
    "aggregators": [{
        "name": "click",
        "fieldName": "click",
        "type": "doubleSum"
    }]
}

Written to /tmp/b308773c-7904-4ea7-91ec-756dc0ee7e70.csv
Http org.apache.http.client.HttpResponseException: Not Found

(0.641000 sec)
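For what it's worth, 'Not Found' is an HTTP 404: the client POSTed the task spec to a URL nothing is serving, which is what happens when the Overlord port is wrong. As a sanity check, a spec like the one above can be submitted to the Overlord directly, assuming the standard Druid endpoint /druid/indexer/v1/task (the host/port are placeholders):

```python
import json
import urllib.request

def task_endpoint(host, port):
    """URL of the Overlord's task-submission endpoint (standard Druid API)."""
    return "http://%s:%d/druid/indexer/v1/task" % (host, port)

def submit_task(spec, overlord_host="localhost", overlord_port=8087):
    """POST an index-task spec to the Overlord.

    A 404 here reproduces the 'Not Found' error from the client log above,
    which almost always means the wrong host/port was used.
    """
    req = urllib.request.Request(
        task_endpoint(overlord_host, overlord_port),
        data=json.dumps(spec).encode("utf-8"),
        headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)  # on success, contains the assigned task ID
```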

The 3.1.1 version of the client, on the other hand, gave a different type of error, likely related to JSON parsing.

Sometimes I got a different error:
Http java.net.SocketTimeoutException: Read timed out.

It could be related to the accessibility/availability of those servers' listening ports...

If you followed the blog I sent you, then the option should be
-op 8087
and not -op 8083.
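A quick way to confirm which of these ports actually has a listener, before pointing the client at them, is a plain TCP connect check. A minimal sketch; the host and the port-to-service mapping below are taken from this thread and may differ in your setup:

```python
import socket

def is_port_open(host, port, timeout=2.0):
    """Return True if something is listening on host:port (TCP connect check)."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        return s.connect_ex((host, port)) == 0

if __name__ == "__main__":
    # Ports as used in this thread: broker 8080, coordinator 8082, overlord 8087.
    for name, port in [("broker", 8080), ("coordinator", 8082), ("overlord", 8087)]:
        state = "reachable" if is_port_open("localhost", port) else "NOT reachable"
        print("%s (port %d): %s" % (name, port, state))
```

If the Overlord port shows as not reachable, the client's POST can only fail, regardless of which -op value you pass.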

Thank you for the quick response.

I changed to Druid 0.6.154 and Kafka 0.7.2 and it is working, with

druid.extensions.coordinates=["io.druid.extensions:druid-kafka-seven:0.6.154"]
instead of
druid.extensions.coordinates=["io.druid.extensions:druid-kafka-eight:0.6.160"]

The Sql4DClient version is 3.1.1.

Cool.

Thanks for the good work!