Hadoop Cluster Management with Intelligent Defaults
Hadoop is an Apache Java framework for distributed processing of enormous datasets across large clusters. It combines a MapReduce computation engine with HDFS, a distributed filesystem modeled on the Google File System.
Abstraction layers such as Cascading (for Java) and Cascalog (for Clojure) make writing MapReduce queries quite pleasant. Indeed, running Hadoop locally with Cascalog couldn't be easier.
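To give a flavor of the style, here's a minimal Cascalog word-count query run locally against an in-memory source; the namespace and sample sentences are made up for illustration:

(ns example.wordcount
  (:use cascalog.api)
  (:require [cascalog.ops :as c]))

;; Split a sentence into individual words.
(defmapcatop split [sentence]
  (seq (.split sentence "\\s+")))

;; Count word occurrences across a couple of in-memory sentences
;; and print the results to stdout.
(defn -main [& args]
  (let [sentences (memory-source-tap [["hello hadoop"] ["hello cascalog"]])]
    (?<- (stdout)
         [?word ?count]
         (sentences ?sentence)
         (split ?sentence :> ?word)
         (c/count ?count))))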
Unfortunately, graduating one's MapReduce jobs to the cluster level isn't so easy. Amazon's Elastic MapReduce is a great option for getting up and running fast; but what to do if you want to configure your own cluster?
After surveying existing tools, I decided to write my own layer over Pallet, a wonderful cloud-provisioning library written in Clojure. Pallet runs on top of jclouds, which allows Pallet to define its operations independently of any one cloud provider. Switching between clouds involves a change of login credentials, nothing more.
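As a rough sketch of what that looks like, building a compute service for a different provider is just a matter of swapping the provider string and keys; the provider identifiers and credentials below are placeholders:

(require '[pallet.compute :as compute])

;; One provider...
(def ec2
  (compute/compute-service "aws-ec2"
                           :identity "your-access-key-id"
                           :credential "your-secret-access-key"))

;; ...or another. Only the provider string and credentials change.
(def rackspace
  (compute/compute-service "cloudservers-us"
                           :identity "your-username"
                           :credential "your-api-key"))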
To include pallet-hadoop in your project, add the following lines to :dev-dependencies in your project.clj file:
[pallet-hadoop "0.3.3"]
[org.cloudhoist/pallet-jclouds "1.4.2"]
[org.jclouds/jclouds-all "1.4.2"]
[org.jclouds.driver/jclouds-jsch "1.4.2"]
[org.jclouds.driver/jclouds-slf4j "1.4.2"]
[ch.qos.logback/logback-classic "1.0.1"]
[ch.qos.logback/logback-core "1.0.1"]
You'll also need to add the Sonatype repository to get access to Pallet. Add this key-value pair to your project.clj file:
:repositories {"sonatype" "http://oss.sonatype.org/content/repositories/releases/"}
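Putting the pieces together, a minimal project.clj might look something like this (the project name and version are placeholders):

(defproject my-hadoop-project "0.1.0-SNAPSHOT"
  :dependencies [[org.clojure/clojure "1.3.0"]]
  :dev-dependencies [[pallet-hadoop "0.3.3"]
                     [org.cloudhoist/pallet-jclouds "1.4.2"]
                     [org.jclouds/jclouds-all "1.4.2"]
                     [org.jclouds.driver/jclouds-jsch "1.4.2"]
                     [org.jclouds.driver/jclouds-slf4j "1.4.2"]
                     [ch.qos.logback/logback-classic "1.0.1"]
                     [ch.qos.logback/logback-core "1.0.1"]]
  :repositories {"sonatype"
                 "http://oss.sonatype.org/content/repositories/releases/"})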
For a detailed example of how to run Pallet-Hadoop, see the example project here. For more on the project's design, see the project wiki.
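To give a sense of what the example project does, the flow looks roughly like the sketch below; the node groups, machine specs, and property maps are illustrative approximations, so treat the example project and wiki as the authoritative reference:

(use 'pallet-hadoop.node)
(use 'pallet.compute)

;; Compute service for EC2 (credentials are placeholders).
(def ec2-service
  (compute-service "aws-ec2"
                   :identity "ec2-access-key-id"
                   :credential "ec2-secret-access-key"))

;; A small cluster: one jobtracker/namenode master and two slaves,
;; with a couple of Hadoop properties overridden.
(def example-cluster
  (cluster-spec :private
                {:jobtracker (node-group [:jobtracker :namenode])
                 :slaves (slave-group 2)}
                :base-machine-spec {:os-family :ubuntu
                                    :os-version-matches "10.10"
                                    :os-64-bit true}
                :base-props {:mapred-site {:mapred.task.timeout 300000
                                           :mapred.reduce.tasks 3}}))

;; Boot the nodes, then start the Hadoop daemons.
(boot-cluster example-cluster :compute ec2-service)
(start-cluster example-cluster :compute ec2-service)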
Pallet-Hadoop version 0.3.3 uses Pallet 0.7.2 and jclouds 1.4.2, and requires Clojure 1.3 or later.