License
Apache licensed.
Combiners are an optimization in MapReduce that allows local aggregation before the shuffle-and-sort phase. Their primary goal is to save network bandwidth by minimizing the number of key/value pairs shuffled between mappers and reducers. The combiner is registered in the job driver:

> job.setCombinerClass(LogReducer.class);
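To illustrate what the combiner does, here is a minimal stand-alone sketch (plain Java, no Hadoop dependency) of the local aggregation step: per-key counts are summed on the mapper side, so one pair per distinct key is shuffled instead of one pair per record. `CombinerDemo` and its log-level keys are hypothetical names for illustration.

```java
import java.util.*;

public class CombinerDemo {
    // Simulates combiner-style local aggregation: sum the counts for each
    // key emitted by a mapper before anything is sent over the network.
    static Map<String, Integer> combine(List<String> mapperOutputKeys) {
        Map<String, Integer> combined = new HashMap<>();
        for (String key : mapperOutputKeys) {
            combined.merge(key, 1, Integer::sum); // add 1, or sum with existing count
        }
        return combined;
    }

    public static void main(String[] args) {
        // Hypothetical mapper output for a log-analysis job: one key per record.
        List<String> keys = Arrays.asList("ERROR", "INFO", "ERROR", "WARN", "ERROR");
        Map<String, Integer> combined = combine(keys);
        System.out.println(combined.get("ERROR")); // 3
        System.out.println(combined.size());       // 3 distinct keys shuffled instead of 5 pairs
    }
}
```

This is why the reducer class can often double as the combiner (as `setCombinerClass(LogReducer.class)` does above): the aggregation is associative and commutative, so applying it early does not change the final result.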
Check src/test/resource/SampleLog.txt for a sample input log file.
Execute the job as:
> hadoop jar LogAnalyzerAdvancedMapReduce-0.0.1-SNAPSHOT.jar in /logOpPartitioned
The job output in HDFS will contain two output files, one from each of the two reducers.
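Which of the two reducers a key is routed to is decided by the job's partitioner. As a reference point, Hadoop's default `HashPartitioner` computes `(key.hashCode() & Integer.MAX_VALUE) % numReduceTasks`; the stand-alone sketch below (plain Java, hypothetical `PartitionDemo` class) shows that logic without a Hadoop dependency:

```java
public class PartitionDemo {
    // Mirrors the default HashPartitioner logic: mask off the sign bit so the
    // result of % is non-negative, then bucket by the number of reduce tasks.
    static int partitionFor(String key, int numReduceTasks) {
        return (key.hashCode() & Integer.MAX_VALUE) % numReduceTasks;
    }

    public static void main(String[] args) {
        int reducers = 2; // two reducers -> two output files (part-r-00000, part-r-00001)
        System.out.println(partitionFor("ERROR", reducers));
        System.out.println(partitionFor("INFO", reducers));
    }
}
```

All values sharing a key land in the same partition, so each of the two output files holds a disjoint subset of the keys.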