cristal-smac / mas4data

MAS4Data: Multiagent systems for processing very large datasets

Context

MapReduce

Data science involves processing large volumes of data, which requires distributed file systems and parallel programming. This emerging distributed computing topic brings new challenges related to task allocation and load-balancing: such systems must adapt to various settings without configuration requiring user expertise, and respond to rapid changes.

We consider MapReduce, the most prominent distributed data processing model for tackling vast amounts of data in parallel on commodity clusters (Dean and Ghemawat, 2008). MapReduce jobs are divided into a set of map tasks and reduce tasks that are distributed over nodes. The task allocation among the reducers is fixed a priori by the partition function. For instance, the default partitioner of the most popular implementation, Hadoop, assigns each key to the reducer given by the key's hashcode modulo the number of reducers. Such a task allocation can be problematic. Firstly, several data skews in MapReduce applications lead to an unbalanced workload during the reduce phase (Kwon et al., 2013). Secondly, an unfair allocation can occur during the reduce phase due to the heterogeneous performance of the nodes. Thirdly, load-balancing can be challenged by performance variations due to exogenous reasons.
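This default partitioning can be sketched in a few lines of Scala (the names below are illustrative, not the actual Hadoop API). With a skewed key distribution, every record carrying the hot key lands on the same reducer, however many reducers exist:

```scala
// Sketch of Hadoop's default partitioner: the reducer index is the
// key's hashcode (made non-negative) modulo the number of reducers.
def defaultPartition(key: String, numReducers: Int): Int =
  (key.hashCode & Int.MaxValue) % numReducers

// A skewed dataset: one key dominates the input.
val records = List.fill(1000)("hotKey") ++ List("a", "b", "c")

// Records per reducer under the default partitioning with 4 reducers.
val load: Map[Int, Int] =
  records.groupBy(k => defaultPartition(k, 4)).map { case (r, ks) => r -> ks.size }
// The reducer owning "hotKey" processes at least 1000 records,
// while the others get at most a handful.
```

Note that all occurrences of a key are necessarily sent to the same reducer, which is exactly why key skew translates into reducer-load skew.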

Multiagent systems

In order to tackle the problem of load-balancing and task allocation in applications such as those that motivate this work, multi-agent technologies have received a lot of attention (Jiang, 2016). The SMAC team (Multiagent Systems and Behaviours) of the CRIStAL laboratory uses the multiagent system (MAS) paradigm, in particular for complex distributed problem solving. A multi-agent system is a decentralized system where multiple agents take local decisions based on their perception of the environment, such that a solution to a complex problem emerges from the interactions between simple individual behaviours. Most existing works adopting the market-based approach model the load-balancing problem as a non-cooperative game in order to optimize user-centric metrics rather than system-centric ones such as the global runtime we consider here.

Prototype

We provide a multi-agent system for task reallocation among distributed nodes, based on the location of the resources required to perform these tasks, in order to minimize the makespan. In particular, we apply our negotiation framework to load-balancing the reduce phase of the distributed MapReduce model in order to process large datasets.

The dynamic and ongoing task reallocation process takes place concurrently with the task execution, so the distributed system adapts to disruptive phenomena (task consumption, slowed-down nodes). Apart from decentralization, i.e. avoiding the performance bottlenecks of global control, our multi-agent approach for situated task allocation supports two additional crucial requirements: (a) concurrency, where task reallocation and task execution are concurrent, and (b) adaptation, where task reallocation is triggered when a disruptive event occurs.

See the following instructions for the deployment.

References

Dean, J. and Ghemawat, S. (2008). MapReduce: simplified data processing on large clusters. Commun. ACM 51(1), 107–113.

Kwon, Y., Ren, K., Balazinska, M. and Howe, B. (2013). Managing skew in Hadoop. IEEE Data Eng. Bull. 36(1), 24–33.

Jiang, Y. (2016). A survey of task allocation and load balancing in distributed systems. IEEE Transactions on Parallel and Distributed Systems 27(2), 585–599.

Demonstration

The video shows some examples of task allocation with our MAS. We consider real-world weather data and count the number of records per temperature over the whole dataset.
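The job in the video is a counting job, which can be sketched as map and reduce functions over an in-memory collection (the record fields are illustrative assumptions): the map phase emits a `(temperature, 1)` pair per record, and the reduce phase sums the counts per temperature.

```scala
// A weather record; only the temperature matters for this job.
case class Record(station: String, temperature: Int)

// Map phase: emit (temperature, 1) for each record.
def mapPhase(records: Seq[Record]): Seq[(Int, Int)] =
  records.map(r => (r.temperature, 1))

// Reduce phase: sum the counts for each temperature key.
def reducePhase(pairs: Seq[(Int, Int)]): Map[Int, Int] =
  pairs.groupBy(_._1).map { case (t, vs) => t -> vs.map(_._2).sum }

val sample = Seq(Record("A", 12), Record("B", 12), Record("C", -3))
reducePhase(mapPhase(sample)) // Map(12 -> 2, -3 -> 1)
```

Each temperature value is a reduce key, so the cost of a reduce task is proportional to the number of records with that temperature, which is where the skew shown in the video comes from.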

You can observe the task negotiations during the reduce phase. We use the default Hadoop partitioning as a reference point in order to illustrate several features of our MAS:

  1. The basic adaptation of our multiagent system, where each reducer is a negotiating agent, improves the task allocation;

  2. The extension of our multiagent system, where the tasks are divisible, allows the negotiation of expensive tasks;

  3. The multi-auctions extension, which allows reducers to be a bidder in more than one auction at the same time, improves the efficiency of the negotiation process.
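The divisible-task extension (feature 2 above) can be sketched as follows, under the assumption (which holds for the counting job) that the reduce operation is associative, so partial results over chunks of a key's values can be merged; the names are illustrative:

```scala
// A reduce task: one key and the values to aggregate for it.
case class ReduceTask(key: Int, values: Seq[Int])

// Split an expensive task into subtasks over chunks of its values,
// so that the subtasks can be negotiated separately.
def split(task: ReduceTask, parts: Int): Seq[ReduceTask] =
  task.values
    .grouped(math.max(1, task.values.size / parts))
    .map(chunk => ReduceTask(task.key, chunk))
    .toSeq

def runSubtask(t: ReduceTask): Int = t.values.sum // partial count for the key
def merge(partials: Seq[Int]): Int = partials.sum // combine partial results

val expensive = ReduceTask(key = 12, values = Seq.fill(6)(1))
val subtasks  = split(expensive, 3)
merge(subtasks.map(runSubtask)) // same result as running the task whole
```

Splitting matters because an indivisible expensive task bounds the makespan from below; once split, its subtasks can be spread over several reducers.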

Please note that in the video:

  • the tasks with a black border are currently being performed by the reducer, while the tasks with a green border have already been performed;

  • the cheap tasks are drawn larger than they should be so that they can be seen properly; thus, the apparent unfairness of the task allocation is partly a graphical effect.

Publications

Abstract: We study the problem of task reallocation for load-balancing of MapReduce jobs in applications that process large datasets. In this context, we propose a novel strategy based on cooperative agents used to optimise the task scheduling in a single MapReduce job. The novelty of our strategy lies in the ability of agents to identify opportunities within a current unbalanced allocation, which in turn trigger concurrent and one-to-many negotiations amongst agents to locally reallocate some of the tasks within a job. Our contribution is that tasks are reallocated according to the proximity of the resources and they are performed in accordance to the capabilities of the nodes in which agents are situated. To evaluate the adaptivity and responsiveness of our approach, we implement a prototype test-bed and conduct a vast panel of experiments in a heterogeneous environment and by exploring varying hardware configurations. This extensive experimentation reveals that our strategy significantly improves the overall runtime over the classical Hadoop data processing.

Abstract: We study a novel location-aware strategy for distributed systems where cooperating agents perform the load-balancing. The strategy allows agents to identify opportunities within a current unbalanced allocation, which in turn triggers concurrent and one-to-many negotiations amongst agents to locally reallocate some tasks. The tasks are reallocated according to the proximity of the resources and they are performed in accordance with the capabilities of the nodes in which agents are situated. This dynamic and ongoing negotiation process takes place concurrently with the task execution and so the task allocation process is adaptive to disruptions (task consumption, slowing down nodes). We evaluate the strategy in a multi-agent deployment of the MapReduce design pattern for processing large datasets. Empirical results demonstrate that our strategy significantly improves the overall runtime of the data processing.

  • Negotiation Strategy of Divisible Tasks for Large Dataset Processing Quentin Baert, Anne-Cécile Caron, Maxime Morge, Jean-Christophe Routier in the 15th European Conference on Multi-Agent Systems (EUMAS'2017). Evry, 14-15 December 2017.

Abstract: MapReduce is a design pattern for processing large datasets on a cluster. Its performances depend on some data skews and on the runtime environment. In order to tackle these problems, we propose an adaptive multiagent system. The agents interact during the data processing and the dynamic task allocation is the outcome of negotiations. These negotiations aim at improving the workload partition among the nodes within a cluster and so decrease the runtime of the whole process. Moreover, since the negotiations are iterative the system is responsive in case of node performance variations. In this full original paper, we show how, when a task is divisible, an agent may split it in order to negotiate its subtasks.

Abstract: MapReduce is a design pattern for processing large datasets distributed on a cluster. Its performances are linked to the data structure and the runtime environment. Indeed, data skew can yield an unfair task allocation, but even when the initial allocation produced by the partition function is well balanced, an unfair allocation can occur during the reduce phase due to the heterogeneous performance of nodes. For these reasons, we propose an adaptive multi-agent system. In our approach, the reducer agents interact during the job and the task re-allocation is based on negotiation in order to decrease the workload of the most loaded reducer and so the runtime. In this paper, we propose and evaluate two negotiation strategies. Finally, we experiment our multi-agent system with real-world datasets over heterogeneous runtime environment.

Abstract: Many companies are using MapReduce applications to process very large amounts of data. Static optimization of such applications is complex because they are based on user-defined operations, called map and reduce, which prevents some algebraic optimization. In order to optimize the task allocation, several systems collect data from previous runs and predict the performance doing job profiling. However they are not effective during the learning phase, or when a new type of job or data set appears. In this paper, we present an adaptive multi-agent system for large data sets analysis with MapReduce. We do not preprocess data and we adopt a dynamic approach, where the reducer agents interact during the job. In order to decrease the workload of the most loaded reducer - and so the execution time - we propose a task re-allocation based on negotiation.

Grant

This project is supported by the CNRS Challenge Mastodons.

About

License: GNU General Public License v3.0


Languages

Scala 95.1%, Shell 4.3%, Python 0.6%