Awesome Data Engineering

A curated list of data engineering tools for software developers.

List of content

  1. [Databases](#databases)
  2. [Data Ingestion](#data-ingestion)
  3. [File System](#file-system)
  4. [Serialization format](#serialization-format)
  5. [Stream Processing](#stream-processing)
  6. [Batch Processing](#batch-processing)
  7. [Charts and Dashboards](#charts-and-dashboards)
  8. [Workflow](#workflow)
  9. [Datasets](#datasets)
  10. [Monitoring](#monitoring)
  11. [Docker](#docker)

Databases

  • Relational
  • Key-Value
    • [Redis](http://redis.io/) An open-source, BSD-licensed, advanced key-value cache and store (see the sketch after this list).
    • [Riak](https://docs.basho.com/riak/latest/) A distributed database designed to deliver maximum data availability by distributing data across multiple servers.
    • [AWS DynamoDB](http://aws.amazon.com/dynamodb/) A fast and flexible NoSQL database service for all applications that need consistent, single-digit-millisecond latency at any scale.
    • HyperDex A scalable, searchable key-value store.
    • SSDB A high-performance NoSQL database supporting many data structures; an alternative to Redis.
    • Kyoto Tycoon A lightweight network server on top of the Kyoto Cabinet key-value database, built for high performance and concurrency.
  • Column
    • [Cassandra](http://cassandra.apache.org/) The right choice when you need scalability and high availability without compromising performance.
      • [Cassandra Calculator](http://www.ecyrd.com/cassandracalculator/) This simple form allows you to try out different values for your Apache Cassandra cluster and see what the impact is for your application.
      • CCM A script to easily create and destroy an Apache Cassandra cluster on localhost
    • [HBase](http://hbase.apache.org/) The Hadoop database: a distributed, scalable, big data store.
    • [Infobright](http://www.infobright.org) A column-oriented, open-source analytic database providing both speed and efficiency.
    • [AWS Redshift](http://aws.amazon.com/redshift/) A fast, fully managed, petabyte-scale data warehouse that makes it simple and cost-effective to analyze all your data using your existing business intelligence tools.
  • Document
    • [MongoDB](https://www.mongodb.org/) An open-source document database designed for ease of development and scaling.
    • Percona Server for MongoDB A free, enhanced, fully compatible, open-source, drop-in replacement for MongoDB Community Edition that includes enterprise-grade features and functionality.
    • [Elasticsearch](https://www.elastic.co/) Search and analyze data in real time.
    • [Couchbase](http://www.couchbase.com/) The highest-performing NoSQL distributed database.
    • RethinkDB The open-source database for the realtime web.
  • Graph
    • [Neo4j](http://neo4j.com/) The world's leading graph database.
    • [OrientDB](http://orientdb.com/orientdb/) A second-generation distributed graph database with the flexibility of documents, under a commercially friendly open-source license.
    • [ArangoDB](https://www.arangodb.com/) A distributed, free, and open-source database with a flexible data model for documents, graphs, and key-values.
    • [Titan](http://thinkaurelius.github.io/titan/) A scalable graph database optimized for storing and querying graphs containing hundreds of billions of vertices and edges distributed across a multi-machine cluster.
    • FlockDB A distributed, fault-tolerant graph database by Twitter.
  • Distributed
    • Datomic The fully transactional, cloud-ready, distributed database.
    • Apache Geode An open source, distributed, in-memory database for scale-out applications.
  • Timeseries
    • InfluxDB Scalable datastore for metrics, events, and real-time analytics.
    • OpenTSDB A scalable, distributed Time Series Database.
    • KairosDB A fast, scalable time-series database.
  • Other
    • Tarantool An in-memory database and application server.
    • Greenplum The Greenplum Database (GPDB) is an advanced, fully featured, open-source data warehouse. It provides powerful and rapid analytics on petabyte-scale data volumes.
    • Cayley An open-source graph database from Google.
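
As a quick illustration of the key-value pattern above, here is a minimal sketch using Redis from Python via the third-party redis-py client. The host, port, key names, and TTL are assumptions made for illustration only.

```python
# Minimal Redis key-value usage (assumes `pip install redis` and a Redis
# server on localhost:6379 -- both assumptions, not part of the list above).
import redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

# Cache a value with a 60-second expiry, then read it back.
r.set("session:42", "user-1234", ex=60)
print(r.get("session:42"))  # -> "user-1234"

# Atomic counter, a common caching/metrics pattern.
r.incr("page:home:hits")
print(r.get("page:home:hits"))
```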

Data Ingestion

  • [Kafka](http://kafka.apache.org/) Publish-subscribe messaging rethought as a distributed commit log (see the sketch after this list).
  • [AWS Kinesis](http://aws.amazon.com/kinesis/) A fully managed, cloud-based service for real-time data processing over large, distributed data streams.
  • RabbitMQ Robust messaging for applications.
  • FluentD An open source data collector for unified logging layer.
  • Apache Sqoop A tool designed for efficiently transferring bulk data between Apache Hadoop and structured datastores such as relational databases.
  • Heka Data Acquisition and Processing Made Easy
  • Gobblin A universal data ingestion framework for Hadoop, from LinkedIn.
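
To make the ingestion side concrete, here is a minimal sketch of producing to and consuming from a Kafka topic with the third-party kafka-python package. The broker address, topic name, and consumer group are placeholder assumptions.

```python
# Minimal Kafka produce/consume with kafka-python
# (assumes `pip install kafka-python` and a broker on localhost:9092).
from kafka import KafkaConsumer, KafkaProducer

producer = KafkaProducer(bootstrap_servers="localhost:9092")
# Each message is appended to the topic's distributed commit log as bytes.
producer.send("clickstream", key=b"user-1", value=b'{"page": "/home"}')
producer.flush()

consumer = KafkaConsumer(
    "clickstream",
    bootstrap_servers="localhost:9092",
    group_id="demo-readers",
    auto_offset_reset="earliest",
)
for message in consumer:
    print(message.offset, message.key, message.value)
    break  # read a single record for this sketch
```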

File System

  • [HDFS](https://hadoop.apache.org/docs/r1.2.1/hdfs_design.html)
  • [AWS S3](http://aws.amazon.com/s3/) (see the sketch after this list)
  • [Tachyon](http://tachyon-project.org/) A memory-centric distributed storage system enabling reliable data sharing at memory speed across cluster frameworks such as Spark and MapReduce.
  • Ceph A unified, distributed storage system designed for excellent performance, reliability, and scalability.
  • OrangeFS Orange File System is a branch of the Parallel Virtual File System
  • SnackFS A bite-sized, lightweight, HDFS-compatible file system built over Cassandra.
  • GlusterFS Gluster Filesystem
  • XtreemFS A fault-tolerant distributed file system for all storage needs.
  • SeaweedFS A simple, highly scalable distributed file system with two objectives: to store billions of files and to serve them fast. Instead of supporting full POSIX file-system semantics, SeaweedFS implements only a key-to-file mapping; much like "NoSQL", you could call it "NoFS".
  • S3QL S3QL is a file system that stores all its data online using storage services like Google Storage, Amazon S3, or OpenStack.
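
Object stores such as AWS S3 are usually reached through an SDK rather than a mounted file system; below is a minimal sketch using boto3. The bucket name and object key are placeholders, and valid AWS credentials are assumed to be configured in the environment.

```python
# Minimal S3 read/write with boto3 (assumes `pip install boto3` and
# AWS credentials configured; bucket and key names are made up).
import boto3

s3 = boto3.client("s3")

# Upload a small object, e.g. a daily data extract.
s3.put_object(
    Bucket="my-data-lake",
    Key="raw/2016-01-01/events.json",
    Body=b'{"event": "signup"}',
)

# Read it back.
obj = s3.get_object(Bucket="my-data-lake", Key="raw/2016-01-01/events.json")
print(obj["Body"].read())
```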

Serialization format

  • Apache Avro A data serialization system (see the sketch after this list).
  • Apache Parquet Apache Parquet is a columnar storage format available to any project in the Hadoop ecosystem, regardless of the choice of data processing framework, data model or programming language.
    • Snappy A fast compressor/decompressor. Used with Parquet
    • PigZ A parallel implementation of gzip for modern multi-processor, multi-core machines
  • Apache ORC The smallest, fastest columnar storage for Hadoop workloads
  • Apache Thrift A software framework for scalable cross-language services development.
  • ProtoBuf Protocol Buffers - Google's data interchange format
  • SequenceFile A flat file consisting of binary key/value pairs, used extensively in MapReduce as an input/output format.
  • Kryo A fast and efficient object-graph serialization framework for Java.
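
As a small example of a self-describing serialization format, here is a sketch of writing and reading Avro records with the third-party fastavro package. The schema, record values, and file name are assumptions for illustration.

```python
# Minimal Avro write/read with fastavro (assumes `pip install fastavro`).
from fastavro import reader, writer

schema = {
    "name": "Event",
    "type": "record",
    "fields": [
        {"name": "user_id", "type": "long"},
        {"name": "action", "type": "string"},
    ],
}

records = [{"user_id": 1, "action": "login"}, {"user_id": 2, "action": "purchase"}]

# Serialize to an Avro container file; the schema travels with the data.
with open("events.avro", "wb") as out:
    writer(out, schema, records)

# Deserialize: each row comes back as a plain dict.
with open("events.avro", "rb") as fo:
    for record in reader(fo):
        print(record)
```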

Stream Processing

  • Spark Streaming Makes it easy to build scalable, fault-tolerant streaming applications (see the sketch after this list).
  • Apache Flink A streaming dataflow engine that provides data distribution, communication, and fault tolerance for distributed computations over data streams.
  • Apache Storm A free and open-source distributed realtime computation system.
  • Apache Samza A distributed stream processing framework.
  • Apache NiFi An easy-to-use, powerful, and reliable system to process and distribute data.
  • VoltDB An in-memory, scale-out SQL database built for fast data.
  • PipelineDB The streaming SQL database. https://www.pipelinedb.com
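
The classic streaming word count below is a minimal Spark Streaming sketch in Python. It assumes pyspark is installed and that a text source is writing to localhost:9999 (for example `nc -lk 9999`); the host, port, and batch interval are placeholders.

```python
# Minimal Spark Streaming word count over a socket source.
from pyspark import SparkContext
from pyspark.streaming import StreamingContext

sc = SparkContext("local[2]", "StreamingWordCount")
ssc = StreamingContext(sc, batchDuration=5)  # 5-second micro-batches

lines = ssc.socketTextStream("localhost", 9999)
counts = (lines.flatMap(lambda line: line.split())
               .map(lambda word: (word, 1))
               .reduceByKey(lambda a, b: a + b))
counts.pprint()  # print the counts computed in each batch

ssc.start()
ssc.awaitTermination()
```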

Batch Processing

Charts and Dashboards

  • [Highcharts](http://www.highcharts.com/) A charting library written in pure JavaScript, offering an easy way of adding interactive charts to your web site or web application.
  • ZingChart Fast JavaScript charts for any data set.
  • C3.js D3-based reusable chart library.
  • [D3.js](http://d3js.org/) A JavaScript library for manipulating documents based on data.
    • [D3Plus](http://d3plus.org) D3's simpler, easier-to-use cousin; mostly predefined templates that you can just plug data into.
  • SmoothieCharts A JavaScript Charting Library for Streaming Data.
  • PyXley Python helpers for building dashboards using Flask and React

Workflow

  • [Luigi](https://github.com/spotify/luigi) A Python module that helps you build complex pipelines of batch jobs.
    • CronQ An application cron-like system. Used with Luigi.
  • [Cascading](http://www.cascading.org/) A Java-based application development platform.
  • [Airflow](https://github.com/airbnb/airflow) A system to programmatically author, schedule, and monitor data pipelines (see the sketch after this list).
  • [Azkaban](https://azkaban.github.io/) A batch workflow job scheduler created at LinkedIn to run Hadoop jobs. Azkaban resolves the ordering through job dependencies and provides an easy-to-use web user interface to maintain and track your workflows.
  • Oozie A workflow scheduler system to manage Apache Hadoop jobs.
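
For a feel of how these schedulers express dependencies, here is a minimal sketch of an Airflow DAG with two dependent tasks (Airflow 1.x-style imports). The DAG id, schedule, start date, and bash commands are placeholder assumptions.

```python
# Minimal Airflow DAG: "load" runs only after "extract" succeeds.
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.bash_operator import BashOperator

default_args = {
    "owner": "data-eng",
    "retries": 1,
    "retry_delay": timedelta(minutes=5),
}

dag = DAG(
    dag_id="daily_etl_example",
    default_args=default_args,
    start_date=datetime(2016, 1, 1),
    schedule_interval="@daily",
)

extract = BashOperator(task_id="extract", bash_command="echo extracting", dag=dag)
load = BashOperator(task_id="load", bash_command="echo loading", dag=dag)

# Declare the dependency: extract -> load.
load.set_upstream(extract)
```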

ELK (Elasticsearch, Logstash, Kibana)

  • docker-logstash A highly configurable logstash (1.4.4) docker image running Elasticsearch (1.7.0) and Kibana (3.1.2).
  • elasticsearch-jdbc JDBC importer for Elasticsearch
  • ZomboDB Postgres Extension that allows creating an index backed by Elasticsearch

Docker

  • Gockerize Package Golang services into minimal Docker containers.
  • Flocker Easily manage Docker containers and their data.
  • Rancher RancherOS is a ~20 MB Linux distribution that runs the entire OS as Docker containers.
  • Kontena Application containers for the masses.
  • Weave Weaving Docker containers into applications http://weave.works
  • Zodiac A lightweight tool for easy deployment and rollback of dockerized applications
  • cAdvisor Analyzes resource usage and performance characteristics of running containers
  • Micro S3 persistence Docker microservice for saving/restoring volume data to S3
  • Dockup Docker image to backup/restore your Docker container volumes to AWS S3
  • Rocker-compose Docker composition tool with idempotency features for deploying apps composed of multiple containers.
  • Nomad Nomad is a cluster manager, designed for both long lived services and short lived batch processing workloads

Datasets

Realtime

Data Dumps

Monitoring

Prometheus

  • Prometheus.io An open-source service monitoring system and time-series database (see the sketch after this list).
  • HAProxy Exporter Simple server that scrapes HAProxy stats and exports them via HTTP for Prometheus consumption
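
Applications typically expose metrics over HTTP for Prometheus to scrape; here is a minimal sketch using the official prometheus_client Python package. The metric names, port, and update loop are assumptions for illustration.

```python
# Minimal Prometheus instrumentation (assumes `pip install prometheus_client`).
import random
import time

from prometheus_client import Counter, Gauge, start_http_server

REQUESTS = Counter("app_requests", "Total requests handled")
QUEUE_DEPTH = Gauge("app_queue_depth", "Items currently queued")

if __name__ == "__main__":
    # Serve metrics at http://localhost:8000/metrics for Prometheus to scrape.
    start_http_server(8000)
    while True:
        REQUESTS.inc()
        QUEUE_DEPTH.set(random.randint(0, 10))
        time.sleep(1)
```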

Cheers to The Data Engineering Ecosystem: An Interactive Map

Inspired by the awesome list. Created by Insight Data Engineering fellows.

License

CC0

To the extent possible under law, Igor Barinov has waived all copyright and related or neighboring rights to this work.
