piflow

πflow is a big data flow engine with Spark support


πFlow is an easy-to-use, powerful big data pipeline system. Try PiFlow v0.9 at http://piflow.cstcloud.cn/

Features


  • Easy to use
    • provides a WYSIWYG web interface to configure data flows
    • monitors data flow status
    • checks the logs of data flows
    • provides checkpoints
  • Strong scalability
    • supports customized development of data processing components
  • Superior performance
    • based on the distributed computing engine Spark
  • Powerful
    • 100+ data processing components available
    • includes Spark, MLlib, Hadoop, Hive, HBase, Solr, Redis, Memcache, Elasticsearch, JDBC, MongoDB, HTTP, FTP, XML, CSV, JSON, etc.



Requirements
  • JDK 1.8
  • Scala 2.11.8
  • Apache Maven 3.1.0 or newer
  • Spark 2.1.0, 2.2.0, or 2.3.0
  • Hadoop 2.6.0
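Before building, it can save time to verify that the required tools are on the PATH. A minimal check (tool names only; it does not verify the exact versions listed above):

```shell
# Check that each build prerequisite is installed before compiling piflow.
for tool in java scala mvn; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: installed"
  else
    echo "$tool: NOT FOUND -- install it before building"
  fi
done
```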

Getting Started

To Build:

  • install external packages into the local Maven repository

        mvn install:install-file -Dfile=/../piflow/piflow-bundle/lib/spark-xml_2.11-0.4.2.jar -DgroupId=com.databricks -DartifactId=spark-xml_2.11 -Dversion=0.4.2 -Dpackaging=jar
        mvn install:install-file -Dfile=/../piflow/piflow-bundle/lib/java_memcached-release_2.6.6.jar -DgroupId=com.memcached -DartifactId=java_memcached-release -Dversion=2.6.6 -Dpackaging=jar
        mvn install:install-file -Dfile=/../piflow/piflow-bundle/lib/ojdbc6- -DgroupId=oracle -DartifactId=ojdbc6 -Dversion= -Dpackaging=jar
        mvn install:install-file -Dfile=/../piflow/piflow-bundle/lib/edtftpj.jar -DgroupId=ftpClient -DartifactId=edtftp -Dversion=1.0.0 -Dpackaging=jar
  • mvn clean package -Dmaven.test.skip=true

        [INFO] Replacing original artifact with shaded artifact.
        [INFO] Reactor Summary:
        [INFO] piflow-project ..................................... SUCCESS [  4.369 s]
        [INFO] piflow-core ........................................ SUCCESS [01:23 min]
        [INFO] piflow-configure ................................... SUCCESS [ 12.418 s]
        [INFO] piflow-bundle ...................................... SUCCESS [02:15 min]
        [INFO] piflow-server ...................................... SUCCESS [02:05 min]
        [INFO] ------------------------------------------------------------------------
        [INFO] BUILD SUCCESS
        [INFO] ------------------------------------------------------------------------
        [INFO] Total time: 06:01 min
        [INFO] Finished at: 2020-05-21T15:22:58+08:00
        [INFO] Final Memory: 118M/691M
        [INFO] ------------------------------------------------------------------------

Run Piflow Server:

  • run piflow server in IntelliJ:

    • download piflow: git clone https://github.com/cas-bigdatalab/piflow.git

    • import piflow into IntelliJ

    • edit config.properties file

    • build piflow to generate piflow jar:

      • Edit Configurations --> Add New Configuration --> Maven
      • Name: package
      • Command line: clean package -Dmaven.test.skip=true -X
      • run 'package' (piflow jar file will be built in ../piflow/piflow-server/target/piflow-server-0.9.jar)
    • run HttpService:

      • Edit Configurations --> Add New Configuration --> Application
      • Name: HttpService
      • Main class : cn.piflow.api.Main
      • Environment Variable: SPARK_HOME=/opt/spark-2.2.0-bin-hadoop2.6 (change the path to your Spark home)
      • run 'HttpService'
    • test HttpService:

      • edit /../piflow/piflow-server/src/main/scala/cn/piflow/api/HTTPClientStartMockDataFlow.scala, changing the piflow server IP and port to match your configuration
      • run HTTPClientStartMockDataFlow
  • run piflow server from a release version:

    • download piflow.tar.gz:

    • unzip piflow.tar.gz:
      tar -zxvf piflow.tar.gz

    • edit config.properties

    • run start.sh, stop.sh, restart.sh, or status.sh

    • test piflow server

      • set PIFLOW_HOME
        • vim /etc/profile
          export PIFLOW_HOME=/yourPiflowPath
          export PATH=$PATH:$PIFLOW_HOME/bin

        • command
          piflow flow start example/mockDataFlow.json
          piflow flow stop appID
          piflow flow info appID
          piflow flow log appID

          piflow flowGroup start example/mockDataGroup.json
          piflow flowGroup stop groupId
          piflow flowGroup info groupId

  • how to configure config.properties

    #spark and yarn config
    #hdfs default file system
    #yarn resourcemanager.hostname
    #if you want to use hive, set hive metastore uris
    #show data in log, set 0 if you do not want to show data in logs
    #server port
    #h2db port

Run Piflow Web:

Restful API:

  • flow json

    flow example:

    {
      "flow": {
        "name": "MockData",
        "uuid": "8a80d63f720cdd2301723b7461d92600",
        "executorNumber": "1",
        "executorMemory": "1g",
        "executorCores": "1",
        "driverMemory": "1g",
        "paths": [
          {
            "inport": "",
            "from": "MockData",
            "to": "ShowData",
            "outport": ""
          }
        ],
        "stops": [
          {
            "name": "MockData",
            "bundle": "cn.piflow.bundle.common.MockData",
            "uuid": "8a80d63f720cdd2301723b7461d92604",
            "properties": {
              "schema": "title:String, author:String, age:Int",
              "count": "10"
            },
            "customizedProperties": {}
          },
          {
            "name": "ShowData",
            "bundle": "cn.piflow.bundle.external.ShowData",
            "uuid": "8a80d63f720cdd2301723b7461d92602",
            "properties": {
              "showNumber": "5"
            },
            "customizedProperties": {}
          }
        ]
      }
    }
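A flow JSON like the one above is submitted to the piflow server over HTTP. The sketch below validates the JSON locally before submitting; the /flow/start route and the server address are assumptions -- check HTTPClientStartMockDataFlow.scala and your config.properties for the exact endpoint and ip:port:

```shell
# Assumed server address -- replace with the ip:port from your config.properties.
SERVER=127.0.0.1:8002

# Write a minimal flow definition (stops omitted for brevity).
cat > /tmp/mockDataFlow.json <<'EOF'
{
  "flow": {
    "name": "MockData",
    "executorNumber": "1",
    "executorMemory": "1g",
    "executorCores": "1",
    "driverMemory": "1g",
    "paths": [
      { "inport": "", "from": "MockData", "to": "ShowData", "outport": "" }
    ],
    "stops": []
  }
}
EOF

# Sanity-check the JSON before submitting it.
python3 -m json.tool /tmp/mockDataFlow.json > /dev/null && echo "flow json is valid"

# Submit the flow (uncomment once the piflow server is running):
# curl -X POST "http://$SERVER/flow/start" -H "Content-Type: application/json" -d @/tmp/mockDataFlow.json
```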


  • Command line:

    • set PIFLOW_HOME
      vim /etc/profile
      export PIFLOW_HOME=/yourPiflowPath/piflow-bin
      export PATH=$PATH:$PIFLOW_HOME/bin

    • command example
      piflow flow start yourFlow.json
      piflow flow stop appID
      piflow flow info appID
      piflow flow log appID

      piflow flowGroup start yourFlowGroup.json
      piflow flowGroup stop groupId
      piflow flowGroup info groupId


Run Piflow in Docker:
  • pull the piflow image
    docker pull registry.cn-hangzhou.aliyuncs.com/cnic_piflow/piflow:v1.0

  • show docker images
    docker images

  • run a container with the piflow imageID; all services start automatically
    docker run -h master -itd --env HOST_IP=*.*.*.* --name piflow-v0.9.1 -p 6001:6001 -p 6002:6002 [imageID]

  • visit "HOST_IP:6001" in a browser; the services may take a while to come up

  • if something goes wrong, all the applications are in the /opt folder
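If the page at HOST_IP:6001 does not come up, checking the container state and logs is a reasonable first step. A sketch, using the container name from the run command above:

```shell
# Check that the piflow container is running and show its recent log output.
if command -v docker >/dev/null 2>&1; then
  docker ps --filter name=piflow-v0.9.1
  docker logs --tail 50 piflow-v0.9.1
else
  echo "docker is not installed on this machine"
fi
echo "container check finished"
```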


Piflow web pages (screenshots omitted):

  • Login:

  • Dashboard:

  • Flow list:

  • Create flow:

  • Configure flow:

  • Load flow:

  • Monitor flow:

  • Flow logs:

  • Group list:

  • Configure group:

  • Monitor group:

  • Process List:

  • Template List:

  • DataSource List:

  • Schedule List:

  • StopHub List:

Contact Us

  • Name: Mr. Wu (吴老师)
  • Mobile Phone: 18910263390
  • WeChat: 18910263390
  • Email: wzs@cnic.cn
  • QQ Group: 1003489545
  • the WeChat group QR code is valid for 7 days



License: Apache License 2.0

