DIYBigData / personal-compute-cluster

Software and tools for setting up and operating a personal compute cluster, with focus on big data.

env settings

skorzan opened this issue · comments

Hi,
Have you ever set environment variables in the Docker Spark compose YAML? Here is mine:

```yaml
version: "3.7"
services:
  spark-master:
    image: spark-master:3.2.1
    container_name: spark-master
    ports:
      - "8080:8080"
      - "7077:7077"
    networks:
      - spark
    volumes:
      - /home/elasticsearch/spark/spark-apps:/opt/spark-apps
      - /home/elasticsearch/spark/spark-data:/opt/spark-data
    environment:
      - SPARK_LOCAL_IP=spark-master
      - SPARK_WORKLOAD=master
    deploy:
      placement:
        constraints:
          - node.hostname == srv20900aam-kvm.kopernik.t-mobile.pl
      mode: "global"

  spark-worker-1:
    image: spark-worker:3.2.1
    container_name: spark-worker-1
    depends_on:
      - spark-master
    ports:
      - "8081:8081"
    environment:
      - SPARK_MASTER=spark://spark-master:7077
      - SPARK_WORKLOAD=worker
      - SPARK_LOCAL_IP=spark-worker-1
      - SPARK_WORKER_CORES=6
      - SPARK_WORKER_MEMORY=10G
      - SPARK_DRIVER_MEMORY=2G
      - SPARK_EXECUTOR_MEMORY=4G
    networks:
      - spark
    volumes:
      - /home/elasticsearch/spark/spark-apps:/opt/spark-apps
      - /home/elasticsearch/spark/spark-data:/opt/spark-data
    deploy:
      placement:
        constraints:
          - node.hostname == srv20900aam-kvm.kopernik.t-mobile.pl
      mode: "global"

# Both services reference the "spark" network, so it must be defined
# at the top level; this definition was missing from the posted snippet.
networks:
  spark:
```
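If the intent of the question is how to avoid hard-coding these values in the compose file, Compose can substitute them from a `.env` file placed next to `docker-compose.yml`. The fragment below is a hedged sketch, not part of the original file; the variable names simply mirror the ones already used in the snippet, and `${VAR:-default}` is standard Compose interpolation syntax.

```yaml
# docker-compose.yml fragment (sketch). Values come from a .env file
# next to the compose file, e.g.:
#   SPARK_WORKER_CORES=6
#   SPARK_WORKER_MEMORY=10G
services:
  spark-worker-1:
    environment:
      # ${VAR:-default} falls back to the default when .env omits the variable
      - SPARK_WORKER_CORES=${SPARK_WORKER_CORES:-6}
      - SPARK_WORKER_MEMORY=${SPARK_WORKER_MEMORY:-10G}
```

Running `docker compose config` prints the fully resolved file, which is a quick way to confirm the substitutions took effect.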

@skorzan I am not sure what you are asking here. Could you clarify your question?

Closing due to the lack of a response. Feel free to reopen.