coolcoolhua / TFoS_ButterflyDetectionWithRCNN


All of the execution files are stored in /home/hduser on the VM (student35). The execution steps are as follows.
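
The commands below rely on several environment variables (SPARK_HOME, HADOOP_HOME, JAVA_HOME, TFOS_HOME, LIB_HDFS). Their values are installation-specific; the following is only a placeholder sketch and must be adapted to your cluster:

    # Placeholder values -- adjust every path to match your installation.
    export SPARK_HOME=/opt/spark
    export HADOOP_HOME=/opt/hadoop
    export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
    export TFOS_HOME=/opt/TensorFlowOnSpark        # TensorFlowOnSpark checkout (provides tfspark.zip)
    export LIB_HDFS=${HADOOP_HOME}/lib/native      # directory containing libhdfs.so
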
1. Run the following command:

     ${SPARK_HOME}/bin/spark-submit --master yarn \
         --conf spark.executorEnv.LD_LIBRARY_PATH="${JAVA_HOME}/jre/lib/amd64/server":$LIB_HDFS \
         --conf spark.executorEnv.CLASSPATH="$($HADOOP_HOME/bin/hadoop classpath --glob):${CLASSPATH}" \
         --conf spark.dynamicAllocation.enabled=false \
         --conf spark.rpc.message.maxSize=512 \
         --py-files ${TFOS_HOME}/tfspark.zip \
         --conf spark.cores.max=8 \
         --driver-memory 6g \
         --conf spark.task.cpus=1 \
         /home/hduser/fine_tune_for_rcnn_data_setup.py \
         --output datasetTFrecord/train1 \
         --input /home/hduser/datasetcloudusing/1dataset.pkl \
         --num-partitions 80

   This runs fine_tune_for_rcnn_data_setup.py, which loads the data from /home/hduser/datasetcloudusing and stores it in hdfs:///datasetTFrecord/train1. There are 11 datasets under /home/hduser/datasetcloudusing; run the command once per dataset, varying --input and --output, to store all 11 training sets in HDFS (a loop sketch follows this step).
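
Since the same command is repeated for each of the 11 datasets, a shell loop can generate all 11 training sets in one go. This is only a sketch: it assumes the pickle files are named 1dataset.pkl through 11dataset.pkl (only 1dataset.pkl is confirmed above), so adjust the file names if yours differ.

    # Assumed naming scheme: Ndataset.pkl -> datasetTFrecord/trainN
    for i in $(seq 1 11); do
      ${SPARK_HOME}/bin/spark-submit --master yarn \
          --conf spark.executorEnv.LD_LIBRARY_PATH="${JAVA_HOME}/jre/lib/amd64/server":$LIB_HDFS \
          --conf spark.executorEnv.CLASSPATH="$($HADOOP_HOME/bin/hadoop classpath --glob):${CLASSPATH}" \
          --conf spark.dynamicAllocation.enabled=false \
          --conf spark.rpc.message.maxSize=512 \
          --py-files ${TFOS_HOME}/tfspark.zip \
          --conf spark.cores.max=8 \
          --driver-memory 6g \
          --conf spark.task.cpus=1 \
          /home/hduser/fine_tune_for_rcnn_data_setup.py \
          --output datasetTFrecord/train${i} \
          --input /home/hduser/datasetcloudusing/${i}dataset.pkl \
          --num-partitions 80
    done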

2. Run the following command to perform the training step:

     ${SPARK_HOME}/bin/spark-submit --master yarn \
         --conf spark.executorEnv.LD_LIBRARY_PATH="${JAVA_HOME}/jre/lib/amd64/server":$LIB_HDFS \
         --conf spark.executorEnv.CLASSPATH="$($HADOOP_HOME/bin/hadoop classpath --glob):${CLASSPATH}" \
         --conf spark.dynamicAllocation.enabled=false \
         --py-files /home/hduser/fine_tune_rcnn.py,${TFOS_HOME}/tfspark.zip \
         --conf spark.cores.max=10 \
         --conf spark.task.cpus=1 \
         /home/hduser/fine_tune_rcnn_spark.py \
         --images datasetTFrecord/demo/images \
         --labels datasetTFrecord/demo/labels \
         --format pickle \
         --mode train \
         --model fine_tune_rcnn_model \
         --epochs 1

   This runs fine_tune_rcnn_spark.py and fine_tune_rcnn.py, which load the data from hdfs:///datasetTFrecord/demo/images and hdfs:///datasetTFrecord/demo/labels and train the convolutional neural network. The result is the trained model, which is stored in hdfs:///fine_tune_rcnn_model.
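
To confirm that the training step produced a model, list the model directory in HDFS (a relative path such as fine_tune_rcnn_model usually resolves under the submitting user's HDFS home directory):

    $HADOOP_HOME/bin/hdfs dfs -ls fine_tune_rcnn_model

    # If you retrain, you may need to clear the previous model directory first;
    # whether this is required depends on how fine_tune_rcnn.py handles checkpoints.
    $HADOOP_HOME/bin/hdfs dfs -rm -r fine_tune_rcnn_model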

3. Run the following command to perform the inference step:

     ${SPARK_HOME}/bin/spark-submit --master yarn \
         --conf spark.executorEnv.LD_LIBRARY_PATH="${JAVA_HOME}/jre/lib/amd64/server":$LIB_HDFS \
         --conf spark.executorEnv.CLASSPATH="$($HADOOP_HOME/bin/hadoop classpath --glob):${CLASSPATH}" \
         --conf spark.dynamicAllocation.enabled=false \
         --py-files /home/hduser/fine_tune_rcnn.py,${TFOS_HOME}/tfspark.zip \
         --conf spark.cores.max=10 \
         --conf spark.task.cpus=1 \
         /home/hduser/fine_tune_rcnn_spark.py \
         --images datasetTFrecord/test/images \
         --labels datasetTFrecord/test/labels \
         --format pickle \
         --mode inference \
         --model fine_tune_rcnn_model \
         --output rcnn_predictions

   This runs fine_tune_rcnn_spark.py and fine_tune_rcnn.py, which load the data from hdfs:///datasetTFrecord/test/images and run inference using the trained model. The inference results are stored in hdfs:///rcnn_predictions.
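
To inspect the inference results, assuming they are written as plain-text part files (the usual layout for output saved from an RDD):

    $HADOOP_HOME/bin/hdfs dfs -ls rcnn_predictions
    $HADOOP_HOME/bin/hdfs dfs -cat 'rcnn_predictions/part-*' | head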

4. Upload the demo directory to HDFS (a sketch follows this step), then run the same training command as in step 2 to train on the demo data.
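
A sketch of the upload, assuming the demo data lives at /home/hduser/demo locally (the actual local path is not stated above); the training command in step 2 expects it in HDFS at datasetTFrecord/demo:

    # Assumed local path; the HDFS target matches --images/--labels in step 2.
    $HADOOP_HOME/bin/hdfs dfs -put /home/hduser/demo datasetTFrecord/demo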



If you encounter any problems running these steps, please contact:
longtaoz@connect.hku.hk
