acconeer / acconeer-python-exploration

Acconeer Exploration Tool

Home Page: https://docs.acconeer.com

stand_alone.py error

benjaminschn opened this issue

When trying to use the trained model (from the GUI) with stand_alone.py, I got this error:

Tensorflow version 2.7.0 detected
C:/Users/benjamin/Desktop/acconeer-python-exploration-master/gui/ml/model1 
2022-01-20 14:28:04.656768: I tensorflow/core/platform/cpu_feature_guard.cc:151] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations:  AVX AVX2
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2022-01-20 14:28:05.339982: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1525] Created device /job:localhost/replica:0/task:0/device:GPU:0 with 1334 MB memory:  -> device: 0, name: NVIDIA GeForce MX150, pci bus id: 0000:01:00.0, compute capability: 6.1
Keras layer batch not found in layer_definitions.py!
Loaded model with:
input shape    :[149, 12, 1]
output shape   :2
nr of features :1
labels         :['Fichte', 'Fichte+Hand']
Trained with 3008 features

Feature detection settings:
time_series: 1
frame_pad: 0
frame_size: 12
collection_mode: continuous
auto_threshold: 1.5
auto_offset: 5
dead_time: 10
rolling: False
update_rate: 120.0
frame_time: 0.1

Press Ctrl-C to end session
(1198,)
Traceback (most recent call last):
  File "./gui/ml/stand_alone.py", line 114, in <module>
    main()
  File "./gui/ml/stand_alone.py", line 91, in main
    ml_frame_data = feature_process.feature_extraction(data)
  File "C:\Users\benjamin\Desktop\acconeer-python-exploration-master\gui\ml\feature_processing.py", line 214, in feature_extraction
    self.prepare_data_container(data)
  File "C:\Users\benjamin\Desktop\acconeer-python-exploration-master\gui\ml\feature_processing.py", line 169, in prepare_data_container
    num_sensors, data_len = data["sweep_data"].shape
ValueError: not enough values to unpack (expected 2, got 1)

Hi,

It seems that the sweep returned from client.get_next() does not have the correct shape. Have you changed the sensor configuration in the script?
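
For reference, a minimal reproduction of the failing unpack in prepare_data_container, assuming NumPy and the (1198,) shape printed in your traceback:

import numpy as np

# A 1-D sweep like the one printed just before the traceback
sweep = np.zeros(1198)                  # shape (1198,)

# prepare_data_container() expects a 2-D array, (num_sensors, data_len),
# so unpacking two values from a 1-D shape raises the error you see:
num_sensors, data_len = sweep.shape     # ValueError: not enough values to unpack (expected 2, got 1)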

Also, could you attach the command you use to run the script?

Hi,
Thank you for the reply.

I didn't change the sensor configuration in the script. I only put the filename in manually on this line, because I couldn't get it running the normal way:

model_data, message = keras_proc.load_model(filename)

I ran the script with the following command:
python ./gui/ml/stand_alone.py --socket 169.254.208.245 --load-model model1

Hi,

The sensor configuration is in the model file. It is loaded on line 45. Can you add a print of the config and attach it here?

config = model_data["sensor_config"]
print(config)  # Added line
feature_list = model_data["feature_list"]
frame_settings = model_data["frame_settings"]

Sure. Here is the config:

EnvelopeServiceConfig
  mode .............................. ENVELOPE
  sensor ............................ [1]
  range_interval .................... [0.12, 0.7]
  mur ............................... MUR_6
  profile ........................... PROFILE_2
  update_rate ....................... 120.0
  running_average_factor ............ 0.9
  repetition_mode ................... HOST_DRIVEN
  downsampling_factor ............... 1
  hw_accelerated_average_samples .... 20
  gain .............................. 0.2
  maximize_signal_attenuation ....... False
  noise_level_normalization ......... True
  tx_disable ........................ False
  power_save_mode ................... ACTIVE
  asynchronous_measurement .......... True

Hi,

Try setting squeeze to False by adding a line after the client has been set up:

if args.socket_addr:
    client = SocketClient(args.socket_addr)
elif args.spi:
    client = SPIClient()
else:
    port = args.serial_port or utils.autodetect_serial_port()
    client = UARTClient(port)
client.squeeze = False  # Added line
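
For context, what the flag changes (my assumption about the client's single-sensor behavior, with the sweep length taken from your traceback): by default the client squeezes away the sensor axis when only one sensor is active, while squeeze = False keeps it, which is the 2-D shape feature_processing.py expects. A small sketch:

import numpy as np

# squeeze=True (default), single sensor: the sensor axis is dropped
sweep_squeezed = np.zeros(1198)              # shape (1198,)  -> the 2-D unpack fails

# squeeze=False: the sensor axis is kept
sweep_full = np.zeros((1, 1198))             # shape (1, 1198)
num_sensors, data_len = sweep_full.shape     # num_sensors=1, data_len=1198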

It works now after I added the line! Thank you :)

Hi,

Great to hear it solved your issue!