tensorflow2.0 – How to export a TensorFlow 2 model that has keypoints?

I am trying to export the model CenterNet MobileNetV2 FPN Keypoints 512x512 using the exporter_main_v2.py script from the TensorFlow 2 Object Detection API.

The model is listed here: https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/tf2_detection_zoo.md

I built a Docker image with the Object Detection API, following the instructions here: https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/tf2.md

Dockerfile:

FROM ubuntu:22.04

RUN apt update

RUN apt install -y git wget

WORKDIR /tensorflow
RUN git clone https://github.com/tensorflow/models.git

RUN apt install -y protobuf-compiler software-properties-common python3 python3-pip python-is-python3

WORKDIR /tensorflow/models/research
RUN protoc object_detection/protos/*.proto --python_out=.

# Install TensorFlow Object Detection API.
RUN cp object_detection/packages/tf2/setup.py .
RUN python -m pip install --use-feature=2020-resolver .

Build the Docker image:

#!/bin/bash

docker build \
    -t tensorflow-object-detection .

Run the container:

#!/bin/bash

docker run \
    -it \
    -v $(pwd):/workspace \
    tensorflow-object-detection
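
For what it's worth, the Object Detection API install inside the container can be sanity-checked with the model builder test mentioned in the install docs (run from /tensorflow/models/research):

cd /tensorflow/models/research
python object_detection/builders/model_builder_tf2_test.py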

Exporting a model without keypoints works fine:

cd /workspace

wget http://download.tensorflow.org/models/object_detection/tf2/20210210/centernet_mobilenetv2fpn_512x512_coco17_od.tar.gz

tar -xvf centernet_mobilenetv2fpn_512x512_coco17_od.tar.gz

# This works!
python /tensorflow/models/research/object_detection/exporter_main_v2.py \
    --input_type float_image_tensor \
    --trained_checkpoint_dir /workspace/centernet_mobilenetv2_fpn_od/checkpoint/ \
    --pipeline_config_path /workspace/centernet_mobilenetv2_fpn_od/pipeline.config \
    --output_directory /workspace/centernet_mobilenetv2fpn_512x512_coco17_od_exported/
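
As a quick sanity check, the exported SavedModel can be inspected with saved_model_cli (it ships with TensorFlow); the directory below is just the --output_directory from the command above plus /saved_model:

saved_model_cli show \
    --dir /workspace/centernet_mobilenetv2fpn_512x512_coco17_od_exported/saved_model \
    --tag_set serve \
    --signature_def serving_default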

But if I follow the same procedure for a model that has keypoints, I get an error:

cd /workspace

wget http://download.tensorflow.org/models/object_detection/tf2/20210210/centernet_mobilenetv2fpn_512x512_coco17_kpts.tar.gz

tar -xvf centernet_mobilenetv2fpn_512x512_coco17_kpts.tar.gz

# Fails!
python /tensorflow/models/research/object_detection/exporter_main_v2.py \
    --input_type float_image_tensor \
    --trained_checkpoint_dir /workspace/centernet_mobilenetv2_fpn_kpts/checkpoint/ \
    --pipeline_config_path /workspace/centernet_mobilenetv2_fpn_kpts/pipeline.config \
    --output_directory /workspace/centernet_mobilenetv2fpn_512x512_coco17_kpts_exported/

Error:

python /tensorflow/models/research/object_detection/exporter_main_v2.py --input_type float_image_tensor --trained_checkpoint_dir /workspace/centernet_mobilenetv2_fpn_kpts/checkpoint/ --pipeline_config_path /workspace/centernet_mobilenetv2_fpn_kpts/pipeline.config --output_directory /workspace/centernet_mobilenetv2fpn_512x512_coco17_kpts_exported/
2022-06-25 21:20:40.339771: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libcudart.so.11.0'; dlerror: libcudart.so.11.0: cannot open shared object file: No such file or directory
2022-06-25 21:20:40.339792: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.
2022-06-25 21:20:42.885043: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libcuda.so.1'; dlerror: libcuda.so.1: cannot open shared object file: No such file or directory
2022-06-25 21:20:42.885063: W tensorflow/stream_executor/cuda/cuda_driver.cc:269] failed call to cuInit: UNKNOWN ERROR (303)
2022-06-25 21:20:42.885079: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:163] no NVIDIA GPU device is present: /dev/nvidia0 does not exist
WARNING:tensorflow:`input_shape` is undefined or non-square, or `rows` is not in [96, 128, 160, 192, 224]. Weights for input shape (224, 224) will be loaded as the default.
W0625 21:20:42.890892 140467532775424 mobilenet_v2.py:303] `input_shape` is undefined or non-square, or `rows` is not in [96, 128, 160, 192, 224]. Weights for input shape (224, 224) will be loaded as the default.
2022-06-25 21:20:42.891244: I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations:  AVX2 FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
Traceback (most recent call last):
  File "/tensorflow/models/research/object_detection/exporter_main_v2.py", line 164, in <module>
    app.run(main)
  File "/usr/local/lib/python3.10/dist-packages/absl/app.py", line 312, in run
    _run_main(main, args)
  File "/usr/local/lib/python3.10/dist-packages/absl/app.py", line 258, in _run_main
    sys.exit(main(argv))
  File "/tensorflow/models/research/object_detection/exporter_main_v2.py", line 157, in main
    exporter_lib_v2.export_inference_graph(
  File "/usr/local/lib/python3.10/dist-packages/object_detection/exporter_lib_v2.py", line 244, in export_inference_graph
    detection_model = INPUT_BUILDER_UTIL_MAP['model_build'](
  File "/usr/local/lib/python3.10/dist-packages/object_detection/builders/model_builder.py", line 1252, in build
    return build_func(getattr(model_config, meta_architecture), is_training,
  File "/usr/local/lib/python3.10/dist-packages/object_detection/builders/model_builder.py", line 1118, in _build_center_net_model
    label_map_proto = label_map_util.load_labelmap(
  File "/usr/local/lib/python3.10/dist-packages/object_detection/utils/label_map_util.py", line 168, in load_labelmap
    label_map_string = fid.read()
  File "/usr/local/lib/python3.10/dist-packages/tensorflow/python/lib/io/file_io.py", line 114, in read
    self._preread_check()
  File "/usr/local/lib/python3.10/dist-packages/tensorflow/python/lib/io/file_io.py", line 76, in _preread_check
    self._read_buf = _pywrap_file_io.BufferedInputStream(
tensorflow.python.framework.errors_impl.NotFoundError: PATH_TO_BE_CONFIGURED/label_map.txt; No such file or directory

It seems to have something to do with the PATH_TO_BE_CONFIGURED entries, but what I don't understand is that the pipeline.config of the model that did export successfully contains these placeholders too. Any ideas?
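
For reference, here is one way to compare which placeholders each downloaded pipeline.config still contains (paths as extracted above):

grep -n PATH_TO_BE_CONFIGURED \
    /workspace/centernet_mobilenetv2_fpn_od/pipeline.config \
    /workspace/centernet_mobilenetv2_fpn_kpts/pipeline.config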
