Convert to QNN for Linux Host on DSP Backend¶
Note
This is Part 3 of the Convert to QNN tutorial for Linux host machines. If you have not completed Part 2, please do so here.
Warning
DSP processors require quantized models rather than full-precision models. If you do not have a quantized model, please follow Step 2 of the Convert to QNN tutorial to build one.
Transferring over all relevant files¶
On the target device, open a terminal and make a destination folder by running:
mount -o remount,rw /
mkdir -p /data/local/tmp
cd /data/local/tmp
ln -s /etc/ /data/local/tmp
chmod -R 777 /data/local/tmp
mkdir -p /data/local/tmp/qnn_tutorial
Determine your target device’s SnapDragon architecture by looking up your chipset in the Supported Snapdragon Devices table.
Update the "XX" value below and run the commands to set DSP_ARCH to match the version number found in the above table. Only the two digits at the end should change, and both variables must carry the same version. For example, for "V68" the proper value would be hexagon-v68.
export DSP_VERSION="XX"
export DSP_ARCH="hexagon-v${DSP_VERSION}"
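As a concrete sketch (assuming a hypothetical V68 device; substitute your own version from the table), the two exports expand as follows:

```shell
# Hypothetical example for a V68 DSP; replace "68" with your device's
# version from the Supported Snapdragon Devices table.
export DSP_VERSION="68"
export DSP_ARCH="hexagon-v${DSP_VERSION}"
echo "${DSP_ARCH}"    # prints: hexagon-v68
```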
Use scp to transfer libQnnDsp.so as well as the other necessary libraries from your host machine to /data/local/tmp/qnn_tutorial on the target device. Note that the wildcard in the last command is left outside the quotes so your local shell can expand it.
scp "${QNN_SDK_ROOT}/lib/${QNN_TARGET_ARCH}/libQnnDsp.so" "${TARGET_USER}@${TARGET_IP}:/data/local/tmp/qnn_tutorial"
scp "${QNN_SDK_ROOT}/lib/${DSP_ARCH}/unsigned/libQnnDspV${DSP_VERSION}Skel.so" "${TARGET_USER}@${TARGET_IP}:/data/local/tmp/qnn_tutorial"
scp "${QNN_SDK_ROOT}/lib/${QNN_TARGET_ARCH}/libQnnDspV${DSP_VERSION}Stub.so" "${TARGET_USER}@${TARGET_IP}:/data/local/tmp/qnn_tutorial"
scp "${QNN_SDK_ROOT}/examples/Models/InceptionV3/model_libs/${QNN_TARGET_ARCH}/"* "${TARGET_USER}@${TARGET_IP}:/data/local/tmp/qnn_tutorial"
Check the Backend table to see if there are any other processor-specific executables needed for your target processor (DSP) and your target device's architecture ($QNN_TARGET_ARCH). Use the same scp syntax as above to transfer any additional .so files listed under your selected target architecture in that table. (There may be none!)
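If the Backend table does list extra libraries, a small loop keeps the transfers uniform. A sketch, assuming the EXTRA_LIBS list is filled in from the table and the files live under lib/${QNN_TARGET_ARCH} (both are placeholders here):

```shell
# Placeholder list: fill in any extra .so names from the Backend table
# for your architecture (leave empty if the table lists none).
EXTRA_LIBS=""
for lib in ${EXTRA_LIBS}; do
  scp "${QNN_SDK_ROOT}/lib/${QNN_TARGET_ARCH}/${lib}" \
      "${TARGET_USER}@${TARGET_IP}:/data/local/tmp/qnn_tutorial"
done
```

With an empty list the loop simply does nothing, so the snippet is safe to run as-is.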
Warning
Ensure you also scp the hexagon-v## files, in addition to the other architecture files!
Use scp to transfer the built example model. If ${QNN_MODEL_PATH} contains an architecture-specific folder (such as an x64 variant), update it to the proper folder for your built model; the folder name depends on your host machine's architecture.
scp "${QNN_MODEL_PATH}" "${TARGET_USER}@${TARGET_IP}:/data/local/tmp/qnn_tutorial"
Transfer the input data, input list, labels, and script from the QNN SDK examples folder into /data/local/tmp/qnn_tutorial on the target device using scp in a similar way:
scp -r "${QNN_SDK_ROOT}/examples/Models/InceptionV3/data/cropped" "${TARGET_USER}@${TARGET_IP}:/data/local/tmp/qnn_tutorial"
scp "${QNN_SDK_ROOT}/examples/Models/InceptionV3/data/target_raw_list.txt" "${TARGET_USER}@${TARGET_IP}:/data/local/tmp/qnn_tutorial"
scp "${QNN_SDK_ROOT}/examples/Models/InceptionV3/data/imagenet_slim_labels" "${TARGET_USER}@${TARGET_IP}:/data/local/tmp/qnn_tutorial"
scp "${QNN_SDK_ROOT}/examples/Models/InceptionV3/scripts/show_inceptionv3_classifications.py" "${TARGET_USER}@${TARGET_IP}:/data/local/tmp/qnn_tutorial"
Transfer qnn-net-run from $QNN_SDK_ROOT/bin/$QNN_TARGET_ARCH/qnn-net-run to /data/local/tmp/qnn_tutorial on the target device:
scp "${QNN_SDK_ROOT}/bin/${QNN_TARGET_ARCH}/qnn-net-run" "${TARGET_USER}@${TARGET_IP}:/data/local/tmp/qnn_tutorial"
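Before moving on, it can help to confirm everything landed on the target. A minimal sanity check, assuming the file names used in the transfers above (run it on the target device, with DSP_VERSION set to the same value as on the host):

```shell
# Run on the target device after all transfers are complete.
DEST="/data/local/tmp/qnn_tutorial"
for f in qnn-net-run libQnnDsp.so "libQnnDspV${DSP_VERSION}Skel.so" \
         "libQnnDspV${DSP_VERSION}Stub.so" target_raw_list.txt; do
  if [ -e "${DEST}/${f}" ]; then
    echo "ok: ${f}"
  else
    echo "MISSING: ${f}"
  fi
done
```

Any "MISSING" line means the corresponding scp step needs to be repeated before attempting an inference.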
Doing inferences on the target device processor¶
Open a terminal instance on the target device.
ssh "${TARGET_USER}@${TARGET_IP}"
Note
You will need to log in with that username's credentials on the target device.
Navigate to the directory containing the test files:
cd /data/local/tmp/qnn_tutorial
Run the following command on the target device to execute an inference:
./qnn-net-run \
    --model "./libInception_v3.so" \
    --input_list "./target_raw_list.txt" \
    --backend "./libQnnDsp.so" \
    --output_dir "./output"
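Once the run finishes, qnn-net-run writes its raw results under the folder given by --output_dir. A quick check that something was actually produced (the exact per-input folder layout may vary by SDK version, so this only verifies the output folder is non-empty):

```shell
# Confirm the inference produced files under ./output before trying
# to post-process the results.
if [ -d ./output ] && [ -n "$(ls -A ./output)" ]; then
  echo "output produced:"
  ls ./output
else
  echo "no output found in ./output" >&2
fi
```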
Run the following script on the target device to view the classification results:
Note
You can alternatively copy the output folder back to your host machine with scp and run the following script there to avoid having to install python on your target device.
python3 "./show_inceptionv3_classifications.py" \
    -i "./target_raw_list.txt" \
    -o "output" \
    -l "./imagenet_slim_labels"
Verify that the classification results in output match the following:
1. ${QNN_SDK_ROOT}/examples/Models/InceptionV3/data/cropped/trash_bin.raw 0.777344 413 ashcan
2. ${QNN_SDK_ROOT}/examples/Models/InceptionV3/data/cropped/chairs.raw 0.253906 832 studio couch
3. ${QNN_SDK_ROOT}/examples/Models/InceptionV3/data/cropped/plastic_cup.raw 0.980469 648 measuring cup
4. ${QNN_SDK_ROOT}/examples/Models/InceptionV3/data/cropped/notice_sign.raw 0.167969 459 brass