References: https://www.toradex.cn/blog/nxp-imx8ji-yueiq-kuang-jia-ce-shi-machine-learning and IMX-MACHINE-LEARNING-UG.pdf.

TensorFlow Lite label_image example, CPU run:

cd /usr/bin/tensorflow-lite-2.4.0/examples
./label_image -m mobilenet_v1_1.0_224_quant.tflite -i grace_hopper.bmp -l labels.txt
INFO: Loaded model mobilenet_v1_1.0_224_quant.tflite
INFO: resolved reporter
INFO: invoked
INFO: average time: 50.66 ms
INFO: 0.780392: 653 military uniform
INFO: 0.105882: 907 Windsor tie
INFO: 0.0156863: 458 bow tie
INFO: 0.0117647: 466 bulletproof vest
INFO: 0.00784314: 835 suit

GPU/NPU accelerated run:

./label_image -m mobilenet_v1_1.0_224_quant.tflite -i grace_hopper.bmp -l labels.txt -a 1
INFO: Loaded model mobilenet_v1_1.0_224_quant.tflite
INFO: resolved reporter
INFO: Created TensorFlow Lite delegate for NNAPI.
INFO: Applied NNAPI delegate.
INFO: invoked
INFO: average time: 2.775 ms
INFO: 0.768627: 653 military uniform
INFO: 0.105882: 907 Windsor tie
INFO: 0.0196078: 458 bow tie
INFO: 0.0117647: 466 bulletproof vest
INFO: 0.00784314: 835 suit

Run with the external VX delegate (USE_GPU_INFERENCE=0 selects the NPU rather than the GPU):

USE_GPU_INFERENCE=0 ./label_image -m mobilenet_v1_1.0_224_quant.tflite -i grace_hopper.bmp -l labels.txt --external_delegate_path=/usr/lib/libvx_delegate.so

The Python version of the example:

python3 label_image.py
INFO: Created TensorFlow Lite delegate for NNAPI.
Applied NNAPI delegate.
Warm-up time: 6628.5 ms
Inference time: 2.9 ms
0.870588: military uniform
0.031373: Windsor tie
0.011765: mortarboard
0.007843: bow tie
0.007843: bulletproof vest

Benchmark, CPU single-core run:

./benchmark_model --graph=mobilenet_v1_1.0_224_quant.tflite
STARTING!
Log parameter values verbosely: [0]
Graph: [mobilenet_v1_1.0_224_quant.tflite]
Loaded model mobilenet_v1_1.0_224_quant.tflite
The input model file size (MB): 4.27635
Initialized session in 15.076ms.
Running benchmark for at least 1 iterations and at least 0.5 seconds but terminate if exceeding 150 seconds.
count=4 first=166743 curr=161124 min=161054 max=166743 avg=162728 std=2347
Running benchmark for at least 50 iterations and at least 1 seconds but terminate if exceeding 150 seconds.
count=50 first=161039 curr=161030 min=160877 max=161292 avg=161039 std=94
Inference timings in us: Init: 15076, First inference: 166743, Warmup (avg): 162728, Inference (avg): 161039
Note: as the benchmark tool itself affects memory footprint, the following is only APPROXIMATE to the actual memory footprint of the model at runtime. Take the information at your discretion.
Peak memory footprint (MB): init=2.65234 overall=9.00391

Benchmark with 4 CPU threads (on this quad-core module, --num_threads=4 gives the best CPU performance):

./benchmark_model --graph=mobilenet_v1_1.0_224_quant.tflite --num_threads=4
STARTING!
Log parameter values verbosely: [0]
Num threads: [4]
Graph: [mobilenet_v1_1.0_224_quant.tflite]
#threads used for CPU inference: [4]
Loaded model mobilenet_v1_1.0_224_quant.tflite
The input model file size (MB): 4.27635
Initialized session in 2.536ms.
Running benchmark for at least 1 iterations and at least 0.5 seconds but terminate if exceeding 150 seconds.
count=11 first=48722 curr=44756 min=44597 max=49397 avg=45518.9 std=1679
Running benchmark for at least 50 iterations and at least 1 seconds but terminate if exceeding 150 seconds.
count=50 first=44678 curr=44591 min=44590 max=50798 avg=44965.2 std=1170
Inference timings in us: Init: 2536, First inference: 48722, Warmup (avg): 45518.9, Inference (avg): 44965.2
Note: as the benchmark tool itself affects memory footprint, the following is only APPROXIMATE to the actual memory footprint of the model at runtime. Take the information at your discretion.
Peak memory footprint (MB): init=1.38281 overall=8.69922
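For scripting, the same MobileNet inference can be driven from Python instead of the prebuilt label_image binary. The sketch below is only an outline and assumes the tflite_runtime package, Pillow and /usr/lib/libvx_delegate.so are present on the image, as in the runs above:

import numpy as np
from PIL import Image
import tflite_runtime.interpreter as tflite

# Hand the graph to the VX (NPU) delegate, as with --external_delegate_path above.
delegate = tflite.load_delegate('/usr/lib/libvx_delegate.so')
interpreter = tflite.Interpreter(model_path='mobilenet_v1_1.0_224_quant.tflite',
                                 experimental_delegates=[delegate])
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

# The quantized MobileNet expects a 1x224x224x3 uint8 tensor.
img = Image.open('grace_hopper.bmp').resize((224, 224))
interpreter.set_tensor(inp['index'], np.expand_dims(np.asarray(img, dtype=np.uint8), 0))
interpreter.invoke()

# Outputs are uint8 scores; divide by 255 to get approximate probabilities.
scores = interpreter.get_tensor(out['index'])[0]
labels = [line.strip() for line in open('labels.txt')]
for i in scores.argsort()[-5:][::-1]:
    print('%.6f: %s' % (scores[i] / 255.0, labels[i]))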
Benchmark with 4 threads and NNAPI (NPU) acceleration:

./benchmark_model --graph=mobilenet_v1_1.0_224_quant.tflite --num_threads=4 --use_nnapi=true
STARTING!
Log parameter values verbosely: [0]
Num threads: [4]
Graph: [mobilenet_v1_1.0_224_quant.tflite]
#threads used for CPU inference: [4]
Use NNAPI: [1]
NNAPI accelerators available: [vsi-npu]
Loaded model mobilenet_v1_1.0_224_quant.tflite
INFO: Created TensorFlow Lite delegate for NNAPI.
Explicitly applied NNAPI delegate, and the model graph will be completely executed by the delegate.
The input model file size (MB): 4.27635
Initialized session in 3.968ms.
Running benchmark for at least 1 iterations and at least 0.5 seconds but terminate if exceeding 150 seconds.
count=1 curr=6611085
Running benchmark for at least 50 iterations and at least 1 seconds but terminate if exceeding 150 seconds.
count=369 first=2715 curr=2623 min=2572 max=2776 avg=2634.2 std=20
Inference timings in us: Init: 3968, First inference: 6611085, Warmup (avg): 6.61108e+06, Inference (avg): 2634.2
Note: as the benchmark tool itself affects memory footprint, the following is only APPROXIMATE to the actual memory footprint of the model at runtime. Take the information at your discretion.
Peak memory footprint (MB): init=2.42188 overall=28.4062
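The very large first-inference time (about 6.6 s) compared with the ~2.6 ms steady-state average comes from the delegate compiling the graph for the NPU on the first invoke. A rough timing loop in Python, under the same assumptions as the earlier sketch (tflite_runtime and libvx_delegate.so present), shows the same warm-up/steady-state split:

import time
import numpy as np
import tflite_runtime.interpreter as tflite

interpreter = tflite.Interpreter(
    model_path='mobilenet_v1_1.0_224_quant.tflite',
    experimental_delegates=[tflite.load_delegate('/usr/lib/libvx_delegate.so')])
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
interpreter.set_tensor(inp['index'], np.zeros(inp['shape'], dtype=np.uint8))

t0 = time.perf_counter()
interpreter.invoke()                      # warm-up: the delegate compiles the graph here
first_us = (time.perf_counter() - t0) * 1e6

runs = []
for _ in range(50):                       # steady-state average over 50 runs
    t0 = time.perf_counter()
    interpreter.invoke()
    runs.append((time.perf_counter() - t0) * 1e6)

print('First inference: %.0f us, average: %.1f us' % (first_us, sum(runs) / len(runs)))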
OpenCV DNN image classification example:

cd /usr/share/OpenCV/samples/bin
./example_dnn_classification --input=dog416.png --zoo=models.yml squeezenet

Download the models first:

cd /usr/share/opencv4/testdata/dnn/
python3 download_models_basic.py
cd /usr/share/OpenCV/samples/bin
./example_dnn_classification --input=dog416.png --zoo=models.yml squeezenet

Alternatively, enter ftp://ftp.toradex.cn/Linux/i.MX8/eIQ/OpenCV/Image_Classification.zip in the file browser address bar to download the archive and unzip it to obtain models.yml and squeezenet_v1.1.caffemodel. Copy those files into /usr/share/OpenCV/samples/bin on the board, then copy the remaining inputs:

$ cp /usr/share/opencv4/testdata/dnn/dog416.png /usr/share/OpenCV/samples/bin/   (input image)
$ cp /usr/share/opencv4/testdata/dnn/squeezenet_v1.1.prototxt /usr/share/OpenCV/samples/bin/
$ cp /usr/share/OpenCV/samples/data/dnn/classification_classes_ILSVRC2012.txt /usr/share/OpenCV/samples/bin/
$ cd /usr/share/OpenCV/samples/bin/
./example_dnn_classification --input=dog416.png --zoo=models.yml squeezenet

This run reports missing parameters:

root@myd-jx8mp:/usr/share/OpenCV/samples/bin# ./example_dnn_classification --input=dog416.png --zoo=model.yml squeezenet
ERRORS:
Missing parameter: 'mean'
Missing parameter: 'rgb'

Adding --rgb and --mean=1 still does not give a clean run (GStreamer warnings remain), nor does adding --mode:

root@myd-jx8mp:/usr/share/OpenCV/samples/bin# ./example_dnn_classification --rgb --mean=1 --input=dog416.png --zoo=models.yml squeezenet
[ WARN:0] global /usr/src/debug/opencv/4.4.0.imx-r0/git/modules/videoio/src/cap_gstreamer.cpp (898) open OpenCV | GStreamer warning: unable to query duration of stream
[ WARN:0] global /usr/src/debug/opencv/4.4.0.imx-r0/git/modules/videoio/src/cap_gstreamer.cpp (935) open OpenCV | GStreamer warning: Cannot query video position: status=1, value=0, duration=-1
root@myd-jx8mp:/usr/share/OpenCV/samples/bin# ./example_dnn_classification --rgb --mean=1 --input=dog416.png --zoo=models.yml squeezenet --mode
[ WARN:0] global /usr/src/debug/opencv/4.4.0.imx-r0/git/modules/videoio/src/cap_gstreamer.cpp (898) open OpenCV | GStreamer warning: unable to query duration of stream
[ WARN:0] global /usr/src/debug/opencv/4.4.0.imx-r0/git/modules/videoio/src/cap_gstreamer.cpp (935) open OpenCV | GStreamer warning: Cannot query video position: status=1, value=0, duration=-1

The sample can also take a camera device as input:

./example_dnn_classification --device=2 --zoo=models.yml squeezenet

Problem: if the files are missing from the testdata directory, locate them in the Yocto build tree on the host:

lhj@DESKTOP-BINN7F8:~/myd-jx8mp-yocto$ find . -name "dog416.png"
./build-xwayland/tmp/work/cortexa53-crypto-mx8mp-poky-linux/opencv/4.4.0.imx-r0/extra/testdata/dnn/dog416.png

Then copy the corresponding files to the board:

cd ./build-xwayland/tmp/work/cortexa53-crypto-mx8mp-poky-linux/opencv/4.4.0.imx-r0/extra/testdata/
tar -cvf /mnt/e/dnn.tar ./dnn/
cd /usr/share/opencv4/testdata    (create the directory first if it does not exist)
(transfer dnn.tar to the board with rz, then extract)
tar -xvf dnn.tar

If the model file cannot be found, the sample aborts with:

terminate called after throwing an instance of 'cv::Exception'
  what(): OpenCV(4.4.0) /usr/src/debug/opencv/4.4.0.imx-r0/git/samples/dnn/classification.cpp:81: error: (-215:Assertion failed) !model.empty() in function 'main'
Aborted

The sample source can be located in the build tree and copied out for inspection:

lhj@DESKTOP-BINN7F8:~/myd-jx8mp-yocto/build-xwayland$ find . -name classification.cpp
lhj@DESKTOP-BINN7F8:~/myd-jx8mp-yocto/build-xwayland$ cp ./tmp/work/cortexa53-crypto-mx8mp-poky-linux/opencv/4.4.0.imx-r0/packages-split/opencv-src/usr/src/debug/opencv/4.4.0.imx-r0/git/samples/dnn/classification.cpp /mnt/e
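The classification the sample performs can also be reproduced directly with OpenCV's Python DNN bindings, which sidesteps the zoo-parameter handling above. A minimal sketch, assuming the prototxt, caffemodel and class list copied above sit in the current directory; the 227x227 input size and zero mean follow the squeezenet entry in models.yml as I read it:

import cv2
import numpy as np

# Load the Caffe model and the ImageNet class names copied into this directory.
net = cv2.dnn.readNetFromCaffe('squeezenet_v1.1.prototxt', 'squeezenet_v1.1.caffemodel')
classes = [line.strip() for line in open('classification_classes_ILSVRC2012.txt')]

img = cv2.imread('dog416.png')
# Assumed preprocessing: 227x227 input, scale 1.0, zero mean, BGR channel order.
blob = cv2.dnn.blobFromImage(img, scalefactor=1.0, size=(227, 227),
                             mean=(0, 0, 0), swapRB=False, crop=False)
net.setInput(blob)
prob = net.forward().flatten()   # squeezenet_v1.1 ends in a softmax ("prob") layer

top = int(np.argmax(prob))
print('%s: %.4f' % (classes[top], prob[top]))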
OpenCV DNN object detection example (YOLOv3):

cd /usr/share/OpenCV/samples/bin
./example_dnn_object_detection --width=1024 --height=1024 --scale=0.00392 --input=dog416.png --rgb --zoo=models.yml yolo

Download the cfg and weights files from https://pjreddie.com/darknet/yolo/, transfer them to /usr/share/OpenCV/samples/bin/, copy the remaining inputs, then run the sample again:

cd /usr/share/OpenCV/samples/bin/
cp /usr/share/OpenCV/samples/data/dnn/object_detection_classes_yolov3.txt /usr/share/OpenCV/samples/bin/
cp /usr/share/opencv4/testdata/dnn/yolov3.cfg /usr/share/OpenCV/samples/bin/
./example_dnn_object_detection --width=1024 --height=1024 --scale=0.00392 --input=dog416.png --rgb --zoo=models.yml yolo

Other OpenCV machine-learning samples (SVM, PCA, logistic regression) can be run from the same directory:

cd /usr/share/OpenCV/samples/bin
./example_tutorial_introduction_to_svm
./example_tutorial_non_linear_svms
./example_tutorial_introduction_to_pca ../data/pca_test1.jpg
./example_cpp_logistic_regression
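The same cv2.ml API that backs the SVM tutorial is also available from Python. A minimal sketch using the four training points from OpenCV's introduction_to_svm tutorial, runnable wherever the OpenCV Python bindings are installed:

import cv2
import numpy as np

# Four 2-D training points with binary labels, as in the tutorial.
samples = np.array([[501, 10], [255, 10], [501, 255], [10, 501]], dtype=np.float32)
labels = np.array([1, -1, -1, -1], dtype=np.int32)

# Linear C-SVC, matching the tutorial's setup.
svm = cv2.ml.SVM_create()
svm.setType(cv2.ml.SVM_C_SVC)
svm.setKernel(cv2.ml.SVM_LINEAR)
svm.setTermCriteria((cv2.TERM_CRITERIA_MAX_ITER, 100, 1e-6))
svm.train(samples, cv2.ml.ROW_SAMPLE, labels)

# Classify a new point.
_, result = svm.predict(np.array([[400, 50]], dtype=np.float32))
print('predicted label:', int(result[0, 0]))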