【EASY EAI Nano Open-Source Kit Trial】Part 4: Development and Testing of the ncnn Neural Network Framework, by 大信 (QQ: 8125036)
Elecfans (电子发烧友) and Lingmou Technology (灵眸科技) have launched a trial program for the EASY EAI Nano development board. The board is built around Rockchip's RV1126, a quad-core 32-bit Cortex-A7 SoC with an integrated NPU delivering 2 TOPS of AI compute, so it has enough neural-network capability to run deep-learning inference programs. The goal of this test is to bring the ncnn neural network framework up on the board and evaluate its performance and applications there.
1. About ncnn

ncnn is a high-performance neural-network inference (forward computation) framework from Tencent Youtu, optimized to the extreme for mobile phones. It was designed from the start with mobile deployment in mind: it has no third-party dependencies, it is cross-platform, and its mobile CPU speed is faster than that of all currently known open-source frameworks. With ncnn, deep-learning algorithms can easily be ported to phones and other mobile devices and executed efficiently, enabling AI applications. Within Tencent, ncnn is already used in QQ, Qzone, WeChat, Pitu (天天P图) and other products.

ncnn supports most of the commonly used CNN networks:

- Classical CNN: VGG, AlexNet, GoogleNet, Inception, ...
- Practical CNN: ResNet, DenseNet, SENet, FPN, ...
- Light-weight CNN: SqueezeNet, MobileNetV1/V2/V3, ShuffleNetV1/V2, MNasNet, ...
- Detection: MTCNN, facedetection, ...
- Detection: VGG-SSD, MobileNet-SSD, SqueezeNet-SSD, MobileNetV2-SSDLite, ...
- Detection: Faster-RCNN, R-FCN, ...
- Detection: YOLOv2, YOLOv3, MobileNet-YOLOv3, ...
- Segmentation: FCN, PSPNet, UNet, ...

A detailed comparison of the features of the various CNN frameworks was given in an earlier article and is not repeated here; interested readers can refer to my previous test report, which compares the frameworks in detail: https://bbs.elecfans.com/jishu_2328174_1_1.html

3. Cross-compiling ncnn
The project's author, nihui, has herself ported and tested ncnn on an RV1126 board; see her post 《EASY EAI Nano (RV1126) 上手指北》: https://zhuanlan.zhihu.com/p/548039018. Following that post, porting and compiling ncnn on the EASY EAI Nano is straightforward. First, write the build script build-easyeai-nano.sh from the template; its contents are as follows.
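A minimal sketch of the script, modeled on the build-*.sh templates in ncnn's source tree; the toolchain file and the CMake option set here are assumptions, not the exact original:

#!/usr/bin/env bash
# Sketch of build-easyeai-nano.sh, following ncnn's build-*.sh templates.
# Assumes the EASY EAI cross toolchain (arm-linux-gnueabihf-*) is on PATH.
mkdir -p build
pushd build
cmake -DCMAKE_TOOLCHAIN_FILE=../toolchains/arm-linux-gnueabihf.toolchain.cmake \
      -DNCNN_SIMPLEOCV=ON \
      -DNCNN_BUILD_EXAMPLES=ON \
      -DNCNN_BUILD_BENCHMARK=ON ..
make -j"$(nproc)"
popd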
Then switch into the cross-compilation environment and run the script, as shown below:
The whole build takes about 10 minutes, after which the results are ready:
The script combines the cmake and make steps into one, producing the target files in a single run, which makes building very simple and convenient. It builds the ncnn library and the example programs together. Finally, copy the build output to the board, along with the model resources needed by ncnn's benchmark program; then go into build/examples and copy all the compiled example programs to the board as well.
4. Running and benchmarking ncnn

After the build completes, copy the executables and model files to the board for testing.
Copy benchncnn from build/benchmark to the /home/root/ncnn directory on the board, and copy all files from the benchmark directory in the project root to the same /home/root/ncnn directory on the board.
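For example, with scp (a sketch; the board's IP address is a placeholder, and the build directory name follows the script above):

scp build/benchmark/benchncnn root@192.168.1.100:/home/root/ncnn/
scp benchmark/* root@192.168.1.100:/home/root/ncnn/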
Then the benchncnn executable can be run to measure the board's neural-network compute performance.
First, copy libgomp.so.1 from /usr/lib of the target (ARM) root filesystem in the development environment to /usr/lib on the board. Then run benchncnn on the board.
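benchncnn can be run with no arguments, or the defaults can be overridden explicitly; the argument order is loop count, thread count, powersave mode, GPU device (-1 means CPU only, which applies here since the RV1126 has no Vulkan-capable GPU) and cooling-down. A typical run:

./benchncnn 8 4 0 -1 1

The results are shown below: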
The board's ncnn benchmark scores are quite good, beating most similarly configured hardware; the per-model timings give a solid measure of the board's neural-network inference capability.
5. Testing the ncnn examples
Next, test the example applications that ship with the ncnn project on the EASY EAI Nano board. The example files on the board are as follows:
These examples also need the corresponding model resources at runtime; the models can be downloaded from:
https://github.com/nihui/ncnn-assets/tree/master/models
1) Face detection
Prepare the test image face5-test.jpg and upload it to the ncnn working directory on the board,
and download mnet.25-opt.param and mnet.25-opt.bin into the same directory.
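One way to fetch them, assuming the usual raw.githubusercontent.com URL pattern for that repository (a browser download works just as well, and the other models used below can be fetched the same way):

wget https://raw.githubusercontent.com/nihui/ncnn-assets/master/models/mnet.25-opt.param
wget https://raw.githubusercontent.com/nihui/ncnn-assets/master/models/mnet.25-opt.bin

With the model files in place, run the detector: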
./retinaface face5-test.jpg
It runs very fast, returning the detection result almost instantly; the output is shown below:
It also writes image.png with the detection results drawn on top:
2) Multi-object detection in images
Prepare an image containing multiple target objects, as below:
To test content detection on the image above, use the squeezenetssd example. Before running it, download squeezenet_ssd_voc.bin and squeezenet_ssd_voc.param into the ncnn working directory on the board, then run:
./squeezenetssd ./test.jpg
After about half a second, the result is printed:
For the class IDs in the output, see their definitions in the example's source code:
It also writes the detection result image:
Note that this model failed to detect the small dog in the image.
Next, test another model:
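Presumably this is ncnn's yolov7 example, given the yolov7 timing comparison below; assuming its default yolov7-tiny.param/.bin model files from ncnn-assets are in the current directory, the invocation would be:

./yolov7 ./test.jpg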
The detection result image:
Detection took about 2 seconds; apart from the dog being misclassified, everything else was correct, a fairly good result.
Similarly, test yolov5_pnnx with the command line:
./yolov5_pnnx ./testting.jpg
The detection result:
This time the dog was detected correctly, but it took slightly longer than yolov7.
6. Building a complete ncnn application
Combining the Easy-EAI RkMedia examples for camera capture and screen display with the ncnn face-detection program, it can be further turned into a complete ncnn application.
The application is modified from ncnn's scrfd example; the new file is easy_scrfd.cpp. Because the board's MIPI camera is inconvenient to shoot with, a USB camera is used here.
The code is as follows:
// Tencent is pleased to support the open source community by making ncnn available.
//
// Copyright (C) 2021 THL A29 Limited, a Tencent company. All rights reserved.
//
// Licensed under the BSD 3-Clause License (the "License"); you may not use this file except
// in compliance with the License. You may obtain a copy of the License at
//
// https://opensource.org/licenses/BSD-3-Clause
//
// Unless required by applicable law or agreed to in writing, software distributed
// under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
// CONDITIONS OF ANY KIND, either express or implied. See the License for the
// specific language governing permissions and limitations under the License.
#include "net.h"
#if defined(USE_NCNN_SIMPLEOCV)
#include "simpleocv.h"
#else
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#endif
#include <float.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <vector>
// EASY EAI peripheral API headers for the LCD and the USB camera
// (header names assumed from the easyeai-api include paths in the CMake file below)
#include "display.h"
#include "camera.h"
struct FaceObject
{
    cv::Rect_<float> rect;
    float prob;
};

static inline float intersection_area(const FaceObject& a, const FaceObject& b)
{
    cv::Rect_<float> inter = a.rect & b.rect;
    return inter.area();
}

static void qsort_descent_inplace(std::vector<FaceObject>& faceobjects, int left, int right)
{
    int i = left;
    int j = right;
    float p = faceobjects[(left + right) / 2].prob;
    while (i <= j)
    {
        while (faceobjects[i].prob > p)
            i++;
        while (faceobjects[j].prob < p)
            j--;
        if (i <= j)
        {
            // swap
            std::swap(faceobjects[i], faceobjects[j]);
            i++;
            j--;
        }
    }
    #pragma omp parallel sections
    {
        #pragma omp section
        {
            if (left < j) qsort_descent_inplace(faceobjects, left, j);
        }
        #pragma omp section
        {
            if (i < right) qsort_descent_inplace(faceobjects, i, right);
        }
    }
}

static void qsort_descent_inplace(std::vector<FaceObject>& faceobjects)
{
    if (faceobjects.empty())
        return;
    qsort_descent_inplace(faceobjects, 0, faceobjects.size() - 1);
}

static void nms_sorted_bboxes(const std::vector<FaceObject>& faceobjects, std::vector<int>& picked, float nms_threshold)
{
    picked.clear();
    const int n = faceobjects.size();
    std::vector<float> areas(n);
    for (int i = 0; i < n; i++)
    {
        areas[i] = faceobjects[i].rect.area();
    }
    for (int i = 0; i < n; i++)
    {
        const FaceObject& a = faceobjects[i];
        int keep = 1;
        for (int j = 0; j < (int)picked.size(); j++)
        {
            const FaceObject& b = faceobjects[picked[j]];
            // intersection over union
            float inter_area = intersection_area(a, b);
            float union_area = areas[i] + areas[picked[j]] - inter_area;
            // float IoU = inter_area / union_area
            if (inter_area / union_area > nms_threshold)
                keep = 0;
        }
        if (keep)
            picked.push_back(i);
    }
}
// insightface/detection/scrfd/mmdet/core/anchor/anchor_generator.py gen_single_level_base_anchors()
static ncnn::Mat generate_anchors(int base_size, const ncnn::Mat& ratios, const ncnn::Mat& scales)
{
    int num_ratio = ratios.w;
    int num_scale = scales.w;
    ncnn::Mat anchors;
    anchors.create(4, num_ratio * num_scale);
    const float cx = 0;
    const float cy = 0;
    for (int i = 0; i < num_ratio; i++)
    {
        float ar = ratios[i];
        int r_w = round(base_size / sqrt(ar));
        int r_h = round(r_w * ar); //round(base_size * sqrt(ar));
        for (int j = 0; j < num_scale; j++)
        {
            float scale = scales[j];
            float rs_w = r_w * scale;
            float rs_h = r_h * scale;
            float* anchor = anchors.row(i * num_scale + j);
            anchor[0] = cx - rs_w * 0.5f;
            anchor[1] = cy - rs_h * 0.5f;
            anchor[2] = cx + rs_w * 0.5f;
            anchor[3] = cy + rs_h * 0.5f;
        }
    }
    return anchors;
}

static void generate_proposals(const ncnn::Mat& anchors, int feat_stride, const ncnn::Mat& score_blob, const ncnn::Mat& bbox_blob, float prob_threshold, std::vector<FaceObject>& faceobjects)
{
    int w = score_blob.w;
    int h = score_blob.h;
    // generate face proposal from bbox deltas and shifted anchors
    const int num_anchors = anchors.h;
    for (int q = 0; q < num_anchors; q++)
    {
        const float* anchor = anchors.row(q);
        const ncnn::Mat score = score_blob.channel(q);
        const ncnn::Mat bbox = bbox_blob.channel_range(q * 4, 4);
        // shifted anchor
        float anchor_y = anchor[1];
        float anchor_w = anchor[2] - anchor[0];
        float anchor_h = anchor[3] - anchor[1];
        for (int i = 0; i < h; i++)
        {
            float anchor_x = anchor[0];
            for (int j = 0; j < w; j++)
            {
                int index = i * w + j;
                float prob = score[index];
                if (prob >= prob_threshold)
                {
                    // insightface/detection/scrfd/mmdet/models/dense_heads/scrfd_head.py _get_bboxes_single()
                    float dx = bbox.channel(0)[index] * feat_stride;
                    float dy = bbox.channel(1)[index] * feat_stride;
                    float dw = bbox.channel(2)[index] * feat_stride;
                    float dh = bbox.channel(3)[index] * feat_stride;
                    // insightface/detection/scrfd/mmdet/core/bbox/transforms.py distance2bbox()
                    float cx = anchor_x + anchor_w * 0.5f;
                    float cy = anchor_y + anchor_h * 0.5f;
                    float x0 = cx - dx;
                    float y0 = cy - dy;
                    float x1 = cx + dw;
                    float y1 = cy + dh;
                    FaceObject obj;
                    obj.rect.x = x0;
                    obj.rect.y = y0;
                    obj.rect.width = x1 - x0 + 1;
                    obj.rect.height = y1 - y0 + 1;
                    obj.prob = prob;
                    faceobjects.push_back(obj);
                }
                anchor_x += feat_stride;
            }
            anchor_y += feat_stride;
        }
    }
}
static int detect_scrfd(const cv::Mat& bgr, std::vector<FaceObject>& faceobjects)
{
    ncnn::Net scrfd;
    scrfd.opt.use_vulkan_compute = true;
    // model is converted from
    // https://github.com/deepinsight/insightface/tree/master/detection/scrfd
    // the ncnn model https://github.com/nihui/ncnn-assets/tree/master/models
    if (scrfd.load_param("scrfd_500m-opt2.param"))
        exit(-1);
    if (scrfd.load_model("scrfd_500m-opt2.bin"))
        exit(-1);
    int width = bgr.cols;
    int height = bgr.rows;
    // insightface/detection/scrfd/configs/scrfd/scrfd_500m.py
    const int target_size = 640;
    const float prob_threshold = 0.3f;
    const float nms_threshold = 0.45f;
    // scale the longer side to target_size
    int w = width;
    int h = height;
    float scale = 1.f;
    if (w > h)
    {
        scale = (float)target_size / w;
        w = target_size;
        h = h * scale;
    }
    else
    {
        scale = (float)target_size / h;
        h = target_size;
        w = w * scale;
    }
    ncnn::Mat in = ncnn::Mat::from_pixels_resize(bgr.data, ncnn::Mat::PIXEL_BGR2RGB, width, height, w, h);
    // pad to multiple of 32
    int wpad = (w + 31) / 32 * 32 - w;
    int hpad = (h + 31) / 32 * 32 - h;
    ncnn::Mat in_pad;
    ncnn::copy_make_border(in, in_pad, hpad / 2, hpad - hpad / 2, wpad / 2, wpad - wpad / 2, ncnn::BORDER_CONSTANT, 0.f);
    const float mean_vals[3] = {127.5f, 127.5f, 127.5f};
    const float norm_vals[3] = {1 / 128.f, 1 / 128.f, 1 / 128.f};
    in_pad.substract_mean_normalize(mean_vals, norm_vals);
    ncnn::Extractor ex = scrfd.create_extractor();
    ex.input("input.1", in_pad);
    std::vector<FaceObject> faceproposals;
    // stride 8
    {
        ncnn::Mat score_blob, bbox_blob;
        ex.extract("412", score_blob);
        ex.extract("415", bbox_blob);
        const int base_size = 16;
        const int feat_stride = 8;
        ncnn::Mat ratios(1);
        ratios[0] = 1.f;
        ncnn::Mat scales(2);
        scales[0] = 1.f;
        scales[1] = 2.f;
        ncnn::Mat anchors = generate_anchors(base_size, ratios, scales);
        std::vector<FaceObject> faceobjects32;
        generate_proposals(anchors, feat_stride, score_blob, bbox_blob, prob_threshold, faceobjects32);
        faceproposals.insert(faceproposals.end(), faceobjects32.begin(), faceobjects32.end());
    }
    // stride 16
    {
        ncnn::Mat score_blob, bbox_blob;
        ex.extract("474", score_blob);
        ex.extract("477", bbox_blob);
        const int base_size = 64;
        const int feat_stride = 16;
        ncnn::Mat ratios(1);
        ratios[0] = 1.f;
        ncnn::Mat scales(2);
        scales[0] = 1.f;
        scales[1] = 2.f;
        ncnn::Mat anchors = generate_anchors(base_size, ratios, scales);
        std::vector<FaceObject> faceobjects16;
        generate_proposals(anchors, feat_stride, score_blob, bbox_blob, prob_threshold, faceobjects16);
        faceproposals.insert(faceproposals.end(), faceobjects16.begin(), faceobjects16.end());
    }
    // stride 32
    {
        ncnn::Mat score_blob, bbox_blob;
        ex.extract("536", score_blob);
        ex.extract("539", bbox_blob);
        const int base_size = 256;
        const int feat_stride = 32;
        ncnn::Mat ratios(1);
        ratios[0] = 1.f;
        ncnn::Mat scales(2);
        scales[0] = 1.f;
        scales[1] = 2.f;
        ncnn::Mat anchors = generate_anchors(base_size, ratios, scales);
        std::vector<FaceObject> faceobjects8;
        generate_proposals(anchors, feat_stride, score_blob, bbox_blob, prob_threshold, faceobjects8);
        faceproposals.insert(faceproposals.end(), faceobjects8.begin(), faceobjects8.end());
    }
    // sort all proposals by score from highest to lowest
    qsort_descent_inplace(faceproposals);
    // apply nms with nms_threshold
    std::vector<int> picked;
    nms_sorted_bboxes(faceproposals, picked, nms_threshold);
    int face_count = picked.size();
    faceobjects.resize(face_count);
    for (int i = 0; i < face_count; i++)
    {
        faceobjects[i] = faceproposals[picked[i]];
        // adjust offset to original unpadded
        float x0 = (faceobjects[i].rect.x - (wpad / 2)) / scale;
        float y0 = (faceobjects[i].rect.y - (hpad / 2)) / scale;
        float x1 = (faceobjects[i].rect.x + faceobjects[i].rect.width - (wpad / 2)) / scale;
        float y1 = (faceobjects[i].rect.y + faceobjects[i].rect.height - (hpad / 2)) / scale;
        // clip to the original image
        x0 = std::max(std::min(x0, (float)width - 1), 0.f);
        y0 = std::max(std::min(y0, (float)height - 1), 0.f);
        x1 = std::max(std::min(x1, (float)width - 1), 0.f);
        y1 = std::max(std::min(y1, (float)height - 1), 0.f);
        faceobjects[i].rect.x = x0;
        faceobjects[i].rect.y = y0;
        faceobjects[i].rect.width = x1 - x0;
        faceobjects[i].rect.height = y1 - y0;
    }
    return 0;
}
static void draw_faceobjects(cv::Mat& bgr, const std::vector<FaceObject>& faceobjects)
{
    cv::Mat image = bgr.clone();
    for (size_t i = 0; i < faceobjects.size(); i++)
    {
        const FaceObject& obj = faceobjects[i];
        fprintf(stderr, "%.5f at %.2f %.2f %.2f x %.2f\n", obj.prob,
                obj.rect.x, obj.rect.y, obj.rect.width, obj.rect.height);
        cv::rectangle(image, obj.rect, cv::Scalar(0, 255, 0));
        char text[256];
        sprintf(text, "%.1f%%", obj.prob * 100);
        int baseLine = 0;
        cv::Size label_size = cv::getTextSize(text, cv::FONT_HERSHEY_SIMPLEX, 0.5, 1, &baseLine);
        int x = obj.rect.x;
        int y = obj.rect.y - label_size.height - baseLine;
        if (y < 0)
            y = 0;
        if (x + label_size.width > image.cols)
            x = image.cols - label_size.width;
        cv::rectangle(image, cv::Rect(cv::Point(x, y), cv::Size(label_size.width, label_size.height + baseLine)),
                      cv::Scalar(255, 255, 255), -1);
        cv::putText(image, text, cv::Point(x, y + label_size.height),
                    cv::FONT_HERSHEY_SIMPLEX, 0.5, cv::Scalar(0, 0, 0));
    }
    // write the annotated image back instead of popping up a window
    //cv::imshow("image", image);
    //cv::waitKey(0);
    bgr = image;
}
#define DISP_WIDTH 720
#define DISP_HEIGHT 1280
#define CAMERA_WIDTH 1280
#define CAMERA_HEIGHT 720

#include <signal.h>

static int g_run = 0;
static void sigterm_handler(int sig)
{
    fprintf(stderr, "signal %d\n", sig);
    g_run = 1;
}

int main(int argc, char** argv)
{
    int ret;
    cv::Mat image;
    std::vector<FaceObject> faceobjects;
    if (argc != 2)
    {
        fprintf(stderr, "Usage: %s [imagepath]\n", argv[0]);
        return -1;
    }
    signal(SIGINT, sigterm_handler);
    const char* imagepath = argv[1];
    ret = disp_init(DISP_WIDTH, DISP_HEIGHT); // RGB888 by default
    if (ret) {
        printf("disp_init error!\n");
        return 0;
    }
#if 1 // still-image mode: detect faces in one image and show it on the LCD
    int scrimage_size;
    double fx, fy;
    cv::Mat screen_image;

    image = cv::imread(imagepath, 1);
    if (image.empty())
    {
        fprintf(stderr, "cv::imread %s failed\n", imagepath);
        return -1;
    }
    //printf("image: %d,%d\n", image.cols, image.rows);

    // landscape back buffer; it is rotated 90 degrees before being committed
    screen_image = cv::Mat::zeros(DISP_WIDTH, DISP_HEIGHT, CV_8UC3);
    scrimage_size = DISP_WIDTH * DISP_HEIGHT * 3;

    detect_scrfd(image, faceobjects);
    draw_faceobjects(image, faceobjects);

    // keep the aspect ratio: use the smaller scale factor for both axes
    // (cast to double, otherwise this is integer division)
    fx = (double)DISP_HEIGHT / image.cols;
    fy = (double)DISP_WIDTH / image.rows;
    if (fx > fy)
        fx = fy;
    else
        fy = fx;
    cv::resize(image, image, cv::Size(0, 0), fx, fy);
    //printf("image: %d,%d\n", image.cols, image.rows);

    // center the resized image in the screen buffer
    cv::Mat imgROI = screen_image(cv::Rect((DISP_HEIGHT - image.cols) / 2, (DISP_WIDTH - image.rows) / 2, image.cols, image.rows));
    cv::addWeighted(imgROI, 0.0, image, 1.0, 0.0, imgROI); // overlay the image

    cv::rotate(screen_image, screen_image, cv::ROTATE_90_CLOCKWISE);
    disp_commit(screen_image.data, scrimage_size); // show the image
    while (!g_run) {
        usleep(10000);
    }
    disp_exit();
#else // camera mode: grab frames from the USB camera, detect and display in a loop
    char* pbuf = NULL;
    int width, height;
    int outimage_size;

    width = CAMERA_WIDTH;
    height = CAMERA_HEIGHT;
    ret = usbcamera_init(USB2_0, USB_DIRECT, CAMERA_HEIGHT, CAMERA_WIDTH, 90);
    if (ret) {
        printf("usbcamera_init %d, error: %s, %d\n", ret, __func__, __LINE__);
        return -1;
    }
    usbcamera_preset_fps(5);
    outimage_size = width * height * 3;
    pbuf = (char*)malloc(outimage_size);
    while (!g_run)
    {
        ret = usbcamera_getframe(USB2_0, USB_DIRECT, pbuf);
        if (ret) {
            printf("error: %s, %d\n", __func__, __LINE__);
            break;
        }

        // wrap the raw BGR frame in a cv::Mat header (no copy), then detect and draw
        image = cv::Mat(height, width, CV_8UC3, pbuf);
        detect_scrfd(image, faceobjects);
        printf("faceobjects: %d\n", (int)faceobjects.size());

        draw_faceobjects(image, faceobjects);
        disp_commit(image.data, outimage_size); // show the frame
        usleep(1000);
    }
    usbcamera_exit(USB2_0, USB_DIRECT);
    disp_exit();
    free(pbuf);
    pbuf = NULL;
#endif
    return 0;
}
Then write the CMake file:
cmake_minimum_required(VERSION 2.8.4)

project(ncnn-demo)

set(CMAKE_SYSTEM_NAME Linux)
set(CMAKE_CROSSCOMPILING TRUE)
set(CMAKE_C_COMPILER "arm-linux-gnueabihf-gcc")
set(CMAKE_CXX_COMPILER "arm-linux-gnueabihf-g++")

# -I
set(inc ./)
include_directories(
    ./include/ncnn/
    /mnt/hgfs/EASY-EAI-NANO/proj/EASY-EAI-Toolkit-C-Demo/easyeai-api/peripheral_api/display/
    /mnt/hgfs/EASY-EAI-NANO/proj/EASY-EAI-Toolkit-C-Demo/easyeai-api/peripheral_api/camera/
)

# -L
link_directories(
    ./lib/
    /mnt/hgfs/EASY-EAI-NANO/proj/EASY-EAI-Toolkit-C-Demo/easyeai-api/peripheral_api/display/
    /mnt/hgfs/EASY-EAI-NANO/proj/EASY-EAI-Toolkit-C-Demo/easyeai-api/peripheral_api/camera/
    /opt/rv1126_rv1109_sdk/buildroot/output/rockchip_face_board/host/arm-buildroot-linux-gnueabihf/sysroot/usr/lib/
)

#--------------------------
# ncnn_app
#--------------------------
add_executable(ncnn-demo ncnn-demo.cpp)              # -o
target_include_directories(ncnn-demo PRIVATE ${inc}) # -I
target_link_libraries(ncnn-demo ncnn pthread -fopenmp easymedia rockchip_mpp rkaiq rga
    -lopencv_calib3d
    -lopencv_core
    -lopencv_dnn
    -lopencv_features2d
    -lopencv_flann
    -lopencv_highgui
    -lopencv_imgcodecs
    -lopencv_imgproc
    -lopencv_ml
    -lopencv_objdetect
    -lopencv_photo
    -lopencv_shape
    -lopencv_stitching
    -lopencv_superres
    -lopencv_videoio
    -lopencv_video
    -lopencv_videostab)                              # -l
target_link_libraries(ncnn-demo libdisplay.a libcamera.a)
Then build it.
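A sketch of the build step, run inside the same cross-compilation environment:

mkdir -p build
cd build
cmake ..
make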
Upload the binary to the board and run it.
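For example (a sketch; the board IP is a placeholder, and the scrfd_500m-opt2 model files must already be in the working directory on the board):

scp build/ncnn-demo root@192.168.1.100:/home/root/ncnn/
ssh root@192.168.1.100 'cd /home/root/ncnn && ./ncnn-demo ./face5-test.jpg'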
The final test setup: frames are captured over the USB camera, scrfd performs face detection, and the annotated result images are shown on the LCD screen.
The test confirms the pipeline of grabbing frames from the camera, running scrfd face detection, and displaying the annotated frames on screen, at roughly 2 frames per second. The main bottleneck is the conversion between the Easy-EAI camera/display wrappers and OpenCV Mat images, which is very inefficient: even without face detection, plain pass-through display only manages about 3 fps. The board's own rkmedia examples, by contrast, push captured frames to the display driver directly through shared memory, which is far more efficient.
7. Summary of ncnn development and testing
Judging from the ncnn benchmark results on the Easy-EAI-Nano board, the board scores quite well and is fully capable of meeting the needs of typical embedded neural-network inference.
In practical applications, further optimization is still needed, mainly to improve the efficiency of data transfer and format conversion. This is usually done through the low-level rkmedia interfaces for fast data conversion; the inference wrapper can also be optimized to convert generic image formats into hardware-native formats, letting the hardware accelerate further for the best results.
Combined with the board's other multimedia and network communication capabilities, ncnn can power a wide range of neural-network edge applications. The ncnn project also ships tools for converting models from various other frameworks, making it easy to bring existing models over to ncnn, and the open-source community provides very rich resources, so it is easy to extend.