Hi,

I ran into a problem and failed to load a model (attached; converted from a TensorFlow saved_model). The model loader was modified from OpenVINO's sample code "classification_sample". The error messages are:

[ INFO ] Loading model to the plugin
[ ERROR ] Unknown exception
c:\intel\computer_vision_sdk_2018.3.343\deployment_tools\inference_engine\include\details/ie_exception_conversion.hpp:80

I found that the error occurs in the following code segment in "ie_plugin_cpp.hpp":

    ExecutableNetwork LoadNetwork(CNNNetwork network, const std::map<std::string, std::string> &config) {
        IExecutableNetwork::Ptr ret;
        CALL_STATUS_FNC(LoadNetwork, ret, network, config);
        if (ret.get() == nullptr) THROW_IE_EXCEPTION << "Internal error: pointer to executable network is null";
        return ExecutableNetwork(ret);
    }

LoadNetwork() is not traceable. What should I do to debug this problem?

Thanks,
Jack

Attachment: saved_model.rar (16.1 MB)
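One way to get more context before the failing call is to print what the IR actually declares and then isolate LoadNetwork() in its own try/catch. A minimal sketch, assuming the 2018-era Inference Engine C++ API (the model paths are placeholders and exact accessor names may differ between releases):

    #include <inference_engine.hpp>
    #include <iostream>

    using namespace InferenceEngine;

    int main() {
        try {
            // Read the IR produced by mo_tf.py (paths are placeholders).
            CNNNetReader reader;
            reader.ReadNetwork("saved_model.xml");
            reader.ReadWeights("saved_model.bin");
            CNNNetwork network = reader.getNetwork();

            // Print input/output names and ranks before LoadNetwork(), to spot
            // shape problems that the plugin otherwise reports only as an exception.
            for (const auto &in : network.getInputsInfo())
                std::cout << "input:  " << in.first << ", rank " << in.second->getDims().size() << std::endl;
            for (const auto &out : network.getOutputsInfo())
                std::cout << "output: " << out.first << ", rank " << out.second->getDims().size() << std::endl;

            // The call that throws inside ie_plugin_cpp.hpp.
            InferencePlugin plugin = PluginDispatcher({""}).getPluginByDevice("CPU");
            ExecutableNetwork exec = plugin.LoadNetwork(network, {});
            std::cout << "LoadNetwork succeeded" << std::endl;
        } catch (const std::exception &e) {
            // InferenceEngineException derives from std::exception; what() carries the
            // throwing location (e.g. ie_exception_conversion.hpp:80 seen above).
            std::cerr << "Failed: " << e.what() << std::endl;
            return 1;
        }
        return 0;
    }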
22 replies
Hi Jack,

Thanks for the reply. Please let us know when you try it out.

Regards,
Deepthi Raj
Hi, Deepthi

I modified my model to a four-dimensional input (tensorflow_export.006.rar in the attachment) and used the following commands to convert the model into OpenVINO format (openvino_model.006-cmd1.rar and openvino_model.006-cmd2.rar in the attachment).

- command1:
python C:\Intel\computer_vision_sdk_2018.3.343\deployment_tools\model_optimizer\mo_tf.py --saved_model_dir tensorflow_export.006\1537926199 --input_shape [1,256,256,1]

- command2:
python C:\Intel\computer_vision_sdk_2018.3.343\deployment_tools\model_optimizer\mo_tf.py --saved_model_dir tensorflow_export.006\1537926199 --input_shape [1,256,256,1] --output softmax_tensor

Both models should have a two-dimensional output. I use classification_sample.exe (unmodified from the original SDK) to evaluate the models, but only command2's model can be evaluated. Also, command2's model output is not correct: it always gives the same estimation (1.0000 confidence for the first class and 0.0000 for the second) for different input images. I'm wondering whether something is wrong with my parameters for mo_tf.py.

Command1's model start >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>

classification_sample.exe -m D:\JKWorkQuantaII\2018_0831-ai_edge_server\2018_0910-convert_particle_detect_model\openvino_model.006-cmd1\saved_model.xml -i D:\JKWorkQuantaII\2018_0831-ai_edge_server\2018_0910-convert_particle_detect_model\test_256x256.jpg
[ INFO ] InferenceEngine:
        API version ............ 1.2
        Build .................. 13911
[ INFO ] Parsing input parameters
[ INFO ] Loading plugin
        API version ............ 1.2
        Build .................. win_20180511
        Description ....... MKLDNNPlugin
[ INFO ] Loading network files:
        D:\JKWorkQuantaII\2018_0831-ai_edge_server\2018_0910-convert_particle_detect_model\openvino_model.006-cmd1\saved_model.xml
        D:\JKWorkQuantaII\2018_0831-ai_edge_server\2018_0910-convert_particle_detect_model\openvino_model.006-cmd1\saved_model.bin
[ INFO ] Preparing input blobs
[ INFO ] Batch size is 1
[ INFO ] Preparing output blobs
[ ERROR ] Incorrect output dimensions for classification model
<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<

Command2's model start >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>

classification_sample.exe -m D:\JKWorkQuantaII\2018_0831-ai_edge_server\2018_0910-convert_particle_detect_model\openvino_model.006-cmd2\saved_model.xml -i D:\JKWorkQuantaII\2018_0831-ai_edge_server\2018_0910-convert_particle_detect_model\test_256x256.jpg
[ INFO ] InferenceEngine:
        API version ............ 1.2
        Build .................. 13911
[ INFO ] Parsing input parameters
[ INFO ] Loading plugin
        API version ............ 1.2
        Build .................. win_20180511
        Description ....... MKLDNNPlugin
[ INFO ] Loading network files:
        D:\JKWorkQuantaII\2018_0831-ai_edge_server\2018_0910-convert_particle_detect_model\openvino_model.006-cmd2\saved_model.xml
        D:\JKWorkQuantaII\2018_0831-ai_edge_server\2018_0910-convert_particle_detect_model\openvino_model.006-cmd2\saved_model.bin
[ INFO ] Preparing input blobs
[ INFO ] Batch size is 1
[ INFO ] Preparing output blobs
[ INFO ] Loading model to the plugin
[ INFO ] Starting inference (1 iterations)
[ INFO ] Processing output blobs
[ WARNING ] -nt 10 is not available for this network (-nt should be less than 3 and more than 0)
            will be used maximal value : 2

Top 2 results:

Image D:\JKWorkQuantaII\2018_0831-ai_edge_server\2018_0910-convert_particle_detect_model\test_256x256.jpg

1 1.0000000 label NO
0 0.0000000 label YES

total inference time: 45.3929044
Average running time of one iteration: 45.3929044 ms
Throughput: 22.0298748 FPS

[ INFO ] Execution successful
<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<

I attach the files I used.

Thanks,
Jack

Attachments: test_256x256.jpg (12.3 KB), openvino_model.006-cmd2.rar (16.1 MB), openvino_model.006-cmd1.rar (16.1 MB), tensorflow_export.006.rar (16.1 MB)
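A hedged way to see why command1's IR fails with "Incorrect output dimensions" is to print how many outputs the converted model exposes and their ranks; the stock classification_sample expects a single output it can read as a flat (batch x classes) blob. A small sketch, assuming the same 2018-era API and placeholder paths as above:

    #include <inference_engine.hpp>
    #include <iostream>

    using namespace InferenceEngine;

    int main() {
        CNNNetReader reader;
        reader.ReadNetwork("saved_model.xml");   // IR from mo_tf.py (placeholder path)
        reader.ReadWeights("saved_model.bin");
        CNNNetwork network = reader.getNetwork();

        OutputsDataMap outputs = network.getOutputsInfo();
        std::cout << "number of outputs: " << outputs.size() << std::endl;
        for (const auto &out : outputs) {
            // For a two-class classifier the single output should have rank 2
            // (batch and class scores); extra exit nodes or higher-rank outputs
            // are the kind of thing the sample rejects.
            std::cout << out.first << ": rank " << out.second->getDims().size() << std::endl;
        }
        return 0;
    }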
Hi Jack,

While creating the IR files from the pb file, in this case we have to mention the output node name (softmax_tensor, as in command 2), since the model definition has multiple exit nodes (nodes whose output is not connected to any other node).

Regarding the accuracy, could you please check whether the same result comes from an inference run on the TensorFlow model (without using OpenVINO)?

Regards,
Deepthi Raj.
Hi, Deepthi

The TensorFlow model works well.

Thanks,
Jack
Hi, Deepthi

I found a format mismatch issue between my TensorFlow model and the OpenVINO model. I'll clarify it first and then let you know the update.

Thanks,
Jack
Hi, Deepthi

I found that my TensorFlow model expects normalized pixel values (0~1) as input, while classification_sample.exe feeds raw values (0~255) to the OpenVINO model. Is there any parameter (during model conversion) I can use to overcome the format mismatch issue?

Thanks,
Jack
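The Model Optimizer has a scaling option for exactly this (the --scale flag used later in this thread). A hedged alternative is to do the scaling in the application when filling the input blob. A minimal sketch, assuming the input was left at FP32 precision with a 1x1x256x256 NCHW layout, and using OpenCV only for brevity (the stock sample uses its own image reader; names here are placeholders):

    #include <inference_engine.hpp>
    #include <opencv2/opencv.hpp>

    using namespace InferenceEngine;

    // Copy a grayscale image into the FP32 input blob, scaling 0-255 pixel
    // values down to the 0-1 range the TensorFlow model was trained with.
    void fillNormalizedInput(InferRequest &request, const std::string &inputName,
                             const std::string &imagePath) {
        cv::Mat gray = cv::imread(imagePath, cv::IMREAD_GRAYSCALE);
        cv::resize(gray, gray, cv::Size(256, 256));            // match the 256x256 input

        Blob::Ptr input = request.GetBlob(inputName);
        float *data = input->buffer().as<float *>();
        for (int y = 0; y < gray.rows; ++y)
            for (int x = 0; x < gray.cols; ++x)
                data[y * gray.cols + x] = gray.at<uchar>(y, x) / 255.0f;   // 0-255 -> 0-1
    }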
Hi, Deepthi

I used the following command to convert the model, and the output looks good.

Command3:
python C:\Intel\computer_vision_sdk_2018.3.343\deployment_tools\model_optimizer\mo_tf.py --saved_model_dir tensorflow_export.006\1537926199 --input_shape [1,256,256,1] --output softmax_tensor --scale 256

Thanks,
Jack
Hi Jack,

Happy to hear that it worked :) We are closing this thread. If you have any other issues, please raise a new thread. Feel free to contact us in case of any queries. :)

Regards,
Deepthi Raj.
Hi Jack,

Thanks for reaching out to us. Could you please ensure that you are connected to the proper network? If the issue still persists, please check whether you are able to run the command below, which uses the standard squeezenet model and the default classification_sample.exe available in the CV SDK.

cd deployment_tools\inference_engine\samples
classification_sample -m deployment_tools\demo\ir\squeezenet1.1.xml -i deployment_tools\demo\car.png -d CPU

If this works, please share the modified classification_sample.exe file. Waiting for your response :)

Regards,
Deepthi Raj.
Hi, Deepthi

Please check my modified classification_sample in the attachment. The original classification_sample.exe works fine in my environment, as follows:

C:\Intel\computer_vision_sdk_2018.3.343\deployment_tools\inference_engine\bin\intel64\Debug>classification_sample.exe -m C:\Intel\computer_vision_sdk_2018.3.343\deployment_tools\demo\ir\squeezenet1.1.xml -i C:\Intel\computer_vision_sdk_2018.3.343\deployment_tools\demo\car.png -d CPU
[ INFO ] InferenceEngine:
        API version ............ 1.2
        Build .................. 13911
[ INFO ] Parsing input parameters
[ INFO ] Loading plugin
        API version ............ 1.2
        Build .................. win_20180511
        Description ....... MKLDNNPlugin
[ INFO ] Loading network files:
        C:\Intel\computer_vision_sdk_2018.3.343\deployment_tools\demo\ir\squeezenet1.1.xml
        C:\Intel\computer_vision_sdk_2018.3.343\deployment_tools\demo\ir\squeezenet1.1.bin
[ INFO ] Preparing input blobs
[ WARNING ] Image is resized from (787, 259) to (227, 227)
[ INFO ] Batch size is 1
[ INFO ] Preparing output blobs
[ INFO ] Loading model to the plugin
[ INFO ] Starting inference (1 iterations)
[ INFO ] Processing output blobs

Top 10 results:

Image C:\Intel\computer_vision_sdk_2018.3.343\deployment_tools\demo\car.png

817 0.8363345 label sports car, sport car
511 0.0946488 label convertible
479 0.0419131 label car wheel
751 0.0091071 label racer, race car, racing car
436 0.0068161 label beach wagon, station wagon, wagon, estate car, beach waggon, station waggon, waggon
656 0.0037564 label minivan
586 0.0025741 label half track
717 0.0016069 label pickup, pickup truck
864 0.0012027 label tow truck, tow car, wrecker
581 0.0005882 label grille, radiator grille

total inference time: 39.4296311
Average running time of one iteration: 39.4296311 ms
Throughput: 25.3616372 FPS

[ INFO ] Execution successful

Thanks,
Jack

Attachment: _classification_sample.jkglee.rar (8.4 KB)
Hi Jack,

We are working on this case and will get back to you in a day or two.

Regards,
Deepthi Raj.
Got it, thanks.

-Jack
Hi Jack,

Could you please confirm whether you were able to resolve the issue?

Regards,
Deepthi Raj
Hi, Deepthi

No, the issue is still unsolved.

Thanks,
Jack
Hi Jack,

Actually, we could not recreate the issue you are facing. Could you please provide the steps you followed to execute the code? We were able to run it successfully by making some modifications in the model file and using the default classification_sample available in the CV SDK. Could you please share the label file so that we can check whether it actually works with those changes? It would also be great if you could explain why the changes were made to the default classification_sample. Waiting for your reply :)

Regards,
Deepthi Raj
Hi, Deepthi

I modified classification_sample to match my model's input format (only 1 channel, grayscale), while the original classification_sample takes 3 channels (RGB) as the input format. I use the following command to load my model; the input image can be any dummy JPEG file.

- classification_sample.exe -m D:\saved_model.xml -i D:\any_dummy_image.jpg

Because my classification_sample.exe is not yet complete, it quits after loading the network model files. The output of my model should be 0 or 1, so no label file is needed. Could you also let me know what you modified (in the model files) to bring it up with classification_sample.exe? Then I can check whether it works in my environment.

Thanks,
Jack
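Since the modification concerns a 1-channel rather than 3-channel input, one hedged way to adapt the sample is a channel-count-agnostic version of its blob-filling loop, so the same code serves both cases. A sketch assuming U8 precision, NCHW layout, and OpenCV for reading the image (names are illustrative, not Jack's actual code):

    #include <inference_engine.hpp>
    #include <opencv2/opencv.hpp>

    using namespace InferenceEngine;

    // Fill an NCHW U8 blob from an interleaved OpenCV image. With a CV_8UC1
    // grayscale image the channel loop runs once, so the same routine covers
    // the 1-channel model and the 3-channel (BGR) inputs the stock sample expects.
    void fillBlob(Blob::Ptr blob, const cv::Mat &img) {
        auto *dst = blob->buffer().as<uint8_t *>();
        const int channels = img.channels(), height = img.rows, width = img.cols;
        for (int c = 0; c < channels; ++c)                   // planar destination
            for (int y = 0; y < height; ++y)
                for (int x = 0; x < width; ++x)
                    dst[(c * height + y) * width + x] =
                        img.ptr<uint8_t>(y)[x * channels + c];   // interleaved source
    }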
Hi Jack,

The program ran successfully when one more dimension was added to the first two layers (Placeholder and Reshape) in saved_model.xml. I have attached the updated xml file. That means the driver program (classification_sample) expects four dimensions for the input (n, c, h, w), and it would need to be modified to work with a 3-dimensional (n, h, w) input.

n - number of images
c - number of channels
h - height
w - width

Hope this clarifies your query.

Regards,
Deepthi Raj.
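In code terms, the point about (n, c, h, w) is that the sample unpacks four input dimensions, so a 3-D (n, h, w) IR fails before inference even starts. A hedged check along these lines makes the requirement explicit (2018-era API; accessor names, and whether dimensions are reported in NCHW or reversed order, can differ between releases):

    #include <inference_engine.hpp>
    #include <stdexcept>

    using namespace InferenceEngine;

    // Reject inputs that are not 4-D, mirroring the assumption described above.
    void requireFourDimensionalInput(CNNNetwork &network) {
        for (const auto &in : network.getInputsInfo()) {
            if (in.second->getDims().size() != 4)   // e.g. a 3-D {1, 256, 256} input lands here
                throw std::logic_error("classification_sample expects a 4-D (N, C, H, W) input: " + in.first);
        }
    }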
Hi, Deepthi

I can't find the attachment. Could you post it again?

Thanks,
Jack
Hi Jack,

Sorry that it did not get attached properly. Attaching it again. Please let us know if this clarifies your query.

Regards,
Deepthi Raj

Attachment: saved_model.zip (1.4 KB)