
李汉荣

[Q&A]

Unknown exception from LoadNetwork()

Hi,

I ran into a problem and failed to load a model (attached; converted from a TensorFlow saved_model).
The model loader was adapted from OpenVINO's sample code "classification_sample".

Following are error messages.
[ INFO ] Loading model to the plugin
[ ERROR ] Unknown exception
c:\intel\computer_vision_sdk_2018.3.343\deployment_tools\inference_engine\include\details/ie_exception_conversion.hpp:80

And I found the error occurs in the following code segment, in file "ie_plugin_cpp.hpp":
    ExecutableNetwork LoadNetwork(CNNNetwork network, const std::map<std::string, std::string> &config) {
        IExecutableNetwork::Ptr ret;
        CALL_STATUS_FNC(LoadNetwork, ret, network, config);
        if (ret.get() == nullptr) THROW_IE_EXCEPTION << "Internal error: pointer to executable network is null";
        return ExecutableNetwork(ret);
    }

LoadNetwork() is not traceable.
How can I debug this problem?

Thanks,
Jack



  •   saved_model.rar 16.1 MB  
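One way to dig further: the sample's [ ERROR ] line appears to print everything the exception carried, since THROW_IE_EXCEPTION appends the throwing file and line, which is where the ie_exception_conversion.hpp:80 part of the log comes from. Wrapping the call yourself still helps to confirm the throw site and to capture any plugin message in other failure modes. A minimal sketch against the 2018 R3 Inference Engine C++ API; the standalone main() and the IR paths are illustrative, the calls follow the classification sample:

    #include <inference_engine.hpp>
    #include <iostream>

    using namespace InferenceEngine;

    int main() {
        // Same setup as classification_sample (2018 R3 C++ API); IR paths are placeholders.
        InferencePlugin plugin = PluginDispatcher({""}).getPluginByDevice("CPU");
        CNNNetReader reader;
        reader.ReadNetwork("saved_model.xml");
        reader.ReadWeights("saved_model.bin");
        CNNNetwork network = reader.getNetwork();

        try {
            ExecutableNetwork exec = plugin.LoadNetwork(network, {});
            std::cout << "Model loaded OK" << std::endl;
        } catch (const details::InferenceEngineException &ex) {
            // what() includes the message plus the throwing file:line,
            // matching the "ie_exception_conversion.hpp:80" seen in the log.
            std::cerr << "LoadNetwork failed: " << ex.what() << std::endl;
            return 1;
        }
        return 0;
    }

If the same IR loads under a different device plugin, that hints at a layer the MKLDNN (CPU) plugin cannot handle rather than a broken IR.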

Replies (22)

汤宇

2018-11-5 11:28:46


Hi Jack,

Thanks for the reply. Please let us know when you try it out.

Regards,
Deepthi Raj

李玉鑫

2018-11-5 11:42:57
Quote: jerry1978 posted on 2018-11-5 16:16
Hi Jack, thanks for the reply. Please let us know when you try it out. Regards, Deepthi Raj


Hi, Deepthi

I modified my model to take a four-dimensional input. (tensorflow_export.006.rar in attachment)

And I used the following commands to convert the model into OpenVINO format. (openvino_model.006-cmd1.rar and openvino_model.006-cmd2.rar in attachment)
-command1: python C:\Intel\computer_vision_sdk_2018.3.343\deployment_tools\model_optimizer\mo_tf.py --saved_model_dir tensorflow_export.006\1537926199 --input_shape [1,256,256,1]
-command2: python C:\Intel\computer_vision_sdk_2018.3.343\deployment_tools\model_optimizer\mo_tf.py --saved_model_dir tensorflow_export.006\1537926199 --input_shape [1,256,256,1] --output softmax_tensor

Both models should have a two-dimensional output.
I used classification_sample.exe (unmodified from the original SDK) to evaluate the models.
But only command2's model can be evaluated.
And even command2's output is incorrect: it always produces the same estimate (1.0000 confidence for the first class, 0.0000 for the second) regardless of the input image.
I'm wondering whether something is wrong in my parameters for mo_tf.py.
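A quick way to see what each converted IR actually exposes, and why the sample rejects the cmd1 model with "Incorrect output dimensions for classification model" in the log below, is to print the output shapes the IR records. A minimal sketch against the same 2018 R3 Inference Engine C++ API; the standalone main() and IR paths are illustrative:

    #include <inference_engine.hpp>
    #include <iostream>

    using namespace InferenceEngine;

    int main() {
        CNNNetReader reader;
        reader.ReadNetwork("saved_model.xml");   // placeholder IR paths
        reader.ReadWeights("saved_model.bin");
        CNNNetwork network = reader.getNetwork();

        // classification_sample only accepts one classification-shaped output
        // (batch x number-of-classes); a two-class model should show a 2 and a 1
        // here. Note this API version may store the dims vector minor-to-major.
        for (const auto &out : network.getOutputsInfo()) {
            std::cout << "output '" << out.first << "' dims:";
            for (size_t d : out.second->dims) std::cout << " " << d;
            std::cout << std::endl;
        }
        return 0;
    }

If the cmd1 IR lists more than one output here, that lines up with the multiple-exit-node explanation in the reply below.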
 
Command1's model start >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
classification_sample.exe -m D:\JKWorkQuantaII\2018_0831-ai_edge_server\2018_0910-convert_particle_detect_model\openvino_model.006-cmd1\saved_model.xml -i D:\JKWorkQuantaII\2018_0831-ai_edge_server\2018_0910-convert_particle_detect_model\test_256x256.jpg
[ INFO ] InferenceEngine:
        API version ............ 1.2
        Build .................. 13911
[ INFO ] Parsing input parameters
[ INFO ] Loading plugin
        API version ............ 1.2
        Build .................. win_20180511
        Description ....... MKLDNNPlugin
[ INFO ] Loading network files:
        D:\JKWorkQuantaII\2018_0831-ai_edge_server\2018_0910-convert_particle_detect_model\openvino_model.006-cmd1\saved_model.xml
        D:\JKWorkQuantaII\2018_0831-ai_edge_server\2018_0910-convert_particle_detect_model\openvino_model.006-cmd1\saved_model.bin
[ INFO ] Preparing input blobs
[ INFO ] Batch size is 1
[ INFO ] Preparing output blobs
[ ERROR ] Incorrect output dimensions for classification model
<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
 
Command2's model start >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>

classification_sample.exe -m D:\JKWorkQuantaII\2018_0831-ai_edge_server\2018_0910-convert_particle_detect_model\openvino_model.006-cmd2\saved_model.xml -i D:\JKWorkQuantaII\2018_0831-ai_edge_server\2018_0910-convert_particle_detect_model\test_256x256.jpg
[ INFO ] InferenceEngine:
        API version ............ 1.2
        Build .................. 13911
[ INFO ] Parsing input parameters
[ INFO ] Loading plugin
        API version ............ 1.2
        Build .................. win_20180511
        Description ....... MKLDNNPlugin
[ INFO ] Loading network files:
        D:\JKWorkQuantaII\2018_0831-ai_edge_server\2018_0910-convert_particle_detect_model\openvino_model.006-cmd2\saved_model.xml
        D:\JKWorkQuantaII\2018_0831-ai_edge_server\2018_0910-convert_particle_detect_model\openvino_model.006-cmd2\saved_model.bin
[ INFO ] Preparing input blobs
[ INFO ] Batch size is 1
[ INFO ] Preparing output blobs
[ INFO ] Loading model to the plugin
[ INFO ] Starting inference (1 iterations)
[ INFO ] Processing output blobs
[ WARNING ] -nt 10 is not available for this network (-nt should be less than 3 and more than 0)
            will be used maximal value : 2
Top 2 results:
Image D:\JKWorkQuantaII\2018_0831-ai_edge_server\2018_0910-convert_particle_detect_model\test_256x256.jpg
1 1.0000000 label NO
0 0.0000000 label YES
total inference time: 45.3929044
Average running time of one iteration: 45.3929044 ms
Throughput: 22.0298748 FPS
[ INFO ] Execution successful
<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
 
I attach the files I used:
  •   test_256x256.jpg 12.3 KB
  •   openvino_model.006-cmd2.rar 16.1 MB
  •   openvino_model.006-cmd1.rar 16.1 MB
  •   tensorflow_export.006.rar 16.1 MB
 
Thanks,
Jack
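On the frozen 1.0000 / 0.0000 scores: a common cause worth ruling out is an input-preprocessing mismatch. classification_sample requests a U8 input blob and copies raw 0..255 pixel values into it, while a TensorFlow model trained on 0..1 inputs can saturate its softmax. A hedged sketch of the FP32 route with the same 2018 R3 API (paths are placeholders); alternatively the scaling can be baked into the IR at conversion time with mo_tf.py --scale 255:

    #include <inference_engine.hpp>

    using namespace InferenceEngine;

    int main() {
        CNNNetReader reader;
        reader.ReadNetwork("saved_model.xml");   // placeholder IR paths
        reader.ReadWeights("saved_model.bin");
        CNNNetwork network = reader.getNetwork();

        // Before LoadNetwork(): request an FP32 input blob instead of the
        // sample's U8, then fill it with pixels scaled the way the model was
        // trained (e.g. divided by 255). A sketch, not a confirmed fix.
        for (auto &item : network.getInputsInfo()) {
            item.second->setPrecision(Precision::FP32);
        }

        // ... then LoadNetwork(), fill the input blob with pixel/255.0f values,
        // and run Infer() as in the sample.
        return 0;
    }

If the plain TensorFlow run suggested in the next reply gives varied scores on the same images, preprocessing is the first thing to compare between the two pipelines.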



汤宇

2018-11-5 11:57:49


Hi Jack,

While creating the IR files from the pb file, in this case we have to specify the output node name (softmax_tensor, as in command 2), since the model definition has multiple exit nodes (nodes whose output is not connected to any other node).

Regarding the accuracy, could you please check whether the same result comes from running inference on the TensorFlow model directly (without using OpenVINO)?

Regards,
Deepthi Raj.

李汉荣

2018-11-5 12:07:01
Quote: jerry1978 posted on 2018-11-5 16:45
Hi Jack, while creating the IR files from the pb file we have to specify the output node name (softmax_tensor, command 2), since the model definition has multiple exit nodes. Regarding the accuracy, could you check whether the same result comes from inference on the TensorFlow model (without OpenVINO)? Regards, Deepthi Raj.


Hi, Deepthi
 
The TensorFlow model works fine.
 
Thanks,
Jack

More replies…
