Attachments:
- test_256x256.jpg (12.3 KB)
- openvino_model.006-cmd2.rar (16.1 MB)
- openvino_model.006-cmd1.rar (16.1 MB)
- tensorflow_export.006.rar (16.1 MB)
Hi Deepthi,
I modified my model to take four-dimensional input (tensorflow_export.006.rar in the attachments),
and used the following commands to convert it to OpenVINO format (openvino_model.006-cmd1.rar and openvino_model.006-cmd2.rar in the attachments):
- command1: python C:\Intel\computer_vision_sdk_2018.3.343\deployment_tools\model_optimizer\mo_tf.py --saved_model_dir tensorflow_export.006\1537926199 --input_shape [1,256,256,1]
- command2: python C:\Intel\computer_vision_sdk_2018.3.343\deployment_tools\model_optimizer\mo_tf.py --saved_model_dir tensorflow_export.006\1537926199 --input_shape [1,256,256,1] --output softmax_tensor
Both models should have a two-dimensional output.
I use classification_sample.exe (unmodified from the original SDK) to evaluate the models,
but only command2's model can be evaluated.
Moreover, command2's model output is not correct: it always produces the same estimate (1.0000 confidence for the first class and 0.0000 for the second) regardless of the input image.
I wonder whether something is wrong with my parameters for mo_tf.py.
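One possible cause of a constant 1.0000/0.0000 prediction, besides the mo_tf.py parameters, is an input-scaling mismatch: if the sample feeds raw 0–255 pixel values while the network was trained on normalized (e.g. 0–1) inputs, the logits can grow so large that softmax saturates to exactly 1.0/0.0 for every image. This is only a hypothesis, not confirmed from the attachments; a minimal sketch of the effect:

```python
# Hypothetical illustration (not the actual model): logits scaled up by a
# large factor (as with unnormalized 0-255 input) drive softmax into
# saturation, so every image yields the same 1.0 / 0.0 "confidence".
import math

def softmax(logits):
    m = max(logits)                        # subtract max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

moderate = softmax([2.0, -1.0])            # well-scaled logits: informative probabilities
saturated = softmax([800.0, -400.0])       # logits blown up: probabilities pinned at 1 and 0

print(moderate)    # roughly [0.95, 0.05]
print(saturated)   # exactly [1.0, 0.0], for any similarly scaled input
```

If this is the cause, the fix is in preprocessing (or MO's scaling options), not in the output node selection.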
Command1's model start >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
classification_sample.exe -m D:\JKWorkQuantaII\2018_0831-ai_edge_server\2018_0910-convert_particle_detect_model\openvino_model.006-cmd1\saved_model.xml -i D:\JKWorkQuantaII\2018_0831-ai_edge_server\2018_0910-convert_particle_detect_model\test_256x256.jpg
[ INFO ] InferenceEngine:
API version ............ 1.2
Build .................. 13911
[ INFO ] Parsing input parameters
[ INFO ] Loading plugin
API version ............ 1.2
Build .................. win_20180511
Description ....... MKLDNNPlugin
[ INFO ] Loading network files:
D:\JKWorkQuantaII\2018_0831-ai_edge_server\2018_0910-convert_particle_detect_model\openvino_model.006-cmd1\saved_model.xml
D:\JKWorkQuantaII\2018_0831-ai_edge_server\2018_0910-convert_particle_detect_model\openvino_model.006-cmd1\saved_model.bin
[ INFO ] Preparing input blobs
[ INFO ] Batch size is 1
[ INFO ] Preparing output blobs
[ ERROR ] Incorrect output dimensions for classification model
<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
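The "Incorrect output dimensions" error above is consistent with the classification sample accepting only a 2-D output blob of shape [batch, num_classes]. Since command1 omits --output, the Model Optimizer may pick a different final node, leaving the converted model with a non-2-D output that the sample rejects. A sketch of that kind of check (the shapes below are illustrative assumptions, not read from the actual model):

```python
# Sketch of a classification-sample-style output check: only a 2-D
# [batch, num_classes] blob is accepted; anything else is rejected
# with an error like the one in the log above.
def check_classification_output(shape):
    if len(shape) != 2:
        raise ValueError("Incorrect output dimensions for classification model")
    return shape[1]   # number of classes

print(check_classification_output([1, 2]))       # command2-style 2-D output: accepted
# check_classification_output([1, 256, 256, 2]) # hypothetical 4-D output: raises ValueError
```

If this is what happens, command1's model would need --output (or an equivalent node selection) to end at the 2-D softmax, which matches command2 loading successfully.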
Command2's model start >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
classification_sample.exe -m D:\JKWorkQuantaII\2018_0831-ai_edge_server\2018_0910-convert_particle_detect_model\openvino_model.006-cmd2\saved_model.xml -i D:\JKWorkQuantaII\2018_0831-ai_edge_server\2018_0910-convert_particle_detect_model\test_256x256.jpg
[ INFO ] InferenceEngine:
API version ............ 1.2
Build .................. 13911
[ INFO ] Parsing input parameters
[ INFO ] Loading plugin
API version ............ 1.2
Build .................. win_20180511
Description ....... MKLDNNPlugin
[ INFO ] Loading network files:
D:\JKWorkQuantaII\2018_0831-ai_edge_server\2018_0910-convert_particle_detect_model\openvino_model.006-cmd2\saved_model.xml
D:\JKWorkQuantaII\2018_0831-ai_edge_server\2018_0910-convert_particle_detect_model\openvino_model.006-cmd2\saved_model.bin
[ INFO ] Preparing input blobs
[ INFO ] Batch size is 1
[ INFO ] Preparing output blobs
[ INFO ] Loading model to the plugin
[ INFO ] Starting inference (1 iterations)
[ INFO ] Processing output blobs
[ WARNING ] -nt 10 is not available for this network (-nt should be less than 3 and more than 0)
will be used maximal value : 2
Top 2 results:
Image D:\JKWorkQuantaII\2018_0831-ai_edge_server\2018_0910-convert_particle_detect_model\test_256x256.jpg
1 1.0000000 label NO
0 0.0000000 label YES
total inference time: 45.3929044
Average running time of one iteration: 45.3929044 ms
Throughput: 22.0298748 FPS
[ INFO ] Execution successful
<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
I have attached the files I used.
Thanks,
Jack