Intel

陈蓓

[Q&A]
What is the relationship between point cloud data indices and aligned frame indices?


Hello,

I am working on a RealSense D435 multi-camera setup where I display only those 3D points that are close enough to the related camera.

My actual workflow for each camera is:
     Calculate the calibration rotation matrix and translation vector with a circle pattern using OpenCV
     Apply some post-filters to reduce aliasing
     Run the align process based on the example
     Calculate the pointcloud

I had some issues with using post-filtering before aligning, but I got the answer from another topic. Currently I have a processing_block for the post-filter, and for the align together with the pointcloud calculation as well.
I also saw that when I want to use the aligned video_frame and depth_frame to generate the pointcloud, the result is really strange, so I use the post-filtered depth_frame to get the pointcloud and the aligned video_frame for the mapping. I don't know the cause of that issue either, but my main problem is that I don't know how to properly get, at align time, those point indices which are close enough to the related camera. I noticed that the decimation filter can change the resolution, but I still haven't figured it out.

Here is my code for the align_process:

rs2::processing_block align_processing([&](rs2::frameset data,       // Input frameset (from the pipeline)
                                           rs2::frame_source& source) // Frame pool that can allocate new frames
{
        std::vector<int> point_indicies;
        auto frameset_data = align_frameset(profile, data, point_indicies,
                                            (*filter_config_ptr)->get_process_option("Depth_Clip"));
        // The first element is the aligned video_frame, the second the post-filtered depth_frame
        rs2::frameset aligned_frameset = source.allocate_composite_frame({ frameset_data.first, data.get_depth_frame() });

        auto depth_frame = aligned_frameset.get_depth_frame();
        auto rgb_frame = aligned_frameset.get_color_frame();

        pc.map_to(rgb_frame);
        auto points = pc.calculate(depth_frame);

        data_handler.lock();
        output_d.points = points;           // rs2::points
        output_d.pois = point_indicies;     // Only these points will be drawn
        output_d.rgb_frame = rgb_frame;
        if (calibration_m.valid) output_d.calibration_m = calibration_m;
        data_handler.unlock();

        source.frame_ready(aligned_frameset);
});

In the remove_background() function I store these indices:
     point_indicies.push_back((y / scale) * (width / scale) + (x / scale));
where y goes through the rows and x through the columns, and scale = aligned_depth_frame.get_width() / depth_frame.get_width()

The result is:

I marked with a red circle the area where it should not render the points.

Can someone see where my problem is, or does anyone know a better way to get those indices from the pointcloud?

Thanks for your support!

Replies (1)

汤宇

2018-11-12 12:05:57

Hello spektro,
 
Intel relies on the community to help with this type of algorithm implementation question. Unfortunately, it is out of our support scope.
However, this sample may help you find a solution: https://github.com/IntelRealSense/librealsense/tree/master/wrappers/opencv/grabcuts.
 
If you find a solution, please share your results with the community in this thread.

Thank you and best regards,
Eliza