Hi,

I am in the process of developing a video mixer that takes two video inputs in composite format and outputs one composite video signal. The functions of the mixer are to switch between the video sources, do picture-in-picture (PIP), picture-out-of-picture (POP), and picture resizing, plus some switching effects. All functions should be controlled from a PC.

My question is: which device would be adequate for such functions? Is an FPGA a good choice for such a product, is a DSP better, or is it better to use just a data-acquisition card to capture the data to a PC and do all the processing on the PC?

Best Regards
Hossam Alzomor
www.eandit.com
www.i-g.org
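For orientation, the per-pixel work these functions imply can be prototyped on a PC before committing to an FPGA, DSP, or capture-card approach. The sketch below is not from the thread; the function names, the NumPy usage, and the nearest-neighbour resize are illustrative assumptions only.

```python
import numpy as np

def picture_in_picture(main, inset, top_left, scale=0.25):
    """Downscale `inset` and overlay it onto `main` with its corner at `top_left` (row, col)."""
    h = int(inset.shape[0] * scale)
    w = int(inset.shape[1] * scale)
    # Nearest-neighbour resize for brevity; a real design would prefilter before downscaling.
    rows = (np.arange(h) / scale).astype(int)
    cols = (np.arange(w) / scale).astype(int)
    small = inset[rows][:, cols]
    out = main.copy()
    r, c = top_left
    out[r:r + h, c:c + w] = small
    return out

def crossfade(a, b, alpha):
    """Simple switching effect: alpha = 0.0 shows frame `a`, 1.0 shows frame `b`."""
    return ((1.0 - alpha) * a.astype(float) + alpha * b.astype(float)).astype(a.dtype)
```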
6 replies
I have done similar projects (video scaling and mixing) with two and four video sources, although in that case I was getting VGA-style RGB video rather than composite TV-resolution video. For the four-input version I used a Virtex-II XC2V1000 and DDR SDRAM on one SO-DIMM.

What I found is that the scaling algorithm can require a lot of block RAM for the vertical direction, because you need to store at least as many rows of the input image as there are taps in your filter (when downsizing an image you'll want to filter before interpolating). I also found that although I could use distributed arithmetic for horizontal filtering, I needed multipliers for the vertical filter. So depending on your image dimensions and filter requirements, you may find that the device you need is limited by block RAM and multipliers rather than by the fabric logic.

Also, if your inputs are not gen-locked, you need external image memory. This is likely to be determined by the required bandwidth rather than by the total required storage. You may find that for two TV-resolution composite inputs a media processor like the PNX1500 series from NXP can do the job for you.

HTH,
Gabor
-- Gabor
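To make the bandwidth and block-RAM points concrete, here is a rough back-of-the-envelope sketch. The specific numbers (27 Mbyte/s for an 8-bit BT.656/CCIR-656 SD stream, 720 active pixels per line, a 5-tap vertical filter) are my own illustrative assumptions, not figures from the post.

```python
# Illustrative arithmetic only: SDRAM bandwidth for two free-running SD inputs,
# and line-buffer memory for a vertical filter.

BT656_BYTES_PER_SEC = 27_000_000    # 8-bit 4:2:2 SD stream clocked at 27 MHz
NUM_INPUTS = 2

write_bw = NUM_INPUTS * BT656_BYTES_PER_SEC   # both inputs written to SDRAM
read_bw = BT656_BYTES_PER_SEC                 # roughly one frame's worth read back
                                              # to build each output frame
total_bw = write_bw + read_bw
print(f"approx. sustained SDRAM bandwidth: {total_bw / 1e6:.0f} MB/s")

# Vertical filtering: you must hold as many input lines as the filter has taps.
ACTIVE_PIXELS_PER_LINE = 720
BYTES_PER_PIXEL = 2          # 4:2:2, 8 bits each for luma and multiplexed chroma
VERTICAL_TAPS = 5            # example tap count
line_buffer_bytes = VERTICAL_TAPS * ACTIVE_PIXELS_PER_LINE * BYTES_PER_PIXEL
print(f"line buffers for a {VERTICAL_TAPS}-tap vertical filter: {line_buffer_bytes} bytes")
```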
Thanks Gabor,

But why do we need to filter before interpolating when downsizing an image? Did you do all your processing on line? Also, did you add any effects when mixing? I mean, how did you implement the transition from one source to the other?

On the other hand, the Trimedia PNX1500 price ($30) is very good compared with the XC2V1000 ($230 from Digikey), but it has only one video input! Can't low-cost FPGAs like the Spartan handle such an algorithm?

Best Regards
Hossam Alzomor
The filter for downsizing becomes more important as the ratio gets smaller. With simple interpolation, you take a value based on the adjacent pixels in the input image. For example, if the output pixel corresponds to a position X = 12.3, Y = 10.7, you would use a value based on just the pixels (12,10), (13,10), (12,11) and (13,11) in the input image. When downscaling to a size greater than 50% of the input image this works OK, but if your output image is more like a thumbnail, then each pixel of the output image represents a much larger portion of the input image, and this method only works in that case if there are no sharp features outside the four pixels used for interpolation but within the area corresponding to the output pixel. You may want to play with scaling algorithms on a computer before you decide how you want to do it.

I'm not sure what you mean by "do all your processing on line". In my case the output frame rate was not necessarily the same as that of the input images, and the input images were not synchronized nor even necessarily running at the same frame rate. So the input images were stored in SDRAM and the output image was built using the most recent fully transferred image from each input.

In my case there were no mixing effects. Each source was assigned a rectangle in the output image. There was also a "background image" to fill any area not covered by these rectangles. Where the rectangles overlapped, a simple priority selected the "top" image.

I chose the Virtex-II at the time, before the Spartan 3 was available. There is no reason you couldn't do this with a Spartan 3, especially as it is essentially the same as the Virtex-II internally. Just be aware that a Spartan 3 of the "1000" size may not have the same number of block RAMs and multipliers as the Virtex-II, and in the end these may be the resources that limit your selection of part.

As for the PNX-1500, you may still need some external logic to feed the two video streams into the part, but its video input port can run at up to 400 megabytes per second, and that external logic would be much simpler than doing the entire video mixer. I have made front ends that take four CCIR-656 inputs, buffer them in SDRAM, and then supply them to the PNX-series part one source at a time, using just a Spartan 2e XC2S150E and some single-data-rate SDRAM. If your two sources are gen-locked you could get away with no external memory and just create a single data stream that mixes the two videos at the pixel or line level.

-- Gabor
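As a starting point for that kind of experiment on a computer, here is a small sketch (my own code, not from the thread) that contrasts the four-neighbour bilinear sample described above with a crude area-average "filter before interpolating" downscale; the function names and NumPy usage are assumptions for illustration.

```python
import numpy as np

def bilinear_sample(img, x, y):
    """Sample img at fractional (x, y), e.g. X = 12.3, Y = 10.7, from its four neighbours only."""
    x0, y0 = int(x), int(y)
    fx, fy = x - x0, y - y0
    p00, p10 = float(img[y0, x0]), float(img[y0, x0 + 1])
    p01, p11 = float(img[y0 + 1, x0]), float(img[y0 + 1, x0 + 1])
    top = p00 * (1 - fx) + p10 * fx
    bot = p01 * (1 - fx) + p11 * fx
    return top * (1 - fy) + bot * fy

def area_average_downscale(img, factor):
    """Thumbnail-style downscale of a single-channel image: average each factor x factor block."""
    h, w = img.shape
    h, w = h - h % factor, w - w % factor
    blocks = img[:h, :w].astype(float).reshape(h // factor, factor, w // factor, factor)
    return blocks.mean(axis=(1, 3))
```

Sampling a thumbnail with `bilinear_sample` alone misses sharp features that fall between the sample points, which is exactly the artifact the area-average prefilter avoids.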
What I mean by on the fly is real-time mixing, I think.
What do you mean by gen-locked? Is it that the frame sync is the same for all sources?
I'm not exactly sure where the term "gen-locked" comes from, but it does refer to cameras having the same frame timing. For example, in broadcast video, in order to keep a constant frame rate while switching between cameras, all cameras would be gen-locked to the broadcast station's sync signal.

Message Edited by gszakacs on 11-15-2008 05:04 PM
-- Gabor