Hello people,

I was wondering if I can commit a buffer immediately after another one has been committed (I know I can't commit the SAME buffer). The scenario is the following: when I receive the first frame buffer, I want to send another buffer, independently of the received frame buffer, and only then send the received one. This is similar to the UVCAddHeader() function used to add the UVC header and EOF flag at the beginning of a buffer, although there are two main differences: one, as already mentioned, the extra buffer must be committed independently; the second is that it is committed twice, at the beginning of a frame and at the end of that frame.

Any input is appreciated.

Marc
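For reference, one way to push an independent, CPU-built buffer out between received frame buffers on FX3/CX3 is a MANUAL_OUT channel that the CPU commits to. The sketch below only illustrates that mechanism, assuming such a CPU-to-USB channel (here called glChCpuToUsb, a made-up name) already exists; it is not the actual firmware discussed in this thread.

#include "cyu3system.h"
#include "cyu3os.h"
#include "cyu3utils.h"
#include "cyu3dma.h"
#include "cyu3error.h"

/* Hypothetical handle: a CPU -> USB MANUAL_OUT channel created elsewhere. */
extern CyU3PDmaChannel glChCpuToUsb;

/* Send one CPU-built packet (e.g. a Leader or a Trailer) on its own buffer,
 * independently of the frame buffers received on the video DMA channel. */
static CyU3PReturnStatus_t
SendCpuPacket (uint8_t *data, uint16_t length)
{
    CyU3PDmaBuffer_t    outBuf;
    CyU3PReturnStatus_t status;
    uint16_t            allocSize = (length + 15) & ~15;   /* DMA buffers are 16-byte multiples */

    outBuf.buffer = CyU3PDmaBufferAlloc (allocSize);        /* DMA-capable memory */
    if (outBuf.buffer == NULL)
        return CY_U3P_ERROR_MEMORY_ERROR;

    CyU3PMemCopy (outBuf.buffer, data, length);
    outBuf.size   = allocSize;
    outBuf.count  = length;
    outBuf.status = 0;

    /* Queue this buffer on the MANUAL_OUT channel; it is sent as soon as the
     * USB consumer socket is ready, regardless of what the other channels do. */
    status = CyU3PDmaChannelSetupSendBuffer (&glChCpuToUsb, &outBuf);
    if (status == CY_U3P_SUCCESS)
        status = CyU3PDmaChannelWaitForCompletion (&glChCpuToUsb, CYU3P_WAIT_FOREVER);

    CyU3PDmaBufferFree (outBuf.buffer);
    return status;
}

Note that this call blocks until the buffer has been consumed, so it belongs in an application thread rather than inside a DMA callback.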
7 Replies
贾景晖
Marc,

Do you actually need one full buffer before sending the first buffer and after the last buffer, or do you want a few bytes of data sent before the start of a frame and after its end? Please let us know the significance of this additional data before and after every frame. What are all the video resolutions that you want to stream in this way?
yahan52 posted on 2018-8-31 09:26

Hi,

Yes, if I want to correctly implement the USB3 Vision standard. With this standard the device must send two extra packets of data individually: one at the beginning of each frame, called the Leader, and one at the end, called the Trailer. I want to stream any available ROI.

Anyway, could you tell me if someone has been able to implement this standard using the CX3 MCU?

BR
Marc
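(For readers unfamiliar with the standard: the Leader and Trailer are small, fixed-format packets that bracket each frame's payload. The partial struct below is only a memory-aid sketch; the field names, widths and prefix constants are written from recollection and must be checked against the USB3 Vision specification itself.)

#include <stdint.h>

#pragma pack(push, 1)
typedef struct U3VLeaderSketch {        /* sent once, before the frame payload  */
    uint32_t prefix;                    /* Leader prefix constant ("U3VL")      */
    uint16_t reserved;
    uint16_t leaderSize;                /* total Leader size in bytes           */
    uint64_t blockId;                   /* incremented once per frame (block)   */
    /* ... payload type, timestamp, pixel format, width/height/offsets follow   */
} U3VLeaderSketch;

typedef struct U3VTrailerSketch {       /* sent once, after the frame payload   */
    uint32_t prefix;                    /* Trailer prefix constant ("U3VT")     */
    uint16_t reserved;
    uint16_t trailerSize;
    uint64_t blockId;                   /* must match the Leader of this frame  */
    /* ... block status and valid payload size follow                           */
} U3VTrailerSketch;
#pragma pack(pop)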
60user198 posted on 2018-8-31 09:35

Marc,

Please try the following to implement the above (a rough sketch of steps 1 and 2 follows this post):

1. Create a DMA MANUAL_IN channel from GPIF to CPU.
2. Create a DMA MANUAL_OUT channel from CPU to USB.
3. Identify the start and the end of the frame from the data flowing through the GPIF-to-CPU DMA channel, then send the Leader and the Trailer to USB via the CPU-to-USB DMA channel.
4. Send the payload data as it is, in between the Leader and the Trailer.

There is another method to implement this: use an FPGA/ASIC that gets the data from the sensor and adds the Leader and Trailer to the payload. Connect this FPGA's parallel interface to the FX3 GPIF II interface to transfer the data to USB.
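A minimal sketch of steps 1 and 2, assuming the standard FX3/CX3 SDK DMA API. The buffer sizes, the socket choices (PIB socket 0 for GPIF, UIB consumer socket 3 for the streaming endpoint) and the callback name are placeholders, not values taken from this thread.

#include "cyu3system.h"
#include "cyu3utils.h"
#include "cyu3dma.h"
#include "cyu3error.h"

#define DMA_BUF_SIZE   (5120)   /* placeholder; see the buffer-size discussion below */
#define DMA_BUF_COUNT  (8)      /* placeholder */

static CyU3PDmaChannel glChGpifToCpu;   /* step 1: GPIF -> CPU */
static CyU3PDmaChannel glChCpuToUsb;    /* step 2: CPU  -> USB */

/* Producer-event callback for the MANUAL_IN channel; step 3 hooks in here. */
static void
GpifToCpuDmaCallback (CyU3PDmaChannel *ch, CyU3PDmaCbType_t type,
                      CyU3PDmaCBInput_t *input)
{
    (void)ch; (void)type; (void)input;
    /* Copy the received buffer into a glChCpuToUsb buffer and commit it,
     * or build and send the Leader/Trailer here (omitted in this sketch). */
}

static CyU3PReturnStatus_t
CreateStreamingChannels (void)
{
    CyU3PDmaChannelConfig_t cfg;
    CyU3PReturnStatus_t     status;

    /* Step 1: MANUAL_IN channel, GPIF (PIB socket) producer, CPU consumer. */
    CyU3PMemSet ((uint8_t *)&cfg, 0, sizeof (cfg));
    cfg.size         = DMA_BUF_SIZE;
    cfg.count        = DMA_BUF_COUNT;
    cfg.prodSckId    = CY_U3P_PIB_SOCKET_0;        /* placeholder socket */
    cfg.consSckId    = CY_U3P_CPU_SOCKET_CONS;
    cfg.dmaMode      = CY_U3P_DMA_MODE_BYTE;
    cfg.notification = CY_U3P_DMA_CB_PROD_EVENT;
    cfg.cb           = GpifToCpuDmaCallback;
    status = CyU3PDmaChannelCreate (&glChGpifToCpu, CY_U3P_DMA_TYPE_MANUAL_IN, &cfg);
    if (status != CY_U3P_SUCCESS)
        return status;

    /* Step 2: MANUAL_OUT channel, CPU producer, USB (UIB socket) consumer. */
    CyU3PMemSet ((uint8_t *)&cfg, 0, sizeof (cfg));
    cfg.size      = DMA_BUF_SIZE;
    cfg.count     = DMA_BUF_COUNT;
    cfg.prodSckId = CY_U3P_CPU_SOCKET_PROD;
    cfg.consSckId = CY_U3P_UIB_SOCKET_CONS_3;      /* placeholder socket */
    cfg.dmaMode   = CY_U3P_DMA_MODE_BYTE;
    status = CyU3PDmaChannelCreate (&glChCpuToUsb, CY_U3P_DMA_TYPE_MANUAL_OUT, &cfg);
    if (status != CY_U3P_SUCCESS)
        return status;

    /* Start both channels for an infinite transfer. */
    status = CyU3PDmaChannelSetXfer (&glChGpifToCpu, 0);
    if (status == CY_U3P_SUCCESS)
        status = CyU3PDmaChannelSetXfer (&glChCpuToUsb, 0);
    return status;
}

One design cost of this split is that every payload buffer has to be copied by the CPU from the MANUAL_IN channel into a MANUAL_OUT buffer before it can be committed to USB, which eats memory bandwidth at higher resolutions.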
Hello,

I tried using this method but it is not applicable in my case, because I have (and need) a control DMA channel, and the CX3 forbids having two DMA channels to and from the CPU.

About the second suggestion: we have already thought about it and it is an option (we are looking into possible FPGA solutions), but we are targeting a cheap solution and therefore want to avoid using an FPGA.

And the good news: I managed to stream images at a good speed, almost correctly. The thing is that I am tricking the U3V standard by telling it that I am streaming a VGA (30 fps) image, but I am really streaming a bit less. In plain words, the first and last payload buffers are changed into the Leader and Trailer packets. There are still some issues with the DMA size, the GPIF size and the sensor, but it is a good start.
60user198 posted on 2018-8-31 10:06

Hello Marc,

That's great. Please let us know your implementation details and the issues you are currently facing.
Hello srdr,

I have been performing tests with different formats, specifically 720p and full resolution, trying Mono8, Mono12 and "YUV422" (although the actual output format is Mono12). YUV422 has been quite straightforward, which was unexpected, because when I tried the other two formats it was another story. I tested with different grabber applications using different drivers.

Mono8 and Mono12 are being tricky. Sometimes I manage to transmit some images before the firmware stops, and sometimes I just get an acquisition timeout.

The configuration is the following: 1280x720 @ 30 fps Mono8 (payload 1280x712)

Sensor:
1280x720, Mono8, 30 fps
CSI TX clock @ 336 MHz
Sensor clock 84 MHz

CX3:
PCLK: 84 MHz
CSI <-> HS 84 MHz (H-Active = 7.42 us)
MIPI Cfg: RAW8
GPIF II: 8-bit width
Buffer size: 5120 (count = 23)

U3V: 1280x712 Mono8

The theoretical grab rate will be a bit more than 30 fps, because the U3V application thinks it is receiving less than what is really being transmitted.

The problem is well known: after StartAcquisition and some packets, error 71 is returned by the DMA commit API inside the DMA callback. Let me explain my code: when payload data is received as a PROD event, if it is the first packet (counter == 0) that packet is smashed and the Leader is transmitted in its place. The counter increases every time a packet is committed, until it reaches a top value equal to NUM_OF_PAYLOAD_BUFFERS + 2 (Leader + Trailer); the last packet is also smashed and the Trailer is transmitted instead. This is similar to the U3Vision example project for FX3. At some point during this transmission the error happens, maybe at the 3rd packet or at the 150th, etc. (always before reaching the last one).

So, reading the note about this error, the only suggested solution I can implement is increasing the DMA buffer size. The thing is: how much can I increase it? The FX3 TRM says that, of the 512 KB of RAM, I can only reclaim 8 KB (the second-stage boot RAM area), but that does not seem to be enough. On the other hand, the ARM has more "unused" space that I am not sure I can use. Any input on this?

Also, knowing that a 1280x720-byte frame (Mono8) is much less data to send than 1280x720x2 bytes (YUV422), how is it not working here while it works in YUV422 format at the same speed? I know, less data means faster transmission, but is it really producing events that fast?

Another issue concerns the OV5640 sensor. I am able to read/write the registers (NDA with OVT), but I am unable to configure it outside the default ROIs and formats. Given the situation explained above, I would like to stream a few more lines so that the U3V stream carries a correct ROI, e.g. 1280x720, meaning the sensor would be configured as 1280x728. I cannot understand why I cannot configure it, or why it is so difficult (fact: this is my first time with a MIPI image sensor).

Anyway, thank you for the support, and tell me if something is not clear haha.

Br.
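For readers following the thread, here is a stripped-down sketch of the commit scheme described above: the producer delivers 1280x720 / 5120 = 180 buffers per frame, the first and the last are "smashed" and replaced by the Leader and the Trailer, and the 178 buffers in between (1280x712 = 911,360 bytes / 5120) carry the advertised payload. The helper names, the packet sizes and the error handling are hypothetical, not the actual firmware.

#include "cyu3system.h"
#include "cyu3utils.h"
#include "cyu3dma.h"
#include "cyu3error.h"

/* Hypothetical sizing: 1280 x 712 bytes of Mono8 payload per frame,
 * split into 5120-byte DMA buffers -> 911360 / 5120 = 178 buffers. */
#define NUM_OF_PAYLOAD_BUFFERS  (178)
#define LEADER_SIZE             (52)    /* placeholder sizes */
#define TRAILER_SIZE            (32)

static void
FillLeader (uint8_t *buf)               /* placeholder: real code builds the U3V Leader here */
{
    CyU3PMemSet (buf, 0, LEADER_SIZE);
}

static void
FillTrailer (uint8_t *buf)              /* placeholder: real code builds the U3V Trailer here */
{
    CyU3PMemSet (buf, 0, TRAILER_SIZE);
}

static uint32_t glCommitCount = 0;

static void
GpifToUsbDmaCallback (CyU3PDmaChannel *ch, CyU3PDmaCbType_t type,
                      CyU3PDmaCBInput_t *input)
{
    CyU3PDmaBuffer_t    buf;
    CyU3PReturnStatus_t status;

    if (type != CY_U3P_DMA_CB_PROD_EVENT)
        return;

    buf = input->buffer_p;

    if (glCommitCount == 0)
    {
        /* First producer buffer of the frame: its data is discarded ("smashed")
         * and the Leader is sent in its place. */
        FillLeader (buf.buffer);
        status = CyU3PDmaChannelCommitBuffer (ch, LEADER_SIZE, 0);
    }
    else if (glCommitCount == (NUM_OF_PAYLOAD_BUFFERS + 1))
    {
        /* Last producer buffer of the frame: replaced by the Trailer. */
        FillTrailer (buf.buffer);
        status = CyU3PDmaChannelCommitBuffer (ch, TRAILER_SIZE, 0);
    }
    else
    {
        /* Plain payload buffer: committed unchanged. */
        status = CyU3PDmaChannelCommitBuffer (ch, buf.count, 0);
    }

    if (status != CY_U3P_SUCCESS)
    {
        /* The "error 71" described above is seen here when the commit fails,
         * typically because no free buffer is available; real code would log
         * the error and recover (e.g. reset and restart the channel). */
        return;
    }

    glCommitCount++;
    if (glCommitCount == (NUM_OF_PAYLOAD_BUFFERS + 2))
        glCommitCount = 0;   /* 180 commits done: Leader + 178 payload + Trailer */
}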
60user198 posted on 2018-8-31 10:24

By default you have 224 KB of buffer RAM. Since 4 KB of this will be used for the debug channel and the control endpoint buffer, you will have about 220 KB of buffer space left, and you can allocate this 220 KB of DMA buffer space to your application. If you are not using the second-stage boot loader, you will get an additional 32 KB of buffer space.

As per the DMA configuration mentioned, you are using a small buffer (5 KB). This is the reason you may be seeing error 71. Please increase it to 16 KB and check the functionality.

It is not clear to me how you get the producer to send the Leader and Trailer packets. Is the sensor sending some additional data (on top of the frame data) so that the first buffer fills up and triggers the producer event, and you then discard that buffer and commit the Leader packet to USB instead?

Sensor configuration: please talk to OVT regarding the register settings issue.
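As a back-of-the-envelope check of the numbers above (the macro names are made up for this sketch): 220 KB of DMA buffer space divided into 16 KB buffers gives 13 buffers, and a 1280x720 Mono8 frame (921,600 bytes) then spans about 57 commits of 16 KB instead of 180 commits of 5 KB, which lowers the producer-event rate and the chance of running out of free buffers.

#include "cyu3dma.h"

/* Hypothetical sizing derived from the reply above. */
#define DMA_BUDGET_BYTES   (220u * 1024u)                      /* 224 KB - 4 KB   */
#define DMA_BUF_SIZE       (16u * 1024u)                       /* suggested 16 KB */
#define DMA_BUF_COUNT      (DMA_BUDGET_BYTES / DMA_BUF_SIZE)   /* = 13 buffers    */

static void
ApplyDmaSizing (CyU3PDmaChannelConfig_t *cfg)
{
    cfg->size  = DMA_BUF_SIZE;    /* must stay a multiple of 16 bytes */
    cfg->count = DMA_BUF_COUNT;
}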