Where does GPU virtualization occur?
Does it occur within the GRID cards, with the vGPUs then presented to the hypervisor and on to the guests? Or does the virtualization and scheduling of the GPU actually happen in the GRID Manager software installed on the hypervisor? Is SR-IOV used? I have scoured the internet for this answer and have found nothing official and definitive. Thanks!!

R
8 Answers
There are two drivers involved in the process. One driver resides in the hypervisor to manage GPU state, and one resides in the VM itself to manage the graphics API calls made to the graphics card over the PCIe bus. Memory is statically divided, per the hypervisor/VM configuration. See http://blogs.citrix.com/2013/10/25/vgpu-a-developer-to-developer-deep-dive-with-xenserver-engineering/ for more information.
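The "statically divided" memory model above can be illustrated with a small sketch. The profile names and framebuffer sizes below are illustrative assumptions (loosely styled after GRID K2-era profiles), not an official profile table; in reality the vGPU manager on the host performs this division.

```python
# Sketch: static framebuffer division per vGPU profile.
# Profile names and sizes are assumed for illustration only.

# Hypothetical profiles: name -> framebuffer per vGPU in MiB
PROFILES = {
    "k260q": 2048,   # assumed 2 GiB profile
    "k240q": 1024,   # assumed 1 GiB profile
    "k200":  256,    # assumed 256 MiB profile
}

GPU_FRAMEBUFFER_MIB = 4096  # assume a physical GPU with 4 GiB of framebuffer

def max_vgpus(profile: str, total_mib: int = GPU_FRAMEBUFFER_MIB) -> int:
    """Because memory is carved statically, capacity per GPU is a
    simple integer division of total framebuffer by profile size."""
    return total_mib // PROFILES[profile]

for name in PROFILES:
    print(name, max_vgpus(name))
# k260q 2
# k240q 4
# k200 16
```

The point of the sketch is only that capacity is fixed at configuration time: there is no ballooning or oversubscription of the framebuffer between vGPUs.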
Thanks nvEric,

I did some further investigation, and here's a paraphrase of what I was told. The GRID cards essentially present a full GPU to the hypervisor. The GRID Manager software, installed in Dom0, is what carves up the GPUs and presents the vGPU profiles. The NVIDIA driver in the guest OS is the same as a regular NVIDIA driver that would be installed on a physical workstation. In other words, the virtualization of the GPU occurs in the hypervisor/Dom0, not in the GRID cards or the guest OS.

Assuming the above is correct, I have a few other questions. The scheduling of the VMs' access to the GPU cores is done within the hypervisor, correct? If so, is the hypervisor itself doing the scheduling, does the GRID Manager do it, or both in tandem? Is SR-IOV the method used to present a single GPU as multiple vGPUs, or is that proprietary NVIDIA technology?

Thanks all!
Richard
Whoever told you that was misguided. The technology is proprietary to NVIDIA, and the virtualization occurs at the hardware level. Scheduling is handled by the scheduler in the GPU, at the GPU hardware level.

The vGPU manager manages which VM talks to which GPU, and tracks which GPUs are currently running which profile in order to determine where additional vGPU sessions can be placed. Essentially, when a VM boots, the vGPU manager determines where to place the vGPU based on the profile, the placement policies, and the available resources, and then provides the details of which channels the driver should use to post work to the GPU. Once that's done, the vGPU manager gets out of the way and the communication goes directly to the GPU. The vGPU manager then simply monitors whether the VM is still running. Once the VM is shut down, the vGPU manager can allocate another VM to that GPU when it boots.

Have a look at some of the recordings from GTC last year; there are a couple that cover this. https://gridforums.nvidia.com/default/topic/11/?comment=28

It's also in my partner training, which will be coming to GPU Genius next year.
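The boot-time placement decision described above can be sketched as follows. This is a toy model, not NVIDIA's implementation: the profile names, the rule that a physical GPU hosts only one profile at a time, and the depth-first (fill-first) policy are assumptions made for illustration.

```python
# Toy sketch of vGPU placement at VM boot: pick a GPU whose current
# profile matches the requested one (a GPU is assumed to host only one
# profile at a time) and which still has a free slot.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class PhysicalGPU:
    gpu_id: int
    slots: int                         # how many vGPUs of a profile fit
    profile: Optional[str] = None      # locked in by the first vGPU placed
    vms: List[str] = field(default_factory=list)

    def can_place(self, profile: str) -> bool:
        same_profile = self.profile in (None, profile)
        return same_profile and len(self.vms) < self.slots

def place_vm(gpus: List[PhysicalGPU], vm: str, profile: str) -> Optional[int]:
    """Assumed depth-first policy: fill the first GPU that accepts the profile."""
    for gpu in gpus:
        if gpu.can_place(profile):
            gpu.profile = profile
            gpu.vms.append(vm)
            return gpu.gpu_id
    return None  # no capacity: the VM cannot get a vGPU

gpus = [PhysicalGPU(0, slots=2), PhysicalGPU(1, slots=2)]
print(place_vm(gpus, "vm-a", "k260q"))  # -> 0
print(place_vm(gpus, "vm-b", "k240q"))  # -> 1 (gpu 0 is locked to k260q)
print(place_vm(gpus, "vm-c", "k260q"))  # -> 0 (second slot on gpu 0)
```

After placement, per the answer above, the manager would hand the guest driver its channel details and step aside; the sketch stops at the placement decision itself.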
Thanks for the explanation, Jason.

I have listened to most all of the GTC sessions. I think the deepest dive into how GPU virtualization works is Andy Currid's session, which was really good. However, it did not address my questions specifically. I would love to hear some even deeper dives at GTC this year.

Thanks again!
Richard
Hi Jason,

I have a question about your explanation above regarding where GPU virtualization occurs. Here's the line: "Scheduling is handled by the scheduler in the GPU, at the GPU hardware level." When you say "in the GPU," do you mean within the GRID card? Or do you mean that the GPU chip itself has scheduling technology baked in? If it is the former, I am assuming there is another chip outside the GPUs that is doing the scheduling of the cores.

Thanks again!
Richard
Got it. Very interesting. Thanks, Jason!
Richard