The protocol is still under development. This driver and the renderer are both considered experimental.
The Venus renderer requires a number of Vulkan extensions from the host driver. However, it currently violates the spec in some places and relies on implementation-defined behaviors in others. It is not expected to work on all drivers meeting the requirements. It has only been tested with
- ANV 21.1 or later
- RADV 21.1 or later (the host kernel must have CONFIG_TRANSPARENT_HUGEPAGE disabled because of this KVM issue; one way to check is shown after this list)
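To confirm the kernel config on a typical distribution (the config file location varies by distro; this path is a common convention, not from the original guide),

$ grep CONFIG_TRANSPARENT_HUGEPAGE /boot/config-$(uname -r)

which should report the option as not set.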
The Venus driver requires support for several features from the virtio-gpu kernel driver, including VIRTGPU_PARAM_CONTEXT_INIT (see below).
The simplest way to test Venus is to use virglrenderer’s vtest server. To build virglrenderer with Venus support and to start the vtest server,
$ git clone https://gitlab.freedesktop.org/virgl/virglrenderer.git
$ cd virglrenderer
$ meson out -Dvenus-experimental=true
$ ninja -C out
$ ./out/vtest/virgl_test_server --venus
In another shell,
$ export VK_ICD_FILENAMES=<path-to-virtio_icd.x86_64.json>
$ export VN_DEBUG=vtest
$ vulkaninfo
$ vkcube
If the host driver of the system is not new enough, it is a good idea to build
the host driver as well when building the Venus driver. Just remember to set
VK_ICD_FILENAMES when starting the vtest server so that the vtest
server finds the locally built host driver.
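For example, the vtest server could be pointed at the locally built host driver like this (the ICD filename is a placeholder for whatever the local host driver build produced),

$ VK_ICD_FILENAMES=<path-to-locally-built-host-driver-icd.json> \
    ./out/vtest/virgl_test_server --venus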
Because the driver requires VIRTGPU_PARAM_CONTEXT_INIT from the virtio-gpu kernel driver, one must make sure the guest kernel includes the changes that add this parameter.
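As a rough sanity check, the parameter should appear in the guest kernel's virtio-gpu UAPI header when the changes are present (the tree layout assumed here is that of a typical kernel source checkout),

$ grep VIRTGPU_PARAM_CONTEXT_INIT <guest-kernel-source>/include/uapi/drm/virtgpu_drm.h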
To build crosvm,
$ mkdir crosvm
$ cd crosvm
$ wget https://storage.googleapis.com/git-repo-downloads/repo
$ chmod +x repo
$ ./repo init -g crosvm -u https://chromium.googlesource.com/chromiumos/manifest.git
$ ./repo sync
$ cd src/platform/crosvm
$ RUSTFLAGS="-L<path-to-virglrenderer>/out/src" cargo build \
    --features "x virgl_renderer virgl_renderer_next default-no-sandbox"
Note that crosvm must be built with
default-no-sandbox or started with
--disable-sandbox in this setup.
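For example, if crosvm was built without the default-no-sandbox feature, the flag can be passed at run time instead (other options elided),

$ ./target/debug/crosvm run --disable-sandbox <other-options> <path-to-bzImage>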
This is how one might want to start crosvm,
$ sudo LD_LIBRARY_PATH=<...> VK_ICD_FILENAMES=<...> ./target/debug/crosvm run \
    --gpu vulkan=true \
    --display-window-keyboard \
    --display-window-mouse \
    --host_ip 192.168.0.1 \
    --netmask 255.255.255.0 \
    --mac 12:34:56:78:9a:bc \
    --rwdisk disk.qcow2 \
    -p root=/dev/vda1 \
    <path-to-bzImage>
assuming a working system is installed to partition 1 of disk.qcow2. sudo is used because CAP_NET_ADMIN is needed to set up the TAP network device.
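The guest side of the TAP network then needs a matching address. A minimal sketch, assuming the guest interface is named eth0 and picking 192.168.0.2 (both assumptions, not from the original guide),

$ sudo ip addr add 192.168.0.2/24 dev eth0
$ sudo ip link set eth0 up
$ sudo ip route add default via 192.168.0.1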
Virtio-GPU and Virtio-WL
In this setup, the guest userspace uses Xwayland and a special Wayland compositor to connect guest X11/Wayland clients to the host Wayland compositor, using Virtio-WL as the transport. This setup is more tedious, but that should hopefully change over time.
For now, the guest kernel must be built from the chromeos-5.10 branch of the Chrome OS kernel. crosvm should also be built with the wl-dmabuf feature rather than the x feature.
To build minigbm and to enable minigbm support in virglrenderer,
$ git clone https://chromium.googlesource.com/chromiumos/platform/minigbm
$ cd minigbm
$ CFLAGS=-DDRV_<I915-or-your-driver> OUT=out DESTDIR=out/install make install
$ cd ../virglrenderer
$ meson configure out -Dminigbm_allocation=true
$ ninja -C out
Make sure a host Wayland compositor is running. Replace --display-window-keyboard --display-window-mouse with --wayland-sock=<path-to-wayland-socket> when starting crosvm, as sketched below.
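For example, the display options in the earlier invocation might become (the socket path shown is a typical default and an assumption here),

$ sudo LD_LIBRARY_PATH=<...> VK_ICD_FILENAMES=<...> ./target/debug/crosvm run \
    --gpu vulkan=true \
    --wayland-sock=$XDG_RUNTIME_DIR/wayland-0 \
    <other-options-as-before> \
    <path-to-bzImage>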
In the guest, build and start sommelier, the special Wayland compositor,
$ git clone https://chromium.googlesource.com/chromiumos/platform2
$ cd platform2/vm_tools/sommelier
$ meson out -Dxwayland_path=/usr/bin/Xwayland -Dxwayland_gl_driver_path=/usr/lib/dri
$ ninja -C out
$ sudo chmod 777 /dev/wl0
$ ./out/sommelier -X --glamor --xwayland-gl-driver-path=<path-to-locally-built-gl-driver> \
    sleep infinity
Sommelier requires xdg-shell-unstable-v6 rather than the stable xdg-shell from the host compositor. One must make sure the host compositor still supports the older extension; one way to check is shown below.
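The compositor's advertised globals can be listed on the host with weston-info (or the newer wayland-info); zxdg_shell_v6 is the unstable-v6 global and xdg_wm_base the stable one,

$ weston-info | grep -e zxdg_shell_v6 -e xdg_wm_base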
When virglrenderer is built with
-Dminigbm_allocation=true, the Venus
renderer might need to import GBM BOs. The imports will fail unless the host
driver supports the formats, especially multi-planar ones, and the DRM format
modifiers of the GBM BOs.
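A coarse host-side check is to look for the import-related extensions in vulkaninfo output; note that extension presence alone does not guarantee support for every format or modifier,

$ vulkaninfo | grep -e VK_EXT_image_drm_format_modifier -e VK_EXT_external_memory_dma_buf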
In the future, once the required virglrenderer support is in place, the Venus renderer will require VK_EXTERNAL_FENCE_HANDLE_TYPE_SYNC_FD_BIT from the host driver.
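Sync fd fences come from VK_KHR_external_fence_fd; a quick (and only approximate) host check is to grep for the extension, keeping in mind that vkGetPhysicalDeviceExternalFenceProperties is the authoritative way to query the supported handle types,

$ vulkaninfo | grep VK_KHR_external_fence_fd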
A WSI image of the Venus driver is an external image to the host driver. When
the WSI image is transitioned from
VK_IMAGE_LAYOUT_UNDEFINED after image
acquisition, the Venus driver does not request the Venus renderer to perform
an ownership transfer on the external image. It is unclear if the ownership
transfer is required or not. A specification issue has been filed for clarification. See the relevant comment in the driver source for more details.
The Venus renderer makes assumptions about
VkDeviceMemory that has
VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT. The assumptions are illegal and rely
on the current behaviors of the host drivers. It should be possible to remove
some of the assumptions and incrementally improve compatibility with more
host drivers by imposing platform-specific requirements. But the long-term
plan is to create a new Vulkan extension for the host drivers to address this
specific use case.
The Venus renderer assumes a device memory that has
VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT can be exported as a mmapable dma-buf
(in the future, the plan is to export the device memory as an opaque fd). It chains VkExportMemoryAllocateInfo to VkMemoryAllocateInfo without checking if the host driver can export the device memory.
The dma-buf is mapped (in the future, the plan is to import the opaque fd and call vkMapMemory), but the mapping is not accessed. Instead, the mapping is passed to KVM_SET_USER_MEMORY_REGION. The hypervisor, host KVM, and the guest kernel work together to set up a write-back or write-combined guest mapping (see virtio_gpu_vram_mmap of the virtio-gpu kernel driver). CPU accesses to the device memory are via the guest mapping, and are assumed to be coherent when the device memory also has VK_MEMORY_PROPERTY_HOST_COHERENT_BIT.

When a VkImage or a VkBuffer is created, the Venus renderer does not know if the image or the buffer will be bound to such a device memory or not. As a result, the Venus renderer unconditionally chains VkExternalMemoryImageCreateInfo to VkImageCreateInfo and chains VkExternalMemoryBufferCreateInfo to VkBufferCreateInfo, without checking for host driver support.