Learn about supporting Docker inside workspaces.
Container-based virtual machines (CVMs) allow users to run system-level programs, such as Docker and systemd, in their workspaces.
If you're a site admin or a site manager, you can enable CVMs as a workspace deployment option.
CVMs leverage the Sysbox container runtime, so each Kubernetes node must run a supported Linux distribution with the minimum required kernel version. See Sysbox distro compatibility for more information.
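If you're unsure whether your nodes meet this requirement, you can list each node's OS image and kernel version and compare them against the Sysbox compatibility list. The following is a minimal sketch using the official Kubernetes Python client, assuming you have kubeconfig access to the cluster:

```python
# List each node's OS image and kernel version using the official Python
# kubernetes client (pip install kubernetes). Compare the output against
# Sysbox's distro compatibility documentation.
from kubernetes import client, config

config.load_kube_config()  # assumes a local kubeconfig with cluster access
for node in client.CoreV1Api().list_node().items:
    info = node.status.node_info
    print(f"{node.metadata.name}: {info.os_image}, kernel {info.kernel_version}")
```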
The cluster must allow privileged containers and hostPath mounts. See Security for more information on why this is still secure.
You can use any cloud provider that meets the above requirements, but we provide instructions for setting up supported clusters on AWS and Google Cloud. Azure-hosted clusters meet these requirements as long as you use Kubernetes version 1.18 or later.
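If you need to confirm your cluster's Kubernetes version (for example, to verify the 1.18+ requirement on an Azure-hosted cluster), a quick sketch with the same Python client:

```python
# Print the cluster's Kubernetes server version; assumes kubeconfig access.
from kubernetes import client, config

config.load_kube_config()
version = client.VersionApi().get_code()
print(f"Kubernetes {version.major}.{version.minor} ({version.git_version})")
```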
The container-based virtual machine deployment option leverages the Sysbox container runtime to offer a VM-like user experience while retaining the footprint of a typical container.
Coder first launches a supervising container with additional privileges. This container is standard and included with the Coder release package. During the workspace build process, the supervising container launches an inner container using the Sysbox container runtime. This inner container is the user’s workspace.
The user cannot gain access to the supervising container at any point; this separation between the user's workspace container and the outer, supervising container is what provides strong isolation.
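Once a CVM workspace is running, you can check the VM-like behavior from inside the workspace itself. The sketch below assumes the workspace image includes the docker CLI and runs systemd as PID 1; neither is guaranteed by every image.

```python
# Run inside the workspace to check for a VM-like environment.
import subprocess
from pathlib import Path

# In a CVM with systemd enabled, PID 1 should be systemd rather than a shell.
init_process = Path("/proc/1/comm").read_text().strip()
print(f"PID 1: {init_process}")

# If the workspace image ships a Docker daemon, it should be reachable locally.
result = subprocess.run(
    ["docker", "info", "--format", "{{.ServerVersion}}"],
    capture_output=True, text=True,
)
if result.returncode == 0:
    print(f"Docker daemon is running, server version {result.stdout.strip()}")
else:
    print("Docker daemon is not reachable:", result.stderr.strip())
```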
NVIDIA GPUs can be added to CVMs on bare-metal clusters only; this feature is not currently supported on Google Kubernetes Engine or other cloud providers. Support for NVIDIA GPUs is in beta, and we do not support AMD GPUs at this time.
Coder doesn't support legacy versions of cluster-wide proxy services such as Istio, and CVMs do not currently support NFS as a file system.