---
title: How I Play Games on My Linux PC
published: true
---
I love Linux. In fact, I love it so much that it runs on every computer I use, except for my phone, but that
can be changed. It always amazes me how much control Linux gives me over my computer and how easy it is
to create a script that just does everything I used to do manually.
In September 2018, I decided to stop dual-booting Windows and Linux and only use Linux. I mean, I could
play my most-played games under Linux: *CS:GO, Split/Second Velocity (Wine), NieR: Automata (Wine).* But there
were still some games that I could not play, as they either have no Linux port or refuse to run under Wine. I love
playing *Tom Clancy's The Division* and *The Division 2*. I really enjoyed playing *Tom Clancy's Rainbow Six Siege*, and
*Wildlands* was a lot of fun. Except for *The Division*, none of these games runs under Wine. So what do?
# GPU Passthrough
Before I even had the thought of switching to Linux "full-time", I stumbled across [this video](https://invidio.us/watch?v=16dbAUrtMX4) by Level1Linux.
It introduced me to the concept of hardware passthrough, and I have wanted to do it ever since. Once my mainboard
had an IOMMU and my CPU supported all the needed virtualization extensions, I was ready.
At that time I was using an AMD Ryzen 2400G and an Nvidia GeForce GTX 1060. I chose this particular CPU
because it contains an iGPU, allowing me to keep video output on my host even when I pass the 1060 through
to my VM.
There are many great tutorials out there that teach you how to do this, but I was amazed at how well
the games run. It should have come as no surprise, but it still did.
The only thing I did not like was the fact that the Nvidia driver refuses to run inside a virtual machine, so
I had to configure my VM via libvirt in a way that hides from the driver that it is running inside a VM.
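As a rough sketch of what that configuration amounts to (the domain name `win10` and the vendor-id string below are placeholders, not my actual values):

```sh
# A sketch, not my exact config: the domain name "win10" is a placeholder.
virsh edit win10
# Inside <features>, add elements along these lines to hide the
# KVM signature and spoof the hypervisor vendor id:
#
#   <kvm>
#     <hidden state='on'/>
#   </kvm>
#   <hyperv>
#     <vendor_id state='on' value='0123456789ab'/>
#   </hyperv>
```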
# Dynamic GPU Passthrough
While this allowed me to play *The Division*, it was tedious to reboot to not have the GPU bound to the
vfio-pci module so that I could use it on my host. Most guides expect you to have a second powerful GPU
so that you don't have to worry about the unavailable GPU but to me it seemed like a waste.
So I wrote myself a script (a sketch of it follows the list) which...
- unloaded all Nvidia kernel modules;
- started libvirt and loaded the vfio-pci module;
- bound the GPU to the vfio-pci module;
- started the VM.
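A minimal sketch of such a script, assuming the 1060 sits at PCI address `0000:09:00.0` and the VM is named `win10` (both are placeholders):

```sh
#!/bin/sh
# Sketch only: the PCI address and domain name are assumptions.
set -e
GPU=0000:09:00.0

# Unload all Nvidia kernel modules
modprobe -r nvidia_drm nvidia_modeset nvidia_uvm nvidia

# Start libvirt and load the vfio-pci module
systemctl start libvirtd
modprobe vfio-pci

# Bind the GPU to the vfio-pci module
echo "$GPU" > "/sys/bus/pci/devices/$GPU/driver/unbind" 2>/dev/null || true
echo vfio-pci > "/sys/bus/pci/devices/$GPU/driver_override"
echo "$GPU" > /sys/bus/pci/drivers_probe

# Start the VM
virsh start win10
```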
The only problem with this was that the Nvidia modules kept being loaded by the X server. This was annoying,
since I had to blacklist the modules, which in turn prevented me from using the GPU on my host. The solution, albeit
a very hacky one, was a custom package which installed the kernel modules into a separate folder, from where another
script manually inserted them using `insmod`.
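Roughly, that second script looked like this (the folder `/opt/nvidia-modules` is a placeholder; the modules have to go in in dependency order):

```sh
#!/bin/sh
# Sketch: insert the relocated Nvidia modules in dependency order.
# /opt/nvidia-modules stands in for wherever the package put them.
insmod /opt/nvidia-modules/nvidia.ko
insmod /opt/nvidia-modules/nvidia_uvm.ko
insmod /opt/nvidia-modules/nvidia_modeset.ko
insmod /opt/nvidia-modules/nvidia_drm.ko
```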
My host's video output comes from my Ryzen's iGPU. It is not powerful enough to run games like *Split/Second Velocity*
or *CS:GO* at an acceptable framerate, so what do?
Since the Nvidia driver for Linux is proprietary, [PRIME offloading](https://wiki.archlinux.org/index.php/PRIME#PRIME_GPU_offloading) was not an option. I did, however, discover
a library which allows offloading an application's rendering - if it uses GLX - onto another GPU: [primus](https://github.com/amonakov/primus).
It worked well enough for games that use OpenGL, like *CS:GO*. But when I tried launching *Split/Second Velocity*
using Wine, it crashed. Vulkan offloading is not possible with primus, but it is with [primus_vk](https://github.com/felixdoerre/primus_vk). I never got that library to work, though, so I cannot say anything about it.
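For the OpenGL case, primus ships a `primusrun` wrapper, so usage boils down to prefixing the launch command, roughly like this:

```sh
# Render the GLX application on the secondary GPU,
# display the result on the primary one
primusrun glxgears

# For Steam games, the game's launch options can be set to:
#   primusrun %command%
```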
The only solution to that, from my point of view, was to create another script which launched a second X server
on the Nvidia GPU, started Openbox as the WM on that X server, and created a seamless transition between my iGPU
X server and my Nvidia X server using [barrier](https://github.com/debauchee/barrier). I could then start applications like
Steam on the Nvidia X server and use the GPU's full potential.
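A rough sketch of that script, under some assumptions of mine (display `:1`, VT 8, an `xorg-nvidia.conf` that pins the server to the 1060, and a barrier server already running on the iGPU session are all placeholders):

```sh
#!/bin/sh
# Sketch: second X server on the Nvidia GPU, with Openbox and a barrier client.
X :1 -config xorg-nvidia.conf vt8 &
sleep 2
DISPLAY=:1 openbox &
# Connect to the barrier server running on the iGPU session
DISPLAY=:1 barrierc 127.0.0.1
```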
Since I was already using barrier for the second X server, I tried doing the same with barrier inside my VM, and all I can
say is that it works very well. It made the entire "workflow" with the VM much less painful, as I could just take
control of the host whenever I needed to, without needing a second keyboard.
# GPU Changes
Today, my PC runs the same AMD CPU. However, the Nvidia GPU got replaced with an AMD RX 590. This allowed me to
use the open-source amdgpu driver, which was and still is a huge plus for me. It did complicate some things,
though.
While I can now use PRIME offloading on any application I want, I cannot simply unbind the RX 590 from the amdgpu
driver while in X for use in my VM. The driver exposes this functionality, but it crashes the kernel as soon
as I try to suspend or shut down my computer.
The only solution for this is to blacklist the amdgpu module when starting the kernel, bind the GPU to the vfio-pci
driver and pass it through. Then I can load the amdgpu module again and have it attach itself to my iGPU. When I am
done using the VM, I can re-attach the GPU to the amdgpu driver and use it there.
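Sketched out, and again assuming the RX 590 sits at `0000:09:00.0` (a placeholder), the boot and re-attach steps look roughly like this:

```sh
#!/bin/sh
# Sketch: the kernel was booted with "modprobe.blacklist=amdgpu",
# so nothing grabs the RX 590; the PCI address is a placeholder.
GPU=0000:09:00.0

# Hand the card to vfio-pci, then let amdgpu attach to the iGPU only
echo vfio-pci > "/sys/bus/pci/devices/$GPU/driver_override"
echo "$GPU" > /sys/bus/pci/drivers_probe
modprobe amdgpu

# ... run the VM ...

# After shutting the VM down, give the card back to amdgpu
echo "$GPU" > /sys/bus/pci/drivers/vfio-pci/unbind
echo > "/sys/bus/pci/devices/$GPU/driver_override"
echo "$GPU" > /sys/bus/pci/drivers_probe
```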
There are some issues with this entire setup, though:
- Sometimes, after re-attaching, the GPU does not run at full speed. While I can normally play *CS:GO* at ~80 FPS, it can be as low as ~55 FPS after re-attachment.
- The GPU cannot be reset by the Linux kernel. This means that the GPU has to be disabled inside Windows before shutting down the VM. Otherwise, the amdgpu module cannot bind to the GPU, which has even crashed my kernel.
# Some Freezes
Ignoring the GPU problems, since around Linux kernel 4.1x I have experienced another issue: my computer would sometimes freeze
up when opening *Steam*. With even newer kernel versions, it even froze my PC when I gave my VM 10GB of RAM, but did not when
I gave it only 8GB.
By running htop with a really small refresh interval, I was lucky enough to observe the probable cause of these freezes: the
kernel was trying to swap out as much as it could, making everything grind to a halt. The solution, even though
it *feels* hacky, is to just tell the kernel to swap less aggressively by setting `vm.swappiness` either to a much
lower value, so it swaps later, or to 0, so it stops swapping altogether.
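For instance (the value 10 here is just an example, not a recommendation; pick what works for your workload):

```sh
# Lower swappiness for the running system...
sysctl vm.swappiness=10
# ...and persist it across reboots
echo "vm.swappiness = 10" > /etc/sysctl.d/99-swappiness.conf
```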
# Audio
QEMU, which I use as libvirt's backend, allows you to "pass through" audio from inside the VM to your PulseAudio socket
on the host. This only worked okay-ish at first, but now - presumably because something got updated inside QEMU - it
works well enough to play games. I get the occasional crackling, but it is not distracting at all.
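As I understand it, this boils down to pointing QEMU's PulseAudio backend at the host's socket; with libvirt, these environment variables can be set through a `<qemu:commandline>` section in the domain XML (the user ID 1000 below is an assumption):

```sh
# Sketch: environment for QEMU's PulseAudio output (older QEMU style)
export QEMU_AUDIO_DRV=pa
export QEMU_PA_SERVER=/run/user/1000/pulse/native
```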
I also tried a piece of software called [scream](https://github.com/duncanthrax/scream), which streams the audio from a
virtual audio device inside the VM to the network. As the only network interface attached to my VM went directly
to my host, I just set up the receiver application to listen only on this specific interface. This worked remarkably
well, as I never heard any crackling.
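The receiver side then looks something like this (the interface name `virbr0` is a placeholder, and the receiver name and its `-i` flag are assumptions on my part; check scream's README for your version):

```sh
# Sketch: run the PulseAudio receiver bound to the host-only interface.
# "virbr0" stands in for whatever interface leads to the VM.
scream-pulse -i virbr0
```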
The only issue that I had with scream was that, for some reason, *Tom Clancy's The Division 2* would crash every 5
minutes when I was using scream. Without it, *The Division 2* never crashed.
# Conclusion
My solutions are probably not the most elegant or the most practical but
![](/assets/img/as-long-as-it-works.jpg)