wanjohiryan / Arc3dia

Self-Hosted Stadia: Play with your friends online from any device and at any time

Home Page: https://nestri.io


Running Linux-based games with neko-rooms

impromedia opened this issue · comments

I want to run an Ubuntu game packaged in an AppImage.

I've changed the file /etc/entrypoint.sh to point to the game:

#!/bin/bash -e
# Add the VirtualGL directories to PATH
export PATH="${PATH}:/opt/VirtualGL/bin"

# Use VirtualGL to run the game with GPU-accelerated OpenGL if a GPU is available,
# otherwise run the AppImage directly
if [ -n "$(nvidia-smi --query-gpu=uuid --format=csv | sed -n 2p)" ]; then
    export VGL_DISPLAY="${VGL_DISPLAY:-egl}"
    export VGL_REFRESHRATE="$REFRESH"
    cd /games && vglrun +wm ./game.AppImage --appimage-extract-and-run
else
    cd /games && ./game.AppImage --appimage-extract-and-run
fi
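
(The GPU check above asks nvidia-smi for the GPU UUIDs in CSV form and tests whether a data row follows the CSV header; running the same pipeline on its own shows which branch the script will take.)

nvidia-smi --query-gpu=uuid --format=csv | sed -n 2p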

It seems the GPUs are not available inside the Docker container, even though I've installed nvidia-docker and the NVIDIA Container Toolkit.
I'm using neko-rooms to instantiate the qwantify sessions.
Is there a workaround to enable the GPUs in the container when it is started by neko-rooms?
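
For context, a quick way to confirm the host-level plumbing is to start a throwaway container with GPU access and run nvidia-smi inside it (a minimal check, assuming the NVIDIA Container Toolkit is registered with Docker; the CUDA image tag below is only an example):

# should print the same nvidia-smi table as on the host
docker run --rm --gpus all nvidia/cuda:12.0.1-base-ubuntu22.04 nvidia-smi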

Hi @impromedia

I haven't looked at neko-rooms yet. But this seems interesting.

What happens when you ssh into the container(s) and run nvidia-smi (assuming you are using an Nvidia GPU)?

Thank you for your fast reply. If I run nvidia-smi inside the container, it works:
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 525.60.13    Driver Version: 525.60.13    CUDA Version: 12.0     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  Tesla T4            On   | 00000000:0B:00.0 Off |                    0 |
| N/A   36C    P8     9W /  70W |     70MiB / 15360MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
|   1  Tesla T4            On   | 00000000:13:00.0 Off |                    0 |
| N/A   35C    P8     9W /  70W |     25MiB / 15360MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
+-----------------------------------------------------------------------------+

I've extracted the AppImage, and when I start it I get a "segmentation fault (core dumped)" error.
The app needs a minimum of 8 GB to run; maybe this is the issue.
How do I increase it (the host has 320 GB)?

It looks like the app interface is allowed to use only 2 GB of memory.
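
One way to see what the container itself has been granted is to exec into it and check /dev/shm (neko's example compose files typically set shm_size to 2gb, which would line up with the 2 GB figure). A minimal sketch, where the container name is only a placeholder:

# container name is hypothetical; substitute the one neko-rooms created
docker exec -it neko_room_1 df -h /dev/shm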


There seems to be no issue with the containers accessing your Nvidia GPUs.

It looks like the app interface is allowed to use only 2 GB of memory.

Try changing the shared memory size (shm_size) to '8gb' in the docker-compose.yaml and see whether that helps:

shm_size: '8gb'
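
In context that would look roughly like this (a minimal sketch; the service name and image are assumptions, not taken from the actual neko-rooms compose file):

services:
  neko:                        # service name is an assumption
    image: m1k1o/neko:firefox  # example image
    shm_size: '8gb'            # enlarge /dev/shm so the app gets more shared memory

The equivalent flag for a plain docker run is --shm-size=8g.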