Compare commits


No commits in common. "d756cc559e45682cdb062f3e75c6aaf8edf8c0e2" and "31e54bdc8551fa5c9984311341fcbd05938eca9d" have entirely different histories.

11 changed files with 78 additions and 651 deletions


@@ -1,24 +1,14 @@
 ![](https://dec05eba.com/images/gpu_screen_recorder_logo_small.png)
 # GPU Screen Recorder
 This is a screen recorder that has minimal impact on system performance by recording a window using the GPU only,
 similar to shadowplay on windows. This is the fastest screen recording tool for Linux.
 This screen recorder can be used for recording your desktop offline, for live streaming and for nvidia shadowplay-like instant replay,
-where only the last few minutes are saved.
+where only the last few seconds are saved.
-Supported video codecs:
-* H264 (default on Intel)
-* HEVC (default on AMD and NVIDIA)
-* AV1 (not currently supported on NVIDIA if you use GPU Screen Recorder flatpak)
-Supported audio codecs:
-* Opus (default)
-* AAC
-* FLAC
 ## Note
-This software works with x11 and wayland, but when using AMD/Intel or Wayland then only monitors can be recorded.
+This software works with x11 and wayland, but when using AMD/Intel or Wayland then only monitors can be recorded.\
-GPU Screen Recorder only supports h264 and hevc codecs at the moment which means that webm files are not supported.\
-CPU usage may be higher on wayland than on x11 when using nvidia.
 ### TEMPORARY ISSUES
 1) screen-direct capture has been temporary disabled as it causes issues with stuttering. This might be a nvfbc bug.
 2) Recording the monitor on steam deck might fail sometimes. This happens even when using ffmpeg directly. This might be a steam deck driver bug. Recording a single window doesn't have this issue.
@@ -31,7 +21,8 @@ For you as a user this only means that if you installed GPU Screen Recorder as a
 On a system with a i5 4690k CPU and a GTX 1080 GPU:\
 When recording Legend of Zelda Breath of the Wild at 4k, fps drops from 30 to 7 when using OBS Studio + nvenc, however when using this screen recorder the fps remains at 30.\
 When recording GTA V at 4k on highest settings, fps drops from 60 to 23 when using obs-nvfbc + nvenc, however when using this screen recorder the fps only drops to 58. The quality is also much better when using gpu screen recorder.\
-It is recommended to save the video to a SSD because of the large file size, which a slow HDD might not be fast enough to handle.
+It is recommended to save the video to a SSD because of the large file size, which a slow HDD might not be fast enough to handle.\
+Note that if you have a very powerful CPU and a not so powerful GPU and play a game that is bottlenecked by your GPU and barely uses your CPU then a CPU based screen recording (such as OBS with libx264 instead of nvenc) might perform slightly better than GPU Screen Recorder. At least on NVIDIA.
 ## Note about optimal performance on NVIDIA
 NVIDIA driver has a "feature" (read: bug) where it will downclock memory transfer rate when a program uses cuda (or nvenc, which uses cuda), such as GPU Screen Recorder. To work around this bug, GPU Screen Recorder can overclock your GPU memory transfer rate to it's normal optimal level.\
 To enable overclocking for optimal performance use the `-oc` option when running GPU Screen Recorder. You also need to have "Coolbits" NVIDIA X setting set to "12" to enable overclocking. You can automatically add this option if you run `sudo nvidia-xconfig --cool-bits=12` and then reboot your computer.\
@@ -138,7 +129,7 @@ Some linux distros (such as manjaro) disable hardware accelerated h264/hevc on A
 ## I have an old nvidia GPU that supports nvenc but I get a cuda error when trying to record
 Newer ffmpeg versions don't support older nvidia cards. Try installing GPU Screen Recorder flatpak from [flathub](https://flathub.org/apps/details/com.dec05eba.gpu_screen_recorder) instead. It comes with an older ffmpeg version which might work for your GPU.
 ## I get a black screen/glitches while live streaming
-It seems like ffmpeg earlier than version 6.1 has some type of bug. Install ffmpeg 6.1 and then reinstall GPU Screen Recorder to fix this issue. The flatpak version of GPU Screen Recorder comes with ffmpeg 6.1 so no extra steps are needed.
+It seems like ffmpeg earlier than version 6.1 has some type of bug. Install ffmpeg 6.1 (ffmpeg-git in aur, ffmpeg in the offical repositories hasn't been updated yet) and then reinstall GPU Screen Recorder.
 # Donations
 If you want to donate you can donate via bitcoin or monero.

TODO

@@ -106,7 +106,3 @@ Support I915_FORMAT_MOD_Y_TILED_CCS (and other power saving modifiers, see https
 Test if p2 state can be worked around by using pure nvenc api and overwriting cuInit/cuCtxCreate* to not do anything. Cuda might be loaded when using nvenc but it might not be used, with certain record options? (such as h264 p5).
 nvenc uses cuda when using b frames and rgb->yuv conversion, so convert the image ourselves instead.-
-Mesa doesn't support global headers (AV_CODEC_FLAG_GLOBAL_HEADER) with h264... which also breaks mkv since mkv requires global header. Right now gpu screen recorder will forcefully set video codec to hevc when h264 is requested for mkv files.
-Drop frames if live streaming cant keep up with target fps, or dynamically change resolution/quality.


@@ -6,8 +6,8 @@ cd "$script_dir"
 CC=${CC:-gcc}
 CXX=${CXX:-g++}
-opts="-O2 -g0 -DNDEBUG -Wall -Wextra -Wshadow -g -fpermissive"
+opts="-O2 -g0 -DNDEBUG -Wall -Wextra -Wshadow"
-[ -n "$DEBUG" ] && opts="-O0 -g3 -Wall -Wextra -Wshadow -fpermissive";
+[ -n "$DEBUG" ] && opts="-O0 -g3 -Wall -Wextra -Wshadow";
 build_wayland_protocol() {
 wayland-scanner private-code external/wlr-export-dmabuf-unstable-v1.xml external/wlr-export-dmabuf-unstable-v1-protocol.c
@@ -25,10 +25,9 @@ build_gsr_kms_server() {
 }
 build_gsr() {
-dependencies="libavcodec libavformat libavutil x11 xcomposite xrandr libpulse libswresample libavfilter libva libcap libdrm wayland-egl wayland-client libpipewire-0.3"
+dependencies="libavcodec libavformat libavutil x11 xcomposite xrandr libpulse libswresample libavfilter libva libcap libdrm wayland-egl wayland-client"
 includes="$(pkg-config --cflags $dependencies)"
-libs="$(pkg-config --libs $dependencies) -ldl -pthread -lm -lpipewire-0.3"
+libs="$(pkg-config --libs $dependencies) -ldl -pthread -lm"
-$CXX -c src/pipewire.cpp $opts $includes
 $CC -c src/capture/capture.c $opts $includes
 $CC -c src/capture/nvfbc.c $opts $includes
 $CC -c src/capture/xcomposite_cuda.c $opts $includes
@@ -49,7 +48,7 @@ build_gsr() {
 $CXX -c src/sound.cpp $opts $includes
 $CXX -c src/main.cpp $opts $includes
 $CXX -o gpu-screen-recorder capture.o nvfbc.o kms_client.o egl.o cuda.o xnvctrl.o overclock.o window_texture.o shader.o \
-color_conversion.o utils.o library_loader.o xcomposite_cuda.o xcomposite_vaapi.o kms_vaapi.o kms_cuda.o wlr-export-dmabuf-unstable-v1-protocol.o sound.o pipewire.o main.o $libs $opts
+color_conversion.o utils.o library_loader.o xcomposite_cuda.o xcomposite_vaapi.o kms_vaapi.o kms_cuda.o wlr-export-dmabuf-unstable-v1-protocol.o sound.o main.o $libs $opts
 }
 build_wayland_protocol


@@ -1,24 +0,0 @@
-{
-"nodes": {
-"nixpkgs": {
-"locked": {
-"lastModified": 1703013332,
-"narHash": "sha256-+tFNwMvlXLbJZXiMHqYq77z/RfmpfpiI3yjL6o/Zo9M=",
-"path": "/nix/store/50bgi74d890mpkp90w1jwc5g0dw4dccr-source",
-"rev": "54aac082a4d9bb5bbc5c4e899603abfb76a3f6d6",
-"type": "path"
-},
-"original": {
-"id": "nixpkgs",
-"type": "indirect"
-}
-},
-"root": {
-"inputs": {
-"nixpkgs": "nixpkgs"
-}
-}
-},
-"root": "root",
-"version": 7
-}


@@ -1,70 +0,0 @@
-{
-description = "A very basic flake";
-outputs = { self, nixpkgs }: let
-gsr = { stdenv
-, lib
-, fetchurl
-, makeWrapper
-, pkg-config
-, libXcomposite
-, libpulseaudio
-, ffmpeg
-, wayland
-, libdrm
-, libva
-, libglvnd
-, libXrandr
-, pipewire
-}:
-stdenv.mkDerivation {
-pname = "gpu-screen-recorder";
-version = "unstable-2023-11-18";
-# printf "r%s.%s\n" "$(git rev-list --count HEAD)" "$(git rev-parse --short HEAD)"
-src = ./.;
-#sourceRoot = ".";
-nativeBuildInputs = [
-pkg-config
-makeWrapper
-];
-buildInputs = [
-libXcomposite
-libpulseaudio
-ffmpeg
-wayland
-libdrm
-libva
-libXrandr
-pipewire
-];
-buildPhase = ''
-./build.sh
-'';
-postInstall = ''
-install -Dt $out/bin gpu-screen-recorder gsr-kms-server
-mkdir $out/bin/.wrapped
-mv $out/bin/gpu-screen-recorder $out/bin/.wrapped/
-makeWrapper "$out/bin/.wrapped/gpu-screen-recorder" "$out/bin/gpu-screen-recorder" \
---prefix LD_LIBRARY_PATH : ${libglvnd}/lib \
---prefix PATH : $out/bin
-'';
-meta = with lib; {
-description = "A screen recorder that has minimal impact on system performance by recording a window using the GPU only";
-homepage = "https://git.dec05eba.com/gpu-screen-recorder/about/";
-license = licenses.gpl3Only;
-maintainers = with maintainers; [ babbaj ];
-platforms = [ "x86_64-linux" ];
-};
-};
-in {
-packages.x86_64-linux.gsr = nixpkgs.legacyPackages.x86_64-linux.callPackage gsr {};
-packages.x86_64-linux.default = nixpkgs.legacyPackages.x86_64-linux.callPackage gsr {};
-};
-}


@@ -1 +0,0 @@
-void init_pipewire();


@@ -244,17 +244,12 @@ int gsr_kms_client_init(gsr_kms_client *self, const char *card_path) {
 fprintf(stderr, "gsr error: gsr_kms_client_init: fork failed, error: %s\n", strerror(errno));
 goto err;
 } else if(pid == 0) { /* child */
-if(inside_flatpak) {
+if(has_perm) {
-if(has_perm) {
-const char *args[] = { "flatpak-spawn", "--host", "/var/lib/flatpak/app/com.dec05eba.gpu_screen_recorder/current/active/files/bin/gsr-kms-server", self->initial_socket_path, card_path, NULL };
-execvp(args[0], (char *const*)args);
-} else {
-const char *args[] = { "flatpak-spawn", "--host", "pkexec", "flatpak", "run", "--command=gsr-kms-server", "com.dec05eba.gpu_screen_recorder", self->initial_socket_path, card_path, NULL };
-execvp(args[0], (char *const*)args);
-}
-} else if(has_perm) {
 const char *args[] = { server_filepath, self->initial_socket_path, card_path, NULL };
 execvp(args[0], (char *const*)args);
+} else if(inside_flatpak) {
+const char *args[] = { "flatpak-spawn", "--host", "pkexec", "flatpak", "run", "--command=gsr-kms-server", "com.dec05eba.gpu_screen_recorder", self->initial_socket_path, card_path, NULL };
+execvp(args[0], (char *const*)args);
 } else {
 const char *args[] = { "pkexec", server_filepath, self->initial_socket_path, card_path, NULL };
 execvp(args[0], (char *const*)args);


@@ -143,7 +143,6 @@ static uint32_t plane_get_properties(int drmfd, uint32_t plane_id, bool *is_curs
 if(!props)
 return false;
-// TODO: Dont do this every frame
 for(uint32_t i = 0; i < props->count_props; ++i) {
 drmModePropertyPtr prop = drmModeGetProperty(drmfd, props->props[i]);
 if(!prop)


@@ -5,7 +5,6 @@
 #include <stdlib.h>
 #include <string.h>
 #include <stdio.h>
-#include <math.h>
 #include <X11/Xlib.h>
 #include <libavutil/hwcontext.h>
 #include <libavutil/hwcontext_cuda.h>
@@ -298,7 +297,7 @@ static int gsr_capture_nvfbc_start(gsr_capture *cap, AVCodecContext *video_codec
 if(capture_region)
 create_capture_params.captureBox = (NVFBC_BOX){ x, y, width, height };
 create_capture_params.eTrackingType = tracking_type;
-create_capture_params.dwSamplingRateMs = (uint32_t)ceilf(1000.0f / (float)cap_nvfbc->params.fps);
+create_capture_params.dwSamplingRateMs = 1000u / ((uint32_t)cap_nvfbc->params.fps + 1);
 create_capture_params.bAllowDirectCapture = direct_capture ? NVFBC_TRUE : NVFBC_FALSE;
 create_capture_params.bPushModel = direct_capture ? NVFBC_TRUE : NVFBC_FALSE;
 //create_capture_params.bDisableAutoModesetRecovery = true; // TODO:


@@ -23,7 +23,6 @@ extern "C" {
 #include <sys/wait.h>
 #include "../include/sound.hpp"
-#include "../include/pipewire.hpp"
 extern "C" {
 #include <libavutil/pixfmt.h>
@@ -206,7 +205,7 @@ static AVCodecID audio_codec_get_id(AudioCodec audio_codec) {
 return AV_CODEC_ID_AAC;
 }
-static AVSampleFormat audio_codec_get_sample_format(AudioCodec audio_codec, const AVCodec *codec, bool mix_audio) {
+static AVSampleFormat audio_codec_get_sample_format(AudioCodec audio_codec, const AVCodec *codec) {
 switch(audio_codec) {
 case AudioCodec::AAC: {
 return AV_SAMPLE_FMT_FLTP;
@@ -223,10 +222,6 @@ static AVSampleFormat audio_codec_get_sample_format(AudioCodec audio_codec, cons
 }
 }
-// Amix only works with float audio
-if(mix_audio)
-supports_s16 = false;
 if(!supports_s16 && !supports_flt) {
 fprintf(stderr, "Warning: opus audio codec is chosen but your ffmpeg version does not support s16/flt sample format and performance might be slightly worse. You can either rebuild ffmpeg with libopus instead of the built-in opus, use the flatpak version of gpu screen recorder or record with flac audio codec instead (-ac flac). Falling back to fltp audio sample format instead.\n");
 }
@@ -276,7 +271,7 @@ static AVSampleFormat audio_format_to_sample_format(const AudioFormat audio_form
 return AV_SAMPLE_FMT_S16;
 }
-static AVCodecContext* create_audio_codec_context(int fps, AudioCodec audio_codec, bool mix_audio) {
+static AVCodecContext* create_audio_codec_context(int fps, AudioCodec audio_codec) {
 const AVCodec *codec = avcodec_find_encoder(audio_codec_get_id(audio_codec));
 if (!codec) {
 fprintf(stderr, "Error: Could not find %s audio encoder\n", audio_codec_get_name(audio_codec));
@@ -287,7 +282,7 @@ static AVCodecContext* create_audio_codec_context(int fps, AudioCode
 assert(codec->type == AVMEDIA_TYPE_AUDIO);
 codec_context->codec_id = codec->id;
-codec_context->sample_fmt = audio_codec_get_sample_format(audio_codec, codec, mix_audio);
+codec_context->sample_fmt = audio_codec_get_sample_format(audio_codec, codec);
 codec_context->bit_rate = audio_codec_get_get_bitrate(audio_codec);
 codec_context->sample_rate = 48000;
 if(audio_codec == AudioCodec::AAC)
@@ -300,10 +295,9 @@ static AVCodecContext* create_audio_codec_context(int fps, AudioCode
 #endif
 codec_context->time_base.num = 1;
-codec_context->time_base.den = codec_context->sample_rate;
+codec_context->time_base.den = AV_TIME_BASE;
 codec_context->framerate.num = fps;
 codec_context->framerate.den = 1;
-codec_context->thread_count = 1;
 codec_context->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;
 return codec_context;
@@ -329,7 +323,7 @@ static AVCodecContext *create_video_codec_context(AVPixelFormat pix_fmt,
 codec_context->framerate.den = 1;
 codec_context->sample_aspect_ratio.num = 0;
 codec_context->sample_aspect_ratio.den = 0;
-// High values reduce file size but increases time it takes to seek
+// High values reeduce file size but increases time it takes to seek
 if(is_livestream) {
 codec_context->flags |= (AV_CODEC_FLAG_CLOSED_GOP | AV_CODEC_FLAG_LOW_DELAY);
 codec_context->flags2 |= AV_CODEC_FLAG2_FAST;
@@ -399,14 +393,14 @@ static AVCodecContext *create_video_codec_context(AVPixelFormat pix_fmt,
 codec_context->global_quality = 180;
 break;
 case VideoQuality::HIGH:
-codec_context->global_quality = 140;
-break;
-case VideoQuality::VERY_HIGH:
 codec_context->global_quality = 120;
 break;
-case VideoQuality::ULTRA:
+case VideoQuality::VERY_HIGH:
 codec_context->global_quality = 100;
 break;
+case VideoQuality::ULTRA:
+codec_context->global_quality = 70;
+break;
 }
 }
@@ -726,16 +720,16 @@ static void open_video(AVCodecContext *codec_context, VideoQuality video_quality
 } else {
 switch(video_quality) {
 case VideoQuality::MEDIUM:
-av_dict_set_int(&options, "qp", 36, 0);
+av_dict_set_int(&options, "qp", 40, 0);
 break;
 case VideoQuality::HIGH:
-av_dict_set_int(&options, "qp", 32, 0);
+av_dict_set_int(&options, "qp", 35, 0);
 break;
 case VideoQuality::VERY_HIGH:
-av_dict_set_int(&options, "qp", 28, 0);
+av_dict_set_int(&options, "qp", 30, 0);
 break;
 case VideoQuality::ULTRA:
-av_dict_set_int(&options, "qp", 22, 0);
+av_dict_set_int(&options, "qp", 24, 0);
 break;
 }
 }
@@ -805,15 +799,14 @@ static void usage_full() {
 fprintf(stderr, " and the video will only be saved when the gpu-screen-recorder is closed. This feature is similar to Nvidia's instant replay feature.\n");
 fprintf(stderr, " This option has be between 5 and 1200. Note that the replay buffer size will not always be precise, because of keyframes. Optional, disabled by default.\n");
 fprintf(stderr, "\n");
-fprintf(stderr, " -k Video codec to use. Should be either 'auto', 'h264', 'h265' or 'av1'. Defaults to 'auto' which defaults to 'h265' on AMD/Nvidia and 'h264' on intel.\n");
+fprintf(stderr, " -k Video codec to use. Should be either 'auto', 'h264', 'h265', 'av1'. Defaults to 'auto' which defaults to 'h265' unless recording at fps higher than 60. Defaults to 'h264' on intel.\n");
-fprintf(stderr, " Forcefully set to 'h264' if the file container type is 'flv'.\n");
+fprintf(stderr, " Forcefully set to 'h264' if -c is 'flv'.\n");
-fprintf(stderr, " Forcefully set to 'h265' on AMD/intel if video codec is 'h264' and if the file container type is 'mkv'.\n");
 fprintf(stderr, "\n");
 fprintf(stderr, " -ac Audio codec to use. Should be either 'aac', 'opus' or 'flac'. Defaults to 'opus' for .mp4/.mkv files, otherwise defaults to 'aac'.\n");
 fprintf(stderr, " 'opus' and 'flac' is only supported by .mp4/.mkv files. 'opus' is recommended for best performance and smallest audio size.\n");
 fprintf(stderr, "\n");
-fprintf(stderr, " -oc Overclock memory transfer rate to the maximum performance level. This only applies to NVIDIA on X11 and exists to overcome a bug in NVIDIA driver where performance level\n");
+fprintf(stderr, " -oc Overclock memory transfer rate to the maximum performance level. This only applies to NVIDIA on X11 and exists to overcome a bug in NVIDIA driver where performance level. The same issue exists on Wayland but overclocking is not possible on Wayland.\n");
-fprintf(stderr, " is dropped when you record a game. Only needed if you are recording a game that is bottlenecked by GPU. The same issue exists on Wayland but overclocking is not possible on Wayland.\n");
+fprintf(stderr, " is dropped when you record a game. Only needed if you are recording a game that is bottlenecked by GPU.\n");
 fprintf(stderr, " Works only if your have \"Coolbits\" set to \"12\" in NVIDIA X settings, see README for more information. Note! use at your own risk! Optional, disabled by default.\n");
 fprintf(stderr, "\n");
 fprintf(stderr, " -fm Framerate mode. Should be either 'cfr' or 'vfr'. Defaults to 'cfr' on NVIDIA X11 and 'vfr' on AMD/Intel X11/Wayland or NVIDIA Wayland.\n");
@@ -980,7 +973,6 @@ struct AudioTrack {
 AVFilterGraph *graph = nullptr;
 AVFilterContext *sink = nullptr;
 int stream_index = 0;
-int64_t pts = 0;
 };
 static std::future<void> save_replay_thread;
@@ -1375,9 +1367,6 @@ struct Arg {
 };
 int main(int argc, char **argv) {
-init_pipewire();
-return 0;
 signal(SIGINT, stop_handler);
 signal(SIGUSR1, save_replay_handler);
@@ -1459,7 +1448,7 @@ int main(int argc, char **argv) {
 AudioCodec audio_codec = AudioCodec::OPUS;
 const char *audio_codec_to_use = args["-ac"].value();
 if(!audio_codec_to_use)
-audio_codec_to_use = "opus";
+audio_codec_to_use = "aac";
 if(strcmp(audio_codec_to_use, "aac") == 0) {
 audio_codec = AudioCodec::AAC;
@@ -1472,6 +1461,12 @@ int main(int argc, char **argv) {
 usage();
 }
+if(audio_codec != AudioCodec::AAC) {
+audio_codec_to_use = "aac";
+audio_codec = AudioCodec::AAC;
+fprintf(stderr, "Info: audio codec is forcefully set to aac at the moment because of issues with opus/flac. This is a temporary issue\n");
+}
 bool overclock = false;
 const char *overclock_str = args["-oc"].value();
 if(!overclock_str)
@@ -1542,7 +1537,6 @@ int main(int argc, char **argv) {
 if(!audio_input_arg.values.empty())
 audio_inputs = get_pulseaudio_inputs();
 std::vector<MergedAudioInputs> requested_audio_inputs;
-bool uses_amix = false;
 // Manually check if the audio inputs we give exist. This is only needed for pipewire, not pulseaudio.
 // Pipewire instead DEFAULTS TO THE DEFAULT AUDIO INPUT. THAT'S RETARDED.
@@ -1552,9 +1546,6 @@ int main(int argc, char **argv) {
 continue;
 requested_audio_inputs.push_back({parse_audio_input_arg(audio_input)});
-if(requested_audio_inputs.back().audio_inputs.size() > 1)
-uses_amix = true;
 for(AudioInput &request_audio_input : requested_audio_inputs.back().audio_inputs) {
 bool match = false;
 for(const auto &existing_audio_input : audio_inputs) {
@@ -1922,18 +1913,11 @@ int main(int argc, char **argv) {
 file_extension = file_extension.substr(0, comma_index);
 }
-if(gpu_inf.vendor != GSR_GPU_VENDOR_NVIDIA && file_extension == "mkv" && strcmp(video_codec_to_use, "h264") == 0) {
-video_codec_to_use = "h265";
-video_codec = VideoCodec::HEVC;
-fprintf(stderr, "Warning: video codec was forcefully set to h265 because mkv container is used and mesa (AMD and Intel driver) does not support h264 in mkv files\n");
-}
 switch(audio_codec) {
 case AudioCodec::AAC: {
 break;
 }
 case AudioCodec::OPUS: {
-// TODO: Also check mpegts?
 if(file_extension != "mp4" && file_extension != "mkv") {
 audio_codec_to_use = "aac";
 audio_codec = AudioCodec::AAC;
@@ -1942,15 +1926,10 @@ int main(int argc, char **argv) {
 break;
 }
 case AudioCodec::FLAC: {
-// TODO: Also check mpegts?
 if(file_extension != "mp4" && file_extension != "mkv") {
 audio_codec_to_use = "aac";
 audio_codec = AudioCodec::AAC;
 fprintf(stderr, "Warning: flac audio codec is only supported by .mp4 and .mkv files, falling back to aac instead\n");
-} else if(uses_amix) {
-audio_codec_to_use = "opus";
-audio_codec = AudioCodec::OPUS;
-fprintf(stderr, "Warning: flac audio codec is not supported when mixing audio sources, falling back to opus instead\n");
 }
 break;
 }
@@ -1981,6 +1960,10 @@ int main(int argc, char **argv) {
 fprintf(stderr, "Info: using h264 encoder because a codec was not specified and your gpu does not support h265\n");
 video_codec_to_use = "h264";
 video_codec = VideoCodec::H264;
+} else if(fps > 60) {
+fprintf(stderr, "Info: using h264 encoder because a codec was not specified and fps is more than 60\n");
+video_codec_to_use = "h264";
+video_codec = VideoCodec::H264;
 } else {
 fprintf(stderr, "Info: using h265 encoder because a codec was not specified\n");
 video_codec_to_use = "h265";
@@ -2077,7 +2060,7 @@ int main(int argc, char **argv) {
 framerate_mode_str = "cfr";
 }
-if(is_livestream && recording_saved_script) {
+if(is_livestream) {
 fprintf(stderr, "Warning: live stream detected, -sc script is ignored\n");
 recording_saved_script = nullptr;
 }
@@ -2101,8 +2084,7 @@ int main(int argc, char **argv) {
 int audio_stream_index = VIDEO_STREAM_INDEX + 1;
 for(const MergedAudioInputs &merged_audio_inputs : requested_audio_inputs) {
-const bool use_amix = merged_audio_inputs.audio_inputs.size() > 1;
-AVCodecContext *audio_codec_context = create_audio_codec_context(fps, audio_codec, use_amix);
+AVCodecContext *audio_codec_context = create_audio_codec_context(fps, audio_codec);
 AVStream *audio_stream = nullptr;
 if(replay_buffer_size_secs == -1)
@ -2123,6 +2105,7 @@ int main(int argc, char **argv) {
std::vector<AVFilterContext*> src_filter_ctx; std::vector<AVFilterContext*> src_filter_ctx;
AVFilterGraph *graph = nullptr; AVFilterGraph *graph = nullptr;
AVFilterContext *sink = nullptr; AVFilterContext *sink = nullptr;
bool use_amix = merged_audio_inputs.audio_inputs.size() > 1;
if(use_amix) { if(use_amix) {
int err = init_filter_graph(audio_codec_context, &graph, &sink, src_filter_ctx, merged_audio_inputs.audio_inputs.size()); int err = init_filter_graph(audio_codec_context, &graph, &sink, src_filter_ctx, merged_audio_inputs.audio_inputs.size());
if(err < 0) { if(err < 0) {
@ -2147,16 +2130,15 @@ int main(int argc, char **argv) {
if(audio_input.name.empty()) { if(audio_input.name.empty()) {
audio_device.sound_device.handle = NULL; audio_device.sound_device.handle = NULL;
audio_device.sound_device.frames = 0; audio_device.sound_device.frames = 0;
audio_device.frame = NULL;
} else { } else {
if(sound_device_get_by_name(&audio_device.sound_device, audio_input.name.c_str(), audio_input.description.c_str(), num_channels, audio_codec_context->frame_size, audio_codec_context_get_audio_format(audio_codec_context)) != 0) { if(sound_device_get_by_name(&audio_device.sound_device, audio_input.name.c_str(), audio_input.description.c_str(), num_channels, audio_codec_context->frame_size, audio_codec_context_get_audio_format(audio_codec_context)) != 0) {
fprintf(stderr, "Error: failed to get \"%s\" sound device\n", audio_input.name.c_str()); fprintf(stderr, "Error: failed to get \"%s\" sound device\n", audio_input.name.c_str());
_exit(1); _exit(1);
} }
audio_device.frame = create_audio_frame(audio_codec_context);
} }
audio_device.frame = create_audio_frame(audio_codec_context);
audio_device.frame->pts = 0;
audio_devices.push_back(std::move(audio_device)); audio_devices.push_back(std::move(audio_device));
} }
@ -2197,8 +2179,8 @@ int main(int argc, char **argv) {
const double start_time_pts = clock_get_monotonic_seconds(); const double start_time_pts = clock_get_monotonic_seconds();
double start_time = clock_get_monotonic_seconds(); double start_time = clock_get_monotonic_seconds(); // todo - target_fps to make first frame start immediately?
double frame_timer_start = start_time - target_fps; // We want to capture the first frame immediately double frame_timer_start = start_time;
int fps_counter = 0; int fps_counter = 0;
AVFrame *frame = av_frame_alloc(); AVFrame *frame = av_frame_alloc();
@ -2254,6 +2236,7 @@ int main(int argc, char **argv) {
const double target_audio_hz = 1.0 / (double)audio_track.codec_context->sample_rate; const double target_audio_hz = 1.0 / (double)audio_track.codec_context->sample_rate;
double received_audio_time = clock_get_monotonic_seconds(); double received_audio_time = clock_get_monotonic_seconds();
const int64_t timeout_ms = std::round((1000.0 / (double)audio_track.codec_context->sample_rate) * 1000.0); const int64_t timeout_ms = std::round((1000.0 / (double)audio_track.codec_context->sample_rate) * 1000.0);
int64_t prev_pts = 0;
while(running) { while(running) {
void *sound_buffer; void *sound_buffer;
@ -2273,7 +2256,7 @@ int main(int argc, char **argv) {
} }
// TODO: Is this |received_audio_time| really correct? // TODO: Is this |received_audio_time| really correct?
int64_t num_missing_frames = std::round((this_audio_frame_time - received_audio_time) / target_audio_hz / (int64_t)audio_track.codec_context->frame_size); int64_t num_missing_frames = std::round((this_audio_frame_time - received_audio_time) / target_audio_hz / (int64_t)audio_device.frame->nb_samples);
if(got_audio_data) if(got_audio_data)
num_missing_frames = std::max((int64_t)0, num_missing_frames - 1); num_missing_frames = std::max((int64_t)0, num_missing_frames - 1);
@ -2292,7 +2275,7 @@ int main(int argc, char **argv) {
//audio_track.frame->data[0] = empty_audio; //audio_track.frame->data[0] = empty_audio;
received_audio_time = this_audio_frame_time; received_audio_time = this_audio_frame_time;
if(needs_audio_conversion) if(needs_audio_conversion)
swr_convert(swr, &audio_device.frame->data[0], audio_track.codec_context->frame_size, (const uint8_t**)&empty_audio, audio_track.codec_context->frame_size); swr_convert(swr, &audio_device.frame->data[0], audio_device.frame->nb_samples, (const uint8_t**)&empty_audio, audio_track.codec_context->frame_size);
else else
audio_device.frame->data[0] = empty_audio; audio_device.frame->data[0] = empty_audio;
@ -2305,6 +2288,12 @@ int main(int argc, char **argv) {
fprintf(stderr, "Error: failed to add audio frame to filter\n"); fprintf(stderr, "Error: failed to add audio frame to filter\n");
} }
} else { } else {
audio_device.frame->pts = (this_audio_frame_time - record_start_time) * (double)AV_TIME_BASE;
const bool same_pts = audio_device.frame->pts == prev_pts;
prev_pts = audio_device.frame->pts;
if(same_pts)
continue;
ret = avcodec_send_frame(audio_track.codec_context, audio_device.frame); ret = avcodec_send_frame(audio_track.codec_context, audio_device.frame);
if(ret >= 0) { if(ret >= 0) {
// TODO: Move to separate thread because this could write to network (for example when livestreaming) // TODO: Move to separate thread because this could write to network (for example when livestreaming)
@ -2313,7 +2302,6 @@ int main(int argc, char **argv) {
fprintf(stderr, "Failed to encode audio!\n"); fprintf(stderr, "Failed to encode audio!\n");
} }
} }
audio_device.frame->pts += audio_track.codec_context->frame_size;
} }
} }
@ -2323,10 +2311,16 @@ int main(int argc, char **argv) {
if(got_audio_data) { if(got_audio_data) {
// TODO: Instead of converting audio, get float audio from alsa. Or does alsa do conversion internally to get this format? // TODO: Instead of converting audio, get float audio from alsa. Or does alsa do conversion internally to get this format?
if(needs_audio_conversion) if(needs_audio_conversion)
swr_convert(swr, &audio_device.frame->data[0], audio_track.codec_context->frame_size, (const uint8_t**)&sound_buffer, audio_track.codec_context->frame_size); swr_convert(swr, &audio_device.frame->data[0], audio_device.frame->nb_samples, (const uint8_t**)&sound_buffer, audio_track.codec_context->frame_size);
else else
audio_device.frame->data[0] = (uint8_t*)sound_buffer; audio_device.frame->data[0] = (uint8_t*)sound_buffer;
audio_device.frame->pts = (this_audio_frame_time - record_start_time) * (double)AV_TIME_BASE;
const bool same_pts = audio_device.frame->pts == prev_pts;
prev_pts = audio_device.frame->pts;
if(same_pts)
continue;
if(audio_track.graph) { if(audio_track.graph) {
std::lock_guard<std::mutex> lock(audio_filter_mutex); std::lock_guard<std::mutex> lock(audio_filter_mutex);
// TODO: av_buffersrc_add_frame // TODO: av_buffersrc_add_frame
@ -2342,8 +2336,6 @@ int main(int argc, char **argv) {
fprintf(stderr, "Failed to encode audio!\n"); fprintf(stderr, "Failed to encode audio!\n");
} }
} }
audio_device.frame->pts += audio_track.codec_context->frame_size;
} }
} }
@ -2361,6 +2353,7 @@ int main(int argc, char **argv) {
int64_t video_pts_counter = 0; int64_t video_pts_counter = 0;
int64_t video_prev_pts = 0; int64_t video_prev_pts = 0;
int64_t audio_prev_pts = 0;
while(running) { while(running) {
double frame_start = clock_get_monotonic_seconds(); double frame_start = clock_get_monotonic_seconds();
@ -2381,7 +2374,15 @@ int main(int argc, char **argv) {
int err = 0; int err = 0;
while ((err = av_buffersink_get_frame(audio_track.sink, aframe)) >= 0) { while ((err = av_buffersink_get_frame(audio_track.sink, aframe)) >= 0) {
aframe->pts = audio_track.pts; const double this_audio_frame_time = clock_get_monotonic_seconds();
aframe->pts = (this_audio_frame_time - record_start_time) * (double)AV_TIME_BASE;
const bool same_pts = aframe->pts == audio_prev_pts;
audio_prev_pts = aframe->pts;
if(same_pts) {
av_frame_unref(aframe);
continue;
}
err = avcodec_send_frame(audio_track.codec_context, aframe); err = avcodec_send_frame(audio_track.codec_context, aframe);
if(err >= 0){ if(err >= 0){
// TODO: Move to separate thread because this could write to network (for example when livestreaming) // TODO: Move to separate thread because this could write to network (for example when livestreaming)
@ -2390,7 +2391,6 @@ int main(int argc, char **argv) {
fprintf(stderr, "Failed to encode audio!\n"); fprintf(stderr, "Failed to encode audio!\n");
} }
av_frame_unref(aframe); av_frame_unref(aframe);
audio_track.pts += audio_track.codec_context->frame_size;
} }
} }
} }
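The audio-timing change in the hunks above swaps a sample-counter PTS (`pts += frame_size`) for one derived from the monotonic clock, and skips any frame whose PTS would repeat the previous one (the `same_pts` / `continue` path). A minimal standalone sketch of that logic, with a hypothetical `PtsTracker` helper that is not in the real code (which does this inline on `AVFrame::pts`):

```cpp
#include <cassert>
#include <cstdint>

// FFmpeg's AV_TIME_BASE: the PTS here is expressed in microseconds.
static const int64_t kTimeBase = 1000000;

// Hypothetical helper mirroring the diff: derive a PTS from wall-clock time
// relative to the recording start, and report duplicates so the caller can
// skip sending that frame to the encoder.
struct PtsTracker {
    int64_t prev_pts = 0;

    // Returns the new PTS, or -1 if it equals the previous one (skip frame).
    int64_t next_pts(double frame_time, double record_start_time) {
        const int64_t pts = (int64_t)((frame_time - record_start_time) * kTimeBase);
        if(pts == prev_pts)
            return -1;
        prev_pts = pts;
        return pts;
    }
};
```

Two captures landing inside the same microsecond would otherwise produce equal PTS values, which encoders and muxers reject as non-monotonic; dropping the second frame is the simplest way out.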


@@ -1,457 +0,0 @@
#include <pipewire/pipewire.h>
#include <spa/param/audio/format-utils.h>
#include <spa/debug/types.h>
#include <spa/param/audio/type-info.h>
#include <vector>
#define STR(x) #x
#define AUDIO_CHANNELS 2
struct target_client {
const char *app_name;
const char *binary;
uint32_t id;
struct spa_hook client_listener;
};
struct target_port {
uint32_t id;
struct target_node *node;
};
struct target_node {
uint32_t client_id;
uint32_t id;
const char *app_name;
std::vector<struct target_port> ports;
};
struct sink_port {
uint32_t id;
const char* channel;
};
struct data {
struct pw_core *core;
// The stream we will capture
struct pw_stream *stream;
// The context to use.
struct pw_context *context;
// Object to accessing global events.
struct pw_registry *registry;
// Listener for global events.
struct spa_hook registry_listener;
// The capture sink.
struct pw_proxy *sink_proxy;
// Listener for the sink events.
struct spa_hook sink_proxy_listener;
// The event loop to use.
struct pw_thread_loop *thread_loop;
// The id of the sink that we created.
uint32_t sink_id;
// The serial of the sink.
uint32_t sink_serial;
// Sequence number for forcing a server round trip
int seq;
std::vector<struct sink_port> sink_ports;
std::vector<struct target_client> targets;
std::vector<struct target_node> nodes;
std::vector<struct target_port> ports;
struct spa_audio_info format;
};
static void on_process(void *userdata)
{
struct data *data = static_cast<struct data *>(userdata);
struct pw_buffer *b;
struct spa_buffer *buf;
if ((b = pw_stream_dequeue_buffer(data->stream)) == NULL) {
pw_log_warn("out of buffers: %m");
return;
}
buf = b->buffer;
if (buf->datas[0].data == NULL)
return;
printf("got a frame of size %d\n", buf->datas[0].chunk->size);
pw_stream_queue_buffer(data->stream, b);
}
/* [on_process] */
static void on_param_changed(void *userdata, uint32_t id, const struct spa_pod *param)
{
struct data *data = static_cast<struct data *>(userdata);
if (param == NULL || id != SPA_PARAM_Format)
return;
if (spa_format_parse(param,
&data->format.media_type,
&data->format.media_subtype) < 0)
return;
if (data->format.media_type != SPA_MEDIA_TYPE_audio ||
data->format.media_subtype != SPA_MEDIA_SUBTYPE_raw)
return;
if (spa_format_audio_raw_parse(param, &data->format.info.raw) < 0)
return;
printf("got audio format:\n");
printf(" channels: %d\n", data->format.info.raw.channels);
printf(" rate: %d\n", data->format.info.raw.rate);
}
void register_target_client(struct data *data, uint32_t id, const char* app_name) {
struct target_client client = {};
client.binary = NULL;
client.app_name = strdup(app_name);
client.id = id;
data->targets.push_back(client);
}
void register_target_node(struct data *data, uint32_t id, uint32_t client_id, const char* app_name) {
struct target_node node = {};
node.app_name = strdup(app_name);
node.id = id;
node.client_id = client_id;
data->nodes.push_back(node);
}
void register_target_port(struct data *data, struct target_node *node, uint32_t id) {
struct target_port port = {};
port.id = id;
port.node = node;
data->ports.push_back(port);
}
static void registry_event_global(void *raw_data, uint32_t id,
uint32_t permissions, const char *type, uint32_t version,
const struct spa_dict *props)
{
if (!type || !props)
return;
struct data *data = static_cast<struct data *>(raw_data);
if (id == data->sink_id) {
const char *serial = spa_dict_lookup(props, PW_KEY_OBJECT_SERIAL);
if (!serial) {
data->sink_serial = 0;
printf("No serial found on capture sink\n");
} else {
data->sink_serial = strtoul(serial, NULL, 10);
}
}
if (strcmp(type, PW_TYPE_INTERFACE_Port) == 0) {
const char *nid, *dir, *chn;
if (
!(nid = spa_dict_lookup(props, PW_KEY_NODE_ID)) ||
!(dir = spa_dict_lookup(props, PW_KEY_PORT_DIRECTION)) ||
!(chn = spa_dict_lookup(props, PW_KEY_AUDIO_CHANNEL))
) {
printf("One or more props not set\n");
return;
}
uint32_t node_id = strtoul(nid, NULL, 10);
printf("Port: node id %u\n", node_id);
if (strcmp(dir, "in") == 0 && node_id == data->sink_id && data->sink_id != SPA_ID_INVALID) {
printf("=======\n");
printf("Found our own sink's port: %d sink_id %d channel %s\n", id, data->sink_id, chn);
printf("=======\n");
data->sink_ports.push_back(
{ id, strdup(chn), }
);
} else if (strcmp(dir, "out") == 0) {
if (data->sink_id == SPA_ID_INVALID) {
printf("Want to process port %d but sink_id is invalid\n", id);
return;
}
struct target_node *n = NULL;
for (auto t : data->nodes) {
if (t.id == node_id) {
n = &t;
break;
}
}
if (!n) {
printf("Target not found\n");
return;
}
printf("Target found\n");
uint32_t p = 0;
for (auto sink_port : data->sink_ports) {
printf("%s = %s\n", sink_port.channel, chn);
if (strcmp(sink_port.channel, chn) == 0) {
p = sink_port.id;
break;
}
}
if (!p) {
printf("Failed to find port for channel %s of port %d\n", chn, id);
return;
}
struct pw_properties *link_props = pw_properties_new(
PW_KEY_OBJECT_LINGER, "false",
NULL
);
pw_properties_setf(link_props, PW_KEY_LINK_OUTPUT_NODE, "%u", node_id);
pw_properties_setf(link_props, PW_KEY_LINK_OUTPUT_PORT, "%u", id);
pw_properties_setf(link_props, PW_KEY_LINK_INPUT_NODE, "%u", data->sink_id);
pw_properties_setf(link_props, PW_KEY_LINK_INPUT_PORT, "%u", p);
printf(
"Connecting (%d, %d) -> (%d, %d)\n",
node_id, id,
data->sink_id, p
);
struct pw_proxy *link_proxy = static_cast<struct pw_proxy *>(
pw_core_create_object(
data->core, "link-factory",
PW_TYPE_INTERFACE_Link, PW_VERSION_LINK, &link_props->dict, 0
)
);
data->seq = pw_core_sync(data->core, PW_ID_CORE, data->seq);
pw_properties_free(link_props);
if (!link_proxy) {
printf("!!!!! Failed to connect port %u of node %u to capture sink\n", id, node_id);
return;
}
printf("Connected!\n");
}
} else if (strcmp(type, PW_TYPE_INTERFACE_Client) == 0) {
const char *client_app_name = spa_dict_lookup(props, PW_KEY_APP_NAME);
printf("Client: app name %s id %d\n", client_app_name, id);
register_target_client(
data,
id,
client_app_name
);
} else if (strcmp(type, PW_TYPE_INTERFACE_Node) == 0) {
const char *node_name, *media_class;
if (!(node_name = spa_dict_lookup(props, PW_KEY_NODE_NAME)) ||
!(media_class = spa_dict_lookup(props, PW_KEY_MEDIA_CLASS))) {
return;
}
printf("Node: media_class %s node_app %s id %d\n", media_class, node_name, id);
if (strcmp(media_class, "Stream/Output/Audio") == 0) {
const char *node_app_name = spa_dict_lookup(props, PW_KEY_APP_NAME);
if (!node_app_name) {
node_app_name = node_name;
}
uint32_t client_id = 0;
const char *client_id_str = spa_dict_lookup(props, PW_KEY_CLIENT_ID);
if (client_id_str) {
client_id = strtoul(client_id_str, NULL, 10);
}
register_target_node(
data,
id,
client_id,
node_app_name
);
}
}
}
static const struct pw_stream_events stream_events = {
PW_VERSION_STREAM_EVENTS,
.param_changed = on_param_changed,
.process = on_process,
};
static const struct pw_registry_events registry_events = {
PW_VERSION_REGISTRY_EVENTS,
.global = registry_event_global,
};
static void on_sink_proxy_bound(void *userdata, uint32_t global_id) {
struct data *data = static_cast<struct data*>(userdata);
data->sink_id = global_id;
printf("Got id %d\n", global_id);
}
static void on_sink_proxy_error(void *data, int seq, int res, const char *message)
{
printf("[pipewire] App capture sink error: seq:%d res:%d :%s", seq, res, message);
}
static const struct pw_proxy_events sink_proxy_events = {
PW_VERSION_PROXY_EVENTS,
.bound = on_sink_proxy_bound,
.error = on_sink_proxy_error,
};
void init_pipewire() {
struct data data = {
0,
sink_id: SPA_ID_INVALID,
sink_serial: 0,
seq: 0,
sink_ports: std::vector<struct sink_port> {},
targets: std::vector<struct target_client> {},
nodes: std::vector<struct target_node> {},
ports: std::vector<struct target_port> {},
};
const struct spa_pod *params[1];
uint8_t buffer[2048];
struct spa_pod_builder b = SPA_POD_BUILDER_INIT(buffer, sizeof(buffer));
struct pw_properties *props;
pw_init(NULL, NULL);
data.thread_loop = pw_thread_loop_new("PipeWire thread loop", NULL);
pw_thread_loop_lock(data.thread_loop);
if (pw_thread_loop_start(data.thread_loop) < 0) {
printf("Failed to start thread loop");
return;
}
data.context = pw_context_new(pw_thread_loop_get_loop(data.thread_loop), NULL, 0);
data.core = pw_context_connect(data.context, NULL, 0);
pw_core_sync(data.core, PW_ID_CORE, 0);
//pw_thread_loop_wait(data.thread_loop);
pw_thread_loop_unlock(data.thread_loop);
props = pw_properties_new(
PW_KEY_MEDIA_TYPE, "Audio",
PW_KEY_MEDIA_CATEGORY, "Capture",
PW_KEY_MEDIA_ROLE, "Screen",
PW_KEY_NODE_NAME, "GSR",
PW_KEY_NODE_VIRTUAL, "true",
PW_KEY_AUDIO_CHANNELS, "" STR(AUDIO_CHANNELS) "",
SPA_KEY_AUDIO_POSITION, "FL,FR",
PW_KEY_FACTORY_NAME, "support.null-audio-sink",
PW_KEY_MEDIA_CLASS, "Audio/Sink/Internal",
NULL
);
data.sink_proxy = static_cast<pw_proxy *>(
pw_core_create_object(
data.core,
"adapter",
PW_TYPE_INTERFACE_Node, PW_VERSION_NODE, &props->dict, 0
)
);
pw_proxy_add_listener(
data.sink_proxy,
&data.sink_proxy_listener,
&sink_proxy_events,
&data
);
data.registry = pw_core_get_registry(data.core, PW_VERSION_REGISTRY, 0);
printf("Got registry\n");
spa_zero(data.registry_listener);
pw_registry_add_listener(data.registry, &data.registry_listener, &registry_events, &data);
printf("Listener registered\n");
printf("Waiting for id\n");
while (data.sink_id == SPA_ID_INVALID || data.sink_serial == 0) {
printf("Poll\n");
pw_loop_iterate(pw_thread_loop_get_loop(data.thread_loop), -1);
}
printf("Got id\n");
enum spa_audio_channel channels[8];
channels[0] = SPA_AUDIO_CHANNEL_FL;
channels[1] = SPA_AUDIO_CHANNEL_FR;
channels[2] = SPA_AUDIO_CHANNEL_UNKNOWN;
channels[3] = SPA_AUDIO_CHANNEL_UNKNOWN;
channels[4] = SPA_AUDIO_CHANNEL_UNKNOWN;
channels[5] = SPA_AUDIO_CHANNEL_UNKNOWN;
channels[6] = SPA_AUDIO_CHANNEL_UNKNOWN;
channels[7] = SPA_AUDIO_CHANNEL_UNKNOWN;
params[0] = spa_pod_builder_add_object(
&b,
SPA_TYPE_OBJECT_Format, SPA_PARAM_EnumFormat,
SPA_FORMAT_mediaType, SPA_POD_Id(SPA_MEDIA_TYPE_audio),
SPA_FORMAT_mediaSubtype, SPA_POD_Id(SPA_MEDIA_SUBTYPE_raw),
SPA_FORMAT_AUDIO_channels, SPA_POD_Int(AUDIO_CHANNELS),
SPA_FORMAT_AUDIO_position, SPA_POD_Array(sizeof(enum spa_audio_channel), SPA_TYPE_Id, AUDIO_CHANNELS, channels),
SPA_FORMAT_AUDIO_format, SPA_POD_CHOICE_ENUM_Id(
8, SPA_AUDIO_FORMAT_U8, SPA_AUDIO_FORMAT_S16_LE, SPA_AUDIO_FORMAT_S32_LE,
SPA_AUDIO_FORMAT_F32_LE, SPA_AUDIO_FORMAT_U8P, SPA_AUDIO_FORMAT_S16P,
SPA_AUDIO_FORMAT_S32P, SPA_AUDIO_FORMAT_F32P
)
);
data.stream = pw_stream_new(
data.core,
"GSR",
pw_properties_new(
PW_KEY_NODE_NAME, "GSR",
PW_KEY_NODE_DESCRIPTION, "GSR Audio Capture",
PW_KEY_MEDIA_TYPE, "Audio",
PW_KEY_MEDIA_CATEGORY, "Capture",
PW_KEY_MEDIA_ROLE, "Production",
PW_KEY_NODE_WANT_DRIVER, "true",
PW_KEY_STREAM_CAPTURE_SINK, "true",
NULL
)
);
struct pw_properties *stream_props = pw_properties_new(NULL, NULL);
pw_properties_setf(stream_props, PW_KEY_TARGET_OBJECT, "%u", data.sink_serial);
pw_stream_update_properties(data.stream, &stream_props->dict);
pw_properties_free(stream_props);
pw_stream_connect(
data.stream,
PW_DIRECTION_INPUT,
PW_ID_ANY,
static_cast<pw_stream_flags>(PW_STREAM_FLAG_AUTOCONNECT | PW_STREAM_FLAG_MAP_BUFFERS),
params,
1
);
struct spa_hook stream_listener;
pw_stream_add_listener(
data.stream,
&stream_listener,
&stream_events,
&data
);
while (true) {
pw_loop_iterate(pw_thread_loop_get_loop(data.thread_loop), -1);
}
pw_proxy_destroy((struct pw_proxy *) data.registry);
pw_proxy_destroy(data.sink_proxy);
pw_stream_destroy(data.stream);
pw_context_destroy(data.context);
pw_thread_loop_destroy(data.thread_loop);
}
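The deleted PipeWire sink above links each application's output port to the capture sink's input port by matching the `PW_KEY_AUDIO_CHANNEL` value ("FL", "FR", ...). The matching itself is a plain lookup over the `sink_ports` vector; sketched standalone below with hypothetical `SinkPort` / `find_sink_port_for_channel` names (not part of the PipeWire API):

```cpp
#include <cassert>
#include <cstdint>
#include <cstring>
#include <vector>

// Mirrors the sink_port entries collected in struct data.
struct SinkPort {
    uint32_t id;
    const char *channel; // e.g. "FL" or "FR"
};

// Returns the id of the sink input port carrying the given channel,
// or 0 when no port matches (0 doubles as "not found" in the code above).
static uint32_t find_sink_port_for_channel(const std::vector<SinkPort> &sink_ports,
                                           const char *channel) {
    for(const SinkPort &port : sink_ports) {
        if(strcmp(port.channel, channel) == 0)
            return port.id;
    }
    return 0;
}
```

With one `pw_link` created per matched pair, every application stream ends up mirrored channel-by-channel into the virtual capture sink.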