Compare commits

11 Commits

| Author | SHA1 | Message | Date |
| --- | --- | --- | --- |
|  | 93d6ea77c0 | Cleanup | 2023-12-28 16:03:48 +01:00 |
|  | d756cc559e | Get PipeWire somewhat running | 2023-12-27 19:50:50 +01:00 |
| dec05eba | 84f9a04272 | Small changes who cares | 2023-12-23 12:40:16 +01:00 |
| dec05eba | 02ee8b8d0f | Add logo to README | 2023-12-06 23:46:36 +01:00 |
| dec05eba | f75640a1d5 | m | 2023-12-06 14:55:51 +01:00 |
| dec05eba | ad40fa04d6 | old info gone | 2023-12-06 14:55:23 +01:00 |
| dec05eba | ae92727965 | Update readme with info about codecs | 2023-12-06 14:48:46 +01:00 |
| dec05eba | 852882bae3 | Fix opus and flac audio sources, fix crash when live streaming without an audio source | 2023-12-03 00:59:07 +01:00 |
| dec05eba | 1260598e9e | Reconfigure quality for av1 and hevc vaapi | 2023-12-01 11:17:29 +01:00 |
| dec05eba | 8e66363352 | flatpak: run gsr kms server on host if the file has root capacity | 2023-11-30 18:44:45 +01:00 |
| dec05eba | 72d75d0f4a | Workaround mesa (amd and intel driver issue): use hevc when mkv is used since mesa doesn't support global headers for h264 | 2023-11-30 18:27:39 +01:00 |
11 changed files with 810 additions and 77 deletions


@@ -1,14 +1,24 @@
![](https://dec05eba.com/images/gpu_screen_recorder_logo_small.png)
# GPU Screen Recorder
This is a screen recorder that has minimal impact on system performance by recording a window using the GPU only,
similar to shadowplay on windows. This is the fastest screen recording tool for Linux.
This screen recorder can be used for recording your desktop offline, for live streaming and for nvidia shadowplay-like instant replay,
where only the last few seconds are saved.
where only the last few minutes are saved.
Supported video codecs:
* H264 (default on Intel)
* HEVC (default on AMD and NVIDIA)
* AV1 (not currently supported on NVIDIA if you use GPU Screen Recorder flatpak)
Supported audio codecs:
* Opus (default)
* AAC
* FLAC
## Note
This software works with x11 and wayland, but when using AMD/Intel or Wayland then only monitors can be recorded.\
GPU Screen Recorder only supports h264 and hevc codecs at the moment which means that webm files are not supported.\
CPU usage may be higher on wayland than on x11 when using nvidia.
This software works with x11 and wayland, but when using AMD/Intel or Wayland then only monitors can be recorded.
### TEMPORARY ISSUES
1) screen-direct capture has been temporarily disabled as it causes issues with stuttering. This might be an nvfbc bug.
2) Recording the monitor on steam deck might fail sometimes. This happens even when using ffmpeg directly. This might be a steam deck driver bug. Recording a single window doesn't have this issue.
@@ -21,8 +31,7 @@ For you as a user this only means that if you installed GPU Screen Recorder as a
On a system with a i5 4690k CPU and a GTX 1080 GPU:\
When recording Legend of Zelda Breath of the Wild at 4k, fps drops from 30 to 7 when using OBS Studio + nvenc, however when using this screen recorder the fps remains at 30.\
When recording GTA V at 4k on highest settings, fps drops from 60 to 23 when using obs-nvfbc + nvenc, however when using this screen recorder the fps only drops to 58. The quality is also much better when using gpu screen recorder.\
It is recommended to save the video to a SSD because of the large file size, which a slow HDD might not be fast enough to handle.\
Note that if you have a very powerful CPU and a not so powerful GPU and play a game that is bottlenecked by your GPU and barely uses your CPU then a CPU based screen recording (such as OBS with libx264 instead of nvenc) might perform slightly better than GPU Screen Recorder. At least on NVIDIA.
It is recommended to save the video to a SSD because of the large file size, which a slow HDD might not be fast enough to handle.
## Note about optimal performance on NVIDIA
NVIDIA driver has a "feature" (read: bug) where it will downclock memory transfer rate when a program uses cuda (or nvenc, which uses cuda), such as GPU Screen Recorder. To work around this bug, GPU Screen Recorder can overclock your GPU memory transfer rate to its normal optimal level.\
To enable overclocking for optimal performance use the `-oc` option when running GPU Screen Recorder. You also need to have "Coolbits" NVIDIA X setting set to "12" to enable overclocking. You can automatically add this option if you run `sudo nvidia-xconfig --cool-bits=12` and then reboot your computer.\
@@ -129,7 +138,7 @@ Some linux distros (such as manjaro) disable hardware accelerated h264/hevc on A
## I have an old nvidia GPU that supports nvenc but I get a cuda error when trying to record
Newer ffmpeg versions don't support older nvidia cards. Try installing GPU Screen Recorder flatpak from [flathub](https://flathub.org/apps/details/com.dec05eba.gpu_screen_recorder) instead. It comes with an older ffmpeg version which might work for your GPU.
## I get a black screen/glitches while live streaming
It seems like ffmpeg earlier than version 6.1 has some type of bug. Install ffmpeg 6.1 (ffmpeg-git in aur, ffmpeg in the offical repositories hasn't been updated yet) and then reinstall GPU Screen Recorder.
It seems like ffmpeg earlier than version 6.1 has some type of bug. Install ffmpeg 6.1 and then reinstall GPU Screen Recorder to fix this issue. The flatpak version of GPU Screen Recorder comes with ffmpeg 6.1 so no extra steps are needed.
# Donations
If you want to donate you can donate via bitcoin or monero.

TODO

@@ -106,3 +106,7 @@ Support I915_FORMAT_MOD_Y_TILED_CCS (and other power saving modifiers, see https
Test if p2 state can be worked around by using pure nvenc api and overwriting cuInit/cuCtxCreate* to not do anything. Cuda might be loaded when using nvenc but it might not be used, with certain record options? (such as h264 p5).
nvenc uses cuda when using b frames and rgb->yuv conversion, so convert the image ourselves instead.
Mesa doesn't support global headers (AV_CODEC_FLAG_GLOBAL_HEADER) with h264... which also breaks mkv since mkv requires global header. Right now gpu screen recorder will forcefully set video codec to hevc when h264 is requested for mkv files.
Drop frames if live streaming can't keep up with target fps, or dynamically change resolution/quality.


@@ -6,8 +6,8 @@ cd "$script_dir"
CC=${CC:-gcc}
CXX=${CXX:-g++}
opts="-O2 -g0 -DNDEBUG -Wall -Wextra -Wshadow"
[ -n "$DEBUG" ] && opts="-O0 -g3 -Wall -Wextra -Wshadow";
opts="-O2 -g0 -DNDEBUG -Wall -Wextra -Wshadow -g -fpermissive"
[ -n "$DEBUG" ] && opts="-O0 -g3 -Wall -Wextra -Wshadow -fpermissive";
build_wayland_protocol() {
wayland-scanner private-code external/wlr-export-dmabuf-unstable-v1.xml external/wlr-export-dmabuf-unstable-v1-protocol.c
@@ -25,9 +25,10 @@ build_gsr_kms_server() {
}
build_gsr() {
dependencies="libavcodec libavformat libavutil x11 xcomposite xrandr libpulse libswresample libavfilter libva libcap libdrm wayland-egl wayland-client"
dependencies="libavcodec libavformat libavutil x11 xcomposite xrandr libpulse libswresample libavfilter libva libcap libdrm wayland-egl wayland-client libpipewire-0.3"
includes="$(pkg-config --cflags $dependencies)"
libs="$(pkg-config --libs $dependencies) -ldl -pthread -lm"
libs="$(pkg-config --libs $dependencies) -ldl -pthread -lm -lpipewire-0.3"
$CXX -c src/pipewire.cpp $opts $includes
$CC -c src/capture/capture.c $opts $includes
$CC -c src/capture/nvfbc.c $opts $includes
$CC -c src/capture/xcomposite_cuda.c $opts $includes
@@ -48,7 +49,7 @@ build_gsr() {
$CXX -c src/sound.cpp $opts $includes
$CXX -c src/main.cpp $opts $includes
$CXX -o gpu-screen-recorder capture.o nvfbc.o kms_client.o egl.o cuda.o xnvctrl.o overclock.o window_texture.o shader.o \
color_conversion.o utils.o library_loader.o xcomposite_cuda.o xcomposite_vaapi.o kms_vaapi.o kms_cuda.o wlr-export-dmabuf-unstable-v1-protocol.o sound.o main.o $libs $opts
color_conversion.o utils.o library_loader.o xcomposite_cuda.o xcomposite_vaapi.o kms_vaapi.o kms_cuda.o wlr-export-dmabuf-unstable-v1-protocol.o sound.o pipewire.o main.o $libs $opts
}
build_wayland_protocol

flake.lock (generated)

@@ -0,0 +1,24 @@
{
"nodes": {
"nixpkgs": {
"locked": {
"lastModified": 1703013332,
"narHash": "sha256-+tFNwMvlXLbJZXiMHqYq77z/RfmpfpiI3yjL6o/Zo9M=",
"path": "/nix/store/50bgi74d890mpkp90w1jwc5g0dw4dccr-source",
"rev": "54aac082a4d9bb5bbc5c4e899603abfb76a3f6d6",
"type": "path"
},
"original": {
"id": "nixpkgs",
"type": "indirect"
}
},
"root": {
"inputs": {
"nixpkgs": "nixpkgs"
}
}
},
"root": "root",
"version": 7
}

flake.nix

@@ -0,0 +1,70 @@
{
description = "A very basic flake";
outputs = { self, nixpkgs }: let
gsr = { stdenv
, lib
, fetchurl
, makeWrapper
, pkg-config
, libXcomposite
, libpulseaudio
, ffmpeg
, wayland
, libdrm
, libva
, libglvnd
, libXrandr
, pipewire
}:
stdenv.mkDerivation {
pname = "gpu-screen-recorder";
version = "unstable-2023-11-18";
# printf "r%s.%s\n" "$(git rev-list --count HEAD)" "$(git rev-parse --short HEAD)"
src = ./.;
#sourceRoot = ".";
nativeBuildInputs = [
pkg-config
makeWrapper
];
buildInputs = [
libXcomposite
libpulseaudio
ffmpeg
wayland
libdrm
libva
libXrandr
pipewire
];
buildPhase = ''
./build.sh
'';
postInstall = ''
install -Dt $out/bin gpu-screen-recorder gsr-kms-server
mkdir $out/bin/.wrapped
mv $out/bin/gpu-screen-recorder $out/bin/.wrapped/
makeWrapper "$out/bin/.wrapped/gpu-screen-recorder" "$out/bin/gpu-screen-recorder" \
--prefix LD_LIBRARY_PATH : ${libglvnd}/lib \
--prefix PATH : $out/bin
'';
meta = with lib; {
description = "A screen recorder that has minimal impact on system performance by recording a window using the GPU only";
homepage = "https://git.dec05eba.com/gpu-screen-recorder/about/";
license = licenses.gpl3Only;
maintainers = with maintainers; [ babbaj ];
platforms = [ "x86_64-linux" ];
};
};
in {
packages.x86_64-linux.gsr = nixpkgs.legacyPackages.x86_64-linux.callPackage gsr {};
packages.x86_64-linux.default = nixpkgs.legacyPackages.x86_64-linux.callPackage gsr {};
};
}

include/pipewire.hpp

@@ -0,0 +1,112 @@
#ifndef __GSR_PIPEWIRE_HPP__
#define __GSR_PIPEWIRE_HPP__
#include <pipewire/pipewire.h>
#include <spa/param/audio/format-utils.h>
#include <spa/debug/types.h>
#include <spa/param/audio/type-info.h>
#include <vector>
#include <string>
#include <optional>
struct capture_config {
// The node_name to look for. If not set, then every node_name matches.
std::optional<std::string> name;
// Whether to look for a match (false) or match everything that does not
// match name (true).
bool exclude;
// Whether name refers to an application (false) or an input device (true).
bool device;
// The amount of channels to create.
int channels;
};
struct target_port {
uint32_t id;
uint32_t node_id;
const char *channel_name;
};
struct target_node {
uint32_t client_id;
uint32_t id;
const char *app_name;
};
struct target_input {
uint32_t id;
};
struct sink_port {
uint32_t id;
const char* channel;
};
struct capture_stream {
struct pw_core *core;
// The stream we will capture
struct pw_stream *stream;
// The context to use.
struct pw_context *context;
// Object to accessing global events.
struct pw_registry *registry;
// Listener for global events.
struct spa_hook registry_listener;
// The capture sink.
struct pw_proxy *sink_proxy;
// Listener for the sink events.
struct spa_hook sink_proxy_listener;
// The event loop to use.
struct pw_thread_loop *thread_loop;
// The id of the sink that we created.
uint32_t sink_id;
// The serial of the sink.
uint32_t sink_serial;
// Sequence number for forcing a server round trip
int seq;
std::vector<struct sink_port> sink_ports;
std::vector<struct target_node> nodes;
std::vector<struct target_port> ports;
std::vector<struct target_input> inputs;
struct capture_config config;
struct spa_audio_info format;
};
/*
* Returns whether the capture stream is ready to have other ports attached to it.
**/
bool capture_stream_is_ready(struct capture_stream *data);
/*
* Initialises the PipeWire API.
**/
void init_pipewire();
/*
* Creates a capture stream using the provided config.
**/
struct capture_stream create_capture_stream(struct capture_config config);
/*
* Frees the resources held by the capture stream.
**/
void free_capture_stream(struct capture_stream *data);
#endif /*__GSR_PIPEWIRE_HPP__*/


@@ -244,12 +244,17 @@ int gsr_kms_client_init(gsr_kms_client *self, const char *card_path) {
fprintf(stderr, "gsr error: gsr_kms_client_init: fork failed, error: %s\n", strerror(errno));
goto err;
} else if(pid == 0) { /* child */
if(inside_flatpak) {
if(has_perm) {
const char *args[] = { server_filepath, self->initial_socket_path, card_path, NULL };
const char *args[] = { "flatpak-spawn", "--host", "/var/lib/flatpak/app/com.dec05eba.gpu_screen_recorder/current/active/files/bin/gsr-kms-server", self->initial_socket_path, card_path, NULL };
execvp(args[0], (char *const*)args);
} else if(inside_flatpak) {
} else {
const char *args[] = { "flatpak-spawn", "--host", "pkexec", "flatpak", "run", "--command=gsr-kms-server", "com.dec05eba.gpu_screen_recorder", self->initial_socket_path, card_path, NULL };
execvp(args[0], (char *const*)args);
}
} else if(has_perm) {
const char *args[] = { server_filepath, self->initial_socket_path, card_path, NULL };
execvp(args[0], (char *const*)args);
} else {
const char *args[] = { "pkexec", server_filepath, self->initial_socket_path, card_path, NULL };
execvp(args[0], (char *const*)args);


@@ -143,6 +143,7 @@ static uint32_t plane_get_properties(int drmfd, uint32_t plane_id, bool *is_curs
if(!props)
return false;
// TODO: Dont do this every frame
for(uint32_t i = 0; i < props->count_props; ++i) {
drmModePropertyPtr prop = drmModeGetProperty(drmfd, props->props[i]);
if(!prop)


@@ -5,6 +5,7 @@
#include <stdlib.h>
#include <string.h>
#include <stdio.h>
#include <math.h>
#include <X11/Xlib.h>
#include <libavutil/hwcontext.h>
#include <libavutil/hwcontext_cuda.h>
@@ -297,7 +298,7 @@ static int gsr_capture_nvfbc_start(gsr_capture *cap, AVCodecContext *video_codec
if(capture_region)
create_capture_params.captureBox = (NVFBC_BOX){ x, y, width, height };
create_capture_params.eTrackingType = tracking_type;
create_capture_params.dwSamplingRateMs = 1000u / ((uint32_t)cap_nvfbc->params.fps + 1);
create_capture_params.dwSamplingRateMs = (uint32_t)ceilf(1000.0f / (float)cap_nvfbc->params.fps);
create_capture_params.bAllowDirectCapture = direct_capture ? NVFBC_TRUE : NVFBC_FALSE;
create_capture_params.bPushModel = direct_capture ? NVFBC_TRUE : NVFBC_FALSE;
//create_capture_params.bDisableAutoModesetRecovery = true; // TODO:
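The sampling-interval change above replaces truncating integer math with a rounded-up division. A minimal sketch comparing the two (helper names are illustrative, not from the codebase):

```cpp
#include <cmath>
#include <cstdint>

// Old scheme: integer division truncates, and the fps + 1 makes it worse,
// e.g. 60 fps -> 1000 / 61 = 16 ms, i.e. sampling at ~62.5 Hz (too fast).
static uint32_t sampling_rate_ms_old(uint32_t fps) {
    return 1000u / (fps + 1);
}

// New scheme: round the interval up so NvFBC never samples faster than
// the requested fps, e.g. 60 fps -> ceil(16.67) = 17 ms (~58.8 Hz).
static uint32_t sampling_rate_ms_new(uint32_t fps) {
    return (uint32_t)std::ceil(1000.0f / (float)fps);
}
```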


@@ -18,11 +18,13 @@ extern "C" {
#include <mutex>
#include <map>
#include <signal.h>
#include <optional>
#include <sys/stat.h>
#include <unistd.h>
#include <sys/wait.h>
#include "../include/sound.hpp"
#include "../include/pipewire.hpp"
extern "C" {
#include <libavutil/pixfmt.h>
@@ -205,7 +207,7 @@ static AVCodecID audio_codec_get_id(AudioCodec audio_codec) {
return AV_CODEC_ID_AAC;
}
static AVSampleFormat audio_codec_get_sample_format(AudioCodec audio_codec, const AVCodec *codec) {
static AVSampleFormat audio_codec_get_sample_format(AudioCodec audio_codec, const AVCodec *codec, bool mix_audio) {
switch(audio_codec) {
case AudioCodec::AAC: {
return AV_SAMPLE_FMT_FLTP;
@@ -222,6 +224,10 @@ static AVSampleFormat audio_codec_get_sample_format(AudioCodec audio_codec, cons
}
}
// Amix only works with float audio
if(mix_audio)
supports_s16 = false;
if(!supports_s16 && !supports_flt) {
fprintf(stderr, "Warning: opus audio codec is chosen but your ffmpeg version does not support s16/flt sample format and performance might be slightly worse. You can either rebuild ffmpeg with libopus instead of the built-in opus, use the flatpak version of gpu screen recorder or record with flac audio codec instead (-ac flac). Falling back to fltp audio sample format instead.\n");
}
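The hunk above disables s16 when mixing because ffmpeg's amix filter only operates on float samples. A simplified stand-in for the selection logic (the enum and function names are illustrative, not the real ones):

```cpp
enum class SampleFormat { S16, FLT, FLTP };

// Prefer s16 when the encoder supports it, but amix (used when several
// audio inputs are merged into one track) only works with float audio,
// so mixing forces a float format even when s16 is available.
static SampleFormat pick_opus_sample_format(bool supports_s16, bool supports_flt,
                                            bool mix_audio) {
    if (mix_audio)
        supports_s16 = false; // amix only works with float audio
    if (supports_s16)
        return SampleFormat::S16;
    if (supports_flt)
        return SampleFormat::FLT;
    return SampleFormat::FLTP; // fallback with slightly worse performance
}
```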
@@ -271,7 +277,7 @@ static AVSampleFormat audio_format_to_sample_format(const AudioFormat audio_form
return AV_SAMPLE_FMT_S16;
}
static AVCodecContext* create_audio_codec_context(int fps, AudioCodec audio_codec) {
static AVCodecContext* create_audio_codec_context(int fps, AudioCodec audio_codec, bool mix_audio) {
const AVCodec *codec = avcodec_find_encoder(audio_codec_get_id(audio_codec));
if (!codec) {
fprintf(stderr, "Error: Could not find %s audio encoder\n", audio_codec_get_name(audio_codec));
@@ -282,7 +288,7 @@ static AVCodecContext* create_audio_codec_context(int fps, AudioCodec audio_code
assert(codec->type == AVMEDIA_TYPE_AUDIO);
codec_context->codec_id = codec->id;
codec_context->sample_fmt = audio_codec_get_sample_format(audio_codec, codec);
codec_context->sample_fmt = audio_codec_get_sample_format(audio_codec, codec, mix_audio);
codec_context->bit_rate = audio_codec_get_get_bitrate(audio_codec);
codec_context->sample_rate = 48000;
if(audio_codec == AudioCodec::AAC)
@@ -295,9 +301,10 @@ static AVCodecContext* create_audio_codec_context(int fps, AudioCodec audio_code
#endif
codec_context->time_base.num = 1;
codec_context->time_base.den = AV_TIME_BASE;
codec_context->time_base.den = codec_context->sample_rate;
codec_context->framerate.num = fps;
codec_context->framerate.den = 1;
codec_context->thread_count = 1;
codec_context->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;
return codec_context;
@@ -323,7 +330,7 @@ static AVCodecContext *create_video_codec_context(AVPixelFormat pix_fmt,
codec_context->framerate.den = 1;
codec_context->sample_aspect_ratio.num = 0;
codec_context->sample_aspect_ratio.den = 0;
// High values reeduce file size but increases time it takes to seek
// High values reduce file size but increases time it takes to seek
if(is_livestream) {
codec_context->flags |= (AV_CODEC_FLAG_CLOSED_GOP | AV_CODEC_FLAG_LOW_DELAY);
codec_context->flags2 |= AV_CODEC_FLAG2_FAST;
@@ -393,13 +400,13 @@ static AVCodecContext *create_video_codec_context(AVPixelFormat pix_fmt,
codec_context->global_quality = 180;
break;
case VideoQuality::HIGH:
codec_context->global_quality = 120;
codec_context->global_quality = 140;
break;
case VideoQuality::VERY_HIGH:
codec_context->global_quality = 100;
codec_context->global_quality = 120;
break;
case VideoQuality::ULTRA:
codec_context->global_quality = 70;
codec_context->global_quality = 100;
break;
}
}
@@ -720,16 +727,16 @@ static void open_video(AVCodecContext *codec_context, VideoQuality video_quality
} else {
switch(video_quality) {
case VideoQuality::MEDIUM:
av_dict_set_int(&options, "qp", 40, 0);
av_dict_set_int(&options, "qp", 36, 0);
break;
case VideoQuality::HIGH:
av_dict_set_int(&options, "qp", 35, 0);
av_dict_set_int(&options, "qp", 32, 0);
break;
case VideoQuality::VERY_HIGH:
av_dict_set_int(&options, "qp", 30, 0);
av_dict_set_int(&options, "qp", 28, 0);
break;
case VideoQuality::ULTRA:
av_dict_set_int(&options, "qp", 24, 0);
av_dict_set_int(&options, "qp", 22, 0);
break;
}
}
@@ -799,14 +806,15 @@ static void usage_full() {
fprintf(stderr, " and the video will only be saved when the gpu-screen-recorder is closed. This feature is similar to Nvidia's instant replay feature.\n");
fprintf(stderr, " This option has be between 5 and 1200. Note that the replay buffer size will not always be precise, because of keyframes. Optional, disabled by default.\n");
fprintf(stderr, "\n");
fprintf(stderr, " -k Video codec to use. Should be either 'auto', 'h264', 'h265', 'av1'. Defaults to 'auto' which defaults to 'h265' unless recording at fps higher than 60. Defaults to 'h264' on intel.\n");
fprintf(stderr, " Forcefully set to 'h264' if -c is 'flv'.\n");
fprintf(stderr, " -k Video codec to use. Should be either 'auto', 'h264', 'h265' or 'av1'. Defaults to 'auto' which defaults to 'h265' on AMD/Nvidia and 'h264' on intel.\n");
fprintf(stderr, " Forcefully set to 'h264' if the file container type is 'flv'.\n");
fprintf(stderr, " Forcefully set to 'h265' on AMD/intel if video codec is 'h264' and if the file container type is 'mkv'.\n");
fprintf(stderr, "\n");
fprintf(stderr, " -ac Audio codec to use. Should be either 'aac', 'opus' or 'flac'. Defaults to 'opus' for .mp4/.mkv files, otherwise defaults to 'aac'.\n");
fprintf(stderr, " 'opus' and 'flac' is only supported by .mp4/.mkv files. 'opus' is recommended for best performance and smallest audio size.\n");
fprintf(stderr, "\n");
fprintf(stderr, " -oc Overclock memory transfer rate to the maximum performance level. This only applies to NVIDIA on X11 and exists to overcome a bug in NVIDIA driver where performance level. The same issue exists on Wayland but overclocking is not possible on Wayland.\n");
fprintf(stderr, " is dropped when you record a game. Only needed if you are recording a game that is bottlenecked by GPU.\n");
fprintf(stderr, " -oc Overclock memory transfer rate to the maximum performance level. This only applies to NVIDIA on X11 and exists to overcome a bug in NVIDIA driver where performance level\n");
fprintf(stderr, " is dropped when you record a game. Only needed if you are recording a game that is bottlenecked by GPU. The same issue exists on Wayland but overclocking is not possible on Wayland.\n");
fprintf(stderr, " Works only if your have \"Coolbits\" set to \"12\" in NVIDIA X settings, see README for more information. Note! use at your own risk! Optional, disabled by default.\n");
fprintf(stderr, "\n");
fprintf(stderr, " -fm Framerate mode. Should be either 'cfr' or 'vfr'. Defaults to 'cfr' on NVIDIA X11 and 'vfr' on AMD/Intel X11/Wayland or NVIDIA Wayland.\n");
@@ -973,6 +981,7 @@ struct AudioTrack {
AVFilterGraph *graph = nullptr;
AVFilterContext *sink = nullptr;
int stream_index = 0;
int64_t pts = 0;
};
static std::future<void> save_replay_thread;
@@ -1367,6 +1376,23 @@ struct Arg {
};
int main(int argc, char **argv) {
init_pipewire();
/*struct capture_config config = {
.name = std::optional<std::string>("Firefox"),
.exclude = false,
.device = false,
.channels = 2,
};*/
struct capture_config config = {
.name = std::optional<std::string>("alsa_input.usb-DCMT_Technology_USB_Condenser_Microphone_214b206000000178-00.mono-fallback"),
.exclude = false,
.device = true,
.channels = 1,
};
auto cstream = create_capture_stream(config);
free_capture_stream(&cstream);
return 0;
signal(SIGINT, stop_handler);
signal(SIGUSR1, save_replay_handler);
@@ -1448,7 +1474,7 @@ int main(int argc, char **argv) {
AudioCodec audio_codec = AudioCodec::OPUS;
const char *audio_codec_to_use = args["-ac"].value();
if(!audio_codec_to_use)
audio_codec_to_use = "aac";
audio_codec_to_use = "opus";
if(strcmp(audio_codec_to_use, "aac") == 0) {
audio_codec = AudioCodec::AAC;
@@ -1461,12 +1487,6 @@ int main(int argc, char **argv) {
usage();
}
if(audio_codec != AudioCodec::AAC) {
audio_codec_to_use = "aac";
audio_codec = AudioCodec::AAC;
fprintf(stderr, "Info: audio codec is forcefully set to aac at the moment because of issues with opus/flac. This is a temporary issue\n");
}
bool overclock = false;
const char *overclock_str = args["-oc"].value();
if(!overclock_str)
@@ -1537,6 +1557,7 @@ int main(int argc, char **argv) {
if(!audio_input_arg.values.empty())
audio_inputs = get_pulseaudio_inputs();
std::vector<MergedAudioInputs> requested_audio_inputs;
bool uses_amix = false;
// Manually check if the audio inputs we give exist. This is only needed for pipewire, not pulseaudio.
// Pipewire instead DEFAULTS TO THE DEFAULT AUDIO INPUT. THAT'S RETARDED.
@@ -1546,6 +1567,9 @@ int main(int argc, char **argv) {
continue;
requested_audio_inputs.push_back({parse_audio_input_arg(audio_input)});
if(requested_audio_inputs.back().audio_inputs.size() > 1)
uses_amix = true;
for(AudioInput &request_audio_input : requested_audio_inputs.back().audio_inputs) {
bool match = false;
for(const auto &existing_audio_input : audio_inputs) {
@@ -1913,11 +1937,18 @@ int main(int argc, char **argv) {
file_extension = file_extension.substr(0, comma_index);
}
if(gpu_inf.vendor != GSR_GPU_VENDOR_NVIDIA && file_extension == "mkv" && strcmp(video_codec_to_use, "h264") == 0) {
video_codec_to_use = "h265";
video_codec = VideoCodec::HEVC;
fprintf(stderr, "Warning: video codec was forcefully set to h265 because mkv container is used and mesa (AMD and Intel driver) does not support h264 in mkv files\n");
}
switch(audio_codec) {
case AudioCodec::AAC: {
break;
}
case AudioCodec::OPUS: {
// TODO: Also check mpegts?
if(file_extension != "mp4" && file_extension != "mkv") {
audio_codec_to_use = "aac";
audio_codec = AudioCodec::AAC;
@@ -1926,10 +1957,15 @@ int main(int argc, char **argv) {
break;
}
case AudioCodec::FLAC: {
// TODO: Also check mpegts?
if(file_extension != "mp4" && file_extension != "mkv") {
audio_codec_to_use = "aac";
audio_codec = AudioCodec::AAC;
fprintf(stderr, "Warning: flac audio codec is only supported by .mp4 and .mkv files, falling back to aac instead\n");
} else if(uses_amix) {
audio_codec_to_use = "opus";
audio_codec = AudioCodec::OPUS;
fprintf(stderr, "Warning: flac audio codec is not supported when mixing audio sources, falling back to opus instead\n");
}
break;
}
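The container and mixing fallbacks in the hunks above can be summarized as one decision function. This is a sketch of the rules only (the real code also updates `audio_codec_to_use` and prints warnings): opus and flac require mp4/mkv, and flac cannot be used together with amix.

```cpp
#include <string>

enum class AudioCodec { AAC, OPUS, FLAC };

// Illustrative: resolve the requested audio codec against the file
// container and whether audio mixing (amix) is in use.
static AudioCodec resolve_audio_codec(AudioCodec requested, const std::string &ext,
                                      bool uses_amix) {
    const bool mp4_or_mkv = (ext == "mp4" || ext == "mkv");
    // opus/flac are only supported by mp4 and mkv files.
    if ((requested == AudioCodec::OPUS || requested == AudioCodec::FLAC) && !mp4_or_mkv)
        return AudioCodec::AAC;
    // flac is not supported when mixing audio sources.
    if (requested == AudioCodec::FLAC && uses_amix)
        return AudioCodec::OPUS;
    return requested;
}
```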
@@ -1960,10 +1996,6 @@ int main(int argc, char **argv) {
fprintf(stderr, "Info: using h264 encoder because a codec was not specified and your gpu does not support h265\n");
video_codec_to_use = "h264";
video_codec = VideoCodec::H264;
} else if(fps > 60) {
fprintf(stderr, "Info: using h264 encoder because a codec was not specified and fps is more than 60\n");
video_codec_to_use = "h264";
video_codec = VideoCodec::H264;
} else {
fprintf(stderr, "Info: using h265 encoder because a codec was not specified\n");
video_codec_to_use = "h265";
@@ -2060,7 +2092,7 @@ int main(int argc, char **argv) {
framerate_mode_str = "cfr";
}
if(is_livestream) {
if(is_livestream && recording_saved_script) {
fprintf(stderr, "Warning: live stream detected, -sc script is ignored\n");
recording_saved_script = nullptr;
}
@@ -2084,7 +2116,8 @@ int main(int argc, char **argv) {
int audio_stream_index = VIDEO_STREAM_INDEX + 1;
for(const MergedAudioInputs &merged_audio_inputs : requested_audio_inputs) {
AVCodecContext *audio_codec_context = create_audio_codec_context(fps, audio_codec);
const bool use_amix = merged_audio_inputs.audio_inputs.size() > 1;
AVCodecContext *audio_codec_context = create_audio_codec_context(fps, audio_codec, use_amix);
AVStream *audio_stream = nullptr;
if(replay_buffer_size_secs == -1)
@@ -2105,7 +2138,6 @@ int main(int argc, char **argv) {
std::vector<AVFilterContext*> src_filter_ctx;
AVFilterGraph *graph = nullptr;
AVFilterContext *sink = nullptr;
bool use_amix = merged_audio_inputs.audio_inputs.size() > 1;
if(use_amix) {
int err = init_filter_graph(audio_codec_context, &graph, &sink, src_filter_ctx, merged_audio_inputs.audio_inputs.size());
if(err < 0) {
@@ -2130,15 +2162,16 @@ int main(int argc, char **argv) {
if(audio_input.name.empty()) {
audio_device.sound_device.handle = NULL;
audio_device.sound_device.frames = 0;
audio_device.frame = NULL;
} else {
if(sound_device_get_by_name(&audio_device.sound_device, audio_input.name.c_str(), audio_input.description.c_str(), num_channels, audio_codec_context->frame_size, audio_codec_context_get_audio_format(audio_codec_context)) != 0) {
fprintf(stderr, "Error: failed to get \"%s\" sound device\n", audio_input.name.c_str());
_exit(1);
}
audio_device.frame = create_audio_frame(audio_codec_context);
}
audio_device.frame = create_audio_frame(audio_codec_context);
audio_device.frame->pts = 0;
audio_devices.push_back(std::move(audio_device));
}
@@ -2179,8 +2212,8 @@ int main(int argc, char **argv) {
const double start_time_pts = clock_get_monotonic_seconds();
double start_time = clock_get_monotonic_seconds(); // todo - target_fps to make first frame start immediately?
double frame_timer_start = start_time;
double start_time = clock_get_monotonic_seconds();
double frame_timer_start = start_time - target_fps; // We want to capture the first frame immediately
int fps_counter = 0;
AVFrame *frame = av_frame_alloc();
@@ -2236,7 +2269,6 @@ int main(int argc, char **argv) {
const double target_audio_hz = 1.0 / (double)audio_track.codec_context->sample_rate;
double received_audio_time = clock_get_monotonic_seconds();
const int64_t timeout_ms = std::round((1000.0 / (double)audio_track.codec_context->sample_rate) * 1000.0);
int64_t prev_pts = 0;
while(running) {
void *sound_buffer;
@@ -2256,7 +2288,7 @@ int main(int argc, char **argv) {
}
// TODO: Is this |received_audio_time| really correct?
int64_t num_missing_frames = std::round((this_audio_frame_time - received_audio_time) / target_audio_hz / (int64_t)audio_device.frame->nb_samples);
int64_t num_missing_frames = std::round((this_audio_frame_time - received_audio_time) / target_audio_hz / (int64_t)audio_track.codec_context->frame_size);
if(got_audio_data)
num_missing_frames = std::max((int64_t)0, num_missing_frames - 1);
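The change above bases the missed-frame estimate on the codec's `frame_size` instead of the device frame's `nb_samples`. A self-contained sketch of that computation (the function name is illustrative):

```cpp
#include <algorithm>
#include <cmath>
#include <cstdint>

// Estimate how many audio frames of |frame_size| samples were missed
// between |last_received| and |now| (both in seconds). One frame spans
// frame_size / sample_rate seconds, e.g. 960 / 48000 = 20 ms for opus.
static int64_t missing_audio_frames(double now, double last_received,
                                    int sample_rate, int frame_size,
                                    bool got_audio_data) {
    const double target_audio_hz = 1.0 / (double)sample_rate; // seconds per sample
    int64_t missing = (int64_t)std::llround(
        (now - last_received) / target_audio_hz / (double)frame_size);
    // The frame that just arrived covers one interval itself.
    if (got_audio_data)
        missing = std::max((int64_t)0, missing - 1);
    return missing;
}
```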
@@ -2275,7 +2307,7 @@ int main(int argc, char **argv) {
//audio_track.frame->data[0] = empty_audio;
received_audio_time = this_audio_frame_time;
if(needs_audio_conversion)
swr_convert(swr, &audio_device.frame->data[0], audio_device.frame->nb_samples, (const uint8_t**)&empty_audio, audio_track.codec_context->frame_size);
swr_convert(swr, &audio_device.frame->data[0], audio_track.codec_context->frame_size, (const uint8_t**)&empty_audio, audio_track.codec_context->frame_size);
else
audio_device.frame->data[0] = empty_audio;
@@ -2288,12 +2320,6 @@ int main(int argc, char **argv) {
fprintf(stderr, "Error: failed to add audio frame to filter\n");
}
} else {
audio_device.frame->pts = (this_audio_frame_time - record_start_time) * (double)AV_TIME_BASE;
const bool same_pts = audio_device.frame->pts == prev_pts;
prev_pts = audio_device.frame->pts;
if(same_pts)
continue;
ret = avcodec_send_frame(audio_track.codec_context, audio_device.frame);
if(ret >= 0) {
// TODO: Move to separate thread because this could write to network (for example when livestreaming)
@@ -2302,6 +2328,7 @@ int main(int argc, char **argv) {
fprintf(stderr, "Failed to encode audio!\n");
}
}
audio_device.frame->pts += audio_track.codec_context->frame_size;
}
}
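This and the following hunks replace wall-clock pts (which could produce duplicate timestamps that the old code had to detect and drop) with a counter that advances by `frame_size` after every encoded frame. Together with `time_base.den = sample_rate` set earlier in the diff, pts becomes an exact sample count. A minimal sketch of the scheme (the struct is illustrative):

```cpp
#include <cstdint>

// With the stream's time_base set to 1/sample_rate, pts is measured in
// samples, so each encoded frame advances it by exactly frame_size.
struct AudioPts {
    int64_t pts = 0;
    int frame_size = 960;    // samples per frame, e.g. opus at 48 kHz
    int sample_rate = 48000;

    // pts for the next frame to encode; monotonic by construction,
    // so no duplicate-pts check is needed anymore.
    int64_t take() {
        const int64_t current = pts;
        pts += frame_size;
        return current;
    }

    // Seconds of audio represented by a pts value.
    double to_seconds(int64_t p) const { return (double)p / sample_rate; }
};
```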
@@ -2311,16 +2338,10 @@ int main(int argc, char **argv) {
if(got_audio_data) {
// TODO: Instead of converting audio, get float audio from alsa. Or does alsa do conversion internally to get this format?
if(needs_audio_conversion)
swr_convert(swr, &audio_device.frame->data[0], audio_device.frame->nb_samples, (const uint8_t**)&sound_buffer, audio_track.codec_context->frame_size);
swr_convert(swr, &audio_device.frame->data[0], audio_track.codec_context->frame_size, (const uint8_t**)&sound_buffer, audio_track.codec_context->frame_size);
else
audio_device.frame->data[0] = (uint8_t*)sound_buffer;
audio_device.frame->pts = (this_audio_frame_time - record_start_time) * (double)AV_TIME_BASE;
const bool same_pts = audio_device.frame->pts == prev_pts;
prev_pts = audio_device.frame->pts;
if(same_pts)
continue;
if(audio_track.graph) {
std::lock_guard<std::mutex> lock(audio_filter_mutex);
// TODO: av_buffersrc_add_frame
@@ -2336,6 +2357,8 @@ int main(int argc, char **argv) {
fprintf(stderr, "Failed to encode audio!\n");
}
}
audio_device.frame->pts += audio_track.codec_context->frame_size;
}
}
@@ -2353,7 +2376,6 @@ int main(int argc, char **argv) {
int64_t video_pts_counter = 0;
int64_t video_prev_pts = 0;
int64_t audio_prev_pts = 0;
while(running) {
double frame_start = clock_get_monotonic_seconds();
@@ -2374,15 +2396,7 @@ int main(int argc, char **argv) {
int err = 0;
while ((err = av_buffersink_get_frame(audio_track.sink, aframe)) >= 0) {
const double this_audio_frame_time = clock_get_monotonic_seconds();
aframe->pts = (this_audio_frame_time - record_start_time) * (double)AV_TIME_BASE;
const bool same_pts = aframe->pts == audio_prev_pts;
audio_prev_pts = aframe->pts;
if(same_pts) {
av_frame_unref(aframe);
continue;
}
aframe->pts = audio_track.pts;
err = avcodec_send_frame(audio_track.codec_context, aframe);
if(err >= 0){
// TODO: Move to separate thread because this could write to network (for example when livestreaming)
@@ -2391,6 +2405,7 @@ int main(int argc, char **argv) {
fprintf(stderr, "Failed to encode audio!\n");
}
av_frame_unref(aframe);
audio_track.pts += audio_track.codec_context->frame_size;
}
}
}

src/pipewire.cpp Normal file

@@ -0,0 +1,491 @@
#include <pipewire/pipewire.h>
#include <spa/param/audio/format-utils.h>
#include <spa/debug/types.h>
#include <spa/param/audio/type-info.h>
#include <vector>
#include <string>
#include <optional>
#include "../include/pipewire.hpp"
static void on_process(void *userdata)
{
struct capture_stream *data = static_cast<struct capture_stream *>(userdata);
struct pw_buffer *b;
struct spa_buffer *buf;
if ((b = pw_stream_dequeue_buffer(data->stream)) == NULL) {
pw_log_warn("out of buffers: %m");
return;
}
buf = b->buffer;
if (buf->datas[0].data == NULL)
return;
//printf("got a frame of size %d\n", buf->datas[0].chunk->size);
pw_stream_queue_buffer(data->stream, b);
}
/* [on_process] */
static void on_param_changed(void *userdata, uint32_t id, const struct spa_pod *param)
{
struct capture_stream *data = static_cast<struct capture_stream *>(userdata);
if (param == NULL || id != SPA_PARAM_Format)
return;
if (spa_format_parse(param,
&data->format.media_type,
&data->format.media_subtype) < 0)
return;
if (data->format.media_type != SPA_MEDIA_TYPE_audio ||
data->format.media_subtype != SPA_MEDIA_SUBTYPE_raw)
return;
if (spa_format_audio_raw_parse(param, &data->format.info.raw) < 0)
return;
printf("got audio format:\n");
printf(" channels: %d\n", data->format.info.raw.channels);
printf(" rate: %d\n", data->format.info.raw.rate);
}
void register_target_node(struct capture_stream *data, uint32_t id, uint32_t client_id, const char* app_name) {
struct target_node node = {};
node.app_name = strdup(app_name);
node.id = id;
node.client_id = client_id;
data->nodes.push_back(node);
}
void register_target_port(struct capture_stream *data, uint32_t id, uint32_t node_id, const char *channel_name) {
struct target_port port = {};
port.id = id;
port.node_id = node_id;
port.channel_name = strdup(channel_name);
data->ports.push_back(port);
}
void register_target_input(struct capture_stream *data, uint32_t id) {
struct target_input port = {
.id = id,
};
data->inputs.push_back(port);
}
bool has_matching_node_or_input(struct capture_stream *capture, uint32_t node_id) {
// Find the corresponding node.
bool has_matching_node = false;
for (auto t : capture->nodes) {
if (t.id == node_id) {
has_matching_node = true;
break;
}
}
if (has_matching_node) {
return true;
}
// Find the corresponding input.
bool has_matching_input = false;
for (auto t : capture->inputs) {
if (t.id == node_id) {
has_matching_input = true;
break;
}
}
if (has_matching_input) {
return true;
}
return false;
}
bool connect_port_to_sink(struct capture_stream *data, uint32_t node_id, uint32_t id, const char *channel_name) {
// Find the corresponding node.
if (!has_matching_node_or_input(data, node_id)) {
printf("No matching node found\n");
return false;
}
// Find the correct sink port to attach to.
uint32_t sink_dst_port_id = 0;
for (auto sink_port : data->sink_ports) {
printf("%s = %s\n", sink_port.channel, channel_name);
if (strcmp(sink_port.channel, channel_name) == 0) {
sink_dst_port_id = sink_port.id;
break;
}
}
if (!sink_dst_port_id) {
return false;
}
// Connect the port to the sink.
struct pw_properties *link_props = pw_properties_new(
PW_KEY_OBJECT_LINGER, "false",
NULL
);
pw_properties_setf(link_props, PW_KEY_LINK_OUTPUT_NODE, "%u", node_id);
pw_properties_setf(link_props, PW_KEY_LINK_OUTPUT_PORT, "%u", id);
pw_properties_setf(link_props, PW_KEY_LINK_INPUT_NODE, "%u", data->sink_id);
pw_properties_setf(link_props, PW_KEY_LINK_INPUT_PORT, "%u", sink_dst_port_id);
printf(
"[DBG] Connecting (%d, %d) -> (%d, %d)\n",
node_id, id,
data->sink_id, sink_dst_port_id
);
struct pw_proxy *link_proxy = static_cast<struct pw_proxy *>(
pw_core_create_object(
data->core, "link-factory",
PW_TYPE_INTERFACE_Link, PW_VERSION_LINK, &link_props->dict, 0
)
);
data->seq = pw_core_sync(data->core, PW_ID_CORE, data->seq);
pw_properties_free(link_props);
if (!link_proxy) {
printf("[ERR] Failed to connect port %u of node %u to capture sink\n", id, node_id);
return false;
}
return true;
}
void connect_ports_to_sink(struct capture_stream *data) {
for (auto port : data->ports) {
printf("[DBG] Attempting to connect port %d\n", port.id);
connect_port_to_sink(
data,
port.node_id,
port.id,
port.channel_name
);
}
}
static void registry_event_global(void *raw_data, uint32_t id,
uint32_t permissions, const char *type, uint32_t version,
const struct spa_dict *props)
{
if (!type || !props)
return;
struct capture_stream *data = static_cast<struct capture_stream *>(raw_data);
if (id == data->sink_id) {
const char *serial = spa_dict_lookup(props, PW_KEY_OBJECT_SERIAL);
if (!serial) {
data->sink_serial = 0;
} else {
data->sink_serial = strtoul(serial, NULL, 10);
}
}
if (strcmp(type, PW_TYPE_INTERFACE_Port) == 0) {
const char *nid, *dir, *chn;
if (
!(nid = spa_dict_lookup(props, PW_KEY_NODE_ID)) ||
!(dir = spa_dict_lookup(props, PW_KEY_PORT_DIRECTION)) ||
!(chn = spa_dict_lookup(props, PW_KEY_AUDIO_CHANNEL))
) {
return;
}
uint32_t node_id = strtoul(nid, NULL, 10);
if (strcmp(dir, "in") == 0 && node_id == data->sink_id && data->sink_id != SPA_ID_INVALID) {
// This port belongs to our own capture sink.
printf("[DBG] Own sink %d (%s) found\n", id, chn);
data->sink_ports.push_back(
{ id, strdup(chn), }
);
} else if (strcmp(dir, "out") == 0) {
if (!capture_stream_is_ready(data)) {
// We're not ready to connect streams, so just track it for later.
printf("[DBG] Capture sink is not ready yet for %d\n", id);
register_target_port(
data,
id,
node_id,
chn
);
return;
}
connect_port_to_sink(
data,
node_id,
id,
chn
);
}
} else if (strcmp(type, PW_TYPE_INTERFACE_Node) == 0) {
const char *node_name, *media_class;
if (!(node_name = spa_dict_lookup(props, PW_KEY_NODE_NAME)) ||
!(media_class = spa_dict_lookup(props, PW_KEY_MEDIA_CLASS))) {
return;
}
if (strcmp(media_class, "Stream/Output/Audio") == 0) {
const char *node_app_name = spa_dict_lookup(props, PW_KEY_APP_NAME);
if (!node_app_name) {
node_app_name = node_name;
}
if (data->config.name.has_value()) {
bool matches = strcmp(
node_app_name,
data->config.name.value().c_str()
) == 0;
printf("[DBG] Node: name %s matches %d exclude %d\n", node_app_name, matches, data->config.exclude);
if (!(matches ^ data->config.exclude)) {
return;
}
}
uint32_t client_id = 0;
const char *client_id_str = spa_dict_lookup(props, PW_KEY_CLIENT_ID);
if (client_id_str) {
client_id = strtoul(client_id_str, NULL, 10);
}
register_target_node(
data,
id,
client_id,
node_app_name
);
} else if (strcmp(media_class, "Audio/Source") == 0) {
if (!data->config.device || data->config.exclude || !data->config.name.has_value()) {
return;
}
if (strcmp(node_name, data->config.name.value().c_str()) != 0) {
return;
}
printf("[DBG] Tracking input source %d (%s)\n", id, node_name);
register_target_input(data, id);
}
}
}
static const struct pw_stream_events stream_events = {
PW_VERSION_STREAM_EVENTS,
.param_changed = on_param_changed,
.process = on_process,
};
static const struct pw_registry_events registry_events = {
PW_VERSION_REGISTRY_EVENTS,
.global = registry_event_global,
};
static void on_sink_proxy_bound(void *userdata, uint32_t global_id) {
struct capture_stream *data = static_cast<struct capture_stream *>(userdata);
data->sink_id = global_id;
printf("[DBG] Got proxy sink id %d\n", global_id);
}
static void on_sink_proxy_error(void *data, int seq, int res, const char *message)
{
printf("[pipewire] App capture sink error: seq:%d res:%d: %s\n", seq, res, message);
}
static const struct pw_proxy_events sink_proxy_events = {
PW_VERSION_PROXY_EVENTS,
.bound = on_sink_proxy_bound,
.error = on_sink_proxy_error,
};
void init_pipewire() {
pw_init(NULL, NULL);
}
bool capture_stream_is_ready(struct capture_stream *data) {
return data->sink_id != SPA_ID_INVALID &&
data->sink_serial != SPA_ID_INVALID &&
data->sink_ports.size() == data->config.channels;
}
enum spa_audio_channel *channel_num_to_channels(int num) {
// Static so the returned pointer stays valid after this function returns.
static enum spa_audio_channel channels[8];
for (int i = 0; i < 8; ++i)
channels[i] = SPA_AUDIO_CHANNEL_UNKNOWN;
if (num == 1) {
// Mono
channels[0] = SPA_AUDIO_CHANNEL_MONO;
} else if (num == 2) {
// Probably FL,FR.
channels[0] = SPA_AUDIO_CHANNEL_FL;
channels[1] = SPA_AUDIO_CHANNEL_FR;
}
return channels;
}
const char *channel_num_to_position(int num) {
if (num == 1) {
return "MONO";
} else if (num == 2) {
return "FL,FR";
}
// Fall back to a stereo layout for unsupported channel counts.
return "FL,FR";
}
struct capture_stream create_capture_stream(struct capture_config config) {
struct capture_stream data = {
0,
sink_id: SPA_ID_INVALID,
sink_serial: SPA_ID_INVALID,
seq: 0,
sink_ports: std::vector<struct sink_port> {},
nodes: std::vector<struct target_node> {},
ports: std::vector<struct target_port> {},
inputs: std::vector<struct target_input> {},
config: config,
};
const struct spa_pod *params[1];
uint8_t buffer[2048];
struct spa_pod_builder b = SPA_POD_BUILDER_INIT(buffer, sizeof(buffer));
struct pw_properties *props;
pw_init(NULL, NULL);
data.thread_loop = pw_thread_loop_new("PipeWire thread loop", NULL);
pw_thread_loop_lock(data.thread_loop);
if (pw_thread_loop_start(data.thread_loop) < 0) {
printf("Failed to start thread loop\n");
return data; // WIP: partially initialized; callers should treat this as an error
}
data.context = pw_context_new(pw_thread_loop_get_loop(data.thread_loop), NULL, 0);
data.core = pw_context_connect(data.context, NULL, 0);
pw_core_sync(data.core, PW_ID_CORE, 0);
//pw_thread_loop_wait(data.thread_loop);
pw_thread_loop_unlock(data.thread_loop);
char numbuf[8];
snprintf(numbuf, sizeof(numbuf), "%d", data.config.channels);
props = pw_properties_new(
PW_KEY_MEDIA_TYPE, "Audio",
PW_KEY_MEDIA_CATEGORY, "Capture",
PW_KEY_MEDIA_ROLE, "Screen",
PW_KEY_NODE_NAME, "GSR",
PW_KEY_NODE_VIRTUAL, "true",
PW_KEY_AUDIO_CHANNELS, numbuf,
SPA_KEY_AUDIO_POSITION, channel_num_to_position(data.config.channels),
PW_KEY_FACTORY_NAME, "support.null-audio-sink",
PW_KEY_MEDIA_CLASS, "Audio/Sink/Internal",
NULL
);
data.sink_proxy = static_cast<pw_proxy *>(
pw_core_create_object(
data.core,
"adapter",
PW_TYPE_INTERFACE_Node, PW_VERSION_NODE, &props->dict, 0
)
);
pw_proxy_add_listener(
data.sink_proxy,
&data.sink_proxy_listener,
&sink_proxy_events,
&data
);
data.registry = pw_core_get_registry(data.core, PW_VERSION_REGISTRY, 0);
printf("Got registry\n");
spa_zero(data.registry_listener);
pw_registry_add_listener(data.registry, &data.registry_listener, &registry_events, &data);
printf("Listener registered\n");
printf("Waiting for id\n");
while (!capture_stream_is_ready(&data)) {
printf("Poll\n");
pw_loop_iterate(pw_thread_loop_get_loop(data.thread_loop), -1);
}
printf("Node Setup complete\n");
// Connect delayed ports to the sink
//pw_thread_loop_lock(data.thread_loop);
connect_ports_to_sink(&data);
//pw_thread_loop_unlock(data.thread_loop);
auto channels = channel_num_to_channels(data.config.channels);
params[0] = spa_pod_builder_add_object(
&b,
SPA_TYPE_OBJECT_Format, SPA_PARAM_EnumFormat,
SPA_FORMAT_mediaType, SPA_POD_Id(SPA_MEDIA_TYPE_audio),
SPA_FORMAT_mediaSubtype, SPA_POD_Id(SPA_MEDIA_SUBTYPE_raw),
SPA_FORMAT_AUDIO_channels, SPA_POD_Int(data.config.channels),
SPA_FORMAT_AUDIO_position, SPA_POD_Array(sizeof(enum spa_audio_channel), SPA_TYPE_Id, data.config.channels, channels),
SPA_FORMAT_AUDIO_format, SPA_POD_CHOICE_ENUM_Id(
8, SPA_AUDIO_FORMAT_U8, SPA_AUDIO_FORMAT_S16_LE, SPA_AUDIO_FORMAT_S32_LE,
SPA_AUDIO_FORMAT_F32_LE, SPA_AUDIO_FORMAT_U8P, SPA_AUDIO_FORMAT_S16P,
SPA_AUDIO_FORMAT_S32P, SPA_AUDIO_FORMAT_F32P
)
);
data.stream = pw_stream_new(
data.core,
"GSR",
pw_properties_new(
PW_KEY_NODE_NAME, "GSR",
PW_KEY_NODE_DESCRIPTION, "GSR Audio Capture",
PW_KEY_MEDIA_TYPE, "Audio",
PW_KEY_MEDIA_CATEGORY, "Capture",
PW_KEY_MEDIA_ROLE, "Production",
PW_KEY_NODE_WANT_DRIVER, "true",
PW_KEY_STREAM_CAPTURE_SINK, "true",
NULL
)
);
struct pw_properties *stream_props = pw_properties_new(NULL, NULL);
pw_properties_setf(stream_props, PW_KEY_TARGET_OBJECT, "%u", data.sink_serial);
pw_stream_update_properties(data.stream, &stream_props->dict);
pw_properties_free(stream_props);
pw_stream_connect(
data.stream,
PW_DIRECTION_INPUT,
PW_ID_ANY,
static_cast<pw_stream_flags>(PW_STREAM_FLAG_AUTOCONNECT | PW_STREAM_FLAG_MAP_BUFFERS),
params,
1
);
struct spa_hook stream_listener;
pw_stream_add_listener(
data.stream,
&stream_listener,
&stream_events,
&data
);
// TODO: WIP - this blocks forever iterating the loop; control never returns to the caller.
while (true) {
pw_loop_iterate(pw_thread_loop_get_loop(data.thread_loop), -1);
}
}
void free_capture_stream(struct capture_stream *data) {
pw_proxy_destroy((struct pw_proxy *) data->registry);
pw_proxy_destroy(data->sink_proxy);
pw_stream_destroy(data->stream);
pw_context_destroy(data->context);
pw_thread_loop_destroy(data->thread_loop);
}