Audio Grabber Feature (#1570)

* Creating Audio Grabber

Custom Effects - Clean-ups and Enhancements (#1163)

* Cleanup EffectFileHandler

* Support Custom Effect Schemas and align EffectFileHandler

* Change back to colon prefix for system effects

* WebSockets - Fix error in handling fragmented frames

* Correct missing colon updates

* Update JSON with image file location for custom GIF effects

* Image effect deletion - consider that the full filename is stored in JSON

* Correct selection lists indentation

Creating Audio Grabber

Creating Audio Grabber

Creating Audio Grabber.

Successfully began capturing audio on Windows. Started implementing a hard-coded VU visualizer.

Got the Windows DirectSound implementation working.
Hard-coded a basic VU meter.

Began working on the Linux audio grabber implementation.

Finished the Linux draft implementation.
Minor modifications to the Windows implementation.

Windows:
 - Free memory used by device id.
 - Prevent starting audio if the grabber is disabled
 - More debug logging

Linux:
 - Prevent starting audio if the grabber is disabled

Added strings to the English locale.
Removed "custom" from device selection.
Made hard-coded visualizer values configurable.
Wrote values to imageData in BGR order so the configurable values can be set in RGB format.
Created logic to support "Automatic", allowing the API to select the default device (see the sketch below).
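
A minimal sketch of that fallback, assuming (as in the diffs further down) that an empty or "auto" device id should resolve to the platform default; resolveDevice is a hypothetical helper, not code from this change:

#include <QString>

// Hypothetical helper: map the configured device id to what the platform API expects.
// "auto" (or an empty string) selects the default capture device.
static QString resolveDevice(const QString& configured)
{
    if (configured.isEmpty() || configured == "auto")
    {
#ifdef __linux__
        return QStringLiteral("default"); // ALSA default PCM
#else
        return QString();                 // empty id -> default DirectSound capture device
#endif
    }
    return configured;
}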

Added a language key for audio in the "Remote Control" section.
Removed the audio channel-count configuration; it was causing errors with some devices.

Fixed logic to update capture while it's active.
Optimized code.

UI Tweaks
Destructuring.

Fixed build error on Linux.

Commented out setVideoMode in AudioGrabber.

Linux Threading changes.

Implemented the new API.

Continued integrating audio into the new APIs.

Fixed the audio grabber for DirectSound on Windows.
Fixed the UI for the audio grabber configuration.
Defaulted AUDIO to off unless specified.

Fixed a missing #ifdef for the audio grabber.

Added logic to calculate a dynamic multiplier from the signal input.
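
A sketch of that calculation, assuming 16-bit samples and a tolerance given in percent; it mirrors the approach in AudioGrabber::processAudioFrame in the diff further down but is not a drop-in:

#include <algorithm>
#include <cstdint>

// Choose a multiplier so that the loudest average amplitude seen so far, padded by the
// tolerance, maps to full scale. The multiplier only ever shrinks, so peaks never clip.
static double updateDynamicMultiplier(double averageAmplitude, double tolerancePercent, double current)
{
    const double padded  = std::max(1.0, averageAmplitude * (1.0 + tolerancePercent / 100.0));
    const double pending = static_cast<double>(INT16_MAX) / padded;
    return std::min(current, pending);
}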

Updated the Linux API for discovering devices.

Fixed HTML/JS issues with the view.
Fixed an NPE on Windows.
Disabled setting thread priority on Linux.

Updated the schema options check to pass through hidden states and commented the change.

Updated grabber start conditions.
Updated the audio grabber to instantiate similarly to the video grabber.

Updated the Windows grabber to set the "started" flag to false when shutting down.
Removed "tryStart" to prevent enabling audio capture unnecessarily.

Fixed instance audio grabber device configuration.

Added configurable resolution.
Reduced tolerance to 5%.
Fixed an issue where the grabber failed for additional instances when "start" was called multiple times (see the guard sketch below).
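
The guard mentioned above, as a stand-alone sketch; the flag name and the elided device setup are placeholders, while the real Windows implementation in the diffs uses an atomic flag the same way:

#include <atomic>

static std::atomic<bool> s_running{ false }; // stands in for the grabber's "started" flag

// A second call to start() while capture is already active becomes a no-op instead of
// re-initialising the device for every additional instance.
static bool startCapture()
{
    if (s_running.load(std::memory_order_acquire))
        return true; // already capturing
    // ... open the capture device, create buffers, spawn the capture thread ...
    s_running.store(true, std::memory_order_release);
    return true;
}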

Fixed resolution calculation

Changed the averaging algorithm to prevent overflowing the sum (sketch below).
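
Restated as a stand-alone sketch of the overflow-safe form (the diff further down does the same thing inline): divide each sample by the count while accumulating instead of summing everything first and dividing once.

#include <cmath>
#include <cstdint>

// Mean absolute amplitude without ever building the full sum.
static double averageAmplitude(const int16_t* buffer, int length)
{
    if (length <= 0)
        return 0.0;
    double average = 0.0;
    for (int i = 0; i < length; ++i)
        average += std::fabs(static_cast<double>(buffer[i])) / length;
    return average;
}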

Updated logic to stop audio grabber when disabled.

Fix integer casting and rounding.

Restart grabber on configuration change.
Fix missing include/grabber/AudioGrabber.
Disable tolerance.

Added configurable tolerance.
Fixed tolerance algorithm.
Reset multiplier on configuration change.

Line Endings

Proposed change and questions/request to fix

Implemented more of LordGrey's suggestions.

Fixed the mode flag for snd_pcm_open; the latest ALSA uses SND_PCM_NONBLOCK instead of SND_PCM_OPEN_NONBLOCK (see the sketch below).
Defaulted multiplier to 0 ("auto").
Defaulted tolerance to 20%.
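
A minimal usage sketch of the corrected call; flag and function names are as in current ALSA headers, error handling trimmed:

#include <alsa/asoundlib.h>
#include <cstdio>

// Open the default capture PCM in non-blocking mode. Current ALSA spells the flag
// SND_PCM_NONBLOCK; the old SND_PCM_OPEN_NONBLOCK name is no longer available.
static snd_pcm_t* openCapture()
{
    snd_pcm_t* pcm = nullptr;
    const int err = snd_pcm_open(&pcm, "default", SND_PCM_STREAM_CAPTURE, SND_PCM_NONBLOCK);
    if (err < 0)
    {
        std::fprintf(stderr, "snd_pcm_open: %s\n", snd_strerror(err));
        return nullptr;
    }
    return pcm;
}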

Changed 100 to 100.0 in the pixel value percentage calculation so the value is no longer truncated to 0.

Caught a 100 that was missed earlier and made it a double so precision isn't lost during the math operation (illustrated below).
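
A two-line illustration of the truncation being fixed, using safeValue = 45 and RESOLUTION = 255 as in the diff further down (the real code also rounds, which is omitted here):

const int safeValue = 45;
const int truncated = static_cast<int>((safeValue / 100)   * 255); // 0, because 45/100 truncates to 0 in integer math
const int correct   = static_cast<int>((safeValue / 100.0) * 255); // 114, because the division is done in double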

Fix Windows grabber and further cleanups

Enable Audio grabbing in standard build

Remove empty methods

Fix audio capture priority setting

Remove unused code

Clean-up default config

Allow additional json-editor attributes

Allow multiple effects and resetting to defaults

Correct default values

Allow building for Qt < 5.14

Update CodeQL build dependency

Update build dependencies

Remove effect1 placeholder

* Renamed uvMeter to VU Meter (Volume Unit)
- Fixed issues flagged by code scanning bot.

* Moved stop call into destructor of implementing class.

* Removed commented Linux audio channel configuration logic.

---------

Co-authored-by: Michael Rochelle <michael@j2inn.com>
Author: Michael Rochelle
Date: 2023-02-19 00:36:39 -08:00
Committed by: GitHub
Parent: a1bfa63343
Commit: acdf733936
50 changed files with 2390 additions and 243 deletions

View File

@@ -21,7 +21,7 @@
"component":
{
"type" : "string",
"enum" : ["ALL", "SMOOTHING", "BLACKBORDER", "FORWARDER", "BOBLIGHTSERVER", "GRABBER", "V4L", "LEDDEVICE"],
"enum" : ["ALL", "SMOOTHING", "BLACKBORDER", "FORWARDER", "BOBLIGHTSERVER", "GRABBER", "V4L", "AUDIO", "LEDDEVICE"],
"required": true
},
"state":

View File

@@ -29,6 +29,18 @@
#include <grabber/V4L2Grabber.h>
#endif
#if defined(ENABLE_AUDIO)
#include <grabber/AudioGrabber.h>
#ifdef WIN32
#include <grabber/AudioGrabberWindows.h>
#endif
#ifdef __linux__
#include <grabber/AudioGrabberLinux.h>
#endif
#endif
#if defined(ENABLE_X11)
#include <grabber/X11Grabber.h>
#endif
@@ -554,6 +566,7 @@ void JsonAPI::handleServerInfoCommand(const QJsonObject &message, const QString
info["ledDevices"] = ledDevices;
QJsonObject grabbers;
// SCREEN
QJsonObject screenGrabbers;
if (GrabberWrapper::getInstance() != nullptr)
{
@@ -573,6 +586,7 @@ void JsonAPI::handleServerInfoCommand(const QJsonObject &message, const QString
}
screenGrabbers["available"] = availableScreenGrabbers;
// VIDEO
QJsonObject videoGrabbers;
if (GrabberWrapper::getInstance() != nullptr)
{
@@ -592,8 +606,31 @@ void JsonAPI::handleServerInfoCommand(const QJsonObject &message, const QString
}
videoGrabbers["available"] = availableVideoGrabbers;
// AUDIO
QJsonObject audioGrabbers;
if (GrabberWrapper::getInstance() != nullptr)
{
QStringList activeGrabbers = GrabberWrapper::getInstance()->getActive(_hyperion->getInstanceIndex(), GrabberTypeFilter::AUDIO);
QJsonArray activeGrabberNames;
for (auto grabberName : activeGrabbers)
{
activeGrabberNames.append(grabberName);
}
audioGrabbers["active"] = activeGrabberNames;
}
QJsonArray availableAudioGrabbers;
for (auto grabber : GrabberWrapper::availableGrabbers(GrabberTypeFilter::AUDIO))
{
availableAudioGrabbers.append(grabber);
}
audioGrabbers["available"] = availableAudioGrabbers;
grabbers.insert("screen", screenGrabbers);
grabbers.insert("video", videoGrabbers);
grabbers.insert("audio", audioGrabbers);
info["grabbers"] = grabbers;
info["videomode"] = QString(videoMode2String(_hyperion->getCurrentVideoMode()));
@@ -1607,6 +1644,7 @@ void JsonAPI::handleInputSourceCommand(const QJsonObject& message, const QString
QJsonObject inputSourcesDiscovered;
inputSourcesDiscovered.insert("sourceType", sourceType);
QJsonArray videoInputs;
QJsonArray audioInputs;
#if defined(ENABLE_V4L2) || defined(ENABLE_MF)
@@ -1623,6 +1661,24 @@ void JsonAPI::handleInputSourceCommand(const QJsonObject& message, const QString
}
else
#endif
#if defined(ENABLE_AUDIO)
if (sourceType == "audio")
{
AudioGrabber* grabber;
#ifdef WIN32
grabber = new AudioGrabberWindows();
#endif
#ifdef __linux__
grabber = new AudioGrabberLinux();
#endif
QJsonObject params;
audioInputs = grabber->discover(params);
delete grabber;
}
else
#endif
{
DebugIf(verbose, _log, "sourceType: [%s]", QSTRING_CSTR(sourceType));
@@ -1719,6 +1775,7 @@ void JsonAPI::handleInputSourceCommand(const QJsonObject& message, const QString
}
inputSourcesDiscovered["video_sources"] = videoInputs;
inputSourcesDiscovered["audio_sources"] = audioInputs;
DebugIf(verbose, _log, "response: [%s]", QString(QJsonDocument(inputSourcesDiscovered).toJson(QJsonDocument::Compact)).toUtf8().constData());

View File

@@ -115,6 +115,9 @@ void MessageForwarder::enableTargets(bool enable, const QJsonObject& config)
case hyperion::COMP_V4L:
connect(_hyperion, &Hyperion::forwardV4lProtoMessage, this, &MessageForwarder::forwardFlatbufferMessage, Qt::UniqueConnection);
break;
case hyperion::COMP_AUDIO:
connect(_hyperion, &Hyperion::forwardAudioProtoMessage, this, &MessageForwarder::forwardFlatbufferMessage, Qt::UniqueConnection);
break;
#if defined(ENABLE_FLATBUF_SERVER)
case hyperion::COMP_FLATBUFSERVER:
#endif
@@ -153,6 +156,7 @@ void MessageForwarder::handlePriorityChanges(int priority)
switch (activeCompId) {
case hyperion::COMP_GRABBER:
disconnect(_hyperion, &Hyperion::forwardV4lProtoMessage, nullptr, nullptr);
disconnect(_hyperion, &Hyperion::forwardAudioProtoMessage, nullptr, nullptr);
#if defined(ENABLE_FLATBUF_SERVER) || defined(ENABLE_PROTOBUF_SERVER)
disconnect(_hyperion, &Hyperion::forwardBufferMessage, nullptr, nullptr);
#endif
@@ -160,11 +164,20 @@ void MessageForwarder::handlePriorityChanges(int priority)
break;
case hyperion::COMP_V4L:
disconnect(_hyperion, &Hyperion::forwardSystemProtoMessage, nullptr, nullptr);
disconnect(_hyperion, &Hyperion::forwardAudioProtoMessage, nullptr, nullptr);
#if defined(ENABLE_FLATBUF_SERVER) || defined(ENABLE_PROTOBUF_SERVER)
disconnect(_hyperion, &Hyperion::forwardBufferMessage, nullptr, nullptr);
#endif
connect(_hyperion, &Hyperion::forwardV4lProtoMessage, this, &MessageForwarder::forwardFlatbufferMessage, Qt::UniqueConnection);
break;
case hyperion::COMP_AUDIO:
disconnect(_hyperion, &Hyperion::forwardSystemProtoMessage, nullptr, nullptr);
disconnect(_hyperion, &Hyperion::forwardV4lProtoMessage, nullptr, nullptr);
#if defined(ENABLE_FLATBUF_SERVER) || defined(ENABLE_PROTOBUF_SERVER)
disconnect(_hyperion, &Hyperion::forwardBufferMessage, nullptr, nullptr);
#endif
connect(_hyperion, &Hyperion::forwardAudioProtoMessage, this, &MessageForwarder::forwardFlatbufferMessage, Qt::UniqueConnection);
break;
#if defined(ENABLE_FLATBUF_SERVER)
case hyperion::COMP_FLATBUFSERVER:
#endif
@@ -172,6 +185,7 @@ void MessageForwarder::handlePriorityChanges(int priority)
case hyperion::COMP_PROTOSERVER:
#endif
#if defined(ENABLE_FLATBUF_SERVER) || defined(ENABLE_PROTOBUF_SERVER)
disconnect(_hyperion, &Hyperion::forwardAudioProtoMessage, nullptr, nullptr);
disconnect(_hyperion, &Hyperion::forwardSystemProtoMessage, nullptr, nullptr);
disconnect(_hyperion, &Hyperion::forwardV4lProtoMessage, nullptr, nullptr);
connect(_hyperion, &Hyperion::forwardBufferMessage, this, &MessageForwarder::forwardFlatbufferMessage, Qt::UniqueConnection);
@@ -180,6 +194,7 @@ void MessageForwarder::handlePriorityChanges(int priority)
default:
disconnect(_hyperion, &Hyperion::forwardSystemProtoMessage, nullptr, nullptr);
disconnect(_hyperion, &Hyperion::forwardV4lProtoMessage, nullptr, nullptr);
disconnect(_hyperion, &Hyperion::forwardAudioProtoMessage, nullptr, nullptr);
#if defined(ENABLE_FLATBUF_SERVER) || defined(ENABLE_PROTOBUF_SERVER)
disconnect(_hyperion, &Hyperion::forwardBufferMessage, nullptr, nullptr);
#endif
@@ -373,6 +388,7 @@ void MessageForwarder::stopFlatbufferTargets()
{
disconnect(_hyperion, &Hyperion::forwardSystemProtoMessage, nullptr, nullptr);
disconnect(_hyperion, &Hyperion::forwardV4lProtoMessage, nullptr, nullptr);
disconnect(_hyperion, &Hyperion::forwardAudioProtoMessage, nullptr, nullptr);
#if defined(ENABLE_FLATBUF_SERVER) || defined(ENABLE_PROTOBUF_SERVER)
disconnect(_hyperion, &Hyperion::forwardBufferMessage, nullptr, nullptr);
#endif

View File

@@ -33,3 +33,7 @@ endif(ENABLE_QT)
if (ENABLE_DX)
add_subdirectory(directx)
endif(ENABLE_DX)
if (ENABLE_AUDIO)
add_subdirectory(audio)
endif()

View File

@@ -0,0 +1,201 @@
#include <grabber/AudioGrabber.h>
#include <math.h>
#include <QImage>
#include <QObject>
#include <QJsonObject>
#include <QJsonArray>
#include <QJsonValue>
// Constants
namespace {
const uint16_t RESOLUTION = 255;
}
#if (QT_VERSION < QT_VERSION_CHECK(5, 14, 0))
namespace QColorConstants
{
const QColor Black = QColor(0x00, 0x00, 0x00);
const QColor Red = QColor(0xFF, 0x00, 0x00);
const QColor Green = QColor(0x00, 0xFF, 0x00);
const QColor Blue = QColor(0x00, 0x00, 0xFF);
const QColor Yellow = QColor(0xFF, 0xFF, 0x00);
}
#endif
//End of constants
AudioGrabber::AudioGrabber()
: Grabber("AudioGrabber")
, _deviceProperties()
, _device("none")
, _hotColor(QColorConstants::Red)
, _warnValue(80)
, _warnColor(QColorConstants::Yellow)
, _safeValue(45)
, _safeColor(QColorConstants::Green)
, _multiplier(0)
, _tolerance(20)
, _dynamicMultiplier(INT16_MAX)
, _started(false)
{
}
AudioGrabber::~AudioGrabber()
{
freeResources();
}
void AudioGrabber::freeResources()
{
}
void AudioGrabber::setDevice(const QString& device)
{
_device = device;
if (_started)
{
this->stop();
this->start();
}
}
void AudioGrabber::setConfiguration(const QJsonObject& config)
{
QJsonArray hotColorArray = config["hotColor"].toArray(QJsonArray::fromVariantList(QList<QVariant>({ QVariant(255), QVariant(0), QVariant(0) })));
QJsonArray warnColorArray = config["warnColor"].toArray(QJsonArray::fromVariantList(QList<QVariant>({ QVariant(255), QVariant(255), QVariant(0) })));
QJsonArray safeColorArray = config["safeColor"].toArray(QJsonArray::fromVariantList(QList<QVariant>({ QVariant(0), QVariant(255), QVariant(0) })));
_hotColor = QColor(hotColorArray.at(0).toInt(), hotColorArray.at(1).toInt(), hotColorArray.at(2).toInt());
_warnColor = QColor(warnColorArray.at(0).toInt(), warnColorArray.at(1).toInt(), warnColorArray.at(2).toInt());
_safeColor = QColor(safeColorArray.at(0).toInt(), safeColorArray.at(1).toInt(), safeColorArray.at(2).toInt());
_warnValue = config["warnValue"].toInt(80);
_safeValue = config["safeValue"].toInt(45);
_multiplier = config["multiplier"].toDouble(0);
_tolerance = config["tolerance"].toInt(20);
}
void AudioGrabber::resetMultiplier()
{
_dynamicMultiplier = INT16_MAX;
}
void AudioGrabber::processAudioFrame(int16_t* buffer, int length)
{
// Apply Visualizer and Construct Image
// TODO: Pass Audio Frame to python and let the script calculate the image.
// TODO: Support Stereo capture with different meters per side
// Default VUMeter - Later Make this pluggable for different audio effects
double averageAmplitude = 0;
// Calculate the average amplitude value in the buffer
for (int i = 0; i < length; i++)
{
averageAmplitude += fabs(buffer[i]) / length;
}
double * currentMultiplier;
if (_multiplier < std::numeric_limits<double>::epsilon())
{
// Dynamically calculate multiplier.
const double pendingMultiplier = INT16_MAX / fmax(1.0, averageAmplitude + ((_tolerance / 100.0) * averageAmplitude));
if (pendingMultiplier < _dynamicMultiplier)
_dynamicMultiplier = pendingMultiplier;
currentMultiplier = &_dynamicMultiplier;
}
else
{
// User defined multiplier
currentMultiplier = &_multiplier;
}
// Apply multiplier to average amplitude
const double result = averageAmplitude * (*currentMultiplier);
// Calculate the average percentage
const double percentage = fmin(result / INT16_MAX, 1);
// Calculate the value
const int value = static_cast<int>(ceil(percentage * RESOLUTION));
// Draw Image
QImage image(1, RESOLUTION, QImage::Format_RGB888);
image.fill(QColorConstants::Black);
int safePixelValue = static_cast<int>(round(( _safeValue / 100.0) * RESOLUTION));
int warnPixelValue = static_cast<int>(round(( _warnValue / 100.0) * RESOLUTION));
for (int i = 0; i < RESOLUTION; i++)
{
QColor color = QColorConstants::Black;
int position = RESOLUTION - i;
if (position < safePixelValue)
{
color = _safeColor;
}
else if (position < warnPixelValue)
{
color = _warnColor;
}
else
{
color = _hotColor;
}
if (position < value)
{
image.setPixelColor(0, i, color);
}
else
{
image.setPixelColor(0, i, QColorConstants::Black);
}
}
// Convert to Image<ColorRGB>
Image<ColorRgb> finalImage (static_cast<unsigned>(image.width()), static_cast<unsigned>(image.height()));
for (int y = 0; y < image.height(); y++)
{
memcpy((unsigned char*)finalImage.memptr() + y * image.width() * 3, static_cast<unsigned char*>(image.scanLine(y)), image.width() * 3);
}
emit newFrame(finalImage);
}
Logger* AudioGrabber::getLog()
{
return _log;
}
bool AudioGrabber::start()
{
resetMultiplier();
_started = true;
return true;
}
void AudioGrabber::stop()
{
_started = false;
}
void AudioGrabber::restart()
{
stop();
start();
}
QJsonArray AudioGrabber::discover(const QJsonObject& /*params*/)
{
QJsonArray result; // Return empty result
return result;
}
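
For orientation, a quick worked pass through the mapping above with made-up numbers (illustrative only, not taken from a real capture):

// averageAmplitude = 8000, multiplier = 3.0     ->  result  = 24000
// percentage = min(24000 / 32767.0, 1.0)        ->  roughly 0.732
// value      = ceil(0.732 * 255)                ->  187
// Pixels whose position (RESOLUTION - i) is below 187 are lit; positions under
// round(0.45 * 255) = 115 take the safe color, positions under round(0.80 * 255) = 204
// the warn color, and anything above that the hot color.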

View File

@@ -0,0 +1,317 @@
#include <grabber/AudioGrabberLinux.h>
#include <alsa/asoundlib.h>
#include <QJsonObject>
#include <QJsonArray>
typedef void* (*THREADFUNCPTR)(void*);
AudioGrabberLinux::AudioGrabberLinux()
: AudioGrabber()
, _isRunning{ false }
, _captureDevice {nullptr}
, _sampleRate(44100)
{
}
AudioGrabberLinux::~AudioGrabberLinux()
{
this->stop();
}
void AudioGrabberLinux::refreshDevices()
{
Debug(_log, "Enumerating Audio Input Devices");
_deviceProperties.clear();
snd_ctl_t* deviceHandle;
int soundCard {-1};
int error {-1};
int cardInput {-1};
snd_ctl_card_info_t* cardInfo;
snd_pcm_info_t* deviceInfo;
snd_ctl_card_info_alloca(&cardInfo);
snd_pcm_info_alloca(&deviceInfo);
while (snd_card_next(&soundCard) > -1)
{
if (soundCard < 0)
{
break;
}
char cardId[32];
sprintf(cardId, "hw:%d", soundCard);
if ((error = snd_ctl_open(&deviceHandle, cardId, SND_CTL_NONBLOCK)) < 0)
{
Error(_log, "Erorr opening device: (%i): %s", soundCard, snd_strerror(error));
continue;
}
if ((error = snd_ctl_card_info(deviceHandle, cardInfo)) < 0)
{
Error(_log, "Erorr getting hardware info: (%i): %s", soundCard, snd_strerror(error));
snd_ctl_close(deviceHandle);
continue;
}
cardInput = -1;
while (true)
{
if (snd_ctl_pcm_next_device(deviceHandle, &cardInput) < 0)
Error(_log, "Error selecting device input");
if (cardInput < 0)
break;
snd_pcm_info_set_device(deviceInfo, static_cast<uint>(cardInput));
snd_pcm_info_set_subdevice(deviceInfo, 0);
snd_pcm_info_set_stream(deviceInfo, SND_PCM_STREAM_CAPTURE);
if ((error = snd_ctl_pcm_info(deviceHandle, deviceInfo)) < 0)
{
if (error != -ENOENT)
Error(_log, "Digital Audio Info: (%i): %s", soundCard, snd_strerror(error));
continue;
}
AudioGrabber::DeviceProperties device;
device.id = QString("hw:%1,%2").arg(snd_pcm_info_get_card(deviceInfo)).arg(snd_pcm_info_get_device(deviceInfo));
device.name = QString("%1: %2").arg(snd_ctl_card_info_get_name(cardInfo),snd_pcm_info_get_name(deviceInfo));
Debug(_log, "Found sound card (%s): %s", QSTRING_CSTR(device.id), QSTRING_CSTR(device.name));
_deviceProperties.insert(device.id, device);
}
snd_ctl_close(deviceHandle);
}
}
bool AudioGrabberLinux::configureCaptureInterface()
{
int error = -1;
QString name = (_device.isEmpty() || _device == "auto") ? "default" : (_device);
if ((error = snd_pcm_open(&_captureDevice, QSTRING_CSTR(name) , SND_PCM_STREAM_CAPTURE, SND_PCM_NONBLOCK)) < 0)
{
Error(_log, "Failed to open audio device: %s, - %s", QSTRING_CSTR(_device), snd_strerror(error));
return false;
}
if ((error = snd_pcm_hw_params_malloc(&_captureDeviceConfig)) < 0)
{
Error(_log, "Failed to create hardware parameters: %s", snd_strerror(error));
snd_pcm_close(_captureDevice);
return false;
}
if ((error = snd_pcm_hw_params_any(_captureDevice, _captureDeviceConfig)) < 0)
{
Error(_log, "Failed to initialize hardware parameters: %s", snd_strerror(error));
snd_pcm_hw_params_free(_captureDeviceConfig);
snd_pcm_close(_captureDevice);
return false;
}
if ((error = snd_pcm_hw_params_set_access(_captureDevice, _captureDeviceConfig, SND_PCM_ACCESS_RW_INTERLEAVED)) < 0)
{
Error(_log, "Failed to configure interleaved mode: %s", snd_strerror(error));
snd_pcm_hw_params_free(_captureDeviceConfig);
snd_pcm_close(_captureDevice);
return false;
}
if ((error = snd_pcm_hw_params_set_format(_captureDevice, _captureDeviceConfig, SND_PCM_FORMAT_S16_LE)) < 0)
{
Error(_log, "Failed to configure capture format: %s", snd_strerror(error));
snd_pcm_hw_params_free(_captureDeviceConfig);
snd_pcm_close(_captureDevice);
return false;
}
if ((error = snd_pcm_hw_params_set_rate_near(_captureDevice, _captureDeviceConfig, &_sampleRate, nullptr)) < 0)
{
Error(_log, "Failed to configure sample rate: %s", snd_strerror(error));
snd_pcm_hw_params_free(_captureDeviceConfig);
snd_pcm_close(_captureDevice);
return false;
}
if ((error = snd_pcm_hw_params(_captureDevice, _captureDeviceConfig)) < 0)
{
Error(_log, "Failed to configure hardware parameters: %s", snd_strerror(error));
snd_pcm_hw_params_free(_captureDeviceConfig);
snd_pcm_close(_captureDevice);
return false;
}
snd_pcm_hw_params_free(_captureDeviceConfig);
if ((error = snd_pcm_prepare(_captureDevice)) < 0)
{
Error(_log, "Failed to prepare audio interface: %s", snd_strerror(error));
snd_pcm_close(_captureDevice);
return false;
}
if ((error = snd_pcm_start(_captureDevice)) < 0)
{
Error(_log, "Failed to start audio interface: %s", snd_strerror(error));
snd_pcm_close(_captureDevice);
return false;
}
return true;
}
bool AudioGrabberLinux::start()
{
if (!_isEnabled)
return false;
if (_isRunning.load(std::memory_order_acquire))
return true;
Debug(_log, "Start Audio With %s", QSTRING_CSTR(getDeviceName(_device)));
if (!configureCaptureInterface())
return false;
_isRunning.store(true, std::memory_order_release);
pthread_attr_t threadAttributes;
int threadPriority = 1;
sched_param schedulerParameter;
schedulerParameter.sched_priority = threadPriority;
if (pthread_attr_init(&threadAttributes) != 0)
{
Debug(_log, "Failed to create thread attributes");
stop();
return false;
}
if (pthread_create(&_audioThread, &threadAttributes, static_cast<THREADFUNCPTR>(&AudioThreadRunner), static_cast<void*>(this)) != 0)
{
Debug(_log, "Failed to create audio capture thread");
stop();
return false;
}
AudioGrabber::start();
return true;
}
void AudioGrabberLinux::stop()
{
if (!_isRunning.load(std::memory_order_acquire))
return;
Debug(_log, "Stopping Audio Interface");
_isRunning.store(false, std::memory_order_release);
if (_audioThread != 0) {
pthread_join(_audioThread, NULL);
}
snd_pcm_close(_captureDevice);
AudioGrabber::stop();
}
void AudioGrabberLinux::processAudioBuffer(snd_pcm_sframes_t frames)
{
if (!_isRunning.load(std::memory_order_acquire))
return;
ssize_t bytes = snd_pcm_frames_to_bytes(_captureDevice, frames);
int16_t * buffer = static_cast<int16_t*>(calloc(static_cast<size_t>(bytes / 2), sizeof(int16_t)));
if (frames == 0)
{
buffer[0] = 0;
processAudioFrame(buffer, 1);
}
else
{
snd_pcm_sframes_t framesRead = snd_pcm_readi(_captureDevice, buffer, static_cast<snd_pcm_uframes_t>(frames));
if (framesRead < frames)
{
Error(_log, "Error reading audio. Got %d frames instead of %d", framesRead, frames);
}
else
{
processAudioFrame(buffer, static_cast<int>(snd_pcm_frames_to_bytes(_captureDevice, framesRead)) / 2);
}
}
free(buffer);
}
QJsonArray AudioGrabberLinux::discover(const QJsonObject& /*params*/)
{
refreshDevices();
QJsonArray devices;
for (auto deviceIterator = _deviceProperties.begin(); deviceIterator != _deviceProperties.end(); ++deviceIterator)
{
// Device
QJsonObject device;
QJsonArray deviceInputs;
device["device"] = deviceIterator.key();
device["device_name"] = deviceIterator.value().name;
device["type"] = "audio";
devices.append(device);
}
return devices;
}
QString AudioGrabberLinux::getDeviceName(const QString& devicePath) const
{
if (devicePath.isEmpty() || devicePath == "auto")
{
return "Default Audio Device";
}
return _deviceProperties.value(devicePath).name;
}
static void * AudioThreadRunner(void* params)
{
AudioGrabberLinux* This = static_cast<AudioGrabberLinux*>(params);
Debug(This->getLog(), "Audio Thread Started");
snd_pcm_sframes_t framesAvailable = 0;
while (This->_isRunning.load(std::memory_order_acquire))
{
snd_pcm_wait(This->_captureDevice, 1000);
if ((framesAvailable = snd_pcm_avail(This->_captureDevice)) > 0)
This->processAudioBuffer(framesAvailable);
sched_yield();
}
Debug(This->getLog(), "Audio Thread Shutting Down");
return nullptr;
}

View File

@@ -0,0 +1,354 @@
#include <grabber/AudioGrabberWindows.h>
#include <QImage>
#include <QJsonObject>
#include <QJsonArray>
#pragma comment(lib,"dsound.lib")
#pragma comment(lib, "dxguid.lib")
// Constants
namespace {
const int AUDIO_NOTIFICATION_COUNT{ 4 };
} //End of constants
AudioGrabberWindows::AudioGrabberWindows() : AudioGrabber()
{
}
AudioGrabberWindows::~AudioGrabberWindows()
{
this->stop();
}
void AudioGrabberWindows::refreshDevices()
{
Debug(_log, "Refreshing Audio Devices");
_deviceProperties.clear();
// Enumerate Devices
if (FAILED(DirectSoundCaptureEnumerate(DirectSoundEnumProcessor, (VOID*)&_deviceProperties)))
{
Error(_log, "Failed to enumerate audio devices.");
}
}
bool AudioGrabberWindows::configureCaptureInterface()
{
CLSID deviceId {};
if (!this->_device.isEmpty() && this->_device != "auto")
{
LPCOLESTR clsid = reinterpret_cast<const wchar_t*>(_device.utf16());
HRESULT res = CLSIDFromString(clsid, &deviceId);
if (FAILED(res))
{
Error(_log, "Failed to get CLSID for '%s' with error: 0x%08x: %s", QSTRING_CSTR(_device), res, std::system_category().message(res).c_str());
return false;
}
}
// Create Capture Device
HRESULT res = DirectSoundCaptureCreate8(&deviceId, &recordingDevice, NULL);
if (FAILED(res))
{
Error(_log, "Failed to create capture device: '%s' with error: 0x%08x: %s", QSTRING_CSTR(_device), res, std::system_category().message(res).c_str());
return false;
}
// Define Audio Format & Create Buffer
WAVEFORMATEX audioFormat { WAVE_FORMAT_PCM, 1, 44100, 88200, 2, 16, 0 };
// wFormatTag, nChannels, nSamplesPerSec, mAvgBytesPerSec,
// nBlockAlign, wBitsPerSample, cbSize
notificationSize = max(1024, audioFormat.nAvgBytesPerSec / 8);
notificationSize -= notificationSize % audioFormat.nBlockAlign;
bufferCaptureSize = notificationSize * AUDIO_NOTIFICATION_COUNT;
DSCBUFFERDESC bufferDesc;
bufferDesc.dwSize = sizeof(DSCBUFFERDESC);
bufferDesc.dwFlags = 0;
bufferDesc.dwBufferBytes = bufferCaptureSize;
bufferDesc.dwReserved = 0;
bufferDesc.lpwfxFormat = &audioFormat;
bufferDesc.dwFXCount = 0;
bufferDesc.lpDSCFXDesc = NULL;
// Create Capture Device's Buffer
LPDIRECTSOUNDCAPTUREBUFFER preBuffer;
if (FAILED(recordingDevice->CreateCaptureBuffer(&bufferDesc, &preBuffer, NULL)))
{
Error(_log, "Failed to create capture buffer: '%s'", QSTRING_CSTR(getDeviceName(_device)));
recordingDevice->Release();
return false;
}
bufferCapturePosition = 0;
// Query Capture8 Buffer
if (FAILED(preBuffer->QueryInterface(IID_IDirectSoundCaptureBuffer8, (LPVOID*)&recordingBuffer)))
{
Error(_log, "Failed to retrieve recording buffer");
preBuffer->Release();
return false;
}
preBuffer->Release();
// Create Notifications
LPDIRECTSOUNDNOTIFY8 notify;
if (FAILED(recordingBuffer->QueryInterface(IID_IDirectSoundNotify8, (LPVOID *) &notify)))
{
Error(_log, "Failed to configure buffer notifications: '%s'", QSTRING_CSTR(getDeviceName(_device)));
recordingDevice->Release();
recordingBuffer->Release();
return false;
}
// Create Events
notificationEvent = CreateEvent(NULL, TRUE, FALSE, NULL);
if (notificationEvent == NULL)
{
Error(_log, "Failed to configure buffer notifications events: '%s'", QSTRING_CSTR(getDeviceName(_device)));
notify->Release();
recordingDevice->Release();
recordingBuffer->Release();
return false;
}
// Configure Notifications
DSBPOSITIONNOTIFY positionNotify[AUDIO_NOTIFICATION_COUNT];
for (int i = 0; i < AUDIO_NOTIFICATION_COUNT; i++)
{
positionNotify[i].dwOffset = (notificationSize * i) + notificationSize - 1;
positionNotify[i].hEventNotify = notificationEvent;
}
// Set Notifications
notify->SetNotificationPositions(AUDIO_NOTIFICATION_COUNT, positionNotify);
notify->Release();
return true;
}
bool AudioGrabberWindows::start()
{
if (!_isEnabled)
{
return false;
}
if (this->isRunning.load(std::memory_order_acquire))
{
return true;
}
//Test, if configured device currently exists
refreshDevices();
if (!_deviceProperties.contains(_device))
{
_device = "auto";
Warning(_log, "Configured audio device is not available. Using '%s'", QSTRING_CSTR(getDeviceName(_device)));
}
Info(_log, "Capture audio from %s", QSTRING_CSTR(getDeviceName(_device)));
if (!this->configureCaptureInterface())
{
return false;
}
if (FAILED(recordingBuffer->Start(DSCBSTART_LOOPING)))
{
Error(_log, "Failed starting audio capture from '%s'", QSTRING_CSTR(getDeviceName(_device)));
return false;
}
this->isRunning.store(true, std::memory_order_release);
DWORD threadId;
this->audioThread = CreateThread(
NULL,
16,
AudioThreadRunner,
(void *) this,
0,
&threadId
);
if (this->audioThread == NULL)
{
Error(_log, "Failed to create audio capture thread");
this->stop();
return false;
}
AudioGrabber::start();
return true;
}
void AudioGrabberWindows::stop()
{
if (!this->isRunning.load(std::memory_order_acquire))
{
return;
}
Info(_log, "Shutting down audio capture from: '%s'", QSTRING_CSTR(getDeviceName(_device)));
this->isRunning.store(false, std::memory_order_release);
if (FAILED(recordingBuffer->Stop()))
{
Error(_log, "Audio capture failed to stop: '%s'", QSTRING_CSTR(getDeviceName(_device)));
}
if (FAILED(recordingBuffer->Release()))
{
Error(_log, "Failed to release recording buffer: '%s'", QSTRING_CSTR(getDeviceName(_device)));
}
if (FAILED(recordingDevice->Release()))
{
Error(_log, "Failed to release recording device: '%s'", QSTRING_CSTR(getDeviceName(_device)));
}
CloseHandle(notificationEvent);
CloseHandle(this->audioThread);
AudioGrabber::stop();
}
DWORD WINAPI AudioGrabberWindows::AudioThreadRunner(LPVOID param)
{
AudioGrabberWindows* This = (AudioGrabberWindows*) param;
while (This->isRunning.load(std::memory_order_acquire))
{
DWORD result = WaitForMultipleObjects(1, &This->notificationEvent, true, 500);
switch (result)
{
case WAIT_OBJECT_0:
This->processAudioBuffer();
break;
}
}
Debug(This->_log, "Audio capture thread stopped.");
return 0;
}
void AudioGrabberWindows::processAudioBuffer()
{
DWORD readPosition;
DWORD capturePosition;
// Primary segment
VOID* capturedAudio;
DWORD capturedAudioLength;
// Wrap around segment
VOID* capturedAudio2;
DWORD capturedAudio2Length;
LONG lockSize;
if (FAILED(recordingBuffer->GetCurrentPosition(&capturePosition, &readPosition)))
{
// Failed to get current position
Error(_log, "Failed to get buffer position.");
return;
}
lockSize = readPosition - bufferCapturePosition;
if (lockSize < 0)
{
lockSize += bufferCaptureSize;
}
// Block Align
lockSize -= (lockSize % notificationSize);
if (lockSize == 0)
{
return;
}
// Lock Capture Buffer
if (FAILED(recordingBuffer->Lock(bufferCapturePosition, lockSize, &capturedAudio, &capturedAudioLength,
&capturedAudio2, &capturedAudio2Length, 0)))
{
// Handle Lock Error
return;
}
bufferCapturePosition += capturedAudioLength;
bufferCapturePosition %= bufferCaptureSize; // Circular Buffer
int frameSize = capturedAudioLength + capturedAudio2Length;
int16_t * readBuffer = new int16_t[frameSize];
// Buffer wrapped around, read second position
if (capturedAudio2 != NULL)
{
bufferCapturePosition += capturedAudio2Length;
bufferCapturePosition %= bufferCaptureSize; // Circular Buffer
}
// Copy Buffer into memory
CopyMemory(readBuffer, capturedAudio, capturedAudioLength);
if (capturedAudio2 != NULL)
{
CopyMemory(readBuffer + capturedAudioLength, capturedAudio2, capturedAudio2Length);
}
// Release Buffer Lock
recordingBuffer->Unlock(capturedAudio, capturedAudioLength, capturedAudio2, capturedAudio2Length);
// Process Audio Frame
this->processAudioFrame(readBuffer, frameSize);
delete[] readBuffer;
}
QJsonArray AudioGrabberWindows::discover(const QJsonObject& params)
{
refreshDevices();
QJsonArray devices;
for (auto deviceIterator = _deviceProperties.begin(); deviceIterator != _deviceProperties.end(); ++deviceIterator)
{
// Device
QJsonObject device;
QJsonArray deviceInputs;
device["device"] = deviceIterator.value().id;
device["device_name"] = deviceIterator.value().name;
device["type"] = "audio";
devices.append(device);
}
return devices;
}
QString AudioGrabberWindows::getDeviceName(const QString& devicePath) const
{
if (devicePath.isEmpty() || devicePath == "auto")
{
return "Default Device";
}
return _deviceProperties.value(devicePath).name;
}

View File

@@ -0,0 +1,63 @@
#include <grabber/AudioWrapper.h>
#include <hyperion/GrabberWrapper.h>
#include <QObject>
#include <QMetaType>
AudioWrapper::AudioWrapper()
: GrabberWrapper("AudioGrabber", &_grabber)
, _grabber()
{
// register the image type
qRegisterMetaType<Image<ColorRgb>>("Image<ColorRgb>");
connect(&_grabber, &AudioGrabber::newFrame, this, &AudioWrapper::newFrame, Qt::DirectConnection);
}
AudioWrapper::~AudioWrapper()
{
stop();
}
bool AudioWrapper::start()
{
return (_grabber.start() && GrabberWrapper::start());
}
void AudioWrapper::stop()
{
_grabber.stop();
GrabberWrapper::stop();
}
void AudioWrapper::action()
{
// Dummy we will push the audio images
}
void AudioWrapper::newFrame(const Image<ColorRgb>& image)
{
emit systemImage(_grabberName, image);
}
void AudioWrapper::handleSettingsUpdate(settings::type type, const QJsonDocument& config)
{
if (type == settings::AUDIO)
{
const QJsonObject& obj = config.object();
// set global grabber state
setAudioGrabberState(obj["enable"].toBool(false));
if (getAudioGrabberState())
{
_grabber.setDevice(obj["device"].toString());
_grabber.setConfiguration(obj);
_grabber.restart();
}
else
{
stop();
}
}
}

View File

@@ -0,0 +1,35 @@
# Define the current source locations
SET( CURRENT_HEADER_DIR ${CMAKE_SOURCE_DIR}/include/grabber )
SET( CURRENT_SOURCE_DIR ${CMAKE_SOURCE_DIR}/libsrc/grabber/audio )
if (WIN32)
FILE ( GLOB AUDIO_GRABBER_SOURCES "${CURRENT_HEADER_DIR}/Audio*Windows.h" "${CURRENT_HEADER_DIR}/AudioGrabber.h" "${CURRENT_HEADER_DIR}/AudioWrapper.h" "${CURRENT_SOURCE_DIR}/*.h" "${CURRENT_SOURCE_DIR}/*Windows.cpp" "${CURRENT_SOURCE_DIR}/AudioGrabber.cpp" "${CURRENT_SOURCE_DIR}/AudioWrapper.cpp")
elseif(${CMAKE_SYSTEM} MATCHES "Linux")
FILE ( GLOB AUDIO_GRABBER_SOURCES "${CURRENT_HEADER_DIR}/Audio*Linux.h" "${CURRENT_HEADER_DIR}/AudioGrabber.h" "${CURRENT_HEADER_DIR}/AudioWrapper.h" "${CURRENT_SOURCE_DIR}/*.h" "${CURRENT_SOURCE_DIR}/*Linux.cpp" "${CURRENT_SOURCE_DIR}/AudioGrabber.cpp" "${CURRENT_SOURCE_DIR}/AudioWrapper.cpp")
elseif (APPLE)
#TODO
#FILE ( GLOB AUDIO_GRABBER_SOURCES "${CURRENT_HEADER_DIR}/Audio*Apple.h" "${CURRENT_HEADER_DIR}/AudioGrabber.h" "${CURRENT_HEADER_DIR}/AudioWrapper.h" "${CURRENT_SOURCE_DIR}/*.h" "${CURRENT_SOURCE_DIR}/*Apple.cpp" "${CURRENT_SOURCE_DIR}/AudioGrabber.cpp" "${CURRENT_SOURCE_DIR}/AudioWrapper.cpp")
endif()
add_library( audio-grabber ${AUDIO_GRABBER_SOURCES} )
set(AUDIO_LIBS hyperion)
if (WIN32)
set(AUDIO_LIBS ${AUDIO_LIBS} DSound)
elseif(${CMAKE_SYSTEM} MATCHES "Linux")
find_package(ALSA REQUIRED)
if (ALSA_FOUND)
include_directories(${ALSA_INCLUDE_DIRS})
set(AUDIO_LIBS ${AUDIO_LIBS} ${ALSA_LIBRARIES})
endif(ALSA_FOUND)
set(THREADS_PREFER_PTHREAD_FLAG ON)
find_package(Threads REQUIRED)
set(AUDIO_LIBS ${AUDIO_LIBS} Threads::Threads) # PRIVATE
endif()
target_link_libraries(audio-grabber ${AUDIO_LIBS} ${QT_LIBRARIES})

View File

@@ -20,6 +20,10 @@ CaptureCont::CaptureCont(Hyperion* hyperion)
, _v4lCaptPrio(0)
, _v4lCaptName()
, _v4lInactiveTimer(new QTimer(this))
, _audioCaptEnabled(false)
, _audioCaptPrio(0)
, _audioCaptName()
, _audioInactiveTimer(new QTimer(this))
{
// settings changes
connect(_hyperion, &Hyperion::settingsChanged, this, &CaptureCont::handleSettingsUpdate);
@@ -37,6 +41,11 @@ CaptureCont::CaptureCont(Hyperion* hyperion)
_v4lInactiveTimer->setSingleShot(true);
_v4lInactiveTimer->setInterval(1000);
// inactive timer audio
connect(_audioInactiveTimer, &QTimer::timeout, this, &CaptureCont::setAudioInactive);
_audioInactiveTimer->setSingleShot(true);
_audioInactiveTimer->setInterval(1000);
// init
handleSettingsUpdate(settings::INSTCAPTURE, _hyperion->getSetting(settings::INSTCAPTURE));
}
@@ -65,6 +74,17 @@ void CaptureCont::handleSystemImage(const QString& name, const Image<ColorRgb>&
_hyperion->setInputImage(_systemCaptPrio, image);
}
void CaptureCont::handleAudioImage(const QString& name, const Image<ColorRgb>& image)
{
if (_audioCaptName != name)
{
_hyperion->registerInput(_audioCaptPrio, hyperion::COMP_AUDIO, "System", name);
_audioCaptName = name;
}
_audioInactiveTimer->start();
_hyperion->setInputImage(_audioCaptPrio, image);
}
void CaptureCont::setSystemCaptureEnable(bool enable)
{
if(_systemCaptEnabled != enable)
@@ -111,24 +131,56 @@ void CaptureCont::setV4LCaptureEnable(bool enable)
}
}
void CaptureCont::setAudioCaptureEnable(bool enable)
{
if (_audioCaptEnabled != enable)
{
if (enable)
{
_hyperion->registerInput(_audioCaptPrio, hyperion::COMP_AUDIO);
connect(GlobalSignals::getInstance(), &GlobalSignals::setAudioImage, this, &CaptureCont::handleAudioImage);
connect(GlobalSignals::getInstance(), &GlobalSignals::setAudioImage, _hyperion, &Hyperion::forwardAudioProtoMessage);
}
else
{
disconnect(GlobalSignals::getInstance(), &GlobalSignals::setAudioImage, this, 0);
_hyperion->clear(_audioCaptPrio);
_audioInactiveTimer->stop();
_audioCaptName = "";
}
_audioCaptEnabled = enable;
_hyperion->setNewComponentState(hyperion::COMP_AUDIO, enable);
emit GlobalSignals::getInstance()->requestSource(hyperion::COMP_AUDIO, int(_hyperion->getInstanceIndex()), enable);
}
}
void CaptureCont::handleSettingsUpdate(settings::type type, const QJsonDocument& config)
{
if(type == settings::INSTCAPTURE)
{
const QJsonObject& obj = config.object();
if(_v4lCaptPrio != obj["v4lPriority"].toInt(240))
{
setV4LCaptureEnable(false); // clear prio
_v4lCaptPrio = obj["v4lPriority"].toInt(240);
}
if(_systemCaptPrio != obj["systemPriority"].toInt(250))
{
setSystemCaptureEnable(false); // clear prio
_systemCaptPrio = obj["systemPriority"].toInt(250);
}
if (_audioCaptPrio != obj["audioPriority"].toInt(230))
{
setAudioCaptureEnable(false); // clear prio
_audioCaptPrio = obj["audioPriority"].toInt(230);
}
setV4LCaptureEnable(obj["v4lEnable"].toBool(false));
setSystemCaptureEnable(obj["systemEnable"].toBool(false));
setAudioCaptureEnable(obj["audioEnable"].toBool(true));
}
}
@@ -142,6 +194,10 @@ void CaptureCont::handleCompStateChangeRequest(hyperion::Components component, b
{
setV4LCaptureEnable(enable);
}
else if (component == hyperion::COMP_AUDIO)
{
setAudioCaptureEnable(enable);
}
}
void CaptureCont::setV4lInactive()
@@ -153,3 +209,8 @@ void CaptureCont::setSystemInactive()
{
_hyperion->setInputInactive(_systemCaptPrio);
}
void CaptureCont::setAudioInactive()
{
_hyperion->setInputInactive(_audioCaptPrio);
}

View File

@@ -20,6 +20,7 @@ ComponentRegister::ComponentRegister(Hyperion* hyperion)
bool areScreenGrabberAvailable = !GrabberWrapper::availableGrabbers(GrabberTypeFilter::VIDEO).isEmpty();
bool areVideoGrabberAvailable = !GrabberWrapper::availableGrabbers(GrabberTypeFilter::VIDEO).isEmpty();
bool areAudioGrabberAvailable = !GrabberWrapper::availableGrabbers(GrabberTypeFilter::AUDIO).isEmpty();
bool flatBufServerAvailable { false };
bool protoBufServerAvailable{ false };
@@ -36,6 +37,11 @@ ComponentRegister::ComponentRegister(Hyperion* hyperion)
vect << COMP_GRABBER;
}
if (areAudioGrabberAvailable)
{
vect << COMP_AUDIO;
}
if (areVideoGrabberAvailable)
{
vect << COMP_V4L;

View File

@@ -18,8 +18,10 @@ const int GrabberWrapper::DEFAULT_PIXELDECIMATION = 8;
/// Map of Hyperion instances with grabber name that requested screen capture
QMap<int, QString> GrabberWrapper::GRABBER_SYS_CLIENTS = QMap<int, QString>();
QMap<int, QString> GrabberWrapper::GRABBER_V4L_CLIENTS = QMap<int, QString>();
QMap<int, QString> GrabberWrapper::GRABBER_AUDIO_CLIENTS = QMap<int, QString>();
bool GrabberWrapper::GLOBAL_GRABBER_SYS_ENABLE = false;
bool GrabberWrapper::GLOBAL_GRABBER_V4L_ENABLE = false;
bool GrabberWrapper::GLOBAL_GRABBER_AUDIO_ENABLE = false;
GrabberWrapper::GrabberWrapper(const QString& grabberName, Grabber * ggrabber, int updateRate_Hz)
: _grabberName(grabberName)
@@ -38,9 +40,12 @@ GrabberWrapper::GrabberWrapper(const QString& grabberName, Grabber * ggrabber, i
connect(_timer, &QTimer::timeout, this, &GrabberWrapper::action);
// connect the image forwarding
(_grabberName.startsWith("V4L"))
? connect(this, &GrabberWrapper::systemImage, GlobalSignals::getInstance(), &GlobalSignals::setV4lImage)
: connect(this, &GrabberWrapper::systemImage, GlobalSignals::getInstance(), &GlobalSignals::setSystemImage);
if (_grabberName.startsWith("V4L"))
connect(this, &GrabberWrapper::systemImage, GlobalSignals::getInstance(), &GlobalSignals::setV4lImage);
else if (_grabberName.startsWith("Audio"))
connect(this, &GrabberWrapper::systemImage, GlobalSignals::getInstance(), &GlobalSignals::setAudioImage);
else
connect(this, &GrabberWrapper::systemImage, GlobalSignals::getInstance(), &GlobalSignals::setSystemImage);
// listen for source requests
connect(GlobalSignals::getInstance(), &GlobalSignals::requestSource, this, &GrabberWrapper::handleSourceRequest);
@@ -99,6 +104,12 @@ QStringList GrabberWrapper::getActive(int inst, GrabberTypeFilter type) const
result << GRABBER_V4L_CLIENTS.value(inst);
}
if (type == GrabberTypeFilter::AUDIO || type == GrabberTypeFilter::ALL)
{
if (GRABBER_AUDIO_CLIENTS.contains(inst))
result << GRABBER_AUDIO_CLIENTS.value(inst);
}
return result;
}
@@ -148,6 +159,13 @@ QStringList GrabberWrapper::availableGrabbers(GrabberTypeFilter type)
#endif
}
if (type == GrabberTypeFilter::AUDIO || type == GrabberTypeFilter::ALL)
{
#ifdef ENABLE_AUDIO
grabbers << "audio";
#endif
}
return grabbers;
}
@@ -187,7 +205,9 @@ void GrabberWrapper::updateTimer(int interval)
void GrabberWrapper::handleSettingsUpdate(settings::type type, const QJsonDocument& config)
{
if(type == settings::SYSTEMCAPTURE && !_grabberName.startsWith("V4L"))
if (type == settings::SYSTEMCAPTURE &&
!_grabberName.startsWith("V4L") &&
!_grabberName.startsWith("Audio"))
{
// extract settings
const QJsonObject& obj = config.object();
@@ -235,26 +255,42 @@ void GrabberWrapper::handleSettingsUpdate(settings::type type, const QJsonDocume
void GrabberWrapper::handleSourceRequest(hyperion::Components component, int hyperionInd, bool listen)
{
if(component == hyperion::Components::COMP_GRABBER && !_grabberName.startsWith("V4L"))
if (component == hyperion::Components::COMP_GRABBER &&
!_grabberName.startsWith("V4L") &&
!_grabberName.startsWith("Audio"))
{
if(listen)
if (listen)
GRABBER_SYS_CLIENTS.insert(hyperionInd, _grabberName);
else
GRABBER_SYS_CLIENTS.remove(hyperionInd);
if(GRABBER_SYS_CLIENTS.empty() || !getSysGrabberState())
if (GRABBER_SYS_CLIENTS.empty() || !getSysGrabberState())
stop();
else
start();
}
else if(component == hyperion::Components::COMP_V4L && _grabberName.startsWith("V4L"))
else if (component == hyperion::Components::COMP_V4L &&
_grabberName.startsWith("V4L"))
{
if(listen)
if (listen)
GRABBER_V4L_CLIENTS.insert(hyperionInd, _grabberName);
else
GRABBER_V4L_CLIENTS.remove(hyperionInd);
if(GRABBER_V4L_CLIENTS.empty() || !getV4lGrabberState())
if (GRABBER_V4L_CLIENTS.empty() || !getV4lGrabberState())
stop();
else
start();
}
else if (component == hyperion::Components::COMP_AUDIO &&
_grabberName.startsWith("Audio"))
{
if (listen)
GRABBER_AUDIO_CLIENTS.insert(hyperionInd, _grabberName);
else
GRABBER_AUDIO_CLIENTS.remove(hyperionInd);
if (GRABBER_AUDIO_CLIENTS.empty())
stop();
else
start();
@@ -264,6 +300,11 @@ void GrabberWrapper::handleSourceRequest(hyperion::Components component, int hyp
void GrabberWrapper::tryStart()
{
// verify start condition
if(!_grabberName.startsWith("V4L") && !GRABBER_SYS_CLIENTS.empty() && getSysGrabberState())
if (!_grabberName.startsWith("V4L") &&
!_grabberName.startsWith("Audio") &&
!GRABBER_SYS_CLIENTS.empty() &&
getSysGrabberState())
{
start();
}
}

View File

@@ -501,6 +501,20 @@ bool SettingsManager::handleConfigUpgrade(QJsonObject& config)
Debug(_log, "GrabberV4L2 records migrated");
}
if (config.contains("grabberAudio"))
{
QJsonObject newGrabberAudioConfig = config["grabberAudio"].toObject();
//Add new element enable
if (!newGrabberAudioConfig.contains("enable"))
{
newGrabberAudioConfig["enable"] = false;
migrated = true;
}
config["grabberAudio"] = newGrabberAudioConfig;
Debug(_log, "GrabberAudio records migrated");
}
if (config.contains("framegrabber"))
{
QJsonObject newFramegrabberConfig = config["framegrabber"].toObject();

View File

@@ -27,6 +27,10 @@
{
"$ref": "schema-grabberV4L2.json"
},
"grabberAudio" :
{
"$ref": "schema-grabberAudio.json"
},
"framegrabber" :
{
"$ref": "schema-framegrabber.json"

View File

@@ -7,6 +7,7 @@
<file alias="schema-color.json">schema/schema-color.json</file>
<file alias="schema-smoothing.json">schema/schema-smoothing.json</file>
<file alias="schema-grabberV4L2.json">schema/schema-grabberV4L2.json</file>
<file alias="schema-grabberAudio.json">schema/schema-grabberAudio.json</file>
<file alias="schema-framegrabber.json">schema/schema-framegrabber.json</file>
<file alias="schema-blackborderdetector.json">schema/schema-blackborderdetector.json</file>
<file alias="schema-foregroundEffect.json">schema/schema-foregroundEffect.json</file>

View File

@@ -0,0 +1,163 @@
{
"type": "object",
"required": true,
"title": "edt_conf_audio_heading_title",
"properties": {
"enable": {
"type": "boolean",
"title": "edt_conf_general_enable_title",
"required": true,
"default": false,
"propertyOrder": 1
},
"available_devices": {
"type": "string",
"title": "edt_conf_grabber_discovered_title",
"default": "edt_conf_grabber_discovery_inprogress",
"options": {
"infoText": "edt_conf_grabber_discovered_title_info"
},
"propertyOrder": 2,
"required": false
},
"device": {
"type": "string",
"title": "edt_conf_enum_custom",
"default": "auto",
"options": {
"hidden": true
},
"required": true,
"propertyOrder": 3,
"comment": "The 'available_audio_devices' settings are dynamically inserted into the WebUI under PropertyOrder '1'."
},
"audioEffect": {
"type": "string",
"title": "edt_conf_audio_effects_title",
"required": true,
"enum": [ "vuMeter" ],
"default": "vuMeter",
"options": {
"enum_titles": [ "edt_conf_audio_effect_enum_vumeter" ]
},
"propertyOrder": 4
},
"vuMeter": {
"type": "object",
"title": "",
"required": true,
"propertyOrder": 5,
"options": {
"dependencies": {
"audioEffect": "vuMeter"
}
},
"properties": {
"multiplier": {
"type": "number",
"title": "edt_conf_audio_effect_multiplier_title",
"default": 1,
"minimum": 0,
"step": 0.01,
"required": true,
"propertyOrder": 1,
"comment": "The multiplier is used to scale the audio input signal. Increase or decrease to achieve the desired effect. Set to 0 for auto"
},
"tolerance": {
"type": "number",
"title": "edt_conf_audio_effect_tolerance_title",
"default": 5,
"minimum": 0,
"step": 1,
"append": "edt_append_percent",
"required": true,
"propertyOrder": 2,
"comment": "The tolerance is a percentage value from 0 - 100 used during auto multiplier calculation."
},
"hotColor": {
"type": "array",
"title": "edt_conf_audio_effect_hotcolor_title",
"default": [ 255, 0, 0 ],
"format": "colorpicker",
"items": {
"type": "integer",
"minimum": 0,
"maximum": 255
},
"minItems": 3,
"maxItems": 3,
"required": true,
"propertyOrder": 3,
"comment": "Hot Color is the color the led's will reach when audio level exceeds the warn percentage"
},
"warnColor": {
"type": "array",
"title": "edt_conf_audio_effect_warncolor_title",
"default": [ 255, 255, 0 ],
"format": "colorpicker",
"items": {
"type": "integer",
"minimum": 0,
"maximum": 255
},
"minItems": 3,
"maxItems": 3,
"required": true,
"propertyOrder": 4,
"comment": "Warn Color is the color the led's will reach when audio level exceeds the safe percentage"
},
"warnValue": {
"type": "number",
"title": "edt_conf_audio_effect_warnvalue_title",
"default": 80,
"minimum": 0,
"step": 1,
"append": "edt_append_percent",
"required": true,
"propertyOrder": 5,
"comment": "Warn percentage is the percentage used to determine the maximum percentage of the audio warning level"
},
"safeColor": {
"type": "array",
"title": "edt_conf_audio_effect_safecolor_title",
"default": [ 0, 255, 0 ],
"format": "colorpicker",
"items": {
"type": "integer",
"minimum": 0,
"maximum": 255
},
"minItems": 3,
"maxItems": 3,
"required": true,
"propertyOrder": 6,
"comment": "Safe Color is the color the led's will reach when audio level is below the warning percentage"
},
"safeValue": {
"type": "number",
"title": "edt_conf_audio_effect_safevalue_title",
"default": 45,
"minimum": 0,
"step": 1,
"append": "edt_append_percent",
"required": true,
"propertyOrder": 7,
"comment": "Safe percentage is the percentage used to determine the maximum percentage of the audio safe level"
},
"flip": {
"type": "string",
"title": "edt_conf_v4l2_flip_title",
"enum": [ "NO_CHANGE", "HORIZONTAL", "VERTICAL", "BOTH" ],
"default": "NO_CHANGE",
"options": {
"enum_titles": [ "edt_conf_enum_NO_CHANGE", "edt_conf_enum_HORIZONTAL", "edt_conf_enum_VERTICAL", "edt_conf_enum_BOTH" ]
},
"required": true,
"access": "advanced",
"propertyOrder": 8
}
}
}
},
"additionalProperties": true
}

View File

@@ -48,6 +48,29 @@
"maximum": 253,
"default": 240,
"propertyOrder": 6
},
"audioEnable": {
"type": "boolean",
"required": true,
"title": "edt_conf_instC_audioEnable_title",
"default": false,
"propertyOrder": 7
},
"audioGrabberDevice": {
"type": "string",
"required": true,
"title": "edt_conf_instC_video_grabber_device_title",
"default": "NONE",
"propertyOrder": 7
},
"audioPriority": {
"type": "integer",
"required": true,
"title": "edt_conf_general_priority_title",
"minimum": 100,
"maximum": 253,
"default": 230,
"propertyOrder": 9
}
},
"additionalProperties" : false

View File

@@ -162,7 +162,8 @@ void QJsonSchemaChecker::validate(const QJsonValue& value, const QJsonObject& sc
; // references have already been collected
else if (attribute == "title" || attribute == "description" || attribute == "default" || attribute == "format"
|| attribute == "defaultProperties" || attribute == "propertyOrder" || attribute == "append" || attribute == "step"
|| attribute == "access" || attribute == "options" || attribute == "script" || attribute == "allowEmptyArray" || attribute == "comment")
|| attribute == "access" || attribute == "options" || attribute == "script" || attribute == "allowEmptyArray" || attribute == "comment"
|| attribute == "watch" || attribute == "template")
; // nothing to do.
else
{