Media Foundation/V4L2 grabber ... (#1119)

* - New Media Foundation grabber
- JsonAPI available grabber fix
- Commented JSON config removed

* Added libjpeg-turbo to dependencies

* Fix OSX build
Removed Azure Pipelines from build scripts

* Remove Platform from Dashboard

* Correct Grabber Namings

* Grabber UI improvements, generic JSONEditor Selection Update

* Active grabber fix

* Stop Framebuffer grabber on failure

* - Image format NV12 and I420 added
- Flip mode
- Scaling factor for MJPEG
- VSCode (compile before run)
- CI (push) dependency libjpeg-turbo added

* Refactor MediaFoundation (Part 1)

* Remove QDebug output

* Added image flipping ability to MF Grabber

* fix issue 1160

* - Reload MF Grabber only once per WebUI update
- Cleanup

* Improvements

* - Set 'Software Frame Decimation' minimum to 0
- Removed grabber specific device name from Log
- Keep pixel format when switching resolution
- Display 'Flip mode' correct in Log
- BGR24 images always flipped

* Refactor MediaFoundation (Part 2)

* Refactor V4L2 grabber (part 1) (#62)

* Media Foundation grabber adapted to V4L2 change

* Enable Media Foundation grabber on windows

* Have fps as int, fix height typo

* Added video standards to JsonAPI output

* Error handling in source reader improved

* Fix "Frame to small" error

* Discovery VideoSources and Dynamically Update Editor

* Hide all elements when no video grabber is discovered, update naming

* Do not show unsupported grabbers

* Copy Log to Clipboard

* Update Grabber schema and Defaults

* Update access levels and validate crop ranges

* Height and width in Qt grabber corrected

* Correct formatting

* Untabify

* Global component states across instances

* Components divided on the dashboard

* refactor

* Fix Merge-issues

* Database migration aligning with updated grabber model

* Align Grabber.js with new utility functions

* Allow editor-validation for enum-lists

* Handle "Show Explainations scenario" correctly

* Grabber - Ensure save is only possible on valid content

* Dashboard update + fix GlobalSignal connection

* Ensure default database is populated with current release

* Correct grabberV4L2 access level

* Display Signal detection area in preview

* Write Hyperion version into default config on compiling.

* Create defaultconfig.json dynamically

* WebUI changes

* Correct grabber config look-ups

* Refactor i18n language loading

* Fix en.json

* Split global capture from instance capture config

* Update grabber default values

* Standalone grabber: Add --debug switch

* Enhance showInputOptionsForKey for multiple keys

* Add grabber instance link to system grabber config

* Only show signal detection area, if grabber is enabled

* Always show Active element on grabber page

* Remote control - Only display grabber status, if global grabber is enabled

* WebUI optimization (thx to @mkcologne)
Start Grabber only when global settings are enabled
Fixed an issue in the WebUI preview

* V4L2/MF changes

* Jsoneditor, Correct translation for default values

* Refactor LED-Device handling in UI and make element naming consistent

* MF Discovery extended

* Fix LGTM finding

* Support Grabber Bri, Hue, Sat and Con in UI, plus their defaults

* Consider Access level for item filtering

* Consider Access level for item filtering

* Revert "Concider Access level for item filtering"

This reverts commit 5b0ce3c0f2.

* Disable fpsSoftwareDecimation for framegrabber, as not supported yet

* JSON-Editor- Add updated schema for validation on dynamic elements

* added V4L2 color IDs

* LGTM findings fix

* destroy SR callback only on exit

* Grabber.js - Hide elements not supported by platform

* Fixed freezing start effect

* Grabber UI - Hardware controls - Show current values and allow to reset to defaults

* Grabber - Discovery - Add current values to properties

* Small things

* Clean-up Effects and have ENDLESS consistently defined

* Fix on/off/on priority during startup, by initializing _prevVisComp in line with background priority

* Add missing translation mappings

* DirectX Grabber reactivated/ QT Grabber size decimation fixed

* typo in push-master workflow

* Use PreciseTimer for Grabber to ensure stable FPS timing

* Set default Screencapture rate consistently

* Fix libjpeg-turbo download

* Remove Zero character from file

* docker-compile Add PLATFORM parameter, only copy output file after successful compile

* Framebuffer, Dispmanx, OSX, AML Grabber discovery, various clean-ups and consistency fixes across grabbers

* Fix merge problem - on docker-compile Add PLATFORM parameter, only copy output file after successful compile

* Fix definition

* OSXFrameGrabber - Revert cast

* Clean-ups after feedback

* Disable certain libraries when building amlogic via standard stretch image as developer

* Add CEC availability to ServerInfo to make it platform-independent

* Grabber UI - Fix problem that crop values are not populated when refining editor range

* Preserve value when updating json-editor range

* LEDVisualisation - Clear image when source changes

* Fix - Preserve value when updating json-editor range

* LEDVisualisation - Clear image when no component is active

* Allow to have password handled by Password-Manager (#1263)

* Update default signal detection area to green assuming rainbow grabber

* LED Visualisation - Handle empty priority update

* Fix yuv420 in v4l2 grabber

* V4L2-Grabber discovery - Only report grabbers with valid video input information

* Grabber - Update static variables to have them working in release build

* LED Visualisation - ClearImage when no priorities

* LED Visualisation - Fix Logo resizing issue

* LED Visualisation - Have nearly black background and negative logo

Co-authored-by: LordGrey <lordgrey.emmel@gmail.com>
Co-authored-by: LordGrey <48840279+Lord-Grey@users.noreply.github.com>
Author: Markus
Date: 2021-07-14 20:48:33 +02:00
Committed by: GitHub
Parent: b0e1510a78
Commit: c135d91986
163 changed files with 10756 additions and 5953 deletions


@@ -0,0 +1,33 @@
# Common cmake definition for external video grabber
# Add Turbo JPEG library
if (ENABLE_V4L2 OR ENABLE_MF)
find_package(TurboJPEG)
if (TURBOJPEG_FOUND)
add_definitions(-DHAVE_TURBO_JPEG)
message( STATUS "Using Turbo JPEG library: ${TurboJPEG_LIBRARY}")
include_directories(${TurboJPEG_INCLUDE_DIRS})
else ()
message( STATUS "Turbo JPEG library not found, MJPEG camera format won't work.")
endif ()
endif()
# Define the wrapper/header/source locations and collect them
SET(WRAPPER_DIR ${CMAKE_SOURCE_DIR}/libsrc/grabber/video)
SET(HEADER_DIR ${CMAKE_SOURCE_DIR}/include/grabber)
if (ENABLE_MF)
project(mf-grabber)
SET(CURRENT_SOURCE_DIR ${CMAKE_SOURCE_DIR}/libsrc/grabber/video/mediafoundation)
FILE (GLOB SOURCES "${WRAPPER_DIR}/*.cpp" "${HEADER_DIR}/Video*.h" "${HEADER_DIR}/MF*.h" "${HEADER_DIR}/Encoder*.h" "${CURRENT_SOURCE_DIR}/*.h" "${CURRENT_SOURCE_DIR}/*.cpp")
elseif(ENABLE_V4L2)
project(v4l2-grabber)
SET(CURRENT_SOURCE_DIR ${CMAKE_SOURCE_DIR}/libsrc/grabber/video/v4l2)
FILE (GLOB SOURCES "${WRAPPER_DIR}/*.cpp" "${HEADER_DIR}/Video*.h" "${HEADER_DIR}/V4L2*.h" "${HEADER_DIR}/Encoder*.h" "${CURRENT_SOURCE_DIR}/*.cpp")
endif()
add_library(${PROJECT_NAME} ${SOURCES})
target_link_libraries(${PROJECT_NAME} hyperion ${QT_LIBRARIES})
if(TURBOJPEG_FOUND)
target_link_libraries(${PROJECT_NAME} ${TurboJPEG_LIBRARY})
endif()


@@ -0,0 +1,203 @@
#include "grabber/EncoderThread.h"
EncoderThread::EncoderThread()
: _localData(nullptr)
, _scalingFactorsCount(0)
, _imageResampler()
#ifdef HAVE_TURBO_JPEG
, _transform(nullptr)
, _decompress(nullptr)
, _scalingFactors(nullptr)
, _xform(nullptr)
#endif
{}
EncoderThread::~EncoderThread()
{
#ifdef HAVE_TURBO_JPEG
if (_transform)
tjDestroy(_transform);
if (_decompress)
tjDestroy(_decompress);
#endif
if (_localData)
#ifdef HAVE_TURBO_JPEG
tjFree(_localData);
#else
delete[] _localData;
#endif
}
void EncoderThread::setup(
PixelFormat pixelFormat, uint8_t* sharedData,
int size, int width, int height, int lineLength,
unsigned cropLeft, unsigned cropTop, unsigned cropBottom, unsigned cropRight,
VideoMode videoMode, FlipMode flipMode, int pixelDecimation)
{
_lineLength = lineLength;
_pixelFormat = pixelFormat;
_size = (unsigned long) size;
_width = width;
_height = height;
_cropLeft = cropLeft;
_cropTop = cropTop;
_cropBottom = cropBottom;
_cropRight = cropRight;
_flipMode = flipMode;
_pixelDecimation = pixelDecimation;
_imageResampler.setVideoMode(videoMode);
_imageResampler.setFlipMode(_flipMode);
_imageResampler.setCropping(cropLeft, cropRight, cropTop, cropBottom);
_imageResampler.setHorizontalPixelDecimation(_pixelDecimation);
_imageResampler.setVerticalPixelDecimation(_pixelDecimation);
#ifdef HAVE_TURBO_JPEG
if (_localData)
tjFree(_localData);
_localData = (uint8_t*)tjAlloc(size + 1);
#else
delete[] _localData;
_localData = new uint8_t[size + 1];
#endif
memcpy(_localData, sharedData, size);
}
void EncoderThread::process()
{
_busy = true;
if (_width > 0 && _height > 0)
{
#ifdef HAVE_TURBO_JPEG
if (_pixelFormat == PixelFormat::MJPEG)
{
processImageMjpeg();
}
else
#endif
{
if (_pixelFormat == PixelFormat::BGR24)
{
if (_flipMode == FlipMode::NO_CHANGE)
_imageResampler.setFlipMode(FlipMode::HORIZONTAL);
else if (_flipMode == FlipMode::HORIZONTAL)
_imageResampler.setFlipMode(FlipMode::NO_CHANGE);
else if (_flipMode == FlipMode::VERTICAL)
_imageResampler.setFlipMode(FlipMode::BOTH);
else if (_flipMode == FlipMode::BOTH)
_imageResampler.setFlipMode(FlipMode::VERTICAL);
}
Image<ColorRgb> image = Image<ColorRgb>();
_imageResampler.processImage(
_localData,
_width,
_height,
_lineLength,
#if defined(ENABLE_V4L2)
_pixelFormat,
#else
PixelFormat::BGR24,
#endif
image
);
emit newFrame(image);
}
}
_busy = false;
}
#ifdef HAVE_TURBO_JPEG
void EncoderThread::processImageMjpeg()
{
if (!_transform && _flipMode != FlipMode::NO_CHANGE)
{
_transform = tjInitTransform();
_xform = new tjtransform();
}
if (_flipMode == FlipMode::BOTH || _flipMode == FlipMode::HORIZONTAL)
{
_xform->op = TJXOP_HFLIP;
tjTransform(_transform, _localData, _size, 1, &_localData, &_size, _xform, TJFLAG_FASTDCT | TJFLAG_FASTUPSAMPLE);
}
if (_flipMode == FlipMode::BOTH || _flipMode == FlipMode::VERTICAL)
{
_xform->op = TJXOP_VFLIP;
tjTransform(_transform, _localData, _size, 1, &_localData, &_size, _xform, TJFLAG_FASTDCT | TJFLAG_FASTUPSAMPLE);
}
if (!_decompress)
{
_decompress = tjInitDecompress();
_scalingFactors = tjGetScalingFactors(&_scalingFactorsCount);
}
int subsamp = 0;
if (tjDecompressHeader2(_decompress, _localData, _size, &_width, &_height, &subsamp) != 0)
return;
int scaledWidth = _width, scaledHeight = _height;
if(_scalingFactors != nullptr && _pixelDecimation > 1)
{
for (int i = 0; i < _scalingFactorsCount ; i++)
{
const int tempWidth = TJSCALED(_width, _scalingFactors[i]);
const int tempHeight = TJSCALED(_height, _scalingFactors[i]);
if (tempWidth <= _width/_pixelDecimation && tempHeight <= _height/_pixelDecimation)
{
scaledWidth = tempWidth;
scaledHeight = tempHeight;
break;
}
}
if (scaledWidth == _width && scaledHeight == _height)
{
scaledWidth = TJSCALED(_width, _scalingFactors[_scalingFactorsCount-1]);
scaledHeight = TJSCALED(_height, _scalingFactors[_scalingFactorsCount-1]);
}
}
Image<ColorRgb> srcImage(scaledWidth, scaledHeight);
if (tjDecompress2(_decompress, _localData , _size, (unsigned char*)srcImage.memptr(), scaledWidth, 0, scaledHeight, TJPF_RGB, TJFLAG_FASTDCT | TJFLAG_FASTUPSAMPLE) != 0)
return;
// got image, process it
if (!(_cropLeft > 0 || _cropTop > 0 || _cropBottom > 0 || _cropRight > 0))
emit newFrame(srcImage);
else
{
// calculate the output size based on the (possibly scaled) source image
int outputWidth = (scaledWidth - _cropLeft - _cropRight);
int outputHeight = (scaledHeight - _cropTop - _cropBottom);
if (outputWidth <= 0 || outputHeight <= 0)
return;
Image<ColorRgb> destImage(outputWidth, outputHeight);
for (unsigned int y = 0; y < destImage.height(); y++)
{
// copy one cropped line; both buffers are owned by the images, so nothing is freed here
unsigned char* source = (unsigned char*)srcImage.memptr() + (y + _cropTop)*srcImage.width()*3 + _cropLeft*3;
unsigned char* dest = (unsigned char*)destImage.memptr() + y*destImage.width()*3;
memcpy(dest, source, destImage.width()*3);
}
// emit
emit newFrame(destImage);
}
}
#endif
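To illustrate the MJPEG scaling above: tjGetScalingFactors() returns the fixed table of ratios libjpeg-turbo supports, and the loop picks the first (largest) factor that satisfies the configured decimation. A minimal standalone sketch of that selection, assuming only libjpeg-turbo; the 1920x1080 input and decimation factor are made-up example values:

#include <turbojpeg.h>
#include <cstdio>

int main()
{
    int count = 0;
    tjscalingfactor* factors = tjGetScalingFactors(&count); // fixed table, largest ratio first
    const int width = 1920, height = 1080, pixelDecimation = 4;
    int scaledWidth = width, scaledHeight = height;
    for (int i = 0; i < count; i++)
    {
        const int w = TJSCALED(width, factors[i]);
        const int h = TJSCALED(height, factors[i]);
        if (w <= width / pixelDecimation && h <= height / pixelDecimation)
        {
            scaledWidth = w; // first factor that is small enough wins
            scaledHeight = h;
            break;
        }
    }
    printf("%dx%d -> %dx%d\n", width, height, scaledWidth, scaledHeight);
    return 0;
}

With 1920x1080 and a decimation of 4, the 1/4 factor is chosen and the decoder emits 480x270 directly, so no separate downscaling pass is needed.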


@@ -0,0 +1,149 @@
#include <QMetaType>
#include <grabber/VideoWrapper.h>
// qt includes
#include <QTimer>
VideoWrapper::VideoWrapper()
#if defined(ENABLE_V4L2)
: GrabberWrapper("V4L2", &_grabber)
#elif defined(ENABLE_MF)
: GrabberWrapper("V4L2:MEDIA_FOUNDATION", &_grabber)
#endif
, _grabber()
{
// register the image type
qRegisterMetaType<Image<ColorRgb>>("Image<ColorRgb>");
// Handle the image in the capture thread (Media Foundation/V4L2) using a direct connection
connect(&_grabber, SIGNAL(newFrame(const Image<ColorRgb>&)), this, SLOT(newFrame(const Image<ColorRgb>&)), Qt::DirectConnection);
connect(&_grabber, SIGNAL(readError(const char*)), this, SLOT(readError(const char*)), Qt::DirectConnection);
}
VideoWrapper::~VideoWrapper()
{
stop();
}
bool VideoWrapper::start()
{
return (_grabber.prepare() && _grabber.start() && GrabberWrapper::start());
}
void VideoWrapper::stop()
{
_grabber.stop();
GrabberWrapper::stop();
}
#if defined(ENABLE_CEC) && !defined(ENABLE_MF)
void VideoWrapper::handleCecEvent(CECEvent event)
{
_grabber.handleCecEvent(event);
}
#endif
void VideoWrapper::handleSettingsUpdate(settings::type type, const QJsonDocument& config)
{
if(type == settings::V4L2 && _grabberName.startsWith("V4L2"))
{
// extract settings
const QJsonObject& obj = config.object();
// set global grabber state
setV4lGrabberState(obj["enable"].toBool(false));
if (getV4lGrabberState())
{
#if defined(ENABLE_MF)
// Device path
_grabber.setDevice(obj["device"].toString("none"));
#endif
#if defined(ENABLE_V4L2)
// Device path and name
_grabber.setDevice(obj["device"].toString("none"), obj["available_devices"].toString("none"));
#endif
// Device input
_grabber.setInput(obj["input"].toInt(0));
// Device resolution
_grabber.setWidthHeight(obj["width"].toInt(0), obj["height"].toInt(0));
// Device framerate
_grabber.setFramerate(obj["fps"].toInt(15));
// Device encoding format
_grabber.setEncoding(obj["encoding"].toString("NO_CHANGE"));
// Video standard
_grabber.setVideoStandard(parseVideoStandard(obj["standard"].toString("NO_CHANGE")));
// Image size decimation
_grabber.setPixelDecimation(obj["sizeDecimation"].toInt(8));
// Flip mode
_grabber.setFlipMode(parseFlipMode(obj["flip"].toString("NO_CHANGE")));
// Image cropping
_grabber.setCropping(
obj["cropLeft"].toInt(0),
obj["cropRight"].toInt(0),
obj["cropTop"].toInt(0),
obj["cropBottom"].toInt(0));
// Brightness, Contrast, Saturation, Hue
_grabber.setBrightnessContrastSaturationHue(
obj["hardware_brightness"].toInt(0),
obj["hardware_contrast"].toInt(0),
obj["hardware_saturation"].toInt(0),
obj["hardware_hue"].toInt(0));
#if defined(ENABLE_CEC) && defined(ENABLE_V4L2)
// CEC Standby
_grabber.setCecDetectionEnable(obj["cecDetection"].toBool(true));
#endif
// Software frame skipping
_grabber.setFpsSoftwareDecimation(obj["fpsSoftwareDecimation"].toInt(1));
// Signal detection
_grabber.setSignalDetectionEnable(obj["signalDetection"].toBool(true));
_grabber.setSignalDetectionOffset(
obj["sDHOffsetMin"].toDouble(0.25),
obj["sDVOffsetMin"].toDouble(0.25),
obj["sDHOffsetMax"].toDouble(0.75),
obj["sDVOffsetMax"].toDouble(0.75));
_grabber.setSignalThreshold(
obj["redSignalThreshold"].toDouble(0.0)/100.0,
obj["greenSignalThreshold"].toDouble(0.0)/100.0,
obj["blueSignalThreshold"].toDouble(0.0)/100.0,
obj["noSignalCounterThreshold"].toInt(50));
// Reload the Grabber if any settings have been changed that require it
_grabber.reload(getV4lGrabberState());
}
else
stop();
}
}
void VideoWrapper::newFrame(const Image<ColorRgb> &image)
{
emit systemImage(_grabberName, image);
}
void VideoWrapper::readError(const char* err)
{
Error(_log, "Stop grabber, because reading device failed. (%s)", err);
stop();
}
void VideoWrapper::action()
{
// dummy, as the V4L2/MF grabber pushes frames via stream notifications
}
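The settings object consumed by handleSettingsUpdate() above is a flat JSON map; a hedged sketch of driving it programmatically follows (applyExampleConfig is a hypothetical helper, the device path is a made-up example, and the keys are the ones parsed above):

#include <grabber/VideoWrapper.h>
#include <QJsonDocument>
#include <QJsonObject>

void applyExampleConfig(VideoWrapper& wrapper)
{
    QJsonObject obj;
    obj["enable"] = true;
    obj["device"] = "/dev/video0"; // made-up example path
    obj["input"] = 0;
    obj["width"] = 640;
    obj["height"] = 480;
    obj["fps"] = 25;
    obj["encoding"] = "NO_CHANGE";
    obj["sizeDecimation"] = 8;
    wrapper.handleSettingsUpdate(settings::V4L2, QJsonDocument(obj));
}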


@@ -0,0 +1,813 @@
#include "MFSourceReaderCB.h"
#include "grabber/MFGrabber.h"
// Constants
namespace { const bool verbose = false; }
// Need more video properties? Visit https://docs.microsoft.com/en-us/windows/win32/api/strmif/ne-strmif-videoprocampproperty
using VideoProcAmpPropertyMap = QMap<VideoProcAmpProperty, QString>;
inline QMap<VideoProcAmpProperty, QString> initVideoProcAmpPropertyMap()
{
QMap<VideoProcAmpProperty, QString> propertyMap
{
{VideoProcAmp_Brightness, "brightness" },
{VideoProcAmp_Contrast , "contrast" },
{VideoProcAmp_Saturation, "saturation" },
{VideoProcAmp_Hue , "hue" }
};
return propertyMap;
};
Q_GLOBAL_STATIC_WITH_ARGS(VideoProcAmpPropertyMap, _videoProcAmpPropertyMap, (initVideoProcAmpPropertyMap()));
MFGrabber::MFGrabber()
: Grabber("V4L2:MEDIA_FOUNDATION")
, _currentDeviceName("none")
, _newDeviceName("none")
, _hr(S_FALSE)
, _sourceReader(nullptr)
, _sourceReaderCB(nullptr)
, _threadManager(nullptr)
, _pixelFormat(PixelFormat::NO_CHANGE)
, _pixelFormatConfig(PixelFormat::NO_CHANGE)
, _lineLength(-1)
, _frameByteSize(-1)
, _noSignalCounterThreshold(40)
, _noSignalCounter(0)
, _brightness(0)
, _contrast(0)
, _saturation(0)
, _hue(0)
, _currentFrame(0)
, _noSignalThresholdColor(ColorRgb{0,0,0})
, _signalDetectionEnabled(true)
, _noSignalDetected(false)
, _initialized(false)
, _reload(false)
, _x_frac_min(0.25)
, _y_frac_min(0.25)
, _x_frac_max(0.75)
, _y_frac_max(0.75)
{
CoInitializeEx(0, COINIT_MULTITHREADED);
_hr = MFStartup(MF_VERSION, MFSTARTUP_NOSOCKET);
if (FAILED(_hr))
CoUninitialize();
}
MFGrabber::~MFGrabber()
{
uninit();
SAFE_RELEASE(_sourceReader);
if (_sourceReaderCB != nullptr)
while (_sourceReaderCB->isBusy()) {}
SAFE_RELEASE(_sourceReaderCB);
if (_threadManager)
delete _threadManager;
_threadManager = nullptr;
if (SUCCEEDED(_hr) && SUCCEEDED(MFShutdown()))
CoUninitialize();
}
bool MFGrabber::prepare()
{
if (SUCCEEDED(_hr))
{
if (!_sourceReaderCB)
_sourceReaderCB = new SourceReaderCB(this);
if (!_threadManager)
_threadManager = new EncoderThreadManager(this);
return (_sourceReaderCB != nullptr && _threadManager != nullptr);
}
return false;
}
bool MFGrabber::start()
{
if (!_initialized)
{
if (init())
{
connect(_threadManager, &EncoderThreadManager::newFrame, this, &MFGrabber::newThreadFrame);
_threadManager->start();
DebugIf(verbose, _log, "Decoding threads: %d", _threadManager->_threadCount);
start_capturing();
Info(_log, "Started");
return true;
}
else
{
Error(_log, "The Media Foundation Grabber could not be started");
return false;
}
}
else
return true;
}
void MFGrabber::stop()
{
if (_initialized)
{
_initialized = false;
_threadManager->stop();
disconnect(_threadManager, nullptr, nullptr, nullptr);
_sourceReader->Flush(MF_SOURCE_READER_FIRST_VIDEO_STREAM);
SAFE_RELEASE(_sourceReader);
_deviceProperties.clear();
_deviceControls.clear();
Info(_log, "Stopped");
}
}
bool MFGrabber::init()
{
// enumerate the video capture devices on the user's system
enumVideoCaptureDevices();
if (!_initialized && SUCCEEDED(_hr))
{
int deviceIndex = -1;
bool noDeviceName = _currentDeviceName.compare("none", Qt::CaseInsensitive) == 0 || _currentDeviceName.compare("auto", Qt::CaseInsensitive) == 0;
if (noDeviceName)
return false;
if (!_deviceProperties.contains(_currentDeviceName))
{
Debug(_log, "Configured device '%s' is not available.", QSTRING_CSTR(_currentDeviceName));
return false;
}
Debug(_log, "Searching for %s %d x %d @ %d fps (%s)", QSTRING_CSTR(_currentDeviceName), _width, _height,_fps, QSTRING_CSTR(pixelFormatToString(_pixelFormat)));
QList<DeviceProperties> dev = _deviceProperties[_currentDeviceName];
for ( int i = 0; i < dev.count() && deviceIndex < 0; ++i )
{
if (dev[i].width != _width || dev[i].height != _height || dev[i].fps != _fps || dev[i].pf != _pixelFormat)
continue;
else
deviceIndex = i;
}
if (deviceIndex >= 0 && SUCCEEDED(init_device(_currentDeviceName, dev[deviceIndex])))
{
_initialized = true;
_newDeviceName = _currentDeviceName;
}
else
{
Debug(_log, "Configured device '%s' is not available.", QSTRING_CSTR(_currentDeviceName));
return false;
}
}
return _initialized;
}
void MFGrabber::uninit()
{
// stop if the grabber was not stopped
if (_initialized)
{
Debug(_log,"Uninit grabber: %s", QSTRING_CSTR(_newDeviceName));
stop();
}
}
HRESULT MFGrabber::init_device(QString deviceName, DeviceProperties props)
{
PixelFormat pixelformat = GetPixelFormatForGuid(props.guid);
QString error;
IMFMediaSource* device = nullptr;
IMFAttributes* deviceAttributes = nullptr, *sourceReaderAttributes = nullptr;
IMFMediaType* type = nullptr;
HRESULT hr = S_OK;
Debug(_log, "Init %s, %d x %d @ %d fps (%s)", QSTRING_CSTR(deviceName), props.width, props.height, props.fps, QSTRING_CSTR(pixelFormatToString(pixelformat)));
DebugIf (verbose, _log, "Symbolic link: %s", QSTRING_CSTR(props.symlink));
hr = MFCreateAttributes(&deviceAttributes, 2);
if (FAILED(hr))
{
error = QString("Could not create device attributes (%1)").arg(hr);
goto done;
}
hr = deviceAttributes->SetGUID(MF_DEVSOURCE_ATTRIBUTE_SOURCE_TYPE, MF_DEVSOURCE_ATTRIBUTE_SOURCE_TYPE_VIDCAP_GUID);
if (FAILED(hr))
{
error = QString("SetGUID_MF_DEVSOURCE_ATTRIBUTE_SOURCE_TYPE (%1)").arg(hr);
goto done;
}
hr = deviceAttributes->SetString(MF_DEVSOURCE_ATTRIBUTE_SOURCE_TYPE_VIDCAP_SYMBOLIC_LINK, (LPCWSTR)props.symlink.utf16());
if (FAILED(hr))
{
error = QString("IMFAttributes_SetString_MF_DEVSOURCE_ATTRIBUTE_SOURCE_TYPE_VIDCAP_SYMBOLIC_LINK (%1)").arg(hr);
goto done;
}
hr = MFCreateDeviceSource(deviceAttributes, &device);
if (FAILED(hr))
{
error = QString("MFCreateDeviceSource (%1)").arg(hr);
goto done;
}
if (!device)
{
error = QString("Could not open device (%1)").arg(hr);
goto done;
}
else
Debug(_log, "Device opened");
IAMVideoProcAmp *pProcAmp = nullptr;
if (SUCCEEDED(device->QueryInterface(IID_PPV_ARGS(&pProcAmp))))
{
for (auto control : _deviceControls[deviceName])
{
switch (_videoProcAmpPropertyMap->key(control.property))
{
case VideoProcAmpProperty::VideoProcAmp_Brightness:
if (_brightness >= control.minValue && _brightness <= control.maxValue && _brightness != control.currentValue)
{
Debug(_log,"Set brightness to %i", _brightness);
pProcAmp->Set(VideoProcAmp_Brightness, _brightness, VideoProcAmp_Flags_Manual);
}
break;
case VideoProcAmpProperty::VideoProcAmp_Contrast:
if (_contrast >= control.minValue && _contrast <= control.maxValue && _contrast != control.currentValue)
{
Debug(_log,"Set contrast to %i", _contrast);
pProcAmp->Set(VideoProcAmp_Contrast, _contrast, VideoProcAmp_Flags_Manual);
}
break;
case VideoProcAmpProperty::VideoProcAmp_Saturation:
if (_saturation >= control.minValue && _saturation <= control.maxValue && _saturation != control.currentValue)
{
Debug(_log,"Set saturation to %i", _saturation);
pProcAmp->Set(VideoProcAmp_Saturation, _saturation, VideoProcAmp_Flags_Manual);
}
break;
case VideoProcAmpProperty::VideoProcAmp_Hue:
if (_hue >= control.minValue && _hue <= control.maxValue && _hue != control.currentValue)
{
Debug(_log,"Set hue to %i", _hue);
pProcAmp->Set(VideoProcAmp_Hue, _hue, VideoProcAmp_Flags_Manual);
}
break;
default:
break;
}
}
}
hr = MFCreateAttributes(&sourceReaderAttributes, 1);
if (FAILED(hr))
{
error = QString("Could not create Source Reader attributes (%1)").arg(hr);
goto done;
}
hr = sourceReaderAttributes->SetUnknown(MF_SOURCE_READER_ASYNC_CALLBACK, (IMFSourceReaderCallback *)_sourceReaderCB);
if (FAILED(hr))
{
error = QString("Could not set stream parameter: SetUnknown_MF_SOURCE_READER_ASYNC_CALLBACK (%1)").arg(hr);
hr = E_INVALIDARG;
goto done;
}
hr = MFCreateSourceReaderFromMediaSource(device, sourceReaderAttributes, &_sourceReader);
if (FAILED(hr))
{
error = QString("Could not create the Source Reader (%1)").arg(hr);
goto done;
}
hr = MFCreateMediaType(&type);
if (FAILED(hr))
{
error = QString("Could not create an empty media type (%1)").arg(hr);
goto done;
}
hr = type->SetGUID(MF_MT_MAJOR_TYPE, MFMediaType_Video);
if (FAILED(hr))
{
error = QString("Could not set stream parameter: SetGUID_MF_MT_MAJOR_TYPE (%1)").arg(hr);
goto done;
}
hr = type->SetGUID(MF_MT_SUBTYPE, props.guid);
if (FAILED(hr))
{
error = QString("Could not set stream parameter: SetGUID_MF_MT_SUBTYPE (%1)").arg(hr);
goto done;
}
hr = MFSetAttributeSize(type, MF_MT_FRAME_SIZE, props.width, props.height);
if (FAILED(hr))
{
error = QString("Could not set stream parameter: SMFSetAttributeSize_MF_MT_FRAME_SIZE (%1)").arg(hr);
goto done;
}
hr = MFSetAttributeSize(type, MF_MT_FRAME_RATE, props.numerator, props.denominator);
if (FAILED(hr))
{
error = QString("Could not set stream parameter: MFSetAttributeSize_MF_MT_FRAME_RATE (%1)").arg(hr);
goto done;
}
hr = MFSetAttributeRatio(type, MF_MT_PIXEL_ASPECT_RATIO, 1, 1);
if (FAILED(hr))
{
error = QString("Could not set stream parameter: MFSetAttributeRatio_MF_MT_PIXEL_ASPECT_RATIO (%1)").arg(hr);
goto done;
}
hr = _sourceReaderCB->InitializeVideoEncoder(type, pixelformat);
if (FAILED(hr))
{
error = QString("Failed to initialize the Video Encoder (%1)").arg(hr);
goto done;
}
hr = _sourceReader->SetCurrentMediaType(MF_SOURCE_READER_FIRST_VIDEO_STREAM, nullptr, type);
if (FAILED(hr))
{
error = QString("Failed to set media type on Source Reader (%1)").arg(hr);
}
done:
if (FAILED(hr))
{
emit readError(QSTRING_CSTR(error));
SAFE_RELEASE(_sourceReader);
}
else
{
_pixelFormat = props.pf;
_width = props.width;
_height = props.height;
_frameByteSize = _width * _height * 3;
_lineLength = _width * 3;
}
// Cleanup
SAFE_RELEASE(deviceAttributes);
SAFE_RELEASE(device);
SAFE_RELEASE(pProcAmp);
SAFE_RELEASE(type);
SAFE_RELEASE(sourceReaderAttributes);
return hr;
}
void MFGrabber::enumVideoCaptureDevices()
{
_deviceProperties.clear();
_deviceControls.clear();
IMFAttributes* attr;
if (SUCCEEDED(MFCreateAttributes(&attr, 1)))
{
if (SUCCEEDED(attr->SetGUID(MF_DEVSOURCE_ATTRIBUTE_SOURCE_TYPE, MF_DEVSOURCE_ATTRIBUTE_SOURCE_TYPE_VIDCAP_GUID)))
{
UINT32 count;
IMFActivate** devices;
if (SUCCEEDED(MFEnumDeviceSources(attr, &devices, &count)))
{
DebugIf (verbose, _log, "Detected devices: %u", count);
for (UINT32 i = 0; i < count; i++)
{
UINT32 length;
LPWSTR name;
LPWSTR symlink;
if (SUCCEEDED(devices[i]->GetAllocatedString(MF_DEVSOURCE_ATTRIBUTE_FRIENDLY_NAME, &name, &length)))
{
if (SUCCEEDED(devices[i]->GetAllocatedString(MF_DEVSOURCE_ATTRIBUTE_SOURCE_TYPE_VIDCAP_SYMBOLIC_LINK, &symlink, &length)))
{
QList<DeviceProperties> devicePropertyList;
QString dev = QString::fromUtf16((const ushort*)name);
IMFMediaSource *pSource = nullptr;
if (SUCCEEDED(devices[i]->ActivateObject(IID_PPV_ARGS(&pSource))))
{
DebugIf (verbose, _log, "Found capture device: %s", QSTRING_CSTR(dev));
IMFMediaType *pType = nullptr;
IMFSourceReader* reader;
if (SUCCEEDED(MFCreateSourceReaderFromMediaSource(pSource, NULL, &reader)))
{
for (DWORD j = 0; ; j++)
{
if (FAILED(reader->GetNativeMediaType((DWORD)MF_SOURCE_READER_FIRST_VIDEO_STREAM, j, &pType)))
break;
GUID format;
UINT32 width = 0, height = 0, numerator = 0, denominator = 0;
if ( SUCCEEDED(pType->GetGUID(MF_MT_SUBTYPE, &format)) &&
SUCCEEDED(MFGetAttributeSize(pType, MF_MT_FRAME_SIZE, &width, &height)) &&
SUCCEEDED(MFGetAttributeRatio(pType, MF_MT_FRAME_RATE, &numerator, &denominator)))
{
PixelFormat pixelformat = GetPixelFormatForGuid(format);
if (pixelformat != PixelFormat::NO_CHANGE)
{
DeviceProperties properties;
properties.symlink = QString::fromUtf16((const ushort*)symlink);
properties.width = width;
properties.height = height;
properties.fps = numerator / denominator;
properties.numerator = numerator;
properties.denominator = denominator;
properties.pf = pixelformat;
properties.guid = format;
devicePropertyList.append(properties);
DebugIf (verbose, _log, "%s %d x %d @ %d fps (%s)", QSTRING_CSTR(dev), properties.width, properties.height, properties.fps, QSTRING_CSTR(pixelFormatToString(properties.pf)));
}
}
SAFE_RELEASE(pType);
}
IAMVideoProcAmp *videoProcAmp = nullptr;
if (SUCCEEDED(pSource->QueryInterface(IID_PPV_ARGS(&videoProcAmp))))
{
QList<DeviceControls> deviceControlList;
for (auto it = _videoProcAmpPropertyMap->begin(); it != _videoProcAmpPropertyMap->end(); it++)
{
long minVal, maxVal, stepVal, defaultVal, flag;
if (SUCCEEDED(videoProcAmp->GetRange(it.key(), &minVal, &maxVal, &stepVal, &defaultVal, &flag)))
{
if (flag & VideoProcAmp_Flags_Manual)
{
DeviceControls control;
control.property = it.value();
control.minValue = minVal;
control.maxValue = maxVal;
control.step = stepVal;
control.defaultValue = defaultVal;
long currentVal;
if (SUCCEEDED(videoProcAmp->Get(it.key(), &currentVal, &flag)))
{
control.currentValue = currentVal;
DebugIf(verbose, _log, "%s: min=%i, max=%i, step=%i, default=%i, current=%i", QSTRING_CSTR(it.value()), minVal, maxVal, stepVal, defaultVal, currentVal);
}
else
break;
deviceControlList.append(control);
}
}
}
if (!deviceControlList.isEmpty())
_deviceControls.insert(dev, deviceControlList);
}
SAFE_RELEASE(videoProcAmp);
SAFE_RELEASE(reader);
}
SAFE_RELEASE(pSource);
}
if (!devicePropertyList.isEmpty())
_deviceProperties.insert(dev, devicePropertyList);
}
CoTaskMemFree(symlink);
}
CoTaskMemFree(name);
SAFE_RELEASE(devices[i]);
}
CoTaskMemFree(devices);
}
SAFE_RELEASE(attr);
}
}
}
void MFGrabber::start_capturing()
{
if (_initialized && _sourceReader && _threadManager)
{
HRESULT hr = _sourceReader->ReadSample(MF_SOURCE_READER_FIRST_VIDEO_STREAM, 0, NULL, NULL, NULL, NULL);
if (!SUCCEEDED(hr))
Error(_log, "ReadSample (%i)", hr);
}
}
void MFGrabber::process_image(const void *frameImageBuffer, int size)
{
int processFrameIndex = _currentFrame++;
// frame skipping
if ((processFrameIndex % (_fpsSoftwareDecimation + 1) != 0) && (_fpsSoftwareDecimation > 0))
return;
// We do want a new frame...
if (size < _frameByteSize && _pixelFormat != PixelFormat::MJPEG)
Error(_log, "Frame too small: %d != %d", size, _frameByteSize);
else if (_threadManager != nullptr)
{
for (int i = 0; i < _threadManager->_threadCount; i++)
{
if (!_threadManager->_threads[i]->isBusy())
{
_threadManager->_threads[i]->setup(_pixelFormat, (uint8_t*)frameImageBuffer, size, _width, _height, _lineLength, _cropLeft, _cropTop, _cropBottom, _cropRight, _videoMode, _flipMode, _pixelDecimation);
_threadManager->_threads[i]->process();
break;
}
}
}
}
void MFGrabber::receive_image(const void *frameImageBuffer, int size)
{
process_image(frameImageBuffer, size);
start_capturing();
}
void MFGrabber::newThreadFrame(Image<ColorRgb> image)
{
if (_signalDetectionEnabled)
{
// check signal (only in center of the resulting image, because some grabbers have noise values along the borders)
bool noSignal = true;
// top left
unsigned xOffset = image.width() * _x_frac_min;
unsigned yOffset = image.height() * _y_frac_min;
// bottom right
unsigned xMax = image.width() * _x_frac_max;
unsigned yMax = image.height() * _y_frac_max;
for (unsigned x = xOffset; noSignal && x < xMax; ++x)
for (unsigned y = yOffset; noSignal && y < yMax; ++y)
noSignal &= (ColorRgb&)image(x, y) <= _noSignalThresholdColor;
if (noSignal)
++_noSignalCounter;
else
{
if (_noSignalCounter >= _noSignalCounterThreshold)
{
_noSignalDetected = true;
Info(_log, "Signal detected");
}
_noSignalCounter = 0;
}
if ( _noSignalCounter < _noSignalCounterThreshold)
{
emit newFrame(image);
}
else if (_noSignalCounter == _noSignalCounterThreshold)
{
_noSignalDetected = false;
Info(_log, "Signal lost");
}
}
else
emit newFrame(image);
}
void MFGrabber::setDevice(const QString& device)
{
if (_currentDeviceName != device)
{
_currentDeviceName = device;
_reload = true;
}
}
bool MFGrabber::setInput(int input)
{
if (Grabber::setInput(input))
{
_reload = true;
return true;
}
return false;
}
bool MFGrabber::setWidthHeight(int width, int height)
{
if (Grabber::setWidthHeight(width, height))
{
_reload = true;
return true;
}
return false;
}
void MFGrabber::setEncoding(QString enc)
{
if (_pixelFormatConfig != parsePixelFormat(enc))
{
_pixelFormatConfig = parsePixelFormat(enc);
if (_initialized)
{
Debug(_log,"Set hardware encoding to: %s", QSTRING_CSTR(enc.toUpper()));
_reload = true;
}
else
_pixelFormat = _pixelFormatConfig;
}
}
void MFGrabber::setBrightnessContrastSaturationHue(int brightness, int contrast, int saturation, int hue)
{
if (_brightness != brightness || _contrast != contrast || _saturation != saturation || _hue != hue)
{
_brightness = brightness;
_contrast = contrast;
_saturation = saturation;
_hue = hue;
_reload = true;
}
}
void MFGrabber::setSignalThreshold(double redSignalThreshold, double greenSignalThreshold, double blueSignalThreshold, int noSignalCounterThreshold)
{
_noSignalThresholdColor.red = uint8_t(255*redSignalThreshold);
_noSignalThresholdColor.green = uint8_t(255*greenSignalThreshold);
_noSignalThresholdColor.blue = uint8_t(255*blueSignalThreshold);
_noSignalCounterThreshold = qMax(1, noSignalCounterThreshold);
if (_signalDetectionEnabled)
Info(_log, "Signal threshold set to: {%d, %d, %d} and frames: %d", _noSignalThresholdColor.red, _noSignalThresholdColor.green, _noSignalThresholdColor.blue, _noSignalCounterThreshold );
}
void MFGrabber::setSignalDetectionOffset(double horizontalMin, double verticalMin, double horizontalMax, double verticalMax)
{
// rainbow 16 stripes 0.47 0.2 0.49 0.8
// unicolor: 0.25 0.25 0.75 0.75
_x_frac_min = horizontalMin;
_y_frac_min = verticalMin;
_x_frac_max = horizontalMax;
_y_frac_max = verticalMax;
if (_signalDetectionEnabled)
Info(_log, "Signal detection area set to: %f,%f x %f,%f", _x_frac_min, _y_frac_min, _x_frac_max, _y_frac_max );
}
void MFGrabber::setSignalDetectionEnable(bool enable)
{
if (_signalDetectionEnabled != enable)
{
_signalDetectionEnabled = enable;
if (_initialized)
Info(_log, "Signal detection is now %s", enable ? "enabled" : "disabled");
}
}
bool MFGrabber::reload(bool force)
{
if (_reload || force)
{
if (_sourceReader)
{
Info(_log,"Reloading Media Foundation Grabber");
uninit();
_pixelFormat = _pixelFormatConfig;
_newDeviceName = _currentDeviceName;
}
_reload = false;
return prepare() && start();
}
return false;
}
QJsonArray MFGrabber::discover(const QJsonObject& params)
{
DebugIf (verbose, _log, "params: [%s]", QString(QJsonDocument(params).toJson(QJsonDocument::Compact)).toUtf8().constData());
enumVideoCaptureDevices();
QJsonArray inputsDiscovered;
for (auto it = _deviceProperties.begin(); it != _deviceProperties.end(); ++it)
{
QJsonObject device, in;
QJsonArray video_inputs, formats;
device["device"] = it.key();
device["device_name"] = it.key();
device["type"] = "v4l2";
in["name"] = "";
in["inputIdx"] = 0;
QStringList encodingFormats = QStringList();
for (int i = 0; i < _deviceProperties[it.key()].count(); ++i )
if (!encodingFormats.contains(pixelFormatToString(_deviceProperties[it.key()][i].pf), Qt::CaseInsensitive))
encodingFormats << pixelFormatToString(_deviceProperties[it.key()][i].pf).toLower();
for (auto encodingFormat : encodingFormats)
{
QJsonObject format;
QJsonArray resolutionArray;
format["format"] = encodingFormat;
QMultiMap<int, int> deviceResolutions = QMultiMap<int, int>();
for (int i = 0; i < _deviceProperties[it.key()].count(); ++i )
if (!deviceResolutions.contains(_deviceProperties[it.key()][i].width, _deviceProperties[it.key()][i].height) && _deviceProperties[it.key()][i].pf == parsePixelFormat(encodingFormat))
deviceResolutions.insert(_deviceProperties[it.key()][i].width, _deviceProperties[it.key()][i].height);
for (auto width_height = deviceResolutions.begin(); width_height != deviceResolutions.end(); width_height++)
{
QJsonObject resolution;
QJsonArray fps;
resolution["width"] = width_height.key();
resolution["height"] = width_height.value();
QIntList framerates = QIntList();
for (int i = 0; i < _deviceProperties[it.key()].count(); ++i )
{
int frameRate = _deviceProperties[it.key()][i].numerator / _deviceProperties[it.key()][i].denominator;
if (!framerates.contains(frameRate) && _deviceProperties[it.key()][i].pf == parsePixelFormat(encodingFormat) && _deviceProperties[it.key()][i].width == width_height.key() && _deviceProperties[it.key()][i].height == width_height.value())
framerates << frameRate;
}
for (auto framerate : framerates)
fps.append(framerate);
resolution["fps"] = fps;
resolutionArray.append(resolution);
}
format["resolutions"] = resolutionArray;
formats.append(format);
}
in["formats"] = formats;
video_inputs.append(in);
device["video_inputs"] = video_inputs;
QJsonObject controls, controls_default;
for (auto control : _deviceControls[it.key()])
{
QJsonObject property;
property["minValue"] = control.minValue;
property["maxValue"] = control.maxValue;
property["step"] = control.step;
property["current"] = control.currentValue;
controls[control.property] = property;
controls_default[control.property] = control.defaultValue;
}
device["properties"] = controls;
QJsonObject defaults, video_inputs_default, format_default, resolution_default;
resolution_default["width"] = 640;
resolution_default["height"] = 480;
resolution_default["fps"] = 25;
format_default["format"] = "bgr24";
format_default["resolution"] = resolution_default;
video_inputs_default["inputIdx"] = 0;
video_inputs_default["standards"] = "PAL";
video_inputs_default["formats"] = format_default;
defaults["video_input"] = video_inputs_default;
defaults["properties"] = controls_default;
device["default"] = defaults;
inputsDiscovered.append(device);
}
_deviceProperties.clear();
_deviceControls.clear();
DebugIf (verbose, _log, "device: [%s]", QString(QJsonDocument(inputsDiscovered).toJson(QJsonDocument::Compact)).toUtf8().constData());
return inputsDiscovered;
}
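Taken together, a rough usage sketch of the grabber lifecycle built from the methods above (runGrabberExample and the device name are hypothetical; prepare/discover/setDevice/setWidthHeight/start are defined in this file, and setFramerate is the same call VideoWrapper issues against the grabber):

#include <grabber/MFGrabber.h>
#include <QJsonArray>
#include <QJsonObject>

void runGrabberExample()
{
    MFGrabber grabber;
    if (!grabber.prepare())
        return; // MF startup or callback/thread-manager allocation failed

    // Enumerate capture devices and their formats (same JSON the WebUI consumes);
    // inspect 'devices' to pick a valid friendly name and resolution
    QJsonArray devices = grabber.discover(QJsonObject());

    grabber.setDevice("My Capture Device"); // placeholder friendly name
    grabber.setWidthHeight(640, 480);
    grabber.setFramerate(25);
    grabber.start(); // runs init() and start_capturing() on success
}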


@@ -0,0 +1,401 @@
#pragma once
#include <mfapi.h>
#include <mftransform.h>
#include <dmo.h>
#include <wmcodecdsp.h>
#include <mfidl.h>
#include <mfreadwrite.h>
#include <shlwapi.h>
#include <mferror.h>
#include <strmif.h>
#include <comdef.h>
#include <atomic>
#pragma comment (lib, "ole32.lib")
#pragma comment (lib, "mf.lib")
#pragma comment (lib, "mfplat.lib")
#pragma comment (lib, "mfuuid.lib")
#pragma comment (lib, "mfreadwrite.lib")
#pragma comment (lib, "strmiids.lib")
#pragma comment (lib, "wmcodecdspuuid.lib")
#include <grabber/MFGrabber.h>
#define SAFE_RELEASE(x) if(x) { x->Release(); x = nullptr; }
// Need more supported formats? Visit https://docs.microsoft.com/en-us/windows/win32/medfound/colorconverter
static PixelFormat GetPixelFormatForGuid(const GUID guid)
{
if (IsEqualGUID(guid, MFVideoFormat_RGB32)) return PixelFormat::RGB32;
if (IsEqualGUID(guid, MFVideoFormat_RGB24)) return PixelFormat::BGR24;
if (IsEqualGUID(guid, MFVideoFormat_YUY2)) return PixelFormat::YUYV;
if (IsEqualGUID(guid, MFVideoFormat_UYVY)) return PixelFormat::UYVY;
if (IsEqualGUID(guid, MFVideoFormat_MJPG)) return PixelFormat::MJPEG;
if (IsEqualGUID(guid, MFVideoFormat_NV12)) return PixelFormat::NV12;
if (IsEqualGUID(guid, MFVideoFormat_I420)) return PixelFormat::I420;
return PixelFormat::NO_CHANGE;
};
class SourceReaderCB : public IMFSourceReaderCallback
{
public:
SourceReaderCB(MFGrabber* grabber)
: _nRefCount(1)
, _grabber(grabber)
, _bEOS(FALSE)
, _hrStatus(S_OK)
, _isBusy(false)
, _transform(nullptr)
, _pixelformat(PixelFormat::NO_CHANGE)
{
// Initialize critical section.
InitializeCriticalSection(&_critsec);
}
// IUnknown methods
STDMETHODIMP QueryInterface(REFIID iid, void** ppv)
{
static const QITAB qit[] =
{
QITABENT(SourceReaderCB, IMFSourceReaderCallback),
{ 0 },
};
return QISearch(this, qit, iid, ppv);
}
STDMETHODIMP_(ULONG) AddRef()
{
return InterlockedIncrement(&_nRefCount);
}
STDMETHODIMP_(ULONG) Release()
{
ULONG uCount = InterlockedDecrement(&_nRefCount);
if (uCount == 0)
{
delete this;
}
return uCount;
}
// IMFSourceReaderCallback methods
STDMETHODIMP OnReadSample(HRESULT hrStatus, DWORD /*dwStreamIndex*/,
DWORD dwStreamFlags, LONGLONG llTimestamp, IMFSample* pSample)
{
EnterCriticalSection(&_critsec);
_isBusy = true;
if (_grabber->_sourceReader == nullptr)
{
_isBusy = false;
LeaveCriticalSection(&_critsec);
return S_OK;
}
if (dwStreamFlags & MF_SOURCE_READERF_STREAMTICK)
{
Debug(_grabber->_log, "Skipping stream gap");
LeaveCriticalSection(&_critsec);
_grabber->_sourceReader->ReadSample(MF_SOURCE_READER_FIRST_VIDEO_STREAM, 0, nullptr, nullptr, nullptr, nullptr);
return S_OK;
}
if (dwStreamFlags & MF_SOURCE_READERF_NATIVEMEDIATYPECHANGED)
{
IMFMediaType* type = nullptr;
GUID format;
_grabber->_sourceReader->GetNativeMediaType(MF_SOURCE_READER_FIRST_VIDEO_STREAM, MF_SOURCE_READER_CURRENT_TYPE_INDEX, &type);
type->GetGUID(MF_MT_SUBTYPE, &format);
Debug(_grabber->_log, "Native media type changed");
InitializeVideoEncoder(type, GetPixelFormatForGuid(format));
SAFE_RELEASE(type);
}
if (dwStreamFlags & MF_SOURCE_READERF_CURRENTMEDIATYPECHANGED)
{
IMFMediaType* type = nullptr;
GUID format;
_grabber->_sourceReader->GetCurrentMediaType(MF_SOURCE_READER_FIRST_VIDEO_STREAM, &type);
type->GetGUID(MF_MT_SUBTYPE, &format);
Debug(_grabber->_log, "Current media type changed");
InitializeVideoEncoder(type, GetPixelFormatForGuid(format));
SAFE_RELEASE(type);
}
// Variables declaration (up front, as the error paths below jump past this point)
IMFMediaBuffer* buffer = nullptr;
BYTE* data = nullptr;
DWORD maxLength = 0, currentLength = 0;
if (FAILED(hrStatus))
{
_hrStatus = hrStatus;
_com_error error(_hrStatus);
Error(_grabber->_log, "%s", error.ErrorMessage());
goto done;
}
if (!pSample)
{
Error(_grabber->_log, "Media sample is empty");
goto done;
}
if (_pixelformat != PixelFormat::MJPEG && _pixelformat != PixelFormat::BGR24 && _pixelformat != PixelFormat::NO_CHANGE)
{
pSample = TransformSample(_transform, pSample);
if (!pSample)
{
Error(_grabber->_log, "Sample transformation failed");
goto done;
}
}
_hrStatus = pSample->ConvertToContiguousBuffer(&buffer);
if (FAILED(_hrStatus))
{
_com_error error(_hrStatus);
Error(_grabber->_log, "Buffer conversion failed => %s", error.ErrorMessage());
goto done;
}
_hrStatus = buffer->Lock(&data, &maxLength, &currentLength);
if (FAILED(_hrStatus))
{
_com_error error(_hrStatus);
Error(_grabber->_log, "Access to the buffer memory failed => %s", error.ErrorMessage());
goto done;
}
_grabber->receive_image(data, currentLength);
_hrStatus = buffer->Unlock();
if (FAILED(_hrStatus))
{
_com_error error(_hrStatus);
Error(_grabber->_log, "Unlocking the buffer memory failed => %s", error.ErrorMessage());
}
done:
SAFE_RELEASE(buffer);
if (MF_SOURCE_READERF_ENDOFSTREAM & dwStreamFlags)
_bEOS = TRUE; // Reached the end of the stream.
if (_pixelformat != PixelFormat::MJPEG && _pixelformat != PixelFormat::BGR24 && _pixelformat != PixelFormat::NO_CHANGE)
SAFE_RELEASE(pSample);
_isBusy = false;
LeaveCriticalSection(&_critsec);
return _hrStatus;
}
HRESULT InitializeVideoEncoder(IMFMediaType* type, PixelFormat format)
{
_pixelformat = format;
if (format == PixelFormat::MJPEG || format == PixelFormat::BGR24 || format == PixelFormat::NO_CHANGE)
return S_OK;
// Variable declaration
IMFMediaType* output = nullptr;
DWORD mftStatus = 0;
QString error = "";
// Create instance of IMFTransform interface pointer as CColorConvertDMO
_hrStatus = CoCreateInstance(CLSID_CColorConvertDMO, nullptr, CLSCTX_INPROC_SERVER, IID_IMFTransform, (void**)&_transform);
if (FAILED(_hrStatus))
{
_com_error error(_hrStatus);
Error(_grabber->_log, "Creation of the Color Converter failed => %s", error.ErrorMessage());
goto done;
}
// Set input type as media type of our input stream
_hrStatus = _transform->SetInputType(0, type, 0);
if (FAILED(_hrStatus))
{
_com_error error(_hrStatus);
Error(_grabber->_log, "Setting the input media type failed => %s", error.ErrorMessage());
goto done;
}
// Create new media type
_hrStatus = MFCreateMediaType(&output);
if (FAILED(_hrStatus))
{
_com_error error(_hrStatus);
Error(_grabber->_log, "Creating a new media type failed => %s", error.ErrorMessage());
goto done;
}
// Copy all attributes from input type to output media type
_hrStatus = type->CopyAllItems(output);
if (FAILED(_hrStatus))
{
_com_error error(_hrStatus);
Error(_grabber->_log, "Copying of all attributes from input to output media type failed => %s", error.ErrorMessage());
goto done;
}
UINT32 width, height;
UINT32 numerator, denominator;
// Fill the missing attributes
if (FAILED(output->SetGUID(MF_MT_MAJOR_TYPE, MFMediaType_Video)) ||
FAILED(output->SetGUID(MF_MT_SUBTYPE, MFVideoFormat_RGB24)) ||
FAILED(output->SetUINT32(MF_MT_FIXED_SIZE_SAMPLES, TRUE)) ||
FAILED(output->SetUINT32(MF_MT_ALL_SAMPLES_INDEPENDENT, TRUE)) ||
FAILED(output->SetUINT32(MF_MT_INTERLACE_MODE, MFVideoInterlace_Progressive)) ||
FAILED(MFGetAttributeSize(type, MF_MT_FRAME_SIZE, &width, &height)) ||
FAILED(MFSetAttributeSize(output, MF_MT_FRAME_SIZE, width, height)) ||
FAILED(MFGetAttributeRatio(type, MF_MT_FRAME_RATE, &numerator, &denominator)) ||
FAILED(MFSetAttributeRatio(output, MF_MT_PIXEL_ASPECT_RATIO, 1, 1)))
{
Error(_grabber->_log, "Setting output media type attributes failed");
goto done;
}
// Set transform output type
_hrStatus = _transform->SetOutputType(0, output, 0);
if (FAILED(_hrStatus))
{
_com_error error(_hrStatus);
Error(_grabber->_log, "Setting the output media type failed => %s", error.ErrorMessage());
goto done;
}
// Check if encoder parameters set properly
_hrStatus = _transform->GetInputStatus(0, &mftStatus);
if (FAILED(_hrStatus))
{
_com_error error(_hrStatus);
Error(_grabber->_log, "Failed to query the input stream for more data => %s", error.ErrorMessage());
goto done;
}
if (MFT_INPUT_STATUS_ACCEPT_DATA == mftStatus)
{
// Notify the transform we are about to begin streaming data
if (FAILED(_transform->ProcessMessage(MFT_MESSAGE_COMMAND_FLUSH, 0)) ||
FAILED(_transform->ProcessMessage(MFT_MESSAGE_NOTIFY_BEGIN_STREAMING, 0)) ||
FAILED(_transform->ProcessMessage(MFT_MESSAGE_NOTIFY_START_OF_STREAM, 0)))
{
Error(_grabber->_log, "Failed to begin streaming data");
}
}
done:
SAFE_RELEASE(output);
return _hrStatus;
}
BOOL isBusy()
{
EnterCriticalSection(&_critsec);
BOOL result = _isBusy;
LeaveCriticalSection(&_critsec);
return result;
}
STDMETHODIMP OnEvent(DWORD, IMFMediaEvent*) { return S_OK; }
STDMETHODIMP OnFlush(DWORD) { return S_OK; }
private:
virtual ~SourceReaderCB()
{
if (_transform)
{
_transform->ProcessMessage(MFT_MESSAGE_NOTIFY_END_OF_STREAM, 0);
_transform->ProcessMessage(MFT_MESSAGE_NOTIFY_END_STREAMING, 0);
}
SAFE_RELEASE(_transform);
// Delete critical section.
DeleteCriticalSection(&_critsec);
}
IMFSample* TransformSample(IMFTransform* transform, IMFSample* in_sample)
{
IMFSample* result = nullptr;
IMFMediaBuffer* out_buffer = nullptr;
MFT_OUTPUT_DATA_BUFFER outputDataBuffer = { 0 };
// Process the input sample
_hrStatus = transform->ProcessInput(0, in_sample, 0);
if (FAILED(_hrStatus))
{
_com_error error(_hrStatus);
Error(_grabber->_log, "Failed to process the input sample => %s", error.ErrorMessage());
goto done;
}
// Gets the buffer demand for the output stream
MFT_OUTPUT_STREAM_INFO streamInfo;
_hrStatus = transform->GetOutputStreamInfo(0, &streamInfo);
if (FAILED(_hrStatus))
{
_com_error error(_hrStatus);
Error(_grabber->_log, "Failed to retrieve buffer requirement for output current => %s", error.ErrorMessage());
goto done;
}
// Create an output media buffer
_hrStatus = MFCreateMemoryBuffer(streamInfo.cbSize, &out_buffer);
if (FAILED(_hrStatus))
{
_com_error error(_hrStatus);
Error(_grabber->_log, "Failed to create an output media buffer => %s", error.ErrorMessage());
goto done;
}
// Create an empty media sample
_hrStatus = MFCreateSample(&result);
if (FAILED(_hrStatus))
{
_com_error error(_hrStatus);
Error(_grabber->_log, "Failed to create an empty media sample => %s", error.ErrorMessage());
goto done;
}
// Add the output media buffer to the media sample
_hrStatus = result->AddBuffer(out_buffer);
if (FAILED(_hrStatus))
{
_com_error error(_hrStatus);
Error(_grabber->_log, "Failed to add the output media buffer to the media sample => %s", error.ErrorMessage());
goto done;
}
// Create the output buffer structure
memset(&outputDataBuffer, 0, sizeof outputDataBuffer);
outputDataBuffer.dwStreamID = 0;
outputDataBuffer.dwStatus = 0;
outputDataBuffer.pEvents = nullptr;
outputDataBuffer.pSample = result;
DWORD status = 0;
// Generate the output sample
_hrStatus = transform->ProcessOutput(0, 1, &outputDataBuffer, &status);
if (FAILED(_hrStatus))
{
_com_error error(_hrStatus);
Error(_grabber->_log, "Failed to generate the output sample => %s", error.ErrorMessage());
}
else
{
SAFE_RELEASE(out_buffer);
return result;
}
done:
SAFE_RELEASE(out_buffer);
return nullptr;
}
private:
long _nRefCount;
CRITICAL_SECTION _critsec;
MFGrabber* _grabber;
BOOL _bEOS;
HRESULT _hrStatus;
IMFTransform* _transform;
PixelFormat _pixelformat;
std::atomic<bool> _isBusy;
};

File diff suppressed because it is too large.