Video mode in OGRE Engine Rendering Setup

I'm trying out OGRE and I'd like to ask a question about the OGRE config dialog.
The dialog, which can be opened with Ogre::Root::showConfigDialog(), lists only "800 x 600 @ 32-bit colour" for Video Mode, both for the "Direct3D9 Rendering Subsystem" and the "Direct3D11 Rendering Subsystem".
My question is: why is there only 800x600x32? Is there a way to make it list more video modes, like 1024x768x32, 1920x1080x32, etc.?
I've tried Google-searching, but the closest thing I found was how to change the video mode without using the config dialog.
Any help would be appreciated, thank you!
EDIT:
Here's a link to my screenshot of OGRE Engine Rendering Setup dialog, since I don't have enough reputation to upload images.
http://imgur.com/kNDy48E

In general: this list will automatically contain all available video modes reported by the drivers for the selected rendering API. If you are certain that your current API and drivers should allow more, you could debug the respective _initialise() function, e.g. for D3D11 in OgreD3D11RenderSystem.cpp:
RenderWindow* D3D11RenderSystem::_initialise( bool autoCreateWindow, const String& windowTitle )
Internally, D3D11VideoModeList::enumerate() is used to enumerate all possible values from the driver.
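As an aside, if the goal is simply to run at a particular resolution rather than to fix what the dialog lists, the mode can also be set directly in ogre.cfg (which Root::restoreConfig() loads). The exact option strings below are typical for the D3D render systems but vary by render system and OGRE version, so treat this only as a sketch:

```
Render System=Direct3D9 Rendering Subsystem

[Direct3D9 Rendering Subsystem]
Full Screen=No
Video Mode=1280 x 720 @ 32-bit colour
```

The same options can be set programmatically via RenderSystem::setConfigOption() before creating the window.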
I saw that your GPU setup uses "NVIDIA Optimus". Did you tell NVIDIA to use the correct GPU for your Ogre application? I saw that the GTX is selected in the config dialog, but just to make sure: How to select Optimus GPU.
Also this Optimus policy trick might help:
NVIDIA released its Optimus rendering policy guidelines a while ago.
If the user has driver 302 or higher, we can hint the driver to use the dedicated GPU. All we need to do is export a variable:
extern "C" {
    // Exporting this symbol hints the NVIDIA driver (302+) to prefer the dedicated GPU.
    __declspec(dllexport) DWORD NvOptimusEnablement = 0x00000001;
}

Related

How to take images with Raspberry Pi since "raspistill" and "raspivid" are deprecated

Since the Raspberry Pi is transitioning from the old raspistill and raspivid to the newer libcamera stack, how should I take an image now if I want to use neither the CLI nor C as the programming language? I can't find any wrapper for libcamera in any language other than C, and the new official Picamera2 library is still in an alpha phase and not recommended for production use.
I am also using a 64-bit version of Raspberry Pi OS, so I can't use the legacy camera interface.
I could downgrade to 32-bit, but what is the point of deprecating the old system if the new one is clearly not ready for production use?
How do you handle using the Raspberry Pi's camera at the moment if you want a wrapper like Picamera? Am I missing something?
At the moment, the best way, if you want to use Bullseye, is probably to run libcamera-vid and pipe its output into a Python script. You can either launch it with Python's subprocess module, or just start a pipeline:
libcamera-vid <params> | python script.py
Be sure to read from sys.stdin.buffer (as here) to avoid CR/LF mangling.
Preferably choose a YUV-based format so that frames have a deterministic length, as opposed to MJPEG, where the frame length varies with image content and you would have to search for the JPEG SOI/EOI markers.
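To make the piping approach concrete, here is a minimal sketch of the Python side, assuming raw YUV420 frames at a known width and height. The frame-size arithmetic is the standard width*height*3/2 bytes for YUV420; the short-read loop matters because a pipe's read() may return fewer bytes than requested:

```python
def read_exact(stream, n):
    """Read exactly n bytes from a binary stream, looping over short
    reads (a pipe's read() may return fewer bytes than requested)."""
    chunks = []
    remaining = n
    while remaining:
        chunk = stream.read(remaining)
        if not chunk:  # EOF
            break
        chunks.append(chunk)
        remaining -= len(chunk)
    return b"".join(chunks)

def read_yuv420_frames(stream, width, height):
    """Yield fixed-size raw YUV420 frames from a binary stream.

    A YUV420 frame is width*height luma bytes plus two quarter-size
    chroma planes, i.e. width * height * 3 // 2 bytes in total, so
    every frame has the same, known length.
    """
    frame_size = width * height * 3 // 2
    while True:
        frame = read_exact(stream, frame_size)
        if len(frame) < frame_size:  # EOF or truncated final frame
            return
        yield frame
```

A script built on this would iterate over read_yuv420_frames(sys.stdin.buffer, 640, 480) and be fed with something like libcamera-vid -t 0 --codec yuv420 --width 640 --height 480 -o - | python3 script.py (the exact flag names are from memory, so check them against libcamera-vid --help).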
Did you try to see if the cam utility is installed?

Calibrating a resistive touchscreen for libinput

I'm using a GUI application framework (EGT) on an ATMEL/ Microchip SAMA5D4. The framework features
DRM/KMS and X11 backends.
I've looked at using tslib to calibrate a resistive touchscreen for the device, but due to EGT limitations it looks like I'm going to have to use libinput for the moment.
Is there a calibration mechanism (equivalent of tslib) available for libinput? I've looked at xlibinput_calibrator & it seems like it could be a solution but I'll have to sort out the dependencies in the Yocto build.
Thanks,
For anyone looking at this: xlibinput_calibrator requires X11 & so was not an option.
I eventually ended up using a startup script to prevent EGT from capturing raw events from the touchscreen (by deleting /dev/input/event0 or similar) & instead used the calibrated source from tslib's ts_uinput.
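For completeness: libinput does have a static calibration mechanism of its own, the LIBINPUT_CALIBRATION_MATRIX udev property, which applies the first two rows of a normalized 3x3 affine transform to touch coordinates. A sketch of a udev rule (the match key and the matrix values here are examples; the actual matrix has to be computed from touch samples, e.g. collected with a tslib/ts_calibrate-style procedure):

```
# /etc/udev/rules.d/99-touch-calibration.rules (example values)
# Six numbers: rows 1 and 2 of a normalized 3x3 coordinate transform.
ENV{ID_INPUT_TOUCHSCREEN}=="1", ENV{LIBINPUT_CALIBRATION_MATRIX}="1.02 0.0 -0.01 0.0 1.08 -0.04"
```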

ZXing library and ZXing.org differences?

I have implemented ZXing-node and am able to scan generated QR code images great; however, any images captured via a phone camera don't get recognized, even though I've added some GraphicsMagick manipulation to deblur, resize, etc.
I have tried using the --try_harder option as well, without success.
However, the ZXing.org website handles all of these without any issues. Where can I find out what settings, or what additional image manipulation, are used there?
Cheers
It is also all open source: https://github.com/zxing/zxing/tree/master/zxingorg
It uses TRY_HARDER mode and different binarizers, and will also try PURE_BARCODE mode.

How does Marmalade support text-to-speech?

Is there any API/SDK that supports TTS (text-to-speech) in Marmalade?
I've had some success porting Flite (http://www.speech.cs.cmu.edu/flite/) to the Marmalade environment. It produces wave files and raw in-memory buffers (which can then be played directly using s3eSound) just fine.
The s3eSound adapter (which plays the text directly from within Flite) is a work in progress, so while it does produce something close to recognisable speech, it is also obviously bugged. For my purposes the raw buffers are more useful anyway, but feel free to try to fix it up.
You can see what I've done here https://github.com/madmaw/marmalade-flite
There is no specific API provided by Marmalade; however, you may be able to use the EDK if the native APIs provide this functionality on iOS or Android.
https://www.madewithmarmalade.com/devnet/docs#/main/extensions.html

NPAPI: preferred windowing model (windowed/windowless/xembed) for non-visual plugin

I'm creating an NPAPI plugin that isn't supposed to have a UI (it's for use from JavaScript only). Which windowing model should I use (windowed/windowless/XEmbed) to support as many browsers (and browser versions) as possible?
I currently implement the following functions:
NPP_SetWindow: do nothing, return NPERR_NO_ERROR
NPP_Event: do nothing, return kNPEventNotHandled (0)
NPP_SetValue: do nothing, return NPERR_NO_ERROR
NPP_GetValue: if asked for NPPVpluginNeedsXEmbed, answer yes if the browser supports it (NPNVSupportsXEmbedBool), no otherwise
For this plugin I am supporting Linux & Windows only for now. NPPVpluginNeedsXEmbed was necessary for Chrome on Linux (bug 38229); however, some old versions may not support it, as the MDC page says the sample XEmbed plugin is only supported on Firefox 2.0+.
The most common approach I have seen is to not care about the windowing mode at all and to set the object tag to a 1x1 size (you can try 0x0, but I've seen browser bugs related to that), in which case it doesn't really matter which window mode you use. However, I would go windowless myself, since it avoids the trademark block that floats over all other DOM elements which a normal windowed (XEmbed or not) plugin gives you.
This is what FireBreath does if the FB_GUI_DISABLED flag is set.