I am currently working on a web page that lets the user see a live stream of their webcam and then take a snapshot (so nothing special).
Everything works, but on a Raspberry Pi 3 running Raspbian with the latest Chromium available for that distro, I can't go above a resolution of 640 x 480 using the constraints.
If I go any higher than these values, the image simply does not show on the page.
Some code:
var constraints = {
    audio: false,
    video: {
        //width: { min: 1024, ideal: 1280, max: 1920 },
        //height: { min: 768, ideal: 960, max: 1440 }
        width: { min: 640, ideal: 1280, max: 1920 },
        height: { min: 480, ideal: 960, max: 1440 }
    }
};
[...]
navigator.webkitGetUserMedia(
    constraints,
    function (stream) {
        if (navigator.mozGetUserMedia) {
            video.mozSrcObject = stream;
        } else {
            var vendorURL = window.URL || window.webkitURL;
            video.src = vendorURL.createObjectURL(stream);
        }
        video.play();
    },
    function (err) {
        console.log("An error occurred! " + err);
    }
);
Does anyone have an idea what I might want to look into?
I've searched the web for the past two weeks but haven't found anything helpful so far...
All ideas are highly appreciated...
Check the video camera you are using and its drivers on the Raspberry Pi.
Many webcams don't actually offer video at higher than VGA (even if still images can be taken at higher resolutions).
There's also the issue of how that data gets from the webcam to the Raspberry Pi. Connections older than USB 3 require some minimal compression and decompression to take place between the camera and the device, and I am not sure the drivers you have support that on the Pi.
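One way to see what the camera and driver actually deliver is to inspect the track the browser hands back. A minimal sketch, assuming a Chromium recent enough to expose `navigator.mediaDevices` (the `video` element from the question is assumed):
navigator.mediaDevices.getUserMedia({ video: true })
    .then(function (stream) {
        var track = stream.getVideoTracks()[0];
        // What the browser actually negotiated with the driver:
        console.log("settings:", track.getSettings());
        // What the track claims it could do (width/height ranges etc.):
        if (track.getCapabilities) {
            console.log("capabilities:", track.getCapabilities());
        }
    })
    .catch(function (err) {
        console.log("getUserMedia failed: " + err);
    });
If the reported capabilities top out at 640 x 480, the limit is in the camera or driver, not in your constraints.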
The only solution that has been working flawlessly on the Pi for years, even on the Pi Zero now, is UV4L: 30 fps Full HD + audio + data channel, all optionally bidirectional, hardware-encoded, p2p or with Janus for up to 3-4 people. @Tsahi Levent-Levi, you might be surprised, as I know you are actively promoting WebRTC: example
Chrome is a bit tricky in this case, but using these constraints should allow a resolution higher than 640 x 480:
constraints = {
    "mandatory": {
        "maxWidth": 1280,
        "maxHeight": 720
    },
    "optional": [
        { "minWidth": 1024 },
        { "minHeight": 768 }
    ]
};
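Note that `mandatory`/`optional` is the legacy Chrome-specific constraint syntax. On newer Chromium builds it may also be worth trying the standard promise-based API; a minimal sketch (the `video` element from the question is assumed):
navigator.mediaDevices.getUserMedia({
    audio: false,
    video: { width: { ideal: 1280 }, height: { ideal: 720 } }
}).then(function (stream) {
    // srcObject replaces the deprecated createObjectURL(stream) path
    video.srcObject = stream;
    video.play();
}).catch(function (err) {
    console.log("An error occurred! " + err);
});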
Related
I am writing an OTA BLE firmware update between a Flutter Android app and an Espressif ESP32.
Basically, the Flutter app sends chunks of data in the format the firmware expects: each chunk bears the chunkIndex so that the firmware can check nothing has been missed since the last reception.
The issue I have is that not all chunks arrive at the firmware, and when the firmware detects that, it simply aborts the update, as expected.
Note that the size of the chunk plus its header is 365 bytes (the MTU I negotiated).
As you can see, in the hope of having better comms, I introduced an extra delay, which I'd like to be as short as possible (possibly 0), since I use withoutResponse: false. The duration of the write is about 80 ms as per my logs.
One thing I've noticed is that with withoutResponse: true, the comms are far better.
Another observation is that other BLE characteristics are actively exchanging information during the transfer process. Is that something that could cause issues?
I really need something rock solid.
So what am I doing wrong?
for (int i = 0; i < chunksCount; i++) {
  // Header: opcode + 16-bit chunk index + 16-bit chunk size (big-endian).
  final chunk = [
    OTA_RX_FB_RECEIVE_DATA_CHUNK,
    i >> 8,
    i & 0xff,
    chunkSize >> 8,
    chunkSize & 0xff
  ];
  chunk.addAll(dataChunk); // dataChunk = payload slice for this index
  log("OTA: whole chunk with header: ${chunk.length} bytes");
  try {
    log("OTA: writing…");
    final start = DateTime.now();
    await bleFirmwareUpdateTxCharacteristic.write(chunk
        //, withoutResponse: true
        );
    final d = DateTime.now().difference(start).inMilliseconds;
    log("OTA: written in ${d}msecs, now wait a bit");
    await Future.delayed(otaBleDelay);
    log("OTA: written, done waiting");
  } catch (e) {
    error("OTA: smartboxFirmwareUpdateBleWrite() error $e");
  }
}
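One pattern that tends to be more robust than a fixed delay is explicit flow control: have the firmware acknowledge each chunk index on a notify characteristic and only send the next chunk once the ack arrives. A minimal sketch, assuming flutter_blue-style APIs; `bleFirmwareUpdateAckCharacteristic` and `buildChunk` are hypothetical names for illustration:
// Hypothetical ack characteristic: the firmware notifies the 16-bit
// index of the last chunk it accepted.
final acks = bleFirmwareUpdateAckCharacteristic.value
    .map((v) => (v[0] << 8) | v[1]); // assumes a 2-byte ack payload
await bleFirmwareUpdateAckCharacteristic.setNotifyValue(true);

for (int i = 0; i < chunksCount; i++) {
  final chunk = buildChunk(i); // header + payload, as in the loop above
  await bleFirmwareUpdateTxCharacteristic.write(chunk);
  // Block until the firmware confirms this exact index, with a timeout
  // so a lost ack doesn't hang the update forever.
  final acked = await acks
      .firstWhere((index) => index == i)
      .timeout(const Duration(seconds: 2));
  log("OTA: chunk $acked acknowledged");
}
This removes the guessed delay entirely: the app always moves at exactly the pace the firmware can handle, even while other characteristics are chatting on the same connection.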
How can I make my game screen fit every Android device? The problem is that every answer I find is for 2D; I want this for 3D games.
You need two things for a 3D game:
1-) Optimizing the canvas: https://www.youtube.com/watch?v=95q79j0lNYA
2-) Optimizing the camera angle (if you want it)
Optimizing the camera for every screen resolution may not be as easy as the canvas.
For this, you can check for certain standard resolutions (read in the Start() method).
Let's set the field of view of the camera accordingly at scene start:
using UnityEngine;

public class CameraOptimize : MonoBehaviour
{
    private void Start()
    {
        CameraOptimizer();
    }

    private void CameraOptimizer()
    {
        // || catches both orientations (e.g. 2560x1440 and 1440x2560)
        if (Screen.height == 2560 || Screen.width == 1440)
        {
            Camera.main.fieldOfView = 65.0f;
        }
        else if (Screen.height == 1920 || Screen.width == 1080)
        {
            Camera.main.fieldOfView = 75.0f;
        }
    }
}
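Hardcoding individual resolutions won't cover every device. As a variation on the same idea (my own sketch, not part of the answer above), you can derive the field of view from the screen's aspect ratio, so any resolution maps to a sensible value; `referenceAspect` and `referenceFov` are assumed values you'd tune for your scene:
using UnityEngine;

public class CameraFovScaler : MonoBehaviour
{
    // Aspect ratio and vertical FOV the scene was designed for (assumed).
    public float referenceAspect = 1440f / 2560f;
    public float referenceFov = 65f;

    private void Start()
    {
        float aspect = (float)Screen.width / Screen.height;
        // Keep the horizontal extent of the scene constant: convert the
        // reference vertical FOV to a half-angle horizontal tangent, then
        // back to a vertical FOV using this device's actual aspect ratio.
        float halfHorizontal =
            Mathf.Tan(referenceFov * 0.5f * Mathf.Deg2Rad) * referenceAspect;
        Camera.main.fieldOfView =
            2f * Mathf.Atan(halfHorizontal / aspect) * Mathf.Rad2Deg;
    }
}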
My Ionic app is for taking pictures of business cards. As 99% of business cards are landscape, users try to rotate the camera to landscape mode as well. This is natural behavior.
However, I want to avoid that, and one way is to show a rectangle while the camera is open (width equal to the screen width and a 3:2 aspect ratio for the height).
This will make life easy, as users won't try to change the camera orientation.
I was looking into the camera plugin, which uses code like:
this.camera.getPicture({
    destinationType: this.camera.DestinationType.DATA_URL,
    quality: 25,
    correctOrientation: true,
    allowEdit: false,
    sourceType: this.camera.PictureSourceType.SAVEDPHOTOALBUM
}).then(async (imageData) => {
    //console.log("image data is:" + imageData)
    // imageData is a base64 encoded string
    var base64Image = "data:image/jpeg;base64," + imageData;
I was trying targetWidth and targetHeight, but that does not draw the box I have seen in many other apps.
There are other plugins like cropperJS, but it seems they let you crop the image after it has been taken, which is not what I need.
Use the camera-preview plugin instead of the camera plugin for that:
const cameraPreviewOpts: CameraPreviewOptions = {
    x: 0,
    y: 0,
    width: window.screen.width,
    height: window.screen.height,
    camera: 'rear',
    tapPhoto: true,
    previewDrag: true,
    toBack: true,
    alpha: 1
}
Ionic Camera Preview
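With toBack: true the preview renders behind the webview, so the guide rectangle can be ordinary markup drawn over it. A minimal sketch; the class name is made up for illustration:
/* Full-width guide box with a 3:2 aspect ratio, centered over the preview */
.card-guide {
    position: absolute;
    top: 50%;
    left: 0;
    width: 100vw;
    height: calc(100vw * 2 / 3);
    transform: translateY(-50%);
    border: 2px solid #fff;
    pointer-events: none;
}
Note that the page and ion-content backgrounds have to be transparent for the preview behind the webview to show through.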
We are using ExoPlayer v2.x and are playing an HLS stream which has 4 bitrate tracks.
When we configure ExoPlayer for adaptive playback, it starts with a higher bitrate track but does NOT switch back to a lower bitrate track when we throttle the network speed using Charles. The player seems to stick with the already selected higher bitrate track and keeps buffering instead of switching to a lower bitrate one.
We have configured ExoPlayer in the following way:
private DefaultBandwidthMeter BANDWIDTH_METER =
        new DefaultBandwidthMeter(mUiUpdateHandler, new BandwidthMeter.EventListener() {
            @Override
            public void onBandwidthSample(int elapsedMs, long bytes, long bitrate) {
                Log.v(TAG, "Elapsed Time in MS " + elapsedMs + " Bytes " + bytes + " Bitrate " + bitrate);
                bitrateEstimate = bitrate;
                bytesDownloaded = bytes;
            }
        });

TrackSelection.Factory adaptiveTrackSelectionFactory =
        new AdaptiveTrackSelection.Factory(BANDWIDTH_METER);
trackSelector = new DefaultTrackSelector(adaptiveTrackSelectionFactory);
player = ExoPlayerFactory.newSimpleInstance(getActivity(), trackSelector,
        new CustomLoadControl(new CustomLoadControl.EventListener() {
            @Override
            public void onBufferedDurationSample(long bufferedDurationUs) {
                long bufferedDurationMs = bufferedDurationUs / 1000; // convert us to ms
            }
        }, mUiUpdateHandler), drmSessionManager, extensionRendererMode);
Can anyone please confirm that this is the correct way to configure the player? Also, has anyone observed this problem and found a fix for it?
Thanks in advance.
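One thing worth checking (a common cause of this symptom, though I can't confirm it from the snippet alone): the same bandwidth meter instance must also be passed as the TransferListener of the DataSource.Factory used to build the media source; otherwise the meter never observes the segment downloads and the track selector never receives a lower estimate. A sketch against the ExoPlayer 2.x API of that era, where "myApp" and hlsUri are assumed:
// Reuse the SAME meter for track selection and for the HLS data source,
// so segment downloads actually feed the bandwidth estimate.
DataSource.Factory dataSourceFactory = new DefaultDataSourceFactory(
        getActivity(),
        Util.getUserAgent(getActivity(), "myApp"),
        BANDWIDTH_METER);
MediaSource mediaSource = new HlsMediaSource(
        hlsUri, dataSourceFactory, mUiUpdateHandler, null);
player.prepare(mediaSource);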
I'm writing an app that extensively uses CoreImage filters plus custom shaders. The usual case is:
1. Load and cache a large RAW file from the hard drive through CIFilter(imageURL:)
2. Apply corrections (tint, temperature, exposure)
3. Apply random CoreImage pre-defined filters
4. Apply custom-written shaders
5. Render everything to an MTLTexture
6. Render from the MTLTexture to the screen
7. Go to step 2.
Now, we have observed on various MacBooks (e.g. late 2014) that this code runs faster if the target MTLDevice is the integrated Intel GPU rather than the high-performance Radeon attached to the MBP.
Any ideas why that is? I would expect the Radeon to be way faster.
edit:
Tested cards:
"Radeon Pro 460 4096 MB" vs "Intel HD Graphics 530 1536MB"
"NVIDIA GeForce GT 750M 2GB GDDR5" vs "Intel Iris Pro Graphics"
Simplified version of code we're using:
let filter = CIFilter(imageURL: urlToRawFile20MBLarge)

class Renderer: MTKView {
    override func draw(_ rect: CGRect) {
        filter.setValue(temp, forKey: kCIInputNeutralTemperatureKey)
        let image: CIImage = filter.outputImage!.cropped(to: rect)
            // uses CIFilter(name: "CIGaussianBlur").outputImage
            .applyBlurFilter(radius: radius)
            .applyCustomShader1(param: x)
            .applyCustomShader2(param: y)

        // ... create command buffer and `CIRenderDestination`
        do {
            try ciContext.startTask(toClear: dest)
            try ciContext.startTask(toRender: image, to: dest)
        } catch {
            log(error)
        }

        if let drawable = currentDrawable {
            commandBuffer.present(drawable)
        }
    }
}
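A plausible explanation (an assumption on my part, not something the snippet confirms) is memory locality: the integrated GPU shares system memory with the CPU, while rendering on the discrete Radeon forces the decoded RAW and intermediate textures across the PCIe bus on every pass through the filter chain. One way to test is to pin a CIContext to each device explicitly and time the same chain on both:
import Metal
import CoreImage

// Compare both GPUs by creating one CIContext per MTLDevice.
// `isLowPower` is true for the integrated Intel GPU on a MacBook Pro.
for device in MTLCopyAllDevices() {
    let ctx = CIContext(mtlDevice: device)
    print("\(device.name), lowPower: \(device.isLowPower)")
    // ... render the same filter chain with `ctx` and time it
}
If the integrated GPU wins mainly on the first render after loading the RAW, the transfer cost rather than raw compute is the likely culprit.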