p5.js createCapture failure callback

Is there a failure callback for p5.js’ createCapture function (i.e. for when the user denies permission, or the browser doesn’t support the camera stream)?
I notice in the src there is a success callback, but I can’t seem to find one for failure.
In the browser console, p5 also reports ‘DOMException: Permission denied’; however, I would like to handle this in a more user-friendly fashion.
If there is no callback, what is the best practice for handling media failure with createCapture, as it doesn’t seem to be discussed in the docs?

OK, so this answer is more than a year late, but I’m posting it as it may be useful to others stuck on the same issue. Rather than error-testing the capture itself (as suggested in the comments below), or reworking createCapture() (best done by opening an issue if you feel it should be changed), I suggest testing the capture’s pixels array and only proceeding with whatever your script does once it has been set. This can be done simply, like so:
// load pixel data of webcam capture
cap.loadPixels();
// every value in the pixels array starts at zero, so a frame has arrived
// once any entry is greater than zero
if (cap.pixels[1] > 0)
{
  // put the rest of your script here
}
A full example of this in action is below:
var canvas;
var cap;
var xpos = 0;

function setup()
{
  canvas = createCanvas(windowWidth, windowHeight);
  canvas.style("display", "block");
  background("#666666");
  // create an instance of the webcam
  cap = createCapture(VIDEO);
  cap.size(640, 480);
  cap.hide();
}

function draw()
{
  // load pixel data of webcam capture
  cap.loadPixels();
  // if the first pixel has been written (index 1 is its green channel),
  // continue with the rest of the script
  if (cap.pixels[1] > 0)
  {
    var w = cap.width;
    var h = cap.height;
    // copy a 10px-wide vertical slice from the centre of the capture
    // to a moving x position on the canvas
    copy(cap, w / 2, 0, 10, h, xpos, (height / 2) - (h / 2), 10, h);
    xpos = xpos + 1;
    if (xpos > width) xpos = 0;
  }
}

I believe you can use a try/catch block to detect when you get an error. Something like this:
try {
  capture = createCapture(VIDEO);
}
catch(error) {
  // error handling here
}
More info on W3Schools and MDN.
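One caveat: camera permission is requested asynchronously, so a synchronous try/catch around createCapture() may never see the ‘DOMException: Permission denied’ mentioned in the question. A minimal sketch of an alternative, assuming a browser that supports the promise-based navigator.mediaDevices.getUserMedia() API, is to request the camera yourself and attach a failure handler before handing off to createCapture():
var cap;

function setup()
{
  createCanvas(640, 480);
  background("#666666");
  // ask for the camera ourselves so we can attach a failure handler,
  // then hand off to createCapture() once permission is granted
  navigator.mediaDevices.getUserMedia({ video: true })
    .then(function(stream) {
      // stop the probe stream; createCapture() will open its own
      stream.getTracks().forEach(function(t) { t.stop(); });
      cap = createCapture(VIDEO);
      cap.hide();
    })
    .catch(function(err) {
      // err.name is e.g. 'NotAllowedError' (permission denied)
      // or 'NotFoundError' (no camera attached)
      textAlign(CENTER, CENTER);
      text("Camera unavailable: " + err.name, width / 2, height / 2);
    });
}

function draw()
{
  if (cap) image(cap, 0, 0, width, height);
}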

Related

When reading back asynchronously from compute shaders in Unity, can I reset buffer halfway?

I hope there will be nothing confusing in what I'm about to say, because English is not my mother tongue and my grammar is poor :p
I'm working on a mipmap analysis tool which needs to do calculations on the pixels of a render texture. Here's part of the C# code:
private IEnumerator CSGroupColor(RenderTexture rt, GroupColor[] groupColors)
{
    // one buffer entry per group, 8 bytes each (matches the GroupColor stride)
    var outputBuffer = new ComputeBuffer(groupColors.Length, 8);
    csKernelID = cs.FindKernel("CSGroupColor");
    cs.SetTexture(csKernelID, "rt", rt);
    cs.SetBuffer(csKernelID, "groupColorOut", outputBuffer);
    cs.Dispatch(csKernelID, rt.width / 8, rt.height / 8, 1);

    // read the results back asynchronously instead of stalling the main thread
    var req = AsyncGPUReadback.Request(outputBuffer);
    yield return new WaitUntil(() => req.done);
    req.GetData<GroupColor>().CopyTo(groupColors);
    outputBuffer.Release(); // free the GPU buffer once the data is copied out

    foreach (var color in groupColors)
    {
        if (!m_staticsDatas.TryGetValue(color.groupindex, out var vl))
            continue;
        if (color.value > 0)
            vl.allColors.Add(color.value);
    }
}
What I want to implement next is to make each buffer smaller (e.g. with a length of 4096), as is usually done in other kinds of asynchronous communication. Maybe I can pass the first buffer to the CPU as soon as it's full, then switch to the second buffer, and so on.
As I see it, calling SetBuffer() again after req.done must be permitted for that to be viable. I have been searching the Internet all day for a sample usage, but have found nothing.
Can anyone help? Thanks very much.

Understanding Flutter's SchedulerBinding in the context of an animated timeline

I'm trying to understand the part of this code below that uses SchedulerBinding.instance.scheduleFrameCallback(beginFrame);. beginFrame is listed in the other code block below.
The code comes from here, which is an animated timeline for Flutter. I don't expect anyone to read all of it, obviously. But given some context, can you work out what it is being used for?
Context: this part of the code is inside a function called setViewport. The viewport of a timeline is simply the visible part of that timeline. So, once a viewport is set (a start and end point in the timeline are given), it ends up animating something in the timeline. You can see that in the process it calls SchedulerBinding.instance.scheduleFrameCallback, which is what I want to know the purpose of. I did of course go to the documentation page for SchedulerBinding, but the explanation there is so generic that I can't tell what it is used for.
if (!animate) {
  _renderStart = start;
  _renderEnd = end;
  advance(0.0, false);
  if (onNeedPaint != null) {
    onNeedPaint();
  }
} else if (!_isFrameScheduled) {
  _isFrameScheduled = true;
  _lastFrameTime = 0.0;
  SchedulerBinding.instance.scheduleFrameCallback(beginFrame);
}
Here's beginFrame:
/// Make sure that all the visible assets are being rendered and advanced
/// according to the current state of the timeline.
void beginFrame(Duration timeStamp) {
  _isFrameScheduled = false;
  final double t =
      timeStamp.inMicroseconds / Duration.microsecondsPerMillisecond / 1000.0;
  if (_lastFrameTime == 0.0) {
    _lastFrameTime = t;
    _isFrameScheduled = true;
    SchedulerBinding.instance.scheduleFrameCallback(beginFrame);
    return;
  }
  double elapsed = t - _lastFrameTime;
  _lastFrameTime = t;
  if (!advance(elapsed, true) && !_isFrameScheduled) {
    _isFrameScheduled = true;
    SchedulerBinding.instance.scheduleFrameCallback(beginFrame);
  }
  if (onNeedPaint != null) {
    onNeedPaint();
  }
}
According to the project README, it's used to keep the Flare animations in sync:
"To have the animation reproduce correctly, it's also necessary to call advance(elapsed) on the current FlutterActor each frame. Moreover, the current ActorAnimation requires that the function apply(time) is called on it to display it's correct interpolated values.
This is all made possible by relying on Flutter's SchedulerBinding.scheduleFrameCallback()."

Controlling robot makes a shaking movement

I'm trying to control a robot by sending positions at 100 Hz. It makes a shaking movement when I send that many positions. When I send one position that is, say, 50 mm from its start position, it moves smoothly. When I use my sensor to steer (so it sends every position from 0 to 50 mm), it shakes. I'm probably sending something like X0-X1-X2-X1-X2-X3-X4-X5-X4-X5, and this may be the reason why it shakes. How can I make the robot move smoothly when I use my mouse to steer it?
The robot asks for positions at 125 Hz.
The IR sensor sends at 100 Hz.
Could the 25 Hz difference be causing the problem?
Here is my code.
while (true)
{
    // If sensor 1 is recording IR light.
    if (listen1.newdata)
    {
        coX1 = (int) listen1.get1X();
        coY1 = (int) listen1.get1Y();
        newdata = true;
    } else {
        coX1 = 450;
        coY1 = 300;
    }
    if (listen2.newdata)
    {
        coX2 = (int) listen2.get1X();
        coY2 = (int) listen2.get1Y();
        newdata = true;
    } else {
        coY2 = 150;
    }
    // If the sensor gets further than the workspace, automatically correct
    // the coordinates to these values.
    if (newdata)
    {
        if (coX1 < 200 || coX1 > 680)
        {
            coX1 = 450;
        }
        if (coY1 < 200 || coY1 > 680)
        {
            coY1 = 300;
        }
        if (coY2 < 80 || coY2 > 300)
        {
            coY2 = 150;
        }
    }
    // This is the actual command sent to the robot.
    Gcode = String.format("movej(p[0.%d,-0.%d, 0.%d, -0.5121, -3.08, 0.0005])" + "\n", coX1, coY1, coY2);
    // sends message to server
    send(Gcode, out);
    System.out.println(Gcode);
    newdata = false;
}
}
private static void send(String movel, PrintWriter out) {
    try {
        out.println(movel); /* Writes to server */
        // System.out.println("Writing: " + movel);
        // Thread.sleep(250);
    }
    catch (Exception e) {
        System.out.print("Error Connecting to Server\n");
    }
}
}
Edit:
I discovered a way to do this: via min and max. So basically what I think I have to do is:
* put every individual coordinate into an array (12 coordinates)
* get the min and max out of this array
* output the average of the min and max (a sketch of this follows the list)
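That plan could look something like this minimal sketch (written in JavaScript to match the p5.js examples earlier on this page; smooth() is a hypothetical helper, not part of any robot API):
var WINDOW_SIZE = 12; // the 12 coordinates mentioned above
var samples = [];

// feed each raw sensor coordinate in; get the min/max midpoint back
function smooth(raw)
{
  samples.push(raw);
  if (samples.length > WINDOW_SIZE) samples.shift(); // drop the oldest reading
  var min = Math.min.apply(null, samples);
  var max = Math.max.apply(null, samples);
  return (min + max) / 2; // average of the min and max over the window
}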
Without knowing more about your robot's characteristics and how you can control it, here are some general considerations:
To get a smooth motion out of your robot, you should control it in speed, with a well-designed PID controller algorithm.
If you can only control it in position, the best you can do is monitor the position and wait for it to be "near enough" to the targeted position before sending the next one.
If you want a more detailed answer, please give more information on the command you send to the robot (movej); I suspect you can do much more than just send [x, y] coordinates.
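As a rough sketch of the second option (again in JavaScript for consistency with the earlier examples; getCurrentPosition() and sendTarget() are hypothetical stand-ins for your robot's API):
var TOLERANCE = 2; // mm: how close counts as "arrived"

function moveThrough(targets)
{
  var i = 0;
  var timer = setInterval(function() {
    if (i >= targets.length) { clearInterval(timer); return; }
    sendTarget(targets[i]); // keep commanding the current waypoint
    // only advance to the next waypoint once the robot is near enough
    if (Math.abs(getCurrentPosition() - targets[i]) < TOLERANCE) i++;
  }, 10); // poll at ~100 Hz, matching the sensor rate
}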

starling framework animation is choppy on enter frame

I have created a basic AS3 web project with Starling. All I am doing is creating a simple image and moving it along x in onEnterFrame. But the animation/movement is not smooth; there is a jump in frames/jerkiness every few frames. Below are onEnterFrame and the test function used to create the image. Any help on this is much appreciated.
private function onEnterFrame(e:Event):void
{
    if (!img)
        return;
    img.x += 1;
    if (img.x >= 960)
        img.x = 0;
}
private function test():void
{
    img = new Image(sAssets.getTextureAtlas("atlas").getTexture("flight_00"));
    addChild(img);
    img.x = 0;
    img.y = 320;
}
That's because the duration of each frame is slightly different. To achieve smooth animation, declare the onEnterFrame handler with a passedTime argument (which stores the time elapsed since the previous frame) and use this value to move objects, instead of assuming that each frame lasts 1/frameRate seconds.
private function onEnterFrame(passedTime:Number):void
{
    if (!img)
        return;
    img.x += passedTime * 100; // speed is 100 px/sec
    if (img.x >= 960)
        img.x = 0;
}
Note: this form of event handler (without the event argument) is supported in recent versions of Starling and should be more performant. If you use an older version, you can obtain the passed time from the corresponding property of the event object.

Kinect SimpleOpenNI and Processing Range

I need to find a way to have the Kinect recognize only objects within a certain range. The problem is that in our setup there will be viewers around the scene who may disturb the tracking. Therefore I need to restrict the Kinect to a range of a few meters so it won't be disturbed by objects beyond that range. We are using the SimpleOpenNI library for Processing.
Is there any possibility to achieve something like that in any way?
Thank you very much in advance.
Matteo
You can get the user's centre of mass (CoM), which retrieves an x, y, z position for a user without skeleton detection.
Based on the z position, you should be able to use a basic if statement for your range/threshold:
import SimpleOpenNI.*;

SimpleOpenNI context; // OpenNI context
PVector pos = new PVector(); // this will store the position of the user
ArrayList<Integer> users = new ArrayList<Integer>(); // this will keep track of the most recent user added
float minZ = 1000;
float maxZ = 1700;

void setup(){
  size(640, 480);
  context = new SimpleOpenNI(this); // initialize
  context.enableScene(); // enable features we want to use
  context.enableUser(SimpleOpenNI.SKEL_PROFILE_NONE); // enable user events, but no skeleton tracking, needed for the CoM functionality
}

void draw(){
  context.update(); // update openni
  image(context.sceneImage(), 0, 0);
  if(users.size() > 0){ // if we have at least one user
    for(int user : users){ // loop through each one and process
      context.getCoM(user, pos); // store that user's position
      println("user " + user + " is at: " + pos); // print it in the console
      if(pos.z > minZ && pos.z < maxZ){ // if the user is within a certain range
        // do something cool
      }
    }
  }
}

// OpenNI basic user events
void onNewUser(int userId){
  println("detected: " + userId);
  users.add(userId);
}

void onLostUser(int userId){
  println("lost: " + userId);
  users.remove(Integer.valueOf(userId)); // remove by value, not by index
}
You can find more explanation and hopefully useful tips in these workshop notes I posted.