What is the relationship between cv2.setMouseCallback() and cv2.waitKey()? - opencv-python

I am trying to draw a circle on a video frame, but I have a problem. For example:
import math
import cv2

vid1 = cv2.VideoCapture('animal_video.mp4')
if vid1.isOpened():
    print('is opened')

def mousedf(event, x, y, flags, params):
    global nx, ny
    if event == cv2.EVENT_LBUTTONDOWN:
        nx, ny = x, y
    if flags & cv2.EVENT_FLAG_LBUTTON:
        radius = int(math.sqrt((x - nx)**2 + (y - ny)**2))
        cv2.circle(frame, (nx, ny), radius, (0, 0, 200), 3)

cv2.namedWindow('frame')
cv2.setMouseCallback('frame', mousedf)

while True:
    ret, frame = vid1.read()
    if not ret:
        print('Not enough Frame, will break')
        break
    k = cv2.waitKey(30)
    if k == 27:
        break
    cv2.imshow('frame', frame)

vid1.release()
cv2.destroyAllWindows()
In the above code, I can see the circle in each frame.
However, if I change the order of cv2.imshow() and cv2.waitKey(), I cannot see the circle:
while True:
    ret, frame = vid1.read()
    if not ret:
        print('Not enough Frame, will break')
        break
    cv2.imshow('frame', frame)
    k = cv2.waitKey(30)
    if k == 27:
        break
I would like to know the reason for the problem, and the relationship between cv2.setMouseCallback() and cv2.waitKey().

In fact, a video is just a sequence of images shown over time, and cv2.waitKey(time) is what paces that sequence, holding each frame on screen for the given time. While it waits, it also runs the GUI event loop, which is when the callback registered with cv2.setMouseCallback() gets invoked. So with this code, the mouse callback gets the chance to draw onto frame during cv2.waitKey(30), and the cv2.imshow('frame', frame) that follows then shows the updated image:
while True:
    ret, frame = vid1.read()
    if not ret:
        print('Not enough Frame, will break')
        break
    k = cv2.waitKey(30)         # pause; the mouse callback draws on frame here
    cv2.imshow('frame', frame)  # show the updated image
    if k == 27:
        break
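For completeness: in the reversed order the frame is displayed before the callback has drawn on it, and the circle is drawn onto an image that the next vid1.read() throws away. A pattern that works with either order is to have the callback only record the circle's geometry and redraw it on every freshly read frame; a minimal sketch, assuming the same video file and window name:
import math
import cv2

circle = None  # (cx, cy, radius), maintained by the mouse callback

def mousedf(event, x, y, flags, params):
    global circle
    if event == cv2.EVENT_LBUTTONDOWN:
        circle = (x, y, 0)
    elif flags & cv2.EVENT_FLAG_LBUTTON and circle is not None:
        cx, cy, _ = circle
        circle = (cx, cy, int(math.hypot(x - cx, y - cy)))

vid1 = cv2.VideoCapture('animal_video.mp4')
cv2.namedWindow('frame')
cv2.setMouseCallback('frame', mousedf)

while True:
    ret, frame = vid1.read()
    if not ret:
        break
    if circle is not None:  # redraw the recorded circle on every new frame
        cx, cy, radius = circle
        cv2.circle(frame, (cx, cy), radius, (0, 0, 200), 3)
    cv2.imshow('frame', frame)
    if cv2.waitKey(30) == 27:  # Esc quits
        break

vid1.release()
cv2.destroyAllWindows()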

Related

Confused about get_rays function in NeRF

I've been trying to understand NeRF. I finished reading the paper (Tancik) and watched some of the videos, and I have been looking at some parts of the code. However, I can't quite wrap my head around what the get_rays function does in terms of the code. Could anybody just run through, line by line, what each line in the get_rays function is supposed to do?
def get_rays(H, W, focal, c2w):  # c2w is pose
    i, j = tf.meshgrid(tf.range(W, dtype=tf.float32), tf.range(H, dtype=tf.float32), indexing='xy')
    dirs = tf.stack([(i - W * .5) / focal, -(j - H * .5) / focal, -tf.ones_like(i)], -1)
    rays_d = tf.reduce_sum(dirs[..., np.newaxis, :] * c2w[:3, :3], -1)
    rays_o = tf.broadcast_to(c2w[:3, -1], tf.shape(rays_d))
    return rays_o, rays_d
It returns two tensors: rays_o holds the point where each ray originates (the camera centre), and rays_d holds the direction vector of the ray cast through the centre of each pixel. All entries of rays_o are identical because the function gets rays from a single camera.
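Since the question asks for a line-by-line reading, here is a commented NumPy transcription of the same function (a sketch; each NumPy call mirrors the corresponding TF call one-to-one):
import numpy as np

def get_rays(H, W, focal, c2w):
    # Pixel-coordinate grids of shape (H, W): i[y, x] = x (column), j[y, x] = y (row).
    i, j = np.meshgrid(np.arange(W, dtype=np.float32),
                       np.arange(H, dtype=np.float32), indexing='xy')
    # Per-pixel ray direction in camera space under the pinhole model: shift the
    # origin to the image centre, scale by 1/focal, and look down -z (hence the
    # -1); y is negated because image rows grow downward. Shape (H, W, 3).
    dirs = np.stack([(i - W * .5) / focal,
                     -(j - H * .5) / focal,
                     -np.ones_like(i)], axis=-1)
    # Rotate each camera-space direction into world space. The broadcasted
    # multiply-and-sum is a batched matrix-vector product:
    # rays_d[y, x] = c2w[:3, :3] @ dirs[y, x].
    rays_d = np.sum(dirs[..., np.newaxis, :] * c2w[:3, :3], axis=-1)
    # Every ray starts at the camera centre, i.e. the translation column of the
    # pose, broadcast to one copy per pixel.
    rays_o = np.broadcast_to(c2w[:3, -1], rays_d.shape)
    return rays_o, rays_d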

My shader is ignoring my worldspace height

I'm very new to shaders, so bear with me. I have a mesh on which I want to put a sand texture below a worldspace y position of, say, 10; otherwise it should be a grass texture. Apparently it is ignoring anything I put in and only selecting the grass texture. Something IS happening, because my vert and tris count explodes with this function compared to just returning the same texture; I just don't see any change no matter what my sandStart value is.
This is in my frag function:
if (input.positionWS.y < _SandStart) {
    return tex2D(_MainTex, input.uv) * mainLight.shadowAttenuation;
} else {
    return tex2D(_SandTex, input.uv) * mainLight.shadowAttenuation;
}
Is there also a way I can easily debug some of the values?
Please note that the OP figured out that their specific problem wasn't caused by the code in the question, but by an error in their geometry function. This answer is only about the question "Is there a way to debug shader values?", as this debugging method helped the OP find the problem.
Debugging shader code can be quite a challenging task, depending on what it is you need to debug, and there are multiple approaches to it. Personally, the approach I like best is using colours.
If we break it down, there are three aspects of your code that could be faulty:
1. the value of input.positionWS.y
2. the if statement (input.positionWS.y < _SandStart)
3. returning your texture: return tex2D(_MainTex, input.uv) * mainLight.shadowAttenuation;
Let's walk down the list and test each individually. First, check whether input.positionWS.y actually contains a value we expect it to contain. To do this we can set any of the RGB channels to its value and just straight up return that:
return float4(input.positionWS.y, 0, 0, 1);
Now, if input.positionWS.y isn't a normalized value (i.e. a value that ranges from 0 to 1), this is almost guaranteed to just return your texture as entirely red. To normalize it, we divide the value by its maximum; let's take max = 100 for the example.
return float4(input.positionWS.y / 100, 0, 0, 1);
This should now make the texture full red at the top (where input.positionWS.y / 100 is 1), black at the bottom (where input.positionWS.y / 100 is 0), with a gradient from black to full red in between. (Note that since it's a position in world space, you may need to move the texture up/down to see the colour shift.) If this doesn't happen, for example if it always stays black or full red, then your issue is most likely input.positionWS.y.
Next, the if statement. It could be that (input.positionWS.y < _SandStart) always evaluates to true or always to false, meaning it never splits. We can test this quite easily by commenting out the current texture returns and instead returning a flat colour, like so:
if (input.positionWS.y < _SandStart)
{
    return float4(1, 0, 0, 1);
}
else
{
    return float4(0, 0, 1, 1);
}
If we tested input.positionWS.y to be correct in step 1, and _SandStart is set correctly, we should see the texture divided into a red part (where the condition is true) and a blue part (where it is false); again, since we're basing this on world position, we might need to change the material's height a bit to see it. If this division in colours doesn't happen, then the likely cause is that _SandStart isn't set properly, or is set to an incorrect value (assuming it's a property, you can inspect its value in the material editor).
If both of the above steps yield the expected result, then return tex2D(_MainTex, input.uv) * mainLight.shadowAttenuation; is possibly the culprit. To debug this, we can return one of the textures without the if statement and shadowAttenuation, see if it applies the texture, and then return the other texture by changing which line is commented:
return tex2D(_MainTex, input.uv);
//return tex2D(_SandTex, input.uv);
If each of these textures gets applied properly on its own, then it is unlikely that this was your cause, leaving either the shadowAttenuation (just add the multiplication back to the above test) or something different altogether that isn't covered by the code in your question.
Bonus round: if you have a shader property you want to debug, you can actually do this from C# as well, using the material.Get functions for the relevant type (the supported types can be found in the docs, and include the array variants too, as well as both Get and Set). A small example:
Properties
{
    _Foo ("Foo", Float) = 2
    _Bar ("Bar", Color) = (1,1,1,1)
}
can be debugged from C# using
Material mat = GetComponent<Renderer>().material;
Debug.LogFormat("_Foo value: {0}", mat.GetFloat("_Foo")); // prints 2
Debug.LogFormat("_Bar value: {0}", mat.GetColor("_Bar")); // prints (1,1,1,1)

How to get current frame from Animated Tile/Tilemap.animationFrameRate in Unity

I am using tilemaps and animated tiles from the 2d-extras package in Unity.
My tiles have 6 frames, at speed=2f, and my tilemap frame rate is 2.
Newly placed tiles always start on frame 1 and then immediately jump to the current frame of the other tiles already placed; the tilemap keeps every tile at the same pace, which is working as I want.
However, I would like newly placed tiles to start on the frame the others are currently on: instead of placing a tile that jumps from frame 1 to frame 4, I would like the new tile to start on frame 4.
I've found how to pick the frame I want to start on; however, I am having trouble retrieving which frame the animation is currently on. So I was wondering: how exactly can I access the current frame of animation of a given tilemap (or a given tile; I can create a dummy tile and just read the info out of it)? How can I get the current frame of an animated tile?
The animated tile feature seems to lack a way to retrieve this information. Also, when I try Tilemap.GetSprite it always returns the first frame of the sequence (it does not return the sprite currently displayed), and there doesn't seem to be any method to poll info from Tilemap.animationFrameRate.
I thought another method would be to set up a clock and sync it to the rate of the animation, but since I can't get the exact frame duration, the clock eventually goes out of sync.
Any help would be appreciated!
I found a way to solve this, but it's not 100% guaranteed.
First of all, I use SuperTiled2Unity, though that isn't really the point.
private void LateUpdate()
{
    // I use this variable to track the total running time of the game.
    this.totalTime += Time.deltaTime;
}

private void func()
{
    // ...
    TileBase[] currentTiles = tilemap.GetTilesBlock(new BoundsInt(new Vector3Int(0, 0, 0), new Vector3Int(x, y, 1)));
    Dictionary<string, Sprite> tempTiles = new Dictionary<string, Sprite>();
    // I use SuperTiled2Unity, but it doesn't matter; the point is to find the animated tiles.
    foreach (SuperTiled2Unity.SuperTile tile in currentTiles)
    {
        if (tile == null)
        {
            continue;
        }
        if (tile.m_AnimationSprites.Length > 1 && !tempTiles.ContainsKey(tile.name))
        {
            // Find the animated tile's current frame.
            // SuperTiled2Unity handles animation by generating a sprite array from the
            // per-frame times set in the Tiled animation editor and the value of the
            // AnimationFrameRate parameter; the array length is always n times
            // AnimationFrameRate (you can debug to confirm this).
            tempTiles.Add(tile.name, tile.m_AnimationSprites[GetProbablyFrameIndex(tile.m_AnimationSprites.Length)]);
        }
    }
    // ...
}

private int GetProbablyFrameIndex(int totalFrame)
{
    // From the total running time, the total length of the tile animation, and
    // AnimationFrameRate, the approximate frame index can be deduced.
    int overFrameTime = (int)(totalTime * animationFrameRate);
    return overFrameTime % totalFrame;
}
I have done some tests: for at least 30 minutes there is no deviation in the animations, but there may be a critical value; if that critical time is exceeded, errors may appear. It depends on the size of AnimationFrameRate and on how totalTime accumulates. After all, we don't know exactly when and how Unity updates an animated tile.
You could try the implementation presented in [1], which looks as follows:
MyAnimator.GetCurrentAnimatorClipInfo(0)[0].clip.length * (MyAnimator.GetCurrentAnimatorStateInfo(0).normalizedTime % 1) * MyAnimator.GetCurrentAnimatorClipInfo(0)[0].clip.frameRate;
[1] https://gamedev.stackexchange.com/questions/165289/how-to-fetch-a-frame-number-from-animation-clip
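Wrapped into a small helper (a sketch; it assumes an Animator that is currently playing the animation clip on layer 0, as in the linked answer):
using UnityEngine;

public static class AnimatorFrameUtil
{
    // Returns the index of the frame the clip on the given layer is currently showing.
    public static int GetCurrentFrame(Animator animator, int layer = 0)
    {
        AnimatorClipInfo clipInfo = animator.GetCurrentAnimatorClipInfo(layer)[0];
        // normalizedTime % 1 is the fraction of the way through the current loop.
        float timeIntoClip = clipInfo.clip.length
            * (animator.GetCurrentAnimatorStateInfo(layer).normalizedTime % 1f);
        return (int)(timeIntoClip * clipInfo.clip.frameRate);
    }
}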

Accurately selecting a node based on path from SKShapeNode

I'm working on determining which node is tapped in an SKScene. I have a line that is created from a CGMutablePath and added to an SKShapeNode. However, when trying to accurately select this line, it is included in the array returned from nodesAtPoint whenever I tap anywhere within its accumulated rectangular frame. See the image for more detail. I'm able to tap anywhere within those blue or red squares and still get the line from nodesAtPoint. I'm trying to figure out a way to only return the node from nodesAtPoint if the actual path was tapped (or maybe within a threshold of ±20 points to help). What method am I looking for?
Here is some relevant code that I've tried, based on some articles I've found:
let tempActualTouchedNodes = self.nodes(at: positionInScene)
for n in tempActualTouchedNodes {
    if let tempLine = n as? LineNode {
        if tempLine.contains(touch.location(in: tempLine)) {
            print("yes")
        }
    }
}
However, "yes" is never printed.

How to draw concave shape using Stencil test on Metal

This is the first time I'm trying to use the stencil test. I have seen some examples using OpenGL, and a few using Metal but focused on the depth test instead. I understand the theory behind the stencil test, but I don't know how to set it up on Metal.
I want to draw irregular shapes. For the sake of simplicity, let's consider the following 2D polygon:
I want the stencil to pass where the number of overlapping triangles is odd, so that I can reach something like this, where the white area is the area to be ignored:
I'm doing the following steps, in this exact order:
Setting the depthStencilPixelFormat:
mtkView.depthStencilPixelFormat = .stencil8
mtkView.clearStencil = 0
Stencil attachment:
let textureDescriptor = MTLTextureDescriptor.texture2DDescriptor(pixelFormat: .stencil8, width: drawable.texture.width, height: drawable.texture.height, mipmapped: true)
textureDescriptor.textureType = .type2D
textureDescriptor.storageMode = .private
textureDescriptor.usage = [.renderTarget, .shaderRead, .shaderWrite]
mainPassStencilTexture = device.makeTexture(descriptor: textureDescriptor)
let stencilAttachment = MTLRenderPassStencilAttachmentDescriptor()
stencilAttachment.texture = mainPassStencilTexture
stencilAttachment.clearStencil = 0
stencilAttachment.loadAction = .clear
stencilAttachment.storeAction = .store
renderPassDescriptor.stencilAttachment = stencilAttachment
Stencil descriptor:
stencilDescriptor.depthCompareFunction = MTLCompareFunction.always
stencilDescriptor.isDepthWriteEnabled = true
stencilDescriptor.frontFaceStencil.stencilCompareFunction = MTLCompareFunction.equal
stencilDescriptor.frontFaceStencil.stencilFailureOperation = MTLStencilOperation.keep
stencilDescriptor.frontFaceStencil.depthFailureOperation = MTLStencilOperation.keep
stencilDescriptor.frontFaceStencil.depthStencilPassOperation = MTLStencilOperation.invert
stencilDescriptor.frontFaceStencil.readMask = 0x1
stencilDescriptor.frontFaceStencil.writeMask = 0x1
stencilDescriptor.backFaceStencil = nil
depthStencilState = device.makeDepthStencilState(descriptor: stencilDescriptor)
And lastly, I'm setting the reference value and the stencil state in the main pass:
renderEncoder.setStencilReferenceValue(0x1)
renderEncoder.setDepthStencilState(self.depthStencilState)
Am I missing something? The result I get is just as if there were no stencil at all. I can see some differences when changing the settings of the depth test, but nothing happens when changing the settings of the stencil.
Any clue? Thank you in advance.
You're clearing the stencil texture to 0. The reference value is 1. The comparison function is "equal". So, the comparison will fail (1 does not equal 0). The operation for when the stencil comparison fails is "keep", so the stencil texture remains 0. Nothing changes for subsequent fragments.
I would expect that you'd get no rendering, although depending on the order of your vertexes and the front-face winding mode, you may be looking at the back faces of your triangles, in which case the stencil test is effectively disabled. If you don't otherwise care about front vs. back, just set both stencil descriptors the same way.
I think you need to do two passes: first, a stencil-only render; second, the color render governed by the stencil buffer. For the stencil only, you would make the compare function .always. This will toggle (invert) the low bit for each triangle that's drawn over a given pixel, giving you an indication of even or odd count. Because neither the compare function nor the operation involve the reference value, it doesn't matter what it is.
For the second pass, you'd set the compare function to .equal and the reference value to 1. The operations should all be .keep. Also, make sure to set the stencil attachment load action to .load (not .clear).
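In code, the two depth-stencil states might look something like this (a sketch; names are illustrative, and the stencil attachment setup from the question is reused for both passes):
// Pass 1 (stencil-only, no color writes): every fragment drawn toggles the low
// stencil bit, leaving 1 wherever a pixel was covered an odd number of times.
let stencilOnly = MTLDepthStencilDescriptor()
let toggle = MTLStencilDescriptor()
toggle.stencilCompareFunction = .always     // reference value is irrelevant here
toggle.depthStencilPassOperation = .invert  // flip the low bit per triangle
toggle.readMask = 0x1
toggle.writeMask = 0x1
stencilOnly.frontFaceStencil = toggle
stencilOnly.backFaceStencil = toggle        // same behaviour for either winding

// Pass 2 (color): draw only where the stencil ended up equal to 1 (odd count).
let colorPass = MTLDepthStencilDescriptor()
let testOdd = MTLStencilDescriptor()
testOdd.stencilCompareFunction = .equal
testOdd.stencilFailureOperation = .keep
testOdd.depthFailureOperation = .keep
testOdd.depthStencilPassOperation = .keep
testOdd.readMask = 0x1
testOdd.writeMask = 0x1
colorPass.frontFaceStencil = testOdd
colorPass.backFaceStencil = testOdd

// Before the second pass's draw calls:
// renderEncoder.setStencilReferenceValue(1)
// renderEncoder.setDepthStencilState(device.makeDepthStencilState(descriptor: colorPass))
// and the second pass's stencil attachment must use loadAction = .load, not .clear.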