Definition of Collision Frame and Inertial Frame in PyBullet - simulation

When I am using the functions "createCollisionShape" and "createMultiBody" in PyBullet, I am really confused about the definition of parameters like "collisionFramePosition" and "linkInertialFramePositions." What is the definition of collision frame and inertial frame? The Quickstart Guide on the PyBullet website does not seem to have an explanation. Could someone tell me the definition or give me some resources to read?
Thank you for all of your input and advice!
I tried searching on Google, but the results are not what I'm looking for.
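For context, here is a minimal sketch of where I am using these parameters. My current (possibly wrong) understanding is that collisionFramePosition offsets the collision geometry relative to the link frame, and the inertial frame position offsets the center of mass relative to the link frame; the numbers below are just placeholders:

    import pybullet as p

    p.connect(p.DIRECT)

    # Box collision shape; collisionFramePosition shifts the geometry 0.5 m along z
    # relative to the link frame (my interpretation, not an official definition).
    box = p.createCollisionShape(p.GEOM_BOX,
                                 halfExtents=[0.1, 0.1, 0.1],
                                 collisionFramePosition=[0, 0, 0.5])

    # Single rigid body; baseInertialFramePosition should place the center of mass
    # 0.5 m above the base frame so it coincides with the shifted collision box.
    body = p.createMultiBody(baseMass=1.0,
                             baseCollisionShapeIndex=box,
                             basePosition=[0, 0, 1.0],
                             baseInertialFramePosition=[0, 0, 0.5])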

Related

Blinking lights detection and tracking

I have a video of a set of blinking lights (LEDs blinking at different frequencies) over a dark background, and I want to detect the LEDs and then track them in the video.
How can I do this if the number of LEDs is not given?
I would suggest colour tracking from a video; try this link. There is example code and an explanation of how this is achieved in OpenCV.
The code is written in Python, but once you have an idea of how it works, porting it to C++ shouldn't be too hard (or you could just do it in Python).
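If the LEDs happen to be the only bright regions in the frame, a plain brightness threshold may also get you started. Here is a rough Python/OpenCV sketch; the file name is hypothetical and the threshold value will need tuning:

    import cv2

    cap = cv2.VideoCapture("leds.mp4")  # hypothetical file name

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # Keep only the brightest pixels; 200 is a guess and will need tuning.
        _, mask = cv2.threshold(gray, 200, 255, cv2.THRESH_BINARY)
        # One connected component per lit LED, so the number of LEDs
        # does not need to be known in advance.
        n, labels, stats, centroids = cv2.connectedComponentsWithStats(mask)
        for cx, cy in centroids[1:]:  # label 0 is the background
            cv2.circle(frame, (int(cx), int(cy)), 10, (0, 255, 0), 2)
        cv2.imshow("LEDs", frame)
        if cv2.waitKey(30) == 27:  # Esc to quit
            break

    cap.release()
    cv2.destroyAllWindows()

Since the LEDs blink, you would then track the centroids across frames (nearest-neighbour matching is usually enough) and look at each one's on/off pattern over time to estimate its frequency.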
If you would like a more in-depth answer, you should try providing more information in your question; it is rather vague. Explain more about what you are trying to achieve as an end goal, and hopefully people will be able to give you better information.

Rendering a 3D object from four different angles

I am working on a project where I have to render four different sides of a 3D object on the screen at the same time. The output should have four different camera views rendering the front, left, right and back sides of the 3D object.
I found that a gaming engine like Unity may help to do something like this. However, I have just started using Unity and can't figure out how to do it.
Here is a link to some examples; this is how I want the output to look.
Well, first of all, welcome to Stack Overflow. And you are right, Unity is an excellent IDE for achieving what you described.
As stated in the FAQ and here, I'm going to give you an answer I deem fitting for your question. I could post code here in about 30 minutes that does exactly what you asked for, but then we'd miss the point of learning to program and of posting on Stack Overflow in general. I'll show you how to start on this project, but then you'll have to try it yourself. If you still have trouble after trying some more, we can help you with specific problems, provided you have done some research and show us what you tried.
As to your question, it's relatively easy to do. First create your object in the scene, then drag and place four different Camera objects into the scene. Using each Camera's Normalized View Port Rect (four values that indicate where on the screen that camera's view will be drawn, in screen coordinates from 0 to 1), you can then split up the view to show the feed of each Camera.
This of course happens in a script. You can read here about scripting in Unity. Even if you are an expert in programming, that link is worth a read when you are new to Unity.
Good luck.

Matlab. Image processing on 8 ball pool flash game. Small cheat. Hehe

See the picture below. It's a flash game from a well known website :)
http://imageshack.us/photo/my-images/837/poolu.jpg/
I'd like to capture the frames using Matlab and then lengthen the short line that extends from the 8 ball, so I can see exactly where it will go. I'd also like to display another window in which the exact same pool table appears, but with longer lines for the paths :)
I know, or can easily find out, how to capture the screen and whatnot; the problem is that I'm not sure how to start detecting those lines to see the direction they are heading in. Can anyone suggest an idea on how to accomplish this? Are there any image processing techniques I could use to at least filter out everything except those lines?
I'm not sure where to even start looking, or for what.
And yeah, it's a cheat, I know. But I've got programming skills, so why not put them into practice? :D Help me out, people, it's a fun project :)
Thanks.
I would try using the Hough transform in the Matlab Image Processing Toolbox.
EDIT1:
Basically the Hough transform is a technique for detecting linear structures (lines) in an image.
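As an illustration of the same idea, here is a rough sketch using OpenCV's probabilistic Hough transform from Python; the screenshot file name and all the thresholds are placeholders you would need to tune:

    import cv2
    import numpy as np

    img = cv2.imread("pool_table.png")  # hypothetical screenshot of the table
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)

    # Probabilistic Hough transform: returns line segments as (x1, y1, x2, y2).
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180,
                            threshold=50, minLineLength=30, maxLineGap=5)

    if lines is not None:
        for x1, y1, x2, y2 in lines[:, 0]:
            # Extend each detected segment along its direction to preview the path.
            dx, dy = x2 - x1, y2 - y1
            scale = 5  # arbitrary extension factor
            cv2.line(img, (int(x1), int(y1)),
                     (int(x1 + scale * dx), int(y1 + scale * dy)), (0, 0, 255), 2)

    cv2.imshow("extended lines", img)
    cv2.waitKey(0)

In Matlab the corresponding pieces are edge, hough, houghpeaks and houghlines; you would probably also want to mask out the cushions and balls first so that only the aiming line survives the edge detector.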

Basic 2D Texture Howto for iPhone OpenGL ES

Just started learning OpenGL and couldn't find a decent tutorial to get me going. Specifically, I'm looking for something that will show me how to load an image to an OpenGL texture, store it in a variable to display later, and then draw the image.
I'd appreciate it if someone could write out the basic code to do that for me. If I may ask, please also separate the code for loading and drawing, and comment thoroughly as to what's going on.
Much obliged
I've been referring to this:
http://iphonedevelopment.blogspot.com/2009/05/opengl-es-from-ground-up-part-6_25.html
Check out Apple's ImageProcessing sample.

Recognizing patterns when drawing over the iPhone screen

I'm trying to write a game where the user can issue commands by drawing certain patterns with his fingers; for example, a circle, an 'S' letter, a spiral, etc.
I'm already familiar with touch events and I'm able to read the coordinates. My problem is finding algorithms and information about recognizing the patterns with some degree of error; for example, if I'm supposed to detect a circle, I should detect it even if the user didn't draw a perfect one.
Any resources on the matter? Thanks!
This site demos a very simple, easy-to-implement gesture recognizer, written in JavaScript. I've implemented it myself in another language and found it quite easy to work with. They've got code and a paper describing the algorithm; everything you could need.
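If it helps, here is a rough Python sketch of the core idea behind that kind of template recognizer: resample each stroke to a fixed number of points, normalize translation and scale, then score it against stored templates by average point-to-point distance. The real algorithm also normalizes rotation, which is omitted here:

    import math

    def resample(points, n=64):
        """Resample a stroke (list of (x, y) tuples) to n evenly spaced points."""
        path_len = sum(math.dist(points[i - 1], points[i]) for i in range(1, len(points)))
        if path_len == 0:
            return [points[0]] * n
        step = path_len / (n - 1)
        new_points, acc, pts, i = [points[0]], 0.0, list(points), 1
        while i < len(pts):
            d = math.dist(pts[i - 1], pts[i])
            if acc + d >= step:
                t = (step - acc) / d
                q = (pts[i - 1][0] + t * (pts[i][0] - pts[i - 1][0]),
                     pts[i - 1][1] + t * (pts[i][1] - pts[i - 1][1]))
                new_points.append(q)
                pts.insert(i, q)  # keep measuring from the inserted point
                acc = 0.0
            else:
                acc += d
            i += 1
        while len(new_points) < n:  # guard against floating-point shortfall
            new_points.append(points[-1])
        return new_points[:n]

    def normalize(points):
        """Translate to the centroid and scale to a unit bounding box."""
        cx = sum(x for x, _ in points) / len(points)
        cy = sum(y for _, y in points) / len(points)
        xs, ys = [x for x, _ in points], [y for _, y in points]
        w, h = (max(xs) - min(xs)) or 1.0, (max(ys) - min(ys)) or 1.0
        return [((x - cx) / w, (y - cy) / h) for x, y in points]

    def score(stroke, template):
        """Lower is better: mean distance between corresponding resampled points."""
        a, b = normalize(resample(stroke)), normalize(resample(template))
        return sum(math.dist(p, q) for p, q in zip(a, b)) / len(a)

On the iPhone side you would collect the touch coordinates you already have into a stroke, run score() against a template recorded for each command (circle, 'S', spiral), and accept the best match if its score is below some tolerance.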
The patterns you're referring to are known as "gestures".
This code seems to be what you're looking for.