MATLAB image processing on an 8-ball pool Flash game. Small cheat. Hehe

See the picture below. It's a Flash game from a well-known website :)
http://imageshack.us/photo/my-images/837/poolu.jpg/
I'd like to capture the images, frame by frame, using MATLAB, and then lengthen the short line that goes from the 8 ball, so I can see exactly where it will go. I'd also display another window, in which the exact pool table will appear, but with longer lines for the paths :)
I know, or can easily find out, how to capture the screen and so on; the problem is that I'm not sure how to start detecting those lines and working out the direction they are heading. Can anyone suggest an idea on how to accomplish this? Are there any image processing techniques I could use to at least filter out everything except those lines?
I'm not sure where to even start looking, or for what.
And yeah, it's a cheat, I know. But I've got programming skills, so why not put them into practice? :D Help me out, people, it's a fun project :)
Thanks.

I would try using the Hough transform in the MATLAB Image Processing Toolbox.
EDIT1:
Basically the Hough transform is a technique for detecting linear structures (lines) in an image.
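To give a feel for it, here is a rough sketch of what that could look like. The screenshot file name and every threshold below are placeholders you would tune for your own captures:

```matlab
% Rough sketch, assuming one frame has been captured to 'table.png'
% (the file name and all parameter values are placeholders to tune).
img  = imread('table.png');
gray = rgb2gray(img);
bw   = edge(gray, 'canny');            % binary edge map of the frame

[H, theta, rho] = hough(bw);           % Hough accumulator
peaks = houghpeaks(H, 5, 'Threshold', 0.3 * max(H(:)));
segs  = houghlines(bw, theta, rho, peaks, 'FillGap', 20, 'MinLength', 40);

imshow(img); hold on;
for k = 1:numel(segs)
    p1 = segs(k).point1;
    p2 = segs(k).point2;
    d  = (p2 - p1) / norm(p2 - p1);    % unit direction of the detected segment
    % extend the short aiming line well past its detected end point
    plot([p1(1), p1(1) + 500*d(1)], [p1(2), p1(2) + 500*d(2)], ...
         'g', 'LineWidth', 2);
end
hold off;
```

You would still have to work out which detected segment is actually the aiming line (for example by its colour, or by its proximity to the cue ball), but this should pull the line candidates out of the frame.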

Related

Draw a line connecting two Objects/Vector3s in Unity3D

I am making a game.
In it, I will already have points (stations) given, and on hitting one of those points, a line should start being drawn until it reaches the next point.
I also want to avoid overlapping lines.
The line should continue to extend if I touch its end.
I know two approaches to drawing lines:
I can use a LineRenderer, or I can use the GL class.
I want to know which is more suitable for my requirement, and if you have another idea, please share it as well.
I have seen the Vectrosity demo, but it's not free, so I can't use it.
Thank you for your help and support so far, and please help me clear up my confusion.
Use the LineRenderer for a reasonably limited number of lines. It's much easier to customize than GL.Lines, and it will let you work in Unity's coordinate system without dealing with transform matrices.
And a small note: up until 4.x, only Pro had GL.Lines, and it didn't work on iOS, so most people were using other approaches. I've never seen anyone use GL.Lines outside of a demo specifically comparing the two approaches. GL.Lines performs better, but is limited in customization options. Another approach I've seen is using the Graphics class and procedural meshes. It's also faster than the LineRenderer, but takes a little work to implement. This article compares all three approaches and has some code for using the Graphics class.
For the Scene view you can use GL.Lines, Debug.DrawLine, or Debug.DrawRay, but you can't see those lines in your game, so use the LineRenderer.
This video might be useful for you: https://www.youtube.com/watch?v=Bqcu94VuVOI

Rendering a 3D object from four different angles

I am working on a project where I have to render four different sides of a 3D object on the screen at the same time. The output should have four different camera outputs rendering the front, left, right, and back sides of the 3D object.
I found that a gaming engine like Unity may help to do something like this. However, I have just started using Unity and can't figure out how to do it.
Here is the link for some examples; this is how I want the output to look.
Well, first of all, welcome to Stack Overflow. And you are right: Unity is an excellent choice for achieving what you described.
As stated in the FAQ and here, I'm going to give you an answer I deem fitting to your question. I could post code here in about 30 minutes that does exactly what you asked for, but then we'd miss the point of learning to program and of posting on Stack Overflow in general. I'll show you how to start on this project, but then you'll have to try it yourself. If you have any trouble after trying some more, we can help you with specific problems, provided you have done some research first and show us what you tried.
As to your question, it's relatively easy to do. First create your object in the scene, then drag and place four different Camera objects into the scene. Using each Camera's Normalized View Port Rect (four values that indicate where on the screen this camera's view will be drawn, in screen coordinates from 0 to 1), you can then split up the view to show the feed of each Camera.
This of course happens in a script. You can read here about scripting in Unity. Even if you are an expert in programming, that link is worth a read when you are new to Unity.
Good luck.

Increase the call frequency of the touchesMoved method

I am currently working on an app which includes a paint function. It actually doesn't work that badly, but the problem is that the refresh rate, i.e. the frequency of calls to the touchesMoved method, is too low.
If you move your finger quickly across the screen, the lines end up with many hard corners and don't look good. So I thought about increasing the call frequency of this method. Would that be a good, or even possible, solution to my problem?
Maybe you can help me with my problem. Thank you in advance.
Think about this approach: look at Adobe's Ideas app on the App Store. When touchesEnded fires, use UIBezierPath to get a smoother look without hard corners. You presumably already store the points from touchesMoved in an array, so you have the points to feed into the Bezier path functions. You get an edgy look while drawing, but after releasing the finger it looks fairly smooth. I did this in one of my projects and the result is reasonably good.
(But there are many other approaches to building a drawing application.)
Demo App from Apple: Click here

Compare one image in MATLAB with a database of images and show the most similar

I have a database of images of one person using his hands to show various words and phrases in sign language. The background is white, and the only things that change are the shape of the person's hands and their locations. In my MATLAB GUI, I want the user to be able to choose another image of the same person, taken at another time while doing a sign but wearing the same clothes, and the program should then compare it against the images in the database and show the most similar one. Obviously I can't do a pixel-by-pixel comparison, since the images were taken with a hand-held mobile camera and slight movement was inevitable, so I should try to locate the hands in the images and compare their shapes. I have no idea how to go about this. I have to say I am new to the Image Processing Toolbox in MATLAB.
Your help is much appreciated
I am doing a PhD in computer vision, and I can tell you that this is an unsolved problem (even in your simple framework, with a white background).
If you are interested, you might read some work about it at MIT:
http://people.csail.mit.edu/rywang/handtracking/
or at Oxford:
http://www.robots.ox.ac.uk/~vgg/research/sign_language/index.html
http://www.robots.ox.ac.uk/~vgg/research/hands/index.html
I disagree with you: such a project can achieve results quickly.
It only becomes a problem as soon as the project has to deal with "real life".
With a single camera and a completely known background, OpenCV provides a simple way to extract the hand shape from an image (in about 20 lines of code). You will find plenty of source code on the web (have a look at calcBackProject).
After that, what you will have to do is play with the shape and search for characteristic points.
Begin with some simple signs (for example: a circle and a V). How would you recognize one from the other?
There are thousands of papers on sign language; just read the older ones to get simple ideas flowing :)
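If you would rather stay inside MATLAB than switch to OpenCV, a very rough sketch of the same idea could look like the following. Everything in it is an assumption to adapt: the skin-colour thresholds are a common rule of thumb, the file names are placeholders, and the three-number shape descriptor is deliberately crude.

```matlab
% Crude sketch: segment the hands by skin colour, keep the largest blob,
% and compare a handful of shape statistics between images.
% Save as findClosestSign.m; file names and thresholds are placeholders.
function idx = findClosestSign(queryFile, dbFiles)
    q    = signDescriptor(queryFile);
    best = inf;
    idx  = 0;
    for k = 1:numel(dbFiles)
        d = norm(q - signDescriptor(dbFiles{k}));
        if d < best
            best = d;
            idx  = k;
        end
    end
end

function f = signDescriptor(file)
    rgb   = imread(file);
    ycbcr = rgb2ycbcr(rgb);
    cb = ycbcr(:,:,2);
    cr = ycbcr(:,:,3);
    % rule-of-thumb skin range in Cb/Cr -- tune it for your lighting
    bw = cb > 77 & cb < 127 & cr > 133 & cr < 173;
    bw = bwareafilt(imfill(bw, 'holes'), 1);   % largest connected blob only
    s  = regionprops(bw, 'Eccentricity', 'Solidity', 'Extent');
    f  = [s.Eccentricity, s.Solidity, s.Extent];  % assumes the mask is non-empty
end
```

Called as something like idx = findClosestSign('query.jpg', {'sign1.jpg', 'sign2.jpg'}), even a handful of shape statistics will separate some very simple signs (the circle versus the V above, say); for anything more serious you would move on to contour-based descriptors or the tracking work linked in the other answer.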

Fake long exposure on iOS

I have to add long-exposure photo capabilities to an app. Since I know this is not really possible, I have to fake it. It should work like "Slow Shutter" or "Magic Shutter".
Sadly, I have no clue how to achieve this. I know how to take images with the camera (through AVFoundation), but I'm stuck on merging them to fake long shutter times.
Possibly I need to manipulate and combine all the images with Core Graphics, but I'm not sure about this (or even how). Maybe there's a better solution.
I would appreciate any help I can get here.
Thank you!
You might try the plus lighter blend mode.
Well, I suppose it would be possible to average together the results of several shots. I've mucked around a bit with the Core Graphics stuff to resize images (averaging together adjacent pixels), but with lower-resolution images. The algorithm I used is here -- maybe it'll give you some ideas.
There may, of course, be a better way, and some tricks for working efficiently with high-res images. Can't help you there.
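The averaging itself is just per-pixel arithmetic. Here is a quick sketch of the idea outside iOS (MATLAB, with placeholder file names, purely to show the math); on the device you would reproduce the same accumulation with Core Graphics, vImage, or a Core Image filter:

```matlab
% Sketch of the averaging idea: accumulate N shots and divide by N.
% 'shot1.jpg' ... 'shot4.jpg' are placeholder file names, all the same size.
files = {'shot1.jpg', 'shot2.jpg', 'shot3.jpg', 'shot4.jpg'};
acc = double(imread(files{1}));
for k = 2:numel(files)
    acc = acc + double(imread(files{k}));   % running per-pixel sum
end
longExposure = uint8(acc / numel(files));   % per-pixel mean of all shots
imshow(longExposure);
```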
Convert the images to pixel bitmaps. Align and stack the bitmaps. Then try applying various 3D convolution filters to the 3D pixel array.