Surface SDK: Generating Touch Events in Code

I'd just like to know how to generate touch events in code (if it is possible) with the Surface SDK on a Microsoft Surface, i.e. given x and y coordinates, I want to simulate a touch down at that location. How would I go about this?
Thanks

If you need to simulate touch points, check out MultiTouchVista; depending on your scenario, you might need to create your own InputProvider.

There is an article on MSDN ( http://msdn.microsoft.com/en-us/library/ee804820(v=surface.10).aspx ) showing how to create an application that uses the simulation API.

Related

Users making their own graphs in Unity--is this possible?

I'm currently building an educational app, and I'm a complete beginner at Unity so I just wanted to know if what I want to do is possible, and if so, where to even begin.
I want to allow users to graph their own data in Unity, as in: they input a number, and that point is created and displayed on a graph. They would only need to do this for about 3 points.
Thanks!
Definitely possible.
I would create the graph first: build the x axis and y axis using 3D objects, and create a cube for each unit (the markings on the graph).
Then create a script that sets each cube's length and position to wherever the user wants it to be.
You'll also need a script to receive input from the user and link that to the units to be created and positioned.
Good luck. I am also a newbie with Unity, but anything is possible with it.

Draw graph using accelerometer data in core-plot

I'm new to the core-plot framework and trying to draw a line graph based on X and Y acceleration. I can already get my X and Y values and have successfully added core-plot to my project. I'm quite lost on how to start this, so basically, I will have X and Y values; how do I plot them using core-plot? Any help is appreciated. Thanks!
Maybe my answer is a little bit off topic, as it's not using core-plot, but did you look at the "AccelerometerGraph" sample app provided with Xcode?
It provides a nice plot which is dynamically updated as new accelerometer events are registered. The great thing about this sample is the way Core Animation has been used to speed up Quartz 2D. And you get all of this using system frameworks only and no third-party code (apart from your adaptation of Apple's code).
Look at the Core Plot examples to see how to set up a basic graph. You'll want to use a scatter plot. Call -reloadData on the plot whenever you have new data to display. If only some of the data points change on each frame, you can use the following methods to do a partial update:
-(void)reloadDataInIndexRange:(NSRange)indexRange;
-(void)insertDataAtIndex:(NSUInteger)index
numberOfRecords:(NSUInteger)numberOfRecords;
-(void)deleteDataInIndexRange:(NSRange)indexRange;
The Real Time Plot in the Plot Gallery example app shows one way to use these methods.
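For reference, here is a minimal sketch of a scatter-plot datasource that feeds accelerometer samples to Core Plot. It assumes the CocoaTouch flavour of Core Plot (the CPT-prefixed classes); the class and method names other than the datasource protocol methods are illustrative, not taken from the question.

#import <UIKit/UIKit.h>
#import "CorePlot-CocoaTouch.h"

@interface AccelPlotController : UIViewController <CPTPlotDataSource> {
    NSMutableArray *samples;      // each element holds @"x" and @"y" NSNumbers
    CPTScatterPlot *scatterPlot;  // created in your graph setup, dataSource set to self
}
@end

@implementation AccelPlotController

- (void)viewDidLoad
{
    [super viewDidLoad];
    samples = [[NSMutableArray alloc] init];
    // Graph, hosting view, and plot setup (CPTXYGraph, CPTGraphHostingView,
    // plot ranges) follows the Core Plot examples mentioned above.
}

// Call this whenever a new accelerometer reading arrives.
- (void)addSampleX:(double)x y:(double)y
{
    [samples addObject:[NSDictionary dictionaryWithObjectsAndKeys:
                           [NSNumber numberWithDouble:x], @"x",
                           [NSNumber numberWithDouble:y], @"y", nil]];
    [scatterPlot reloadData];   // or insertDataAtIndex:numberOfRecords: for a partial update
}

// CPTPlotDataSource
- (NSUInteger)numberOfRecordsForPlot:(CPTPlot *)plot
{
    return [samples count];
}

- (NSNumber *)numberForPlot:(CPTPlot *)plot
                      field:(NSUInteger)fieldEnum
                recordIndex:(NSUInteger)index
{
    NSDictionary *sample = [samples objectAtIndex:index];
    return (fieldEnum == CPTScatterPlotFieldX) ? [sample objectForKey:@"x"]
                                               : [sample objectForKey:@"y"];
}

@end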

Detecting particular objects in an image, i.e. image segmentation, with OpenCV

I have to select a particular object visible in my image on the iPhone.
Basically my project is to segment image objects on the basis of my touch.
The method I am following is to first detect contours of the image and then select a particular sequence based on finger touch.
Is there any other method which would be more robust because I have to run it on video frames?
I am using OpenCV on the iPhone for the project.
Please help if there is any other idea which has been implemented or is feasible to implement.
Have you looked at SIFT or SURF implementations? They both track object features and are resilient (to a certain degree) to rotation, translation and scale.
Also check out FAST, a corner detection algorithm that might help you; they have an app on the App Store showing how quick it is, too.
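For the contour-plus-touch approach described in the question, a minimal sketch might look like the following. It assumes OpenCV's 2.x C++ API called from an Objective-C++ (.mm) file, and that you already have a binary, thresholded frame whose pixel coordinates match the touch point; the helper name is illustrative.

#import <UIKit/UIKit.h>
#import <opencv2/opencv.hpp>

// Return the index of the contour containing the touch point, or -1 if none.
static int contourIndexAtPoint(const cv::Mat &binaryFrame, CGPoint touch)
{
    std::vector<std::vector<cv::Point> > contours;
    // findContours modifies its input, so work on a copy of the frame.
    cv::findContours(binaryFrame.clone(), contours,
                     CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE);

    cv::Point2f pt(touch.x, touch.y);
    for (size_t i = 0; i < contours.size(); ++i) {
        // pointPolygonTest returns a positive value when the point is inside.
        if (cv::pointPolygonTest(contours[i], pt, false) > 0) {
            return (int)i;
        }
    }
    return -1;   // the touch did not land inside any detected contour
}

Note that the touch location has to be mapped from view coordinates into the frame's pixel coordinates before the test, and running contour detection on every video frame can be expensive, which is why the feature-based suggestions above may end up more robust.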

How to control an OpenGL projection matrix with iPhone accelerometer/compass values?

Is there any ready-made class or formula somewhere that I can use to control my viewpoint with the accelerometer's/compass's XYZ values?
I want to achieve the same view control that acrossair uses.
I have the parts (OpenGL space, filtered accel-, compass values, and a cubic panoramic view mapped to a cube around my origin).
Can somebody at least suggest where to start?
I've since gotten further into the problem, so the posts about the steps of the solution can be followed here:
xCode - augmented reality at gotoandplay.freeblog.hu
A brief sketch of the whole process is below:
How to get the transformation matrix from the raw iPhone sensor (accelerometer, magnetometer) data http://gotoandplay.freeblog.hu/files/2010/06/iPhoneDeviceOrientationTransformationMatrix-thumb.jpg
If all you are looking for is a means of rotating the model view matrix for your scene, you could look at the source code to my Molecules application or an even simpler cube example that I wrote for my iPhone development class. Both contain code to incrementally rotate the model view matrix in response to touch input, so you would just need to replace the touch input with accelerometer values.
Additionally, Apple's GLGravity sample application does something very similar to what you want.
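As a rough illustration of the GLGravity-style approach mentioned above, here is a sketch (OpenGL ES 1.1, the old UIAccelerometer delegate API) that low-pass filters the accelerometer and builds a modelview matrix from the resulting gravity vector. The class name, filter constant, and choice of reference axis are all assumptions; treat Apple's GLGravity sample as the authoritative version.

#import <UIKit/UIKit.h>
#import <OpenGLES/ES1/gl.h>
#include <math.h>

@interface GravityGLViewController : UIViewController <UIAccelerometerDelegate> {
    float gravity[3];   // low-pass filtered gravity vector
}
@end

@implementation GravityGLViewController

- (void)accelerometer:(UIAccelerometer *)accelerometer
        didAccelerate:(UIAcceleration *)acceleration
{
    const float kFilter = 0.1f;   // smaller = smoother but laggier
    gravity[0] = kFilter * acceleration.x + (1.0f - kFilter) * gravity[0];
    gravity[1] = kFilter * acceleration.y + (1.0f - kFilter) * gravity[1];
    gravity[2] = kFilter * acceleration.z + (1.0f - kFilter) * gravity[2];
}

// Build an orthonormal basis around the gravity vector and load it into the
// modelview matrix, in the same spirit as Apple's GLGravity sample.
- (void)applyGravityToModelView
{
    float len = sqrtf(gravity[0]*gravity[0] + gravity[1]*gravity[1] + gravity[2]*gravity[2]);
    if (len < 1e-6f) return;
    float z[3] = { gravity[0]/len, gravity[1]/len, gravity[2]/len };

    // Arbitrary reference axis; this degenerates when gravity is parallel to
    // it, so a real implementation should switch references in that case.
    float ref[3] = { 0.0f, 1.0f, 0.0f };
    float x[3] = { ref[1]*z[2] - ref[2]*z[1],
                   ref[2]*z[0] - ref[0]*z[2],
                   ref[0]*z[1] - ref[1]*z[0] };
    float xlen = sqrtf(x[0]*x[0] + x[1]*x[1] + x[2]*x[2]);
    if (xlen < 1e-6f) return;
    x[0] /= xlen; x[1] /= xlen; x[2] /= xlen;
    float y[3] = { z[1]*x[2] - z[2]*x[1],
                   z[2]*x[0] - z[0]*x[2],
                   z[0]*x[1] - z[1]*x[0] };

    GLfloat m[16] = { x[0], x[1], x[2], 0.0f,     // column 0
                      y[0], y[1], y[2], 0.0f,     // column 1
                      z[0], z[1], z[2], 0.0f,     // column 2
                      0.0f, 0.0f, 0.0f, 1.0f };   // column 3 (no translation)

    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    glMultMatrixf(m);
}

@end

The compass (magnetometer) reading can then be folded in to fix the rotation around the gravity axis, which is the one degree of freedom the accelerometer alone cannot give you.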

How does one interact with OBJ-based 3D models on iPhone?

I have a few different OBJ files that I am able to parse and display. This code is based on Jeff LaMarche's The Start of a WaveFront OBJ File Loader Class. However, I need some means of detecting which coordinates I have selected within a displayed model. Usually there is one model displayed at a time, but sometimes there will be two or more on the screen, and I want to set up an NSNotificationCenter object to notify other sections of code as to which object is "selected". I have also looked at javacom's "OpenGL ES for iPhone: A Simple Tutorial" and would like to model the behavior of what I'm trying to program after his.
This is my current line of logic:
Setup a means to detect where a user has touched the screen
Have those coordinates compared with the current coordinates of an OBJ-based model
If they match, indicate said touch as being within the bounds of the object
The touchable set of coordinates must scale with the model. Currently the model is able to scale, so I will most likely need to be able to follow this scaling.
Also note, I don't need to move the model around on the screen. Just detect when it's been touched whether there is one model or several being displayed.
While this is most likely quite simple, I've been stumped by this for months now. I would really appreciate any light others can shed on this topic.
Use gluUnProject on the touch coordinates to get a vector going from the screen into the world, and then intersect it with your models to see if one of them has been touched. gluUnProject isn't available by default on the iPhone, but you can look up implementations of it; http://www.mesa3d.org/ has an open source implementation.
Read about gluUnProject here: http://web.iiit.ac.in/~vkrishna/data/unproj.html
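Here is a rough sketch of that approach for an OpenGL ES 1.1 scene, assuming you have added a gluUnProject port (for example the MESA one linked above, whose desktop GLU signature is declared below) and that a bounding sphere per model is good enough for hit testing; the function and parameter names are illustrative.

#import <UIKit/UIKit.h>
#import <OpenGLES/ES1/gl.h>
#include <math.h>

// Provided by your gluUnProject port (desktop GLU signature shown here).
GLint gluUnProject(GLdouble winX, GLdouble winY, GLdouble winZ,
                   const GLdouble *model, const GLdouble *proj, const GLint *view,
                   GLdouble *objX, GLdouble *objY, GLdouble *objZ);

// Returns YES if the pick ray through the touch passes within `radius` of
// `center` (a bounding-sphere test; scale the radius along with your model).
// Call this while the modelview/projection matrices used to draw the model
// are current.
static BOOL modelHitByTouch(CGPoint touch, CGFloat viewHeight,
                            const GLdouble center[3], GLdouble radius)
{
    GLfloat mvF[16], prF[16];
    GLint   viewport[4];
    glGetFloatv(GL_MODELVIEW_MATRIX, mvF);
    glGetFloatv(GL_PROJECTION_MATRIX, prF);
    glGetIntegerv(GL_VIEWPORT, viewport);

    GLdouble mv[16], pr[16];
    for (int i = 0; i < 16; i++) { mv[i] = mvF[i]; pr[i] = prF[i]; }

    // UIKit's y axis points down; OpenGL's window y axis points up.
    GLdouble winX = touch.x;
    GLdouble winY = viewHeight - touch.y;

    // Unproject the touch at the near and far planes to get a pick ray.
    GLdouble nearPt[3], farPt[3];
    gluUnProject(winX, winY, 0.0, mv, pr, viewport, &nearPt[0], &nearPt[1], &nearPt[2]);
    gluUnProject(winX, winY, 1.0, mv, pr, viewport, &farPt[0],  &farPt[1],  &farPt[2]);

    GLdouble d[3] = { farPt[0]-nearPt[0], farPt[1]-nearPt[1], farPt[2]-nearPt[2] };
    GLdouble m[3] = { center[0]-nearPt[0], center[1]-nearPt[1], center[2]-nearPt[2] };

    // Distance from the sphere center to the closest point on the ray.
    GLdouble dLen2 = d[0]*d[0] + d[1]*d[1] + d[2]*d[2];
    if (dLen2 == 0.0) return NO;
    GLdouble t = (m[0]*d[0] + m[1]*d[1] + m[2]*d[2]) / dLen2;
    GLdouble c[3] = { m[0]-t*d[0], m[1]-t*d[1], m[2]-t*d[2] };

    return (c[0]*c[0] + c[1]*c[1] + c[2]*c[2]) <= radius * radius;
}

With several models on screen, you would run the test once per model's bounding sphere and post the NSNotification for whichever one reports a hit.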