Is it possible to change just a portion of a Sprite's alpha in response to user interaction? A good example of what I mean is iFog or iSteam, where the user can wipe "steam" off the iPhone's screen. Swapping images out wouldn't be feasible, given the sheer number of places the user could touch and move...
For example, say you have a simple app with a brick wall in the background and graffiti on top of it, so there'd be two sprites: one of the brick wall, and one of the graffiti with a higher z value than the wall. Then, based on where the user touches (assuming their touch controls a sandblaster), some of the graffiti should be removed, but not all of it, which could be accomplished by changing the alpha value on a portion of the graffiti sprite. Is there any way to do this in cocos2d-iphone? Or do I need to drop down into OpenGL, and if so, where would be a good place to start looking for how to accomplish this?
Ideally, I'd like to accomplish this on a cocos2d-iphone Sprite, but if it's not possible, where's the best place to start looking?
Thanks in advance,
Ben
The answer is here: http://www.cocos2d-iphone.org/forum/topic/7921#post-46394
But the short answer: you have to override the draw method and resort to OpenGL calls.
I know this question is old, but it needs an answer anyway.
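For anyone landing here now, the underlying technique is to treat the graffiti as a mutable texture and punch transparent holes into it wherever the touch moves. cocos2d-iphone itself is Objective-C (the linked post does this with a CCRenderTexture and an OpenGL blend function that zeroes destination alpha), but here is the same idea as a minimal Swift/Core Graphics sketch; the function name and the circular brush are my own illustrative choices, not anything from the linked post:

```swift
import UIKit

/// Punches a transparent circle into `image` at `point` (in the image's
/// coordinate space). Illustrative only; in cocos2d-iphone you'd draw an
/// "eraser" brush into a CCRenderTexture with a blend mode instead.
func erase(from image: UIImage, at point: CGPoint, radius: CGFloat) -> UIImage {
    let renderer = UIGraphicsImageRenderer(size: image.size)
    return renderer.image { ctx in
        image.draw(at: .zero)                    // start from the current graffiti
        ctx.cgContext.setBlendMode(.clear)       // .clear writes alpha = 0
        ctx.cgContext.fillEllipse(in: CGRect(x: point.x - radius,
                                             y: point.y - radius,
                                             width: radius * 2,
                                             height: radius * 2))
    }
}
```

From your touch handler you'd convert the touch location into the image's coordinate space, call this, and hand the result back to the sprite as its new texture.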
How would you implement a way to measure distances in real time (using the video camera?) on the iPhone, like this app, which uses a card of known size as a reference to estimate the actual distance?
Are there any other ways to measure distances? Or how would I go about doing this with the card method? Which framework should I use?
Well, you do have something for reference, hence the use of the card. That said, after watching the video for the app, I can't say it seems too user-friendly.
So you either need a reference object of known size, or you need to deduce the size from the image itself. One idea I just had that might help is to use the iPhone 4's flash (I'm sure it's very complicated, but it might just work for some things).
Here's what I think.
When the user wants to measure something, they take a picture of it, but you're actually capturing two separate images: one with the flash on, one with the flash off. Then you can analyze the lighting differences between the images and the flash reflection to determine the scale of the image. I'd guess this will only work for close, not-too-shiny objects.
But that's about the only other way I can think of to deduce scale from an image without any fixed reference objects.
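To make the idea a bit more concrete, here is a rough sketch of the flash-difference analysis, assuming you have already captured the two frames as CIImages of the same scene; the brightness figure it produces is only a raw signal, and mapping it to distance would still require calibration against known distances:

```swift
import CoreImage
import UIKit

// Sketch of the flash on/off comparison. Flash falloff grows with distance,
// so the average brightness of the difference image should be larger for
// nearby objects. Everything here assumes aligned, same-exposure frames.
func flashDifferenceBrightness(flashOn: CIImage, flashOff: CIImage) -> Float {
    let context = CIContext()
    let diff = flashOn.applyingFilter("CIDifferenceBlendMode",
                                      parameters: [kCIInputBackgroundImageKey: flashOff])
    let avg = diff.applyingFilter("CIAreaAverage",
                                  parameters: [kCIInputExtentKey: CIVector(cgRect: diff.extent)])
    var pixel = [UInt8](repeating: 0, count: 4)
    context.render(avg, toBitmap: &pixel, rowBytes: 4,
                   bounds: CGRect(x: 0, y: 0, width: 1, height: 1),
                   format: .RGBA8, colorSpace: nil)
    // Average of R, G, B as a crude 0...1 brightness figure.
    return Float(Int(pixel[0]) + Int(pixel[1]) + Int(pixel[2])) / (3 * 255)
}
```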
I like Ron Srebro's idea and have thought about something similar -- please share if you get it to work!
An alternative approach would be to use the camera's auto-focus feature. Point-and-shoot cameras often have a laser range finder that they use to auto-focus. The iPhone doesn't have this, and its f-stop is fixed. However, users can change the focus by tapping the camera screen, and the phone can also switch between regular and macro focus.
If the API exposes the current focus settings, maybe there's a way to use this to determine range?
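For what it's worth, since iOS 8 AVCaptureDevice does expose a read-only lensPosition (0.0 = closest focus, 1.0 = farthest) that you can observe. It's a unitless, device-dependent value, so turning it into a range estimate would require calibrating it against known distances first; a minimal sketch:

```swift
import AVFoundation

// Observes the focus lens position as the camera refocuses. The value is
// unitless; you'd need your own calibration table to map it to distance.
final class FocusWatcher {
    private var observation: NSKeyValueObservation?

    func watch(_ device: AVCaptureDevice) {
        observation = device.observe(\.lensPosition, options: [.new]) { _, change in
            guard let position = change.newValue else { return }
            print("lens position: \(position)") // map to distance via calibration
        }
    }
}
```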
Another solution may be to use two laser pointers.
Basically you would shine two parallel laser pointers at, say, a wall. The further back you go, the closer together the two dots will appear in the video, even though they remain the same physical distance apart. From that you can come up with a simple formula for the distance based on how far apart the dots are in the image.
See this thread for more details: Possible to measure distance with an iPhone and laser pointer?.
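The formula falls out of the pinhole camera model: the beams keep a fixed real-world separation s, so their pixel separation p shrinks in proportion to the distance D, giving D = f·s/p, where f is the focal length expressed in pixels. A sketch, where focalLengthPixels is a value you'd calibrate yourself by photographing the dots at a known distance:

```swift
// Pinhole model: pixelSeparation = focalLengthPixels * beamSeparation / distance,
// rearranged to solve for distance. Units: meters in, meters out.
func distance(beamSeparation: Double,
              pixelSeparation: Double,
              focalLengthPixels: Double) -> Double {
    return focalLengthPixels * beamSeparation / pixelSeparation
}

// Example: beams 10 cm apart, dots 50 px apart, calibrated f ≈ 2800 px
// distance(beamSeparation: 0.10, pixelSeparation: 50, focalLengthPixels: 2800) ≈ 5.6 m
```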
I want to create a 360-degree turntable showing lots of pictures (12, 24, or 36), with the rotation controlled by touch events (like that example, but coded natively for an iOS app).
The simplest idea is to load the specific UIImage that corresponds to the touch position.
Any ideas what the best practice for this is? Is there a chance to create the image turntable faster with the help of Core Animation? Any other hints, or known projects where I could get some help with this?
Thanks for your time and hints in the right direction.
Here's another example, from an iPad app for the Audi A8.
From the first example it becomes obvious that the objects have actually been photographed at each angle of rotation. This is the really tricky part. You will need a tripod and a camera with a remote control, and if possible also a rotating platter to keep the angles consistent.
The implementation is relatively straightforward. As you guessed, you just track the touch positions and, depending on the delta from the last touch position, show the appropriate image.
Well, you can just use the HTML/CSS/JS from that same example: load it in a UIWebView in your app, with your site embedded as a resource...
Subclass UIImageView, load an array of your frames, handle horizontal touch movement across the screen, and change the active image accordingly. Don't forget to loop your images. :)
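A minimal Swift sketch of that UIImageView-subclass approach; the class name, frame array, and drag sensitivity are all placeholders to adapt:

```swift
import UIKit

// `frames` holds the 12/24/36 photographs in rotation order; horizontal pan
// distance is mapped to a frame index, wrapping so the turntable loops.
final class TurntableView: UIImageView {
    private let frames: [UIImage]
    private let pixelsPerFrame: CGFloat = 10   // drag sensitivity, tune to taste
    private var startIndex = 0
    private var index = 0 { didSet { image = frames[index] } }

    init(frames: [UIImage]) {
        self.frames = frames
        super.init(image: frames.first)
        isUserInteractionEnabled = true        // UIImageView disables this by default
        addGestureRecognizer(UIPanGestureRecognizer(target: self,
                                                    action: #selector(pan(_:))))
    }

    required init?(coder: NSCoder) { fatalError("init(coder:) has not been implemented") }

    @objc private func pan(_ gesture: UIPanGestureRecognizer) {
        if gesture.state == .began { startIndex = index }
        let steps = Int(gesture.translation(in: self).x / pixelsPerFrame)
        let count = frames.count
        // Positive modulo so dragging left past frame 0 wraps to the last frame.
        index = ((startIndex + steps) % count + count) % count
    }
}
```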
I am trying to build an app that detects when a user puts their lips on the screen. How could I get an image of their lips when they put them on the screen?
I know how to detect finger touches, would it be similar to that?
Edit: Also, wondering about doing this for Android.
I believe the iPhone hardware supports somewhere between 10 and 20 touch points, but the SDK only gives you access to 5. To get a good outline of the lips you'd need far more touch points than that.
Could you use the front facing camera and avoid the obvious hygiene issues?!
You can't get the shape of a touch, so you wouldn't be able to detect lips on the screen.
As others have posted here, it is not possible to detect all of the contact points between the lips and the screen. Yes, it would be similar to detecting finger touches, but more difficult. The iPhone screen is capacitive, so it should register lips, which are similar to fingertips. (When I'm outside in the cold and need to scroll with my gloves on, I've used my chin.)
As for the shape: you may be able to map multiple touches, but touches don't carry any particular shape, so you won't be able to draw the lips from them. You may want to consider preloading generic pictures of lips and then resizing one to match the size of the user's lips, based on where the touch inputs land.
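A sketch of that preloaded-picture idea, assuming a stock image in the bundle named "generic-lips" (a made-up asset name); it sizes the image to the bounding box of whatever touches the system reports:

```swift
import UIKit

class LipsView: UIView {
    // Hypothetical preloaded lip art; swap in your own asset.
    let lipsImageView = UIImageView(image: UIImage(named: "generic-lips"))

    override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
        guard let all = event?.allTouches, all.count >= 2 else { return }
        let points = all.map { $0.location(in: self) }
        let xs = points.map(\.x), ys = points.map(\.y)
        // Bounding box of all reported touch points, padded a little.
        let box = CGRect(x: xs.min()!, y: ys.min()!,
                         width: xs.max()! - xs.min()!,
                         height: ys.max()! - ys.min()!)
        lipsImageView.frame = box.insetBy(dx: -20, dy: -20)
        if lipsImageView.superview == nil { addSubview(lipsImageView) }
    }
}
```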
Is there a way to capture how much of the screen is in contact with the user? I assume there is, since this finger-painting app shows the iPad responding only to the pixels the user makes contact with.
Thanks so much in advance for your help!
The size of the touch is abstracted away by the framework, and UITouches only contain calculated (“best estimated”) points instead of the raw, actual areas that were touched. I would guess that the “pressure” was calculated from the duration and the direction of the touch.
In a nutshell, there is no public API to get the contact area.
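One update for later readers: since iOS 8, UITouch does expose a coarse size estimate, majorRadius, though it is still a single circle per touch rather than the real contact outline:

```swift
import UIKit

class PressureHintView: UIView {
    override func touchesMoved(_ touches: Set<UITouch>, with event: UIEvent?) {
        for touch in touches {
            // majorRadius is in points; majorRadiusTolerance gives its accuracy.
            print("radius ≈ \(touch.majorRadius) ± \(touch.majorRadiusTolerance) pt")
        }
    }
}
```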
I don't think Apple provides APIs for the size of the touch or, as @nickthedude said (I think), any way to measure pressure. Basically, you need to implement your own algorithm/policy for determining line thickness/opacity/other effects. I believe a common way to do this is to measure the time taken for the stroke and work from there; for instance, if the user moved more quickly, you might want a thinner line segment. Apple really should just provide a canvas view of some kind. Best of luck!
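A sketch of that speed-to-thickness heuristic; the speed thresholds and the drawSegment stub are made-up starting points, not anything Apple provides:

```swift
import UIKit

class StrokeView: UIView {
    private var lastPoint: CGPoint?
    private var lastTime: TimeInterval = 0

    override func touchesMoved(_ touches: Set<UITouch>, with event: UIEvent?) {
        guard let touch = touches.first else { return }
        let point = touch.location(in: self)
        let time = touch.timestamp
        defer { lastPoint = point; lastTime = time }   // always record state
        guard let previous = lastPoint, time > lastTime else { return }

        let dist = hypot(point.x - previous.x, point.y - previous.y)
        let speed = dist / CGFloat(time - lastTime)    // points per second
        // Map speed to width: slow (≤100 pt/s) → 12 pt, fast (≥2000 pt/s) → 2 pt.
        let t = min(max((speed - 100) / 1900, 0), 1)
        drawSegment(from: previous, to: point, width: 12 - t * 10)
    }

    func drawSegment(from: CGPoint, to: CGPoint, width: CGFloat) {
        // ... append to a path / CAShapeLayer and redraw ...
    }
}
```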
To get the exact area you may have to roll your own, but you can get UIEvents pretty easily and then do some magic from there. Basically, implement/override touchesBegan, touchesEnded, and touchesMoved on the UIView in question and put your custom code there.
Looking at the video, maybe the number of touches in the UIEvent set corresponds to the "pressure" of the touch; then again, maybe not.
What if you laid down a series of successively smaller square UIViews wherever the user touched? Then, if the touches "spilled" into the larger UIViews behind the smaller front ones, you could conjecture that the touch pressure was harder. Something to try, I guess. Good luck.
Why not just describe what you want to do and focus on asking about that instead? It may not have anything at all to do with the example that has you so enthralled. I could use a camera to monitor your hand from across the table and paint pixels on the screen via Bluetooth, completely ignoring any contact between your fingers and the screen.
Is it possible to detect exactly how much of a finger is in contact with the screen? Say I wanted to make a fingerprinting app: how would I detect the outline of a person's finger?
No. The UITouch system does a lot of processing to reduce the larger touched area to a single point location per touch. This is meant to aid the user, since there can be some difference between where one thinks one is touching and where the screen is actually touched.