In my app I've got a CCTexture2D that I create from an image; I then create a sprite from the texture and add it to the stage, like this:
UIImage* faceUIImage = [UIImage imageNamed:@"face.jpg"];
CCTexture2D* faceTest = [[CCTexture2D alloc] initWithImage:faceUIImage];
CCSprite* testSprite = [CCSprite spriteWithTexture:faceTest];
[self addChild:testSprite];
testSprite.position = CGPointMake(200, 100);
(I know this might seem a weird way to do it, but I'm doing some stuff to the pixel data elsewhere in the program, so I can't just use a pure sprite.)
In the simulator, it looks fine:
However, when I run it on my iPhone (running 4.2), I get this:
Ignoring the fact that they are in different orientations, how come the image on my device is in the background and really dark? Can anyone help me with this?
Thanks,
Rich
Did you try:
[self addChild:testSprite z:1];
Eventually I fixed this simply by changing the jpg to a png instead... Maybe cocos2d doesn't really like jpg?
I am developing a QR code scanner and I am using ZBarSdk for that. I am able to successfully read all the QR codes and parse them into meaningful information. While looking at some of the examples, I found that ZBarReaderViewController can place an image over the scan area while scanning, mimicking it. They use the cameraOverlay feature for this; however, I am not finding the same for ZBarReaderView. How can I put an image on the scan surface?
I could have used ZBarReaderViewController, but my app is designed in such a manner that if I use ZBarReaderViewController, I will face issues with presenting it as a modal view controller, so I badly need a camera overlay with ZBarReaderView. Below is the link where I got to know about the cameraOverlayView feature of ZBarReaderViewController.
Is it possible to put a square bracket for the focus when the camera appears in the ZBar SDK?
I made it with the below code, after initializing the ZBarReaderView:
UIImage *ivg = [UIImage imageNamed:@"green.png"];
// CGRectMake takes (x, y, width, height) in that order.
UIImageView *bh = [[UIImageView alloc] initWithFrame:CGRectMake(x, y, width, height)];
bh.image = ivg;
[self.view addSubview:bh];
Here is the deal, I have tried using both of the methods shown below to save an image to the photo library. Upon saving the image successfully and launching the Photos App to take a look at the image, the app automatically scales/resizes the image to full screen.
Is there any way to prevent this? It is annoying because the user needs to pinch the image just to see the whole thing. I have noticed a few of the more popular camera apps do not have this issue when saving to the photo library. I am curious what method they are using.
//Save to Photo Library
ALAssetsLibrary* library = [[ALAssetsLibrary alloc] init];
//CIImage *ciImage = image.CIImage;
[library writeImageToSavedPhotosAlbum:finalImage.CGImage
                          orientation:ALAssetOrientationUp
                      completionBlock:^(NSURL *assetURL, NSError *error) {
                          NSLog(@"Done");
                      }];
Or, the second way to save to the Photo Library:
UIImageWriteToSavedPhotosAlbum(savedPhoto,nil,nil,nil);
Thank you for your help!
After doing a bit of research, it appears there is no solution to this problem. The Photos app automatically scales up the image to fill the screen.
Thanks for all of the responses.
Saving the image should be fine.
I think you can set the UIView contentMode property (a UIViewContentMode value) when you display the image; see the example after the enum below.
typedef enum {
UIViewContentModeScaleToFill,
UIViewContentModeScaleAspectFit, // contents scaled to fit with fixed aspect. remainder is transparent
UIViewContentModeScaleAspectFill, // contents scaled to fill with fixed aspect. some portion of content may be clipped.
UIViewContentModeRedraw, // redraw on bounds change (calls -setNeedsDisplay)
UIViewContentModeCenter, // contents remain same size. positioned adjusted.
UIViewContentModeTop,
UIViewContentModeBottom,
UIViewContentModeLeft,
UIViewContentModeRight,
UIViewContentModeTopLeft,
UIViewContentModeTopRight,
UIViewContentModeBottomLeft,
UIViewContentModeBottomRight,
} UIViewContentMode;
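For example, a minimal snippet for displaying the saved photo yourself (assuming the image ends up in a UIImageView you control; savedPhoto is from the question's code):

UIImageView *photoView = [[UIImageView alloc] initWithFrame:self.view.bounds];
// Aspect-fit shows the whole image without cropping; the rest of the view stays empty.
photoView.contentMode = UIViewContentModeScaleAspectFit;
photoView.image = savedPhoto;
[self.view addSubview:photoView];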
Hope it helps.
If you take a look at the FrogScroller sample application from Apple, you'll see that they initially load a low-resolution version of the image and then, when zooming, use a CATiledLayer with its size set by the setMaxMinZoomScalesForCurrentBounds function.
You might want to look at WWDC 2010's video on scroll views, as well as 2011's, for more info. Josh and Eliza do a great job explaining what's up.
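A rough sketch of the tiled-layer idea (the class name here is made up, not from the sample):

#import <QuartzCore/QuartzCore.h>

// A view backed by CATiledLayer: the layer asks for tiles on demand,
// so only the visible part of a huge image is ever drawn.
@interface TiledImageView : UIView
@end

@implementation TiledImageView
+ (Class)layerClass {
    return [CATiledLayer class];
}
- (void)drawRect:(CGRect)rect {
    // Called once per tile (possibly on background threads);
    // draw just the portion of the zoomed image that falls inside rect here.
}
@end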
Why not layer the image onto a fullscreen image with a clear background before saving?
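A sketch of that idea (assuming savedPhoto from the code above; note that the photo library flattens transparency, so the padding may come out black rather than clear):

CGSize canvasSize = [UIScreen mainScreen].bounds.size;
UIGraphicsBeginImageContextWithOptions(canvasSize, NO, 0); // NO = transparent background
// Center the original photo on the screen-sized canvas.
CGRect centered = CGRectMake((canvasSize.width - savedPhoto.size.width) / 2,
                             (canvasSize.height - savedPhoto.size.height) / 2,
                             savedPhoto.size.width, savedPhoto.size.height);
[savedPhoto drawInRect:centered];
UIImage *padded = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
UIImageWriteToSavedPhotosAlbum(padded, nil, nil, nil);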
I want to make applications similar to Talking Tom Cat, Touch Pets Cats, Virtual Monkey, and Tap Zoo.
I have knowledge of basic iPhone animation.
I want to know whether I can make these apps without OpenGL, cocos2d, and other gaming environments.
Can I make them by using only the basic frameworks?
Well, if it's like Talking Tom, then there is no OpenGL ES use... they simply use images and animate them, using something like this:
aniImage = [[UIImageView alloc] init];
// Build the frame array (o1.png ... o18.png).
NSMutableArray *imgsArr = [NSMutableArray arrayWithCapacity:18];
for (NSInteger i = 1; i <= 18; i++) {
    NSString *name = [NSString stringWithFormat:@"o%ld.png", (long)i];
    [imgsArr addObject:[UIImage imageNamed:name]];
}
[aniImage setAnimationImages:imgsArr];
[aniImage setAnimationRepeatCount:1];
[aniImage setAnimationDuration:0.2];
If you want to see all the images they use, follow these steps:
1) Buy the free version on your iPhone/iPad.
2) Transfer your purchases to your Mac or Windows machine.
3) Drag the .ipa file from Library -> Apps to your desktop (just drag the app icon from iTunes to the desktop).
4) Rename the .ipa file to .zip.
5) Extract the zip file.
6) You will get a folder named "Payload" in it.
7) Open the .app file (on Windows it opens automatically because it is a folder; on Mac, right-click it and select "Show Package Contents").
8) In the .app folder you will find a folder that contains all the images used by the code above.
Hope it helps.
There are two approaches:
One - make them as 3D models with animations. The best suited tool for this is Unity3d (it brings development time down to a great extent), but the basic iPhone version costs $400 to publish. I think it's worth it.
Two - make all of them frame animations. Use either Cocos2d or UIImageView's frame animation to play them. When you have too many frames, you need to dealloc and alloc them on the fly, or you might hit memory warnings; see the sketch after this list. You might think loading on the fly would be costly/intensive, but it's not as bad as you would expect, since iDevices have flash memory in them and we have tried this.
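A sketch of the load-on-the-fly idea (animView, frameCount, and the file names are placeholders; the point is that imageWithContentsOfFile: skips the cache that imageNamed: keeps, so the frames are freed once the array is released):

NSMutableArray *frames = [NSMutableArray arrayWithCapacity:frameCount];
for (NSInteger i = 1; i <= frameCount; i++) {
    NSString *path = [[NSBundle mainBundle] pathForResource:[NSString stringWithFormat:@"frame%ld", (long)i]
                                                     ofType:@"png"];
    // Unlike imageNamed:, this does not cache, so memory is reclaimed when frames is released.
    [frames addObject:[UIImage imageWithContentsOfFile:path]];
}
animView.animationImages = frames;
[animView startAnimating];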
I personally would say use the 2nd method if you just need a clone. But with 3D models with animations in them, you get more freedom to add creativity, like a 3D camera to look around. So you can poke it in the butt and have a different animation for that, etc. You could also have overlay textures and do things like adding tattoos to the 3D model.
You can make Tap Zoo and the others using images or sprite sheets, perhaps on UIButtons, or by enabling user interaction on images. But you would have to write a lot of code to implement something that big; it wouldn't be difficult to accomplish, just many lines of code (seriously, many more than 100K). So take my advice and learn cocos2d, Quartz 2D, or OpenGL; that way you will not only learn something new but also be able to make nice games. For an OpenGL ES primer, follow this link:
http://appstapnet.blogspot.in/2009/04/opengl-es-from-ground-up-part-1-basic.html?m=1
What is the best pattern for displaying images both on iOS3 and iOS4?
For example, here's code for a custom activity animation:
activityImageView.animationImages = [NSArray arrayWithObjects:
                                     [UIImage imageNamed:@"s01"],
                                     [UIImage imageNamed:@"s02"],
                                     [UIImage imageNamed:@"s03"],
                                     [UIImage imageNamed:@"s04"],
                                     [UIImage imageNamed:@"s05"],
                                     nil];
On low-res and high-res devices running iOS 4, the images load as expected. On iOS 3 the images cannot be found, because imageNamed: there requires the .png extension.
What's the right way to deal with this situation? Do I need to throw hacky conditionals throughout my code or is there a cleaner method?
Is something from Matt Gallagher's article a good approach?
http://cocoawithlove.com/2010/07/tips-tricks-for-conditional-ios3-ios32.html
You still get the automatic '@2x' loading behavior on iOS 4 even if you use the file extension, so [UIImage imageNamed:@"s01.png"] is fine (it will find "s01@2x.png" if you're running on a Retina display with OS 4), plus it's backwards compatible with iOS 3, where it will always find the low-res version of the file.
See the "Loading Images into Your Application" section here.
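So the animation array from the question works as-is once the extensions are put back, e.g.:

activityImageView.animationImages = [NSArray arrayWithObjects:
                                     [UIImage imageNamed:@"s01.png"],
                                     [UIImage imageNamed:@"s02.png"],
                                     // ...and so on...
                                     nil]; // iOS 4 still picks up s01@2x.png automatically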
Is there any way to test the iPhone camera in the simulator without having to deploy on a device? This seems awfully tedious.
There are a number of device-specific features that you have to test on the device, but it's no harder than using the simulator. Just build a debug target for the device and leave it attached to the computer.
List of actions that require an actual device:
the actual phone
the camera
the accelerometer
real GPS data
the compass
vibration
push notifications...
I needed to test some custom overlays for photos. The overlays needed to be adjusted based on the size/resolution of the image.
I approached this in a way similar to the suggestion from Stefan: I decided to code up a "dummy" camera response.
When the simulator is running I execute this dummy code instead of the standard "captureStillImageAsynchronouslyFromConnection".
In this dummy code, I build up a "black photo" of the necessary resolution and then send it through the pipeline to be treated like a normal photo, essentially providing the feel of a very fast camera.
// Pick a dummy size that matches the device orientation (8 MP, like most current devices).
CGSize sz = UIDeviceOrientationIsPortrait([[UIDevice currentDevice] orientation]) ? CGSizeMake(2448, 3264) : CGSizeMake(3264, 2448);
UIGraphicsBeginImageContextWithOptions(sz, YES, 1);
[[UIColor blackColor] setFill];
UIRectFill(CGRectMake(0, 0, sz.width, sz.height));
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
// JPEG data, just like a real capture would hand back.
NSData *imageData = UIImageJPEGRepresentation(image, 1.0);
The image above is equivalent to the 8 MP photos that most current devices produce. Obviously, to test other resolutions you would change the size.
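For the branching itself, one common approach is a compile-time check. A sketch, assuming a stillImageOutput/connection pair from a normal AVFoundation capture setup and a hypothetical processCapturedImageData: entry point into the photo pipeline:

#if TARGET_IPHONE_SIMULATOR
    // Simulator: feed the dummy JPEG data into the normal photo pipeline.
    [self processCapturedImageData:imageData];
#else
    // Device: use the real capture path.
    [stillImageOutput captureStillImageAsynchronouslyFromConnection:connection
                                                  completionHandler:^(CMSampleBufferRef buffer, NSError *error) {
        NSData *realData = [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:buffer];
        [self processCapturedImageData:realData];
    }];
#endif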
I never tried it, but you can give it a try!
iCimulator
Nope (unless they've added a way to do it in 3.2, haven't checked yet).
I wrote a replacement view to use in debug mode. It implements the same API and makes the same delegate callbacks. In my case I made it return a random image from my test set. Pretty trivial to write.
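The core of such a stand-in might look like this (a sketch, not the poster's actual code; FakeCameraViewController and the test image names are made up):

// Pretends to be a camera: any tap "takes" a random picture from the test set
// and fires the same delegate callback UIImagePickerController would.
@interface FakeCameraViewController : UIViewController
@property (nonatomic, assign) id<UINavigationControllerDelegate, UIImagePickerControllerDelegate> delegate;
@end

@implementation FakeCameraViewController
- (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event {
    NSString *name = [NSString stringWithFormat:@"test%u.png", arc4random() % 5 + 1];
    NSDictionary *info = [NSDictionary dictionaryWithObject:[UIImage imageNamed:name]
                                                     forKey:UIImagePickerControllerOriginalImage];
    // The cast is a cheat so existing delegate code compiles unchanged.
    [self.delegate imagePickerController:(UIImagePickerController *)self
           didFinishPickingMediaWithInfo:info];
}
@end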
A common reason for needing access to the camera is to make screenshots for the App Store.
Since the camera is not available in the simulator, a good trick (the only one I know) is to resize your view to the size you need, just for the time it takes to grab the screenshots. You will crop them later.
Of course, you need to have the device with the bigger screen available.
The iPad is perfect for testing layouts and making snapshots for all devices.
Screenshots for the iPhone 6+ will have to be stretched a little (scaled by 1.078125 - not a big deal…).
Good link to an iOS device resolutions quick reference: http://www.iosres.com/
Edit: In a recent project that uses a custom camera view controller, I replaced the AVPreview with a UIImageView in a target that I only use to run in the simulator. This way I can automate screenshots for iTunes Connect uploads. Note that the camera control buttons are not in an overlay, but in a view over the camera preview.
The answer from @Craig below describes another method that I found quite smart - it also works with a camera overlay, contrary to mine.
Just found a repo on GitHub that helps simulate camera functions in the iOS Simulator using images, videos, or your MacBook camera.
Repo