How do I go about including the Tortuga 22 NinePatch library in my Xcode project? - iphone

First of all, there is finally NinePatch support for the iPhone - BIG thanks to the Tortuga 22 team for that. Unfortunately, I have not been able to add their library to my project.
What is NinePatch?
The Tortuga 22 blog post.
The source code on GitHub.
If I just drag and drop the source files into my project I get a ton of "No such file or directory" errors. If I reference the libNinePatch.a file as an external framework I get the same result.
What is the proper way of doing this? There are no instructions on their part, so I guess there must be a fairly straightforward way of doing it.
Thanks in advance.
//Abeansits

I'd just like to tell everyone still struggling with this that I've uploaded a podspec to the CocoaPods repository.
This means that if your project uses CocoaPods (and why wouldn't it?), you can just add:
pod 'Tortuga22-NinePatch'
to your Podfile and you'll have NinePatch support. (Note that it's still not ARC-compatible; pull requests are welcome!)
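For reference, a minimal Podfile using it might look like this (the platform line is just an example; match it to your own deployment target):
platform :ios, '5.0'
pod 'Tortuga22-NinePatch'
Then run pod install and open the generated .xcworkspace instead of the .xcodeproj.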

I know this is an old question, but just to help people searching for nine-patch in Objective-C: from iOS 5 on, the UIImage class has a method that does what NinePatch does.
From UIImage reference:
Discussion
You use this method to add cap insets to an image or to
change the existing cap insets of an image. In both cases, you get
back a new image and the original image remains untouched.
During scaling or resizing of the image, areas covered by a cap are
not scaled or resized. Instead, the pixel area not covered by the cap
in each direction is tiled, left-to-right and top-to-bottom, to resize
the image. This technique is often used to create variable-width
buttons, which retain the same rounded corners but whose center region
grows or shrinks as needed. For best performance, use a tiled area
that is a 1x1 pixel area in size.
Availability: Available in iOS 5.0 and later.
Declared in: UIImage.h
An example of use, if you want an image with a fixed top cap:
UIImage *image = [UIImage imageNamed:@"YourImage.png"];
image = [image resizableImageWithCapInsets:UIEdgeInsetsMake(25.0, 0, 0, 0)];
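A common use of the resulting image (a minimal sketch; the button outlet, image name and inset values are placeholders) is as a stretchable button background:
// Corners stay crisp; only the middle region stretches as the button resizes.
UIImage *background = [[UIImage imageNamed:@"ButtonBackground.png"]
    resizableImageWithCapInsets:UIEdgeInsetsMake(12.0, 12.0, 12.0, 12.0)];
[myButton setBackgroundImage:background forState:UIControlStateNormal];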
Hope it helps.

You should be able to add the files to your project. Answers to this SO question may help.
Also, be aware that UIImage does provide stretchable images. See -stretchableImageWithLeftCapWidth:topCapHeight: for details. That won't help you if you need to read images in actual nine patch format, perhaps for sharing resources with an Android app, but it's often all that's really needed.
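For example, the pre-iOS 5 stretchable-image API looks like this (a small sketch; the image name, cap sizes and button outlet are placeholders):
// Everything outside the left/top caps is stretched; the caps themselves are not.
UIImage *raw = [UIImage imageNamed:@"ButtonBackground.png"];
UIImage *stretchable = [raw stretchableImageWithLeftCapWidth:10 topCapHeight:10];
[myButton setBackgroundImage:stretchable forState:UIControlStateNormal];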

Are you #include-ing the main header file(s) in whichever classes you need them or in the prefix.pch file?
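If not, one common approach is to pull the library header in through the precompiled prefix header so every class sees it. A sketch (the .pch file name is project-specific, and the import path depends on how you added the library):
// YourProject_Prefix.pch
#ifdef __OBJC__
    #import <Foundation/Foundation.h>
    #import <UIKit/UIKit.h>
    #import <TUNinePatch.h>   // adjust to match how the NinePatch headers are referenced
#endif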

If you download their source package from GitHub, you'll see that they've included an example project, NinePatchDemo. Inspecting this should give you all the information you need.
You'll see that they just dropped the NinePatch Xcode Project into the NinePatchDemo project, under "Other Sources". You can drag and drop the NinePatch project into your own in the same way. You can check the build and target settings to make sure you're including any paths/libraries that you need in order to build.
It looks like you want -all_load -ObjC in Other Linker Flags for your target, and if your project folder is at the same level as the NinePatch folder (as in the download from GitHub), you'll also want to set the Header Search Paths on your target to ../NinePatch/**, varying accordingly if you've moved any folders around.
Other than that, the only thing I can see is that you will, of course, need to include the headers with #import <TUNinePatch.h>.
NinePatchDemo built fine for me, so I can verify that their source does work.

Related

Is there a way to insert an image in Xcode?

I'm very new to Xcode and Swift, so for now I'm creating a UI that requires an image to appear on the screen. I would like to add the image to the LaunchScreen.storyboard file in Xcode. Can anybody help me?
Images are typically stored in .xcassets files, which show up as folders in your project. Your project probably already has one called Assets.xcassets, and you can just drag and drop images into that file. Then you can add an image view to your LaunchScreen.storyboard and set it to use the image you just added.
Beyond that, let me suggest that you go through some of Apple's SwiftUI tutorials. They'll help you learn about Swift, but you'll also get a lot of exposure to just using Xcode. If you have to figure it all out on your own, it'll take a lot longer.

Initializing a CCSprite with a png

I'm starting to use Cocos2d-x and I'm having an issue creating a CCSprite from a .png I have. It's as if it couldn't find the image in the resources, or it is not compatible, because if I choose another one, it works. I also tried with a JPG and it didn't work, so apparently it only works with some kinds of images.
How can I make an image compatible with CCSprite?
Thanks
You don't need to add it to the project, as build-native.sh (or .bat) does that.
The image should be in the folder called 'Resources' (which will be in the same folder as 'Classes' and 'proj.android').
When you build your app it will be copied into proj.android/assets, so check that it gets put in there after you build.

Hi-Res #2x image not being picked up for tab bar item

I have a TabBarController that sets the image for the tab like so, in the -init method:
self.tabBarItem.image = [UIImage imageNamed:@"tabImage.png"];
I have a tabImage@2x.png file in the resources. In the iPhone 4 simulator or on the phone, the hi-res image isn't being picked up - the low-res version is simply being scaled up.
Any ideas why this might be?
EDIT: Some more info:
If I try and explicitly use tabImage@2x.png (or just tabImage@2x) then the tab image I see is extremely large and blown up beyond the bounds of the tab, as if it's being scaled from 60px to 120px. So it looks like whatever name I supply is being treated as a scale=1.0 image.
Note that the simulator is not case-sensitive, but the device is. Make sure case matches EXACTLY. If you've changed the case of the filename at some point, you'll need to clean and rebuild. Sometimes, for the simulator, I've had to actually blow away the folder in Library/Application Support/iPhone Simulator/4.3/Applications/ to get the rebuild to pick up the renamed image.
Always use
[UIImage imageNamed:#"foo.png"]
This will work on 3.x and 4.x devices, and on the 4.x Simulator. Devices with Retina Displays (and the 4.x simulator) will magically pick up the #2x versions of your images; iOS has been modified to be smart about this function and #2x.png files.
Make sure you have both the #2x.png and the normal.png added to the project file, and do a full clean & build. As others have mentioned, verify the size of the images, too; apparently if they're not exactly 2x the dimensions it won't work (I haven't verified this myself).
If you leave the .png off, it will only work on iOS 4.0. So if you're building a 4.0+ only app, you can ask for:
[UIImage imageNamed:#"foo"]
If you have only one hi-res image and want to use it on both Retina and non-Retina devices, then you'll have to change view.contentMode to scale to fit.
I had the same problem. It turned out that my png was not square. Solution: make it square and it will work.
Are you sure the file has been added to the Xcode project and is visible in the project explorer?
I had this problem as well.
Make 2 images:
30x30 pixels
60x60 pixels
Suffix the 60x60 pixel image with @2x. For example, tabBarImage@2x.png. Then, in your storyboard or code, you can specify the regular one, tabBarImage.png, and iOS will choose the @2x version at its discretion.
You can leave the .png off now. I believe it will still work, but you may try that.
I just went through a few hours of redoing art in The Gimp and trying to get it recognized and loaded by my app on an iPhone 4.
I ran into the problem described with certain images with a @2x suffix not being recognized and loaded.
I was not able to discern any pattern. My images are all loaded using [UIImage imageNamed:@"<name>.png"] into a singleton. I inspected the image scale settings post-startup and some were 1.0 (the old art) and some were 2.0 (the new art).
The only way I was able to resolve this problem was to delete and re-add the high resolution images that were not being recognized.
Two silly mistakes (both of which I've made before) that can cause this problem:
Accidentally naming the small versions @2x instead of the large ones.
Having the large versions be slightly mis-sized (by one pixel).
You need 2 versions of your images, and both need to be at the same location in the project folder and added to the project:
image.png 60x60
image@2x.png 120x120
Then simply use [UIImage imageNamed:@"image.png"]
I did it this way with self-made buttons and it worked for me (iOS 4.1).
Another thing to look out for is having two images with the same name.
I had the same issue. The @2x image had the wrong build target checked (ServiceTests instead of MyProject).
I had exactly the same problem.
Make two images: im1.png and im1@2x.png
Call imageNamed: with the first one.
Note: imageNamed: returns a cached image you don't own, so either use it transiently, e.g. [[UIImageView alloc] initWithImage:[UIImage imageNamed:@"im1.png"]], or initialize the UIImage yourself.

How to display background image or foreground image of an iphone app

This might come across as a very silly question. Since this morning I have been trying to place an image behind my first "Hello World" app on the iPhone. I tried doing so in Interface Builder using a UIImageView object from the library, and also tried doing so programmatically.
Googling hasn't been much help either. Can anyone please demonstrate a simple app that shows how to display an image in an iPhone app, in the background and/or in the foreground?
The step you're missing is likely adding the image to the project.
Drag an image from the finder to the Resources section of the project outline. Xcode infers the action from the file type, and images will be copied to the Resources folder. You can check it out in the Targets section of the project outline, if you disclose your Hello World target you'll see the different build phases, including a "Copy bundle resources" build phase.
Anyway once the image is added to your project, the UIImageView inspector image name will autocomplete your image name, and all should work. No code needed.
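If you'd rather do it programmatically instead (a minimal sketch, assuming the image has already been added to the project as above; the file name is a placeholder), add a UIImageView and push it behind everything else:
// e.g. in your view controller's -viewDidLoad
UIImageView *background = [[UIImageView alloc] initWithImage:[UIImage imageNamed:@"Background.png"]];
background.frame = self.view.bounds;
[self.view addSubview:background];
[self.view sendSubviewToBack:background];   // keeps labels etc. in front
[background release];   // omit this line under ARC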
You could try to add a UIImageView, in IB, to your view and send it to the back (behind all labels, etc.).
edit: I just realized, you already tried it in IB. I was ignorant, I'm sorry.
The thing is, I do this same trick myself, and it works. If you add a UIImageView to your view in IB, what happens? Does it stick to the foreground? If so, you can re-arrange the position of the object by using the 'Layout' menu.
edit2: I just searched the iphone reference library, and the "Hello World" example also shows how it is done. (http://developer.apple.com/iphone/prerelease/library/samplecode/HelloWorld/index.html)

Performing iPhone optimization on externally downloaded PNGs

When a PNG is added to an Xcode iPhone project, the build process optimizes it using pngcrush. Once on the device, the image's rendering performance is very fast.
My problem is that my application downloads its PNGs from an external source at runtime (from Picasa Web albums, using the Google Data APIs). Unfortunately, these images' performance is quite bad. When I do custom rendering on top of the image, it seems 100x slower than its internally stored counterparts. I strongly suspect this is because the downloaded images haven't been optimized.
Does anyone know how I can optimize an externally downloaded PNG at runtime on the iPhone? I'm hoping for a class that does this. I even considered adding pngcrush's source code to my app, which seems drastic. I haven't been able to find a decent answer myself. I'd be very grateful for any help.
Thanks!
Update:
Some folks have suggested that it may be due to the file's size, but it isn't. During my tests, I added a toggle button to switch between using the embedded version and the downloaded version of exactly the same PNG. The only difference is that the embedded one was optimized by 'pngcrush' during compilation. This does some byte-swapping (from RGBA to BGRA) and pre-multiplication of alpha. (http://iphonedevelopment.blogspot.com/2008/10/iphone-optimized-pngs.html)
Also, the performance I'm referring to isn't the downloading, but the rendering. I superimpose custom painting on top of the image (overriding the drawRect method of the UIView), and it's very choppy when the background is the downloaded version, and very smooth when it's the embedded (and therefore optimized) version. Again, it's exactly the same file. The only difference is the optimization, which I'm hoping I can perform on the image at runtime, on the device, after downloading it.
Thanks again for everyone's help!
That link you posted pretty much answers your question.
During the build process Xcode pre-processes your PNG so it's in a format that's more friendly to the graphics chip in the iPhone.
PNGs that have not been processed like this will likely use a slower rendering path, one that deals with the non-native format and the fact that the alpha must be computed separately for each color.
So you have two options:
1. Perform the same work that pngcrush does and swap ordering/pre-multiply alpha. The speed-up may be due to one or both of these.
2. After you have loaded your image, you can "create" a new image from it. This new image should be in the iPhone's native format and so should perform faster. The downside is it could potentially take up a bit more memory.
E.g.
// Redraw the downloaded image into a new image context; the result comes back
// in the device's preferred (pre-multiplied) format.
CGRect area = CGRectMake(0, 0, oldImage.size.width, oldImage.size.height);
UIGraphicsBeginImageContext(area.size);
[oldImage drawInRect:area];
UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
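Alternatively, for option 1, the redraw can target an explicitly BGRA, pre-multiplied bitmap context via Core Graphics. This is only a sketch of the idea, not code from any library:
// Draw the downloaded image into a BGRA, premultiplied-alpha bitmap context.
size_t width = CGImageGetWidth(oldImage.CGImage);
size_t height = CGImageGetHeight(oldImage.CGImage);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef ctx = CGBitmapContextCreate(NULL, width, height, 8, width * 4, colorSpace,
                                         kCGImageAlphaPremultipliedFirst | kCGBitmapByteOrder32Little);
CGColorSpaceRelease(colorSpace);
CGContextDrawImage(ctx, CGRectMake(0, 0, width, height), oldImage.CGImage);
CGImageRef converted = CGBitmapContextCreateImage(ctx);
UIImage *fastImage = [UIImage imageWithCGImage:converted];
CGImageRelease(converted);
CGContextRelease(ctx);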
The fact that you say it "seems" 100x slower indicates that you have not performed any experimentation, but made a guess (it must be the PNG optimization), and are now going down a path based on a hunch.
You should spend time to confirm what the problem is before you try to solve it. My gut says that PNG optimization shouldn't be the issue: that mostly affects the loading of images, but once they are in memory it doesn't matter what file format they were originally in.
Anyway, you should try an A-B comparison, either get your code to load an optimized PNG from somewhere else and see how it compares, or make a test app that just does some drawing on the two PNG types. Once you've confirmed what the problem is, then you can figure out if you need to compile pngcrush into your app.
On the surface, it sounds like something else is at play here. Any additional image manipulation should only add time until it's displayed onscreen...
Would it be at all possible to get the server to gzip the images by sending the appropriate HTTP header? (If it even helps file size much, that is.)
Temporarily using the pngcrush source might be a good test as well, just to get some measurements.
Are you storing the png at the original downloaded size? If it's a large image it'll take significantly longer to render.
Well, it seems that a good way to do it (since you can't run pngcrush on the iPhone and expect that to speed it up) would be to make your requests through a proxy that runs pngcrush. The proxy would have the horsepower to actually give you some gain over the 100x pain you feel.
Try pincrush to convert the normal PNG file into a crushed PNG file.
You say you are drawing on top of the image by overriding a UIView's drawRect: method. Are you trying to do some animation by repeatedly drawing the whole image with your custom stuff on top of it?
You might get better results if you put your custom stuff in a separate view or layer, and let the OS deal with compositing the result over the background. The OS will only update the parts of the screen that you actually change, and won't be repainting the entire image as often.
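For instance (a rough sketch only; MyOverlayView and downloadedImage are placeholder names for your own drawRect:-overriding view and the fetched image), the layered approach could look like this:
// Show the downloaded PNG in a plain UIImageView and draw the dynamic parts
// in a separate transparent view stacked on top of it.
UIImageView *photoView = [[UIImageView alloc] initWithImage:downloadedImage];
photoView.frame = self.view.bounds;
MyOverlayView *overlay = [[MyOverlayView alloc] initWithFrame:self.view.bounds];
overlay.opaque = NO;
overlay.backgroundColor = [UIColor clearColor];
[self.view addSubview:photoView];
[self.view addSubview:overlay];   // only this view is redrawn on -setNeedsDisplay
[photoView release];   // omit under ARC
[overlay release];     // omit under ARC
This way the expensive background image is composited by the OS, and only your overlay's drawRect: runs when the dynamic content changes.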