I am currently doing image modification, using a class of my own named "Image".
It looks like this:
@interface Image : NSObject {
@public
    int height;
    int width;
    float *data;
}
So data holds floats in RGBA format, meaning they come in groups of four, one group per pixel. I can't change my Image class.
I need to create a UIImage from my Image. I'm trying something like this:
+(UIImage *)getUIImageFromImage:(Image *)_img {
NSData * imageData = (NSData *) _img->data;
return [[UIImage alloc] initWithData:imageData];
}
Am I doing something wrong? Is my cast to (NSData *) okay?
No it ain't. NSData is an Objective-C object. float * is a raw pointer to one or more float values.
If you tell us where this data comes from and what format it's in, we may be able to suggest a way to convert it to either an NSData object or to a CGImage Quartz value, which can then be wrapped in a UIImage and used with UIKit classes.
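For example, if the floats are non-premultiplied RGBA samples in the 0..1 range (an assumption, since the question doesn't say), one possible sketch is to convert them to 8-bit samples, wrap those in an NSData, and build a CGImage from it:
// A minimal sketch under the assumptions above; the method name mirrors the question.
+ (UIImage *)getUIImageFromImage:(Image *)_img {
    int w = _img->width;
    int h = _img->height;
    size_t count = (size_t)w * h * 4;

    // Core Graphics wants integer samples here, so convert each float to an 8-bit value.
    unsigned char *bytes = malloc(count);
    for (size_t i = 0; i < count; i++) {
        float v = _img->data[i];
        if (v < 0.0f) v = 0.0f;
        if (v > 1.0f) v = 1.0f;
        bytes[i] = (unsigned char)(v * 255.0f + 0.5f);
    }
    NSData *pixelData = [NSData dataWithBytes:bytes length:count]; // copies the buffer
    free(bytes);

    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGDataProviderRef provider = CGDataProviderCreateWithCFData((CFDataRef)pixelData);
    CGImageRef cgImage = CGImageCreate(w, h,
                                       8,        // bits per component
                                       32,       // bits per pixel (RGBA)
                                       w * 4,    // bytes per row
                                       colorSpace,
                                       (CGBitmapInfo)kCGImageAlphaLast,
                                       provider, NULL, NO, kCGRenderingIntentDefault);
    UIImage *result = [UIImage imageWithCGImage:cgImage];

    CGImageRelease(cgImage);
    CGDataProviderRelease(provider);
    CGColorSpaceRelease(colorSpace);
    return result;
}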
I am currently taking my first steps with iPhone development through MonoTouch, and I am playing with a UIImage that I read from the photo library.
What I want to achieve is to get the raw byte array (byte[]) of the image.
I know that there are the UIImageJPEGRepresentation and UIImagePNGRepresentation wrappers in MonoTouch. I also know how to use them. What I don't know is:
How do I decide which of these two functions to call?
That is, if the original image is a JPEG, I want to get it back as a JPEG and not as a PNG, and vice versa.
Is there a way to do this, or am I missing something here?
Once you have a UIImage, it can produce either a JPEG or a PNG using UIImageJPEGRepresentation or UIImagePNGRepresentation. The format of the original image only matters when the UIImage is created from it (it determines which image provider is used to load it).
If it's important to your app or algorithm to save the image in its original format, I think you have to keep track of that information yourself for when you write it out. I double-checked and couldn't find anything that advertises what format the image came from.
Are you going to change the image through the UIImageView, or does the image stay unchanged? If it is not changed and you just need the UI to select an image, could you get at the file bytes directly? For example, if you show the images only for selection and then upload them to a server, the UIImage could serve purely for viewing and selecting; if your data structure remembers which file each image came from, you can read the bytes back off disk and upload those. If you are changing the image in the view, then you or the user needs to decide the output format (and, for JPEG, the quality) of the image.
PREPARE
typedef NS_ENUM(NSInteger, DownloadImageType) {
    DownloadImageTypePng,
    DownloadImageTypeJpg
};
@property (assign, nonatomic) DownloadImageType imageType;
DETECT
- (void)imagePickerController:(UIImagePickerController *)picker didFinishPickingMediaWithInfo:(NSDictionary *)info {
    NSString *compareString = [[info objectForKey:UIImagePickerControllerReferenceURL] absoluteString];

    NSRange pngRange = [compareString rangeOfString:@"PNG" options:NSBackwardsSearch];
    if (pngRange.location != NSNotFound) {
        compareString = [compareString substringFromIndex:pngRange.location];
        self.imageType = DownloadImageTypePng;
        NSLog(@"%@", compareString);
    } else {
        NSLog(@"Not PNG");
    }

    NSRange jpgRange = [compareString rangeOfString:@"JPG" options:NSBackwardsSearch];
    if (jpgRange.location != NSNotFound) {
        compareString = [compareString substringFromIndex:jpgRange.location];
        self.imageType = DownloadImageTypeJpg;
        NSLog(@"%@", compareString);
    } else {
        NSLog(@"Not JPG");
    }
}
USE
if (self.imageType == DownloadImageTypePng) {
} else if (self.imageType == DownloadImageTypeJpg) {
}
I have an array of GLuint with fixed size:
GLuint textures[10];
Now I need to set a size of array dynamically. I wrote something like this:
*.h:
GLuint *textures;
*.m:
textures = malloc(N * sizeof(GLuint));
where N is the needed size.
It is then used like this:
glGenTextures(N, &textures[0]);
// load texture from image
-(GLuint)getTexture:(int)index{
return textures[index];
}
I used the answer from here, but the program crashes at runtime. How can I fix this?
The program is written in Objective-C and uses OpenGL ES.
I figured out this issue.
The code above is valid but does not seem to work. The problem is described here, but the explanation is not very clear.
The solution that works for me is to create a separate class with a GLuint property:
@interface Texture : NSObject {
    GLuint texture;
}
@property (nonatomic, readwrite) GLuint texture;
@end
Then we can create an NSMutableArray of Texture objects:
NSMutableArray *textures;
In the *.m file we fill the array:
textures = [[NSMutableArray alloc] initWithCapacity:N];   // the array must be allocated before use
for (int i = 0; i < N; i++) {
    Texture *t = [[Texture alloc] init];
    t.texture = i;          // store the index as the expected texture name
    GLuint l = t.texture;
    [textures addObject:t];
    glGenTextures(1, &l);   // generate one texture name into a local copy
}
If you also use other arrays of textures, you have to shift the GLuint indexes, e.g.:
t.texture = i + M;
where M is the size of the previously used array of GLuints.
Then getTexture is rewritten as follows:
-(GLuint)getTexture:(int)index {
    return ((Texture *)textures[index]).texture;
}
I am not excited about this approach, but it's the only one I could make work.
If you set N to 10, the two approaches behave identically, so you should look for the cause of the failure elsewhere.
void glGenTextures(GLsizei n, GLuint *textures);
glGenTextures accepts an array of unsigned ints, and it looks like you are passing a pointer to the first element of the array, which I think is not what you want to do or what the function expects.
maybe just try
glGenTextures(N, textures);
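For reference, a sketch of the plain C-array approach with matching cleanup; the method names here are made up for illustration, and it assumes a GL context is current when these run:
// Ivar declared in the .h as before: GLuint *textures;

- (void)setupTextures:(int)N {
    textures = malloc(N * sizeof(GLuint));   // dynamically sized array of texture names
    glGenTextures(N, textures);              // generate N names directly into the array
    // ... bind each texture and load its image data here ...
}

- (GLuint)getTexture:(int)index {
    return textures[index];
}

- (void)deleteTextures:(int)N {
    glDeleteTextures(N, textures);           // release the GL texture names
    free(textures);                          // release the malloc'd array
    textures = NULL;
}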
I have a function that takes some bitmap data and returns a UIImage * from it. It looks something like this:
UIImage * makeAnImage()
{
unsigned char * pixels = malloc(...);
// ...
CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, pixels, pixelBufferSize, NULL);
CGImageRef imageRef = CGImageCreate(..., provider, ...);
UIImage * image = [[UIImage alloc] initWithCGImage:imageRef];
return [image autorelease];
}
Can anyone explain exactly who owns what memory here? I want to clean up properly, but I'm not sure how to do it safely. The docs are fuzzy on this. If I free pixels at the end of this function after creating the UIImage, and then use the UIImage, I crash. If I release the provider or the imageRef after creating the UIImage, I don't see a crash, but they're apparently passing the pixels all the way through, so I'm skittish about releasing these intermediate objects.
(I know per the CF docs that I need to call release on both of the latter because they come from Create functions, but can I do that before the UIImage is used?) Presumably I can use the provider's dealloc callback to clean up the pixels buffer, but what else?
Thanks!
The rule of thumb here is "-release* it if you don't need it".
Because you no longer need provider and imageRef afterwards, you should release both of them, i.e.
UIImage * image = [[UIImage alloc] initWithCGImage:imageRef];
CGDataProviderRelease(provider);
CGImageRelease(imageRef);
return [image autorelease];
pixels is not managed by ref-counting, so you need to tell the CG API to free the buffer for you when necessary. Do this:
void releasePixels(void *info, const void *data, size_t size) {
free((void*)data);
}
....
CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, pixels, pixelBufferSize, releasePixels);
By the way, you can use +imageWithCGImage: instead of [[[UIImage alloc] initWithCGImage:] autorelease]. Even better, there is +imageWithData:, so you don't need to mess with the CG and malloc stuff.
(*: Except when the retainCount is already supposedly zero from the beginning.)
unsigned char * pixels = malloc(...);
You own the pixels buffer because you mallocked it.
CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, pixels, pixelBufferSize, NULL);
Core Graphics follows the Core Foundation rules. You own the data provider because you Created it.
You didn't provide a release callback, so you still own the pixels buffer. If you had provided a release callback, the CGDataProvider object would take ownership of the buffer here. (Generally a good idea.)
CGImageRef imageRef = CGImageCreate(..., provider, ...);
You own the CGImage object because you Created it.
UIImage * image = [[UIImage alloc] initWithCGImage:imageRef];
You own the UIImage object because you allocked it.
You also still own the CGImage object. If the UIImage object wants to own the CGImage object, it will either retain it or make its own copy.
return [image autorelease];
You give up your ownership of the image.
So your code leaks the pixels (you didn't transfer ownership to the data provider and you didn't release them yourself), the data provider (you didn't release it), and the CGImage (you didn't release it). A fixed version would transfer ownership of the pixels to the data provider, and would release both the data provider and the CGImage by the time the UIImage is ready. Or, just use imageWithData:, as KennyTM suggested.
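To make that concrete, here is one possible shape of the fixed version, using the same release-callback idea as the earlier answer. The 8-bit RGBA layout and the width/height parameters are assumptions standing in for the parts the question elides:
// The data provider takes ownership of the pixel buffer via this callback.
static void releasePixels(void *info, const void *data, size_t size) {
    free((void *)data);
}

UIImage *makeAnImage(size_t width, size_t height) {
    size_t pixelBufferSize = width * height * 4;   // assumed 8-bit RGBA
    unsigned char *pixels = malloc(pixelBufferSize);
    // ... fill the buffer ...

    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGDataProviderRef provider =
        CGDataProviderCreateWithData(NULL, pixels, pixelBufferSize, releasePixels);
    CGImageRef imageRef = CGImageCreate(width, height, 8, 32, width * 4, colorSpace,
                                        (CGBitmapInfo)kCGImageAlphaPremultipliedLast,
                                        provider, NULL, NO, kCGRenderingIntentDefault);
    UIImage *image = [[UIImage alloc] initWithCGImage:imageRef];

    // The UIImage keeps what it needs, so give up the intermediate references now.
    CGImageRelease(imageRef);
    CGDataProviderRelease(provider);
    CGColorSpaceRelease(colorSpace);
    return [image autorelease];
}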
unsigned char * pixels = malloc(...);
I also had a problem with malloc/free after using CGImageCreate.
I finally found a good and simple solution.
I just replace the line:
CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, pixels, pixelBufferSize, NULL);
with:
NSData *data = [NSData dataWithBytes:pixels length:pixelBufferSize];
CGDataProviderRef provider = CGDataProviderCreateWithCFData((CFDataRef)data);
Just after that I could free the mallocked memory:
free (pixels);
Aye, this code makes me queasy. As an old rule to live by, I try not to mix and match C and C++, or C and Objective-C, in the same function/method/selector.
How about breaking this up into two methods? Change makeAnImage into makeAnImageRef and pull the UIImage creation up into a separate Objective-C selector.
UPDATED: Scroll down to see the question re-asked more clearly.
If I have the name of a particular UIImageView (IBOutlet) stored in a variable, how can I use it to change the image that is displayed? I tried this, but it does not work.
I'm still new to iphone programming, so any help would be appreciated.
NSString *TmpImage = @"0.png";
NSString *Tst = @"si1_1_2";
TmpImage = @"1.png";
UIImage *sampleimage = [[UIImage imageNamed:TmpImage] retain];
((UIImageView *) (Tst)).image = sampleimage; // This is the line in question
[sampleimage release];
RESTATED:
I have a bunch of images on the screen: UIImageView *s1, *s2, *s3, and so on up to *s10.
Now suppose I want to update the image each displays to the same image.
Rather than doing
s1.image = sampleimage;
s2.image = sampleimage;
:
s10.image = sampleimage;
How could I write a for loop that goes from 1 to 10 and uses the loop variable as part of the line that updates the image? Something like this:
for ( i = 1; i <= 10; ++i )
s(i).image = sample; // I know that does not work
The basic question is: how do I incorporate the variable into the statement that accesses the image? Don't get hung up on my example; the main question is how to use a variable as part of the access to some element/object.
Bottom line: if I can build the name of a UIImageView into an NSString object, how can I then use that NSString object to manipulate the UIImageView?
Thanks!
Ugh! Your line in question:
((UIImageView *) (Tst)).image = sampleimage;
is casting a string pointer to a UIImageView pointer - you're basically telling the compiler that your pointer to a string is actually a pointer to a UIImageView! It will compile (because the compiler happily accepts your assertion) but will of course crash at runtime.
You need to declare a variable of type UIImageView. This can then hold whichever view you want to set the image of. So your code could look like the following:
NSString *TmpImage = @"0.png";
UIImageView *myImageView;

if (someCondition == YES) {
    myImageView = si1_1_2; // Assuming this is the name of your UIImageView
} else {
    myImageView = si1_1_3; // etc.
}

UIImage *sampleImage = [UIImage imageNamed:TmpImage]; // no need to retain it
myImageView.image = sampleImage;
Hopefully this makes sense!
Edit: I should add, why are you trying to have multiple UIImageViews? Because a UIImageView's image can be changed at any time (and in fact can hold many), would it not be better to have merely one UIImageView and just change the image in it?
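For the restated loop question, one approach (a sketch, assuming s1 through s10 are the IBOutlet image views) is to collect the outlets into an array once and loop over that, instead of trying to build variable names from strings:
// Collect the outlets once, e.g. in viewDidLoad.
NSArray *imageViews = [NSArray arrayWithObjects:s1, s2, s3, s4, s5,
                                                s6, s7, s8, s9, s10, nil];

UIImage *sampleImage = [UIImage imageNamed:@"1.png"];
for (UIImageView *imageView in imageViews) {
    imageView.image = sampleImage;   // same image assigned to every view
}
The loop then indexes the array rather than the variable names, which is what Objective-C actually supports.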
I want to keep a mutable collection of CGImageRefs. Do I need to wrap them in NSValue, and if so how do I wrap and unwrap them properly? Can I get away with using a C array? If so how do I construct it and how do I add elements to it later? Is it significantly more costly to use UIImages instead of CGImageRefs as the elements of the collection?
You can add a CGImage directly to an NSMutableArray. You will need to cast it to (id) to avoid compiler warnings.
CFType is bridged to NSObject, so you can send any message NSObject responds to to any CF type. In particular, -retain and -release work as normal.
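A short sketch of that under manual reference counting; it assumes cgImage is a CGImageRef you own (with ARC you would need __bridge casts and to manage the CGImage lifetime yourself):
NSMutableArray *images = [NSMutableArray array];

[images addObject:(id)cgImage];   // the cast silences the compiler; the array retains it
CGImageRelease(cgImage);          // safe: the array now holds its own reference

// Cast back when you pull it out.
CGImageRef first = (CGImageRef)[images objectAtIndex:0];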
2011: just in case someone's still looking
You can wrap CGImageRef in NSValues by using
+ (NSValue *)valueWithBytes:(const void *)value objCType:(const char *)type
hence:
CGImageRef cgImage = [self cgImageMethod];
NSValue *cgImageValue = [NSValue valueWithBytes:&cgImage objCType:@encode(CGImageRef)];
[array addObject:cgImageValue];
to retrieve:
CGImageRef retrievedCGImageRef;
[[array objectAtIndex:0] getValue:&retrievedCGImageRef];
hope this helps somebody
Getting the CGImageRef out of a UIImage via image.CGImage can be costly. From the documentation:
If the image data has been purged because of memory constraints, invoking this method forces that data to be loaded back into memory. Reloading the image data may incur a performance penalty.
If you feel comfortable mixing C++ and Objective-C, you can use a std::vector to store the CGImageRefs. Rename your source file from .m to .mm and try this:
#include <vector>
...
CGImageRef i;
...
std::vector<CGImageRef> images;
images.push_back(i);
If you want to keep the vector as a member of an Objective-C class, you should allocate it on the heap, not the stack:
Header file:
#include <vector>

@interface YourInterface : ...
{
    std::vector<CGImageRef> *images;
}
and in the implementation file:
images = new std::vector<CGImageRef>();
images->push_back(i);
...
//When you're done
delete images;
images = NULL;