GPUImage overlay live video with corner detection - iphone

I'm trying to get a basic filter working with GPUImage, but I'm not sure how to properly set it up to display crosshairs over the live video feed when detecting corners. I tried adding the crosshairs to the blend filter along with the video, then adding that to the GPUImageView, but all I get is a white screen. Any ideas?
- (void)viewDidLoad
{
[super viewDidLoad];
// Do any additional setup after loading the view from its nib.
videoCamera = [[GPUImageVideoCamera alloc] initWithSessionPreset:AVCaptureSessionPreset640x480 cameraPosition:AVCaptureDevicePositionBack];
videoCamera.outputImageOrientation = UIInterfaceOrientationPortrait;
GPUImageView *filteredVideoView = (GPUImageView *)self.view;
GPUImageCrosshairGenerator *crosshairGenerator = [[GPUImageCrosshairGenerator alloc] init];
crosshairGenerator.crosshairWidth = 15.0;
[crosshairGenerator forceProcessingAtSize:CGSizeMake(480.0, 640.0)];
customFilter = [[GPUImageHarrisCornerDetectionFilter alloc] init];
[customFilter setCornersDetectedBlock:^(GLfloat* cornerArray, NSUInteger cornersDetected, CMTime frameTime){
[crosshairGenerator renderCrosshairsFromArray:cornerArray count:cornersDetected frameTime:frameTime];
NSLog(@"corners: %lu", (unsigned long)cornersDetected);
}];
GPUImageAlphaBlendFilter *blendFilter = [[GPUImageAlphaBlendFilter alloc] init];
[blendFilter forceProcessingAtSize:CGSizeMake(480.0, 640.0)];
[videoCamera addTarget:blendFilter];
[crosshairGenerator addTarget:blendFilter];
[blendFilter addTarget:filteredVideoView];
[videoCamera startCameraCapture];
}

This is how your code should look in Swift 2.0:
var liveCam:GPUImageVideoCamera!
var edgesDetector:GPUImageHarrisCornerDetectionFilter!
var crosshairGenerator:GPUImageCrosshairGenerator!
@IBOutlet weak var camView: GPUImageView!
override func viewDidLoad() {
super.viewDidLoad()
liveCam = GPUImageVideoCamera(sessionPreset: AVCaptureSessionPreset640x480, cameraPosition: .Back)
liveCam.outputImageOrientation = .Portrait
crosshairGenerator = GPUImageCrosshairGenerator()
crosshairGenerator.crosshairWidth = 15
crosshairGenerator.forceProcessingAtSize(CGSize(width: 480, height: 640))
edgesDetector = GPUImageHarrisCornerDetectionFilter()
edgesDetector.blurRadiusInPixels = 2 //Default value
edgesDetector.threshold = 0.2 //Default value
edgesDetector.cornersDetectedBlock = { (cornerArray: UnsafeMutablePointer<GLfloat>, cornersDetected: UInt, frameTime: CMTime) -> Void in
self.crosshairGenerator.renderCrosshairsFromArray(cornerArray, count: cornersDetected, frameTime: frameTime)
print("\(cornerArray) =-= \(cornersDetected) =-= \(frameTime)")
}
liveCam.addTarget(edgesDetector)
edgesDetector.addTarget(crosshairGenerator)
crosshairGenerator.addTarget(camView)
liveCam.startCameraCapture()
}
The result:

Related

SCNPlane with Video plays on Simulator but not on Device

I have a video layer I want to render onto an SCNPlane. It works on the Simulator, but not on the device.
Here's a visual:
Here's the code:
//DISPLAY PLANE
SCNPlane * displayPlane = [SCNPlane planeWithWidth:displayWidth height:displayHeight];
displayPlane.cornerRadius = cornerRadius;
SCNNode * displayNode = [SCNNode nodeWithGeometry:displayPlane];
[scene.rootNode addChildNode:displayNode];
//apply material
SCNMaterial * displayMaterial = [SCNMaterial material];
displayMaterial.diffuse.contents = [[UIColor greenColor] colorWithAlphaComponent:1.0f];
[displayNode.geometry setMaterials:@[displayMaterial]];
//move to front + position for rim
displayNode.position = SCNVector3Make(0, rimTop - 0.08, /*0.2*/ 1);
//create video item
NSBundle * bundle = [NSBundle mainBundle];
NSString * path = [bundle pathForResource:@"tv_preview" ofType:@"mp4"];
NSURL * url = [NSURL fileURLWithPath:path];
AVAsset * asset = [AVAsset assetWithURL:url];
AVPlayerItem * item = [AVPlayerItem playerItemWithAsset:asset];
queue = [AVQueuePlayer playerWithPlayerItem:item];
looper = [AVPlayerLooper playerLooperWithPlayer:queue templateItem:item];
queue.muted = true;
layer = [AVPlayerLayer playerLayerWithPlayer:queue];
layer.frame = CGRectMake(0, 0, w, h);
layer.videoGravity = AVLayerVideoGravityResizeAspectFill;
displayMaterial.diffuse.contents = layer;
displayMaterial.doubleSided = true;
[queue play];
//[self.view.layer addSublayer:layer];
I can confirm that the actual plane exists (it appears green, as in the first image above, if the AVPlayerLayer isn't applied to it). If the video layer is added directly to the parent view's layer (bottom line above), it runs fine, as in the final image above. I thought it might be a file system issue, but then I imagine the video wouldn't play in the final image.
EDIT: setting queue (AVPlayer) directly works on the Simulator, albeit ugly as hell, and crashes on the device with the following error log:
Error: Could not get pixel buffer (CVPixelBufferRef)
validateFunctionArguments:3797: failed assertion `Fragment Function(commonprofile_frag): incorrect type of texture (MTLTextureType2D) bound at texture binding at index 4 (expect MTLTextureTypeCube) for u_radianceTexture[0].'
I think my solution will suit you; the only thing is that small changes are required here:
#import <SceneKit/SceneKit.h>
#import <AVKit/AVKit.h>
#import <SpriteKit/SpriteKit.h>
@interface ViewController: UIViewController
@end
It works on the Simulator (in Xcode 13.2.1) and on the device (I ran it on an iPhone X with iOS 15.3). I used a 640x360 H.265 video. The only problem here is that the video looping stutters...
#import "ViewController.h"
@implementation ViewController
AVQueuePlayer *queue;
AVPlayerLooper *looper;
SKVideoNode *videoNode;
- (void)viewWillAppear:(BOOL)animated
{
[super viewWillAppear:animated];
SCNView *sceneView = (SCNView *)self.view;
sceneView.backgroundColor = [UIColor blackColor];
sceneView.autoenablesDefaultLighting = YES;
sceneView.allowsCameraControl = YES;
sceneView.playing = YES;
sceneView.loops = YES;
SCNScene *scene = [SCNScene scene];
sceneView.scene = scene;
// Display
SCNPlane *displayPlane = [SCNPlane planeWithWidth:1.6 height:0.9];
SCNNode *displayNode = [SCNNode nodeWithGeometry:displayPlane];
SCNMaterial *displayMaterial = [SCNMaterial material];
displayMaterial.lightingModelName = SCNLightingModelConstant;
displayMaterial.doubleSided = YES;
// Video
NSBundle *bundle = [NSBundle mainBundle];
NSString *path = [bundle pathForResource:@"art.scnassets/hevc"
ofType:@"mp4"];
NSURL *url = [NSURL fileURLWithPath:path];
AVAsset *asset = [AVAsset assetWithURL:url];
AVPlayerItem *item = [AVPlayerItem playerItemWithAsset:asset];
queue = [AVQueuePlayer playerWithPlayerItem:item];
queue.muted = NO;
looper = [AVPlayerLooper playerLooperWithPlayer:queue templateItem:item];
// SpriteKit video node
videoNode = [SKVideoNode videoNodeWithAVPlayer:queue];
videoNode.zRotation = -M_PI;
videoNode.xScale = -1;
SKScene *skScene = [SKScene sceneWithSize: CGSizeMake(
displayPlane.width * 1000,
displayPlane.height * 1000)];
videoNode.position = CGPointMake(skScene.size.width / 2,
skScene.size.height / 2);
videoNode.size = skScene.size;
[videoNode play];
[skScene addChild:videoNode];
displayMaterial.diffuse.contents = skScene;
[displayNode.geometry setMaterials:@[displayMaterial]];
[scene.rootNode addChildNode:displayNode];
}
@end
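The answer above sizes the SKScene from the plane's dimensions scaled by 1000 and centers the video node in it. A minimal sketch of that arithmetic as plain geometry (no SceneKit or SpriteKit; the scale factor and sizes are taken from the listing):

```swift
import Foundation

// Scene size = plane size (meters) x scale; video node sits at the scene center.
func sceneSize(planeWidth: Double, planeHeight: Double, scale: Double = 1000) -> (width: Double, height: Double) {
    return (planeWidth * scale, planeHeight * scale)
}

func centeredPosition(width: Double, height: Double) -> (x: Double, y: Double) {
    return (width / 2, height / 2)
}

// The 1.6 x 0.9 plane from the answer maps to a 1600 x 900 scene.
let scene = sceneSize(planeWidth: 1.6, planeHeight: 0.9)
let center = centeredPosition(width: scene.width, height: scene.height)
print(scene, center)
```

The x1000 factor is just a resolution choice for the backing SpriteKit scene; a larger factor gives the video material more pixels at the cost of memory.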

How to create area graph with ios charts?

I am trying to create an area graph with the min and max value for each point, but didn't find anything on this. I guess setting a background color in the area between the two line graphs should solve it.
I tried setting fillColor for the two data sets as:
.fillColor = UIColor.lightGrayColor()
.drawFilledEnabled = true
I have seen from the demo project that it can be done as:
set1 = [[LineChartDataSet alloc] initWithValues:yVals1 label:@"DataSet 1"];
set1.axisDependency = AxisDependencyLeft;
[set1 setColor:[UIColor colorWithRed:255/255.0 green:241/255.0 blue:46/255.0 alpha:1.0]];
set1.drawCirclesEnabled = NO;
set1.lineWidth = 2.0;
set1.circleRadius = 3.0;
set1.fillAlpha = 1.0;
set1.drawFilledEnabled = YES;
set1.fillColor = UIColor.whiteColor;
set1.highlightColor = [UIColor colorWithRed:244/255.0 green:117/255.0 blue:117/255.0 alpha:1.0];
set1.drawCircleHoleEnabled = NO;
set1.fillFormatter = [ChartDefaultFillFormatter withBlock:^CGFloat(id<ILineChartDataSet> _Nonnull dataSet, id<LineChartDataProvider> _Nonnull dataProvider) {
return _chartView.leftAxis.axisMinimum;
}];
set2 = [[LineChartDataSet alloc] initWithValues:yVals2 label:@"DataSet 2"];
set2.axisDependency = AxisDependencyLeft;
[set2 setColor:[UIColor colorWithRed:255/255.0 green:241/255.0 blue:46/255.0 alpha:1.0]];
set2.drawCirclesEnabled = NO;
set2.lineWidth = 2.0;
set2.circleRadius = 3.0;
set2.fillAlpha = 1.0;
set2.drawFilledEnabled = YES;
set2.fillColor = UIColor.whiteColor;
set2.highlightColor = [UIColor colorWithRed:244/255.0 green:117/255.0 blue:117/255.0 alpha:1.0];
set2.drawCircleHoleEnabled = NO;
set2.fillFormatter = [ChartDefaultFillFormatter withBlock:^CGFloat(id<ILineChartDataSet> _Nonnull dataSet, id<LineChartDataProvider> _Nonnull dataProvider) {
return _chartView.leftAxis.axisMaximum;
}];
But I cannot convert this syntax to Swift:
set2.fillFormatter = [ChartDefaultFillFormatter withBlock:^CGFloat(id<ILineChartDataSet> _Nonnull dataSet, id<LineChartDataProvider> _Nonnull dataProvider) {
return _chartView.leftAxis.axisMaximum;
}];
You don't have to set the fillFormatter unless you need to adjust the fill behaviour. As far as I can tell, the static withBlock: does not exist in the Swift interface. Instead you have to create your own ChartFillFormatter implementation and implement the public func getFillLinePosition(dataSet dataSet: ILineChartDataSet, dataProvider: LineChartDataProvider) -> CGFloat function.
You should not have to do anything other than setting drawFilledEnabled and fillColor on the LineChartDataSet, and then of course use the LineChartView.
So if you really need the fill formatter you will have to do something like this:
public class MyFillFormatter: NSObject, ChartFillFormatter
{
    private let _chartView: LineChartView

    public init(chartView: LineChartView)
    {
        // Stored properties must be assigned before calling super.init()
        _chartView = chartView
        super.init()
    }

    public func getFillLinePosition(dataSet dataSet: ILineChartDataSet, dataProvider: LineChartDataProvider) -> CGFloat
    {
        return _chartView.leftAxis.axisMaximum
    }
}
And then set it as:
set1.fillFormatter = MyFillFormatter(chartView: lineChartView)
There is now a Swift example for what you asked:
https://github.com/danielgindi/Charts/blob/master/ChartsDemo-iOS/Swift/Demos/LineChartFilledViewController.swift
Looks like this in the demo app:

Custom UIPickerView with three Components each showing label on Selection Indicator

In my app I am trying to make a custom UIPickerView which contains three components (days, hours and minutes). I have already made the custom picker with three components, and now I am stuck on how to add labels to the selection indicator showing which component is for days, hours or minutes.
I have already gone through every link and question posted on this site, but none of them helped.
I am trying to implement something like this image.
Can anyone suggest how I can achieve this?
That's how I achieved this: I made my custom picker view with the help of some code I found.
In .h file:
// LabeledPickerView.h
// LabeledPickerView
#import <UIKit/UIKit.h>
@interface LabeledPickerView : UIPickerView
{
NSMutableDictionary *labels;
}
/** Adds the label for the given component. */
-(void)addLabel:(NSString *)labeltext forComponent:(NSUInteger)component forLongestString:(NSString *)longestString;
@end
and In the .m file...
// LabeledPickerView.m
// LabeledPickerView
#import "LabeledPickerView.h"
@implementation LabeledPickerView
/** loading programmatically */
- (id)initWithFrame:(CGRect)aRect {
if (self = [super initWithFrame:aRect]) {
labels = [[NSMutableDictionary alloc] initWithCapacity:3];
}
return self;
}
/** loading from nib */
- (id)initWithCoder:(NSCoder *)coder {
if (self = [super initWithCoder:coder]) {
labels = [[NSMutableDictionary alloc] initWithCapacity:3];
}
return self;
}
- (void) dealloc
{
[labels release];
[super dealloc];
}
#pragma mark Labels
// Add labelText to our array but also add what will be the longest label we will use in updateLabel
// If you do not plan to update label then the longestString should be the same as the labelText
// This way we can initially size our label to the longest width and we get the same effect Apple uses
-(void)addLabel:(NSString *)labeltext forComponent:(NSUInteger)component forLongestString:(NSString *)longestString {
[labels setObject:labeltext forKey:[NSNumber numberWithInt:component]];
NSString *keyName = [NSString stringWithFormat:@"%@_%@", @"longestString", [NSNumber numberWithInt:component]];
if(!longestString) {
longestString = labeltext;
}
[labels setObject:longestString forKey:keyName];
}
//
- (void) updateLabel:(NSString *)labeltext forComponent:(NSUInteger)component {
UILabel *theLabel = (UILabel*)[self viewWithTag:component + 1];
// Update label if it doesn’t match current label
if (![theLabel.text isEqualToString:labeltext]) {
NSString *keyName = [NSString stringWithFormat:@"%@_%@", @"longestString", [NSNumber numberWithInt:component]];
NSString *longestString = [labels objectForKey:keyName];
// Update label array with our new string value
[self addLabel:labeltext forComponent:component forLongestString:longestString];
// change label during fade out/in
[UIView beginAnimations:nil context:NULL];
[UIView setAnimationDuration:0.75];
[UIView setAnimationCurve:UIViewAnimationCurveEaseInOut];
theLabel.alpha = 0.00;
theLabel.text = labeltext;
theLabel.alpha = 1.00;
[UIView commitAnimations];
}
}
/**
Adds the labels to the view, below the selection indicator glass-thingy.
The labels are aligned to the right side of the wheel.
The delegate is responsible for providing enough width for both the value and the label.
*/
- (void)didMoveToWindow {
// exit if view is removed from the window or there are no labels.
if (!self.window || [labels count] == 0)
return;
UIFont *labelfont = [UIFont boldSystemFontOfSize:15];
// find the width of all the wheels combined
CGFloat widthofwheels = 0;
for (int i=0; i<self.numberOfComponents; i++) {
widthofwheels += [self rowSizeForComponent:i].width;
}
// find the left side of the first wheel.
// seems like a misnomer, but that will soon be corrected.
CGFloat rightsideofwheel = (self.frame.size.width - widthofwheels) / 2;
// cycle through all wheels
for (int component=0; component<self.numberOfComponents; component++) {
// find the right side of the wheel
rightsideofwheel += [self rowSizeForComponent:component].width;
// get the text for the label.
// move on to the next if there is no label for this wheel.
NSString *text = [labels objectForKey:[NSNumber numberWithInt:component]];
if (text) {
// set up the frame for the label using our longestString length
NSString *keyName = [NSString stringWithFormat:@"%@_%@", @"longestString", [NSNumber numberWithInt:component]];
NSString *longestString = [labels objectForKey:keyName];
CGRect frame;
frame.size = [longestString sizeWithFont:labelfont];
// center it vertically
frame.origin.y = (self.frame.size.height / 2) - (frame.size.height / 2) - 0.5;
// align it to the right side of the wheel, with a margin.
// use a smaller margin for the rightmost wheel.
frame.origin.x = rightsideofwheel - frame.size.width -
(component == self.numberOfComponents - 1 ? 5 : 7);
// set up the label. If label already exists, just get a reference to it
BOOL addlabelView = NO;
UILabel *label = (UILabel*)[self viewWithTag:component + 1];
if(!label) {
label = [[[UILabel alloc] initWithFrame:frame] autorelease];
addlabelView = YES;
}
label.text = text;
label.font = labelfont;
label.backgroundColor = [UIColor clearColor];
label.shadowColor = [UIColor whiteColor];
label.shadowOffset = CGSizeMake(0,1);
// Tag cannot be 0 so just increment component number to ensure we get a positive tag
// NB update/remove Label methods are aware of this incrementation!
label.tag = component + 1;
if(addlabelView) {
/*
and now for the tricky bit: adding the label to the view.
kind of a hack to be honest, might stop working if Apple decides to
change the inner workings of the UIPickerView.
*/
if (self.showsSelectionIndicator) {
// if this is the last wheel, add label as the third view from the top
if (component==self.numberOfComponents-1)
[self insertSubview:label atIndex:[self.subviews count]-3];
// otherwise add label as the 5th, 10th, 15th etc view from the top
else
[self insertSubview:label aboveSubview:[self.subviews objectAtIndex:5*(component+1)]];
} else
// there is no selection indicator, so just add it to the top
[self addSubview:label];
}
if ([self.delegate respondsToSelector:@selector(pickerView:didSelectRow:inComponent:)])
[self.delegate pickerView:self didSelectRow:[self selectedRowInComponent:component] inComponent:component];
}
}
}
Call this addLabel: method with the label text and component tag, and that's it!
Download the source code of the custom UIPickerView control:
Custom UIPickerView.
Hope it helps :)
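The frame arithmetic inside didMoveToWindow above (wheels centered as a group, labels right-aligned against each wheel with a 7pt margin, 5pt for the last one) can be checked as plain geometry. The wheel and label widths below are made-up stand-ins for what rowSizeForComponent: and the label measurement would return:

```swift
import Foundation

let pickerWidth = 320.0
let wheelWidths = [100.0, 90.0, 90.0]   // hypothetical component widths
let labelWidths = [40.0, 45.0, 50.0]    // hypothetical measured label widths

// Left edge of the first wheel: the wheels are centered as a group.
var rightEdge = (pickerWidth - wheelWidths.reduce(0, +)) / 2
var labelOrigins: [Double] = []
for (component, width) in wheelWidths.enumerated() {
    rightEdge += width                                        // right side of this wheel
    let margin = (component == wheelWidths.count - 1) ? 5.0 : 7.0
    labelOrigins.append(rightEdge - labelWidths[component] - margin)
}
print(labelOrigins)   // [73.0, 158.0, 245.0]
```

With these numbers the wheels span 280pt, so the group starts at x = 20 and each label origin is its wheel's right edge minus the label width and margin.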

Photo Size With UIPrintInteractionController

I have a problem printing a photo using AirPrint. I printed a 4x6 inch image, but the printed image is too large. How can I resolve this?
Can I specify the paper size and photo size programmatically?
Here is a screenshot:
https://www.dropbox.com/s/1f6wa0waao56zqk/IMG_0532.jpg
Here is my code:
-(void)printPhotoWithImage:(UIImage *)image
{
NSData *myData = UIImageJPEGRepresentation(image, 1.f);
UIPrintInteractionController *pic = [UIPrintInteractionController sharedPrintController];
if (pic && [UIPrintInteractionController canPrintData:myData]) {
pic.delegate = self;
UIPrintInfo *pinfo = [UIPrintInfo printInfo];
pinfo.outputType = UIPrintInfoOutputPhoto;
pinfo.jobName = @"My Photo";
pinfo.duplex = UIPrintInfoDuplexLongEdge;
pic.printInfo = pinfo;
pic.showsPageRange = YES;
pic.printingItem = myData;
pic.printFormatter = format;
[format release];
void(^completionHandler)(UIPrintInteractionController *, BOOL, NSError *) = ^(UIPrintInteractionController *print, BOOL completed, NSError *error) {
[self resignFirstResponder];
if (!completed && error) {
NSLog(@"--- print error! ---");
}
};
[pic presentFromRect:CGRectMake((self.view.bounds.size.width - 64) + 27, (self.view.bounds.size.height - 16) + 55, 0, 0) inView:self.view animated:YES completionHandler:completionHandler];
}
}
- (UIPrintPaper *)printInteractionController:(UIPrintInteractionController *)printInteractionController choosePaper:(NSArray *)paperList
{
CGSize pageSize = CGSizeMake(6 * 72, 4 * 72);
return [UIPrintPaper bestPaperForPageSize:pageSize withPapersFromArray:paperList];
}
That is all of my code. Should I use a UIPrintPageRenderer property to define the draw area?
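UIPrintPaper works in points, at 72 points per inch, so the 6x4in page above is 432x288pt. A rough, hypothetical stand-in for what bestPaperForPageSize:withPapersFromArray: does (the real method uses its own matching heuristic; the paper list here is invented):

```swift
import Foundation

struct Paper { var width: Double; var height: Double }   // sizes in points

func points(fromInches inches: Double) -> Double { return inches * 72 }

// Pick the paper whose area is closest to the requested page size.
func bestPaper(for page: Paper, from papers: [Paper]) -> Paper? {
    return papers.min { a, b in
        abs(a.width * a.height - page.width * page.height) <
        abs(b.width * b.height - page.width * page.height)
    }
}

let page = Paper(width: points(fromInches: 6), height: points(fromInches: 4))
let papers = [Paper(width: 432, height: 288),   // 6x4in photo paper
              Paper(width: 612, height: 792)]   // US Letter
let chosen = bestPaper(for: page, from: papers)!
print(page.width, page.height, chosen.width, chosen.height)
```

The point of the sketch: if the connected printer actually offers 6x4in photo paper, a 432x288pt page request selects it; an oversized print usually means the page size was given in the wrong units or the drawn rect ignored the chosen paper.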
First you should set:
/*
PrintPhotoPageRenderer *pageRenderer = [[PrintPhotoPageRenderer alloc]init];
pageRenderer.imageToPrint =image;
pic.printPageRenderer = pageRenderer;
*/
- (void)printImage {
// Obtain the shared UIPrintInteractionController
UIPrintInteractionController *controller = [UIPrintInteractionController sharedPrintController];
controller.delegate = self;
if(!controller){
NSLog(@"Couldn't get shared UIPrintInteractionController!");
return;
}
// We need a completion handler block for printing.
UIPrintInteractionCompletionHandler completionHandler = ^(UIPrintInteractionController *printController, BOOL completed, NSError *error) {
if(!completed && error)
NSLog(@"FAILED! due to error in domain %@ with error code %ld", error.domain, (long)error.code);
};
// Obtain a printInfo so that we can set our printing defaults.
UIPrintInfo *printInfo = [UIPrintInfo printInfo];
UIImage *image = ((UIImageView *)self.view).image;
[controller setDelegate:self];
printInfo.outputType = UIPrintInfoOutputPhoto;
if(!controller.printingItem && image.size.width > image.size.height)
printInfo.orientation = UIPrintInfoOrientationLandscape;
// Use this printInfo for this print job.
controller.printInfo = printInfo;
// Since the code below relies on printingItem being zero if it hasn't
// already been set, this code sets it to nil.
controller.printingItem = nil;
#if DIRECT_SUBMISSION
// Use the URL of the image asset.
if(self.imageURL && [UIPrintInteractionController canPrintURL:self.imageURL])
controller.printingItem = self.imageURL;
#endif
// If we aren't doing direct submission of the image or for some reason we don't
// have an ALAsset or URL for our image, we'll draw it instead.
if(!controller.printingItem){
// Create an instance of our PrintPhotoPageRenderer class for use as the
// printPageRenderer for the print job.
PrintPhotoPageRenderer *pageRenderer = [[PrintPhotoPageRenderer alloc]init];
// The PrintPhotoPageRenderer subclass needs the image to draw. If we were taking
// this path we use the original image and not the fullScreenImage we obtained from
// the ALAssetRepresentation.
//pageRenderer.imageToPrint = ((UIImageView *)self.view).image;
pageRenderer.imageToPrint =image;
controller.printPageRenderer = pageRenderer;
}
// The method we use presenting the printing UI depends on the type of
// UI idiom that is currently executing. Once we invoke one of these methods
// to present the printing UI, our application's direct involvement in printing
// is complete. Our delegate methods (if any) and page renderer methods (if any)
// are invoked by UIKit.
if (UI_USER_INTERFACE_IDIOM() == UIUserInterfaceIdiomPad) {
//[controller presentFromBarButtonItem:self.printButton animated:YES completionHandler:completionHandler]; // iPad
[controller presentFromRect:CGRectMake(0, 0, 50, 50) inView:_btnPrint animated:YES completionHandler:completionHandler];
}else
[controller presentAnimated:YES completionHandler:completionHandler]; // iPhone
}
and then you should set PrintPhotoPageRenderer
PrintPhotoPageRenderer.h
#import <UIKit/UIKit.h>
@interface PrintPhotoPageRenderer : UIPrintPageRenderer {
    UIImage *imageToPrint;
}
@property (readwrite, retain) UIImage *imageToPrint;
@end
//
PrintPhotoPageRenderer.m
#import "PrintPhotoPageRenderer.h"
@implementation PrintPhotoPageRenderer
@synthesize imageToPrint;
// This code always draws one image at print time.
-(NSInteger)numberOfPages { return 1; }
/* When using this UIPrintPageRenderer subclass to draw a photo at print
time, the app explicitly draws all the content and need only override
drawPageAtIndex:inRect: to accomplish that.

The following scaling algorithm is implemented here:
1) On borderless paper, users expect to see their content scaled so that
there is no whitespace at the edge of the paper. So this code scales the
content to fill the paper at the expense of clipping any content that
lies off the paper.
2) On paper which is not borderless, this code scales the content so that
it fits on the paper. This reduces the size of the photo but does not
clip any content.
*/
- (void)drawPageAtIndex:(NSInteger)pageIndex inRect:(CGRect)printableRect {
    if (self.imageToPrint) {
        CGSize finalSize = CGSizeMake(560, 431);   // set the width and height yourself
        int x = 20;
        int y = (printableRect.size.height - finalSize.height);
        CGRect finalRect = CGRectMake(x, y, finalSize.width, finalSize.height);
        [self.imageToPrint drawInRect:finalRect];
    } else {
        NSLog(@"%s No image to draw!", __func__);
    }
}
@end

Having trouble loading data from AppDelegate using UITableView into a flip view, loads first view but not the flip view

AppDelegate:
@implementation Ripe_ProduceGuideAppDelegate
-(void)applicationDidFinishLaunching:(UIApplication *)application {
Greens *apricot = [[Greens alloc] init];
apricot.produceName = #"Apricot";
apricot.produceSight = #"Deep orange or yellow orange in appearance, may have red tinge, no marks or bruises. ";
apricot.produceTouch = #"Firm to touch and give to gentle pressure, plump.";
apricot.produceSmell = #"Should be Fragrant";
apricot.produceHtoP = #"raw, salads, baked, sauces, glazes, desserts, poached, stuffing.";
apricot.produceStore = #"Not ripe: place in brown paper bag, at room temperature and out of direct sunlight, close bag for 2 - 3 days. Last for a week. Warning: Only refrigerate ripe apricots.";
apricot.produceBest = #"Spring & Summer";
apricot.producePic = [UIImage imageNamed:#"apricot.jpg"];
Greens *artichoke = [[Greens alloc] init];
artichoke.produceName = @"Artichoke";
artichoke.produceSight = @"Slightly glossy dark green color and sheen, tight petals that are not too open, no marks, no brown petals or dried out look. Stem should not be dark brown or black.";
artichoke.produceTouch = @"No soft spots";
artichoke.produceSmell = @"Should not smell";
artichoke.produceHtoP = @"steam, boil, grill, saute, soups";
artichoke.produceStore = @"Stand up in vase of cold water, keeps for 2 - 3 days. Or, place in refrigerator loose without plastic bag. May be frozen, if cooked but not raw.";
artichoke.produceBest = @"Spring";
artichoke.producePic = [UIImage imageNamed:@"artichoke.jpg"];
self.produce = [[NSMutableArray alloc] initWithObjects:apricot, artichoke, nil];
[apricot release];
[artichoke release];
FirstView:
@implementation ProduceView
-(id)initWithIndexPath: (NSIndexPath *)indexPath {
if (self = [super init]) {
index = indexPath;
}
return self;
}
-(void)viewDidLoad {
Ripe_ProduceGuideAppDelegate *delegate = (Ripe_ProduceGuideAppDelegate *)
[[UIApplication sharedApplication] delegate];
Greens *thisProduce = [delegate.produce objectAtIndex:index.row];
self.title = thisProduce.produceName;
sightView.text = thisProduce.produceSight;
touchView.text = thisProduce.produceTouch;
smellView.text = thisProduce.produceSmell;
picView.image = thisProduce.producePic;
}
FlipView:
@implementation FlipsideViewController
@synthesize flipDelegate;
-(id)initWithIndexPath: (NSIndexPath *)indexPath {
if (self = [super init]) {
index = indexPath;
}
return self;
}
-(void)viewDidLoad {
Ripe_ProduceGuideAppDelegate *delegate = (Ripe_ProduceGuideAppDelegate *)
[[UIApplication sharedApplication] delegate];
Greens *thisProduce = [delegate.produce objectAtIndex:index.row];
self.title = thisProduce.produceName;
bestView.text = thisProduce.produceBest;
htopView.text = thisProduce.produceHtoP;
storeView.text = thisProduce.produceStore;
picView.image = thisProduce.producePic;
}
The app works, but the flip view for Artichoke shows the information for Apricot.
I've been working on this for two days. I have been working with iPhone apps for two months now and would very much appreciate any assistance with this problem.
Thank you very much.
It looks like I need to use an NSDictionary and not an NSArray to do this, so I will try again.