To understand the basic concepts, I am developing a simple Mac OS X application that calculates fractals. It is a simple window app with a class that calculates the fractal, and a single window containing:
a custom view for showing the image;
some controls for selecting calculation parameters. These controls are
connected to the app delegate.
Everything works fine, but:
I would like the text fields to report the coordinates in real time
whenever the mouse is over the view with the image. What do I have to
do to achieve that?
I suspect that the connection I have made through the app delegate
is not the best solution.
Is it better to define a custom view controller? If so, how do I introduce a custom view controller using Interface Builder?
You can track mouse-moved events; see Apple's guide to handling mouse events:
https://developer.apple.com/library/mac/documentation/Cocoa/Conceptual/EventOverview/HandlingMouseEvents/HandlingMouseEvents.html#//apple_ref/doc/uid/10000060i-CH6-SW1
Then you can do this:
NSPoint location = [renderView convertPoint:[theEvent locationInWindow] fromView:nil];
That gives you X and Y relative to the view containing the image.
Be careful to read the Apple documentation closely, or you might miss things like:
Note: Because mouse-moved events occur so frequently that they can
quickly flood the event-dispatch machinery, an NSWindow object by
default does not receive them from the global NSApplication object.
However, you can specifically request these events by sending the
NSWindow object a setAcceptsMouseMovedEvents: message with an
argument of YES.
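Putting the pieces together, here is a minimal sketch of a custom NSView subclass that does this. The class name FractalView is an assumption; hook the NSLog line up to your text fields however you like (outlet, delegate, etc.):

```objc
// FractalView -- reports the mouse position while the cursor is over it.
#import <Cocoa/Cocoa.h>

@interface FractalView : NSView
@end

@implementation FractalView

- (BOOL)acceptsFirstResponder {
    // Mouse-moved events go to the first responder.
    return YES;
}

- (void)viewDidMoveToWindow {
    // Windows drop mouse-moved events by default; opt in explicitly.
    [[self window] setAcceptsMouseMovedEvents:YES];
    [[self window] makeFirstResponder:self];
}

- (void)mouseMoved:(NSEvent *)theEvent {
    // Convert from window coordinates into this view's coordinate system.
    NSPoint location = [self convertPoint:[theEvent locationInWindow]
                                 fromView:nil];
    // Push location.x / location.y into your text fields here.
    NSLog(@"x = %f, y = %f", location.x, location.y);
}

@end
```

On later SDKs you could instead attach an NSTrackingArea with the NSTrackingMouseMoved option, which avoids making the window forward all mouse-moved events.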
Related
Hey, I was wondering if there is an event or something that can check whether my user is holding down a button? (For iPhone/iOS.)
The event class is TouchEvent. You'll need to toggle the touch state in a variable if you want to remember that someone is still pressing down after the event has fired.
You can use MouseEvent if you only need a single touch point, though you still need the variable.
You need to set Multitouch.inputMode to MultitouchInputMode.TOUCH_POINT in order to turn on touch events on the iPhone. Unfortunately, you can only test touch/gesture events on the iPhone itself. (MouseEvent and TouchEvent are virtually identical in use: MOUSE_DOWN = TOUCH_BEGIN, MOUSE_MOVE = TOUCH_MOVE, TOUCH_END = MOUSE_UP. The main difference is that you can only have one "mouse" yet multiple touches.)
Personally, I use MouseEvent to test on the machine, with an if-then setting up TouchEvent listeners if Multitouch.supportsTouchEvents is true (which it only is on the iPhone/Android).
If the button is a Cocoa Touch UIButton, you can easily connect a number of its events in Interface Builder to IBAction methods in Xcode (where you write the appropriate handlers).
How your project would connect to Flash, I am not sure. Give us more details and perhaps someone can help better. ;)
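On the Cocoa Touch side, a minimal sketch of tracking a held UIButton with control events (the holdButton outlet, buttonIsHeld property, and method names are all assumptions for illustration):

```objc
// Toggle a state variable while a UIButton is held down.
- (void)viewDidLoad {
    [super viewDidLoad];
    [self.holdButton addTarget:self
                        action:@selector(buttonDown:)
              forControlEvents:UIControlEventTouchDown];
    [self.holdButton addTarget:self
                        action:@selector(buttonUp:)
              forControlEvents:UIControlEventTouchUpInside |
                               UIControlEventTouchUpOutside |
                               UIControlEventTouchCancel];
}

- (void)buttonDown:(UIButton *)sender {
    self.buttonIsHeld = YES;   // the finger is down
}

- (void)buttonUp:(UIButton *)sender {
    self.buttonIsHeld = NO;    // the touch ended or was cancelled
}
```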
How hard is it to create a UI whose views are based on the data a user has?
Say I have a scroll view, and a particular user A can have a view consisting of X, Y, Z while user B has a view consisting of Y, Z? I am mainly concerned about positioning the views within a view, because we can't do that via Interface Builder here and it needs to be coded.
You can build an entire app with multiple views and controls without touching Interface Builder. The UI views and elements can all be allocated and configured programmatically.
Apple even has a WWDC 2010 video on how to build data driven app UIs.
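As a sketch of the programmatic approach (every name here is illustrative, not from the question): build one subview per data item and position it yourself.

```objc
// Build a per-user view hierarchy entirely in code -- no Interface Builder.
// userViews is assumed to be an array of strings naming the pieces
// this particular user should see, e.g. @[@"X", @"Y", @"Z"].
- (void)buildUIForUser:(NSArray *)userViews {
    CGFloat y = 20.0f;
    for (NSString *name in userViews) {
        UILabel *label = [[UILabel alloc] initWithFrame:
                             CGRectMake(20.0f, y, 280.0f, 30.0f)];
        label.text = name;
        [self.scrollView addSubview:label];
        y += 40.0f;
    }
    // Make the scroll view's content area match what was added.
    self.scrollView.contentSize = CGSizeMake(320.0f, y);
}
```

User A's array would contain X, Y and Z; user B's only Y and Z. The layout falls out of the loop.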
In case anyone else is looking for data-driven UI information: the WWDC 2010 material is at about 40-41 minutes into the game development video, part 1 (video 401).
Other than that, there is not a lot of information on data-driven UI for Objective-C. I am about to add it to an existing project because we need multiple layouts for the same screen, and I found it hard to just switch the view based on the layout (it wrecked the bindings, from memory).
It shouldn't be too hard, though: a simple framework to load the plist and use KVC to set all the values.
If you are new to KVC, I would recommend adding the following method to stop errors when you set keys that don't exist, especially if non-programmers are going to do some of the layout.
- (void)setValue:(id)value forUndefinedKey:(NSString *)key {
    NSLog(@"The key %@ does not exist.", key);
}
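A sketch of the plist-plus-KVC idea itself. The file name "Layout" and its keys are hypothetical; the point is that NSObject's setValuesForKeysWithDictionary: pushes every dictionary entry onto the view by key path:

```objc
// Load layout values from a bundled plist and apply them with KVC.
NSString *path = [[NSBundle mainBundle] pathForResource:@"Layout"
                                                 ofType:@"plist"];
NSDictionary *layout = [NSDictionary dictionaryWithContentsOfFile:path];

// Each plist key is treated as a key on the view,
// e.g. "alpha" -> 0.5, "hidden" -> NO.
[someView setValuesForKeysWithDictionary:layout];
```

With the setValue:forUndefinedKey: override above in place, a mistyped key in the plist logs a message instead of raising an exception.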
You can either do a callback-based approach like UITableView, i.e. -cellForRowAtIndexPath:, or a more explicit setter like UIMenu where you set the items.
For the latter approach, have a -setObjectsArray: or similar method in which you configure your subviews according to the input data; i.e., make sure you have three of them if you have (x, y, z), and set their data so that the 0th view = x, the 1st view = y, and so on.
Next, override -layoutSubviews and set the frame of each view based on its position in the order and your bounds.
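A sketch of that setter-plus-layoutSubviews pairing (all names illustrative; _objectsArray is assumed to back an objectsArray property):

```objc
// A data-driven container: give it objects, it makes one subview per
// object and positions them in -layoutSubviews.
- (void)setObjectsArray:(NSArray *)objects {
    _objectsArray = [objects copy];
    // Rebuild the subviews to match the data, one label per object.
    [self.subviews makeObjectsPerformSelector:@selector(removeFromSuperview)];
    for (id object in _objectsArray) {
        UILabel *item = [[UILabel alloc] initWithFrame:CGRectZero];
        item.text = [object description];
        [self addSubview:item];
    }
    [self setNeedsLayout];
}

- (void)layoutSubviews {
    [super layoutSubviews];
    // Position each subview by its index in the data order.
    CGFloat rowHeight = 44.0f;
    [self.subviews enumerateObjectsUsingBlock:
        ^(UIView *subview, NSUInteger idx, BOOL *stop) {
            subview.frame = CGRectMake(0.0f, idx * rowHeight,
                                       self.bounds.size.width, rowHeight);
        }];
}
```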
Does that help?
FYI: it's session 117, "Building a Server-Driven User Experience".
You can find it through iTunesU:
Search for WWDC2010
Go to "Application Frameworks"
Get "Session 117 - Building a Server-Driven User Experience"
The link to the PDF slides is on the page: WWDC 2010 Session Videos
Ray
UIViews that don't handle their events pass them up the responder chain. By default, this passes them to their parent view and, if still unhandled, ultimately to their parent UIViewController.
UIScrollView breaks this (there are lots of questions on SO, variations on the theme of "why does my app stop working once I add a UIScrollView?").
UIScrollView decides whether the event is for itself, and if not, it passes it DOWN (into its subviews); if they don't handle the event, UIScrollView just throws it away. That's the bug.
In that case, it's supposed to throw the event back up to its own parent view, and ultimately the parent UIViewController. AFAICT, this is why so many people get confused: it's not working as views are documented to work (NB: UIScrollView is simply undocumented on this matter; it doesn't declare what it aims to do in this situation).
So... is there an easy fix for this bug? Is there a category I could write that would fix UIScrollView in general and save me from creating "fake" UIView subclasses that exist purely to capture events and hand them where they're supposed to go (which makes for bug-prone code)?
In particular, from Apple's docs:
If the timer fires without a significant change in position, the scroll view sends tracking events to the touched subview of the content view. If the user then drags their finger far enough before the timer elapses, the scroll view cancels any tracking in the subview and performs the scrolling itself.
...if I could override that "if the timer fires" method and implement it correctly, I believe I could fix all my UIScrollView instances.
But:
- would Apple consider this "using a private API"? (Their definition of "private" is nonsensical in normal programming terms, and I can't work out what they do and don't mean by it.)
- does anyone know what this method is, or a good way to find it? (Debugging the compiled Objective-C classes to find the symbol names, perhaps?)
I've found a partial answer that's correct, but not 100% usable :(.
iPhone OS 4.0 lets you attach listeners to a given view from the outside, via the UIGestureRecognizer class. That's great, and works neatly.
The only problem is... it won't work on any 3.x iPhones or iPod touches.
(But if you're targeting 4.0 and above, it's an easy way forward.)
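For the 4.0-and-later path, attaching a recognizer from outside the view is only a few lines. The someView outlet and viewTapped: selector are assumptions for the sketch:

```objc
// iOS 4.0+: observe touches on someView without subclassing it.
- (void)viewDidLoad {
    [super viewDidLoad];
    UITapGestureRecognizer *tap =
        [[UITapGestureRecognizer alloc] initWithTarget:self
                                                action:@selector(viewTapped:)];
    [self.someView addGestureRecognizer:tap];
}

- (void)viewTapped:(UITapGestureRecognizer *)recognizer {
    // The recognizer knows which view it is attached to.
    CGPoint location = [recognizer locationInView:recognizer.view];
    NSLog(@"Tapped at x = %f, y = %f", location.x, location.y);
}
```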
EDIT:
On OS 3.x, I created a custom UIView subclass that has two extra properties:
    NSObject *objectToDelegateToOnTouch;
    id touchSourceIdentifier;
Whenever a touch comes in, the view sends the touch message directly to objectToDelegateToOnTouch, but with the extra touchSourceIdentifier parameter.
This way, whenever you get a touch, you know where it came from (you can use an object, a string, or anything you want as the "identifier").
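A sketch of that 3.x workaround. The class name, the retained-vs-assigned ownership, and the delegate method touchesBegan:withEvent:fromSource: are all assumptions; the delegate is expected to implement that hypothetical method:

```objc
// A pass-through view for OS 3.x: forwards every touch to a delegate
// along with an identifier saying which view it came from.
@interface ForwardingView : UIView
@property (nonatomic, assign) id objectToDelegateToOnTouch; // not retained
@property (nonatomic, retain) id touchSourceIdentifier;
@end

@implementation ForwardingView

- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
    // Hand the touch straight to the delegate, tagged with our identifier.
    [self.objectToDelegateToOnTouch touchesBegan:touches
                                       withEvent:event
                                      fromSource:self.touchSourceIdentifier];
}

@end
```

The same forwarding pattern extends to touchesMoved:, touchesEnded: and touchesCancelled: as needed.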
Gentleones,
I've got a UIImageView inside a UIView. This particular UIImageView displays a PNG of a graph. It's important for this view controller and another view controller to know what the graph "looks like," i.e., the function the PNG depicts. These other view controllers need to be able to call a "function" of some sort that takes the X value and returns the Y value.
Please ignore the fact (from an outsider's perspective) that I could probably generate the graph programmatically.
So my questions are:
1 - Where should I put this function? Should it go in one view controller or the other, or should it go in its own class or main or...?
2 - Should it be an object in some way? Or should it be a straight function? (In either case, where's it belong?)
Pardon the apparent n00b-iness of the question. It's because I honestly am a n00b! This OOP stuff is making my head spin after more than 30 years as a procedural programmer. (And I chose to learn it by jumping into an iPhone app. Talk about baptism by fire!)
Thanks in advance, Bill
The graph data and the code that processes it are part of your model, not your view or your controller. Create a separate class that encapsulates the graph and its associated methods, and pass an instance between the controllers that need to manipulate it.
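A sketch of such a model class. The class name, method name, and the stand-in formula are all assumptions; the body would be whatever actually describes the curve in the PNG (a formula, a lookup table, interpolated sample points):

```objc
// GraphModel -- the model object that knows the function behind the PNG.
@interface GraphModel : NSObject
- (CGFloat)yValueForX:(CGFloat)x;
@end

@implementation GraphModel

- (CGFloat)yValueForX:(CGFloat)x {
    // Stand-in for whatever function the PNG actually plots.
    return x * x;
}

@end
```

Both view controllers then hold a reference to the same GraphModel instance and call -yValueForX: instead of reaching into each other.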
As I understand it, a window has many views. A view is an object that can draw something on the screen, and the window provides the space for drawing. So what would be the point of not having a window? What's the difference between them?
On the iPhone, a window really is just a special kind of view. If you look at the docs for the UIWindow class, you'll see that it has additional methods above and beyond what a regular UIView has. However, most of those methods have analogous UIView counterparts.
The one thing I've found windows useful for is that every UIView has a window property that gives instant access to its window. If you have many nested views and need to get to the top level immediately from a view three or four levels deep, that window property can come in handy.
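For example (the view names are assumptions), a deeply nested view can jump straight to the top level via that property:

```objc
// From anywhere in the hierarchy, reach the window directly.
// Note: window is nil until the view is actually installed on screen.
UIWindow *window = deeplyNestedView.window;
[window addSubview:overlayView];   // e.g. present a full-screen overlay
```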
In a view-based application you build the foreground layout and appearance of the application, including text fields, buttons, labels and so on, depending on the requirements of the project and how polished the view needs to be to make the application shine.
In a window-based application you start with just the window as the background for your views; you can still build a view on top of it using Interface Builder connections. The window-based template is the bare starting point for applications whose structure doesn't fit the ready-made single-view template.