Determine external screen connected to the MacBook computer using NSScreen - swift

I need to show a window on the external screen (e.g. a monitor connected to the MacBook), but I don't know how to distinguish between the internal MacBook screen and the external one. NSScreen.screens() returns a list of all screens, and in my case the screen at index 0 is the connected external screen and the screen at index 1 is the internal (built-in) MacBook screen. But the documentation says:
The screen at index 0 in the returned array corresponds to the primary screen of the user's system.
So why is my connected screen marked as primary? Is the external screen marked as primary on all systems, i.e. can I assume that on any system with a connected external screen that screen is at position 0?
Also, the OS X Dock is visible only on my internal screen, and I thought the Dock was by default visible on the primary screen, but that is not true.
Is there a way to reliably determine the correct external screen?

July 2022 Update: Updated the below code to remove the guard statement since NSScreen.screens no longer returns an optional.
To expand on werediver's answer, here's one implementation:
extension NSScreen {
    class func externalScreens() -> [NSScreen] {
        let key = NSDeviceDescriptionKey(rawValue: "NSScreenNumber")
        return screens.filter {
            guard let deviceID = $0.deviceDescription[key] as? NSNumber else { return false }
            return CGDisplayIsBuiltin(deviceID.uint32Value) == 0
        }
    }
}
Usage is simple:
let externalScreens = NSScreen.externalScreens()
You might want to adjust the behavior in the guard statement's else block depending on your needs.

There is a note in the beginning of NSScreen Class Reference page:
NOTE
The NSScreen class is for getting information about the available displays only. If you need additional information or want to change the attributes relating to a display, you must use Quartz Services. For more information, see Quartz Display Services Reference.
From Quartz Display Services Reference we can learn that the main display is not necessarily the built-in one. From the CGMainDisplayID() description:
The main display is the display with its screen location at (0,0) in the global display coordinate space. In a system without display mirroring, the display with the menu bar is typically the main display.
If mirroring is enabled and the menu bar appears on more than one display, this function provides a reliable way to find the main display.
In case of hardware mirroring, the drawable display becomes the main display. In case of software mirroring, the display with the highest resolution and deepest pixel depth typically becomes the main display.
So, if you can use Quartz Display Services directly, use CGDisplayIsBuiltin() function to determine whether the display is built-in or not.
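As a sketch of how the filtering step fits together: the logic can be factored into a pure function that takes the builtin check as a parameter, so it is testable without real hardware. The function name here is mine, not from any API; on macOS you would pass `{ CGDisplayIsBuiltin($0) != 0 }` as the predicate.

```swift
// Filters display IDs down to external displays, given a predicate
// that reports whether a given display is built in. On macOS, pass
// `{ CGDisplayIsBuiltin($0) != 0 }`; the predicate is injected here
// so the filtering logic can be exercised without a connected display.
func externalDisplayIDs(of ids: [UInt32],
                        isBuiltin: (UInt32) -> Bool) -> [UInt32] {
    return ids.filter { !isBuiltin($0) }
}
```

This is the same shape as the NSScreen extension above, just with the Quartz call pulled out of the closure.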

Related

Restart MKUserTrackingMode.FollowWithHeading

Code:
map.UserTrackingMode = MKUserTrackingMode.FollowWithHeading;
But after the user interacts with the map or after mKMapView.ShowAnnotations, the map automatically stops following the device's heading. (Not because of my code; that's just how MKMapView works. This is also the case in the built-in Maps app.)
How can I make the map start following the device's heading again from code?
This is exactly like how the Maps app works and is what the user will expect. The behavior you're describing is completely normal; you shouldn't interfere with it.
The usual thing is that you put an MKUserTrackingButton in the interface, associated with the map view, and the user can just tap it to switch modes automatically. Except for initially configuring the button, no code is needed.
https://developer.apple.com/documentation/mapkit/mkusertrackingbutton
or
https://developer.apple.com/documentation/mapkit/mkusertrackingbarbuttonitem
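For reference, the mode cycle the button steps through (none → follow → followWithHeading → back to none) can be sketched as a pure function. The TrackingMode enum below is a stand-in for MKUserTrackingMode so the logic runs off-device; it is not MapKit's actual type.

```swift
// Stand-in for MKUserTrackingMode; the cases mirror MapKit's.
enum TrackingMode { case none, follow, followWithHeading }

// Returns the mode the tracking button would switch to next,
// following the cycle the built-in Maps app uses.
func nextMode(after mode: TrackingMode) -> TrackingMode {
    switch mode {
    case .none: return .follow
    case .follow: return .followWithHeading
    case .followWithHeading: return .none
    }
}
```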

need consulting for a gui with 5 settings to be chosen by the user

I would like to let the user set 5 different settings. Each setting has a finite number of values to choose from (in my case: smaller, small, normal, large, very large).
I tried to use UIPickerViews for this, but they need a lot of space and I would like to have everything on one page. I realized that Apple doesn't support simple dropdowns in iOS!?
The following sample shows only one setting and it fills up a third of the screen.
In Android I managed to do this with simple dropdowns.
Any hints on how I could do this without programming my own dropdown box?
iOS does not have dropdowns as you say. I have created a custom control for my company that implements a dropdown. Some of my fellow iOS developers have yelled at me and said the dropdowns don't follow Apple's HIG. (You can take a look at our free app FaceDancer to see our dropdown control if you're interested.)
Apple uses picker views instead. What you can do is to have some sort of clickable element for each item (buttons for each one, e.g. "pick size", or make each field itself clickable) that displays a picker on top of the screen to let the user pick that value.
Note that there are large numbers of (both free and paid) third party frameworks offering all kinds of additional controls. I bet somebody has implemented a drop-in dropdown menu for iOS. Take a look at CocoaControls and search the iOS section for "dropdown". You can also look on Github and SourceForge.net.
I solved my problem by using a UISegmentedControl.
In my case, with only 5 values, it's the best choice. But for more than 10 values you would obviously have to use external controls, as Duncan mentioned.
// sverysmall, ssmall, snormal, slarge, sverylarge are String titles defined elsewhere
let arraySizes5 = [sverysmall, ssmall, snormal, slarge, sverylarge]
let segmentcontrol_TextSizeTextViewer = UISegmentedControl(items: arraySizes5)
self.view.addSubview(segmentcontrol_TextSizeTextViewer)
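When the user taps a segment, you read selectedSegmentIndex in the target-action handler and map it back to a setting value. A small sketch of that mapping (the TextSize enum and function name are illustrative, not from the original answer):

```swift
// Illustrative model of the five settings; raw values match the
// segment order the control was created with.
enum TextSize: Int {
    case verySmall, small, normal, large, veryLarge
}

// Maps UISegmentedControl.selectedSegmentIndex to a TextSize,
// falling back to .normal for out-of-range values such as
// UISegmentedControl.noSegment (-1).
func textSize(forSegment index: Int) -> TextSize {
    return TextSize(rawValue: index) ?? .normal
}
```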

Determining the location of a GTK widget on-screen to create a relevant popup window

I have a button labeled "import" (down arrow) and I am trying to determine its location so I can create a popup menu that says 1. from device 2. from folder.
I can't seem to find anything more than allocation in the API docs, which is relative to the application window. Please help, I will send imaginary cookies ^_^.
I'm using PyGI, if it matters.
I figured out the answer before but couldn't self-answer yet.
The process is as follows:
1. Determine the location of the GdkWindow, which does not include the window decorations: gdk_window_x, gdk_window_y
2. Determine the location of the widget relative to its GdkWindow: widget_x, widget_y
3. x = gdk_window_x + widget_x
   y = gdk_window_y + widget_y
As everything in the graphics world measures from the top left (unless you're weird :-)), you now have the location of the top-left pixel of your widget.
You can get the position of the window relative to the screen using window.get_position(). Since you already know how to get the position of the button relative to the window, it should be a matter of adding up both to get the coordinates (relative to the screen) in which to place the popup.
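The arithmetic in the steps above is just adding two offsets. As a minimal sketch (the names are mine; in PyGI the inputs would come from window.get_position() and the widget's allocation):

```swift
struct Point { var x: Int; var y: Int }

// Screen position of a widget: the window's screen position plus
// the widget's position relative to that window.
func screenPosition(windowOrigin: Point, widgetOffset: Point) -> Point {
    return Point(x: windowOrigin.x + widgetOffset.x,
                 y: windowOrigin.y + widgetOffset.y)
}
```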

iPhone - Automation testing?

I am currently detecting elements by their accessibility labels when writing automation tests.
This causes a lot of problems.
Is this the correct way of detecting elements? If not, is there a better way to detect elements without using the accessibility label?
UI Automation uses the accessibility label (if it’s set) to derive a name property for each element. Aside from the obvious benefits, using such names can greatly simplify development and maintenance of your test scripts.
The name property is one of four properties of these elements that can be very useful in your test scripts.
name: derived from the accessibility label
value: the current value of the control, for example, the text in a text field
elements: any child elements contained within the current element, for example, the cells in a table view
parent: the element that contains the current element
Instruments User Guide
I don't understand what you mean by "This causes a lot of problems." Accessing elements by their accessibility properties in the Automation instrument is quite easy.
var button = UIATarget.localTarget().frontMostApp().mainWindow().buttons()["Cancel"];
Of course you can access elements also by their order on the screen. For example:
var button = UIATarget.localTarget().frontMostApp().mainWindow().buttons()[3];
will refer to the 4th button (they are numbered from 0) on your screen. But if you decide to rearrange elements on your screen in the next version of the app, this method can break your tests, so accessing them by accessibility label is safer.
In addition, accessibility labels make your app more accessible to people with disabilities who rely on VoiceOver to use the app's interface, so using accessibility properties in your interface tests also pushes you to build better accessibility into your app.

Windows Phone 7 - Bing Maps MVVM Pushpin initialization

Does anyone have a good guideline on how to initialize a pushpin on the Bing Maps control?
My current scenario works, but is not 100% correct... let me explain the details:
When I open the page with the Bing Maps control on it, I want the user to be able to push a small button that will show his current location.
To show the current location, I'm using a Pushpin. I'm already adding the Pushpin on the control in the XAML file like this:
<map:Pushpin x:Name="currentLocation" Location="{Binding CurrentLocation}" Content="Me" ManipulationStarted="CurrentLocationPin_ManipulationStarted" />
Now with this scenario there are some problems!
First, the pushpin is always visible! So how do I go about this? (I know I can bind the Visibility to a property and use a bool-to-visibility converter, but is this the best way to do it?)
Secondly, right now I don't initialize the Location in the view model... but for semantic reasons I would love to initialize the default value to GeoCoordinate.Unknown (that way I can use it for checks when the user tries to do some manipulation before a current location is set). But when I initialize the pushpin on startup I get the following error: "UIElement.Arrange(finalRect) cannot be called with Infinite or NaN values in finalRect.". So my question again :) what is a good guideline for setting up a current-location pushpin? (Do mind that the user has to push a small application bar button before the current location is set.)
The initialization problem is due to the visibility of the Pushpin. If the initial visibility of the Pushpin is Collapsed, then it won't take part in the arrange pass, so you won't get the error.
If you're using a view model to back this view, then I don't see a problem with exposing a property from the view model that determines whether the Pushpin should be visible or not. Yes, you could use a boolean-to-visibility converter, but you could save some processing by exposing a Visibility property directly (which is the approach I've used).
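The view-model shape being described would really be C#/XAML; this Swift sketch just models the idea of exposing the already-mapped Visibility value instead of a Bool plus a converter (all names here are illustrative):

```swift
// Illustrative stand-in for the XAML Visibility type.
enum Visibility { case visible, collapsed }

struct MapViewModel {
    var hasCurrentLocation = false

    // Expose the mapped value so the view binds to it directly,
    // avoiding a bool-to-visibility converter in the binding layer.
    var pushpinVisibility: Visibility {
        return hasCurrentLocation ? .visible : .collapsed
    }
}
```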
If you want to initiate showing the Pushpin from the button push via a command, the Silverlight Windows Phone Toolkit has a behavior that enables hooking application bar buttons up to commands.