While working through a tutorial, I am using a function to show a visible rect containing all of the annotations on the mapView, as shown:
// 1
let rectToDisplay = self.treasures.reduce(MKMapRectNull) { (mapRect: MKMapRect, treasure: Treasure) -> MKMapRect in
    // 2
    let treasurePointRect = MKMapRect(origin: treasure.location.mapPoint,
                                      size: MKMapSize(width: 0, height: 0))
    // 3
    return MKMapRectUnion(mapRect, treasurePointRect)
}
// 4
self.mapView.setVisibleMapRect(rectToDisplay, edgePadding: UIEdgeInsetsMake(74, 10, 10, 10), animated: false)
Everything works, but I don't understand exactly how it works.
At comment // 3, we do the following:
return MKMapRectUnion(mapRect, treasurePointRect)
Before that, we declare
mapRect: MKMapRect
So mapRect does not have any initial value and is not supposed to contain any values. Am I right?
How exactly is MKMapRectUnion calculated if mapRect has zero values? Is there any way I could look at every step of the function using some kind of NSLog statement?
If you would be so kind, please explain to me in detail how that function works. As I understand it, the function tries to build a map rect by combining a "zero"-value map point with other map points that have correct values.
You have understood everything correctly. As for your question:
How exactly is MKMapRectUnion calculated if mapRect has zero values?
the Apple docs say:
If either rectangle is null, this method returns the other rectangle. The origin point of the returned rectangle is set to the smaller of the x and y values for the two rectangles. Similarly, the height and width of the rectangle are computed by taking the maximum x and y values and subtracting the x and y values for the new origin point.
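In other words, the union takes the smaller origin and stretches to the larger far corner. A hand-worked illustration (the values are mine), using two zero-size rects like the ones the reduce builds:
import MapKit

let r1 = MKMapRect(origin: MKMapPoint(x: 10, y: 40), size: MKMapSize(width: 0, height: 0))
let r2 = MKMapRect(origin: MKMapPoint(x: 30, y: 20), size: MKMapSize(width: 0, height: 0))
let union = MKMapRectUnion(r1, r2)
// origin = (min(10, 30), min(40, 20)) = (10, 20)
// width  = max(10, 30) - 10 = 20
// height = max(40, 20) - 20 = 20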
As for how exactly that is calculated internally, you would have to ask Apple engineers or reverse engineer MapKit. If the function works fine, you don't have to go to such lengths and divert from your real assignment.
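As for watching every step: the reduce closure runs once per treasure, so you can log from inside it. A sketch adapting the question's own code (MKStringFromMapRect is a standard MapKit formatter; NSLog would work just as well as print):
let rectToDisplay = self.treasures.reduce(MKMapRectNull) { (mapRect: MKMapRect, treasure: Treasure) -> MKMapRect in
    let treasurePointRect = MKMapRect(origin: treasure.location.mapPoint,
                                      size: MKMapSize(width: 0, height: 0))
    let union = MKMapRectUnion(mapRect, treasurePointRect)
    // Log the accumulated rect, the new point rect, and their union at each step.
    print("step: \(MKStringFromMapRect(mapRect)) + \(MKStringFromMapRect(treasurePointRect)) -> \(MKStringFromMapRect(union))")
    return union
}
On the first pass, mapRect is MKMapRectNull, so the union is just the first treasure's point rect; each later pass grows the rect only as much as needed to include the next point.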
I'm a newbie in Swift and macOS.
I want to find a method to get the exact display coordinate, like:
NSEvent.mouseLocation
I have found a method in CoreGraphics:
func CGDisplayBounds(_ display: CGDirectDisplayID) -> CGRect
but the coordinates are different.
I can work around it by applying a method to mathematically convert the Y coordinate.
But is there any method to get or convert the position programmatically?
I expect to get the same coordinate that NSEvent.mouseLocation returns for the mouse location.
Thanks for your attention.
As you noted, CoreGraphics has what Apple calls ‘flipped’ geometry, with the origin at the top left and the y coordinates increasing toward the bottom of the screen. This is the geometry used by most computer graphics systems.
AppKit prefers what Apple calls ‘non-flipped’, with the origin at the bottom left and the y coordinates increasing toward the top of the screen. This is the geometry normally used in mathematics.
The origin (0, 0) of the CoreGraphics global geometry is always at the top-left of the ‘main’ display (identified by CGMainDisplayID()). The origin of the AppKit global geometry is always at the bottom-left of the main display. To convert between the two geometries, subtract your y coordinate from the height of the main display.
That is:
extension CGPoint {
    func convertedToAppKit() -> CGPoint {
        return .init(
            x: x,
            y: CGDisplayBounds(CGMainDisplayID()).height - y
        )
    }

    func convertedToCoreGraphics() -> CGPoint {
        return .init(
            x: x,
            y: CGDisplayBounds(CGMainDisplayID()).height - y
        )
    }
}
You may notice that these two functions have the same implementation. You don't really need two functions; you can just use one. It converts in both directions.
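For example (my own usage sketch):
let cgPoint = CGPoint(x: 100, y: 50)                   // CoreGraphics: top-left origin
let appKitPoint = cgPoint.convertedToAppKit()          // AppKit: bottom-left origin
let backAgain = appKitPoint.convertedToCoreGraphics()  // equals cgPoint again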
Calling CGDisplayBounds(CGMainDisplayID()) might also be inefficient. You might want to cache the value or batch your transformations if you're going to be doing a lot of them. But if you cache the value, you'll want to subscribe to NSApplication.didChangeScreenParametersNotification so you can update the cached value if it needs to change.
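A possible caching sketch (the class and property names are mine, not any Apple API):
import AppKit

final class MainDisplayHeightCache {
    private(set) var height = CGDisplayBounds(CGMainDisplayID()).height
    private var observer: NSObjectProtocol?

    init() {
        // Refresh the cached height whenever the screen configuration changes.
        observer = NotificationCenter.default.addObserver(
            forName: NSApplication.didChangeScreenParametersNotification,
            object: nil,
            queue: .main
        ) { [weak self] _ in
            self?.height = CGDisplayBounds(CGMainDisplayID()).height
        }
    }

    deinit {
        if let observer = observer {
            NotificationCenter.default.removeObserver(observer)
        }
    }
}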
When setting the region for an MKMapView using MKCoordinateRegionMakeWithDistance, the resulting region always gives the wrong results: the size is always slightly bigger than the best fit I get on other phone models.
For example, doing:
let region = MKCoordinateRegionMakeWithDistance(someLocation, 400, 200)
let adjustedRegion = mapView.regionThatFits(region)
mapView.setRegion(adjustedRegion, animated: true)
(The map view's vertical-to-horizontal ratio is set to 2:1.)
This always results in a view that gives me 420 m vertically and ~210 m horizontally, while this doesn't happen on other phone models.
Understandably, it is meant to find the 'best fit' region for the specified dimensions; what concerns me is that the results are different on the iPhone X specifically (they are fine on the 8, 8+, and 5s).
Is there something I need to do specifically for iPhone X models with mapViews?
It turns out that MapKit's MKCoordinateRegionMakeWithDistance does its calculations without accounting for the safe area insets.
Since my mapView was set to be at the bottom of the screen, the vertical distance needs to be reduced to compensate for this behaviour:
let verticalDistance = 400 * ((mapView.bounds.height - mapView.safeAreaInsets.bottom) / mapView.bounds.height)
let region = MKCoordinateRegionMakeWithDistance(someLocation, verticalDistance, 200)
let adjustedRegion = mapView.regionThatFits(region)
mapView.setRegion(adjustedRegion, animated: true)
This allows the mapView's resulting region to be correct in vertical and horizontal distance (verified against Google Maps' web distance-measuring tool).
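If you need this in more than one place, the same compensation can be wrapped in a small helper (a sketch; the function name and structure are mine):
import MapKit

func regionCompensatingForBottomInset(center: CLLocationCoordinate2D,
                                      latitudinalMeters: CLLocationDistance,
                                      longitudinalMeters: CLLocationDistance,
                                      in mapView: MKMapView) -> MKCoordinateRegion {
    // Scale the requested vertical distance by the fraction of the map view
    // that lies above the bottom safe-area inset.
    let visibleFraction = Double((mapView.bounds.height - mapView.safeAreaInsets.bottom) / mapView.bounds.height)
    let region = MKCoordinateRegionMakeWithDistance(center,
                                                    latitudinalMeters * visibleFraction,
                                                    longitudinalMeters)
    return mapView.regionThatFits(region)
}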
Does anybody know why the following latLng → container point → latLng round trip is not reciprocal?
var point = dispmap.latLngToContainerPoint(latlng);
var newPoint = L.point([point.x, point.y]);
var newLatLng = dispmap.containerPointToLatLng(newPoint);
When I execute this code I send in latlng = (26.75529, -80.93581).
newLatLng, which by inspection of the code above I would expect to reciprocate, gives back:
newLatLng = (26.75542,-80.93628)
I want to array some markers with identical lat-lons around their shared spot on a map, and bumping each by some screen coordinates looks like the best method, based on some blog/issue reading I've done.
I'm "close" to what I want to achieve, but as I try to validate what these Leaflet calls are doing for me, I hit the fundamental question above.
They can't be reciprocal:
Latitude and longitude are float values while x and y are integer values.
This means that there is a (theoretically) infinite number of latlngs and a rather small number of points in your view (width × height).
Furthermore, I'm not sure how you define identical latlngs; the best you can do is speak of proximity.
If I read between the lines, identical may mean that the markers overlap. Then the best way is to have a look at how Leaflet.MarkerCluster tackles the problem.
I was able to achieve my desired result by altering the zoom level to avoid pixel-point quantization effects on my translations. The screenshot below illustrates an orange and two green circle markers that represent an identical lat-lon, but I want the green ones arrayed around the orange in a circular fashion; in this example there are only 2 green.
I perform simple circular array math with an angular step size of PI/4 in this example. The KEY to getting the visual effect correct is the dispmap.setZoom(dispmap._layersMaxZoom) call BEFORE I do the math; then I invoke dispmap.setZoom(mats.zoom) after the math, which gives the user the desired zoom level as specified by the variable mats.zoom.
var arrayRad = 20;
var dtheta = Math.PI / 4;
var theta = 0;

dispmap.setZoom(dispmap._layersMaxZoom);
L.geoJson(JSON.parse(mats.intendeds), {
    pointToLayer: function (feature, latlng) {
        var point = dispmap.latLngToContainerPoint(latlng);
        var dx = arrayRad * Math.cos(theta);
        var dy = arrayRad * Math.sin(theta);
        theta += dtheta;
        var newPoint = L.point([point.x + dx, point.y + dy]);
        var newLatLng = dispmap.containerPointToLatLng(newPoint);
        return L.circleMarker(newLatLng, intendedDeliveryLocationMarkerOptions);
    },
    onEachFeature: onEachIntendedLocFeature
}).addTo(dispmap);
dispmap.setZoom(mats.zoom);
Sample screen shot at max zoom level: 2 arrayed markers
A little background: I have a function spawnBubbles(), which uses the output of another function, determineSpawnPoint().
determineSpawnPoint() returns a random CGPoint. There is also an action that spawns sprite nodes every 0.5 seconds at a CGPoint with a random X coordinate.
The problem: since determineSpawnPoint() is random, sometimes 2 or 3 sprite nodes in a row are created near each other, so they intersect with each other.
What I want to achieve: create a function
func checkForFreeSpace(spawnPoint: CGPoint) -> Bool {
    // some code
}
that returns true if there is free space around a certain point.
So, basically, when I get a new random CGPoint, I want to build a CGRect around it and check whether this rectangle intersects any sprite nodes (in common-sense terms, whether there is free space around it).
You can create two CGRects, one from the point and one from a node, and use the CGRectIntersectsRect function to check whether they intersect. The function returns true if they do.
if CGRectIntersectsRect(rect1, rect2) {
    println("They intersect")
}
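Applied to the question, a checkForFreeSpace sketch might look like this (my assumptions: the code lives in your SKScene subclass, spawned bubbles are direct children of the scene, and clearance is a padding value I picked):
func checkForFreeSpace(spawnPoint: CGPoint) -> Bool {
    let clearance: CGFloat = 50  // assumed free-space radius around the point
    let spawnRect = CGRect(x: spawnPoint.x - clearance,
                           y: spawnPoint.y - clearance,
                           width: clearance * 2,
                           height: clearance * 2)
    for node in children {
        // Occupied if any existing node's frame overlaps the candidate rect.
        if CGRectIntersectsRect(node.frame, spawnRect) {
            return false
        }
    }
    return true
}
You would call it from spawnBubbles() and re-roll determineSpawnPoint() until it returns true.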
I want to draw a UIImage with a CGAffineTransform, but it gives the wrong output with CGContextConcatCTM.
I have tried the code below:
CGAffineTransform t = CGAffineTransformMake(1.67822, -1.38952, 1.38952, 1.67822, 278.684, 209.129); // transformation of uiimageview
UIGraphicsBeginImageContext(CGSizeMake(1024, 768));
CGContextRef imageContext = UIGraphicsGetCurrentContext();
CGContextDrawImage(imageContext, dragView.frame, dragView.image.CGImage);
CGContextConcatCTM(imageContext, t);
NSLog(@"\n%@\n%@", NSStringFromCGAffineTransform(t), NSStringFromCGAffineTransform(CGContextGetCTM(imageContext)));
Output :
[1.67822, -1.38952, 1.38952, 1.67822, 278.684, 209.129] // imageview transformation
[1.67822, 1.38952, 1.38952, -1.67822, 278.684, 558.871] // drawn image transformation
CGAffineTransform CGAffineTransformMake(
    CGFloat a,
    CGFloat b,
    CGFloat c,
    CGFloat d,
    CGFloat tx,
    CGFloat ty
);
Parameters b, d, and ty changed. How do I solve this?
There is no problem to solve. Your log output is correct.
Comparing the two matrices, the difference between them is this:
scale vertically by -1 (which flips the sign of two of the first four members)
translate vertically (which changes the last member)
Naively subtracting the ty members gives 558.871 - 209.129 = 349.742, which looks puzzling: it matches neither the 768-point height you gave the context nor half of it. But member-by-member subtraction is the wrong operation here; as explained below, concatenation is matrix multiplication. Solving the actual matrix equation shows the CTM the context started with was [1, 0, 0, -1, 0, 768]: a flip about the full 768-point context height.
A vertical scale and translate is how you would flip a context. This context has gone from lower-left origin to upper-left origin, or vice versa.
That happened before you concatenated your (unexplained, hard-coded) matrix in. Assuming you didn't flip the context yourself, it probably came that way (I would guess as a UIKit implementation detail).
Concatenation (as in CGContextConcatCTM) does not replace the old transformation matrix with the new one; it is matrix multiplication. The matrix you have afterward is the product of both the matrix you started with and the one you concatenated onto it. The resulting matrix is both flipped and then… whatever your matrix does.
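You can check the multiplication yourself (my own verification snippet, in Swift for brevity): concatenating the question's matrix onto a flip about 768 points reproduces the logged result exactly.
import CoreGraphics

// The matrix from the question, and a plain flip about a 768-point height.
let t = CGAffineTransform(a: 1.67822, b: -1.38952, c: 1.38952, d: 1.67822,
                          tx: 278.684, ty: 209.129)
let flip = CGAffineTransform(a: 1, b: 0, c: 0, d: -1, tx: 0, ty: 768)

// concatenating(_:) is matrix multiplication: apply t first, then the flip.
// Prints the same values the question logged:
// [1.67822, 1.38952, 1.38952, -1.67822, 278.684, 558.871]
print(t.concatenating(flip))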
You can see this for yourself by simply getting the CTM before you concatenate your matrix onto it, and logging that. You should see this:
[1, 0, 0, -1, 0, 768]
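If it helps, the same check in Swift (my own snippet; in the question's Objective-C, the same NSLog placed before CGContextConcatCTM would show it):
import UIKit

UIGraphicsBeginImageContext(CGSize(width: 1024, height: 768))
if let context = UIGraphicsGetCurrentContext() {
    // The CTM before any concat: already flipped about the 768-point height.
    print(context.ctm)  // [1, 0, 0, -1, 0, 768]
}
UIGraphicsEndImageContext()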
See also “The Math Behind the Matrices” in the Quartz 2D Programming Guide.