Recalculate values on different scale in chart - swift

I want to display some values inside a chart-like tool that works in pixels.
The problem is that the left (vertical) axis has a maximum extent of 200 pixels. Inside that pixel range I want to display altitude values that can span 200m-1500m, or 324m-724m, or anything else.
So I need to rescale the original values by some factor to display them inside this chart. I haven't found the right solution yet. Any hints?

You have a range of Y coordinates, 0..YMax (200 in your case), and a data range, Data_Low..Data_High (find the min and max of your values).
To map the data range onto the axis range, use the linear formula:
Y = (Value - Data_Low) * YMax / (Data_High - Data_Low)
If the axis starts from YMin, use:
Y = YMin + (Value - Data_Low) * (YMax - YMin) / (Data_High - Data_Low)
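A minimal Swift sketch of that formula (the function and parameter names here are illustrative, not from any library):

import CoreGraphics

// Maps value from dataLow...dataHigh onto a pixel position in yMin...yMax.
func scaled(_ value: Double, dataLow: Double, dataHigh: Double,
            yMin: CGFloat = 0, yMax: CGFloat = 200) -> CGFloat {
    let fraction = (value - dataLow) / (dataHigh - dataLow)
    return yMin + CGFloat(fraction) * (yMax - yMin)
}

// e.g. altitudes 200m-1500m on a 200-pixel axis:
// scaled(850, dataLow: 200, dataHigh: 1500) == 100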

Related

Understanding MATLAB Graticules in Meshgrat and Pcolorm

I'm having trouble understanding what precisely the output of meshgrat means and how this relates to the lat and lon parameters of pcolorm(lat,lon,Z). I have a grid of global data, I'll call Z, at a 1.5 degree latitude x 1.5 degree longitude spatial resolution. Thus I have a matrix that's 120 x 240 (180 degrees of latitude / 1.5 = 120, 360 degrees of longitude / 1.5 = 240). Row 1 is 90 N and column 1 is 180 W (-180).
If I follow the MATLAB documentation, I can use meshgrat to produce the lat and lon arguments that I need to supply to pcolorm as follows.
latlim = [-90 90];
lonlim = [-180 180];
[lat,lon] = meshgrat(latlim,lonlim,[120 240]);
However, I don't understand why the spacing of the output is the way it is. For example, the first five values of lat are [-90.0000, -88.4874, -86.9748, -85.4622, -83.9496...]. The lon values follow the same pattern. The spacing is very close to 1.5 degrees, but not exactly 1.5. Why is there a discrepancy? The documentation claims that the paired lat and lon values are the location of the graticule vertices. In that case, these values make some sense, since there will always be one more vertex than actual grid cells. To test this, I made the following adjustment to the meshgrat code by adding one extra row and column:
latlim2 = [-90 90];
lonlim2 = [-180 180];
[lat2,lon2] = meshgrat(latlim2,lonlim2,[121 241]);
This did, indeed, produce the expected output, with the spacing now exactly at 1.5 degrees (i.e. [-90.0000, -88.5000, -87.0000, -85.5000, -84.0000...]). Again, this is logical if these are viewed as vertices. But under this scenario lat and lon no longer match Z in size, which goes against how the documentation says to treat lat and lon in this case.
There seems to be a mismatch here: either the spacing in the lat/lon grids is not accurate, or the grids are not the same size as the data, which would be fine in my mind as long as MATLAB knows how to interpret them accordingly, but the documentation does not seem to suggest using it this way. I have no detailed knowledge of how the MATLAB functions work at a finer level. Can someone explain to me what I'm missing?
Thus I have a matrix that's 120 x 240 (180 degrees of latitude / 1.5 = 120, 360 degrees of longitude / 1.5 = 240).
180/1.5 is indeed 120. But you also have an element at 0 degrees (presumably), so a grid running from -90 to 90 in steps of exactly 1.5 degrees has 121 points, not 120. With only 120 points, meshgrat has to stretch the spacing slightly to cover the same limits.
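As a quick check of the arithmetic (a Swift sketch, just for illustration; the function name is mine):

// n points spread evenly over 180 degrees, endpoints included,
// have spacing 180/(n-1), not 180/n.
func graticuleSpacing(points n: Int) -> Double {
    return 180.0 / Double(n - 1)
}

print(graticuleSpacing(points: 120)) // 1.5126... - the spacing observed above
print(graticuleSpacing(points: 121)) // exactly 1.5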

How do I create a rectangular mask at known angles?

I have created a synthetic image that consists of a circle at the centre of a box with the code below.
%# Create a logical image of a circle with image size specified as follows:
imageSizeY = 400;
imageSizeX = 300;
[ygv, xgv] = meshgrid(1:imageSizeY, 1:imageSizeX);
%# Next create a logical mask for the circle with specified radius and center
centerY = imageSizeY/2;
centerX = imageSizeX/2;
radius = 100;
Img = double( (ygv - centerY).^2 + (xgv - centerX).^2 <= radius.^2 );
%# change the background label from 0 to 2
for ii = 1:numel(Img)
    if Img(ii) == 0
        Img(ii) = 2; %change label from 0 to 2
    end
end
%# plot image
RI = imref2d(size(Img),[0 size(Img, 2)],[0 size(Img, 1)]);
figure, imshow(Img, RI, [], 'InitialMagnification','fit');
Now, I need to create a rectangular mask (with label == 3, and row/col dimensions: 1 by imageSizeX) across the image from top to bottom, at known angles with the edges of the circle (see attached figure). Also, how can I make the rectangle thicker than 1 by imageSizeX? As another option, I would love to try having the rectangle stop at, say, column 350. Lastly, any ideas how I can improve the resolution? I mean, is it possible to keep the image size the same while increasing/decreasing the resolution?
I have no idea how to go about this. Please, I need any help/advice/suggestions that I can get. Many thanks!
You can use the cos function to find the x coordinate at the desired angle phi.
First notice that the radius that passes through the vertex of phi makes an angle with the x-axis given by:
theta = pi - phi
and the x coordinate of that vertex is given by:
x = centerX + radius * cos(theta)
so the mask simply needs to set that row to 3.
Example:
phi = 45; % Desired angle in degrees
width = 350; % Desired width in pixels
height = 50; % Desired height of bar in pixels
theta = pi-phi*pi/180; % The radius angle
x = centerX + round(radius*cos(theta)); % Find the nearest row
x0 = max(1, x-height); % Find where to start the bar
Img(x0:x,1:width)=3;
(Resulting image: the circle with a horizontal bar of label 3 at the row of the 45-degree vertex, spanning the first 350 columns.)
Note that the max function is used to deal with the case where the bar thickness would extend beyond the top of the image.
Regarding resolution: the image resolution is determined by the size of the matrix you create, (400, 300) in your example. If you want higher resolution, simply increase those numbers. However, if you would like to tie the resolution to a higher DPI (dots per inch), so there are more pixels in each physical inch, you can use the "Export Setup" window in the figure's File menu.

Graph with logarithmic y axis in Swift

My goal is to create a graph similar to the picture below. I actually managed to implement it with a combination of magic numbers and a fixed scale (0.001 - 1000). So to summarize: I am looking for a formula that calculates the right position to plot lines on a logarithmic y scale, for a range of predefined values.
Y axis: logarithmic scale
X axis: Integers
Any help will be welcome!
I solved it thanks to the help of @DietrichEpp. Here is the function that calculates the Y coordinate, given:
screenY0 - min point on the y axis
screenY1 - max point on the y axis
dataY0 - value corresponding to the top of the scale
dataY1 - value corresponding to the bottom of the scale
func convert(data: Double, screenY0: CGFloat, screenY1: CGFloat, dataY0: Double, dataY1: Double) -> CGFloat {
    return screenY0 + (log(CGFloat(data)) - log(CGFloat(dataY0))) / (log(CGFloat(dataY1)) - log(CGFloat(dataY0))) * (screenY1 - screenY0)
}
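For example, mapping the 0.001 - 1000 scale from the question onto a 200-point-tall axis (the values here are just illustrative):

// 1.0 is the geometric midpoint of 0.001 and 1000,
// so it lands exactly halfway down the axis.
let y = convert(data: 1.0, screenY0: 0, screenY1: 200, dataY0: 1000, dataY1: 0.001)
// y == 100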

Convert from coordinates to pixels

I am implementing a right-click context menu on my Google Maps v3 map, and I need the pixel x and y to position the menu correctly. I have the lat and the lng; does anyone have a nice solution to get the pixel x and y?
Best Regards
Henkemota
index=x+(y*height)
x = index % width
y = index / height
Correction to the above answer:
index = x + (y * width)
// (not y*height: one full horizontal line of pixels is width wide (e.g. 1280px), so multiply that by the number of lines (y) down the screen, then add x for the offset within that line.)
x = index % width
y = index / width
// (integer division; by the same reasoning, divide by width, not height.)
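In Swift terms, for a row-major pixel buffer (a quick sketch; the function names are mine):

// Convert between a flat buffer index and (x, y) pixel coordinates.
func index(x: Int, y: Int, width: Int) -> Int {
    return x + y * width
}

func coordinates(of index: Int, width: Int) -> (x: Int, y: Int) {
    return (x: index % width, y: index / width)
}

// e.g. with width 1280: index(x: 5, y: 2, width: 1280) == 2565
// and coordinates(of: 2565, width: 1280) == (x: 5, y: 2)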

What is the depth image received from Kinect

When I ran this Matlab code to get the depth image, the result I got is a 480x640 matrix. The minimum element value is 0 and the maximum is 2711. What does 2711 mean? Is it the distance from the camera to the farthest part of the image? And what is the unit of 2711? Meters, feet, or something else?
I don't know what exactly the Matlab code does to the depth, but it probably does some processing on it, because the depth sent by the Kinect is encoded on 11 bits, so it shouldn't be higher than 2048. Try to find out what the code does, or get access to the raw data sent by the Kinect.
The data sent by the Kinect is not a proper distance (it's a "disparity"), so you have to do some math to convert it to useful units.
From the OpenKinect project wiki (which contains useful information about the Kinect):
From their data, a basic first order approximation for converting the raw 11-bit disparity value to a depth value in centimeters is: 100/(-0.00307 * rawDisparity + 3.33). This approximation is approximately 10 cm off at 4 m away, and less than 2 cm off within 2.5 m.
A better approximation is given by Stéphane Magnenat in this post: distance = 0.1236 * tan(rawDisparity / 2842.5 + 1.1863) in meters. Adding a final offset term of -0.037 centers the original ROS data. The tan approximation has a sum squared difference of .33 cm while the 1/x approximation is about 1.7 cm.
Once you have the distance using the measurement above, a good approximation for converting (i, j, z) to (x, y, z) is:
x = (i - w / 2) * (z + minDistance) * scaleFactor * (w / h)
y = (j - h / 2) * (z + minDistance) * scaleFactor
z = z
where minDistance = -10 and scaleFactor = .0021. These values were found by hand.
You can find more details about the Kinect's depth camera and its calibration on the ROS website (and many others!).
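As a rough Swift sketch of the conversions quoted above (the constants are from the quote; the function and parameter names are mine, and it assumes raw is the 11-bit disparity):

import Foundation

// Magnenat's tan approximation, in meters, with the -0.037 offset applied.
func depthInMeters(rawDisparity raw: Double) -> Double {
    return 0.1236 * tan(raw / 2842.5 + 1.1863) - 0.037
}

// Convert a pixel (i, j) with depth z to (x, y, z),
// using the hand-tuned constants from the quote.
func worldPoint(i: Double, j: Double, z: Double, w: Double, h: Double) -> (x: Double, y: Double, z: Double) {
    let minDistance = -10.0
    let scaleFactor = 0.0021
    let x = (i - w / 2) * (z + minDistance) * scaleFactor * (w / h)
    let y = (j - h / 2) * (z + minDistance) * scaleFactor
    return (x, y, z)
}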
If you map the data to a meter scale it compresses the depth image slightly. I found this was an issue when I was trying to look for planes in the mapped data.