AEM 6.4: Meaning of values in image map properties

AEM offers a plugin to create image maps in its internal in-place editor. After configuration, the given values are stored in the following format:
[rect(89,92,356,368)"/content/sites/we-retail/us"|"_blank"|"fdfdfdfdf"|(0.2,0.2004,0.8,0.8017)]
The first parentheses define the coordinates of the chosen shape.
The content within the first pair of quotation marks defines the target site, the second defines how the browser should open it, and the third contains an alternative text for non-image display.
What I don't know is the meaning of the values in the second parentheses. Does someone know what these values stand for?
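For reference, the stored value can be split into its parts with a bit of string handling. This is only a rough Python sketch based on the example above; the regular expression is my own assumption, not something AEM ships:

import re

# Example value as stored by the AEM image map plugin (taken from the question)
stored = '[rect(89,92,356,368)"/content/sites/we-retail/us"|"_blank"|"fdfdfdfdf"|(0.2,0.2004,0.8,0.8017)]'

# Assumed pattern: shape(coords)"href"|"target"|"alt"|(relative coords)
pattern = re.compile(
    r'\[(?P<shape>\w+)\((?P<coords>[^)]*)\)'
    r'"(?P<href>[^"]*)"\|"(?P<target>[^"]*)"\|"(?P<alt>[^"]*)"'
    r'\|\((?P<relcoords>[^)]*)\)\]'
)

m = pattern.match(stored)
if m:
    print(m.group('shape'))      # rect
    print(m.group('coords'))     # 89,92,356,368
    print(m.group('href'))       # /content/sites/we-retail/us
    print(m.group('target'))     # _blank
    print(m.group('alt'))        # fdfdfdfdf
    print(m.group('relcoords'))  # 0.2,0.2004,0.8,0.8017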

From the WCM Core Components Image model, they are called relative coordinates.
They are not standard HTML attributes and are instead populated as data attributes of the area tag within the image component.
See code below:
<area shape="${area.shape}" coords="${area.coordinates}" href="${area.href}"
target="${area.target}" alt="${area.alt}" data-cmp-hook-image="area"
data-cmp-relcoords="${area.relativeCoordinates}">
Since the map coordinates are fixed pixel values and do not change on their own when the image scales with the screen size, the image component's JavaScript uses this relative coordinates data to recompute the coordinates of each map area whenever the image size changes. This is handled by the resizeAreas() function within the component's clientlib.
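To make the arithmetic concrete (this is only a Python illustration, not the component's actual JavaScript), the relative coordinates are the absolute coordinates divided by the image's natural dimensions, and resizing re-multiplies them by the currently rendered size. The 445x459 natural size below is an assumption derived from the example values (89/445 = 0.2, 92/459 ≈ 0.2004, and so on), and the rendered size is made up:

# Absolute rect coordinates and relative coordinates from the stored value above
abs_coords = (89, 92, 356, 368)    # x1, y1, x2, y2 in pixels
natural_size = (445, 459)          # assumed original width/height of the image

# Relative coordinates = absolute coordinates divided by the natural dimensions
rel_coords = tuple(value / natural_size[i % 2] for i, value in enumerate(abs_coords))
print(rel_coords)                  # ≈ (0.2, 0.2004, 0.8, 0.8017)

# On resize, resizeAreas() effectively does the inverse:
# multiply the relative coordinates by the currently rendered width/height.
rendered_size = (890, 918)         # hypothetical rendered size after scaling
new_coords = tuple(round(value * rendered_size[i % 2]) for i, value in enumerate(rel_coords))
print(new_coords)                  # (178, 184, 712, 736)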

Related

Is there a way to check for markers whose icons intersect / overlap visibly?

I am building a map and want to use the Leaflet markercluster plugin to cluster any markers whose icons visibly intersect (i.e. overlap each other). I can't seem to figure out a way to check whether the markers' icons intersect, though.
I've examined the documentation and the Marker objects. The Marker object has no "bounds" object and no function to return the bounds of the icon.
Yes, it's possible.
This is implemented in some Leaflet plugins, such as Leaflet.LayerGroup.Collision. The technique involves fetching the computed style of each icon's HTML element to get its actual size in CSS pixels, offsetting those numbers by the relative pixel position of the marker's LatLng, and using an R-tree data structure to speed up the calculation of the overlaps. Do have a look at the complete source of the Leaflet.LayerGroup.Collision plugin.
Note that this technique only takes the rectangular bounding boxes of the icons into account; while it would be possible to check the individual transparent pixels, that would involve more complex data structures and a different technique for fetching the opacity of each pixel.
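The bounding-box plus R-tree part of that technique is language-agnostic. Here is a rough illustration in Python using the rtree package (not Leaflet's JavaScript, and with made-up pixel boxes) of how the overlap query works once you have each icon's box in layer-pixel space:

from rtree import index

# Hypothetical icon bounding boxes in layer-pixel space:
# (min_x, min_y, max_x, max_y), i.e. the icon's computed CSS size
# offset by the pixel position of the marker's LatLng.
icon_boxes = {
    0: (100, 100, 125, 141),
    1: (110, 120, 135, 161),   # overlaps box 0
    2: (300, 80, 325, 121),    # stands alone
}

# Index every box in an R-tree so overlap queries stay cheap.
idx = index.Index()
for marker_id, box in icon_boxes.items():
    idx.insert(marker_id, box)

# For each marker, ask the R-tree which other boxes intersect its own box.
for marker_id, box in icon_boxes.items():
    hits = [other for other in idx.intersection(box) if other != marker_id]
    print(marker_id, "overlaps", hits)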

Polygon label displaying duplicates in Mapbox Studio style editor

I have uploaded a zipped polygon shapefile as a Mapbox tileset and created a layer named polygon_label, but the labels are showing up multiple times inside each polygon area.
Is there any way to place the label at the polygon centroid, or to restrict the display of duplicate labels?
How can I get the label at the centroid of the polygon and remove the duplicated labels from the polygon area?
You need to create point geometries and use those as your source.
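One way to build that point source is to derive one label point per polygon offline and upload it as its own tileset. A rough sketch with GeoPandas (the file names and the choice of representative points are my own assumptions):

import geopandas as gpd

# Read the polygon layer (e.g. the shapefile that was zipped and uploaded)
sections = gpd.read_file("land_sections.shp")

# Build a point layer with one label point per polygon.
# representative_point() guarantees the point lies inside the polygon,
# which is usually preferable to the centroid for concave shapes.
labels = sections.copy()
labels["geometry"] = sections.representative_point()

# Export as GeoJSON and upload it to Mapbox as a separate tileset;
# then point the symbol layer at this source instead of the polygon layer.
labels.to_file("section_labels.geojson", driver="GeoJSON")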
I have the same issue with a shapefile of land sections that I imported. To get the outline of the land sections I added a fill-type layer, since the tileset contains polygon features. Then I added another layer of type symbol to show the section address text field from the tileset features, with the placement set to "Point". But the text field still shows up in multiple places inside the polygons.
The only workaround I have found is to make the labels appear along the boundaries of the polygons by setting the placement to "Line". This still shows multiple labels, but it is a bit friendlier for the user.

DICOM: MATLAB versus ImageJ grey level

I am processing a group of DICOM images using both ImageJ and MATLAB.
In order to do the processing, I need to find spots that have grey levels between 110 and 120 in an 8 bit-depth version of the image.
The thing is: the images that MATLAB and ImageJ show me are different, even though they come from the same source file.
I assume that one of them is performing some sort of conversion of the grey levels when reading or before displaying. But which one of them?
And in that case, how can I calibrate them so that they display the same image?
The following image shows a comparison of the two reads.
In the case of ImageJ, I just opened the application and opened the DICOM image.
In the second case, I used the following MATLAB script:
[image] = dicomread('I1400001');
figure (1)
imshow(image,[]);
title('Original DICOM image');
So which one is changing the original image, and if that's the case, how can I modify things so that both versions look the same?
It appears that by default ImageJ uses the Window Center and Window Width tags in the DICOM header to perform window and level contrast adjustment on the raw pixel data before displaying it, whereas the MATLAB code is using the full range of data for the display. Taken from the ImageJ User's Guide:
16 Display Range of DICOM Images: With DICOM images, ImageJ sets the initial display range based on the Window Center (0028,1050) and Window Width (0028,1051) tags. Click Reset on the W&L or B&C window and the display range will be set to the minimum and maximum pixel values.
So, setting ImageJ to use the full range of pixel values should give you an image to match the one displayed in MATLAB. Alternatively, you could use dicominfo in MATLAB to get those two tag values from the header, then apply window/leveling to the data before displaying it. Your code will probably look something like this (using the formula from the first link above):
img = dicomread('I1400001');
imgInfo = dicominfo('I1400001');
c = double(imgInfo.WindowCenter);
w = double(imgInfo.WindowWidth);
imgScaled = 255.*((double(img)-(c-0.5))/(w-1)+0.5); % Rescale the data
imgScaled = uint8(min(max(imgScaled, 0), 255)); % Clip the edges
Note that 1) double is used to convert to double precision to avoid integer arithmetic, 2) the rescaled result is converted to unsigned 8-bit integers for display, and 3) I didn't use the variable name image because there is already a function with that name. ;)
A normalized CT image (e.g. after the modality LUT transformation) will have intensity values ranging from -1024 to over +2000 in Hounsfield units (HU), so an image-processing filter should work within this data range. On the other hand, an RGB display driver can only display 256 shades of gray. To overcome this limitation, most medical viewers apply window leveling to create a view of the image in which the anatomy of interest has the proper contrast on an RGB display (mapping the image data of interest to 256 or fewer shades of gray). One way to define the window level settings is to use the Window Center (0028,1050) and Window Width (0028,1051) tags. Also, a single CT image can have multiple window level values, and each pair is basically one view of the anatomy of interest. So using view data for image processing, instead of the actual image data, may not produce consistent results.
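As a complementary sketch of that pipeline (modality rescale to HU, then window/level down to 8-bit) in Python with pydicom and NumPy rather than the MATLAB above, and assuming the header carries RescaleSlope/RescaleIntercept and at least one WindowCenter/WindowWidth pair:

import numpy as np
import pydicom

ds = pydicom.dcmread("I1400001")  # same file name as in the question

# Modality LUT: map stored pixel values to Hounsfield units.
slope = float(getattr(ds, "RescaleSlope", 1))
intercept = float(getattr(ds, "RescaleIntercept", 0))
hu = ds.pixel_array.astype(np.float64) * slope + intercept

# Window/level (linear VOI LUT) using the first Window Center/Width pair.
c = float(np.atleast_1d(ds.WindowCenter)[0])
w = float(np.atleast_1d(ds.WindowWidth)[0])
display = 255.0 * ((hu - (c - 0.5)) / (w - 1.0) + 0.5)
display = np.clip(display, 0, 255).astype(np.uint8)  # 8-bit view for the screen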

Render text with ground-truth metadata

I'm working on optical character recognition. In our work we need to automatically generate rendered word images, and we need each character's location (boundary) in the rendered word image. This metadata about the rendered image is called ground truth.
How can I do that?
I found a rendering C API called Pango which has a function named pango_layout_iter_get_char_extents() that can be used for that.
https://developer.gnome.org/pango/stable/pango-Layout-Objects.html#pango-layout-iter-get-char-extents
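A rough sketch of that approach using the Python bindings (PyGObject and pycairo, my choice here rather than the C API; the font, text, and surface size are placeholders) could look like this:

import cairo
import gi
gi.require_version("Pango", "1.0")
gi.require_version("PangoCairo", "1.0")
from gi.repository import Pango, PangoCairo

# Render a word onto a Cairo surface via Pango.
surface = cairo.ImageSurface(cairo.FORMAT_ARGB32, 400, 100)
context = cairo.Context(surface)
layout = PangoCairo.create_layout(context)
layout.set_font_description(Pango.FontDescription("Sans 32"))
layout.set_text("example", -1)
PangoCairo.show_layout(context, layout)
surface.write_to_png("word.png")

# Walk the layout character by character and record each bounding box
# (Pango units are 1/1024 of a pixel, hence the division by Pango.SCALE).
ground_truth = []
it = layout.get_iter()
while True:
    rect = it.get_char_extents()
    ground_truth.append((rect.x // Pango.SCALE, rect.y // Pango.SCALE,
                         rect.width // Pango.SCALE, rect.height // Pango.SCALE))
    if not it.next_char():
        break

print(ground_truth)  # one (x, y, width, height) box per character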

MATLAB: display RGB values of a FITS image

I want to read a .fits image of a wide-field sky and display the RGB values contained in a star. Can you please suggest a method to do so?
I have used fitsread to read in the image, but I am not able to show the RGB values for specific locations (stars).
In order to do this, you'll need a proper RGB FITS file. The only .fits viewer I know of, ds9, does not support saving RGB FITS files, but rather saves the three separate (R, G, B) FITS images. You can use "getpix" from wcstools (http://tdc-www.harvard.edu/wcstools/) or scisoft (http://www.eso.org/sci/software/scisoft/) on the individual frames. Note that "getpix" returns the pixel value given an image (x,y) location. ds9 does not provide the physical image location, but rather the WCS coordinates, so you may have to convert to image coordinates before calling getpix.
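If you would rather stay in a script than call getpix, the same lookup on the three separate frames can be sketched with Astropy in Python (the file names and pixel location are placeholders, and the frames are assumed to be already aligned):

from astropy.io import fits

# The three separate frames that ds9 saves instead of a single RGB FITS file.
r = fits.getdata("star_r.fits")
g = fits.getdata("star_g.fits")
b = fits.getdata("star_b.fits")

# Image (x, y) location of the star, e.g. converted from the WCS
# coordinates that ds9 reports. FITS convention is 1-based, NumPy is 0-based.
x, y = 512, 384
print("R, G, B at star:", r[y - 1, x - 1], g[y - 1, x - 1], b[y - 1, x - 1])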