Thumbor // crop and "zoom" - thumbnails

I'm pretty new to Thumbor, but I was wondering if, with specific options that I'm unaware of yet, it is possible to "zoom in" on an image.
Plain example below:
From what I've understood so far, it could mean resizing to a specific area. But I lack the experience with it to find the right options (if this is even possible with Thumbor).

I've been working off the other answer in this thread and was completely blocked until I realized it is not quite accurate. You don't resize the image and perform a manual crop. To be fair, Thumbor's docs could also be clearer on how to do this.
From trial and error I found that the required crop points (top left and bottom right) must be determined from the unscaled image, and both must be measured from the absolute top left of the unscaled image. Furthermore, there is no scale/zoom value to calculate; only the final output size counts. An example use case follows to explain:
we have an image URL for a 1000 x 2000 image
we want a final output of 30x30
the crop we want is a 200 x 200 section
the crop we want starts 81px from the left and 93px from the top
This results in:
crop left and top of 81x93 (measured from unscaled top left corner)
crop right and bottom of 281x293 (measured from unscaled top left corner)
and a final output of 30x30
Your URL operations string param would be
81x93:281x293/30x30/
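If it helps, the coordinate arithmetic can be sketched in Python (crop_operation is a hypothetical helper for illustration, not part of Thumbor): given the crop's top-left corner, the crop size, and the output size, it builds the operation string used above.

def crop_operation(left, top, width, height, out_w, out_h):
    # bottom-right corner = top-left corner + crop size, all in unscaled pixels
    right, bottom = left + width, top + height
    return '%dx%d:%dx%d/%dx%d/' % (left, top, right, bottom, out_w, out_h)

print(crop_operation(81, 93, 200, 200, 30, 30))  # -> 81x93:281x293/30x30/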
Be warned that unless you are in unsafe mode (as per the docs), you cannot manipulate the URL values at will after the URL has been generated, because the hash in the URL is computed from the operation string above plus the original URL, signed with your Thumbor key. Note that the trailing / is incorporated at the end of the operation string above. This hashing code is lifted from here:
import crypto from 'crypto-js';
// HMAC-SHA1 of the operation string + image path, keyed with the Thumbor secret
var key = crypto.HmacSHA1(operation + imagePath, thumborKey);
// Base64-encode the digest, then make it URL-safe (+ -> -, / -> _)
key = crypto.enc.Base64.stringify(key);
key = key.replace(/\+/g, '-').replace(/\//g, '_');
The new URL would then be built as follows:
var newURL =
thumborServerUrl + '/' +
key + '/' +
operation + imagePath;
// https://imgs.mysite.com/ajs3kdlfog7prjcme9idgs/81x93:281x293/30x30/https%3A%2F%2Fmysite.s3.amazonaws.com%2Fuser%2Fmy-image.jpg
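For completeness, here is a sketch of the same signing with Python's standard library (the function name is my own; it just mirrors what the crypto-js snippet above does):

import base64
import hashlib
import hmac

def sign(operation, image_path, thumbor_key):
    # HMAC-SHA1 of the operation string + image path, keyed with the Thumbor secret
    digest = hmac.new(thumbor_key.encode(), (operation + image_path).encode(), hashlib.sha1).digest()
    # URL-safe base64 performs the same + -> - and / -> _ substitutions as the JS above
    return base64.urlsafe_b64encode(digest).decode()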

According to this you should be able to make the image larger and then perform a manual crop on it.
So something like:
http://thumbor-server/PointX1xPointY1:PointX2xPointY2/800x600/http://example.com/upload/koala.jpg

Related

Remove some top and bottom rows, and some left and right columns, of a JPG image border using MATLAB

I have RGB museum JPG images. Most of them have image footnotes on one or more sides, and I'd like to remove them. I do that manually using paint software. Now I have applied the following MATLAB code to remove the image footnotes automatically. I get a good result for some images, but for others it does not remove any border. Please, can anyone help me by updating this code so it works for all images?
rgbIm = im2double(imread('A3.JPG'));
hsv = rgb2hsv(rgbIm);
m = hsv(:,:,2);                            % saturation channel
foreground = m > 0.06;                     % value of background
foreground = bwareaopen(foreground, 1000); % or whatever.
labeledImage = bwlabel(foreground);
measurements = regionprops(labeledImage, 'BoundingBox');
% regionprops returns one struct per region; take the first bounding box
ww = measurements(1).BoundingBox;
croppedImage = imcrop(rgbIm, ww);          % was imcrop(rgbImage, ...): undefined variable
In order to remove the boundaries you could use imclearborder, which checks for labelled components at the image boundaries and clears them. Caution: if the ROI itself touches the boundary, it may be removed too. To avoid that, you can use imerode with a suitable strel (a line or disc) before clearing the borders. The accuracy, and how well the method generalizes to all images, depends entirely on the threshold which separates the foreground and background.
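For reference, the same erode-then-clear-border idea can be sketched in Python with scikit-image (the threshold and structuring-element size here are placeholder values taken from the question, not tuned ones):

from skimage import color, io, morphology, segmentation

img = io.imread('A3.JPG')                                # same file as in the question
sat = color.rgb2hsv(img)[:, :, 1]                        # saturation channel, as above
fg = sat > 0.06                                          # same threshold as the MATLAB code
fg = morphology.binary_erosion(fg, morphology.disk(3))   # pull the ROI away from the border
fg = segmentation.clear_border(fg)                       # drop components touching the border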
A more generic method could be to extract the properties of the footnotes. For instance, if they are just some text, you can easily remove them by using edge detection and a morphological opening with a line structuring element along the columns (a basic property used for text detection).
Hope it helps.
I could give you a clearer idea or method if you upload the image.

Get length of irregular object in a BW or RGB picture and draw it into picture for control

I face a well-known problem which I am not able to solve.
I have a picture of a root (http://cl.ly/image/2W3C0a3X0a3Y). From this picture, I would like to know the length of the longest root (1st problem) and the proportions of the big roots and the small roots in % (using the diameter as an indication, which is the 2nd problem). It is important that I can distinguish between fine and big roots, since this is more or less the aim of the study (their proportions compared between different species). As a last thing, I would like to draw a line along the measured longest root to check that everything was measured correctly.
For the length of the longest root, I tried to use regionprops(), which is not optimal since it assumes an oval as the basic shape, if I got that right.
However, the things I could really need support with are in fact:
How can I get the length of the longest root (start point should be the place where the longest root leaves the main root with the biggest diameter)?
Is it possible to distinguish between fine and big roots, and can I get their proportions? (the coin, the round object in the image, is the reference)
Can I draw properties like length and diameter into the picture?
I found out how to draw the centroids of ovals and such, but I just don't understand how to do it with the proposed values.
I hope this is not a double post and that this question does not already exist somewhere else; if it does, I am sorry for that.
I would like to thank the people on this forum; you do a great job, and everybody with a question is lucky to have you here.
Thank you for the help,
Phillip
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
EDIT
I followed the proposed solution; the code so far is as follows:
clc
clear all
close all
img = imread('root_test.jpg');
labTransformation = makecform('srgb2lab');
labI = applycform(img, labTransformation);
% separate the L, a, b channels
l = labI(:,:,1);
a = labI(:,:,2);
b = labI(:,:,3);
level = graythresh(l);       % Otsu threshold on the L channel
bw = im2bw(l, level);        % was im2bw(l), which ignored the computed level
bw = ~bw;
bw = bwareaopen(bw, 200);    % remove small components
se = strel('disk', 5);
bw2 = imdilate(bw, se);
bw2 = imfill(bw2, 'holes');
bw3 = bwmorph(bw2, 'thin', 5);
bw3 = double(bw3);
I4 = bwmorph(bw3, 'skel', 200);
%se = strel('disk', 10);     % this step is for better visibility of the line
%bw4 = imdilate(I4, se);
D = bwdist(I4);
This gets me to the skeleton picture - which is great progress, thank you for that!!!
I am a little bit lost at the point where I have to calculate the distances. How can I tell MATLAB that it has to calculate the distance from all the small roots to the main root (and how do I define the main root)? For this I have to work with the diameters first, right?
Could you maybe give me one or two more hints on how to accomplish the distance/length problem?
Thank you for the great help up to this point!
Phillip
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
EDIT2
Ok, I managed to separate the single root parts. This is not what your edit proposed, but at least it's something. I have the summed length of all roots as well - not too bad. But even with the (I assume) super easy step-by-step explanation, I have never seen such a tree. I stopped at the point at which I have to select an invisible point - the rest is too advanced for me.
I don't want to waste more of your time, and I am very thankful for the help you have given me already. But I suppose I am too MATLAB-stupid to accomplish this :)
Thanks! Keep going like this, it is really helpful.
Phillip
For a pre-starting point, I don't see the need for a resolution of 3439x2439 for that image; it doesn't seem to add anything important to the problem, so I simply worked with a version resized to 800x567 (although there should be (nearly) no problem applying this answer to the larger version). Also, you mention regionprops but I didn't see any description of how you got your binary image, so let us start from the beginning.
I considered your image in the Lab colorspace, then binarized the L channel by Otsu, applied a dilation to this result considering the foreground as black (the same could be done by applying an erosion instead), and finally removed small components. The L channel gives a better representation of your image than the more direct luma formula, leading to an easier segmentation. The dilation (or erosion) is done to join minor features, since there are quite a few ramifications that appear to be irrelevant. This produced the following image:
At this point we could attempt using the distance transform combined with a grey-tone anchored skeleton (see Soille's book on morphology, and/or "Order Independent Homotopic Thinning for Binary and Grey Tone Anchored Skeletons" by Ranwez and Soille). But since the latter is not easily available, I will consider something simpler here. If we perform hole filling on the image above, followed by thinning and pruning, we get a rough sketch of the connections between the many roots. The following image shows the result of this step composed with the original image (and dilated for better visualization):
As expected, the thinned image takes "shortcuts" due to the hole filling. But if that step weren't performed, we would end up with cycles in this image -- something I want to avoid here. Nevertheless, it seems to provide a decent approximation of the sizes of the actual roots.
Now we need to calculate the sizes of the branches (or roots). The first thing is deciding where the main root is. This can be done by using the above binary image before the dilation and considering the distance transform, but that will not be done here -- my interest is only in showing the feasibility of calculating those lengths. Supposing you know where your main root is, we need to find a path from a given root to it, and the size of this path is then the size of that root. Observe that if we eliminate the branch points from the thinned image, we get a nice set of connected components:
Assuming each end point is the end of a root, the size of a root is then given by the shortest path to the main root, and that path is composed of a set of connected components in the image just shown. Now you can find the largest one, the second largest, and all the others that can be calculated by this process.
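To make that concrete, here is a hedged sketch in Python (SciPy + scikit-image; the function and the 8-neighbour branch-point test are my own, not the exact pipeline used above): it skeletonizes a binary root mask, removes the branch points, and labels the remaining components so that the pixel count of each label is the digital branch length.

import numpy as np
from scipy import ndimage as ndi
from skimage.morphology import skeletonize

def label_branches(mask):
    skel = skeletonize(mask)
    # count each skeleton pixel plus its 8 neighbours; a value of 4+ means
    # the pixel has 3 or more skeleton neighbours, i.e. it is a branch point
    counts = ndi.convolve(skel.astype(int), np.ones((3, 3)), mode='constant')
    branch_points = skel & (counts >= 4)
    # label with 8-connectivity so diagonal branch pixels stay together
    branches, n = ndi.label(skel & ~branch_points, structure=np.ones((3, 3)))
    lengths = np.bincount(branches.ravel())[1:]  # pixels per branch = digital length
    return branches, lengths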
EDIT:
In order to make the last step clear, first let us label all the branches found (open the image in a new tab for better visualization):
Now, the "digital" length of each branch is simply the amount of pixels in the component. You can later translate this value to a "real-world" length by considering the object added to the image. Note that at this point there is no need to depend on Image Processing algorithms at all, we can construct a tree from this representation and work there. The tree is built in the following manner: 1) find the branching point in the skeleton that belongs to the main root (this is the "invisible point" between the labels 15, 16, and 17 in the above image); 2) create an edge from that point to each branch connected to it; 3) assign a weight to the edge according to the amount of pixels needed to travel till the start of the other branch; 4) repeat with the new starting branches. For instance, at the initial point, it takes 0 pixels to reach the beginning of the branches 15, 16, and 17. Then, to reach from the beginning of the branch 15 till its end, it takes the size (number of pixels) of the branch 15. At this point we have nothing else to visit in this path, so we create a leaf node. The same process is repeated for all the other branches. For instance, here is the complete tree for this labeling (the dual representation of the following tree is much more space-efficient):
Now you find the largest weighted path -- which corresponds to the size of the largest root -- and so on.
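And a minimal sketch of that final step, assuming the tree has already been extracted (the edge list below is made up purely for illustration): model the skeleton as a weighted graph and take the heaviest path from the main-root point to a leaf.

import networkx as nx

G = nx.Graph()
# hypothetical edges: (junction, next junction or leaf, branch length in pixels)
G.add_weighted_edges_from([('start', 'leaf15', 210), ('start', 'j16', 35),
                           ('j16', 'leaf16', 180), ('j16', 'leaf17', 90)])
dist = nx.single_source_dijkstra_path_length(G, 'start')
longest = max(dist, key=dist.get)  # the leaf ending the largest weighted path
print(longest, dist[longest])      # -> leaf16 215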

Android custom input component with graphical representation?

I have been looking all over the net to find code for a custom input component that I need, but didn't stumble upon anything similar. Here's how I'd like it to work:
the purpose is to input the quantity (a number)
the quantity is to be changed with two buttons (+ & -)
there should be a button to accept the input
Here's the tricky part - the graphical representation of the input:
I'd like to have two pictures representing the currently selected quantity in the following way:
q = 0:
Both pictures are dimmed
q = 1:
The upper-left quarter of the first picture is bright (normal) and the rest is dimmed
q = 2:
The upper half of the first picture is bright (normal) and the rest is dimmed
q = 3:
The upper half + lower-left quarter of the first picture is bright (normal) and the rest is dimmed
q = 4:
The first picture is bright and the second one is dimmed
q = 5:
The first picture is bright and the upper-left quarter of the second picture is bright
...
q = 8:
Both pictures are bright.
I hope I've explained that in an understandable way.
The question is:
Do I have to make 5 instances of each picture (dimmed; bright upper-left quarter; bright upper half; bright upper half + lower-left quarter; bright), or is it possible to have only one instance of each picture (bright) and dim the portions as necessary in code?
Of course, I'd appreciate a link to anything that would be of any help, or a chunk of code.
I think you should be able to handle all of your conditions with just 2 images, using a combination of LinearLayout, FrameLayout and ImageViews. Something like this to represent one image:
FrameLayout
  ImageView
  LinearLayout (divided into 4 cells using the weight property)
You can change the alpha value of the background color of the LinearLayout cells to get the dimmed effect.
This could also be done using different slices of the images and changing the alpha value of the ImageView. You will need to find what suits you more. It won't be easy to find code samples, as this is not a common UI found in apps.

Perspective correction of UIImage from Points

I'm working on an app where I'll let the user take a picture, e.g. of a business card or photograph.
The user will then mark the four corners of the object (which they took a picture of) - like it is seen in a lot of document/image/business card scanning apps:
My question is: how do I crop and fix the perspective according to these four points? I've been searching for days and have looked at several image processing libraries without any luck.
Anyone who can point me in the right direction?
From iOS 8+ there is a Core Image filter called CIPerspectiveCorrection. All you need to do is pass the image and the four points.
There is also one more filter, supported from iOS 6+, called CIPerspectiveTransform, which can be used in a similar way (skewing the image).
If this image were loaded in as a texture, it'd be extremely simple to skew it using OpenGL. You'd literally just draw a full-screen quad and use the yellow correction points as the UV coordinate at each point.
I'm not sure if you've tried the OpenCV library yet, but it has a very nice way to deskew an image. Here is a small snippet that takes an array of corners - your four corners, for example - and a final size to map them into.
You can read the man page for warpPerspective on the OpenCV site.
cv::Mat deskew(cv::Mat& capturedFrame, cv::Point2f source_points[], cv::Size finalSize)
{
    cv::Point2f dest_points[4];
    // Output of the deskew operation has the same color space as the source
    // frame and is mapped onto the requested final size.
    cv::Mat deskewedMat = cv::Mat(finalSize, capturedFrame.type());
    // Deskew to the full output image corners (note: the original snippet built
    // these from the source frame size, which left finalSize unused)
    dest_points[0] = cv::Point2f(0, finalSize.height);                // lower left
    dest_points[1] = cv::Point2f(0, 0);                               // upper left
    dest_points[2] = cv::Point2f(finalSize.width, 0);                 // upper right
    dest_points[3] = cv::Point2f(finalSize.width, finalSize.height);  // lower right
    // Build the quadrangle "de-skew" transform matrix
    cv::Mat transform = cv::getPerspectiveTransform(source_points, dest_points);
    // Apply the deskew transform
    cv::warpPerspective(capturedFrame, deskewedMat, transform, finalSize, cv::INTER_CUBIC);
    return deskewedMat;
}
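In case a Python prototype is easier to experiment with, here is a hedged equivalent using OpenCV's Python bindings, with the same corner order as the snippet above:

import cv2
import numpy as np

def deskew(image, corners, final_size):
    # corners: four (x, y) points ordered lower-left, upper-left, upper-right, lower-right
    w, h = final_size
    src = np.float32(corners)
    dst = np.float32([[0, h], [0, 0], [w, 0], [w, h]])
    M = cv2.getPerspectiveTransform(src, dst)
    return cv2.warpPerspective(image, M, (w, h), flags=cv2.INTER_CUBIC)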
I don't know an exact solution for your case, but there is an approach for trapezoids: http://www.comp.nus.edu.sg/~tants/tsm/TSM_recipe.html - the idea is to build up the transformation matrix step by step. Theoretically you could add a transformation that converts your shape into a trapezoid.
And there are many questions like this one: https://math.stackexchange.com/questions/13404/mapping-irregular-quadrilateral-to-a-rectangle , but I didn't check the solutions.

OpenXml and Word: How to Calculate WrapPolygon Coordinates?

I am creating a Microsoft Word document using the OpenXml library. Most of what I need is already working correctly. However, I can't for the life of me find the following bit of information.
I'm displaying an image in an anchor, which causes text to wrap around the image. I used WrapSquare but this seems to affect the last line of the previous paragraph as shown in the image below. The image is anchored to the second paragraph but causes the last line of the first paragraph to also indent around the image.
Word Screenshot http://www.softcircuits.com/Client/Word.jpg
Experimenting within Word, I can make the text wrap how I want by changing the wrapping to WrapTight. However, this requires a WrapPolygon with several coordinates. And I can't find any way to determine the polygon coordinates so that they match the size of the image, which is in pixels.
The documentation doesn't even seem to indicate what units are used for these coordinates, let alone how to calculate them from pixels. I can only assume the calculation would involve a DPI value, but I have no idea how to determine what DPI will be used when the user eventually loads the document into Word.
I would also be satisfied if someone can explain why the issue described above is happening in the first place. I can shift the image down so that the previous paragraph is no longer affected. But why is this necessary? (The Distance from text setting for both Left and Top is 0".)
The WrapPolygon element has two possible child elements, LineTo and StartPoint, which each take an x and a y coordinate. According to 2.1.1331 Part 1 Section 20.4.2.9, lineTo (Wrapping Polygon Line End Position) and 2.1.1334 Part 1 Section 20.4.2.14, start (Wrapping Polygon Start), found in [MS-OI29500: Microsoft Office Implementation Information for ISO/IEC-29500 Standard Compliance]:
The standard states that the x and y attributes are represented in EMUs. Office interprets the x and y attributes in a fixed coordinate space of 21600x21600.
As far as converting pixels to EMUs (English Metric Units), take a look at this blog post for an example.
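For reference, the usual figure is 914,400 EMUs per inch, so a minimal sketch (assuming you know, or guess, the DPI; 96 is a common default) would be:

def pixels_to_emu(px, dpi=96):
    # 914,400 EMUs per inch
    return int(px * 914400 / dpi)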
I finally resolved this. Despite what the standard says, the WrapPolygon coordinates are not EMUs (English Metric Units). The coordinates are relative to the fixed coordinate space (21600 x 21600) mentioned in the quote provided by amurra.
More specifically, this means 0,0 is the top-left corner of the image and 21600,21600 is the bottom-right corner of the image, no matter what size the image is. Coordinates greater than 21600 extend outside the image.
According to this article, "The 21600 value is a legacy artifact from the drawing layer of early versions of the Microsoft Office."
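So mapping a pixel coordinate on the image to a WrapPolygon coordinate is just a proportion. A minimal sketch, assuming the image's pixel dimensions are known:

def to_wrap_polygon(x_px, y_px, img_w_px, img_h_px):
    # (0, 0) is the image's top-left corner and (21600, 21600) its bottom-right,
    # regardless of the image's actual size
    return round(x_px * 21600 / img_w_px), round(y_px * 21600 / img_h_px)

print(to_wrap_polygon(100, 50, 400, 200))  # -> (5400, 5400)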