Automatically recognized objects transferred into ROIs in ImageJ - plugins

I am facing a challenge in my field and could use some advice.
I have an image with tree rings.
To see the photo I want to work with, you can get it from my Dropbox: https://dl.dropboxusercontent.com/u/65873264/Sample.jpg
I would like to write a macro/task in which the program recognizes each ring and marks it as an ROI.
I tried to build this with some plugins: Template Matching, Feature Finder and Visual grap. But the rings can be extremely variable.
What I need is something like this: with the Analyze Particles function, the program recognizes all vessels (objects) in a thresholded image. The second step is the fun part: for each particle, it checks whether there is another particle within a range of 0.5 mm. If there is, it creates an ROI including both particles and then searches for the next particle within 0.5 mm of that...
There is a similar method here: http://imagej.1557.x6.nabble.com/combine-particles-in-ROI-manager-automatically-td3692844.html
But there the macro first calculates the differences between two consecutive particles, whereas I need to include all particles within a range of 0.5 mm.

The following ImageJ macro code makes use of the Maximum and Minimum filters in ImageJ to perform a morphological closing operation on the particles in your sample image, and it then uses the Particle Analyzer to create ROIs from those:
open("https://dl.dropboxusercontent.com/u/65873264/Sample.jpg");
run("Duplicate...", "title=[Temporary Copy]");
run("8-bit");
setAutoThreshold("Default");
run("Analyze Particles...", "size=100-Infinity show=Masks clear include in_situ");
run("Maximum...", "radius=70");
run("Minimum...", "radius=70");
run("Analyze Particles...", "size=100-Infinity clear add");
selectWindow("Sample.jpg");
roiManager("Show All with labels");
roiManager("Show All");


How to have a generator class in a GLSL shader with Amplify Shader Editor

I want to create a shader that can cover a surface with "circles" at many random positions.
The circles keep growing until the whole surface is covered by them.
Here is my first try with Amplify Shader Editor.
The problem is that I don't know how to make this shader create an array of "point makers" with random positions. I also want to control the circles from C#, for example:
point_maker = new point_maker[10];
point_maker[1].position = Vector2.one;
point_maker[1].scale = 1;
and so on...
Heads-up: that's probably not the way to do what you're looking for, because every pixel in your shader would need to loop over all your input points, while each of those pixels will be covered by at most one of them. Looping like that fights the parallel nature of shaders instead of embracing it. (The keyword for me here is 'random', as in 'random looking'.)
There are two distinct problems here: generating the circles, and masking them.
I would start by generating a grid out of your input space (most likely your UV coordinates, so I'll assume that from here on) by taking the fractional part of the coordinates scaled by some value: UVs (usually) go from 0 to 1, so if you want 100 circles you'd multiply the coordinates by 10. You now have a grid of 100 cells of UVs, in each of which you can do something similar to what you already have to generate a circle (tip: the dot product of a vector with itself gives the squared distance, which is much cheaper to compute than the distance itself).
You want some randomness, so you need to add an offset to the center of each circle. For that you need a random number that is unique per cell of the grid (there might be a node for this in ASE, I can't remember; otherwise make your own hash - there are plenty to be found online). Feed it the integer part of the scaled coordinates (what's left once you remove the frac() part), so that every pixel inside a cell gets the same random value. You also need to limit the offset by the radius of the circle so it doesn't touch the sides of its cell. You can overlay more than one layer of circles if you want more coverage as well.
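
To make this concrete, here is a CPU prototype of the grid + per-cell hash idea in Python/NumPy (a sketch only: the hash constants are the usual arbitrary shader ones, and in ASE you would build the same frac/floor math out of nodes):

import numpy as np

def hash22(cell):
    # Cheap per-cell pseudo-random 2D value in [0, 1); the constants are
    # arbitrary, and any shader-style hash or noise node works here.
    h = np.sin(cell @ np.array([[127.1, 269.5], [311.7, 183.3]])) * 43758.5453
    return h - np.floor(h)

def circle_mask(uv, cells=10, radius=0.3):
    # uv: (..., 2) coordinates in [0, 1).
    scaled = uv * cells
    cell_id = np.floor(scaled)        # integer part: which cell we are in
    local = scaled - cell_id          # frac(): position inside the cell
    # Random center per cell, pulled in from the edges by the radius so the
    # circle never touches the sides of its cell.
    center = 0.5 + (hash22(cell_id) - 0.5) * (1.0 - 2.0 * radius)
    d = local - center
    sq_dist = np.sum(d * d, axis=-1)  # dot(d, d): squared distance, no sqrt
    return sq_dist < radius * radius

# Evaluate on a 512x512 "screen" of UVs.
v, u = np.mgrid[0:512, 0:512] / 512.0
mask = circle_mask(np.stack([u, v], axis=-1))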
The second step is to figure out whether to display those circles at all. You could make the drawing conditional on the distance from the center of each circle to an input coordinate you provide to the shader, against some threshold. (It doesn't have to be an 'if' condition per se; it could be a clamp of the value towards the background color or something similar.)
I'm making a lot of assumptions about what you want to do here, and if you have stronger constraints on the point distribution you might be better off rendering quads to a render texture, for example, but that's a whole other topic :)

Region of Interest in nighttime vehicle detection

I am developing a project for detecting vehicles' headlights in night scenes. I am working on a demo in MATLAB. My problem is that I need to find a region of interest (ROI) to keep the computing requirements low. I have looked through many papers, and they just use a fixed ROI like this one: the upper part is ignored and the bottom part is analysed later.
However, if the camera is not stable, I think this approach is inappropriate. I want to find a more flexible one, which adapts in each frame. My experimental images are shown here:
If anyone has any ideas, please give me some suggestions.
I would turn the problem around: rather than saying that the headlights are below a certain line (i.e. the horizon), say that we are looking for headlights ABOVE a certain line.
Your images have a very strong reflection on the tarmac, and we can use that to our advantage. We know that the maximum amount of light in the image is somewhere around the reflection and the headlights. We therefore look for the row with the maximum light and use it as our floor, then look for headlights above this floor.
The idea is to look at the profile of the intensities on a row-by-row basis and find the row with the maximum value.
This will only work with dark images (i.e. at night) where the reflection of the headlights on the tarmac is large.
It will NOT work with images taken in daylight.
I have written this in Python and OpenCV but I'm sure you can translate it to a language of your choice.
import matplotlib.pylab as pl
import cv2
# Load the image
im = cv2.imread('headlights_at_night2.jpg')
# Convert to grey.
grey_image = cv2.cvtColor(im, cv2.COLOR_BGR2GRAY)
Smooth the image heavily to mask out any local peaks or valleys. We are trying to smooth the headlights and their reflection so that they produce one nice peak; ideally, the headlights and the reflection merge into one area:
grey_image = cv2.blur(grey_image, (15,15))
Sum the intensities row by row:
intensity_profile = []
for r in range(0, grey_image.shape[0]):
    intensity_profile.append(pl.sum(grey_image[r, :]))
Smooth the profile and convert it to a NumPy array for easy handling of the data:
window = 10
weights = pl.repeat(1.0, window)/window
profile = pl.convolve(pl.asarray(intensity_profile), weights, 'same')
Find the maximum value of the profile; it represents the y coordinate of the headlights and the reflection area. The heat map on the left shows you the distribution; the graph on the right shows the total intensity per row.
We can clearly see that the sum of the intensities has a peak. The y coordinate is 371, indicated by a red dot in the heat map and a red dashed line in the graph.
max_value = profile.max()
max_value_location = pl.where(profile==max_value)[0]
horizon = max_value_location
The blue curve in the right-most figure represents the variable profile.
The row where we find the maximum value is our floor. We then know that the headlights are above that line. We also know that most of the upper part of the image will be sky and therefore dark.
I display the result below.
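
One way to draw that floor line, continuing from the variables above (a minimal matplotlib sketch; the original figures are not reproduced here):

# Show the image with the detected floor as a dashed red line.
pl.figure()
pl.imshow(grey_image, cmap='gray')
pl.axhline(y=float(horizon[0]), color='r', linestyle='--')
pl.show()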
I know that the lines in both images are at almost the same coordinates, but I think that is just a coincidence.
You may try downsampling the image.

MATLAB: Transparent Object Detection

I'm trying to detect a transparent object (glass bottle) in an image.
The image is taken from a Kinect, so there are RGB and depth images available.
I read in the literature that the boundary of a transparent object has 'unknown depth values' and that I can use this as a boundary condition for detecting the object.
The problem is that I cannot find that information in my depth file, i.e. the depth image only returns zero or other values, but never 'unknown'.
I assume the Kinect represents 'unknown depth values' as zeros, but this raises another problem: there are a lot of zeros in the image (i.e. at boundaries etc.), so how do I know which zeros belong to the object?
Thanks a lot!
You could try to detect the body of the transparent object rather than the border. The body should return values of whatever is behind it, but those values will be noisier. Take a time-running sample and calculate a running standard deviation. Look for the region of the image that has larger errors than elsewhere. This is simpler if you have access to the raw data (libfreenect). If the data is converted to distance, then the error is a function of distance, so you need to detect regions that are noisier than other regions at that distance, not just regions that are noisier than elsewhere.
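
A minimal NumPy sketch of that idea, assuming you have already captured a stack of depth frames into one array (the bin count and the factor k are arbitrary starting points to tune):

import numpy as np

def noisy_regions(depth_stack, bins=50, k=2.0):
    # depth_stack: (frames, height, width) depth samples over a time window.
    mean = depth_stack.mean(axis=0)
    std = depth_stack.std(axis=0)   # temporal (running) std per pixel

    # Noise grows with distance, so compare each pixel's std against the
    # typical std of pixels at a similar depth, not one global cutoff.
    edges = np.linspace(mean.min(), mean.max(), bins + 1)
    which = np.clip(np.digitize(mean, edges) - 1, 0, bins - 1)
    flags = np.zeros(mean.shape, dtype=bool)
    for b in range(bins):
        sel = which == b
        if sel.any():
            flags[sel] = std[sel] > k * np.median(std[sel])
    return flags  # True where the transparent body is likely to be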
I'd recommend you take a look at the following publication:
J. Darby, B. Li, R. Cunningham and N. Costen. Object localisation via action recognition. ICPR, 2012.
They were able to detect objects such as water bottles and glasses, all undertaken in MATLAB.

MATLAB color detection

I'm trying to consistently detect a certain color between images of the same scene. The idea is to recognize a set of objects based on a color profile. For instance, given a scene with a green ball in it, if I select that green as part of my color palette, I would like a function that returns a matrix indicating where it detects the ball.
Can anyone recommend some MATLAB functions/plugins/starting points for this project? Ideally, the color-recognition function would take an array of color values and match them within a certain threshold.
Kinda like this:
http://www.mathworks.com/matlabcentral/fileexchange/18440-color-detection-using-hsv-color-space-training-and-testing
except one that works (this one didn't)
Update:
Here's why I chose not to use the above toolkit.
I start by selecting some colors of interest in the picture
and then ask the function to recognize the road in later images...
And absolutely nothing useful is triggered. So, apart from the few bugs I came across in the code on download and fixed, this was the kicker. I didn't try to fix the body of the code that recognizes the colors because, well, I don't know how, which is why I came here.
Let me start off by saying that road detection with color profiles is a pathological problem. But if the color of the road is consistent, and the lighting doesn't change the color of the object you are trying to recognize, then you might have a shot. (This will be extremely difficult if the images are taken outside, with different cameras, with shadows, or in any sort of real-world environment.)
Here are a few things that might help.
Try smoothing the image beforehand; the bad results on the first images are probably due to small pixel variations in the road. If you blur the image, or use some sort of watershed or local averaging, you should get regions with more consistent color.
You might also consider using the LAB color space instead of HSV or RGB.
Using edge detection (see MATLAB's Canny edge detector) might get you some boundary information. If you are looking for a smooth object, there will not be many edges inside it.
Edit: I tried to follow this advice in the most simplistic way. Here is the resulting code and a few samples.
im = rgb2gray(im);  % most basic color capturing; using another color space is better practice
% imshow(im)
RoadMask = roipoly(im);      % create mask
RoadMask = uint8(RoadMask);  % cast so you can multiply elementwise
im = im .* RoadMask;         % apply mask
[x, y] = size(im);
for i = 1:x
    for j = 1:y
        if (im(i,j) < 160 || im(i,j) > 180)  % select the values based on your target's range
            im(i,j) = 0;  % replace everything outside the range with 0
        end  % to count pixels instead, set values within range to 1 and sum at the end
    end
end
First I converted from RGB to grayscale, then selected a region that generally matched the road's grayness.
Notice that parts of the road are not captured and that the edges are blocky.
This implementation was quick and dirty, but I wanted to put it up before I forgot. I'll try to update with code that implements smoothing, sampling, and the LAB color space.
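
In the meantime, here is a minimal sketch of the smoothing + LAB idea, written in Python/OpenCV for brevity (imgaussfilt and rgb2lab are the MATLAB counterparts); the filename, the reference pixel and the distance threshold are placeholders to tune:

import cv2
import numpy as np

img = cv2.imread("road.jpg")                    # placeholder filename
img = cv2.GaussianBlur(img, (15, 15), 0)        # smooth out small pixel variations first
lab = cv2.cvtColor(img, cv2.COLOR_BGR2LAB).astype(np.float32)

reference = lab[500, 300]                       # a pixel you know belongs to the road
dist = np.linalg.norm(lab - reference, axis=2)  # Euclidean distance in LAB space
road_mask = dist < 25                           # threshold is a guess; tune it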

Calculating corresponding pixels

I have a computer vision setup with two cameras. One of these cameras is a time-of-flight camera: it gives me the depth of the scene at every pixel. The other is a standard camera giving me a colour image of the scene.
We would like to use the depth information to remove some areas from the colour image. We plan on object, person and hand tracking in the colour image and want to remove far-away background pixels with the help of the time-of-flight camera. It is not yet certain whether the cameras can be aligned in a parallel setup.
We could use OpenCV or MATLAB for the calculations.
I have read a lot about rectification, epipolar geometry, etc., but I still have problems seeing the steps I have to take to calculate the correspondence for every pixel.
What approach would you use, and which functions can be used? Into which steps would you divide the problem? Is there a tutorial or sample code available somewhere?
Update: We plan on doing an automatic calibration using known markers placed in the scene.
If you want robust correspondences, you should consider SIFT. There are several implementations for MATLAB - I use Vedaldi and Fulkerson's VLFeat library.
If you really need fast performance (and I think you don't), you should think about using OpenCV's SURF detector.
If you have any other questions, do ask. This other answer of mine might be useful.
PS: By correspondences, I'm assuming you want to find the coordinates of the projections of the same 3D point in both your images - i.e. the coordinates (i,j) of a pixel u_A in image A and a pixel u_B in image B that are projections of the same point in 3D.
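
For reference, here is a minimal sketch of SIFT correspondences with a ratio test in Python/OpenCV (the filenames are placeholders; VLFeat's vl_sift in MATLAB follows the same detect-describe-match pattern):

import cv2

img_a = cv2.imread("color_cam.png", cv2.IMREAD_GRAYSCALE)
img_b = cv2.imread("tof_intensity.png", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp_a, des_a = sift.detectAndCompute(img_a, None)
kp_b, des_b = sift.detectAndCompute(img_b, None)

# Lowe's ratio test: keep a match only if it is clearly better than the
# second-best candidate, which discards most ambiguous correspondences.
matcher = cv2.BFMatcher(cv2.NORM_L2)
good = [m for m, n in matcher.knnMatch(des_a, des_b, k=2)
        if m.distance < 0.75 * n.distance]

# Pixel coordinates of each corresponding pair u_A, u_B.
points = [(kp_a[m.queryIdx].pt, kp_b[m.trainIdx].pt) for m in good]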