I have several images that I would like to clean of artifacts. They show different animals, but they look as if they had been folded (see the attached image). The folds are straight lines and they run through the wings as well; they are hard to see, but they are there. I would like to remove the folds while preserving the information in the picture (the structure and color of the wings).
I am using MATLAB right now and I have tried several methods, but nothing seems to work.
Initially I checked whether I could see anything with an FFT, but I do not see any structure in the spectrum that I could remove. I also tried several edge detection methods (Sobel, etc.), but the problem is that the edge detector always finds the edges of the wings (because they are stronger) rather than the straight lines. I was wondering if anyone has any ideas about how to proceed with this problem? I am not attaching any code because none of the methods I have tried (and described) works.
Thank you in advance for the help.
I'll leave this bit here for anyone who knows how to erase those lines without affecting the quality of the image:
a = imread('https://i.stack.imgur.com/WpFAA.jpg');
b = abs(diff(double(a),1,2));               % horizontal differences; cast to double so negative steps are not clipped
b = max(b,[],3);                            % strongest change across the three color channels
c = imerode(b,strel('rectangle',[200,1]));  % keep only changes that persist over a long vertical run
I think you should use a 2-dimensional Fast Fourier Transform.
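For what it's worth, here is a minimal MATLAB sketch of that idea, using the image URL from the question; if the folds are roughly periodic, they should show up as bright off-center spots in the shifted spectrum, which could then be suppressed with a notch mask before transforming back:

im = im2double(rgb2gray(imread('https://i.stack.imgur.com/WpFAA.jpg')));
F = fftshift(fft2(im));                       % 2-D spectrum with the zero frequency in the center
figure, imagesc(log(1 + abs(F))), axis image, colormap gray
% bright spots away from the center correspond to periodic structure (the folds);
% zero them out (and their mirrored counterparts) and reconstruct with ifft2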
It might be easier to first try GIMP / Photoshop and see whether a filter can resolve it.
I'm guessing the CCD sensor got broken (it looks too good to be an old-scanner problem). Maybe there was electrical interference while the camera sensor was being read out; such signals in theory have a repeating nature.
I don't think this was caused by a wrong color-depth/color-space conversion.
If you like to code, then you might also write a custom pixel-based filter in which you take x vertical pixels (say 20 or so) and compare them to the next vertical column of 20 pixels. Compare in HSL (L = lightness), not RGB.
From all pixels, calculate the brightness changes this way.
Then, per pixel, check that H (hue) is within the range of the nearby pixels and take the sloped average of their brightness (e.g. take 30 horizontal pixels, calculate the average brightness of the first 10 and the last 10 pixels, and apply that brightness to the center pixel 15). // 30, 15, 10: experiment to find what works well
Since you have whole strokes that appear brighter/darker, such a filter would smooth that effect out. The difficulty is preserving the other patterns (the wings are less distorted). Knowing which color space the sensor used might allow for a better decision than HSL, maybe HSV or so.
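A rough MATLAB sketch of that filter as I read it (it uses HSV instead of HSL because rgb2hsv is built in, and the window size and hue-consistency threshold are only guesses to be tuned):

rgb = im2double(imread('https://i.stack.imgur.com/WpFAA.jpg'));
hsv = rgb2hsv(rgb);                    % H in channel 1, V (brightness) in channel 3
V = hsv(:,:,3);
Vs = V;
half = 15;                             % half of the ~30-pixel horizontal window
for r = 1:size(V,1)
    for c = 1+half : size(V,2)-half
        left  = mean(V(r, c-half : c-half+9));   % average brightness of the first 10 pixels
        right = mean(V(r, c+half-9 : c+half));   % average brightness of the last 10 pixels
        if std(hsv(r, c-half:c+half, 1)) < 0.05  % only where the hue is locally consistent
            Vs(r,c) = (left + right) / 2;        % sloped average applied to the center pixel
        end
    end
end
hsv(:,:,3) = Vs;
imshow(hsv2rgb(hsv))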
Related
I have seen many tutorials where people blend two images placed on top of each other very nicely in Photoshop. For example, here are two images placed on top of each other:
Then, in Photoshop, after some work, the edges (around the smaller image) are erased and the two images are nicely mixed.
For example, this is a possible end result:
As can be seen, there is no edge and the two images are blended very nicely, without blurring.
Can someone point me to an article or post that shows the math behind it? If there is MATLAB code that can do it, that would be even better. Or at least, can someone tell me what the correct term for this is, so I can search for it on Google?
Straight alpha blending alone is not sufficient, as it will perform a uniform mixing of the two images.
To achieve nice-looking results, you will need to define an alpha map, i.e. an image of the same size where you adjust the degree of transparency depending on the image that should dominate.
To obtain the mask, you can draw it by hand, for example as a filled outline, as a path or a polygon. Then you have to strongly blur this mask to get a smooth blend.
It looks very difficult (if not impossible) to automate this, as no software can guess what you want to enhance.
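A minimal MATLAB sketch of that recipe (the file names, the hand-drawn polygon and the blur width are placeholders, and the two images are assumed to be the same size):

A = im2double(imread('background.jpg'));
B = im2double(imread('foreground.jpg'));
mask  = roipoly(B);                       % draw the region where B should dominate
alpha = imgaussfilt(double(mask), 30);    % strong blur of the mask -> smooth transition
alpha = repmat(alpha, [1 1 3]);           % one weight per color channel
C = alpha .* B + (1 - alpha) .* A;        % alpha-weighted sum
imshow(C)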
The term you are looking for is alpha blending.
https://en.wikipedia.org/wiki/Alpha_compositing#Alpha_blending
The maths behind it boils down to some alpha weighted sums.
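For two images A and B and a weight map alpha, the blended pixel is C(x, y) = alpha(x, y) * A(x, y) + (1 - alpha(x, y)) * B(x, y), applied per colour channel.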
Matlab provides the function imfuse to achieve this:
https://de.mathworks.com/help/images/ref/imfuse.html
Edit: (as it still seems to be unclear)
Let's say you have two images A and B which you want to blend.
You put one image over the other, so for each coordinate you have two RGB tuples.
Now you need to define the weight of both images: will you only see the colour of image A or B, or in which ratio will you mix them?
This is done by alpha values.
So all you need is a 2D function that defines the mixing ratio for each pixel.
Usually you have values between 0 and 1, where 0 shows one image, 1 shows the other image, 0.5 mixes them both equally, and so on...
Just read the article I have linked. It gives you a clear mathematical definition. I can't provide more detail than that.
If you have problems understanding that I urge you to read a book on image processing fundamentals.
I am developing a project for detecting vehicle headlights in night scenes. I am working on a demo in MATLAB. My problem is that I need to find a region of interest (ROI) to keep the computational cost low. I have looked through many papers and they just use a fixed ROI like this one: the upper part is ignored and the bottom part is analysed later.
However, if the camera is not stable, I think this approach is inappropriate. I want to find a more flexible one, which adapts in each frame. My experimental images are shown here:
If anyone has any ideas, please give me some suggestions.
I would turn the problem around and say that we are looking for headlights ABOVE a certain line, rather than saying that the headlights are below a certain line, i.e. the horizon.
Your images have a very strong reflection on the tarmac, and we can use that to our advantage. We know that the maximum amount of light in the image is somewhere around the reflection and the headlights. We therefore look for the row with the most light and use that as our floor, then look for headlights above this floor.
The idea here is that we look at the profile of the intensities on a row-by-row basis and find the row with the maximum value.
This will only work with dark images (i.e. at night) where the reflection of the headlights on the tarmac is large.
It will NOT work with images taken in daylight.
I have written this in Python and OpenCV but I'm sure you can translate it to a language of your choice.
import matplotlib.pylab as pl
import cv2
# Load the image
im = cv2.imread('headlights_at_night2.jpg')
# Convert to grey.
grey_image = cv2.cvtColor(im, cv2.COLOR_BGR2GRAY)
Smooth the image heavily to mask out any local peaks or valleys. We are trying to smooth the headlights and the reflection so that there will be one nice peak; ideally, the headlights and the reflection merge into one area.
grey_image = cv2.blur(grey_image, (15,15))
Sum the intensities row-by-row
intensity_profile = []
for r in range(0, grey_image.shape[0]):
    intensity_profile.append(pl.sum(grey_image[r, :]))
Smooth the profile and convert it to a numpy array for easy handling of the data
window = 10
weights = pl.repeat(1.0, window)/window
profile = pl.convolve(pl.asarray(intensity_profile), weights, 'same')
Find the maximum value of the profile. That represents the y coordinate of the headlights and the reflection area. The heat map on the left shows you the distribution. The graph on the right shows you the total intensity value per row.
We can clearly see that the sum of the intensities has a peak. The y-coordinate is 371, indicated by a red dot in the heat map and a red dashed line in the graph.
max_value = profile.max()
max_value_location = pl.where(profile==max_value)[0]
horizon = max_value_location
The blue curve in the right-most figure represents the variable profile
The row where we find the maximum value is our floor. We then know that the headlights are above that line. We also know that most of the upper part of the image will be that of the sky and therefore dark.
I display the result below.
I know that the line in both images is at almost the same coordinates, but I think that is just a coincidence.
You may try downsampling the image.
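For example, in MATLAB (the scale factor is only a guess; tune it to your frames):

small = imresize(frame, 0.25);   % work on a quarter-resolution copy to reduce the computation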
In the image I need to find a "table": a simple rectangle.
The problem is edge recognition, because the photos will potentially be dark.
I tried edge detection (Sobel, Canny, LoG, ...), followed by a Hough transform and line finding, but these algorithms are not enough for this task.
Some things that can help me:
- it is a rectangle!, only in perspective view (something like fitting a perspective rectangle?)
- the object MUST cover at least, for example, 90% of the photo (so I know I need to look near the photo edges)
- the rectangle has almost the same color throughout (for example a wooden dining table)
- I need to find at least "only" the 4 corners (but yes, it would be better to find the edges of the table)
I know how, for example, the Sobel, Canny or LoG algorithms work, and Hough as well. And naturally those algorithms fail on dark or low-contrast images. But is there some other method, for example one based on "fitting"?
Images showing the kind of photo I can get (you see it will be dark) and what I need to find:
and this is a really "nice" picture (without noise). I tested it on noisier pictures and the result was... simply horrible.
Result for this picture with the current algorithm, LoG (with the other ones it looks the same):
I know image and edge recognition is not a simple challenge, but are there some newer, better methods, or something like that, that I can try to use?
In one of the posts here I found the LSD algorithm. It seems very nicely described, and it seems to recognize really nice straight lines as well. Do you think it would be better to use it instead of Canny or Sobel detection?
Another solution would be corner detection; on my sample images it works better, but it recognizes too many points and there would be a problem with time... I would need to connect all the points and "find" the table.
Another solution:
I thought about point-to-point mapping: I would have some "virtual" table and try to map the table above onto that "virtual" table (a simple 2D square drawn in a painting program :]). But I think point-to-point mapping would give me big errors, or it would not work.
Does someone have any advice on which algorithm to use?
I tried recognizing edges in FIJI and then loaded the edge-detected image into MATLAB, but with Hough it works badly as well :/
What do you think would be best to use? In short, I need to find some algorithm that works on low-contrast, dark images.
I'd try a modified snakes algorithm:
You parameterize your rectangle with 4 points and initialize them somewhere near the image corners. Then you move the points towards image features using some optimization algorithm (e.g. gradient descent, simulated annealing, etc.).
The image features could be a combination of edge features (e.g. Sobel directly, or Sobel of a Gaussian-filtered image) evaluated on the lines between those four points, and corner features evaluated at those 4 points.
Additionally, you can penalize unlikely rectangles (for example depending on the angles between the points or on the distance to the image boundary).
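A rough MATLAB sketch of that idea, using only the edge term and fminsearch as the optimizer (the file name, the smoothing width, the number of samples per side and the corner initialization are all guesses):

im = im2double(rgb2gray(imread('table.jpg')));
edgeMap = imgradient(imgaussfilt(im, 3));          % Sobel magnitude of a smoothed image
edgeMap = edgeMap / max(edgeMap(:));
[h, w] = size(im);
m = 0.05;                                          % start the 4 corners just inside the image corners
p0 = [m*w m*h; (1-m)*w m*h; (1-m)*w (1-m)*h; m*w (1-m)*h];
cost = @(p) quadCost(reshape(p, 4, 2), edgeMap);
pOpt = reshape(fminsearch(cost, p0(:), optimset('MaxFunEvals', 2e4, 'MaxIter', 2e4)), 4, 2);
imshow(im), hold on
plot(pOpt([1:4 1], 1), pOpt([1:4 1], 2), 'r-', 'LineWidth', 2)

function c = quadCost(p, edgeMap)
% negative mean edge strength sampled along the four sides of the quadrilateral
    s = 0;
    for k = 1:4
        a = p(k, :); b = p(mod(k, 4) + 1, :);
        xs = linspace(a(1), b(1), 200);
        ys = linspace(a(2), b(2), 200);
        s = s + mean(interp2(edgeMap, xs, ys, 'linear', 0));
    end
    c = -s;                                        % fminsearch minimizes, so negate
end

The penalties for unlikely rectangles (angles, distance to the border) could be added to quadCost in the same way.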
I'm trying to consistently detect a certain color between images of the same scene. The idea is to recognize a set of objects based on a color profile. So, for instance, if I'm given a scene with a green ball in it and I select that green as part of my color palette, I would like a function which returns a matrix reflecting that it detects the ball.
Can anyone recommend some MATLAB functions/plugins/starting points for this project? Ideally, the function for color recognition would take an array of color values and match them within a certain threshold.
Kinda like this:
http://www.mathworks.com/matlabcentral/fileexchange/18440-color-detection-using-hsv-color-space-training-and-testing
except that it works (this one didn't).
Update:
Here's why I chose not to use the above toolkit.
I start by selecting some colors of interest in the picture
and then ask the function to recognize the road in later images...
And absolutely nothing useful is triggered. So yeah, apart from the few bugs that I came across in the code on download and fixed, this was kind of the kicker. I didn't try to fix the body of the code that recognizes the colors because... well, I don't know how, which is why I came here.
So, let me just start off by saying that road detection with color profiles is a pathological problem. But if the color of the roads is consistent, and the lighting doesn't change the color of the object you are trying to recognize, then you might have a shot. (This will be extremely difficult if the pictures are taken outside, or with different cameras, or if there are shadows, or in any sort of real-world environment.)
Here are a few things that might help.
Try smoothing the image beforehand; the reason you get the bad results in the first images is probably small pixel variations in the road. If you can blur them, or use some sort of watershed or local averaging, you might get regions with more consistent color.
You might also consider using the LAB color space instead of HSV or RGB.
Using edge detection (see matlab's canny edge detector) might be able to get you some boundary information. If you are looking for a smooth object, there will not be very many edges in it.
Edit: I tried to adhere to this advice in the most simplistic way. Here is the resulting code and a few samples.
im = rgb2gray(im);          % for most basic color capturing; using another color space is better practice
%imshow(im)
RoadMask = roipoly(im);      % create mask
RoadMask = uint8(RoadMask);  % cast so you can multiply element-wise
im = im .* RoadMask;         % apply mask
[x, y] = size(im);
for i = 1:x
    for j = 1:y
        if (im(i,j) < 160 || im(i,j) > 180)  % select your values based on your target's range
            im(i,j) = 0;                     % replace everything outside of the range with 0
            % if you'd like to count pixels instead, turn all values within range to 1 and sum at the end
        end
    end
end
First I converted from RGB to grayscale,
then selected a region that generally matched the road's grayness.
Notice that parts of the road are not captured, and that the edges are blocky.
This implementation was quick and dirty, but I wanted to put it up before I forgot. I'll try to update it with code that implements smoothing, sampling, and the LAB color space.
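In the meantime, here is a rough sketch of what the LAB-based version could look like (the file name, the smoothing width and the distance threshold are only guesses, and the reference color is sampled from a hand-picked road patch):

rgb = im2double(imread('road.jpg'));
lab = rgb2lab(imgaussfilt(rgb, 2));       % smooth first, then convert to LAB
L = lab(:,:,1); a = lab(:,:,2); b = lab(:,:,3);
mask = roipoly(rgb);                      % hand-pick a patch of road
ref = [mean(L(mask)) mean(a(mask)) mean(b(mask))];                 % reference road color
dist = sqrt((L - ref(1)).^2 + (a - ref(2)).^2 + (b - ref(3)).^2);  % per-pixel color distance
roadDetected = dist < 15;                 % threshold is a guess; tune per scene
imshow(roadDetected)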
I'm trying to write code that helps me in my biology work.
The concept of the code is to analyze a video file of contracting cells in a tissue
Example 1
Example 2: youtube.com/watch?v=uG_WOdGw6Rk
And plot out the following:
Count of beats per min.
Strength of beat
Regularity of beating
And so I wrote MATLAB code that loops through a video, compares each frame with the one that follows it, checks whether there were any changes between frames, and plots these changes as a curve.
Example of my code's results
Core of the current code I wrote:
for i = 2:totalframes
    compared = read(vidObj, i);
    ref = rgb2gray(compared);                      %% convert to gray
    level = graythresh(ref);                       %% calculate threshold
    compared = im2bw(compared, level);             %% convert to binary
    differ = sum(sum(imabsdiff(vid, compared)));   %% sum of the difference between the 2 frames
    if (differ ~= 0) && (any(amp == differ) == 0)  %% 0 = no change happened, so I don't want to record that
        amp(end+1) = differ;        % save the difference to the amp array
        time(end+1) = i/framerate;  % save to the time array in seconds; separate array so I can filter both later
        vid = compared;             %% save the current frame as the reference to compare the next frame against
    end
end
figure, plot(amp, time);
=====================
So that's my code, but is there a way I can improve it so I can get better results?
Because I get the feeling that imabsdiff is not exactly what I should use: my video contains a lot of noise, that affects my results a lot, and I think all my amp data is actually fake!
Also, I can really only extract the beating rate out of this, by counting peaks, but how can I improve my code to be able to get all the required data out of it?
Thanks, I really appreciate your help. This is a small portion of the code; if you need more info, please let me know.
You say you are trying to write "simple code", but this is not really a simple problem. If you want to measure the motion accurately, you should use an optical flow algorithm or look at the deformation field from a registration algorithm.
EDIT: As Matt says, and as we see from your curve, your method is suitable for extracting the number of beats and the regularity. To accurately find the strength of the beats, however, you need to calculate the movement of the cells (more movement = stronger beat). Unfortunately, this is not straightforward, and that is why I gave you links to two algorithms that can calculate the movement for you.
A few fairly simple things to try that might help:
I would look in detail at what your thresholding is doing, and whether that's really what you want to do. I don't know what graythresh does exactly, but it's possible it's lumping different features that you would want to distinguish into the same pixel values. Have you tried plotting the differences between images without thresholding? Or you could threshold into multiple classes, rather than just black and white.
If noise is the main problem, you could try smoothing the images before taking the difference, so that differences due to noise are evened out but differences in large features, caused by motion, remain (a small sketch of this is given after these suggestions).
You could try edge-detecting your images before taking the difference.
As a previous answerer mentioned, you could also look into motion-tracking and registration algorithms, which would estimate the actual motion between each image, rather than just telling you whether the images are different or not. I think this is a decent summary on Wikipedia: http://en.wikipedia.org/wiki/Video_tracking. But they can be rather complicated.
I think if all you need is to find the time and period of contractions, though, then you wouldn't necessarily need to do a detailed motion tracking or deformable registration between images. All you need to know is when they change significantly. (The "strength" of a contraction is another matter, to define that rigorously you probably would need to know the actual motion going on.)
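For what it's worth, a minimal sketch of the "smooth, then difference, without thresholding" variant, reusing the vidObj, totalframes and framerate variables from the question (the filter width is a guess):

prev = imgaussfilt(im2double(rgb2gray(read(vidObj, 1))), 5);
amp = zeros(1, totalframes - 1);
for i = 2:totalframes
    cur = imgaussfilt(im2double(rgb2gray(read(vidObj, i))), 5);  % smoothing suppresses the noise
    amp(i-1) = sum(sum(imabsdiff(prev, cur)));                   % total change between consecutive frames
    prev = cur;
end
time = (2:totalframes) / framerate;
figure, plot(time, amp), xlabel('time (s)'), ylabel('frame-to-frame change')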
What are the structures we see in the video? For example, what is the big dark object in the lower part of the image? This object would be relatively easy to track, but would data from this object be relevant to cell contraction?
Is this image from a light microscope? At what magnification? What is the scale?
From the video it looks like there are several motions and regions of motion. So should you focus on a smaller or larger area to get your measurements? Per-cell contraction or region contraction? From experience I know that changing what you do at the microscope might be much better than complex image processing ;)
I had success with Gunn and Nixon's Dual Snake for a similar problem:
http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.64.6831
I placed the first approximation in the first frame by hand and used the segmentation result as the starting curve for the next frame, and so on. My implementation of this is from 2000 and I only have it on paper, but if you find Gunn and Nixon's paper interesting I can probably find my code and scan it.
@Matt suggested smoothing and edge detection to improve your results. This is good advice. You can combine smoothing, thresholding and edge detection in one function call, the Canny edge detector. Then you can dilate the edges to get greater overlap between frames; little overlap probably means a big movement between frames. You can use this the same way as before to find the beat. You can then make a second pass and add up all the dilated edge images belonging to one beat. This should give you an idea of the area traced out by the cells as they move through a contraction. Maybe this can be used as a useful measure of the contraction of a large cluster of cells.
I don't have access to MATLAB and the Image Processing Toolbox right now, so I can't give you tested code. Here are some hints: http://www.mathworks.se/help/toolbox/images/ref/edge.html , http://www.mathworks.se/help/toolbox/images/ref/imdilate.html and http://www.mathworks.se/help/toolbox/images/ref/imadd.html.
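In the same spirit, an untested sketch of that pipeline (the Canny thresholds are left at their defaults, the dilation radius is a guess, and it reuses vidObj, totalframes and framerate from the question):

se = strel('disk', 3);                                    % dilation radius is a guess
prevE = imdilate(edge(rgb2gray(read(vidObj, 1)), 'canny'), se);
overlap = zeros(1, totalframes - 1);
trace = zeros(size(prevE));                               % accumulated edge area
for i = 2:totalframes
    E = imdilate(edge(rgb2gray(read(vidObj, i)), 'canny'), se);
    overlap(i-1) = sum(sum(E & prevE));                   % little overlap suggests a big movement
    trace = imadd(trace, double(E));                      % sum the dilated edges, e.g. over one beat
    prevE = E;
end
figure, plot((2:totalframes)/framerate, overlap), xlabel('time (s)'), ylabel('edge overlap')
figure, imagesc(trace), axis image, title('area traced out by the moving edges')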