How to enlarge ROIs in an ImageJ macro

I would like to enlarge several ROIs with the following loop:
counts = roiManager("count");
for (i = 0; i < counts; i++) {
    roiManager("Select", i);
    run("Enlarge...", "enlarge=10");
}
However, I can't figure out what's wrong with this macro.

Your code enlarges every ROI by 10 pixels, but does not store the new ROI in the ROI manager. You are missing the roiManager("Update"); command, which you get when running the macro recorder while clicking the Update button in the ROI Manager window.
counts = roiManager("count");
for (i = 0; i < counts; i++) {
    roiManager("Select", i);
    run("Enlarge...", "enlarge=10");
    roiManager("Update");
}

Related

Blob position comparison across several video frames

The goal is to detect whether one or more objects are stationary in a ROI for a period of time (application: detecting vehicles blocking the zebra crossing). That means observing each blob with respect to time t.
Input = video file.
So, let's say the pedestrian crossing lane is the ROI. Background subtraction happens inside the ROI only, then each blob (vehicle) is observed separately to see whether it has been motionless there for time t.
What I'm thinking is getting the position of a blob at frame 1 and at frame n (the time threshold) and checking whether the position is the same. This must be applied to each blob, assuming there are multiple blobs, so a loop is involved to process the blobs one by one: get a blob's position at frame 1 and frame n, compare the two (if they are the same, the blob has been motionless for time t and is therefore "blocking"), then move on to the next blob.
My logic, written in Java:
//assuming "blobs" is an arraylist containing all the blobs in the image
int initialPosition = 0, finalPosition = 0;
static int violatorCount=0;
for(int i=0; i<blobs.size(); i++){ //iterate to each blob to process them separately
initialPosition = blobs.get(i).getPosition();
for(int j=0; j<=timeThreshold; j++){
if(blobs.get(i) == null){ //if blob is no longer existing on frame j
break;
}
finalPosition = blobs.get(i).getPosition();
}
if(initialPosition == finalPosition){
violatorCount++;
}
//output count on top-right part of window
}
Can you guys share the logic for implementing this goal/idea in either Matlab or OpenCV?
Optical flow is an option, thanks to PSchn. Any other options I can consider?
Sounds like optical flow. You could use the OpenCV implementation. Pass your points to cv::calcOpticalFlowPyrLK along with the next image (see here). Then you could check the distance between the two points and decide what to do.
I don't know if it works, just an idea.
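A minimal, untested C++ sketch of that idea; the variable names (prevGray, currGray, points, violatorCount) and the 1-pixel motion threshold are assumptions to adapt, not tested values:
// needs <opencv2/video/tracking.hpp> and <cmath>
// prevGray and currGray are consecutive grayscale frames;
// points holds the blob centroids detected in prevGray.
std::vector<cv::Point2f> nextPoints;
std::vector<uchar> status; // per point: 1 if the flow was found
std::vector<float> err;
cv::calcOpticalFlowPyrLK(prevGray, currGray, points, nextPoints, status, err);
for (size_t i = 0; i < points.size(); i++) {
    if (!status[i]) continue; // tracking failed for this blob, skip it
    float dx = nextPoints[i].x - points[i].x;
    float dy = nextPoints[i].y - points[i].y;
    // displacement below a small threshold means the blob is stationary
    if (std::sqrt(dx * dx + dy * dy) < 1.0f)
        violatorCount++;
}
Accumulating this check over the frames spanning time t would give the "motionless for time t" test from the question.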

JavaFX8 Timeline to scroll Grid of Circle Objects?

Using JavaFX8 and JDK 1.8.0_74 in IntelliJ, I created a basic pixel editor app. I have two windows (stages): one is a 32x128 matrix of Circle Objects placed in a Grid Pane, and the other is a Tools window; I have one Controller.java. Using the mouse, the Tools window provides tools to draw lines, rectangles, etc. and a menu for Setup, Files and Playlists. The Files menu presents the user with Open, SaveAs and Delete. To view a saved pixel art file, the user clicks Open and, via the FileChooser, the selected file is opened and each Circle's color property is displayed. The saved pixel art file can be sent via Wi-Fi to an RGB LED matrix that's also 32x128.
To view pics and video go to: https://virtualartsite.wordpress.com/
I can scroll a displayed pixel art file left, right, up or down using Timeline. However, I would also like to wrap the pixel image, but have failed to eliminate small anomalies that appear at the beginning of the wrap, while the remaining 95% of the wrap is correct.
The critical code for class WrapLeft is as follows:
public static void runAnimation() {
    timeline = new Timeline(
        new KeyFrame(Duration.millis(200), event -> {
            wrapFileLeft(pixelArray);
        }));
    timeline.setCycleCount(100);
    timeline.play();
}
public static void wrapFileLeft(Circle[][] pixelArray) {
    // save column 0 in pixelArrayTmp so it can wrap to the end, pixelArray[r][col-1]
    Circle[] pixelArrayTmp = new Circle[row];
    for (int r = 0; r < row; r++) {
        pixelArrayTmp[r] = pixelArray[r][0];
    }
    // move all the pixelArray columns one column to the left
    for (int c = 0; c < col-1; c++) {
        for (int r = 0; r < row; r++) {
            Color color = (Color) pixelArray[r][c+1].getFill();
            pixelArray[r][c].setFill(color);
        }
    }
    // move the saved column into the new, blank, end column pixelArray[r][col-1]
    for (int r = 0; r < row; r++) {
        Color color = (Color) pixelArrayTmp[r].getFill();
        pixelArray[r][col-1].setFill(color);
    }
}
The logic is to temporarily save column 0, shift all the remaining columns to the left one position, and replace column 127 with the saved column 0. This is all done in one cycle of the Timeline. The anomalies occur in the first four shifts left: Circle Objects with colors other than black get changed to an adjacent color. After four shifts, all remaining shifts appear to be correct.
My best guess is that the logical order of execution gets out of order because I'm not using Timeline properly, or I'm trying to execute too much in a single KeyFrame. Increasing the duration doesn't seem to affect the anomalies.
Thanks for your help.
Logically, your solution is wrong: you are storing references to circles in a temporary array, then changing the fill of the referenced circles, and then using the updated referenced fills to set the new fills.
Instead of storing references to circles, store the fill values themselves.
public void wrapItLeft(Circle[][] pixelArray) {
    // save the fills of column 0 in pixelArrayTmp so they can wrap to the end
    Paint[] pixelArrayTmp = new Paint[N_ROWS];
    for (int r = 0; r < N_ROWS; r++) {
        pixelArrayTmp[r] = pixelArray[r][0].getFill();
    }
    // move all the pixelArray columns one column to the left
    for (int c = 0; c < N_COLS-1; c++) {
        for (int r = 0; r < N_ROWS; r++) {
            Color color = (Color) pixelArray[r][c+1].getFill();
            pixelArray[r][c].setFill(color);
        }
    }
    // move the saved fills into the new, blank, end column pixelArray[r][N_COLS-1]
    for (int r = 0; r < N_ROWS; r++) {
        pixelArray[r][N_COLS-1].setFill(pixelArrayTmp[r]);
    }
}

HoughCircles gives wrong number of circles and positions on iOS

I'm using OpenCV to help me detect a coin in an image taken from an iPhone camera. I'm using the HoughCircles method to find it, but the results are less than hopeful.
cv::Mat greyMat;
cv::Mat filteredMat;
cv::vector<cv::Vec3f> circles;

cv::cvtColor(mainImageCV, greyMat, CV_BGR2GRAY);
cv::threshold(greyMat, filteredMat, 100, 255, CV_THRESH_BINARY);
for ( int i = 1; i < 31; i = i + 2 )
{
    // cv::blur( filteredMat, greyMat, cv::Size( i, i ), cv::Point(-1,-1) );
    cv::GaussianBlur(filteredMat, greyMat, cv::Size(i,i), 0);
    // cv::medianBlur(filteredMat, greyMat, i);
    // cv::bilateralFilter(filteredMat, greyMat, i, i*2, i/2);
}
cv::HoughCircles(greyMat, circles, CV_HOUGH_GRADIENT, 1, 50);
NSLog(@"Circles: %ld", circles.size());
for(size_t i = 0; i < circles.size(); i++)
{
    cv::Point center((cvRound(circles[i][0]), cvRound(circles[i][1])));
    int radius = cvRound(circles[i][2]);
    cv::circle(greyMat, center, 3, cv::Scalar(0,255,0));
    cv::circle(greyMat, center, radius, cv::Scalar(0,0,255));
}
[self removeOverViews];
[self.imageView setImage: [self UIImageFromCVMat:greyMat]];
This current segment of code returns 15 circles, and they are all positioned along the right side of the image, which has me confused.
I'm new to OpenCV, and there are barely any examples for iOS, which has left me desperate.
Any help would be greatly appreciated, thanks in advance!
Your algorithm doesn't make much sense. It seems that you are applying cv::GaussianBlur iteratively, but by the time you run HoughCircles, it only operates on the grey image from the final loop iteration, filtered by a GaussianBlur with a 29x29 kernel, which is going to blur the crap out of the image. It might make better sense to do something like this to see the best results:
This will show you all images iteratively, which I believe is what you wanted to do in the first place.
for ( int i = 1; i < 31; i = i + 2 )
{
    cv::GaussianBlur(filteredMat, greyMat, cv::Size(i,i), 0);
    cv::HoughCircles(greyMat, circles, CV_HOUGH_GRADIENT, 1, 50);
    for(size_t j = 0; j < circles.size(); j++)
    {
        cv::Point center(cvRound(circles[j][0]), cvRound(circles[j][1]));
        int radius = cvRound(circles[j][2]);
        cv::circle(greyMat, center, 3, cv::Scalar(0,255,0));
        cv::circle(greyMat, center, radius, cv::Scalar(0,0,255));
    }
    // distinct window name per kernel size
    cv::imshow("Circles " + std::to_string(i), greyMat);
}
You still need some edges for the HoughCircles implementation to work. It uses a Canny edge detector internally, and if you blur your image that much, there won't be enough edges left for it to find.
I would also suggest you work with bilateralFilter, which blurs but attempts to keep some edges.
This might help as well for defining the correct parameters: HoughCircles Parameters to recognise balls
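For illustration, a minimal untested sketch combining both suggestions; input is assumed to be a BGR cv::Mat, and every numeric parameter is a guess to tune, not a tested value:
cv::Mat gray, smoothed;
cv::cvtColor(input, gray, CV_BGR2GRAY);
// bilateral filter: smooths noise but keeps the edges that the internal
// Canny step of HoughCircles needs
cv::bilateralFilter(gray, smoothed, 9, 75, 75);
std::vector<cv::Vec3f> circles;
// dp, minDist, Canny threshold, accumulator threshold, min/max radius
cv::HoughCircles(smoothed, circles, CV_HOUGH_GRADIENT,
                 1, smoothed.rows/8, 100, 30, 10, 100);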
All the above code does is run the same process over and over, so HoughCircles ends up detecting the circles that were drawn onto the image in earlier iterations. Not the best. It also uses Gaussian blur over and over, which in my opinion is not the best way. I can see putting the Gaussian blur in a for loop to make the image more readable, but not HoughCircles. You need to include all the parameters of HoughCircles; it doubled my recognition rate when I used them all.
cv::HoughCircles(gray, circles, CV_HOUGH_GRADIENT, 1, 30, 50, 20, 10, 25);
This is the same format as on the OpenCV website; it is the C++ format.
Here is a link to my iPhone sim pic: Costco aspirin on my desktop. The app counts the circles in the image and displays the total in a label.
Here is my code; it has a lot of comments included to show what I have tried... and sifted through. Hope this helps.
I know this is an old question, so just putting this here in case someone else will make the same mistake (as did I...):
This line:
cv::Point center((cvRound(circles[i][0]), cvRound(circles[i][1])));
has the brackets messed up: the double "((" at the beginning makes the inner parentheses a comma expression, so the point is initialized with only one parameter instead of two. It should be:
cv::Point center(cvRound(circles[i][0]), cvRound(circles[i][1]));
Hope that helps.

Copying a portion of an IplImage into another IplImage (that is of the same size as the source)

I have a set of mask images that I need to use every time I recognise a previously-known scene on my camera. All the mask images are in IplImage format. There will be instances where, for example, the camera has panned to a slightly different but nearby location. This means that if I do template matching somewhere in the middle of the current scene, I will be able to recognise the scene with some amount of shift of the template in this scene. All I need to do is use those shifts to adjust the mask image ROIs so that they can be overlaid appropriately based on the template matching. I know that there are functions such as:
cvSetImageROI(IplImage* img, CvRect roi);
cvResetImageROI(IplImage* img);
which I can use to crop/uncrop my image. However, it didn't work for me quite the way I expected. I would really appreciate it if someone could suggest an alternative, what I am doing wrong, or even what I haven't thought of!
I must also point out that I need to keep the image size the same at all times. The only thing that will be different is the actual area of interest in the image. I can probably use zero/one padding to cover the unused areas.
I believe a solution without making too many copies of the original image would be:
// Make a new IplImage of the same size as the source, zero-filled
IplImage* img_src_cpy = cvCreateImage(cvGetSize(img_src), img_src->depth, img_src->nChannels);
cvZero(img_src_cpy);
// Copy the ROI pixels into the top-left corner without changing the source ROI
for(int rows = roi.y; rows < roi.y + roi.height; rows++) {
    for(int cols = roi.x; cols < roi.x + roi.width; cols++) {
        img_src_cpy->imageData[(rows-roi.y)*img_src_cpy->widthStep + (cols-roi.x)] =
            img_src->imageData[rows*img_src->widthStep + cols];
    }
}
// Now copy everything to the original image OR simply return the new image if calling from a function
cvCopy(img_src_cpy, img_src); // OR return img_src_cpy;
I tried the code out myself and it is also fast enough for me (it executes in about 1 ms for a 332 x 332 greyscale image).
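An alternative sketch using the ROI calls from the question: cvCopy respects the ROI set on both images, so the manual loop can be replaced. This assumes roi lies fully inside the source image:
// dst is a same-sized, zero-filled image; the ROI contents land at its top-left
IplImage* dst = cvCreateImage(cvGetSize(img_src), img_src->depth, img_src->nChannels);
cvZero(dst);
cvSetImageROI(img_src, roi);
cvSetImageROI(dst, cvRect(0, 0, roi.width, roi.height));
cvCopy(img_src, dst); // copies only the two ROIs, which have equal sizes
cvResetImageROI(img_src);
cvResetImageROI(dst);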

What is the best type of marker to detect with OpenCV and how can I find the 2D location near real-time on the iPhone?

I am writing an iPhone app to use OpenCV to detect the 2D location in the iPhone camera of some sort of predefined marker (only one). What is the best type of marker? Circle? Square? Color? What is the fastest way to detect that marker? In addition, the detection algorithm needs to run near real-time.
I have tried OpenCV's circle detection, but I only got 1 fps (640x480 image):
Mat gray;
vector<Vec3f> circles;

CGPoint findLargestCircle(IplImage *image) {
    Mat img(image);
    cvtColor(img, gray, CV_BGR2GRAY);
    // smooth it, otherwise a lot of false circles may be detected
    GaussianBlur( gray, gray, cv::Size(9, 9), 2, 2 );
    HoughCircles(gray, circles, CV_HOUGH_GRADIENT,
                 2, gray.rows/4, 200, 100 );
    double radius = -1;
    int ind = -1; // index of the largest circle, -1 if none found
    for( size_t i = 0; i < circles.size(); i++ ) {
        if(circles[i][2] > radius) {
            radius = circles[i][2];
            ind = (int)i;
        }
    }
    if(ind == -1) {
        return CGPointMake(0, 0);
    } else {
        return CGPointMake(circles[ind][0], circles[ind][1]);
    }
}
Any advice or code would be helpful.
Thanks in advance.
Maybe you can try a marker with a specific color, and then take color filtering into consideration. An object with a specific oriented texture is a good choice too.
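As a rough sketch of the color-filtering idea: the function name and the HSV bounds below are assumptions (roughly a red marker) and would need tuning for your marker and lighting. This avoids HoughCircles entirely, so it should run much faster than 1 fps:
#include <opencv2/opencv.hpp>

// Find the centroid of the pixels matching the marker's color range.
cv::Point findColorMarker(const cv::Mat& bgr) {
    cv::Mat hsv, mask;
    cv::cvtColor(bgr, hsv, CV_BGR2HSV);
    // keep only pixels inside the marker's hue/saturation/value range
    cv::inRange(hsv, cv::Scalar(0, 120, 70), cv::Scalar(10, 255, 255), mask);
    cv::Moments m = cv::moments(mask, true);
    if (m.m00 == 0) return cv::Point(-1, -1); // marker not found
    // centroid of the mask = estimated 2D marker location
    return cv::Point((int)(m.m10 / m.m00), (int)(m.m01 / m.m00));
}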