TPU training freezes in the middle of training

I'm trying to train a CNN regression network in TF 1.12, using a TPU v3-8 1.12 instance. The model successfully compiles with XLA and starts the training process, but somewhere past the halfway point of the first epoch it freezes and does nothing. I cannot find the root of the problem.
def read_tfrecord(example):
    features = {
        'image': tf.FixedLenFeature([], tf.string),
        'labels': tf.FixedLenFeature([], tf.string)
    }
    sample = tf.parse_single_example(example, features)
    image = tf.image.decode_jpeg(sample['image'], channels=3)
    image = tf.reshape(image, tf.stack([540, 540, 3]))
    image = augmentation(image)
    labels = tf.decode_raw(sample['labels'], tf.float64)
    labels = tf.reshape(labels, tf.stack([2, 2, 45]))
    labels = tf.cast(labels, tf.float32)
    return image, labels

def load_dataset(filenames):
    files = tf.data.Dataset.list_files(filenames)
    dataset = files.apply(tf.data.experimental.parallel_interleave(
        tf.data.TFRecordDataset, cycle_length=4))
    dataset = dataset.apply(tf.data.experimental.map_and_batch(
        map_func=read_tfrecord, batch_size=BATCH_SIZE, drop_remainder=True))
    dataset = dataset.apply(tf.data.experimental.shuffle_and_repeat(1024, -1))
    dataset = dataset.prefetch(buffer_size=1024)
    return dataset

def augmentation(img):
    image = tf.cast(img, tf.float32) / 255.0
    image = tf.image.random_brightness(image, max_delta=25/255)
    image = tf.image.random_saturation(image, lower=0.5, upper=1.5)
    image = tf.image.random_contrast(image, lower=0.5, upper=1.5)
    image = tf.image.per_image_standardization(image)
    return image

def get_batched_dataset(filenames):
    dataset = load_dataset(filenames)
    return dataset

def get_training_dataset():
    return get_batched_dataset(training_filenames)

def get_validation_dataset():
    return get_batched_dataset(validation_filenames)

The most likely cause is an issue in the data pre-processing function; take a look at the "Errors in the middle of training" section of the troubleshooting documentation, it could give you some guidance.
I did not catch anything strange in your code.
Are you using Cloud Storage buckets to work with those images and files? If so, are those buckets in the same region as the TPU?
You might use Cloud TPU audit logs to determine whether the issue is related to the resources in the system or to how you are accessing your data.
Finally, I suggest you take a look at the Training Mask RCNN on Cloud TPU documentation.
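To rule out a bad record or a stalling pre-processing step, one quick check is to run the input pipeline by itself (on CPU, without the TPU) and watch where it stops. A minimal sketch using the TF 1.x session API, assuming the helper functions above are in scope:
import tensorflow as tf

dataset = get_training_dataset()
iterator = dataset.make_one_shot_iterator()
next_batch = iterator.get_next()

with tf.Session() as sess:
    step = 0
    # the pipeline repeats indefinitely, so stop it manually once you
    # are well past the step count where training froze
    try:
        while True:
            sess.run(next_batch)  # pulls one batch through the full pipeline
            step += 1
            if step % 100 == 0:
                print('ok through batch', step)
    except tf.errors.OutOfRangeError:
        print('dataset exhausted after', step, 'batches')
If this loop also hangs, the problem is in the tf.data pipeline (for example a truncated TFRecord) rather than in the TPU itself.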

Related

How to separate human body from background in an image

I have been trying to separate the human body in an image from the background, but all the methods I have seen don't seem to work very well for me.
I have collected the following images:
The image of the background
The image of the background with the person in it.
Now I want to cut out the person from the background.
I tried subtracting the image of the background from the image with the person using res = cv2.subtract(background, foreground) (I am new to image processing).
Background subtraction methods in OpenCV like cv2.BackgroundSubtractorMOG() and cv2.BackgroundSubtractorMOG2() only work with videos or image sequences, and the contour detection methods I have seen are only for solid shapes.
And grabCut doesn't quite work well for me because I would like to automate the process.
Given the images I have (Image of the background and image of the background with the person in it), is there a method of cutting the person out from the background?
I wouldn't recommend a neural net for this problem. That's a lot of work for something like this, where you have a known background. I'll walk through the steps I took to do the background segmentation on this image.
First I shifted into the LAB color space to get some light-resistant channels to work with. I did a simple subtraction of foreground and background and combined the a and b channels.
You can see that there is still significant color change in the background even with a less light-sensitive color channel. This is likely due to the auto white balance on the camera; you can see that some of the background colors change when you step into view.
The next step I took was thresholding off of this image. The optimal threshold values may not always be the same; you'll have to adjust to a range that works well for your set of photos.
I used OpenCV's findContours function to get the segmentation points of each blob, and I filtered the available contours by size. I set a size threshold of 15000. For reference, the person in the image had a pixel area of 27551.
Then it's just a matter of cropping out the contour.
This technique works for any good thresholding strategy. If you can improve the consistency of your pictures by turning off auto settings and ensuring good contrast of the person against the wall, then you can use simpler thresholding strategies and get good results.
Just for fun:
Edit:
I forgot to add in the code I used:
import cv2
import numpy as np

# rescale values
def rescale(img, orig, new):
    img = np.divide(img, orig)
    img = np.multiply(img, new)
    img = img.astype(np.uint8)
    return img

# get abs(diff) of all hue values
def diff(bg, fg):
    # do both sides
    lh = bg - fg
    rh = fg - bg
    # pick minimum # this works because of uint wrapping
    low = np.minimum(lh, rh)
    return low

# load image
bg = cv2.imread("back.jpg")
fg = cv2.imread("person.jpg")
fg_original = fg.copy()

# blur
bg = cv2.blur(bg, (5,5))
fg = cv2.blur(fg, (5,5))

# convert to lab
bg_lab = cv2.cvtColor(bg, cv2.COLOR_BGR2LAB)
fg_lab = cv2.cvtColor(fg, cv2.COLOR_BGR2LAB)
bl, ba, bb = cv2.split(bg_lab)
fl, fa, fb = cv2.split(fg_lab)

# subtract
d_b = diff(bb, fb)
d_a = diff(ba, fa)

# rescale for contrast
d_b = rescale(d_b, np.max(d_b), 255)
d_a = rescale(d_a, np.max(d_a), 255)

# combine
combined = np.maximum(d_b, d_a)

# threshold
# check your threshold range, this will work for
# this image, but may not work for others
# in general: having a strong contrast with the wall makes this easier
thresh = cv2.inRange(combined, 70, 255)

# opening and closing
kernel = np.ones((3,3), np.uint8)

# closing
thresh = cv2.dilate(thresh, kernel, iterations=2)
thresh = cv2.erode(thresh, kernel, iterations=2)

# opening
thresh = cv2.erode(thresh, kernel, iterations=2)
thresh = cv2.dilate(thresh, kernel, iterations=3)

# contours
_, contours, _ = cv2.findContours(thresh, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)

# filter contours by size
big_cntrs = []
marked = fg_original.copy()
for contour in contours:
    area = cv2.contourArea(contour)
    if area > 15000:
        print(area)
        big_cntrs.append(contour)
cv2.drawContours(marked, big_cntrs, -1, (0, 255, 0), 3)

# create a mask of the contoured image
mask = np.zeros_like(fb)
mask = cv2.drawContours(mask, big_cntrs, -1, 255, -1)

# erode mask slightly (boundary pixels on wall get color shifted)
mask = cv2.erode(mask, kernel, iterations=1)

# crop out
out = np.zeros_like(fg_original)  # extract the object and place it into the output image
out[mask == 255] = fg_original[mask == 255]

# show
cv2.imshow("combined", combined)
cv2.imshow("thresh", thresh)
cv2.imshow("marked", marked)
# cv2.imshow("masked", mask)
cv2.imshow("out", out)
cv2.waitKey(0)
Since it is easy to find datasets containing many human bodies, I suggest implementing neural-network segmentation techniques to extract the human body cleanly. Please check this link to see a similar example.
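Along those lines, here is a minimal sketch of the neural-network route, assuming a pretrained torchvision DeepLabV3 model and a local person.jpg (these specifics are mine, not the answerer's):
import torch
import torchvision
from torchvision import transforms
from PIL import Image

# load a pretrained segmentation model (weights download on first use)
model = torchvision.models.segmentation.deeplabv3_resnet50(pretrained=True).eval()

# standard ImageNet normalization expected by the model
preprocess = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

img = Image.open("person.jpg").convert("RGB")
with torch.no_grad():
    out = model(preprocess(img).unsqueeze(0))["out"][0]

# class index 15 is "person" in the VOC-style label set this model uses
person_mask = (out.argmax(0) == 15).cpu().numpy()
The resulting boolean mask can then be used much like the contour mask above to crop the person out of fg_original.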

How to show only one of the clusters in Google Earth Engine unsupervised classification

Suppose we have the following code for unsupervised classification. My goal is to identify the water bodies across the area. How can I mask out the other classes (clusters) and map only one of the clusters (water bodies) in my results?
// Load a pre-computed Landsat composite for input
var input = ee.Image('LANDSAT/LE7_TOA_1YEAR/2001');
// Define a region in which to generate a sample of the input.
var region = ee.Geometry.Rectangle(29.7, 30, 32.5, 31.7);
// Display the sample region.
Map.setCenter(31.5, 31.0, 8);
Map.addLayer(ee.Image().paint(region, 0, 2), {}, 'region');
// Make the training dataset.
var training = input.sample({
  region: region,
  scale: 30,
  numPixels: 5000
});
// Instantiate the clusterer and train it.
var clusterer = ee.Clusterer.wekaKMeans(5).train(training);
// Cluster the input using the trained clusterer.
var result = input.cluster(clusterer);
// Display the clusters with random colors.
Map.addLayer(result.randomVisualizer(), {}, 'clusters');
I only need cluster 0, so I can mask out the rest of the classes using the code below (note that k-means assigns cluster indices arbitrarily, so first check which cluster actually corresponds to water):
// showing only one cluster.
var subset = result.select("cluster").eq(0).selfMask();
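The same masking works from the Earth Engine Python client (earthengine-api), which mirrors the JS API. A hedged sketch, assuming the pipeline is rebuilt in Python and that cluster 0 is water:
import ee

ee.Initialize()

# same pipeline as the JS code above
input_img = ee.Image('LANDSAT/LE7_TOA_1YEAR/2001')
region = ee.Geometry.Rectangle([29.7, 30, 32.5, 31.7])
training = input_img.sample(region=region, scale=30, numPixels=5000)
clusterer = ee.Clusterer.wekaKMeans(5).train(training)
result = input_img.cluster(clusterer)

# keep only cluster 0 and mask everything else
water = result.select('cluster').eq(0).selfMask()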

In caffe, py-faster-rcnn, "scores" return a large matrix, why?

I use the py-faster-rcnn demo to build my project further with 20 classes.
However, I am trying to get the softmax (last-layer) probabilities for my classes.
For example:
# Load the demo image
im_file = os.path.join(cfg.DATA_DIR, 'demo', image_name)
im = cv2.imread(im_file)

# Detect all object classes and regress object bounds
timer = Timer()
timer.tic()
scores, boxes = im_detect(net, im)
timer.toc()
print ('Detection took {:.3f}s for '
       '{:d} object proposals').format(timer.total_time, boxes.shape[0])

# Visualize detections for each class
CONF_THRESH = 0.8
NMS_THRESH = 0.3
for cls_ind, cls in enumerate(CLASSES[1:]):
    cls_ind += 1  # because we skipped background
    cls_boxes = boxes[:, 4*cls_ind:4*(cls_ind + 1)]
    cls_scores = scores[:, cls_ind]
    dets = np.hstack((cls_boxes,
                      cls_scores[:, np.newaxis])).astype(np.float32)
    keep = nms(dets, NMS_THRESH)
    dets = dets[keep, :]
    vis_detections(im, cls, dets, thresh=CONF_THRESH)

print scores
When I print scores, it gives me a very large matrix as output instead of 1 x 20. I am not sure why, and how can I get the final probability matrix?
Thanks
The raw scores the detector outputs include overlapping detections and very low-score detections as well.
Note that the demo first applies non-maximum suppression (aka "nms") with NMS_THRESH=0.3, and then vis_detections only displays detections with confidence larger than CONF_THRESH=0.8.
So, if you want to look at the "true" objects, you need to check inside vis_detections and keep only the detections it renders on the image.
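Concretely, im_detect returns one softmax row per region proposal, so scores has shape (num_proposals, num_classes + background), not 1 x 20. A minimal sketch of collecting the final per-object probabilities, reusing the demo's own names (the kept dict is my own illustrative addition):
kept = {}
for cls_ind, cls in enumerate(CLASSES[1:]):
    cls_ind += 1  # skip the background column
    cls_boxes = boxes[:, 4*cls_ind:4*(cls_ind + 1)]
    cls_scores = scores[:, cls_ind]
    dets = np.hstack((cls_boxes,
                      cls_scores[:, np.newaxis])).astype(np.float32)
    keep = nms(dets, NMS_THRESH)
    dets = dets[keep, :]
    dets = dets[dets[:, -1] >= CONF_THRESH]  # same filter vis_detections applies
    if len(dets) > 0:
        kept[cls] = dets[:, -1]  # final probabilities, one per detected object
print kept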

Smooth and fit edge of binary images

I am working on research about the swimming of fish using video analysis, so I need to be careful with the images (obtained from video frames), with emphasis on the tail.
The images are high-resolution, and the software I am customizing works with binary images, because it is easy to apply mathematical operations to them.
To obtain these binary images I use two methods:
1) Convert the image to gray, invert the colors, then convert to bw, and finally to binary with a threshold. This gives me images like the one below, with almost no noise, but the images sometimes lose a bit of area and are not very exact around the tail (and now I need more accuracy to determine the amplitude of the tail movements):
image 1
2) I use the code below to cut the border and increase the threshold. This gives me a good image of the edge, but I don't know how to join these points and smooth the image, or how to fit the binary image; the fitting app of MATLAB 2012Rb doesn't give me a good graph, and I don't have access to the MATLAB toolboxes.
s4 = imread('arecorte.bmp');
A = [90 90 1110 550];
s5 = imcrop(s4, A);
E = edge(s5, 'canny', 0.59);
image2
My questions are:
How can I fit the binary image, or join the points and smooth them, without disturbing the tail?
Or how can I use the edge from image 2 to increase the accuracy of image 1?
I will upload an image in the comments that illustrates method 2), because I can't post more links. Please remember that I am working with iterations and I can't work frame by frame.
Note: if I ask this, it is because I am at a dead point and I don't have the resources to pay someone to do it. Until now I was able to write the code, but I can't solve this final problem alone.
I think you should use connected component labeling, discard the small labels, and then extract the label boundaries to get the pixels of each part.
the code:
clear all
% Read image
I = imread('fish.jpg');
% You don't need this step if you already have a bw image
Ibw = rgb2gray(I);
Ibw(Ibw < 100) = 0;
% Find size of image
[row,col] = size(Ibw);
% Find connected components
CC = bwconncomp(Ibw,8);
% Find area of the components
stats = regionprops(CC,'Area','PixelIdxList');
areas = [stats.Area];
% Sort the areas
[val,index] = sort(areas,'descend');
% Take the two largest component ids and create a filtered image
IbwFilterd = zeros(row,col);
IbwFilterd(stats(index(1,1)).PixelIdxList) = 1;
IbwFilterd(stats(index(1,2)).PixelIdxList) = 1;
imshow(IbwFilterd);
% Find the pixels of the border of the main component and tail
boundries = bwboundaries(IbwFilterd);
yCorrdainteOfMainFishBody = boundries{1}(:,1);
xCorrdainteOfMainFishBody = boundries{1}(:,2);
linearCorrdMainFishBody = sub2ind([row,col],yCorrdainteOfMainFishBody,xCorrdainteOfMainFishBody);
yCorrdainteOfTailFishBody = boundries{2}(:,1);
xCorrdainteOfTailFishBody = boundries{2}(:,2);
linearCorrdTailFishBody = sub2ind([row,col],yCorrdainteOfTailFishBody,xCorrdainteOfTailFishBody);
% For visualization, color the boundaries
IFinal = zeros(row,col,3);
IFinalChannel = zeros(row,col);
IFinal(:,:,1) = IFinalChannel;
IFinalChannel(linearCorrdMainFishBody) = 255;
IFinal(:,:,2) = IFinalChannel;
IFinalChannel = zeros(row,col);
IFinalChannel(linearCorrdTailFishBody) = 125;
IFinal(:,:,3) = IFinalChannel;
imshow(IFinal);
The final image:
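For reference, a rough Python/OpenCV equivalent of the same idea (my translation, not the answerer's code; it assumes OpenCV 4.x and the same fish.jpg input): keep the two largest connected components, then trace their boundaries.
import cv2
import numpy as np

# load and binarize (the MATLAB code above zeroes pixels below 100)
gray = cv2.imread('fish.jpg', cv2.IMREAD_GRAYSCALE)
_, bw = cv2.threshold(gray, 100, 255, cv2.THRESH_BINARY)

# label connected components and rank them by area
num, labels, stats, _ = cv2.connectedComponentsWithStats(bw, connectivity=8)
order = np.argsort(stats[1:, cv2.CC_STAT_AREA])[::-1] + 1  # skip background label 0

# keep the two largest components (main body and tail)
mask = np.isin(labels, order[:2]).astype(np.uint8) * 255

# extract their boundaries as contours and draw them
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
vis = cv2.cvtColor(mask, cv2.COLOR_GRAY2BGR)
cv2.drawContours(vis, contours, -1, (0, 255, 0), 1)
cv2.imshow('boundaries', vis)
cv2.waitKey(0)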

Monte Carlo Simulation of Irregular Shape

I need a working example of how to do this. I have a bmp file showing the shape of a lake; the bmp's size is the rectangular area and is known. I need to take this picture and estimate the lake's size. So far, I have a script that generates a giant matrix of each pixel, telling me whether or not it's in the lake, but this isn't Monte Carlo! I need to generate random points and compare them against the shape somehow, and this is where I'm getting stuck. I don't understand how to compare here: I don't have an equation for the shape or its outline, I only have exact point information, either a point is or isn't in the lake. So I guess I have the exact area already, but I need to find a way to compare random points against it.
function Yes = Point_In_Lake(x,y,image_pixel)
    [pHeight,pWidth] = size(image_pixel);
    %pHeight = Height in pixel
    %pWidth = Width in pixel
    width = 1000;  %width is the actual width of the lake map
    height = 500;  %height is the actual height of the lake map
    %converting x_value to pixel_value in image_pixel
    point_x_pixel = x*pWidth/width;
    xl = floor(point_x_pixel)+1;
    xu = min(ceil(point_x_pixel)+1,pWidth);
    %converting y_value to pixel_value in image_pixel
    point_y_pixel = y*pHeight/height;
    yl = floor(point_y_pixel)+1;
    yu = min(ceil(point_y_pixel)+1,pHeight);
    %Finally, check whether the point is in the lake
    %(lake pixels are assumed to be 0 in the bitmap, so all four
    %neighbouring pixels nonzero means the point is on land)
    if (image_pixel(yl,xl)~=0)&&(image_pixel(yl,xu)~=0)&&(image_pixel(yu,xl)~=0)&&(image_pixel(yu,xu)~=0)
        Yes = 0;
    else
        Yes = 1;
    end
Here was the solution:
binaryMap = image_pixel;
nSamples = numel(image_pixel);
Yes = zeros(1,nSamples);
for i = 1:nSamples
    %draw a uniform random point in the real-world extent of the map
    %(1000 x 500, matching the constants inside Point_In_Lake)
    xrand = rand*1000;
    yrand = rand*500;
    Yes(i) = Point_In_Lake(xrand,yrand,binaryMap);
end
PercentLake = sum(Yes==1)/length(Yes);
%500000 = 1000*500, the full map area; dividing by 43560 converts
%square feet to acres (assuming the map units are feet)
LakeArea = (PercentLake * 500000)/43560;
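For comparison, a compact Monte Carlo sketch of the same estimate in Python (my addition; it assumes a lake.bmp where lake pixels are 0, as in the MATLAB check above):
import numpy as np
from PIL import Image

# load the bitmap and mark lake pixels (assumed to be 0, as above)
img = np.array(Image.open('lake.bmp').convert('L'))
lake = (img == 0)
h, w = lake.shape

# sample random pixel positions and count the fraction inside the lake
n = 100000
xs = np.random.randint(0, w, n)
ys = np.random.randint(0, h, n)
frac = lake[ys, xs].mean()

# scale by the known real-world extent (1000 x 500) and convert to acres
area_acres = frac * 1000 * 500 / 43560
print('estimated lake area: %.1f acres' % area_acres)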