3 channel depth image to 1 channel - MATLAB

I have recorded a depth video using Kinect v2. When I extracted images from it using MATLAB, each image had 3 channels, but normally the depth images I have seen have just 1 channel. Can anyone tell me how I can make this 3 channel image into a 1 channel image?
Here is the code of the depth part:
IplImage depth = new IplImage(512, 424, BitDepth.U16, 1);
CvVideoWriter DepthWriter;
DWidth = sensor.DepthFrameSource.FrameDescription.Width;
DHeight = sensor.DepthFrameSource.FrameDescription.Height;
WbDepth = new WriteableBitmap(DWidth, DHeight, 96, 96, PixelFormats.Gray16, null);
int depthshft = (int)SliderDepth.Value;
using (DepthFrame depthframe = frame.DepthFrameReference.AcquireFrame())
{
    ushort* depthdata = (ushort*)depth.ImageData;
    if (depthframe != null)
    {
        Depthdata = new ushort[DWidth * DHeight];
        ushort[] Depthloc = new ushort[DWidth * DHeight];
        depthframe.CopyFrameDataToArray(Depthdata);
        for (int i = 0; i < DWidth * DHeight; i++)
        {
            Depthloc[i] = 0x1000;
        }
        colorspacePoint = new ColorSpacePoint[DWidth * DHeight];
        depthspacePoint = new DepthSpacePoint[CWidth * CHeight];
        sensor.CoordinateMapper.MapDepthFrameToColorSpace(Depthloc, colorspacePoint);
        for (int y = 0; y < DHeight; y++)
        {
            for (int x = 0; x < DWidth; x++)
            {
                if (depthshft != 0)
                {
                    Depthdata[y * DWidth + x] = (ushort)((Depthdata[y * DWidth + x]) << depthshft);
                }
            }
        }
        depth.CopyPixelData(Depthdata);
    }
}
WbDepth.WritePixels(new Int32Rect(0, 0, DWidth, DHeight), Depthdata, strideDep, 0);
ImageDepth.Source = WbDepth;
if (depth != null && DepthWriter.FileName != null) Cv.WriteFrame(DepthWriter, depth);
Cv.ReleaseVideoWriter(DepthWriter);
if (CheckBox_saveD.IsChecked == true)
    DepthWriter = new CvVideoWriter(string.Format("{1}\\Scene{0}_DepthRecord.avi", scene, TextBlock_saveloca.Text.ToString()), FourCC.Default, 30.0f, new CvSize(512, 424));
CheckBox_saveD.IsEnabled = false;
if (CheckBox_saveD.IsChecked == true) Cv.ReleaseVideoWriter(DepthWriter);
Thank you

Everyone so far is advising you to convert the (supposedly) color image to grayscale. I don't think you should do this.
The Kinect gives you a "1 channel" image of depth values. If you have a color (3 channel) depth map, then something is wrong, and converting it to grayscale will make you lose depth information.
Instead, try to figure out why your image is loaded as a 3 channel image in the first place. What is the source? Is the conversion maybe done by MATLAB when reading the image? Can you then give it some flag to tell it not to?
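One quick check: AVI codecs typically store 8-bit, 3 channel frames, so the conversion may already happen when the 16-bit depth frames are written to video. A minimal MATLAB sketch to diagnose this, assuming the frames come from the recorded AVI (the file name below follows the recording code but is hypothetical):
v = VideoReader('Scene1_DepthRecord.avi');
frame = readFrame(v);
size(frame)                                        % e.g. 424 x 512 x 3
% If all three channels carry identical values, keeping one loses nothing:
isequal(frame(:,:,1), frame(:,:,2), frame(:,:,3))  % true -> duplicated channels
depth1 = frame(:,:,1);                             % the 1 channel image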

Related

Converting a grayscale 1D list of image pixels to a grayscale image in Dart

I am trying to convert a binary mask predicted with the pytorch_mobile package to an image I can show in my app.
The prediction I receive is a 1-dimensional list containing the predictions that my model spits out; these are negative for pixels assigned to the background and positive for pixels assigned to the area of interest. After this, I create a list that assigns the value 0 to all previously negative values and 255 to all previously positive values, yielding a 1-dimensional list containing the value 0 or 255 depending on what each pixel was classified as.
The predicted image is 512x512 pixels, so the length of the list is 262,144.
How would I be able to convert this list into an image that I could save to storage or show via the Flutter UI?
Here is my current code:
customModel = await PyTorchMobile.loadModel('assets/segmentation_model.pt');
result_list = [];
File image = File(filePath);
List prediction = await customModel.getImagePredictionList(image, 512, 512);
prediction.forEach((element) {
  if (element > 0) {
    result_list.add(255);
  } else if (element <= 0) {
    result_list.add(0);
  }
});
result_list_Uint8 = Uint8List.fromList(result_list);
The following should do the trick. Just use Image.setPixelSafe to set every pixel in the image and then convert it to a Flutter Image widget with encodePng and Image.memory.
import 'package:image/image.dart' as im;
...
final img = im.Image(512, 512);
for (var i = 0; i < 512; i++) {      // row (y)
  for (var j = 0; j < 512; j++) {    // column (x)
    final color = result_list_Uint8[i * 512 + j] == 0 ? 0 : 0xffffff;
    // setPixelSafe takes (x, y), so the column index goes first
    img.setPixelSafe(j, i, 0xff000000 | color);
  }
}
final pngBytes = Uint8List.fromList(im.encodePng(img));
photoImage = Image.memory(pngBytes);
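If you also want to save the mask to storage rather than just display it, writing the PNG bytes to a file should be enough. A small sketch; the path here is hypothetical, and in a real app you would get a writable directory (e.g. via the path_provider package):
import 'dart:io';

final file = File('/tmp/mask.png'); // hypothetical path
await file.writeAsBytes(pngBytes);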

How to select and drag an ellipse in an old version of Processing?

//The following game has been designed as an educational resource
//for Key Stage 1 and 2 children. Children are the future of
//civil engineering, and to inspire them to get involved in the
//industry is important for innovation. However, today the
//national curriculum is very structured, and many children
//can find themselves falling behind even at the age of 7 or 8.
//It is essential that children can be supported with material
//they find difficult, and given the resources to learn in a
//fun and engaging manner.
//One of the topics that many children struggle to grasp is
//fractions. It is necessary to prevent young children feeling
//like STEM subjects are too difficult for them, so that they
//have the opportunity and confidence to explore science and
//engineering subjects as they move into secondary education and
//careers.
//This game intends to set a precedent for teaching complex
//subjects to children in a simple, but fun and interactive
//manner. It will show them that fractions can be fun, and that
//they are capable, building confidence once they return to
//the classroom.
//The game will work by challenging the user to split a group
//of balls into three buckets depending on the fraction
//displayed on the bucket.
int number_of_balls;
float bucket_1, bucket_2, bucket_3;
int bucket_1_correct, bucket_2_correct, bucket_3_correct;
PVector basket_position, basket_dimensions;
Ball[] array_of_balls;
int linethickness;
//Random generator to give number of balls, ensuring that
//they can be divided into the number of buckets available.
void setup()
{
  size(500, 500);
  linethickness = 4;
  number_of_balls = int(random(1, 11))*6;
  println(number_of_balls);
  bucket_1 = 1/6;
  bucket_2 = 1/2;
  bucket_3 = 1/3;
  //Working out the correct answers
  bucket_1_correct = number_of_balls*bucket_1;
  bucket_2_correct = number_of_balls*bucket_2;
  bucket_3_correct = number_of_balls*bucket_3;
  println(bucket_1, bucket_2, bucket_3);
  println(bucket_1_correct, bucket_2_correct, bucket_3_correct);
  //Creating the basket
  basket_position = new PVector(width/4, height/8);
  basket_dimensions = new PVector(width/2, height/4);
  //Creating the balls & placing inside basket
  array_of_balls = new Ball[number_of_balls];
  for (int index = 0; index < number_of_balls; index++)
  {
    array_of_balls[index] = new Ball();
  }
}
//Drawing the balls and basket outline
void draw()
{
  background(125, 95, 225);
  for (int index = 0; index < number_of_balls; index++)
  {
    array_of_balls[index].Draw();
  }
  noFill();
  stroke(180, 0, 0);
  strokeWeight(linethickness);
  rect(basket_position.x, basket_position.y, basket_dimensions.x, basket_dimensions.y);
}
void mouseDragged()
{
  if ((mouseX >= (ball_position.x - radius)) && (mouseX <= (ball_position.x + radius)) && (mouseY >= (ball_position.y - radius)) && (mouseY <= (ball_position.y + radius)))
  {
    ball_position = new PVector(mouseX, mouseY);
  }
}
//Ball_class
int radius;
Ball()
{
  radius = 10;
  ball_position = new PVector(random(basket_position.x + radius + linethickness, basket_position.x + basket_dimensions.x - radius - linethickness), random(basket_position.y + radius + linethickness, basket_position.y + basket_dimensions.y - radius - linethickness));
  colour = color(random(255), random(255), random(255));
}
void Draw()
{
  noStroke();
  fill(colour);
  ellipse(ball_position.x, ball_position.y, radius*2, radius*2);
}
}
Thanks in advance for your help! I am using Processing 2.2.1, which I know is very out of date, so I am struggling to find help.
I have a piece of code that has created a number of balls, and I would like to be able to 'drag and drop' these to a different location on the screen as part of an educational game. I've tried playing around with mousePressed() and mouseDragged() but no luck yet. Any advice would be appreciated!
There are a lot of ways to approach this, but one way I could suggest is doing something like this:
// "Ellipse" object
function Ellipse (x, y, width, height) {
// Each Ellipse object has their own x, y, width, height, and "selected" values
this.x = x;
this.y = y;
this.width = width;
this.height = height;
this.selected = false;
// You can call the draw function whenever you want something done with the object
this.draw = function() {
// Draw ellipse
ellipse(this.x, this.y, this.width, this.height);
// Check if mouse is touching the ellipse using math
// https://www.desmos.com/calculator/7a9u1bpfvt
var xDistance = this.x - mouseX;
var yDistance = this.y - mouseY;
// Ellipse formula: (x^2)/a + (y^2)/b = r^2
// Assuming r = 1 and y = 0:
// 0 + (x^2)/a = 1 Substitute values
// ((width / 2)^2)/a = 1 x = width / 2 when y = 0
// a = (width / 2)^2 Move numbers around
// a = (width^2) / 4 Evaluate
var a = Math.pow(this.width, 2) / 4;
// Assuming r = 1 and x = 0:
// 0 + (y^2)/b = 1 Substitute values
// ((height / 2)^2)/b = 1 y = height / 2 when x = 0
// b = (height / 2)^2 Move numbers around
// b = (height^2) / 4 Evaluate
var b = Math.pow(this.height, 2) / 4;
// x^2
var x2 = Math.pow(xDistance, 2);
// y^2
var y2 = Math.pow(yDistance, 2);
// Check if coordinate is inside ellipse and mouse is pressed
if(x2 / a + y2 / b < 1 && mouseIsPressed) {
this.selected = true;
}
// If mouse is released, deselect the ellipse
if(!mouseIsPressed) {
this.selected = false;
}
// If selected, then move the ellipse
if(this.selected) {
// Moves ellipse with mouse
this.x += mouseX - pmouseX;
this.y += mouseY - pmouseY;
}
};
}
// New Ellipse object
var test = new Ellipse(100, 100, 90, 60);
draw = function() {
background(255);
// Do everything associated with that object
test.draw();
};
The math is a bit funky, and I might not be using the right version of Processing, but hopefully you found this at least slightly helpful :)
I'm kind of confused about what language you're using. Processing is a wrapper for Java, not JavaScript. Processing.js went up to version 1.6.6 and then was succeeded by p5.js. I'm going to assume you're using p5.js.
I don't know if this is a new thing in p5.js, but for easy, if not especially user-friendly, click-and-drag functionality I like to use the built-in variable mouseIsPressed.
If the ellipse coordinates are stored in an array of vectors, you might do something like this:
let balls = [];
let radius = 10;
function setup() {
  createCanvas(400, 400);
  for (let i = 0; i < 10; i++) {
    balls.push(createVector(random(width), random(height)));
  }
}
function draw() {
  background(220);
  // While the mouse is pressed, snap the first ball under the cursor to it
  for (let i = 0; i < balls.length && mouseIsPressed; i++) {
    if (dist(mouseX, mouseY, balls[i].x, balls[i].y) < radius) {
      balls[i] = createVector(mouseX, mouseY);
      i = balls.length; // stop after moving one ball
    }
  }
  for (let i = 0; i < balls.length; i++) {
    ellipse(balls[i].x, balls[i].y, 2 * radius, 2 * radius);
  }
}
This is the quickest way I could think of, but there are better ways to do it (at least, there are in p5.js). You could make a Ball class which has numbers for x, y, and radius, as well as a boolean for whether it's being dragged. In that class, you could make a method mouseOn() which detects whether the cursor is within the radius (if it's not a circle, you can use two radii: sq((this.x - mouseX)/r1) + sq((this.y - mouseY)/r2) < 1).
When the mouse is pressed, you can cycle through all the balls in the array of balls, and test each of them with mouseOn(), and set their drag boolean to true. When the mouse is released, you can set all of their drag booleans to false. Here's what it looks like in the current version of p5.js:
function mousePressed() {
  for (let i = 0; i < balls.length; i++) {
    balls[i].drag = balls[i].mouseOn();
    if (balls[i].drag) {
      i = balls.length;
    }
  }
}
function mouseReleased() {
  for (let i = 0; i < balls.length; i++) {
    balls[i].drag = false;
  }
}
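For reference, here is a minimal sketch of the Ball class those handlers assume. Only mouseOn() and the drag flag are required by the snippets above; the rest (including the show() method name) is just one possible implementation:
class Ball {
  constructor(x, y, r) {
    this.x = x;
    this.y = y;
    this.r = r;
    this.drag = false;
  }
  // true when the cursor is within this ball's radius
  mouseOn() {
    return dist(mouseX, mouseY, this.x, this.y) < this.r;
  }
  // call this from draw(): follow the mouse while dragged, then render
  show() {
    if (this.drag) {
      this.x = mouseX;
      this.y = mouseY;
    }
    ellipse(this.x, this.y, 2 * this.r);
  }
}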
I hope this helps.
The way your code is right now doesn't work in the current version of Processing either, but it's a pretty quick fix. I'm going to show you a way to fix that, and hopefully it'll work in the earlier version.
Here's where I think the problem is: when you use mouseDragged(), you try to change ball_position, but you don't specify which ball's position. Here's one solution, changing the mouseDragged() block and the Ball class:
void mouseDragged() {
  for (int i = 0; i < array_of_balls.length; i++) {
    if ((mouseX > (array_of_balls[i].ball_position.x - array_of_balls[i].radius)) &&
        (mouseX < (array_of_balls[i].ball_position.x + array_of_balls[i].radius)) &&
        (mouseY > (array_of_balls[i].ball_position.y - array_of_balls[i].radius)) &&
        (mouseY < (array_of_balls[i].ball_position.y + array_of_balls[i].radius))
       ) {
      array_of_balls[i].ball_position = new PVector(mouseX, mouseY);
      i = array_of_balls.length;
    }
  }
}
//Ball_class
class Ball {
  int radius;
  PVector ball_position;
  color colour;
  Ball() {
    radius = 10;
    ball_position = new PVector(random(basket_position.x + radius + linethickness, basket_position.x + basket_dimensions.x - radius - linethickness), random(basket_position.y + radius + linethickness, basket_position.y + basket_dimensions.y - radius - linethickness));
    colour = color(random(255), random(255), random(255));
  }
  void Draw() {
    noStroke();
    fill(colour);
    ellipse(ball_position.x, ball_position.y, radius*2, radius*2);
  }
}
P.S. Since you're using a language based on Java, you should probably adhere to the finicky parts of the language:
data types are very strict in Java. Avoid assigning anything that could possibly be a float to a variable that is declared as an int. For example, in your setup() block, you say bucket_1_correct = number_of_balls*bucket_1;. This might seem like a non-issue, since number_of_balls*bucket_1 should always be a whole number. But bucket_1 = 1/6 is integer division in Java, which evaluates to 0 before it is ever stored, so you need a float literal like 1/6.0; and even then, floating-point rounding means that multiplying by 6 doesn't necessarily give an exact whole number. In this case, you can just use round(): bucket_1_correct = round(number_of_balls*bucket_1); (see the sketch after this list).
Regarding data types, you should always declare your variables with their data type. It's a little hard for me to tell, but it looks to me like you never declared ball_position or colour in your Ball class, and you never opened up the class with the typical class Ball {. This might have been a copy/paste error, though.
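To make the first point concrete, here is how those setup() lines might look once the integer division is fixed (just a sketch of the affected lines):
// 1/6 is integer division in Java and evaluates to 0,
// so use float literals to keep the fractional value
bucket_1 = 1/6.0f;
bucket_2 = 1/2.0f;
bucket_3 = 1/3.0f;
// round() returns an int, so these assignments stay type-safe
bucket_1_correct = round(number_of_balls * bucket_1);
bucket_2_correct = round(number_of_balls * bucket_2);
bucket_3_correct = round(number_of_balls * bucket_3);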

Updating z-values for ILNumerics ILSurface

I'm a new ILNumerics Visualization Engine user and I'm still coming up to speed on how to use it well. I've searched extensively for how to update the z-values of an ILSurface and read the posts, but I'm still not clear on how to do this.
I'm able to generate a surface and set up a camera to view it (Hamyo Kutschbach told me that's the best way to ensure that the aspect ratios of the surface don't change when rotating the surface, which is important in my application). Here's the code that displays a sin(x)/x function:
// Generate the data
ILArray<double> z = SincFunc(rows, cols, 10, 50);
ILArray<double> x = new double[cols];
ILArray<double> y = new double[rows];
for (int i = 0; i < cols; i++)
    x[i] = (double)i;
for (int i = 0; i < rows; i++)
    y[i] = (double)i;
// create the scene
scene = new ILScene();
pointCloudSurface = new ILSurface(z, x, y)
{
    Colormap = Colormaps.Jet,
    UseLighting = true,
    Wireframe = { Visible = false },
    Children = {
        new ILColorbar()
        {
            Height = 0.5f,
            Location = new PointF(0.95f, 0.05f),
            Children = {
                new ILLabel("microns")
                {
                    Position = new Vector3(0.5, 1, 0),
                    Anchor = new PointF(0.5f, 0)
                }
            }
        }
    },
    Alpha = 1.0f
};
// Configure the surface and display it
scene.Camera.Add(pointCloudSurface);
scene.Camera.Position = new Vector3(50, 50, 700);
scene.Camera.LookAt = new Vector3(50, 50, 0);
scene.Camera.Top = new Vector3(0, 0, 700);
scene.Camera.Projection = Projection.Perspective;
scene.Camera.ZNear = 1.0f;
scene.Camera.ZFar = 0.0f;
scene.Camera.Top = new Vector3(1, 0, 0);
// Turn off the Powered by ILNumerics label
scene.Screen.First<ILLabel>().Visible = false;
ilPanel1.Scene = scene;
ilPanel1.Configure();
ilPanel1.Refresh();
And it works well. So now I want to change the z-values and update the plot without closing ilPanel1 because this plot is embedded in a Windows Form. Advice would be appreciated! Hopefully other newbies will find this post useful as well.
After further rummaging around, I came across a method, UpdateColormapped(), that does the trick. It's placed near the end of the code above like this:
scene.Camera.First<ILSurface>().UpdateColormapped(z);
ilPanel1.Scene = scene;
ilPanel1.Configure();
ilPanel1.Refresh();
It can be found in the API documentation here: UpdateColormapped()
It can also change the x and y data and perform other modifications, but it requires that the z data be a float array, so if you're working in double precision, you'll have to take the appropriate steps to get it into a float array.
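For that last step, ILNumerics' ILMath.tosingle() conversion should do the trick; a minimal sketch, assuming z is the ILArray&lt;double&gt; from the code above:
// UpdateColormapped() expects float data, so convert the double array first
ILArray<float> zf = ILMath.tosingle(z);
scene.Camera.First<ILSurface>().UpdateColormapped(zf);
ilPanel1.Refresh();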

Get the pixel color from GWT canvas

I am working with the GWT canvas, which is similar to the HTML5 canvas. My goal is to get the pixel colors from the canvas. I found one way to do this using CanvasPixelArray, which stores the image data; the image data holds the per-pixel information. I am using the following code to get the pixel colors from the canvas:
CanvasPixelArray imageData = canvas.getRendererCanvas().getContext2d().getImageData(0, 0, canvas.getWidth(), canvas.getHeight()).getData();
int length = imageData.getLength() / 4;
int index = 0, r, g, b, a;
for (int i = 0; i < length; i++) {
  index = 4 * i;
  r = imageData.get(index);   // red
  g = imageData.get(++index); // green
  b = imageData.get(++index); // blue
  a = imageData.get(++index); // alpha
  if (r == 255 && g == 255 && b == 255) { // pixel is white
    System.out.println(r + "," + g + "," + b + "," + a);
  }
}
But the major problem is that it works too slowly and drags down performance. This is the main issue; otherwise the above code works fine.
So what is the best way, performance-wise, to get the color information from the canvas?
Any help is highly appreciated. Thank you.

Boundary detection of a paper sheet in OpenCV

I am new to OpenCV. I can already detect the edges of a paper sheet, but my result image is blurred after I draw lines on the edges. How can I draw lines on the edges of the paper sheet so that the image quality remains unaffected?
What am I missing?
My code is below.
Many thanks.
- (void)forOpenCV
{
    if (imageView.image != nil)
    {
        cv::Mat greyMat = [self cvMatFromUIImage:imageView.image];
        vector<vector<cv::Point> > squares;
        cv::Mat img = [self debugSquares:squares :greyMat];
        imageView.image = [self UIImageFromCVMat:img];
    }
}
- (cv::Mat)debugSquares:(std::vector<std::vector<cv::Point> >)squares :(cv::Mat &)image
{
    NSLog(@"%lu", squares.size());
    // blur will enhance edge detection
    Mat blurred(image);
    medianBlur(image, blurred, 9);
    Mat gray0(image.size(), CV_8U), gray;
    vector<vector<cv::Point> > contours;
    // find squares in every color plane of the image
    for (int c = 0; c < 3; c++)
    {
        int ch[] = {c, 0};
        mixChannels(&image, 1, &gray0, 1, ch, 1);
        // try several threshold levels
        const int threshold_level = 2;
        for (int l = 0; l < threshold_level; l++)
        {
            // Use Canny instead of zero threshold level!
            // Canny helps to catch squares with gradient shading
            if (l == 0)
            {
                Canny(gray0, gray, 10, 20, 3);
                // Dilate helps to remove potential holes between edge segments
                dilate(gray, gray, Mat(), cv::Point(-1, -1));
            }
            else
            {
                gray = gray0 >= (l + 1) * 255 / threshold_level;
            }
            // Find contours and store them in a list
            findContours(gray, contours, CV_RETR_LIST, CV_CHAIN_APPROX_SIMPLE);
            // Test contours
            vector<cv::Point> approx;
            for (size_t i = 0; i < contours.size(); i++)
            {
                // approximate contour with accuracy proportional
                // to the contour perimeter
                approxPolyDP(Mat(contours[i]), approx, arcLength(Mat(contours[i]), true) * 0.02, true);
                // Note: absolute value of an area is used because
                // area may be positive or negative - in accordance with the
                // contour orientation
                if (approx.size() == 4 &&
                    fabs(contourArea(Mat(approx))) > 1000 &&
                    isContourConvex(Mat(approx)))
                {
                    double maxCosine = 0;
                    for (int j = 2; j < 5; j++)
                    {
                        double cosine = fabs(angle(approx[j % 4], approx[j - 2], approx[j - 1]));
                        maxCosine = MAX(maxCosine, cosine);
                    }
                    if (maxCosine < 0.3)
                        squares.push_back(approx);
                }
            }
        }
    }
    NSLog(@"%lu", squares.size());
    for (size_t i = 0; i < squares.size(); i++)
    {
        cv::Rect rectangle = boundingRect(Mat(squares[i]));
        if (i == squares.size() - 1) // Detecting the rectangle here
        {
            const cv::Point* p = &squares[i][0];
            int n = (int)squares[i].size();
            NSLog(@"%d", n);
            line(image, cv::Point(507, 418), cv::Point(507 + 1776, 418 + 1372), Scalar(255, 0, 0), 2, 8);
            polylines(image, &p, &n, 1, true, Scalar(255, 255, 0), 5, CV_AA);
            fx1 = rectangle.x;
            fy1 = rectangle.y;
            fx2 = rectangle.x + rectangle.width;
            fy2 = rectangle.y + rectangle.height;
            line(image, cv::Point(fx1, fy1), cv::Point(fx2, fy2), Scalar(0, 0, 255), 2, 8);
        }
    }
    return image;
}
Instead of
Mat blurred(image);
you need to do
Mat blurred = image.clone();
Because the first line does not copy the image, but just creates a second pointer to the same data.
When you blur the image, you are also changing the original.
What you need to do instead is create a real copy of the actual data and operate on that copy.
The OpenCV reference states:
by using a copy constructor or assignment operator, where on the right side it can
be a matrix or expression, see below. Again, as noted in the introduction, matrix assignment is O(1) operation because it only copies the header and increases the reference counter.
Mat::clone() method can be used to get a full (a.k.a. deep) copy of the matrix when you need it.
The first problem is easily solved by doing the entire processing on a copy of the original image. That way, after you get all the points of the square you can draw the lines on the original image and it will not be blurred.
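In terms of the code above, that means something like this (a sketch; the actual square detection stays as in debugSquares):
cv::Mat blurred = image.clone();   // deep copy instead of Mat blurred(image)
cv::medianBlur(image, blurred, 9); // blur writes into the copy only
// ... run the square detection on `blurred` ...
// ... then draw the detected polygon on the original, still-sharp `image` ...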
The second problem, which is cropping, can be solved by defining an ROI (region of interest) in the original image and then copying it to a new Mat. I've demonstrated that in this answer:
// Setup a Region Of Interest
cv::Rect roi;
roi.x = 50;
roi.y = 10;
roi.width = 400;
roi.height = 450;
// Crop the original image to the area defined by ROI
cv::Mat crop = original_image(roi);
cv::imwrite("cropped.png", crop);