Kinect 2 SDK: MapSkeletonPointToDepthPoint?

What is the Kinect 2 equivalent of MapSkeletonPointToDepthPoint?
In SDK 1.8 and below, it was used to map skeleton points onto the color or depth image, like this:
DepthImagePoint newJointPos = coordinateMapper.MapSkeletonPointToDepthPoint(skeletonPt, depthFormat);
But that method is missing from the new Kinect 2 CoordinateMapper class.

MapCameraPointToDepthSpace() is the Kinect 2 equivalent of Kinect 1's MapSkeletonPointToDepthPoint().
Given a skeletonPt of type CameraSpacePoint:
DepthSpacePoint depthPt = _mapper.MapCameraPointToDepthSpace(skeletonPt); // 3D camera space -> 2D depth image coordinates
ColorSpacePoint colorPt = _mapper.MapCameraPointToColorSpace(skeletonPt); // 3D camera space -> 2D color image coordinates

Related

How to get depth images from the camera in pyBullet

In pyBullet, I have struggled a bit with generating a dataset. What I want to achieve is to get pictures of what the camera is seeing:
img = p.getCameraImage(224, 224, renderer=p.ER_BULLET_HARDWARE_OPENGL)
Basically, I want the images that are shown in the Synthetic Camera RGB Data and Synthetic Camera Depth Data windows (especially the latter), which are the camera windows of the debug visualizer (picture omitted).
import pybullet as p
from PIL import Image

p.resetDebugVisualizerCamera(cameraDistance=0.5, cameraYaw=yaw, cameraPitch=pitch,
                             cameraTargetPosition=[center_x, center_y, 0.785])
img = p.getCameraImage(224, 224, renderer=p.ER_BULLET_HARDWARE_OPENGL)
rgbBuffer = img[2]    # RGBA pixel data
depthBuffer = img[3]  # depth buffer values
list_of_rgbs.append(rgbBuffer)
list_of_depths.append(depthBuffer)
rgbim = Image.fromarray(rgbBuffer)
depim = Image.fromarray(depthBuffer)
rgbim.save('test_img/rgbtest'+str(counter)+'.jpg')
depim.save('test_img/depth'+str(counter)+'.tiff')
counter += 1
I already run the following, so I don't know if it is related to the settings:
p.configureDebugVisualizer(p.COV_ENABLE_DEPTH_BUFFER_PREVIEW, 1)
I have tried several methods because the depth part is complicated. I don't understand whether it needs to be treated separately because of the pixel color information, or whether I need to work with the projection and view matrices.
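(For reference, the matrices do come into it: the depth buffer returned by getCameraImage is a non-linear OpenGL depth buffer, and linearizing it only requires the near and far clipping planes of the projection matrix used to render. A minimal sketch, where the camera placement and the near/far values are hypothetical, but near and far must be the values the projection matrix was built with:)

import numpy as np
import pybullet as p

near, far = 0.01, 10.0  # hypothetical clipping planes; must match the projection matrix below
view = p.computeViewMatrix(cameraEyePosition=[0, 0, 1],
                           cameraTargetPosition=[0, 0, 0],
                           cameraUpVector=[0, 1, 0])
proj = p.computeProjectionMatrixFOV(fov=60, aspect=1.0, nearVal=near, farVal=far)
img = p.getCameraImage(224, 224, viewMatrix=view, projectionMatrix=proj)
depth_buffer = np.reshape(img[3], (224, 224))  # non-linear values in [0, 1]
depth_metres = far * near / (far - (far - near) * depth_buffer)  # linearized metric depth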
I need to save it as a .tiff because I get "cannot save F to png" errors. I tried playing a bit with the bit depth but accomplished nothing. In case you ask, this is what I tried:
# depthBuffer[depthBuffer > 65535] = 65535
# im_uint16 = np.round(depthBuffer).astype(np.uint16)
# depthBuffer = im_uint16
The following is an example of the .tiff image (image omitted).
And to end, just to remark that these depth images keep changing: looking at all of them, then at the RGB images, and then back at the depth images shows different pictures, even though they come from the same frames. I have never seen anything like this before.
I thought "I managed to fix this some time ago, might as well post the answer found".
The data structure of img has to be taken into account!
import numpy as np
from PIL import Image
import matplotlib.pyplot as plt

IMG_SIZE = 224  # must match the resolution passed to getCameraImage
img = p.getCameraImage(IMG_SIZE, IMG_SIZE, shadow=False, renderer=p.ER_BULLET_HARDWARE_OPENGL)
rgb_opengl = np.reshape(img[2], (IMG_SIZE, IMG_SIZE, 4)).astype(np.uint8)  # RGBA pixels
depth_buffer_opengl = np.reshape(img[3], (IMG_SIZE, IMG_SIZE))  # non-linear depth in [0, 1]
# near and far must be the clipping planes of the projection matrix actually in use
depth_opengl = far * near / (far - (far - near) * depth_buffer_opengl)  # metric depth
seg_opengl = np.reshape(img[4], (IMG_SIZE, IMG_SIZE)) * 1. / 255.  # segmentation mask, scaled for display
rgbim = Image.fromarray(rgb_opengl)
rgbim_no_alpha = rgbim.convert('RGB')  # drop the alpha channel so the image can be saved as JPEG
rgbim_no_alpha.save('dataset/'+obj_name+'/'+obj_name+'_rgb_'+str(counter)+'.jpg')
# plt.imshow(depth_buffer_opengl)
plt.imsave('dataset/'+obj_name+'/'+obj_name+'_depth_'+str(counter)+'.jpg', depth_buffer_opengl)
# plt.show()
Final images: [omitted]
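A follow-up note on the two remaining issues. The "cannot save F to png" error happens because the depth array is 32-bit float (PIL mode "F"); scaling to a fixed range and converting to uint16 sidesteps it. And the depth previews that "keep changing" are an artifact of plt.imsave normalizing each image to its own min/max, so every frame gets a different scale; writing raw values against the fixed [near, far] interval makes successive images comparable. A minimal sketch, assuming depth_opengl, near, far, and counter from the answer above (the output path is hypothetical):

from PIL import Image
import numpy as np

# map metric depth onto the full uint16 range using the fixed [near, far] interval,
# so every saved frame uses the same scale
depth_scaled = (depth_opengl - near) / (far - near)  # [0, 1] regardless of scene content
depth_uint16 = np.round(depth_scaled * 65535).astype(np.uint16)
Image.fromarray(depth_uint16).save('dataset/depth_'+str(counter)+'.png')  # 16-bit PNG (or .tiff)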

Drag and drop with pinch zoom doesn't work as expected

In the zoomed mode for pinch-zoom, the drag doesn't align properly with the mouse pointer.
I've detailed the problem here: https://stackblitz.com/edit/angular-t7hwqg
I expect the drag to work the same way irrespective of the zoom.
I saw that version 8 of Angular Material added @Input('cdkDragConstrainPosition') constrainPosition: (point: Point, dragRef: DragRef) => Point, which would solve my problem: in the zoomed mode I could write custom logic to map the drag properly to the pointer. But I can't upgrade to version 8, as other parts of the application are on version 7.
So, can someone suggest what can be done? Either the drag can be modified to take the current amount of zoom into account, or I could take 'cdkDragConstrainPosition' from version 8 of Material and integrate it into my current packages.
I had to manually calculate the updated coordinates, something like this:
Here imageWidth/imageHeight are the dimensions of the DOM element, and width/height are the dimensions of the image actually loaded into it.
item is the DOM element to be moved around.
this.zoomFactorY = this.imageHeight / this.height;
this.zoomFactorX = this.imageWidth / this.width;
// to be called at every position update:
// strip the leading 'translate3d(' (12 characters) and the trailing ')',
// leaving 'x, y, z' values that we split on commas
const curTransform = this.item.nativeElement.style.transform.substring(12,
    this.item.nativeElement.style.transform.length - 1).split(',');
const leftChange = parseFloat(curTransform[0]);
const topChange = parseFloat(curTransform[1]);
and then update the DOM item's location:
item.location.left = Math.trunc(
item.location.left + leftChange * (1 / this.zoomFactorX)
);
item.location.top = Math.trunc(
item.location.top + topChange * (1 / this.zoomFactorY)
);

iOS-Charts: set maximum visible x axis values

I'm using ios-charts (https://github.com/danielgindi/Charts). I have a LineChartView with 12 values in the x axis.
This, however, is far too many to see at the same time, so I want to display only 5 and then let the user drag to the right to see the next.
I've tried this:
let chart = LineChartView()
chart.dragEnabled = true
chart.setVisibleXRangeMaximum(5)
let xAxis = chart.xAxis
xAxis.axisMinValue = 0
xAxis.axisMaxValue = 5.0
xAxis.setLabelsToSkip(0)
But I still see all of the values at once. How can I only see 5?
I finally got it!
The correct answer is:
chart.setVisibleXRangeMaximum(5)
This, however, needs to be set after the data has been set on the chart (not in a configure call beforehand).
This did the trick for me: set the labelCount property of the chart view's X axis.
In Objective-C, like this:
_chartView.xAxis.labelCount = 5;
In Swift:
chartView.xAxis.labelCount = 5
Here is my finding: you don't really need labelCount.
If you are using DefaultAxisValueFormatter, never use labelCount; it pops up a lot of errors. Just use the second approach:
chart.setVisibleXRangeMaximum(number) will do.
Put it after the chart's data has been set:
combinedChartView.data = combineData // this needs to come first
combinedChartView.setVisibleXRangeMaximum(2) // after the data is set

offline retina map tiles too big

This question has had no answer since August.
I've found no identical question.
I hope it's clear enough.
There seem to be few questions on maps, but someone must know the answer.
I'm trying to display an offline map using 512x512 tiles. I have the tiles named like 22524#2x.png.
If I use the 256x256 tiles, the map displays correctly, but the 512 tiles only show a quarter of each tile.
Here's my code:
//Get the URL template to the map tiles
let baseURL = NSBundle.mainBundle().bundleURL.absoluteString
//let urlTemplate = baseURL.stringByAppendingString("osmm/{z}/{x}/{y}.png/")
let urlTemplate = baseURL.stringByAppendingString("two/{z}/{x}/{y}#2x.png/")
print(urlTemplate)
let carte_indice = MKTileOverlay(URLTemplate:urlTemplate)
carte_indice.geometryFlipped = false
carte_indice.canReplaceMapContent = true
carte_indice.maximumZ = 18
carte_indice.minimumZ = 16
self.mapView.addOverlay(carte_indice)
What do I need to add to get the tiles to display correctly?
Is a map using 256x256 tiles on a Retina screen acceptable to Apple?

cocos2d moving objects

I'm trying to move an object around the screen, like in the game Geometry Wars. I can rotate the object just fine; however, I can't seem to get it to move based on the direction it is facing. I have this code, which I think is right for doing this, but I keep getting syntax errors:
spriteObject.x = spriteObject.x + speed*cos(Angle)
spriteObject.y = spriteObject.y + speed*sin(Angle)
The errors are 'request for member x not in struct or union.' How do you do this in Objective-C/cocos2d syntax?
Looking at the documentation for the sprite class, you would need to do the following:
float angle = spriteObject.rotation
spriteObject.position.x = spriteObject.position.x + speed*cos(angle)
spriteObject.position.y = spriteObject.position.y + speed*sin(angle)
edit (in response to comment):
I see that you are programming for the iPhone, which means you need to be using the iPhone cocos2d library, and not the one I linked to before.
The syntax will be different, as will the example code, since the iPhone version uses the Objective-C language, whereas the original cocos2d uses Python.
Google Code has good documentation on the iPhone version of cocos2d, including sample code.
Based on that sample code, you will have to do the following:
// position is a property that returns a CGPoint by value, so its members
// can't be assigned directly; compute the new point and assign it whole
float newX = spriteObject.position.x + speed * cos(angle);
float newY = spriteObject.position.y + speed * sin(angle);
spriteObject.position = ccp( newX, newY );