Paper.js: subtracting a path from a shape not working properly

With Paper.js, I am trying to subtract a path from a circle, but it is not working as expected. Here is my code:
// Create circle
var c1 = new Path.Circle(new Point(100, 70), 50);
c1.fillColor = 'red';
// Create path
var eraser = new paper.Path({strokeColor: 'black', strokeWidth: 20, strokeCap: 'round'});
eraser.add(new paper.Point(20, 20));
eraser.add(new paper.Point(100, 80));
eraser.add(new paper.Point(150, 150));
eraser.fillColor = 'white';
eraser.opacity = 0.6;
// Subtract
var result = c1.subtract(eraser);
result.selected = true;
result.opacity = 0.8;
result.fillColor = 'pink';
It seems the path is treated as a polygon (its fill), not as a stroked line, when subtracted:
Here is a jsFiddle : https://jsfiddle.net/Imabot/785ergpy/35/

Yes, this is because Paper.js does the boolean operation with the path's fill geometry, ignoring the stroke.
This is more obvious if you remove the stroke from your example (see this sketch).
What you need to do, if you want to subtract the stroke, is turn it into a path (an outlined shape) first.
Unfortunately, Paper.js doesn't have this feature yet, even though it has been planned for a long time and exists as an experimental version (see this issue).
So you either have to use this experimental feature, or use a vector drawing application like Adobe Illustrator to outline the stroke and export it as SVG, for example, before using it with Paper.js.
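As a stopgap, you can also approximate the stroke's outline yourself and subtract that. The following is a rough sketch (not a built-in Paper.js feature, just an approximation) that unites circles of radius strokeWidth / 2 placed along the eraser path and subtracts the result:
// Approximate the stroked line with a filled outline built from circles
var radius = eraser.strokeWidth / 2;
var eraserOutline = new paper.Path.Circle(eraser.firstSegment.point, radius);
for (var offset = 0; offset < eraser.length; offset += 2) {
    var dot = new paper.Path.Circle(eraser.getPointAt(offset), radius);
    var united = eraserOutline.unite(dot);
    eraserOutline.remove();
    dot.remove();
    eraserOutline = united;
}
var result = c1.subtract(eraserOutline);
eraserOutline.remove();
result.fillColor = 'pink';
This is slow for long paths, but it avoids external tools until the stroke-to-path feature lands.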

Related

How to get depth images from the camera in pyBullet

In pyBullet, I have struggled a bit with generating a dataset. What I want to achieve is to get pictures of what the camera is seeing:
img = p.getCameraImage(224, 224, renderer=p.ER_BULLET_HARDWARE_OPENGL)
Basically, I want to get the images that are shown in the Synthetic Camera RGB data and Synthetic Camera Depth data windows (especially the latter), which are the camera windows you can see on the left of the following picture.
p.resetDebugVisualizerCamera(cameraDistance=0.5, cameraYaw=yaw, cameraPitch=pitch, cameraTargetPosition=[center_x, center_y, 0.785])
img = p.getCameraImage(224, 224, renderer=p.ER_BULLET_HARDWARE_OPENGL)
rgbBuffer = img[2]
depthBuffer = img[3]
list_of_rgbs.append(rgbBuffer)
list_of_depths.append(depthBuffer)
rgbim = Image.fromarray(rgbBuffer)
depim = Image.fromarray(depthBuffer)
rgbim.save('test_img/rgbtest'+str(counter)+'.jpg')
depim.save('test_img/depth'+str(counter)+'.tiff')
counter += 1
I have already run the following, so I don't know if it is related to the settings:
p.configureDebugVisualizer(p.COV_ENABLE_DEPTH_BUFFER_PREVIEW, 1)
I have tried several methods because the depth part is complicated. I don't understand whether it needs to be treated separately because of the pixel color information, or whether I need to work with the projection and view matrices.
I need to save it as a .tiff because I get "cannot save F to png" errors. I tried playing a bit with the bit depth but accomplished nothing. In case you ask, this is what I tried:
# depthBuffer[depthBuffer > 65535] = 65535
# im_uint16 = np.round(depthBuffer).astype(np.uint16)
# depthBuffer = im_uint16
The following is an example of the .tiff image.
And to end, I just want to remark that these depth images keep changing: looking at all of them, then at the RGB images, and then coming back to the depth images shows different results, even though it is the same image. I have never seen anything like this before.
I thought "I managed to fix this some time ago, so I might as well post the answer I found".
The data structure of img has to be taken into account!
import numpy as np
from PIL import Image
import matplotlib.pyplot as plt

IMG_SIZE = 224  # must match the width/height passed to getCameraImage
# near and far are the near/far clipping plane distances used for the camera
# projection matrix; obj_name and counter come from the surrounding dataset loop.
img = p.getCameraImage(IMG_SIZE, IMG_SIZE, shadow=False, renderer=p.ER_BULLET_HARDWARE_OPENGL)
rgb_opengl = np.reshape(img[2], (IMG_SIZE, IMG_SIZE, 4))
depth_buffer_opengl = np.reshape(img[3], [IMG_SIZE, IMG_SIZE])
# Linearize the non-linear depth buffer into actual distances
depth_opengl = far * near / (far - (far - near) * depth_buffer_opengl)
seg_opengl = np.reshape(img[4], [IMG_SIZE, IMG_SIZE]) * 1. / 255.
rgbim = Image.fromarray(rgb_opengl)
rgbim_no_alpha = rgbim.convert('RGB')
rgbim_no_alpha.save('dataset/'+obj_name+'/'+obj_name+'_rgb_'+str(counter)+'.jpg')
# plt.imshow(depth_buffer_opengl)
plt.imsave('dataset/'+obj_name+'/'+obj_name+'_depth_'+str(counter)+'.jpg', depth_buffer_opengl)
# plt.show()
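If you also want to store the linearized depth without the mode-'F' saving problem from the question, one option (just a sketch, assuming depth_opengl, near and far from the snippet above) is to normalize it to 16-bit integers first:
# Map distances to [0, 1] and store as a 16-bit grayscale PNG
depth_normalized = (depth_opengl - near) / (far - near)
depth_uint16 = (np.clip(depth_normalized, 0, 1) * 65535).astype(np.uint16)
Image.fromarray(depth_uint16).save('dataset/'+obj_name+'/'+obj_name+'_depth_'+str(counter)+'.png')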
Final Images:

Grouping (without collision), adding and removing multiple bodies and polygons in pymunk?

I'm using code from the pymunk index_video to create a generic function that creates multiple cars which race each other; when a car reaches the right edge of the screen, it is removed from the Space and re-generated at the left edge of the screen.
The problem is that in the example code, each part of the car (chassis, pin joint, motor, wheels) is added to the Space separately. I want to treat the entire car as a single unit whose coordinates I can keep track of by storing a reference to the whole car in a list, so that I can add it to or remove it from the Space easily.
Also, if the wheels are too close to the chassis, they collide with each other. I presume using a ShapeFilter can help avoid such collisions, but for that I need all parts of the car as a single body.
Please bear with me. I'm completely new to this jargon.
def car(space):
    pos = Vec2d(100, 200)
    wheel_color = 52, 219, 119
    shovel_color = 219, 119, 52
    mass = 100
    radius = 25
    moment = pymunk.moment_for_circle(mass, 20, radius)
    wheel1_b = pymunk.Body(mass, moment)
    wheel1_s = pymunk.Circle(wheel1_b, radius)
    wheel1_s.friction = 1.5
    wheel1_s.color = wheel_color
    space.add(wheel1_b, wheel1_s)
    mass = 100
    radius = 25
    moment = pymunk.moment_for_circle(mass, 20, radius)
    wheel2_b = pymunk.Body(mass, moment)
    wheel2_s = pymunk.Circle(wheel2_b, radius)
    wheel2_s.friction = 1.5
    wheel2_s.color = wheel_color
    space.add(wheel2_b, wheel2_s)
    mass = 100
    size = (50, 30)
    moment = pymunk.moment_for_box(mass, size)
    chassi_b = pymunk.Body(mass, moment)
    chassi_s = pymunk.Poly.create_box(chassi_b, size)
    space.add(chassi_b, chassi_s)
    vs = [(0, 0), (25, 45), (0, 45)]
    shovel_s = pymunk.Poly(chassi_b, vs, transform=pymunk.Transform(tx=85))
    shovel_s.friction = 0.5
    shovel_s.color = shovel_color
    space.add(shovel_s)
    wheel1_b.position = pos - (55, 0)
    wheel2_b.position = pos + (55, 0)
    chassi_b.position = pos + (0, -25)
    space.add(
        pymunk.PinJoint(wheel1_b, chassi_b, (0, 0), (-25, -15)),
        pymunk.PinJoint(wheel1_b, chassi_b, (0, 0), (-25, 15)),
        pymunk.PinJoint(wheel2_b, chassi_b, (0, 0), (25, -15)),
        pymunk.PinJoint(wheel2_b, chassi_b, (0, 0), (25, 15))
    )
    speed = 4
    space.add(
        pymunk.SimpleMotor(wheel1_b, chassi_b, speed),
        pymunk.SimpleMotor(wheel2_b, chassi_b, speed)
    )
So this question is actually two questions.
A. How to make a "car object" that consists of multiple parts
There is no built-in support for this; you have to keep track of it yourself.
One way to do it is to create a car class that contains all the parts of the car. Something like this (not complete code, you need to fill in the full car):
class Car():
    def __init__(self, pos):
        self.wheel_body = pymunk.Body()
        self.wheel_shape = pymunk.Circle()  # fill in body, radius, ...
        self.chassi_body = pymunk.Body()
        self.chassi_shape = pymunk.Poly()   # fill in body, vertices, ...
        self.motor = pymunk.SimpleMotor(self.wheel_body, self.chassi_body, 0)

    def add_to_space(self, space):
        space.add(self.wheel_body, self.wheel_shape, self.chassi_body, self.chassi_shape, self.motor)

    def set_speed(self, speed):
        self.motor.rate = speed

    def car_position(self):
        return self.chassi_body.position
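A hypothetical usage sketch, assuming the class above has been filled in with real bodies, shapes and positions:
space = pymunk.Space()
my_car = Car(Vec2d(100, 200))
my_car.add_to_space(space)
my_car.set_speed(4)
print(my_car.car_position())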
B. How to make the parts of the car not collide with each other
This is quite straightforward; as you already found, ShapeFilter is the way to go. For each "car", create a ShapeFilter and set a unique non-zero group on it. Then set that ShapeFilter as the filter property on each shape that makes up the car. It doesn't matter whether the shapes belong to the same body or not; any shape whose ShapeFilter has a group set will not collide with other shapes that have the same group set.
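A minimal sketch of that filtering, assuming the shapes from the car() function above (the group number is arbitrary, as long as it is non-zero and unique per car):
car_filter = pymunk.ShapeFilter(group=1)
for shape in (wheel1_s, wheel2_s, chassi_s, shovel_s):
    shape.filter = car_filter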

How to draw concave shape using Stencil test on Metal

This is the first time I'm trying to use the stencil test. I have seen some examples using OpenGL and a few using Metal, but those focused on the depth test instead. I understand the theory behind the stencil test, but I don't know how to set it up on Metal.
I want to draw irregular shapes. For the sake of simplicity, let's consider the following 2D polygon:
I want the stencil to pass where the number of overlapping triangles is odd, so that I can achieve something like this, where the white area is the area to be ignored:
I'm doing the following steps in the exact order:
Setting the depthStencilPixelFormat:
mtkView.depthStencilPixelFormat = .stencil8
mtkView.clearStencil = .allZeros
Stencil attachment:
let textureDescriptor = MTLTextureDescriptor.texture2DDescriptor(pixelFormat: .stencil8, width: drawable.texture.width, height: drawable.texture.height, mipmapped: true)
textureDescriptor.textureType = .type2D
textureDescriptor.storageMode = .private
textureDescriptor.usage = [.renderTarget, .shaderRead, .shaderWrite]
mainPassStencilTexture = device.makeTexture(descriptor: textureDescriptor)
let stencilAttachment = MTLRenderPassStencilAttachmentDescriptor()
stencilAttachment.texture = mainPassStencilTexture
stencilAttachment.clearStencil = 0
stencilAttachment.loadAction = .clear
stencilAttachment.storeAction = .store
renderPassDescriptor.stencilAttachment = stencilAttachment
Stencil descriptor:
stencilDescriptor.depthCompareFunction = MTLCompareFunction.always
stencilDescriptor.isDepthWriteEnabled = true
stencilDescriptor.frontFaceStencil.stencilCompareFunction = MTLCompareFunction.equal
stencilDescriptor.frontFaceStencil.stencilFailureOperation = MTLStencilOperation.keep
stencilDescriptor.frontFaceStencil.depthFailureOperation = MTLStencilOperation.keep
stencilDescriptor.frontFaceStencil.depthStencilPassOperation = MTLStencilOperation.invert
stencilDescriptor.frontFaceStencil.readMask = 0x1
stencilDescriptor.frontFaceStencil.writeMask = 0x1
stencilDescriptor.backFaceStencil = nil
depthStencilState = device.makeDepthStencilState(descriptor: stencilDescriptor)
And lastly, I'm setting the reference value and the stencil state in the main pass:
renderEncoder.setStencilReferenceValue(0x1)
renderEncoder.setDepthStencilState(self.depthStencilState)
Am I missing something? The result I get looks as if there were no stencil at all. I can see some differences when changing the settings of the depth test, but nothing happens when changing the settings of the stencil...
Any clue?
Thank you in advance
You're clearing the stencil texture to 0. The reference value is 1. The comparison function is "equal". So, the comparison will fail (1 does not equal 0). The operation for when the stencil comparison fails is "keep", so the stencil texture remains 0. Nothing changes for subsequent fragments.
I would expect that you'd get no rendering, although depending on the order of your vertexes and the front-face winding mode, you may be looking at the back faces of your triangles, in which case the stencil test is effectively disabled. If you don't otherwise care about front vs. back, just set both stencil descriptors the same way.
I think you need to do two passes: first, a stencil-only render; second, the color render governed by the stencil buffer. For the stencil only, you would make the compare function .always. This will toggle (invert) the low bit for each triangle that's drawn over a given pixel, giving you an indication of even or odd count. Because neither the compare function nor the operation involve the reference value, it doesn't matter what it is.
For the second pass, you'd set the compare function to .equal and the reference value to 1. The operations should all be .keep. Also, make sure to set the stencil attachment load action to .load (not .clear).
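To make that concrete, here is a minimal sketch of the two depth/stencil states described above (assuming device is your MTLDevice; the variable names are illustrative):
// Pass 1: stencil-only pass that toggles the low bit for every triangle drawn.
let fillStencil = MTLStencilDescriptor()
fillStencil.stencilCompareFunction = .always
fillStencil.stencilFailureOperation = .keep
fillStencil.depthFailureOperation = .keep
fillStencil.depthStencilPassOperation = .invert
fillStencil.readMask = 0x1
fillStencil.writeMask = 0x1
let fillDescriptor = MTLDepthStencilDescriptor()
fillDescriptor.frontFaceStencil = fillStencil
fillDescriptor.backFaceStencil = fillStencil   // same behavior regardless of winding
let fillState = device.makeDepthStencilState(descriptor: fillDescriptor)

// Pass 2: color pass that only draws where the count ended up odd (stencil == 1).
let drawStencil = MTLStencilDescriptor()
drawStencil.stencilCompareFunction = .equal
drawStencil.stencilFailureOperation = .keep
drawStencil.depthFailureOperation = .keep
drawStencil.depthStencilPassOperation = .keep
drawStencil.readMask = 0x1
drawStencil.writeMask = 0x1
let drawDescriptor = MTLDepthStencilDescriptor()
drawDescriptor.frontFaceStencil = drawStencil
drawDescriptor.backFaceStencil = drawStencil
let drawState = device.makeDepthStencilState(descriptor: drawDescriptor)

// In the second pass, remember to load (not clear) the stencil attachment and set:
// renderEncoder.setDepthStencilState(drawState)
// renderEncoder.setStencilReferenceValue(1)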

How do I specify size of a dxf in openscad?

I am new to OpenSCAD and trying to make a 3D model from a DXF file. I want to specify its size as 130x130. I've been able to get as far as the code below, but it still does not give me the size I want:
linear_extrude(height = 5, center = true, convexity = 10) import (file="bahtinov.dxf");
Any help is appreciated!
You can use dxf_dim(): create an additional layer in your DXF, e.g. "dimensions", and draw a horizontal and a vertical dimension line spanning the maximum width and the maximum height, as described in the documentation, with identifiers such as "TotalWidth" and "TotalHeight". Here is my test drawing as an example:
get the values with:
tw = dxf_dim(file="bahtinov.dxf", name="TotalWidth", layer="dimensions", scale=1);
th = dxf_dim(file="bahtinov.dxf", name="TotalHeight", layer="dimensions", scale=1);
scale the part:
scale([130/tw,130/th,1]) linear_extrude(height = 5, center = true) import(file="bahtinov.dxf", layer="layerName", scale=1);
You can achieve this by using resize() on the imported DXF:
linear_extrude(height = 5, center = true, convexity = 10) resize([130,130]) import (file="bahtinov.dxf");
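If you only want to force the width and keep the aspect ratio, resize() can also auto-scale the other axis (a variant sketch of the same idea):
linear_extrude(height = 5, center = true, convexity = 10)
    resize([130, 0], auto = true)
        import(file = "bahtinov.dxf");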
I don't think you can set the size directly on import, but you can scale it afterwards.
scaling_factor=0.5;
scale([scaling_factor,scaling_factor,1])
linear_extrude(height = 5, center = true, convexity = 10) import (file="bahtinov.dxf");

Add rectangle as inline-element with iText

How do I add a rectangle (or other graphical elements) as inline-elements to an iText PDF?
Example code of what I'm trying to achieve:
foreach (Row r in entrylist)
{
    p = new Paragraph();
    p.IndentationLeft = 10;
    p.SpacingBefore = 10;
    p.SpacingAfter = 10;
    p.Add(new Rectangle(0, 0, 10, 10)); // <<<<<<<<< THAT ONE FAILS
    p.Add(new Paragraph(r.GetString("caption"), tahoma12b));
    p.Add(new Paragraph(r.GetString("description"), tahoma12));
    ((Paragraph)p[1]).IndentationLeft = 10;
    doc.Add(p);
}
It's something like a column of text blocks, each of which has a (printed-only) checkbox.
I've tried various things with DirectContent, but it requires me to provide absolute X and Y values, which I simply don't have. The elements should be printed at the current position, wherever that may be.
Any clues?
You need a Chunk for which you've defined a generic tag. For instance, in this example listing a number of movies, a snippet of film is drawn around the year a movie was produced, and an ellipse is drawn in the background of the link to IMDB.
If you look at the MovieYears example, you'll find out how to use the PdfPageEvent interface and its onGenericTag() method. You're right that you can't add a Rectangle to a Paragraph (IMHO that wouldn't make much sense). As you indicate, you need to draw the rectangle to the direct content, and you get the coordinates of a Chunk by using the setGenericTag() method. As soon as the Chunk is drawn on the page, its coordinates will be passed to the onGenericTag() method.
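A hedged sketch of that approach in C# (assuming iTextSharp 5; the event class name and the "checkbox" tag are illustrative):
class CheckboxEvent : PdfPageEventHelper
{
    public override void OnGenericTag(PdfWriter writer, Document document, Rectangle rect, string text)
    {
        if (text != "checkbox") return;
        // rect is the position of the tagged chunk; draw a 10x10 box just left of it.
        PdfContentByte cb = writer.DirectContent;
        cb.Rectangle(rect.Left - 15, rect.Bottom, 10, 10);
        cb.Stroke();
    }
}

// When building the paragraph, tag the caption chunk:
Chunk caption = new Chunk(r.GetString("caption"), tahoma12b);
caption.SetGenericTag("checkbox");
p.Add(caption);
// Register the event once on the writer, before adding content:
// writer.PageEvent = new CheckboxEvent();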