How do I specify the size of a DXF in OpenSCAD?

I am new to OpenSCAD and trying to make a 3D model from a DXF file. I want to specify its size as 130x130. I've been able to get as far as the code below, but it still does not produce the size I want:
linear_extrude(height = 5, center = true, convexity = 10) import (file="bahtinov.dxf");
Any help is appreciated!

You can use dxf_dim(): add an extra layer to your DXF, e.g. "dimensions", and draw a horizontal and a vertical dimension line spanning the maximum width and the maximum height, as described in the documentation, with identifiers e.g. "TotalWidth" and "TotalHeight". Here is my test drawing as an example:
get the values with:
tw = dxf_dim(file="bahtinov.dxf", name="TotalWidth", layer="dimensions", scale=1);
th = dxf_dim(file="bahtinov.dxf", name="TotalHeight", layer="dimensions", scale=1);
scale the part:
scale([130/tw,130/th,1]) linear_extrude(height = 5, center = true) import(file="bahtinov.dxf", layer="layerName", scale=1);
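If the outline is not square, scaling x and y independently to 130 will distort it. A small variation that preserves the aspect ratio (assuming the same "dimensions" layer and identifiers as above) is to scale uniformly by the larger of the two measured dimensions:
s = 130 / max(tw, th); // uniform factor: the larger side becomes 130
scale([s, s, 1]) linear_extrude(height = 5, center = true) import(file="bahtinov.dxf");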

You can achieve this by using resize() on the imported DXF:
linear_extrude(height = 5, center = true, convexity = 10) resize([130,130]) import (file="bahtinov.dxf");
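As a side note, resize() can also preserve the aspect ratio for you: with auto = true, axes given as 0 are scaled proportionally to the ones you specify. A minimal sketch, assuming the same file:
linear_extrude(height = 5, center = true, convexity = 10) resize([130, 0], auto = true) import(file = "bahtinov.dxf");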

I don't think you can, but you can scale it afterwards.
scaling_factor=0.5;
scale([scaling_factor,scaling_factor,1])
linear_extrude(height = 5, center = true, convexity = 10) import (file="bahtinov.dxf");

Related

gdal2tiles.py: Slice an image with its center being (0,0)

I made a map for the Minetest game I play.
The project is here: https://github.com/amelaye/aiwMapping. I created the map with the Python script gdal2tiles, like this: ./gdal2tiles.py -l -p raster -z 0-10 -w none ../map.png ../tiles
Here is the interesting part of the code:
var minZoom = 0
var maxZoom = 9
var img = [
  20000, // original width of image
  20000  // original height of image
]
// create the map
var map = L.map(mapid, {
  minZoom: minZoom,
  maxZoom: maxZoom
})
var rc = new L.RasterCoords(map, img)
map.setView(rc.unproject([9000, 10554]), 7)
L.control.layers({
  'Spawn': layerGeoGlobal(window.geoInfoSpawn, map, rc, 'red', 'star', 'fa'),
}, {
  'Bounds': layerBounds(map, rc, img),
}).addTo(map)
L.tileLayer('./tiles/{z}/{x}/{y}.png', {
  noWrap: true,
  attribution: ''
}).addTo(map)
It works like a charm, but there is a problem: in Minetest, the coords (lat and lon) go from -10000 to 10000, while in my Leaflet map the coords are still positive, going from 0 to 20000.
How can I solve this problem?
CRS.Simple does not work.
PS: No related questions have been posted, please read my message carefully.
When gdal2tiles is not given a georeferenced image (or a world file accompanying the image), it will assume that the (0,0) coordinate is at one of the corners of the image, and that one pixel equals one map unit (both horizontally and vertically).
Since your input image is a .png file, the most straightforward way of working around the issue will be creating an appropriate world file.
If your input image is map.png, create a new plain text file named map.pgw with the following contents...
1
0
0
1
-10000
-10000
...then run gdal2tiles.py again. Note that this world file assumes that one pixel equals one map unit, and that the image's first corner is at the (-10000,-10000) coordinate.
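For reference, the six lines of such a world file are: x pixel size, row rotation, column rotation, y pixel size, and the x and y map coordinates of the first pixel. If you regenerate the map often, a tiny script can write the file for you; a minimal sketch in Python, assuming the image lives at ../map.png as in the question:
# write map.pgw next to map.png: one map unit per pixel,
# first pixel at the (-10000, -10000) coordinate
values = [1, 0, 0, 1, -10000, -10000]
with open("../map.pgw", "w") as f:
    f.write("\n".join(str(v) for v in values) + "\n")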

How to get depth images from the camera in pyBullet

In pyBullet, I have struggled a bit with generating a dataset. What I want to achieve is to get pictures of what the camera is seeing: img = p.getCameraImage(224, 224, renderer=p.ER_BULLET_HARDWARE_OPENGL)
Basically: to get the images that are seen in Synthetic Camera RGB data and Synthetic Camera Depth Data (especially this one), i.e. the camera preview windows shown on the left of the debug visualizer.
import pybullet as p
from PIL import Image

# inside the capture loop; yaw, pitch, center_x, center_y and counter come from elsewhere
p.resetDebugVisualizerCamera(cameraDistance=0.5, cameraYaw=yaw, cameraPitch=pitch, cameraTargetPosition=[center_x, center_y, 0.785])
img = p.getCameraImage(224, 224, renderer=p.ER_BULLET_HARDWARE_OPENGL)
rgbBuffer = img[2]    # RGBA pixel data
depthBuffer = img[3]  # float depth buffer
list_of_rgbs.append(rgbBuffer)
list_of_depths.append(depthBuffer)
rgbim = Image.fromarray(rgbBuffer)
depim = Image.fromarray(depthBuffer)
rgbim.save('test_img/rgbtest'+str(counter)+'.jpg')
depim.save('test_img/depth'+str(counter)+'.tiff')
counter += 1
I already run the following, so I don't know if it is related to the settings: p.configureDebugVisualizer(p.COV_ENABLE_DEPTH_BUFFER_PREVIEW, 1)
I have tried several methods because the depth part is complicated. I don't understand if it needs to be treated separately because of the pixel color information, or if I need to work with the projection and view matrices.
I need to save it as a .tiff because I get some "cannot save F to png" errors. I tried playing a bit with the bit information but accomplished nothing. In case you ask, this is what I tried:
# depthBuffer[depthBuffer > 65535] = 65535
# im_uint16 = np.round(depthBuffer).astype(np.uint16)
# depthBuffer = im_uint16
The following is an example of the .tiff image:
And to end, just a remark: these depth images keep changing (looking at all of them, then at the RGB images, then passing again to the depth images shows different images, despite being the same image). I have never seen anything like this before.
I thought: "I managed to fix this some time ago, might as well post the answer I found."
The data structure of img has to be taken into account!
import numpy as np
from PIL import Image
import matplotlib.pyplot as plt

# IMG_SIZE must match the width/height passed to getCameraImage (224 here);
# near and far must match the near/far planes of the projection matrix in use
img = p.getCameraImage(224, 224, shadow=False, renderer=p.ER_BULLET_HARDWARE_OPENGL)
rgb_opengl = np.reshape(img[2], (IMG_SIZE, IMG_SIZE, 4))           # RGBA image
depth_buffer_opengl = np.reshape(img[3], [IMG_SIZE, IMG_SIZE])     # raw depth buffer in [0, 1]
depth_opengl = far * near / (far - (far - near) * depth_buffer_opengl)  # linearized metric depth
seg_opengl = np.reshape(img[4], [IMG_SIZE, IMG_SIZE]) * 1. / 255.  # segmentation mask
rgbim = Image.fromarray(rgb_opengl)
rgbim_no_alpha = rgbim.convert('RGB')  # drop the alpha channel so JPEG can be saved
rgbim_no_alpha.save('dataset/'+obj_name+'/'+obj_name+'_rgb_'+str(counter)+'.jpg')
# plt.imshow(depth_buffer_opengl)
plt.imsave('dataset/'+obj_name+'/'+obj_name+'_depth_'+str(counter)+'.jpg', depth_buffer_opengl)
# plt.show()
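Note that near and far in the linearization formula must match the near/far planes the renderer actually used. If you build the view and projection matrices yourself, you know those values exactly; a minimal sketch (the eye/target positions and FOV here are placeholder assumptions, not values from the original post):
near, far = 0.01, 10.0
view = p.computeViewMatrix(cameraEyePosition=[0, 0, 1],
                           cameraTargetPosition=[0, 0, 0],
                           cameraUpVector=[0, 1, 0])
proj = p.computeProjectionMatrixFOV(fov=60, aspect=1.0, nearVal=near, farVal=far)
img = p.getCameraImage(224, 224, viewMatrix=view, projectionMatrix=proj,
                       renderer=p.ER_BULLET_HARDWARE_OPENGL)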
Final Images:

Paper.js: subtracting a path from a shape not working properly

With Paper.js, I am trying to subtract a path from a circle, but it is not working as expected. Here is my code:
// Create circle
var c1 = new Path.Circle(new Point(100, 70), 50);
c1.fillColor = 'red';
// Create path
var eraser = new paper.Path({strokeColor: 'black', strokeWidth: 20, strokeCap: 'round'});
eraser.add(new paper.Point(20, 20));
eraser.add(new paper.Point(100, 80));
eraser.add(new paper.Point(150, 150));
eraser.fillColor = 'white';
eraser.opacity = 0.6;
// Subtract
result = c1.subtract(eraser);
result.selected = true;
result.opacity = 0.8;
result.fillColor = 'pink';
It seems the path is treated as a polygon rather than a stroked line when subtracted:
Here is a jsFiddle : https://jsfiddle.net/Imabot/785ergpy/35/
Yes, this is because Paper.js does boolean operations with the path's fill geometry, ignoring the stroke.
This is more obvious if you remove the stroke from your example (see this sketch).
What you need to do, if you want to subtract the stroke, is to turn it into a path first.
Unfortunately, Paper.js doesn't have this feature yet, even though it has been planned for a long time and exists as an experimental version (see this issue).
So you have to either use this experimental feature or use a vector drawing application like Adobe Illustrator and export your stroke as a path (as SVG, for example) before using it with Paper.js.
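Another workaround that stays inside Paper.js is to approximate the stroke with filled shapes and subtract those instead. For a round-capped stroke, subtracting circles of radius strokeWidth / 2 sampled along the path gets visually close; a rough sketch (the sampling step is an assumption you may need to tune):
// approximate subtracting a 20px round-capped stroke from the circle
var c1 = new Path.Circle(new Point(100, 70), 50);
c1.fillColor = 'red';
var eraser = new Path();
eraser.add(new Point(20, 20));
eraser.add(new Point(100, 80));
eraser.add(new Point(150, 150));
var result = c1;
var step = 2; // sampling distance in px: smaller is smoother but slower
for (var offset = 0; offset <= eraser.length; offset += step) {
    var dot = new Path.Circle(eraser.getPointAt(offset), 10); // strokeWidth / 2
    var next = result.subtract(dot);
    result.remove();
    dot.remove();
    result = next;
}
eraser.remove();
result.fillColor = 'pink';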

Grouping (without collision), adding and removing multiple bodies and polygons in pymunk?

I'm using code from the pymunk index_video to create a generic function that creates multiple cars which race each other; when they reach the right edge of the screen, they are removed from the Space and regenerated at the left edge.
The problem is that in the example code each part of the car (chassis, pin joints, motor, wheels) is added to the Space separately. I wanted to treat the entire car as a single body whose coordinates I can keep track of by storing a reference to the whole car in a list, so I can add it to or remove it from the Space easily.
Also, if the wheels are too close to the chassis, they collide with each other. I presume using a ShapeFilter can help avoid such collisions, but for that I need all parts of the car as a single body.
Please bear with me. I'm completely new to this jargon.
import pymunk
from pymunk import Vec2d

def car(space):
    pos = Vec2d(100, 200)
    wheel_color = 52, 219, 119
    shovel_color = 219, 119, 52
    mass = 100
    radius = 25
    moment = pymunk.moment_for_circle(mass, 20, radius)
    wheel1_b = pymunk.Body(mass, moment)
    wheel1_s = pymunk.Circle(wheel1_b, radius)
    wheel1_s.friction = 1.5
    wheel1_s.color = wheel_color
    space.add(wheel1_b, wheel1_s)
    mass = 100
    radius = 25
    moment = pymunk.moment_for_circle(mass, 20, radius)
    wheel2_b = pymunk.Body(mass, moment)
    wheel2_s = pymunk.Circle(wheel2_b, radius)
    wheel2_s.friction = 1.5
    wheel2_s.color = wheel_color
    space.add(wheel2_b, wheel2_s)
    mass = 100
    size = (50, 30)
    moment = pymunk.moment_for_box(mass, size)
    chassi_b = pymunk.Body(mass, moment)
    chassi_s = pymunk.Poly.create_box(chassi_b, size)
    space.add(chassi_b, chassi_s)
    vs = [(0, 0), (25, 45), (0, 45)]
    shovel_s = pymunk.Poly(chassi_b, vs, transform=pymunk.Transform(tx=85))
    shovel_s.friction = 0.5
    shovel_s.color = shovel_color
    space.add(shovel_s)
    wheel1_b.position = pos - (55, 0)
    wheel2_b.position = pos + (55, 0)
    chassi_b.position = pos + (0, -25)
    space.add(
        pymunk.PinJoint(wheel1_b, chassi_b, (0, 0), (-25, -15)),
        pymunk.PinJoint(wheel1_b, chassi_b, (0, 0), (-25, 15)),
        pymunk.PinJoint(wheel2_b, chassi_b, (0, 0), (25, -15)),
        pymunk.PinJoint(wheel2_b, chassi_b, (0, 0), (25, 15))
    )
    speed = 4
    space.add(
        pymunk.SimpleMotor(wheel1_b, chassi_b, speed),
        pymunk.SimpleMotor(wheel2_b, chassi_b, speed)
    )
So this question is actually two questions.
A. How to make a "car object" that consists of multiple parts
There is no built-in support for this; you have to keep track of it yourself.
One way to do it is to create a car class that contains all the parts of the car. Something like this (not complete code, you need to fill in the full car):
class Car():
    def __init__(self, pos):
        # not complete: fill in masses, moments, radii and positions as in car() above
        self.wheel_body = pymunk.Body()
        self.wheel_shape = pymunk.Circle(self.wheel_body, 25)
        self.chassi_body = pymunk.Body()
        self.chassi_shape = pymunk.Poly.create_box(self.chassi_body, (50, 30))
        self.motor = pymunk.SimpleMotor(self.wheel_body, self.chassi_body, 0)

    def add_to_space(self, space):
        space.add(self.wheel_body, self.wheel_shape, self.chassi_body, self.chassi_shape, self.motor)

    def set_speed(self, speed):
        self.motor.rate = speed

    def car_position(self):
        return self.chassi_body.position
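Usage would then look something like this (hypothetical values, following the question's setup):
space = pymunk.Space()
my_car = Car(Vec2d(100, 200))
my_car.add_to_space(space)
my_car.set_speed(4)
print(my_car.car_position())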
B. How to make parts of the car to not collide with each other
This is quite straightforward: as you already found, ShapeFilter is the way to go. For each car, create a ShapeFilter and set a unique non-zero group on it. Then set that ShapeFilter as the filter property on each shape that makes up the car. It doesn't matter if the shapes belong to the same body or not; any shape whose ShapeFilter has a group set will not collide with other shapes whose filter has the same group.
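A minimal sketch of what that looks like, using the shapes from the car() function above (each car would get its own group number):
car_filter = pymunk.ShapeFilter(group=1)  # unique non-zero group per car
for shape in (wheel1_s, wheel2_s, chassi_s, shovel_s):
    shape.filter = car_filter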

iOS-Charts: set maximum visible x-axis values

I'm using ios-charts (https://github.com/danielgindi/Charts). I have a LineChartView with 12 values in the x axis.
This however is far too many to see at the same time, so I want to display only 5 and then let the user drag to the right to see the next.
I've tried this:
let chart = LineChartView()
chart.dragEnabled = true
chart.setVisibleXRangeMaximum(5)
let xAxis = chart.xAxis
xAxis.axisMinValue = 0
xAxis.axisMaxValue = 5.0
xAxis.setLabelsToSkip(0)
But I still see all 12 values at the same time. How can I only see 5?
I finally got it!
The correct answer is:
chart.setVisibleXRangeMaximum(5)
This, however, needs to be set after the data has been set on the chart (not in a configuration step that runs before).
This did the trick for me.
You should set the labelCount property of the chart view's x-axis.
In Objective-C, like this:
_chartView.xAxis.labelCount = 5;
In Swift:
chartView.xAxis.labelCount = 5
Here is my finding!
You don't really need to use labelCount.
If you are using a DefaultAxisValueFormatter, never use labelCount; a lot of errors pop up. Just use option 2:
chart.setVisibleXRangeMaximum(number) will do.
Put this after the chart data has been set; here you can see the detail:
combinedChartView.data = combineData // this needs to come first
combinedChartView.setVisibleXRangeMaximum(2) // after the data is set
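Putting it together for the original 12-entry line chart, a minimal sketch assuming a recent version of the Charts library (the entries array is a placeholder; the key point is the ordering):
let entries = (0..<12).map { ChartDataEntry(x: Double($0), y: Double($0 * $0)) }
let dataSet = LineChartDataSet(entries: entries, label: "values")
chart.data = LineChartData(dataSet: dataSet) // data must be set first
chart.setVisibleXRangeMaximum(5)             // then limit the visible window to 5 entries
chart.moveViewToX(0)                         // start scrolled to the left edge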