I am building my robot in Cyberbotics Webots, and I can't figure out how to make a good-looking 3D model, at least not at the level of the Mantis hexapod.
I understand that ready-made models can only be imported in VRML97 format, but that format is not supported by Fusion 360 or other programs.
In Webots itself, I did not find a way to build a model more complex than one assembled from cubes, pyramids, and other simple primitives.
I also had the idea of assembling a model from a large number of rectangular boxes using grouping, but it seems to me that such a model would slow the simulation down considerably.
Is it possible to see how the finished robots are made and to modify them?
The node you are looking for is IndexedFaceSet (https://www.cyberbotics.com/doc/reference/indexedfaceset). It lets you model a shape efficiently as a set of triangular faces. You will find an example of this in this simulation world: https://cyberbotics.com/doc/guide/samples-geometries#high_resolution_indexedfaceset-wbt
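As a rough illustration (not from the original answer), here is a small Python sketch that writes out the text of an IndexedFaceSet node for a simple pyramid. The coordinates and indices are made up; only the coord/coordIndex structure, which follows the documentation linked above, is the point, and the resulting text can be pasted into a .wbt world file.

```python
# Minimal sketch: build the text of a Webots/VRML97 IndexedFaceSet node for a
# square pyramid. Geometry values are illustrative placeholders.

points = [
    (-0.1, 0.0, -0.1),  # 0: base corners
    ( 0.1, 0.0, -0.1),  # 1
    ( 0.1, 0.0,  0.1),  # 2
    (-0.1, 0.0,  0.1),  # 3
    ( 0.0, 0.2,  0.0),  # 4: apex
]

# Each face is a list of point indices; -1 terminates a face in coordIndex.
faces = [
    [0, 1, 4],       # four triangular sides
    [1, 2, 4],
    [2, 3, 4],
    [3, 0, 4],
    [3, 2, 1, 0],    # square base
]

coord = ", ".join(f"{x} {y} {z}" for x, y, z in points)
coord_index = " ".join(" ".join(str(i) for i in face) + " -1" for face in faces)

node_text = (
    "Shape {\n"
    "  geometry IndexedFaceSet {\n"
    f"    coord Coordinate {{ point [ {coord} ] }}\n"
    f"    coordIndex [ {coord_index} ]\n"
    "  }\n"
    "}\n"
)

print(node_text)  # paste the output into your .wbt world file
```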
One possible workflow is to create your mesh in Blender and then use the Webots exporter to bring it into Webots: https://github.com/cyberbotics/blender-webots-exporter
I'm trying to figure out a good way to programmatically generate contours describing a 2D surface from a 3D STEP model. The application is generating NC code for a laser-cutting program from a 3D model.
Note: it's easy enough to do this in a wide variety of CAD systems. I am writing software that needs to do it automatically.
For example, this (a STEP model):
Needs to become this (a vector file, like an SVG or a DXF):
Perhaps the most obvious way of tackling the problem is to parse the STEP model and run some kind of algorithm to detect planes and select the largest as the cut surface, then generate the contour. Not a simple task!
I've also considered using a pre-existing SDK to render the model with an orthographic camera, capture a high-resolution image, and then operate on it to generate the appropriate contours. This method would work, but it would be CPU-heavy, and its accuracy would be limited by the pixel resolution of the rendered image - not ideal.
This is perhaps a long shot, but does anyone have thoughts about this? Cheers!
I would use a CAD library to load the STEP file (not a CAD API), look for the planar face with the highest number of edge curves in its face loop, and transform those curves onto the XY plane. Afterwards, finding the 2D geometry's min/max for centering and so on would be pretty easy.
Depending on the programming language you are using, I would search Google for "CAD control" or "CAD component" combined with "STEP import".
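As an illustration of that idea (not part of the original answer), here is a sketch using FreeCAD's Python API as the CAD library; any kernel with STEP import would do, and the file name and sampling density are placeholders:

```python
# Sketch of the approach above using FreeCAD's Python API (run inside FreeCAD
# or with FreeCAD's libraries on the Python path).
import Part

shape = Part.read("part.step")  # placeholder file name

# Pick the planar face with the largest number of edges in its boundary.
planar_faces = [f for f in shape.Faces if isinstance(f.Surface, Part.Plane)]
cut_face = max(planar_faces, key=lambda f: len(f.Edges))

# Discretize the face edges into 2D contours.
contours = []
for edge in cut_face.Edges:
    pts3d = edge.discretize(Number=50)  # sample points along the edge
    # Dropping z assumes the face lies in (or has been transformed to) the XY plane.
    contours.append([(p.x, p.y) for p in pts3d])

# Min/max of the 2D geometry for centering, as mentioned above.
xs = [x for c in contours for x, _ in c]
ys = [y for c in contours for _, y in c]
print("bounding box:", min(xs), min(ys), max(xs), max(ys))
```

From the sampled contours you could then write out an SVG or DXF with any 2D export library.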
I am making a game in Unity that involves creatures whose animations are determined by physics. For example, to move a limb of a creature, I can apply forces to the rigidbody it's associated with. I know how to apply forces programmatically through a script to create movements, but I'd like to create more complex and organic movements and thought that I might be able to use a neural network to do this.
I'd like each of the creatures to have a distinct way of moving in the world. I'd like to first puppeteer the creatures manually using my hand (with a Leap Motion controller), and have a neural network generate new movements based on the training I did with my hand.
More concretely, my manual puppeteering setup will apply forces to the rigidbodies of the creature as I move my hand. So if I lift my finger up, the system would apply a series of upward forces to the limb that is mapped to my finger. As I am puppeteering the creature, the NN receives Vector3 forces for each of the rigidbodies. In a way, this is the same task as generating new text based on a corpus of texts, but in this case my input is forces rather than strings.
Based on that training set, is it possible for the NN to generate movements for the characters (forces to be applied to the limbs) to mimic the movements I did with my hand?
I don't have that much experience with neural networks, but am eager to learn, specifically for this project. It would be great to know about similar projects that were done in Unity, or relevant libraries I could use that would simplify the implementation. Also, please let me know if there is anything I can clarify!
Not really an answer, but it would not fit in a comment.
I'm not sure the strategy you want to apply to train your model is the right one.
I would go for reinforcement learning methods (you can check this question for more information about them), using, for example, the distance traveled by the creature's center of mass along the x-axis as the fitness. If this leads to weird behaviours (like this well-known robot), you could, for example, penalize individuals based on the distance their CoM travels along the y and z axes, to favour individuals that keep their CoM in the same plane.
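As a tiny illustration of that fitness idea (my own sketch; the penalty weight is arbitrary and would need tuning):

```python
# Illustrative fitness for an evolutionary/RL setup: reward forward progress of
# the centre of mass along x, penalize drift along y and z.
def fitness(com_start, com_end, drift_penalty=0.5):
    dx = com_end[0] - com_start[0]
    dy = abs(com_end[1] - com_start[1])
    dz = abs(com_end[2] - com_start[2])
    return dx - drift_penalty * (dy + dz)

# Example: a creature that moved 3 m forward but drifted 1 m sideways.
print(fitness((0.0, 0.0, 0.0), (3.0, 1.0, 0.0)))  # 2.5
```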
Without knowing exactly what you want to achieve, it is hard to give you more advice. However, if you are not looking only for neural-network-based techniques, there is a really great paper you might want to have a look at (here is the video of their results).
I am working on a drone-based video surveillance project, and I am required to implement object tracking as part of it. I have tried conventional approaches, but these seem to fail due to the non-static environment.
This is an example of what I would want to achieve, but it uses background subtraction, which is impossible to achieve with a non-static camera.
I have also tried feature-based tracking using SURF features, but it fails for smaller objects and is prone to false positives.
What would be the best way to achieve the objective in this scenario?
Edit: An object can be anything within a defined region of interest, but it will usually be a person or a vehicle. The idea is that the user draws a bounding box defining the region of interest, and the drone then has to start tracking whatever is inside it.
Tracking local features (like SURF) won't work in your case, and training a classifier (like boosting with Haar features) won't work either. Let me explain why.
Your object to track will be contained in a bounding box. Inside this bounding box there could be any object, not necessarily a person, a car, or whatever else you used to train your classifier.
Also, near the object inside the bounding box there will be background clutter that changes as soon as your target object moves, even if the appearance of the object itself doesn't change.
Moreover, the appearance of your object may change (e.g. a person turns or drops their jacket, a vehicle catches a reflection of the sun, etc.), or the object may be (partially or totally) occluded for a while. So tracking local features is very likely to lose the tracked object very soon.
So the first problem is that you must deal with potentially many different objects to track, possibly unknown a priori, and you cannot train a classifier for each of them.
The second problem is that you must follow an object whose appearance may change, so you need to update your model.
The third problem is that you need some logic that tells you that you lost the tracked object, and you need to detect it again in the scene.
So what to do? Well, you need a good long term tracker.
One of the best (to my knowledge) is Tracking-Learning-Detection (TLD) by Kalal et al. You can see a lot of example videos on the dedicated page, and you can see that it works pretty well with moving cameras, objects that change appearance, etc.
Luckily for us, OpenCV 3.0.0 has an implementation of TLD, and you can find sample code here (there is also a Matlab + C implementation on the aforementioned site).
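For reference, a minimal sketch of using that tracker from Python (this assumes the opencv-contrib tracking module is installed; the video path is a placeholder, and in OpenCV 4.x the constructor moved to cv2.legacy.TrackerTLD_create):

```python
# Minimal TLD tracking loop with OpenCV 3.x + contrib.
import cv2

cap = cv2.VideoCapture("drone_footage.mp4")   # placeholder path
ok, frame = cap.read()

bbox = cv2.selectROI("select object", frame)  # user draws the region of interest
tracker = cv2.TrackerTLD_create()
tracker.init(frame, bbox)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    found, bbox = tracker.update(frame)
    if found:
        x, y, w, h = [int(v) for v in bbox]
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("tracking", frame)
    if cv2.waitKey(1) & 0xFF == 27:            # Esc to quit
        break
```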
The main drawback is that this method can be slow. You can test whether that is an issue for you. If so, you can downsample the video stream, upgrade your hardware, or switch to a faster tracking method, but that depends on your requirements and needs.
Good luck!
The simplest thing to try is frame differencing instead of background subtraction. Subtract the previous frame from the current frame, threshold the difference image to make it binary, and then use some morphology to clean up the noise. With this approach you typically only get the edges of the objects, but often that is enough for tracking.
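The rest of this answer references MATLAB's Computer Vision Toolbox; purely for illustration, the same frame-differencing pipeline sketched in Python with OpenCV (the threshold, kernel size, and file name are arbitrary starting values):

```python
# Frame differencing: subtract consecutive frames, threshold the difference,
# and clean up the binary mask with morphology.
import cv2

cap = cv2.VideoCapture("drone_footage.mp4")   # placeholder path
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(gray, prev_gray)
    _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)   # remove speckle noise
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)  # fill small gaps
    cv2.imshow("motion mask", mask)
    prev_gray = gray
    if cv2.waitKey(1) & 0xFF == 27:
        break
```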
You can also try to augment this approach using vision.PointTracker, which implements the KLT (Kanade-Lucas-Tomasi) point tracking algorithm.
Alternatively, you can try using dense optical flow. See opticalFlowLK, opticalFlowHS, and opticalFlowLKDoG.
I have a very simple but large scene containing lots of objects, and many of them are small but curved, so they have high polygon counts. The FPS in the scene is really poor. I learned that a level-of-detail (LOD) optimization should help a lot.
I am using three.js, and it has an option to set LOD. But the model doesn't have any LOD information (alternate meshes for each object corresponding to viewing distance). Is there a tool that can generate this information automatically by decimating the original mesh to create the alternate meshes?
But I can't imagine how textures would be mapped onto the decimated meshes. Do I have to create the LOD information manually? 3D editors like Blender, 3ds Max, and the Unity editor let me set these meshes up individually, but I have about 200 meshes in my scene.
Level-of-detail information generally cannot be generated automatically, and yes, creating the LOD info is a painstaking process. You can look at the LOD Book site for help.
The accepted answer to this question is actually not quite correct anymore.
While it's true that creating LOD data is a painstaking process, it becomes easy when using InstaLOD. InstaLOD is a fully automatic 3D optimization solution that can optimize any static or skeletal mesh while maintaining all vertex attributes such as texture coordinates. Besides polygon optimization, InstaLOD also features remeshing, occlusion culling, impostor creation, and other unique methods related to the optimization of individual 3D models and complex scenes.
DISCLAIMER: I am one of the devs of InstaLOD.
I want to make a Fruit Ninja-style blade. I am using cocos2d, and MotionStreak looks really ugly for this. Is there another approach, or better settings for MotionStreak? Maybe a particle system? Are there any good free tools similar to ParticleDesigner?
I have my own implementation using OpenGL triangle strips mapped with a texture. The blade is very smooth if the distances between adjacent points are small enough. I use linear interpolation to insert more points between any two points whose distance is greater than a predefined constant. I'm thinking of using second-order interpolation, but the implementation is more difficult and performance may suffer.
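The interpolation step described above could look something like this sketch (Python for brevity, not taken from the CCBlade source linked below; the spacing constant is arbitrary):

```python
import math

# Insert intermediate points between consecutive path samples so that no gap
# exceeds max_dist, keeping the triangle strip smooth. Purely illustrative.
def densify(points, max_dist=0.05):
    out = [points[0]]
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        dist = math.hypot(x1 - x0, y1 - y0)
        n = max(1, math.ceil(dist / max_dist))  # segments needed for this gap
        for i in range(1, n + 1):
            t = i / n
            out.append((x0 + t * (x1 - x0), y0 + t * (y1 - y0)))
    return out

# A 0.2-long gap gets split into four 0.05-long segments.
print(densify([(0.0, 0.0), (0.2, 0.0)], max_dist=0.05))
```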
Source code is available here https://github.com/hiepnd/CCBlade
I don't know how much effort it will take, but you can create and change the shape of a filter and just apply a white-to-gray gradient as its texture; that will give very good-looking results. I myself am working with cocos2d-x (a C++ port of cocos2d), and it has samples for dynamic filters (you create and manipulate a mesh and everything else is done automatically). It uses the CCActionGrid class, but I haven't used this class myself yet. If you can't solve your problem with it, ask me to dig deeper.
http://pixlatedstudios.com/2012/02/fruit-ninja-like-blade-effect/
Worth checking out! It is based on hiepnd's CCBlade tutorial.