I am trying to understand how shapes, walls, and visibility work in AnyLogic. I am designing a store with various work stations, and I use 3D shapes to represent these work stations. I then put rectangular walls around the stations to prevent agents (workers) from walking through the 3D shapes. Do I actually need the additional rectangular walls, or can the 3D shapes on their own prevent agents from passing through the stations? Also, how does visibility work? Suppose I have a shape in the presentation layer but its visibility is set to false. Does the shape then have no effect on the simulation, or does it still have an effect and the only change is that we can't see it in the 3D visualisation? The same question applies to the rectangular walls: if I build a wall around a shape and set the wall's visibility to false, does that mean the wall is no longer there and has no effect on the simulation?
Regular agents do not avoid any type of element (neither walls nor shapes).
Pedestrian and Transporter type agents avoid walls.
Pedestrian and Transporter type agents do not avoid shapes.
Visibility only affects rendering, not function. So if a wall is set to not visible, pedestrians/transporters will still avoid it.
I have a set of 2D polygon shapes, each one with a varying amount of points that are determined at runtime.
I need to draw (or "paint") these polygons on top of a surface.
I also need to fill in the areas of each polygon with a specific color that is also determined at runtime.
The polygons only really need to be drawn once, but solutions that would allow me to update/change the colors would be nice. The project will be built for WebGL.
The polygons are meant to mark off specific areas on a surface, so they shouldn't repeat.
Polygons can overlap with each other.
What are the different/best solutions that may help me achieve this?
I am also open to suggestions and further reading.
I'm a relative beginner to Unity but a somewhat experienced programmer, and I know a little about shader programming.
By polygons I don't mean geometry; I just mean regular shapes I would like to paint onto a surface. It's just that I will always have polygons with a varying number of points that are loaded in at runtime.
First, you need to split the surface into those polygon regions, then put a material on each one; the code that determines the colour should update the colour of the material on that specific polygon. The materials could be created in code in a simple loop to avoid a lot of manual work.
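Since the question is about Unity/WebGL but the core problem (filling runtime polygons with a colour on a surface) is language-agnostic, here is a minimal plain-Python sketch of the per-pixel fill logic using the even-odd point-in-polygon rule. The function names and the grid representation are illustrative only; in Unity you would do the equivalent in a shader or by writing into a `Texture2D`.

```python
# Sketch of "painting" runtime polygons onto a pixel surface: for every
# pixel, test whether its centre falls inside a polygon and, if so, write
# that polygon's colour. Later polygons overdraw earlier ones, which also
# handles the overlap case from the question.

def point_in_polygon(x, y, polygon):
    """Even-odd rule: cast a ray to the right and count edge crossings."""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y):
            # x-coordinate where this edge crosses the horizontal ray at y
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def paint_polygons(width, height, polygons):
    """polygons: list of (vertex_list, colour) pairs, any number of vertices."""
    surface = [[None] * width for _ in range(height)]
    for verts, colour in polygons:
        for y in range(height):
            for x in range(width):
                if point_in_polygon(x + 0.5, y + 0.5, verts):
                    surface[y][x] = colour
    return surface

square = [(1, 1), (6, 1), (6, 6), (1, 6)]
surface = paint_polygons(8, 8, [(square, "red")])
```

Because the colour lives in a lookup (here the `polygons` list, in Unity a per-region material), updating it later just means repainting that one region.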
I am working on an application that uses horizontal surfaces in AR. I don't have much experience with Unity but I was able to create automatically generated planes with which objects can collide (example: a falling and rolling dice). Unfortunately, sometimes such objects fall outside the plane area and fall into the void.
I would like to create something similar to invisible walls around the detected plane to keep the objects inside the plane.
Plane configuration I am currently using:
Application:
Edges of the plane are marked with a red line.
I think the term for what you are trying to do is geo-fencing. The easiest example is to put a square around the area your objects are contained in and enforce four conditions, one for each edge, like if objectX >= edgeX then objectX = edgeX, and so forth. To do that in Unity you would implement it in a C# script.
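The clamp logic described above can be sketched in a few lines. This is plain Python rather than Unity C#, and the edge coordinates are hypothetical example values for a 2x2 plane; the idea is simply that each axis of the object's position gets clamped back to the plane's edges every frame.

```python
# Geo-fencing clamp: push a position back inside the detected plane's
# rectangular bounds, one condition per edge, exactly as described above.

def clamp(value, low, high):
    return max(low, min(value, high))

def keep_on_plane(pos, min_x, max_x, min_z, max_z):
    """pos is an (x, z) tuple; returns the position pushed back inside the fence."""
    x, z = pos
    return (clamp(x, min_x, max_x), clamp(z, min_z, max_z))

# A die that rolled past the +x edge of a 2x2 plane gets pushed back:
print(keep_on_plane((1.7, 0.3), -1.0, 1.0, -1.0, 1.0))  # (1.0, 0.3)
```

An alternative that avoids per-frame position edits is to spawn four thin invisible box colliders along the plane's edges, which is closer to the "invisible walls" the question asks for.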
So I have these mushroom models, and some of the faces are blue as opposed to purple.
I was hoping to make the top part of the mushroom glow, but not the stem. Currently I just use a point light in Unity, but it doesn't look very good.
Any help would be awesome! Thank you
Glowing Mushrooms
You can try making the textures emissive by adding an emission map; that way they will glow when using post-processing bloom.
The bloom post-processing effect relies on a material or light's intensity values. So you'd need one material for the stem and another for the head. For the head of the mushroom, you then need to set the material's emission colour as well as the albedo. This will make it glow slightly, and the bloom will spread that glow out for you.
As per Bean5's post, you can assign an emission map if you don't want to use two separate materials. That way, even with a single mesh, only parts of your model will glow.
I want to know if there is a way to check the pixel-perfect collision of two SKSpriteNode objects. Below I added an example of what I want.
I tried SKNode.intersects(_:), but this checks the intersection of the whole yellow and pink nodes rather than the shapes from the images.
The objects I will use will be SKSpriteNodes with a PNG SKTexture.
Thanks!
This is not an answer, but an attempt to illustrate my comment above. Physics bodies used for collision detection can be (in increasing order of computational cost) circles, rectangles, polygons, or pixel-perfect bodies derived from the texture's alpha mask.
Unless you have a real need for pixel-perfect collisions (large sprites, slow-moving objects, etc.), using circular and rectangular physics bodies for your scenario (the red outlines in the 'wrong' section of the image below) might be enough:
The axe body could easily be changed, if necessary, to an 'L'-shaped polygon for more accurate collisions with a (hopefully) slight increase in cost.
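To make the cost ordering above concrete, here is a plain-Python sketch (not SpriteKit) contrasting the two extremes: a rectangle-vs-rectangle test is a constant-time comparison, while a pixel-perfect test has to walk every opaque pixel in the overlap of two alpha masks. The tiny 2x2 masks and positions below are invented for illustration.

```python
# Cheap test: axis-aligned bounding boxes, O(1).
def rects_overlap(ax, ay, aw, ah, bx, by, bw, bh):
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

# Expensive test: compare alpha masks pixel by pixel.
def masks_collide(mask_a, pos_a, mask_b, pos_b):
    """mask_*: 2D lists of 0/1 (1 = opaque pixel); pos_*: top-left (x, y).
    Returns True only if an opaque pixel of A overlaps an opaque pixel of B."""
    ax, ay = pos_a
    bx, by = pos_b
    for row in range(len(mask_a)):
        for col in range(len(mask_a[0])):
            if mask_a[row][col]:
                # Same pixel expressed in B's local coordinates
                brow = row + ay - by
                bcol = col + ax - bx
                if 0 <= brow < len(mask_b) and 0 <= bcol < len(mask_b[0]):
                    if mask_b[brow][bcol]:
                        return True
    return False

# Two sprites whose bounding boxes overlap but whose opaque pixels do not:
# the cheap test reports a hit, the pixel-perfect test does not.
axe  = [[1, 0],
        [1, 1]]
star = [[0, 1],
        [0, 1]]
print(rects_overlap(0, 0, 2, 2, 1, 0, 2, 2))    # True
print(masks_collide(axe, (0, 0), star, (1, 0)))  # False
```

In SpriteKit itself you get the pixel-perfect behaviour by creating the physics body from the texture (so the body follows the opaque pixels) rather than from the node's frame.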
I'm now making my very first 2D game using Unity and I wonder how could 2D Area Effectors apply different effect to different objects.
To be specific, I'm now implementing something like magnetic forces. When a negative charge passes the field (aka. the 2D Area Effector), the force applied to it should be exactly the reverse when a positive charge passes the field.
With a 2D Area Effector, I could only apply the same (absorbing) force to both the negative charge and the positive charge. However, that's not what I want. Does anybody have a solution to this? Thanks!
You simply need two different Area Effectors attached to one GameObject. Each Effector gets a different colliderMask depending on the Layer it should affect.
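The idea behind the two-effector setup can be sketched in plain Python (not Unity code): the field applies the same magnitude of force, but the sign flips with the charge. In Unity, putting positive and negative charges on separate layers and giving each AreaEffector2D the matching colliderMask and an opposite force magnitude achieves this split; the function below is only a hypothetical illustration of that logic.

```python
# One field, two behaviours: the force magnitude is shared, the direction
# depends on the sign of the charge entering the field. This is what the
# two layer-filtered effectors with opposite force magnitudes implement.

def field_force(field_strength, charge):
    """Positive charges are pushed one way, negative charges the other."""
    sign = 1 if charge > 0 else -1
    return sign * field_strength

print(field_force(10.0, +1))  # 10.0
print(field_force(10.0, -1))  # -10.0  (exactly the reverse)
```

The layer-mask approach keeps the physics declarative: no per-frame script is needed, because each charge only "sees" the effector configured for its layer.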