I'm making my very first 2D game in Unity, and I wonder how a 2D Area Effector could apply different effects to different objects.
To be specific, I'm implementing something like magnetic forces. When a negative charge passes through the field (i.e. the 2D Area Effector), the force applied to it should be exactly the reverse of the force applied when a positive charge passes through.
With a 2D Area Effector, I can only apply the same (absorbing) force to both negative and positive charges. However, that's not what I want. Does anybody have a solution to this? Thanks!
You simply need two different Area Effectors attached to one GameObject. Each effector has a different colliderMask, depending on the Layer it should affect.
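For illustration, a minimal setup sketch (the layer names "PositiveCharges" and "NegativeCharges" are assumptions; create the layers yourself and put each charge prefab on the matching one):

```csharp
using UnityEngine;

// Attach to a GameObject that has two AreaEffector2D components and a
// trigger Collider2D with "Used By Effector" enabled.
public class MagneticField : MonoBehaviour
{
    public float strength = 10f;

    void Start()
    {
        AreaEffector2D[] effectors = GetComponents<AreaEffector2D>();

        // One effector only affects positive charges...
        effectors[0].useColliderMask = true;
        effectors[0].colliderMask = LayerMask.GetMask("PositiveCharges");
        effectors[0].forceMagnitude = strength;

        // ...the other only affects negative charges, with the reverse force.
        effectors[1].useColliderMask = true;
        effectors[1].colliderMask = LayerMask.GetMask("NegativeCharges");
        effectors[1].forceMagnitude = -strength;
    }
}
```

Flipping the sign of forceMagnitude reverses the force along the effector's force angle, which gives the mirrored behaviour for the two charge types.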
I'm creating a puzzle game that generates randomly sized pieces with 2D meshes. The images contain transparent portions, and sometimes a piece is completely transparent. I need to detect what percentage of a piece is transparent. One way I found to do this is to go pixel by pixel. I posted my solution to this HERE. However, this process adds a few seconds during loading, which I'd like to avoid, and I'm looking for other ideas.
I've considered using the selection outline of a MeshCollider to somehow get a surface area I can compare to the surface area of the mesh, but everything I find is about rendering outlines with specialized shaders. Does anyone have any ideas on how to solve this?
1) I guess you could add a PolygonCollider2D to your sprite and use its path for the outline and the surface-area calculation (a sketch follows below). I'm not sure, however, whether this will be faster.
PolygonCollider2D.GetPath:
A path is a cyclic sequence of line segments between points that define the outline of the Collider
Checking PolygonCollider2D.GetTotalPointCount or path length may be good enough to determine if the sprite is 'empty'.
Sprite.vertices and Sprite.triangles may also be helpful.
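A sketch of the surface-area calculation from the collider's paths, using the shoelace formula (this assumes every path is a solid outline; a path that represents a hole would need its area subtracted instead of added):

```csharp
using UnityEngine;

public static class ColliderArea
{
    // Area of one collider path via the shoelace formula.
    static float PathArea(PolygonCollider2D collider, int pathIndex)
    {
        Vector2[] points = collider.GetPath(pathIndex);
        float doubled = 0f;
        for (int i = 0; i < points.Length; i++)
        {
            Vector2 a = points[i];
            Vector2 b = points[(i + 1) % points.Length]; // wrap to close the loop
            doubled += a.x * b.y - b.x * a.y;
        }
        return Mathf.Abs(doubled) * 0.5f;
    }

    // Total opaque area: the sum over all paths of the collider.
    public static float TotalArea(PolygonCollider2D collider)
    {
        float total = 0f;
        for (int i = 0; i < collider.pathCount; i++)
            total += PathArea(collider, i);
        return total;
    }
}
```

Comparing TotalArea against the piece's full rectangle (width times height in the same units) gives an estimate of the opaque percentage.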
2) You could also improve performance of your first approach:
instead of calling GetPixel as you do now, use GetPixels or GetPixels32 and loop through the returned array in a single for loop (see the sketch after this list).
Using GetPixels can be faster than calling GetPixel repeatedly, especially for large textures. In addition, GetPixels can access individual mipmap levels. For most textures, even faster is to use GetPixels32 which returns low precision color data without costly integer-to-float conversions.
check only every 2nd or nth pixel, as that should be good enough for an approximation
limit the number of type casts
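A minimal sketch combining these points (the texture must be readable, i.e. Read/Write enabled in its import settings; the sampling step is a quality/speed trade-off):

```csharp
using UnityEngine;

public static class TransparencyCheck
{
    // Approximate fraction of transparent pixels, sampling every step-th pixel.
    public static float TransparentFraction(Texture2D texture, int step = 2)
    {
        Color32[] pixels = texture.GetPixels32(); // one call instead of GetPixel per pixel
        int sampled = 0;
        int transparent = 0;
        for (int i = 0; i < pixels.Length; i += step)
        {
            sampled++;
            if (pixels[i].a == 0)   // byte comparison, no float conversion
                transparent++;
        }
        return (float)transparent / sampled;
    }
}
```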
I'm trying to use a GA to train an ANN whose job is to move a bar vertically so that it makes a ball bounce without hitting the wall behind the bar; in other words, single-bar Pong.
I'm going to ask directly, because I think I know what the problem is.
The game window is 200x200 pixels, so I created 40000 input neurons.
The obvious doubt is: can a GA handle chromosomes of 40000 (input) x 10 (hidden) x 2 (output) elements (genes)?
Since I think the answer is no (I implemented this solution and it doesn't seem to work), the solution seems simple: I feed the NN only 4 parameters, the x, y coordinates of the bar and the ball. Nailed it.
Nice solution, but the problem is: how can I apply such a solution in a game like Super Mario, where the number of enemies on the screen is not fixed? Surely I cannot create an NN with a dynamic number of inputs.
I hope you can help me.
You have to use features to represent your state. For example, you can divide the screen into tiles and assign each tile a value according to a function that takes the enemies into account (e.g., a boolean indicating whether an enemy is in the tile, or the distance to the closest enemy).
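A sketch of the tile-occupancy idea: the feature vector has a fixed size (one entry per tile), no matter how many enemies are on screen. The method name and parameters are illustrative, not from any library:

```csharp
public static class StateFeatures
{
    // Builds a fixed-size feature vector from a variable number of enemies:
    // one entry per screen tile, set to 1 when an enemy occupies the tile.
    public static float[] TileFeatures(
        (float x, float y)[] enemies,
        float screenWidth, float screenHeight,
        int gridCols, int gridRows)
    {
        var features = new float[gridCols * gridRows];
        foreach (var e in enemies)
        {
            // Map the enemy position to a tile index, clamped to the grid.
            int col = (int)(e.x / screenWidth * gridCols);
            int row = (int)(e.y / screenHeight * gridRows);
            if (col < 0) col = 0; else if (col >= gridCols) col = gridCols - 1;
            if (row < 0) row = 0; else if (row >= gridRows) row = gridRows - 1;
            features[row * gridCols + col] = 1f; // boolean occupancy
        }
        return features;
    }
}
```

A 10x10 grid, for instance, gives 100 inputs instead of 40000, and the input count stays constant regardless of the number of enemies.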
You can still use pixels, but you might need to preprocess them in order to reduce their size (e.g., use a recurrent NN).
By the way, an NN might not be able to handle 200x200 raw pixels, but one was able to learn to play Atari games using a state representation of preprocessed pixels of size 84x84x4 (see this paper).
I'm fairly new to Box2D and trying to figure out the best way to make a unicycle. The unicycle essentially is in two pieces, the wheel and the stem (with seat post etc). I've tried attaching the two with a revolute joint and using a motor for the wheel, which works well except that the stem is then subject to forces from the movement of the wheel. I want to be able to directly control the rotation of the stem (via the accelerometer on the iPhone), and have it unaffected by the movement of the wheel, except to maintain its position based on the position of the wheel.
What is the best way to do this? How do you control the rotation of a b2Body? Should I be using a distance joint instead? Any help would be appreciated.
I see several routes, depending on your needs. Which way is preferable is up to you and your game.
1. Fix the stem's rotation
In the bodyDef for the stem, set the fixedRotation flag to true. This prevents any rotation of the stem (be it by forces from the motor joint, (de)acceleration, or collisions).
Then you'd have to set the rotation manually each tick. This is easy if it's purely based on the iPhone's orientation. If you still want to factor other things into it, things can get anywhere from a bit more complicated (e.g., adding rotation if the stem leans too far in one direction) to somewhat painful (having collisions affect the rotation).
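A sketch of the per-tick mapping, assuming the accelerometer gives you a gravity vector in device space (the helper is illustrative; feed the resulting angle to the stem body, e.g. via Box2D's SetTransform with the body's current position):

```csharp
using System;

public static class StemControl
{
    // Angle of the measured gravity vector relative to "straight down".
    // ax/ay are the accelerometer readings along the device's x/y axes.
    public static float StemAngleFromAccelerometer(float ax, float ay)
    {
        return (float)Math.Atan2(ax, -ay);
    }
}
```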
2. Constantly apply balancing forces to the stem
Each tick, read the stem's angular velocity and apply countering forces to balance the stem.
While this would probably be more complicated to implement correctly (always finding the right force to apply, etc.), it may result in more realistic behaviour, since fixing the rotation obviously removes most of the reactions that stem movement would have, as well as most of the ways the world affects the stem itself.
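A minimal sketch of such a controller, a simple PD-style balance (kp and kd are hypothetical tuning gains; apply the returned torque to the stem body each tick, e.g. via ApplyTorque):

```csharp
public static class StemBalancer
{
    // Torque that pulls the stem toward targetAngle and damps its spin.
    public static float BalancingTorque(
        float currentAngle, float angularVelocity,
        float targetAngle, float kp = 50f, float kd = 10f)
    {
        float angleError = targetAngle - currentAngle;   // spring term
        return kp * angleError - kd * angularVelocity;   // minus damping term
    }
}
```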
3. Don't actually use a wheel
While your layout is the obvious choice for a unicycle (and seems to be a somewhat popular choice for all kinds of characters), it might not be the best choice from a gameplay point of view.
Instead you could combine the stem and wheel fixtures in a single body (or attach them with a prismatic joint) and create all movement by applying forces to this body. A sensor at the bottom can inform you of ground contact, to determine whether movement forces should be applied (a small sketch follows below).
This way you'd get rid of all the forces a wheel creates (forces to the stem may not be the only unwelcome ones in gameplay) and still have it react to all external forces.
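The ground-contact bookkeeping is engine-agnostic; wire the two callbacks to your physics engine's begin/end contact events for the sensor fixture (the names here are illustrative):

```csharp
public class GroundSensor
{
    private int contacts;

    // Call these from the contact listener for the foot sensor fixture.
    public void OnSensorBeginContact() => contacts++;
    public void OnSensorEndContact()   => contacts--;

    // Movement forces should only be applied while grounded.
    public bool IsGrounded => contacts > 0;
}
```

A counter rather than a boolean handles overlapping ground pieces correctly: contact can end with one piece while the sensor still touches another.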
I want to make isometric, tile-based, iPhone games with cocos2D.
Sprites need to be drawn on-top of other sprites that are "behind" it. I'm looking for the best way to do this.
I'd like to avoid the painter's algorithm because it involves sorting all the sprites every frame, which is expensive.
The Z buffer algorithm is supported by the GPU and cocos2D so this is what I'd like to use, but there is a problem. Some sprites, like buildings for example, occupy multiple tiles. Assigning a Z value to such sprites is difficult.
These are the options I've thought of:
1. Comparing two buildings and determining which one is "in front" is easy, so buildings can be sorted and then assigned a Z value based on the sort order. This wouldn't be any different from the painter's algorithm; the OpenGL ES Z buffer wouldn't be necessary.
2. Assign a Z value to each building based purely on its location on the map (without knowledge of where other buildings are). I'm finding this difficult. I think it is possible, but I haven't been able to come up with a formula yet.
3. Use multiple sprites for images that occupy more than one tile, so all sprites are exactly the same size. Z orders can then be assigned easily based on which tile each sprite occupies. The problem with this solution is that it makes the game logic much more complicated: all operations on a single building have to be repeated for each sprite the building is made up of. I'd like to treat each object as a single entity.
4. Modify the cocos2D code to allow sprites to have multiple Z values at different points. If a sprite can have multiple Z values based on which tile a particular part of it falls on, then calculating a Z value for that section is easy; I won't need to compare the sprite to any other sprites. I believe this is possible by using multiple quads for each sprite. The problem is that this is a bit complicated for me, since I am new to OpenGL ES and cocos2D and don't completely understand how all of the internal data structures work. Still, it seems like the most elegant solution if a formula cannot be found.
I will up-vote any suggestions or references to helpful resources.
For #2, you can compute the Manhattan distance of the center of the object and use that value as the z-value of the object (a small sketch follows the proof below). It will work as long as you avoid very long objects in your map, like a 5x1 object or worse. But if you really need a long object in a tiled map, then managing the z-order of objects by assigning z-values with a formula is impossible.
To prove this:
1.) Place two 2x2 objects in a map horizontally and leave a unit tile between them.
2.) Place a 3x1 object between them. Let's name the 2x2 objects A and B, and the 3x1 object C.
3.) If you just rotate C (without changing its position), the z-order of A and B interchanges.
- If B is now in front, some objects that were behind B will end up in front of A, purely because of C's rotation. And it's costly to work out which of the objects previously behind both A and B become in front of A after C's rotation.
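Where the restriction holds (no long objects), the Manhattan-distance assignment from above is a simple formula; this sketch derives it from an object's tile footprint:

```csharp
public static class IsoDepth
{
    // Z-value for an object occupying a rectangular footprint of tiles:
    // the Manhattan distance of its center. As argued above, this breaks
    // down for long objects such as 3x1 or 5x1.
    public static float ZValue(int tileX, int tileY, int tilesWide, int tilesHigh)
    {
        float centerX = tileX + tilesWide * 0.5f;
        float centerY = tileY + tilesHigh * 0.5f;
        return centerX + centerY; // larger = closer to the viewer (convention-dependent)
    }
}
```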
I am developing an app which uses LK for tracking and POSIT for pose estimation. I successfully get the rotation matrix and the projection matrix, and the tracking works perfectly, but the problem is that I am not able to translate the 3D object properly: it does not fit into the place where it should.
Will someone help me with this?
Check these links; they may give you some ideas.
http://computer-vision-talks.com/2011/11/pose-estimation-problem/
http://www.morethantechnical.com/2010/11/10/20-lines-ar-in-opencv-wcode/
Now, you must also check whether the intrinsic camera parameters are correct. Even a small error in estimating the field of view can cause trouble when trying to reconstruct 3D space. And from your description, it seems the problem is bad fov (field of view) angles.
You can try to measure them, or feed half or double the value to your algorithm.
There are two conventions for fov: half-angle (from the image center to the top or left edge) and full-angle (from bottom to top, or from left to right, respectively). Maybe you just mixed them up, using the full angle instead of the half, or vice versa.
Maybe you can show us how you build the transformation matrix from the R and T components?
Remember that the cv::solvePnP function returns the inverse transformation (e.g. camera in world): it finds the object pose in a 3D space where the camera is at (0;0;0). In almost all cases you need to invert it to get the correct result: {R^T; -R^T * T}.
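A sketch of that inversion in plain matrix arithmetic (R is the 3x3 rotation in row-major order, T the translation; since R is orthonormal, its inverse is its transpose):

```csharp
public static class PoseUtils
{
    // Inverse of the rigid transform (R, T): Rinv = R^T, Tinv = -R^T * T.
    public static void InvertPose(float[] R, float[] T,
                                  float[] Rinv, float[] Tinv)
    {
        // Transpose the rotation.
        for (int row = 0; row < 3; row++)
            for (int col = 0; col < 3; col++)
                Rinv[row * 3 + col] = R[col * 3 + row];

        // Tinv = -R^T * T
        for (int row = 0; row < 3; row++)
        {
            float sum = 0f;
            for (int col = 0; col < 3; col++)
                sum += Rinv[row * 3 + col] * T[col];
            Tinv[row] = -sum;
        }
    }
}
```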