MotionLayout multi-state transition is not smooth - android-constraintlayout

I have three states A, B, C.
With OnSwipe I go from A to B and then with another OnSwipe from B to C.
The directions of the swipes are the same. So, continuous dragging from state A should eventually go to state C.
The problem I face is that the transition is not smooth. There is a stop at the end of the first transition. Sometimes it works smoothly when I drag fast(?), but generally there is a freeze between the two transitions.
Is there any way to get rid of this freeze?
For reference, I am just testing the samples given by the Google team. The two transitions are defined as below:
<Transition
    motion:constraintSetStart="@id/base_state"
    motion:constraintSetEnd="@id/half_people"
    motion:duration="3000">
    <OnSwipe
        motion:dragDirection="dragRight"
        motion:touchAnchorId="@id/people_pad"
        motion:touchAnchorSide="right" />
</Transition>

<Transition
    motion:constraintSetStart="@id/half_people"
    motion:constraintSetEnd="@id/people"
    motion:duration="3000">
    <OnSwipe
        motion:dragDirection="dragRight"
        motion:touchAnchorId="@id/people_pad"
        motion:touchAnchorSide="right" />
</Transition>

Short answer: no, but you might try adjusting motion:dragThreshold in OnSwipe.
Long answer:
At B, MotionLayout evaluates that there is another transition that can continue the motion, and it loads that transition.
Switching transitions is computationally expensive:
The ConstraintSets need to be evaluated by ConstraintLayout.
Monotonic splines need to be built for all objects that move.
There is also a delay to ensure the drag is in the same direction.
Long term, we hope to build a TransitionSet that chains transitions so that there is no logical break.
Medium term, we are considering adding "click stops" to Transitions, so you build a single transition and the motion travels to that stop.


Why does the sprite costume not change?

I'm just starting to play about in Scratch...
I seem to have a sprite of a cat with two 'costumes', which I guess are like frames.
I made this sequence:
...but when I click the green flag the cat moves to the right but the costumes don't switch.
If I make a simpler sequence:
...and manually change the costume in the drop-down then the costume does change.
What is the limitation here?
This is by design. By default, loops have a built-in delay of about 1/30 second. (There are ways to eliminate that delay, but that is off-topic here.) This was done to help inexperienced programmers witness the effect of a loop; possibly also to make execution speed more consistent (regardless of client's CPU power).
In your case, that means costume2 will be visible for 1/30 second before switching back to costume1.
Costume1, on the other hand, is instantly followed by costume2.
Consequently, you will only see costume2.
There are various ways to fix that.
Change your script to repeat 5 { move 10 steps; next costume; } This gives both costumes an implicit 1/30 second delay. If this is still too short, add a delay (wait ... seconds). Note: next costume wraps around, so assuming the sprite has 2 costumes, it will flip back and forth between costume1 and costume2.
Too jerky? Use glide ... secs to ... instead of 'move and wait'.
Or just take smaller steps; swap costume once every few steps.
Make two separate scripts running in parallel, one for the movement, the other for switching costumes. That makes it easier to specify a different delay for each.
Try using a [wait] block: the change between costumes may be so fast that it seems like it is walking without changing costumes...

To use GKAgents or not

I am developing (or at least trying to develop)
a decently big real-time tactics game (something similar to an RTS) using SpriteKit.
I am using GameplayKit for pathfinding.
Initially I used SKActions to move the sprites along the path, but I quickly realized that was a big mistake.
Then I tried to implement it with GKAgents (this is my current state).
I feel that GKAgents are very raw and immature. They also seem to obey some strange Newton's first law that makes them move forever (I can't think of any scenario where that would be useful - maybe for presentations at WWDC).
I also see that they have some angular speed to perform rotations,
which I don't need at all, and I can't really find how to disable it...
GKBehaviors given GKGoals also seem to do some weird things...
Setting a behavior to avoid obstacles makes my units jiggle around them...
Setting a behavior with a follow-path goal completely ignores everything unless the maxPredictionTime is low enough...
I am not even willing to describe what happens when I combine the two.
I feel broken...
I feel like I have 2 options now:
1) Struggle more with those agents and try to make them behave as I wish.
2) Roll all the movement on my own with help from GKObstacleGraph and pathfinding (which is buggy as well, I have to say: at some points it will generate the most awful path, like "go touch that obstacle, then reverse and touch that one, then go to the actual point", when from the beginning a straight line would have done).
The question is:
Which of those options would be best?
One of the best ways (in SpriteKit/GameplayKit) to get the kind of behavior you're after is to recognize that path planning and path following need not be the same operation. GameplayKit provides tools for both — GKObstacleGraph is good for planning and GKAgent is good for following a planned path — and they work best when you combine the strengths of each.
(It can be a bit misleading that GKAgent provides obstacle avoidance; don't think of this in the same way as finding a route around obstacles, more like reacting to sudden obstacles in your way.)
To put it another way, GKObstacleGraph and GKAgent are like the difference between navigating with a map and safely driving a car. The former is where you decide to take CA-85 and US-101 instead of I-280. (And maybe reevaluate your decision once in a while — say, to pick a different set of roads around a traffic jam.) The latter is where you, continuously moment-to-moment, change lanes, avoid potholes, pass slower vehicles, slow down for heavy traffic, etc.
In Apple's DemoBots sample code, they break this out into two steps:
Use GKObstacleGraph to do high level path planning. That is, when the bad guys are "here" and the hero is "way over there", and there are some walls in between, select a series of waypoints that roughly approximates a route from here to there.
Use GKAgent behaviors to make the character roughly follow that path while also reacting to other factors (like making the bad guys not step on each other and giving them vaguely realistic movement curves instead of simply following the lines between waypoints).
You can find most of the relevant stuff behind this in TaskBotBehavior.swift in that sample code — start from addGoalsToFollowPath and look at both the places that gets called and the calls it makes.
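For illustration, here is a minimal sketch of that two-step split. It is not DemoBots code; the obstacles array, goalPoint, agent, and the buffer/radius/weight values are placeholder assumptions.

import GameplayKit

// Hypothetical inputs: the level's obstacles, a destination, and the agent to steer.
let obstacles: [GKPolygonObstacle] = []
let goalPoint = vector_float2(400, 300)
let agent = GKAgent2D()

// Step 1: high-level planning with GKObstacleGraph.
let graph = GKObstacleGraph(obstacles: obstacles, bufferRadius: 30)
let startNode = GKGraphNode2D(point: vector_float2(0, 0))
let endNode = GKGraphNode2D(point: goalPoint)
graph.connectUsingObstacles(node: startNode)
graph.connectUsingObstacles(node: endNode)
// The obstacle graph only produces 2D nodes, so the downcast is safe here.
let waypoints = graph.findPath(from: startNode, to: endNode) as! [GKGraphNode2D]

// Step 2: low-level following, handing the waypoints to the agent as goals.
let path = GKPath(graphNodes: waypoints, radius: 20)
let behavior = GKBehavior()
behavior.setWeight(1.0, for: GKGoal(toFollow: path, maxPredictionTime: 1.0, forward: true))
behavior.setWeight(0.5, for: GKGoal(toStayOn: path, maxPredictionTime: 1.0))
agent.behavior = behavior

You would re-run step 1 only when the situation changes (the target moves far enough, an obstacle appears), while step 2 runs every frame through the agent simulation.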
As for the "moving forever" and "angular speed" issues...
The agent simulation is a weird mix of a motivation analogy (i.e. the agent does what's needed to move it toward where it "wants" within constraints) and a physics system (i.e. those movements are modeled like forces/impulses). If you take away an agent's goals, it doesn't know that it needs to stop — instead, you need to give it a goal of stopping. (That is, a movement speed goal of zero.) There might be a better model than what Apple's chosen here — file bugs if you have suggestions for design improvements.
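As a hedged sketch of that "goal of stopping" idea (the agent variable and the weight here are assumptions):

import GameplayKit

let agent = GKAgent2D()   // hypothetical: the agent you want to stop

// Replace its goals with a single "reach speed 0" goal; the agent then
// decelerates within its maxAcceleration limit instead of coasting forever.
let stopBehavior = GKBehavior()
stopBehavior.setWeight(1.0, for: GKGoal(toReachTargetSpeed: 0))
agent.behavior = stopBehavior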
Angular speed is trickier. The notion of agents' intrinsic physical constraints being sort of analogous to, say, vehicles on land or boats at sea is pretty well baked into the system. It can't really handle things like space fighters that have to reorient to vector their thrust, or walking creatures that can just as happily walk sideways or backwards as forward — at least, not on its own. You can get some mileage toward changing the "feel" of agent movement with the maxAcceleration property, but you're limited by the fact that said property covers both linear and angular acceleration.
Remember, though, that the interface between what the agent system "wants" and what "actually happens" in your game world is under your control. The easiest way to implement GKAgentDelegate is to just sync the velocity and position properties of the agent and the sprite that it represents. However, you don't have to do it that way — you could calculate a different force/impulse and apply it to your sprite.
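A minimal sketch of that straightforward sync, assuming an SKSpriteNode and a GKAgent2D (the class name is made up for illustration):

import SpriteKit
import GameplayKit

class AgentSpriteSync: NSObject, GKAgentDelegate {
    let sprite: SKSpriteNode

    init(sprite: SKSpriteNode) {
        self.sprite = sprite
        super.init()
    }

    // Before the agent simulates, feed it the sprite's current position
    // (so physics, actions, or manual moves are respected).
    func agentWillUpdate(_ agent: GKAgent) {
        guard let agent2D = agent as? GKAgent2D else { return }
        agent2D.position = vector_float2(Float(sprite.position.x), Float(sprite.position.y))
    }

    // After the agent simulates, copy its result back onto the sprite.
    func agentDidUpdate(_ agent: GKAgent) {
        guard let agent2D = agent as? GKAgent2D else { return }
        sprite.position = CGPoint(x: CGFloat(agent2D.position.x), y: CGFloat(agent2D.position.y))
        sprite.zRotation = CGFloat(agent2D.rotation)
    }
}

The agentDidUpdate step is exactly where you could instead compute a force or impulse and hand it to the physics body, if you want the agent to suggest movement rather than dictate it.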
I can't comment yet, so I am posting this as an answer. I faced the same problem recently: the agent wiggling around the target, or the agent that keeps moving even after you remove the behavior. Then I realized that the behavior is just the algorithm controlling the movement, but you can still access and set the agent's speed, position, and angle by hand.
In my case, I have a critter entity that chases food in the scene. When it makes contact with the food agent, the food entity is removed. I tried many things to make the critter stop after eating the food (it would keep going in a straight line), and all I had to do was set its speed to 0. That is because the behavior influences not the position directly, but the speed/angle combination instead (from what I understand). When there is no goal for the entity, it doesn't "want" to change its state, so whatever speed and direction it has reached, it will keep; it simply won't update them. So unless you create a goal that makes it want to stop, it will wiggle or keep going. The easy way is to set the behavior to nil and set the speed to 0 yourself.
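For example, a tiny sketch of that manual stop (the function and agent names are hypothetical):

import GameplayKit

// Hypothetical contact handler: the critter has just eaten the food.
func critterDidEatFood(_ critterAgent: GKAgent2D) {
    critterAgent.behavior = nil   // detach the steering algorithm
    critterAgent.speed = 0        // and zero the speed by hand
}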
If the behavior/goal system doesn't do it for the type of animation you are looking for, you can still use the agent system, customize the movement with the GKAgentDelegate protocol and the update method, and make it interact with other agents later on. You can even synchronize the agent with a node that is moved with the physics engine or with actions (or any other way).
I think the agent system is nice to keep around since you can use it later, even if it's only for special effects. But just as mixing actions and physics can give some weird results, mixing goals/behaviors with any other "automated" tool will probably result in erratic behavior.
You can also use the agent system for things other than moving an actual sprite around. For example, you could use an agent as a "target seeker" to simulate reaction time for your enemies: the agent moves around the scene and finds other agents, and when it makes contact with a suitable target, the enemy entity attacks it (random idea).
It's not a "one size fits all" solution, but it's a very nice tool to have.

Simulating physics for voxel constructions (Minecraft, Dwarf Fortress, etc)?

I'm hoping to prototype some very basic physics/statics simulations for "voxel-based" games like Minecraft and Dwarf Fortress, so that the game can detect when a player has constructed a structure that should not be able to stand up on its own. Obviously this is a very fuzzy definition -- whether a structure is impossible depends upon a multitude of material and environmental properties -- but the general idea is to motivate players to build structures that resemble the buildings we see in the real world. I'll describe what I mean in a bit more detail below, but I generally want to know if anyone can suggest either a potential approach to the problem or a resource that I could use.
Here are some examples of buildings that could be impossible if the material were not strong enough.
Here are some example situations. My understanding of this subject is not great, so bear with me.
If this structure were to be made of concrete with dimensions of, say, 4m by 200m, it would probably not be able to stand up. Because the center of mass is not over its connection to the ground, I think it would either tip over or crack at the base.
The center of gravity of this arch lies between the columns holding it up, but if it was very big and made of a weak, heavy material, it would crumble under its own weight.
This tower has its center of gravity right over its base, but if it is sufficiently tall then it only takes a bit of force for the wind to topple it over.
Now, I expect that a full-scale real-time simulation of these physics isn't really possible... but there are a lot of ways that I could simplify the simulation. For example:
Tests for physics-defying structures could be performed infrequently and at random, so a bad building doesn't crumble as soon as it is built, but perhaps a few minutes later.
Minecraft and Dwarf Fortress hardly perform rigid- or soft-body physics. For this reason, any piece of a building that is deemed to be physically impossible can simply "pop" into rubble instead of spawning a bunch of accurate physics props.
Have you considered taking an existing 3D physics engine and "rounding off" the orientations of objects? In the case of your first object (the L-shaped thing), you could run a simulation of a continuous, non-voxelized object of similar shape behind the scenes and then monitor that object for orientation changes. In a simple case, if the continuous representation of the object hits the ground, the object in the voxelized gameplay world could move its blocks to the ground.
I don't think there is a feasible way to do this. Minecraft has no notion of physical structure, so you will have to look at each block individually to determine whether it should fall (there are other considerations, but this is the minimum). You would therefore need a way to distinguish between ground and "not ground". This is a modeling problem first and foremost, not a programming problem (not even a simulation-design problem). I think this question is out of scope for SO.
For instance, consider the following model, which may give you an indication of the complexities involved (a rough code sketch of the check appears below, after the discussion):
Each block above height = 0 experiences a "down pull" P, which is one of the following:
0, if the block is supported by another block beneath it;
m*g otherwise, if it is free (where m is its mass, which depends on material density * voxel volume).
F represents some "friction" or "glue" between the vertical faces of adjacent blocks; it counteracts P.
This friction has a threshold beyond which it "breaks", and the block then has a net pull downwards.
If m*g < sum of F, the block stays where it is. Otherwise, the block falls.
F depends on the pair of materials in contact, and a block can be held by neighbours on both sides (n = 2), so you can form a line of blocks between two towers.
F is also what can make the net pull on a block larger than m*g. For instance, if you have blocks a-b-c with c resting on d, then a pulls down on b, so b is effectively "heavier" than m*g where it contacts c. If this net pull exceeds F, the pair a-b should fall.
You might be able to simulate the above and get interesting results, but you will find it really challenging to handle the case where there are two towers with a line of blocks between them: the towers are coupled together by the line of blocks, and there is no longer a "tip" to the line. At this stage you might as well get out your physics books, create a system of boxes and springs, and come up with equations that you might be able to solve numerically; but in a full 3D system you will have a 3D mesh of springs to iterate over to converge on the force values on each box and determine which ones move.
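As a rough illustration only, here is a per-block version of the simple check described above in Swift; the Block type, its fields, and the break thresholds are all assumptions, not an actual engine API, and it ignores the coupled-tower case entirely.

// Hypothetical voxel record for the simple support model above.
struct Block {
    var mass: Float                 // material density * voxel volume
    var supportedFromBelow: Bool    // block or ground directly underneath?
    var sideContacts: [Float]       // "friction/glue" break threshold F per vertical face in contact
    var extraPull: Float = 0        // net pull transferred from blocks hanging off this one
}

let g: Float = 9.81

// Returns true if the block should fall under the model described above.
func shouldFall(_ block: Block) -> Bool {
    // A supported block experiences no net down pull (P = 0).
    if block.supportedFromBelow { return false }
    // Otherwise the down pull is its own weight plus whatever hangs off it...
    let pull = block.mass * g + block.extraPull
    // ...and the side "glue" has to hold it, up to its combined break threshold.
    let maxSideSupport = block.sideContacts.reduce(0, +)
    return pull > maxSideSupport
}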
A professor of mine suggested that I look at this paper.
Additionally, I found the keyword for what I'm looking for: "structural analysis". I bought a textbook and I have a long road ahead of me.

iPhone add inertia/momentum physics to animate "wheel of fortune" like rotating control

I'm trying to create a new kind of rotary control with the look and feel of a real, massive wheel supported on a pin, able to rotate around its center point with a certain degree of damping, which will eventually bring the wheel to a halt. Think "wheel of fortune".
Rotary control sample project
Native UITableView and/or UIScrollView can be "flicked" in one direction and will continue scrolling in that direction for a while. I know a similar effect may be achieved with the animation curve.
What I'm interested in is how to calculate how far the interface element should scroll/rotate based on the distance/speed that the finger travelled. Different types of "flicks" would transfer different amounts of "momentum" to the wheel. Can anyone suggest examples of such calculations?
I'm particularly interested in applying the same concept of inertia to a custom rotary control as described here: Rotary control on Stack Overflow. The current examples, and even the Convertbot app mentioned in the examples, do not use inertia. The wheel simply follows the finger and snaps to pre-defined positions. I want a similar wheel to feel like it has real mass, so it can be flicked and will spin in a predictable manner until coming to a smooth halt. (Think "wheel of fortune" type shows.)
How can I achieve this kind of "wheel of fortune" inertia-based behavior for a rotary control that would make the control feel like it has real mass to it?
Thank you!
Implementing a fully realistic mass effect is not that trivial and may require too much work (especially for such a simple task), so I'd ignore the laws of physics (except those strictly needed) and make everything simpler.
A moment of force (torque) is r x F, where r is the radius and F is the force applied. Since it depends on the radius, the farther you get from the center of the wheel, the greater the moment. Let's ignore that and suppose the force is tangential and always applied at r = wheelWidth/2.
Besides, instead of computing the force, you may simply make the moment depend on the speed of the gesture (easier than computing the force, which would require mass and acceleration). Thus we can imagine that a faster flick corresponds to a greater moment.
Of course, the gesture will never be perfectly horizontal or vertical, so you can assume that each time there are two torques applied: one corresponding to the vertical movement and one to the horizontal.
To get the gesture speed, you may simply take the initial and final positions of the touch and divide their difference by the elapsed time.
To get the effect of slowing down, you must assume that, while the wheel is moving, there is always an opposing torque. You can define it at compile time or make it depend on the width of your wheel (such that larger wheels slow down faster).
Now you have pretty much all that you need... good luck :)
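A minimal sketch of that "follow the finger, then decay" idea in Swift, using a pan gesture and a CADisplayLink; the damping factor (0.95 per frame) and the stopping threshold are arbitrary assumptions:

import UIKit

class WheelView: UIView {
    private var angle: CGFloat = 0             // current wheel rotation
    private var angularVelocity: CGFloat = 0   // radians per second, set by the flick
    private var displayLink: CADisplayLink?
    private var lastTouchAngle: CGFloat = 0
    private var lastTouchTime: CFTimeInterval = 0

    // Angle of a touch point around the wheel's center.
    private func touchAngle(_ point: CGPoint) -> CGFloat {
        return atan2(point.y - bounds.midY, point.x - bounds.midX)
    }

    @objc func handlePan(_ gesture: UIPanGestureRecognizer) {
        let point = gesture.location(in: self)
        let now = CACurrentMediaTime()
        switch gesture.state {
        case .began:
            displayLink?.invalidate()
            lastTouchAngle = touchAngle(point)
            lastTouchTime = now
        case .changed:
            // While dragging, the wheel follows the finger exactly.
            var delta = touchAngle(point) - lastTouchAngle
            if delta > .pi { delta -= 2 * .pi }    // normalize across the -pi/pi boundary
            if delta < -.pi { delta += 2 * .pi }
            angle += delta
            let dt = now - lastTouchTime
            if dt > 0 { angularVelocity = delta / CGFloat(dt) }   // estimated gesture speed
            lastTouchAngle = touchAngle(point)
            lastTouchTime = now
            transform = CGAffineTransform(rotationAngle: angle)
        case .ended, .cancelled:
            // On release, let the "mass" carry the wheel on, decaying every frame.
            displayLink = CADisplayLink(target: self, selector: #selector(step(_:)))
            displayLink?.add(to: .main, forMode: .common)
        default:
            break
        }
    }

    @objc private func step(_ link: CADisplayLink) {
        angle += angularVelocity * CGFloat(link.duration)
        angularVelocity *= 0.95                    // assumed opposing torque / damping per frame
        transform = CGAffineTransform(rotationAngle: angle)
        if abs(angularVelocity) < 0.01 { link.invalidate() }   // close enough to stopped
    }
}

Attach a UIPanGestureRecognizer targeting handlePan(_:) to the view; snapping to "wheel of fortune" stops could be added once the velocity drops below the threshold.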
I found that using iCarousel, it is possible to create a wheel-like control with inertia/momentum!

How can I chain animations in iPhone-OS?

I want to do some sophisticated animations, but I only know how to animate a block of changes. Is there a way to chain animations, so that I could, for example, make changes A -> B -> C -> D -> E -> F (and so on), while Core Animation waits until each animation has finished and then proceeds to the next one?
In particular, I want to do this: I have a UIImageView that shows a cube.
Phase 1) The cube is flat on the floor.
Phase 2) The cube rotates slightly to the left, with the rotation origin at the bottom left.
Phase 3) The cube rotates slightly to the right, with the rotation origin at the bottom right.
These phases repeat 10 times and stop at Phase 1). Also, the wiggling lessens over time.
I know how to animate ONE change in a block, but how could I do such a repeating thing with some sophisticated code in between? It's not the same thing every time; it changes, since the wiggling becomes less and less until it stops.
Assuming you're using UIView animation...
You can provide an animation-stopped method that gets called when an animation is actually finished (check out setAnimationDelegate). So you can have an array of animations queued up and ready to go. Set the delegate, then kick off the first animation. When it ends, the selector you registered with setAnimationDidStopSelector is called (make sure to check the finished flag so you know it's really done and not just paused). You can also pass along an animation context, which could be the index into your queue of animations; it can also be a data structure containing counters so you can adjust the animation values through each cycle.
So then it's just a matter of waiting for one animation to finish and then kicking off the next one from the queue.
The same approach applies if you're using other types of animation. You just need to set up your data structures so the code knows what to do after each animation ends.
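A hedged sketch of that chaining idea, using the block-based UIView.animate(withDuration:animations:completion:) API rather than the delegate callbacks described above; the durations and angles are made up, and it rotates about the view's center for simplicity (moving the layer's anchorPoint to the bottom corners is left out):

import UIKit

// Wiggle `view` left and right `times` more cycles, shrinking the angle each cycle,
// and settle back to identity at the end. Each step starts only when the previous
// animation's completion block fires, which is what chains them together.
func wiggle(_ view: UIView, times: Int, angle: CGFloat) {
    guard times > 0 else {
        UIView.animate(withDuration: 0.1) { view.transform = .identity }
        return
    }
    UIView.animate(withDuration: 0.15, animations: {
        view.transform = CGAffineTransform(rotationAngle: -angle)    // tilt left
    }, completion: { _ in
        UIView.animate(withDuration: 0.15, animations: {
            view.transform = CGAffineTransform(rotationAngle: angle) // tilt right
        }, completion: { _ in
            wiggle(view, times: times - 1, angle: angle * 0.8)       // smaller wiggle next time
        })
    })
}

// Usage: wiggle(cubeImageView, times: 10, angle: .pi / 16)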
You'll need to set up an animation delegate inside your animation blocks. See setAnimationDelegate in the UIView documentation for all the details.
In essence, your delegate can be notified whenever an animation ends. Use the context parameter to determine which step of your animation is currently ending (Phase 1, Phase 2, etc.).
This strategy should allow you to chain together as many animation blocks as you want.
I think the answer to your question is that you need to specify in more detail what you want to do. You clearly have an idea in mind, but pinning down the details will help you answer your own question.