I'm working on a 2D turn-based game and I need to handle both mouse and touch input.
The game has a hexagonal map and requires pan, zoom, and click actions.
I decided to apply the delegate pattern, so each object that requires an action sends the event to its delegate, which decides whether to do anything.
This way all inputs converge on a TurnManager that, with a state machine, handles events in accordance with the current game state.
Example:
MapCell.OnMouseUpAsButton() calls
delegate.OnCellClick(MapCell)
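In code, that pattern looks roughly like this (a minimal sketch; the interface and property names are mine, not the actual project's):

```csharp
using UnityEngine;

public interface ICellClickDelegate
{
    void OnCellClick(MapCell cell);
}

public class MapCell : MonoBehaviour
{
    // Assigned by whoever owns the cell, e.g. the TurnManager.
    public ICellClickDelegate ClickDelegate { get; set; }

    void OnMouseUpAsButton()
    {
        // Just forward the event; the delegate decides whether to act.
        ClickDelegate?.OnCellClick(this);
    }
}
```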
All works well, and in this way I can control when something should happen.
The problems arrived when I started to implement zoom and pan on the map.
For these two actions I had to avoid the classic MonoBehaviour methods (OnMouseDown, OnMouseUpAsButton, ...) and use LateUpdate instead.
So I created a CameraHandler that in LateUpdate uses:
HandleMouse()
HandleTouch()
and, using the delegate pattern, invokes the actions below:
OnMapWillPan()
OnMapPan()
OnMapEnd()
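Roughly, the handler looks like this (a sketch under my assumptions: a pixel threshold decides when a press becomes a pan, and the delegate calls are shown as comments):

```csharp
using UnityEngine;

public class CameraHandler : MonoBehaviour
{
    const float PanThreshold = 10f;   // pixels of movement before a press counts as a pan
    Vector2 pressOrigin;
    bool panning;

    void LateUpdate()
    {
        if (Input.touchCount > 0) HandleTouch();
        else HandleMouse();
    }

    void HandleTouch()
    {
        Touch touch = Input.GetTouch(0);
        switch (touch.phase)
        {
            case TouchPhase.Began:
                pressOrigin = touch.position;
                break;
            case TouchPhase.Moved:
                if (!panning && (touch.position - pressOrigin).magnitude > PanThreshold)
                {
                    panning = true;
                    // mapDelegate.OnMapWillPan();
                }
                if (panning)
                {
                    // mapDelegate.OnMapPan(); move the camera by touch.deltaPosition
                }
                break;
            case TouchPhase.Ended:
            case TouchPhase.Canceled:
                if (panning) { panning = false; /* mapDelegate.OnMapEnd(); */ }
                break;
        }
    }

    void HandleMouse()
    {
        // Same idea using Input.GetMouseButtonDown(0) / GetMouseButton(0) /
        // GetMouseButtonUp(0) and Input.mousePosition.
    }
}
```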
To avoid pans or clicks over UI elements, TurnManager filters received events with EventSystem.current.IsPointerOverGameObject().
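For reference, IsPointerOverGameObject has an overload that takes a pointer/finger id; on touch devices the parameterless version checks the mouse pointer, so the filter is usually written along these lines (sketch):

```csharp
using UnityEngine;
using UnityEngine.EventSystems;

static class UiFilter
{
    // True if the current pointer (mouse or first touch) is over a UI element.
    public static bool PointerOverUI()
    {
        if (Input.touchCount > 0)
            return EventSystem.current.IsPointerOverGameObject(Input.GetTouch(0).fingerId);
        return EventSystem.current.IsPointerOverGameObject();
    }
}
```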
Problem
On Mac with a mouse everything works great! :D
On a smartphone with touch I can't click on anything, and only pan works. Debugging on the device is infernal because of the lack of breakpoints or a console.
Questions
Have you ever handled this kind of thing? How?
Which approach did you use?
What do you think I'm doing wrong?
Are there best practices to avoid problems like this and handle cross-platform input correctly?
Is there any good reading (articles, books) on this topic?
PS: if needed I can show the code
Related
I've been trying to use a mixture of Unity's Animators and Playables in my game, and for the most part it works well, but there are two issues that I've been having for a long time, and at best I've worked around them. Today I bashed my head against them again, and after finding no solution online I decided to get my lazy ass to finally ask for help.
The basic setup is that my characters have:
An Animator with its controller, state machine, etc. that is used mostly for movement, jumping, climbing, etc. In case this is relevant, each character has an override controller of a generic one.
A very simple playable graph with just an output (wrapping the animator) and an input (wrapping the specific clip I want to play at the time). This is used for actions and attacks.
The problems I have are:
1- I can't seem to figure out an elegant, clean way to know when the clip fed to the graph (second part above) has finished. Currently I circumvent this by simply calculating how long the clip is and dividing by the current animation speed factor; I also have to account for when the animation is paused (e.g. hitstop). This gets the job done but is quite inelegant, and I'm sure there must be a better way.
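For what it's worth, a common alternative is to give the clip playable an explicit duration and poll IsDone(), which tracks speed changes and pauses for you; a sketch, with class and method names of my own invention:

```csharp
using UnityEngine;
using UnityEngine.Animations;
using UnityEngine.Playables;

public class ActionPlayer : MonoBehaviour   // hypothetical wrapper
{
    PlayableGraph graph;
    AnimationClipPlayable clipPlayable;

    public void Play(AnimationClip clip)
    {
        graph = PlayableGraph.Create("Action");
        var output = AnimationPlayableOutput.Create(graph, "Anim", GetComponent<Animator>());
        clipPlayable = AnimationClipPlayable.Create(graph, clip);
        clipPlayable.SetDuration(clip.length);   // without this, the duration is infinite
        output.SetSourcePlayable(clipPlayable);
        graph.Play();
    }

    void Update()
    {
        // IsDone() respects SetSpeed() and pausing, unlike a manual timer.
        if (graph.IsValid() && clipPlayable.IsDone())
            Debug.Log("Clip finished");
    }

    void OnDestroy()
    {
        if (graph.IsValid()) graph.Destroy();
    }
}
```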
2- Most importantly, when I'm done with the graph and the standalone animation, the values of all the properties the clip touches become locked at their last value. They stay locked even during any animation played by the regular Animator; even if one of those later animations changes a value, it snaps back to that locked "last frame" value when the animation ends.
I've tried several things to solve this:
2.1- Set the default / desired value of the properties in the idle / default animation (to "mark" them as animatable properties in the normal animator's animation). This only fixes the issue for whatever animation is touched; any other animation played after that instantly reverts to the value locked by the last frame of the animation played by the graph.
2.2- Destroy the playable wrapping the animation (I do this anyway for cleanup since I need to recreate it each time a new animation plays).
2.3- Destroy the graph and recreate it each time (surprisingly, even this keeps the values locked).
2.4- Disable the animator and enable it again.
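For anyone who has tried the same list, here is one more hedged cleanup sketch (not a confirmed fix): destroying the graph first and then calling Animator.Rebind(), which rebinds properties and writes their default values, is sometimes reported to release them:

```csharp
using UnityEngine;
using UnityEngine.Playables;

public static class PlayableCleanup
{
    // "graph" is the standalone graph created for the one-off clip.
    public static void StopAction(PlayableGraph graph, Animator animator)
    {
        if (graph.IsValid())
            graph.Destroy();   // tear the graph down first

        animator.Rebind();     // rebinds properties and writes their default values
        animator.Update(0f);   // force an immediate re-evaluation
    }
}
```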
I'm frankly starting to lose my mind with the second problem, so any help would be exceedingly appreciated. Thanks in advance for any help!
Although this question is pretty old, I'm adding an answer (along with my related follow-up question) just in case more people end up here from a search engine.
Animations (both "legacy" and non-legacy) can fire off events at a given frame - just pick a point (a frame on the dopesheet, a place on the graph for curves) and click "add event"...
There's some difference in how you specify which object/script & function to call between legacy and non-legacy - but in both cases it's basically a callback, so you can know for sure when an animation started/finished (or hit any point in between).
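For the non-legacy case, the event calls a public method by name on a script attached to the same GameObject as the Animator; a minimal receiver looks like this (the method name is whatever you enter on the clip's event):

```csharp
using UnityEngine;

// Must live on the same GameObject as the Animator playing the clip.
public class AnimationEventReceiver : MonoBehaviour
{
    // Name must match the function name entered on the clip's event.
    public void OnClipFinished()
    {
        Debug.Log("Animation event fired");
    }
}
```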
Instead of trying to change the values of those properties that are "locked by animations" from void Update(), you seem to need to do it from within void LateUpdate().
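i.e. something along these lines (a trivial sketch; the field is a placeholder for whatever value you want to force):

```csharp
using UnityEngine;

public class OverrideAnimatedValue : MonoBehaviour
{
    public Quaternion desiredRotation = Quaternion.identity;

    void LateUpdate()
    {
        // The Animator writes its values during the animation update,
        // before LateUpdate, so a value written here "wins" for the frame.
        transform.localRotation = desiredRotation;
    }
}
```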
From my testing, using "legacy" animations (which also means the Animation component instead of an Animator controller) allows you to use Update() - at least once the animation is finished.
Also worth keeping in mind: the Animator controller (component) doesn't accept "legacy" animations for any of its states.
And the Animation component doesn't seem to play (at least not auto-play) non-legacy animations.
As for my question, well, it's basically the same as the OP's - is it possible to somehow "unlock" these properties (obviously without any states/animations playing) while using the "newer" Animator controller?
Although, based on things I've read while trying to find out what's going on, those "legacy" animations are not really "legacy" - they seem to be here to stay, for reasons like being better for performance.
I am trying to code an end for a level in a simple game. A lot of things need to happen at slightly different times. The character needs to do a celebration. Text needs to pop up on screen. The camera needs to move to show off the win, and finally there needs to be a scene transition.
This all seems like a great thing to solve with an animation. All these things could come in and act on specific key-frames, at the end raising an event and ending the scene.
The problem is that animations seem to have to be attached to specific objects. My camera, player, and the static global GameController are completely unrelated; in fact, the global controller can't be related to anything. Because of that, my animations don't see all the objects and can't control them. I am instead stuck writing synchronized animations and code with a lot of yield return new WaitForSeconds(...);. I find this very difficult to manage, and it seems like a lot of waste. Is there any way I can use animations, or some other frame-based tool, to globally animate my game?
Look into Unity's Timeline system. I believe this is exactly the sort of thing it was made for.
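A Timeline is played by a single PlayableDirector, but its tracks can be bound to any objects in the scene (the camera, the player, a controller), and the bindings can even be assigned from code. A hedged sketch, with the track name and events of my own choosing:

```csharp
using UnityEngine;
using UnityEngine.Playables;

public class LevelEndSequence : MonoBehaviour
{
    public PlayableDirector director;   // holds the level-end timeline asset
    public Animator playerAnimator;     // celebration animation target

    public void PlayEnding()
    {
        // Tracks can be rebound at runtime, so the timeline asset itself
        // doesn't need any scene references baked in.
        foreach (var output in director.playableAsset.outputs)
        {
            if (output.streamName == "Player")   // track name is illustrative
                director.SetGenericBinding(output.sourceObject, playerAnimator);
        }
        director.stopped += _ => Debug.Log("Sequence finished; load the next scene");
        director.Play();
    }
}
```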
I was playing around with the NavMesh agent and I'm pretty happy with the results I get. But I am a little bit concerned about the code getting complicated.
I want to organize my code in a way that allows me to edit it later without struggling to figure out what I did there.
What I need is basically this:
Handle mouse clicks on the ground, enemies, objects (loot), skill/spell targeting, and the GUI
Handle mouse-over on objects, enemies, and the GUI
My approach was (a sketch of this flow follows the list):
in the Update function, raycast from the mouse position
check if the mouse was clicked
if clicked, check the target's tag (enemy, ground, object/loot) and call a related function
if not clicked, check the target's tag again for hover effects.
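A minimal sketch of that flow (tags and method names are placeholders):

```csharp
using UnityEngine;

public class MouseInputRouter : MonoBehaviour
{
    void Update()
    {
        Ray ray = Camera.main.ScreenPointToRay(Input.mousePosition);
        if (!Physics.Raycast(ray, out RaycastHit hit)) return;

        bool clicked = Input.GetMouseButtonDown(0);
        switch (hit.collider.tag)
        {
            case "Enemy":  if (clicked) AttackEnemy(hit); else HoverEnemy(hit); break;
            case "Ground": if (clicked) MoveTo(hit.point);                      break;
            case "Loot":   if (clicked) PickUp(hit);      else HoverLoot(hit);  break;
        }
    }

    void AttackEnemy(RaycastHit hit) { /* ... */ }
    void HoverEnemy(RaycastHit hit)  { /* ... */ }
    void MoveTo(Vector3 point)       { /* e.g. agent.SetDestination(point) */ }
    void PickUp(RaycastHit hit)      { /* ... */ }
    void HoverLoot(RaycastHit hit)   { /* ... */ }
}
```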
So what would be the best way to handle everything listed above? Any code examples in any language would be appreciated.
Thanks for your time.
It's been a while since I asked this question. I ended up using the state design pattern explained here.
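For completeness, the shape of that approach (a sketch, not the linked article's exact code):

```csharp
using UnityEngine;

// Each game state interprets the same raycast hit differently.
public interface IInputState
{
    void OnClick(RaycastHit hit);
    void OnHover(RaycastHit hit);
}

public class DefaultState : IInputState
{
    public void OnClick(RaycastHit hit) { /* move / attack / loot by tag */ }
    public void OnHover(RaycastHit hit) { /* highlight the target */ }
}

public class SpellTargetingState : IInputState
{
    public void OnClick(RaycastHit hit) { /* cast the queued spell at hit.point */ }
    public void OnHover(RaycastHit hit) { /* draw the targeting indicator */ }
}

// The router from the question then just forwards to currentState.OnClick(hit),
// and changing game state swaps the behaviour without touching the router.
```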
I want to develop a "scrollview" in Unity.
Basically I have a Parent Game Object on the screen with many items inside it.
The Parent Game Object is big enough so it goes outside the screen.
I developed a scrolling script so when the user drags the parent object, I move it and it looks like scrolling.
I did this by implementing the OnMouseDrag event.
How can I calculate the inertia and apply it so when the user drags it fast, it continues to move?
The effect you want is called kinetic scrolling (or kinetic panning).
Here is an answer with a generic algorithm in it; with this keyword in mind you will probably find a ton more examples.
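A minimal sketch of the idea, assuming the OnMouseDrag setup from the question (the screen-to-world scale and friction values are placeholders):

```csharp
using UnityEngine;

// Sample the drag velocity while dragging, then let it decay after release.
public class KineticScroll : MonoBehaviour
{
    const float UnitsPerPixel = 0.01f;  // depends on your camera setup
    const float Friction = 5f;          // higher = the scroll stops sooner

    Vector3 velocity;
    Vector3 lastMousePos;

    void OnMouseDown()
    {
        lastMousePos = Input.mousePosition;
        velocity = Vector3.zero;
    }

    void OnMouseDrag()
    {
        Vector3 delta = (Input.mousePosition - lastMousePos) * UnitsPerPixel;
        transform.position += delta;          // the existing drag behaviour
        velocity = delta / Time.deltaTime;    // sampled for the release
        lastMousePos = Input.mousePosition;
    }

    void Update()
    {
        if (Input.GetMouseButton(0)) return;  // still dragging
        transform.position += velocity * Time.deltaTime;
        velocity = Vector3.Lerp(velocity, Vector3.zero, Friction * Time.deltaTime);
    }
}
```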
Next time watch out for the tags - your first set of tags was incorrect; other frameworks share the name "Unity".
When you drag a finger across the iPhone touchscreen, it generates touchesMoved events at a nice, regular 60Hz.
However, the transition from the initial touchesBegan event to the first touchesMoved is less obvious: sometimes the device waits a while.
What's it waiting for? Larger time/distance deltas? More touches to lump into the event?
Does anybody know?
Importantly, this delay does not happen with subsequent fingers, which puts the first touch at a distinct disadvantage. It's very asymmetric and bad news for apps that demand precise input, like games and musical instruments.
To see this bug/phenomenon in action:
Slowly drag the iPhone screen-unlock slider to the right. Note the sudden jump, and note how it doesn't occur if you have another finger resting anywhere else on the screen.
Try "creeping" across a narrow bridge in any number of 3D games. Frustrating!
Try a dual virtual-joystick game and note that the effect is mitigated, because you're obliged never to end either of the touches, which amortizes the unpleasantness.
Should've logged this as a bug 8 months ago.
After a touchesBegan event is fired, UIKit looks for positional movement of the touch, which translates into touchesMoved events as the x/y of the finger changes, until the finger is lifted and the touchesEnded event is fired.
If the finger is held down in one place, it will not fire the touchesMoved event until there is movement.
I am building an app where you draw based on touchesMoved, and it does happen at intervals, but it is fast enough to give a smooth drawing appearance. Since it is an event buried in the SDK, you might have to do some testing in your scenario to see how fast it responds; depending on other actions or events, it could vary with the situation it is used in. In my experience it fires within a few ms of movement, and this is with about 2-3k other sprites on the screen.
The drawing does start on the touchesBegan event, though, so the first placement is set; then it chains to touchesMoved and ends with touchesEnded. I use all the events for the drag operation, so maybe the initial move is less laggy perceptually in this case.
To test in your app, you could put a timestamp on each event, if it is crucial to your design, and work out some sort of easing.
http://developer.apple.com/IPhone/library/documentation/UIKit/Reference/UIResponder_Class/Reference/Reference.html#//apple_ref/occ/instm/UIResponder/touchesMoved:withEvent:
I don't think it's a bug, it's more of a missing feature.
Ordinarily, this is intended behavior to filter out accidental micro-movements that would transform a tap or long press into a slide when this was not intended by the user.
This is nothing new, it has always been there, for instance there are a few pixels of tolerance for double clicks in pointer-based GUIs - or even this same tolerance before a drag is started, because users sometimes inadvertently drag when they just meant to click. Try slowly moving an item on the desktop (OSX or Windows) to see it.
The missing feature is that it doesn't appear to be configurable.
An idea: is it possible to enter a timed loop on touchesBegan that periodically checks the touch's locationInView:?
I don't represent any kind of official answer, but it makes sense that touchesBegan -> touchesMoved has a longer duration than touchesMoved -> touchesMoved. It would be frustrating to developers if every touchesBegan came along with a bunch of accidental touchesMoved events. Apple must have determined (experimentally) some distance at which a touch becomes a drag. Once touchesMoved has begun, there is no need to perform this test any more, because every point until the next touchesEnded is guaranteed to be a touchesMoved.
This seems to be what you were saying in your original post, Rythmic Fistman, and I just wanted to elaborate a bit more and say that I agree with your reasoning. This means that if you're calculating a "drag velocity" of some sort, you are required to use distance traveled as a factor, rather than depending on the frequency of the update timer (which is better practice anyway).
It's waiting for the first move.
That's how the OS distinguishes a drag from a tap: once you drag, all new notifications are touchesMoved.
This is also the reason why you should write code that executes on the touch-up event.
Currently this "delay" between touchesBegan and touchesMoved is present even when other fingers are touching the screen. Unfortunately, it seems that an option to disable it doesn't exist yet. I'm also a music app developer (and player), and I find this behavior very annoying.