I have a dock as a static resource and I want to move the ship currently in the dock to a waiting area when a ship of higher priority arrives (the dock can only serve one ship at a time; ships are the agents in the flowchart). To do this, I allowed preemption in the Seize block (the one that seizes the dock) and in its "On task suspended" field I wrote the code:
agent.moveTo(waitingArea);
When the higher-priority ship arrives and suspends the current ship's task, the current ship remains motionless in the dock, the new ship gets to the dock (on top of the current ship), and only after a few seconds (hours in model time) does the current ship jump to the waitingArea, as if the code were jumpTo instead of moveTo.
Not only is the movement not shown in the animation (the ship just jumps) and executed after a strange delay, but later in the model run I also get the error "Can't set arrival callback during movement".
If I remove the code described above from the "On task suspended" field, the error doesn't appear, but of course the ship animations overlap, as if there were two ships in the dock one on top of the other, which is exactly what I want to avoid.
Any idea what is happening and how to fix it?
Try not to write the code in the pre-emption section.
Instead, continue the flow chart out of the pre-emption port and use a MoveTo block to do what you need:
Always make sure you understand all ports of your blocks so you can use them when needed.
Seize-block help is here.
I solved all the points as follows:
To get an animated movement instead of a jump, I had to remove the delay block's location property and leave it empty. Apparently the moveTo code doesn't override a location entered in the block properties, but the destination of a MoveTo block does (unintuitive behaviour, in my opinion).
The strange delay before the jump was the duration of the "invisible" movement the ship was making, so when the ship "arrived" at the waiting area, the animation executed the jump. With the above, this point was understood and solved.
Finally, the error occurred because, while the ship was returning to the dock from the waiting area, the delay time ended and the next block was a MoveTo block, so the ship received two moveTo instructions at the same time (one from "On task resumed" and one from the MoveTo block after the delay). To solve this, I had to add code that pauses the delay countdown until the ship is back in the dock, and then resumes it.
I am making a game and I want a door to open when I enter a trigger box and close when I exit it.
The blueprint of the door
The timeline of the door sliding (both enter and exit use the same timeline; exit just plays it in reverse)
When I am at the edge of the trigger box, it just glitches and cannot decide whether the door should close or open.
Is there a way to add a deadzone or some kind of filter to prevent this from happening?
Thanks!
It would seem to me that at least part of the problem is that you have two timelines fighting each other.
It makes more sense to have a single timeline, and play it to open the door and reverse it to close the door.
So, delete everything after the bottom overlap event and instead run its execution pulse into the 'Reverse' input of the timeline at the top.
You should also use the Play input on the top timeline instead of the PlayFromStart input. Otherwise, if the door is still closing when you re-enter the trigger box, it will suddenly jump to the closed position to play the opening animation from the start.
Your screenshots are somewhat difficult to read; however, I will try to answer:
Make sure the event that fires when the actor leaves the box happens correctly and that it STOPS the current animation from playing. What's happening is that your actor's location may be "bouncing" between entering and exiting the event box.
I am using Unreal Engine 4's visual Blueprint scripting language to make a shotgun in my game. When I call the fire function, it simply spawns an actor at a given location (this actor moves like a bullet and has a collision mesh). The only problem is that when I add another "Spawn Actor from Class" node to the event graph, both nodes stop working and nothing happens. I tested to see if my for-loop/Select node combination was messing things up, but it worked and printed everything fine; yet for some reason, when the "Spawn Actor from Class" node is put in more than once, it stops functioning.
Here are pictures, in case you need them, and feel free to ask any additional questions.
Here is the Imgur link: https://imgur.com/a/2ggqoAW
Can anyone please help me with this problem?
Thank you.
As was stated in a comment, you should never use a for-loop in this way. Just use a Sequence node if you want to execute things in order. In your case, looping over the different transforms would be even better.
Also, don't use actors for projectiles. Actors are pretty heavy objects that require a lot of resources for the engine to create and maintain. A few hundred actors, each ticking every frame, can easily tank your framerate. Create a custom component, and maybe have a look at UProjectileMovementComponent or Epic's shooter tutorial project.
As for your current problem, check the collision handling; the projectiles might not spawn because they overlap something when spawned.
First off, I just want to say thanks to the team at AudioKit for shedding some light on some difficult problems through their code. I have a few questions.
1: It does not appear that the AKAudioPlayer class applies on-the-spot fades if a player is stopped before reaching the end of the file/buffer. Is there another place in the AudioKit library where this is handled?
2: Does anybody know if the AVAudioMixerNode's volume can be adjusted in real time? E.g. can I make adjustments once per sample at 44.1 kHz (every 1/44,100 s) to follow the curve of my fade envelope? There is also the AVAudioUnitEQ with its globalGain property.
3: Is it possible to write to an AVAudioPCMBuffer’s floatChannelData after it has been scheduled, and while it is being played?
I’m writing a sampler app with AVFoundation. When it came time to tackle the problem of applying fades to loaded audio files within AVAudioPlayerNodes, my first plan was to adjust the volume of the mixer node attached to my player node(s) in real time. This did not seem to have any effect; it is entirely possible that my timing was off when doing this.
When I finally looked at the AKAudioPlayer class, I realized that one could adjust the actual buffer associated with an audio file. After a day or two of debugging, I was able to adapt the code from the AKAudioPlayer class into my PadModel class, with a few minor differences, and it works great.
However, I’m still getting those nasty little clicks whenever I stop one of my Pads from playing before the end of the file because the fades I apply are only in place at the start and the end of the file/buffer.
As far as my first question is concerned, in looking through the AKAudioPlayer class, it appears that the only fades applied to the buffer occur at the beginning and end of the buffer. The stop() method does not appear to apply any sort of on-the-spot fade to the buffer.
In my mind, the only way to have a fade out happen once a stop event happens is to apply it after said stop event, correct?
I have tried doing this: playing a 10 ms faded-out buffer, consisting of the audio 10 ms after the stop position, immediately after I call stop on my player node. It does not have the desired effect. I did not have much confidence in this scheme from the outset, but it seemed worth a try.
To be clear, once my stop() method is called, before actually stopping the player node, I allocate the 10 ms fade buffer and read into it from the current playback position, for the number of frames my fade buffer consists of. I then apply the envelope to the newly allocated fade-out buffer, just as is done in the fadeBuffer() method of the AKAudioPlayer class. At that point I finally call stop() on the playing node, then schedule and play the fade-out buffer.
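In code, the envelope application looks roughly like this (a simplified sketch, not my actual PadModel code; the function name and the plain linear ramp are placeholders):

```swift
import AVFoundation

// Applies a linear fade-out in place to the first `fadeFrames` frames of
// a buffer, via floatChannelData. Illustration only; AKAudioPlayer's
// fadeBuffer() uses its own envelope shape.
func applyFadeOut(to buffer: AVAudioPCMBuffer, fadeFrames: AVAudioFrameCount) {
    guard let channels = buffer.floatChannelData else { return }
    let frames = Int(min(fadeFrames, buffer.frameLength))
    guard frames > 0 else { return }
    let channelCount = Int(buffer.format.channelCount)
    for frame in 0..<frames {
        // Ramp gain from 1.0 at the stop point down to 0.0 at the fade's end.
        let gain = 1.0 - Float(frame) / Float(frames)
        for channel in 0..<channelCount {
            channels[channel][frame] *= gain
        }
    }
}
```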
Obviously there is going to be a discontinuity between stopping the buffer and playing the fade out buffer, e.g. by the time I apply the fade to the fade out buffer, the stop frame position I assigned to a local variable will no longer be valid, etc. And indeed, once I let off a pad, the sound that is played can only be described as discontinuous.
The only other solution to the problem I can think of strikes me as a daunting task, which would be to continually apply the fade envelope in realtime to the samples immediately ahead of the current play position as the buffer is being played. I currently do not believe I have the coding chops to pull this off.
Anyway, I looked through all the questions on S.O. concerned with AudioKit and this particular subject did not seem to come up, so anybody's thoughts on the matter would be greatly appreciated. Thanks in advance!
If anybody wants to look at my code, the PadModel class starts on line 223 of this file:
https://github.com/mike-normal13/pad/blob/master/Pad.swift
AudioKit is lacking a fade-to-stop method. I would suggest requesting the feature, as it is a worthwhile endeavor. If you are using AVAudioUnitSampler, I believe you can set ADSR values to achieve the fading effect, but not in a very straightforward way: you have to create a preset using AU Lab, figure out how to get the release to work, then import it into your project.
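For question 2, one experiment worth trying is driving the mixer's outputVolume (a real AVAudioMixerNode property) from a display link. I can't promise it ramps click-free; the buffer-level fades AKAudioPlayer performs remain the more reliable route. A rough sketch, with the class name and linear ramp being my own placeholders:

```swift
import AVFoundation
import QuartzCore

// Fades mixer.outputVolume to zero over `duration` seconds, then runs
// `completion` (e.g. playerNode.stop()).
final class MixerFadeOut {
    private let mixer: AVAudioMixerNode
    private let duration: TimeInterval
    private let completion: () -> Void
    private var startTime: CFTimeInterval = 0
    private var link: CADisplayLink?

    init(mixer: AVAudioMixerNode, duration: TimeInterval, completion: @escaping () -> Void) {
        self.mixer = mixer
        self.duration = duration
        self.completion = completion
    }

    func start() {
        startTime = CACurrentMediaTime()
        link = CADisplayLink(target: self, selector: #selector(step))
        link?.add(to: .main, forMode: .common)
    }

    @objc private func step() {
        let progress = (CACurrentMediaTime() - startTime) / duration
        if progress >= 1 {
            mixer.outputVolume = 0
            link?.invalidate()
            link = nil
            completion()
        } else {
            // Linear ramp; an equal-power curve may sound smoother.
            mixer.outputVolume = Float(1 - progress)
        }
    }
}
```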
I am currently working on a mobile game and I can't help but notice that whenever an object (an obstacle, in my case) is instantiated or destroyed, I get a sudden FPS drop, which is critical for my gameplay.
To give an idea: I instantiate obstacles at the top of my screen every 1.5 seconds, then scroll them down. Once an obstacle reaches the bottom of the screen, I destroy it to prevent memory leaks/waste.
I'm still pretty new to Unity development. Am I on the right track, though? What is a better solution to prevent this sudden frame-rate drop?
Do you have any big/nested loops or complex processes going off as part of instantiation? (Look at your Awake/Start methods.)
Regardless, look into object pooling as a better method to handle this type of thing.
For a basic example, instead of creating/destroying projectiles of a gun every time it's used, give that gun a "projectile pool" that creates n projectiles when the level loads. Then, when shooting, just set the projectile's position back to the gun and set the projectile as active. After impact, have the projectile deactivate (or after a few seconds if nothing is hit).
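Pooling itself is language-agnostic (in Unity you would do this in C# and toggle objects with GameObject.SetActive); the sketch below just shows the shape of the idea, with all names invented for illustration:

```swift
// Preallocate up front, reuse instead of create/destroy.
final class ObjectPool<Item> {
    private var inactive: [Item] = []
    private let factory: () -> Item

    init(size: Int, factory: @escaping () -> Item) {
        self.factory = factory
        // Pay the allocation cost once, at load time, not mid-gameplay.
        inactive = (0..<size).map { _ in factory() }
    }

    // "Spawn": hand out a pooled instance, growing only if the pool ran dry.
    func spawn() -> Item {
        return inactive.popLast() ?? factory()
    }

    // "Destroy": deactivate by returning the instance to the pool.
    func despawn(_ item: Item) {
        inactive.append(item)
    }
}
```

The point is that spawn/despawn are cheap list operations; all real allocation happens once when the level loads.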
When you drag a finger across the iPhone touchscreen, it generates touchesMoved events at a nice, regular 60Hz.
However, the transition from the initial touchesBegan event to the first touchesMoved is less obvious: sometimes the device waits a while.
What's it waiting for? Larger time/distance deltas? More touches to lump into the event?
Does anybody know?
Importantly, this delay does not happen with subsequent fingers, which puts the first touch at a distinct disadvantage. It's very asymmetric and bad news for apps that demand precise input, like games and musical instruments.
To see this bug/phenomenon in action:
Slowly drag the iPhone screen-unlock slider to the right. Note the sudden jump, and note how it doesn't occur if you have another finger resting anywhere else on the screen.
Try "creeping" across a narrow bridge in any number of 3D games. Frustrating!
Try a dual-virtual-joystick game, and note that the effect is mitigated because you're obliged never to end either touch, which amortizes the unpleasantness.
Should've logged this as a bug 8 months ago.
After a touchesBegan event is fired, UIKit looks for positional movement of the finger, which translates into touchesMoved events as the finger's x/y changes, until the finger is lifted and the touchesEnded event is fired.
If the finger is held down in one place, it will not fire a touchesMoved event until there is movement.
I am building an app where you have to draw based on touchesMoved, and it does happen at intervals, but it is fast enough to give a smooth drawing appearance. Since it is an event buried in the SDK, you might have to do some testing in your scenario to see how fast it responds; depending on other actions or events, it could vary with the situation in which it is used. In my experience it fires within a few ms of movement, and that is with about 2-3k other sprites on the screen.
The drawing does start on the touchesBegan event, though, so the first placement is set; then it chains to touchesMoved and ends with touchesEnded. I use all the events for the drag operation, so maybe the initial move is less laggy perceptually in this case.
To test in your app, you could put a timestamp on each event, if it is crucial to your design, and work out some sort of easing.
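For example, a minimal probe (written in today's Swift, which postdates this thread; the class name is made up) that logs the gap between consecutive events using the timestamp each UITouch already carries:

```swift
import UIKit

class LatencyProbeView: UIView {
    private var lastTimestamp: TimeInterval?

    override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
        lastTimestamp = touches.first?.timestamp
    }

    override func touchesMoved(_ touches: Set<UITouch>, with event: UIEvent?) {
        guard let now = touches.first?.timestamp else { return }
        if let last = lastTimestamp {
            // The first printed delta is the began -> first-moved gap in question.
            print(String(format: "gap: %.1f ms", (now - last) * 1000))
        }
        lastTimestamp = now
    }
}
```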
http://developer.apple.com/IPhone/library/documentation/UIKit/Reference/UIResponder_Class/Reference/Reference.html#//apple_ref/occ/instm/UIResponder/touchesMoved:withEvent:
I don't think it's a bug, it's more of a missing feature.
Ordinarily, this is intended behavior to filter out accidental micro-movements that would transform a tap or long press into a slide when this was not intended by the user.
This is nothing new; it has always been there. For instance, there are a few pixels of tolerance for double clicks in pointer-based GUIs - and the same tolerance applies before a drag is started, because users sometimes inadvertently drag when they just meant to click. Try slowly moving an item on the desktop (OS X or Windows) to see it.
The missing feature is that it doesn't appear to be configurable.
An idea: Is it possible to enter a timed loop on touchesBegan that periodically checks the touch's locationInView:?
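Sketched out, that idea might look like the following (untested; note that Apple's documentation advises against retaining UITouch objects outside the touch callbacks, so treat this as an experiment rather than a sanctioned pattern):

```swift
import UIKit

// Polls the touch position every frame instead of waiting for touchesMoved.
class PollingView: UIView {
    private var trackedTouch: UITouch?
    private var link: CADisplayLink?

    override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
        trackedTouch = touches.first
        link = CADisplayLink(target: self, selector: #selector(poll))
        link?.add(to: .main, forMode: .common)
    }

    @objc private func poll() {
        guard let touch = trackedTouch else { return }
        // The same UITouch instance is updated for the whole gesture,
        // so its location can be sampled between events.
        let location = touch.location(in: self)
        print("polled location: \(location)")
    }

    override func touchesEnded(_ touches: Set<UITouch>, with event: UIEvent?) {
        link?.invalidate()
        link = nil
        trackedTouch = nil
    }
}
```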
I don't represent any kind of official answer, but it makes sense that touchesBegan -> touchesMoved has a longer duration than touchesMoved -> touchesMoved. It would be frustrating to developers if every touchesBegan came along with a bunch of accidental touchesMoved events. Apple must have determined (experimentally) some distance at which a touch becomes a drag. Once touchesMoved has begun, there is no need to perform this test any more, because every point until the next touchesEnded is guaranteed to be a touchesMoved.
This seems to be what you are saying in your original post, Rythmic Fistman, and I just wanted to elaborate a bit more and say that I agree with your reasoning. This means if you're calculating a "drag velocity" of some sort, you are required to use distance traveled as a factor, rather than depending on the frequency of the update timer (which is better practice anyway).
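To make that concrete, a velocity computed from distance travelled and the touches' own timestamps (helper name invented for this sketch) is immune to however long the system sat on the first event:

```swift
import UIKit

// Drag speed in points per second from two touch samples, using distance
// and UITouch timestamps rather than assuming a fixed event rate.
func dragSpeed(from a: (point: CGPoint, time: TimeInterval),
               to b: (point: CGPoint, time: TimeInterval)) -> CGFloat {
    let dt = b.time - a.time
    guard dt > 0 else { return 0 }
    let distance = hypot(b.point.x - a.point.x, b.point.y - a.point.y)
    return distance / CGFloat(dt)
}
```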
It's waiting for the first move.
That's how the OS distinguishes a drag from a tap. Once you drag, all new notifications are touchesMoved.
This is also the reason why you should write code to execute on the touch-up event.
Currently, such a "delay" between touchesBegan and touchesMoved is present even when other fingers are touching the screen. Unfortunately, it seems that an option to disable it doesn't exist yet. I am also a music app developer (and player), and I find this behavior very annoying.