How to record animation at runtime - unity3d

I have some animations for the upper body and lower body. I use an avatar mask and set weights so they can override each other. There is one button for each animation. You can see it in this video.
https://youtu.be/fYdoFFJCuxk
What I want to know is how I can record the animation that I play at runtime and export it to a file (.anim, .fbx, ...).
Thanks in advance!

Your question is not trivial, meaning you may need to experiment to narrow down the specific problems you will encounter along the way.
I'm not 100% sure how I would do it myself, but I think there are two options.
1.- Runtime animation info serialize + save and load:
You would need a script that tracks the animator's state machine and saves that information along a timeline. The script should write the data to a file (.xml, .json, or plain text) so it can later be loaded and played back. Essentially it should record the time and state of each animation change from the moment playback starts.
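A minimal sketch of option 1, assuming an Animator-based setup (all type and field names here are hypothetical): sample the current state each frame and serialize the timeline with JsonUtility.

```csharp
using System;
using System.Collections.Generic;
using System.IO;
using UnityEngine;

[Serializable]
public class AnimationSample
{
    public float time;            // seconds since recording started
    public int stateHash;         // Animator state (full path hash)
    public float normalizedTime;  // progress through that state
}

[Serializable]
public class AnimationTimeline
{
    public List<AnimationSample> samples = new List<AnimationSample>();
}

public class AnimationStateRecorder : MonoBehaviour
{
    public Animator animator;     // assumed to be assigned in the Inspector
    public bool recording;

    private AnimationTimeline timeline = new AnimationTimeline();
    private float startTime;

    void Update()
    {
        if (!recording) return;
        // Sample layer 0 each frame; extend to more layers as needed.
        var info = animator.GetCurrentAnimatorStateInfo(0);
        timeline.samples.Add(new AnimationSample {
            time = Time.time - startTime,
            stateHash = info.fullPathHash,
            normalizedTime = info.normalizedTime
        });
    }

    public void StartRecording() { startTime = Time.time; recording = true; }

    public void SaveToFile(string path)
    {
        recording = false;
        File.WriteAllText(path, JsonUtility.ToJson(timeline));
    }
}
```

On load, a playback script could walk the samples and call Animator.Play with the recorded state hash and normalized time.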
2.- Transform record:
You can attach a component to every GameObject you want to track that serializes and stores its transform along a timeline. This is basically a replay system: while the animation plays, all transform data (positions and rotations) is saved to a file, and on load those positions and rotations are applied back to the respective GameObjects.
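A minimal sketch of option 2 (names are hypothetical): snapshot the transform every frame after the Animator has moved the bones, then dump the frames as JSON.

```csharp
using System;
using System.Collections.Generic;
using System.IO;
using UnityEngine;

[Serializable]
public class TransformFrame
{
    public float time;
    public Vector3 position;
    public Quaternion rotation;
}

[Serializable]
public class TransformTrack
{
    public List<TransformFrame> frames = new List<TransformFrame>();
}

// Attach one of these to every bone/GameObject you want to record.
public class TransformRecorder : MonoBehaviour
{
    public bool recording = true;
    private TransformTrack track = new TransformTrack();

    // LateUpdate runs after the Animator has evaluated this frame,
    // so we capture the posed transform, not the pre-animation one.
    void LateUpdate()
    {
        if (!recording) return;
        track.frames.Add(new TransformFrame {
            time = Time.time,
            position = transform.localPosition,
            rotation = transform.localRotation
        });
    }

    public void Save(string path)
    {
        File.WriteAllText(path, JsonUtility.ToJson(track));
    }
}
```

Playback is the mirror image: read the track back and apply each frame's position/rotation to the same GameObject over time.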
You can take a look at the Easy Replay System asset in the Asset Store. I bought it; it is really simple and works really well. I tried it with animations too, and it works.
It does not include the serialization and save/load part, but you can very probably work that out yourself.
From scratch, I think option 1 is the best. With an asset (which may cost more if it includes the serialization and save/load part), you can probably get it done faster with option 2.

Related

Unity Playables graph locking animated properties with values at the last frame

I've been trying to use a mixture of Unity's Animators and Playables in my game, and for the most part it works well, but there are two issues that I've been having for a long time and have at best worked around. Today I bashed my head against them again, and after finding no solution online I decided to get my lazy ass to finally ask for help.
The basic setup is that my characters have:
An Animator with its controller, state machine, etc., used mostly for movement, jumping, climbing, and so on. In case this is relevant, each character has an override controller based on a generic one.
A very simple playable graph with just an output (wrapping the animator) and an input (wrapping the specific clip I want to play at the time). This is used for actions and attacks.
The problems I have are:
1- I can't seem to figure out an elegant, clean way to know when the clip fed to the graph (the second part above) has finished. Currently I work around this by taking the clip length and dividing by the current animation speed factor; I also have to account for when the animation is paused (e.g. hitstop). This gets the job done but is quite inelegant, and I'm sure there must be a better way.
2- Most importantly, when I'm done with the graph and the standalone animation, all the properties the clip touches become locked at their last value. They stay locked even during animations played by the regular Animator; even if a later animation changes one of those values, it snaps back to the locked "last frame" value when that animation ends.
I've tried several things to solve this:
2.1- Setting the default/desired value of the properties in the idle/default animation (to "mark" them as animatable properties in the normal Animator's animations). This only fixes the issue for the animation that touches them; any other animation played afterwards instantly reverts to the value locked by the last frame of the graph's animation.
2.2- Destroying the playable wrapping the animation (I do this anyway for cleanup, since I need to recreate it each time a new animation plays).
2.3- Destroying the graph and recreating it each time (surprisingly, even this leaves the values locked).
2.4- Disabling the Animator and enabling it again.
I'm frankly starting to lose my mind with the second problem, so any help would be exceedingly appreciated. Thanks in advance for any help!
Although this question is pretty old, I'm adding an answer (along with my related follow-up question) in case more people end up here from a search engine.
Animations (both "legacy" and non-legacy) can fire off events at a given frame: just pick a point (a frame on the dope sheet, or a place on the graph in curves view) and click "Add Event".
There is some difference in how you specify which object/script and function to call between legacy and non-legacy, but in both cases it's basically a callback, so you can know for sure when an animation started or finished (or any point in between).
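A minimal sketch of the callback approach (the method name is hypothetical): add an event at the clip's last frame whose function name matches a public method on a script attached to the same GameObject as the Animator (or Animation) component.

```csharp
using UnityEngine;

// Hypothetical receiver for an animation event. In the Animation window
// (or the model's import settings), add an event at the final frame of the
// clip with the function name "OnClipFinished"; Unity invokes it on scripts
// attached to the same GameObject as the Animator/Animation component.
public class ClipEventReceiver : MonoBehaviour
{
    public void OnClipFinished()
    {
        // React here: return control to the state machine, clean up the
        // playable, etc.
        Debug.Log("clip finished");
    }
}
```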
Instead of trying to change the values of properties that are "locked by animations" from Update(), you seem to need to do it from LateUpdate().
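A minimal sketch of that idea, with a hypothetical animated property: the Animator writes animated values after Update() but before LateUpdate(), so a write in LateUpdate() wins over whatever the animation (or its locked last frame) wrote that frame.

```csharp
using UnityEngine;

// Sketch: force a value over an animated property each frame.
// The SpriteRenderer color here is just an example of a property
// that an animation clip might be driving.
public class AnimatedValueOverride : MonoBehaviour
{
    public SpriteRenderer target;           // hypothetical animated property owner
    public Color overrideColor = Color.red;
    public bool overrideActive;

    void LateUpdate()
    {
        // Runs after the Animator has evaluated, so this write sticks.
        if (overrideActive)
            target.color = overrideColor;
    }
}
```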
From my testing, using "legacy" animations (which also means the Animation component instead of an Animator Controller) allows you to use Update(), at least once the animation has finished.
Also worth keeping in mind: the Animator Controller (component) doesn't accept "legacy" animations for any of its states, and the Animation component doesn't seem to play (at least not auto-play) non-legacy animations.
As for my own question, it's basically the same as the OP's: is it possible to somehow "unlock" these properties (obviously while no states/animations are playing) while using the "newer" Animator Controller?
Also, based on things I've read while trying to find out what's going on, those "legacy" animations are not really legacy: they seem to be here to stay, for reasons such as better performance.

Unity - How to change the player sprite sheet at runtime?

I'm currently working on a 2D pixel jump 'n' run. I want the player to be able to "buy" new skins for the player character. I have multiple sprite sheets that all share the same structure, and I'm using sprite animations.
How can I change the sprite sheet at runtime? I found the following solution, but it's very resource intensive: https://youtu.be/HM17mAmLd7k?t=1818
Sincerely,
Julian
The reason it's so resource intensive in the video is that all the sprites are loaded in every LateUpdate(), i.e. once per frame. The script grabs all the sprites in the sprite sheet and loads them every frame, so that if spriteSheetName ever changes, the renderer is updated on the next frame.
I don't believe that's necessary, and in the video he mentions it's just being used as an example. What I'd do is move that code out of LateUpdate() and into its own method that is called only when the user actually wants to change the sprite sheet. Instead of mindlessly loading the sprites from the sheet each frame, you'll only load them when the user selects a new one.
That should drastically cut down the cost of this script, because you're no longer loading every sprite in a sprite sheet and looping through the renderers on every single frame.
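A sketch of that caching idea (all names and paths are assumptions): the expensive Resources load runs only when a skin is selected; if a sprite animation keeps reassigning the sprite each frame, the remaining per-frame work is just a dictionary lookup. This assumes the sheets live in a Resources folder and matching frames share sprite names across sheets.

```csharp
using System.Collections.Generic;
using System.Linq;
using UnityEngine;

public class SkinSwapper : MonoBehaviour
{
    private SpriteRenderer rend;
    private Dictionary<string, Sprite> currentSheet;

    void Awake()
    {
        rend = GetComponent<SpriteRenderer>();
    }

    // Call only when the player buys/selects a new skin --
    // this is where the expensive load happens.
    public void SelectSkin(string sheetName)
    {
        currentSheet = Resources.LoadAll<Sprite>(sheetName)
                                .ToDictionary(s => s.name);
    }

    // The animation assigns frames from the original sheet each frame;
    // remap to the same-named frame of the selected sheet afterwards.
    void LateUpdate()
    {
        if (currentSheet != null &&
            currentSheet.TryGetValue(rend.sprite.name, out var swapped))
            rend.sprite = swapped;
    }
}
```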

Do AKAudioPlayer nodes apply a 10 ms fade out once they are stopped before reaching the end of the file/buffer?

First off, I just want to say thanks to the team at AudioKit for shedding some light on some difficult problems through their code. I have a few questions.
1: It does not appear that the AKAudioPlayer class applies on-the-spot fades if a player is stopped before reaching the end of the file/buffer. Is this handled somewhere else in the AudioKit library?
2: Does anybody know if the AVAudioMixerNode's volume can be adjusted in real time? E.g., can I make adjustments every 1/441 ms to follow the curve of my fade envelope? There is also AVAudioUnitEQ with its globalGain property.
3: Is it possible to write to an AVAudioPCMBuffer’s floatChannelData after it has been scheduled, and while it is being played?
I’m writing a sampler app with AVFoundation. When it came time to tackle the problem of applying fades to loaded audio files within AVAudioPlayerNodes my first plan was to adjust the volume of the mixer node attached to my player node(s) in real time. This did not seem to have any sort of effect. It is entirely possible that my timing was off when doing this.
When I finally looked at the AKAudioPlayer class, I realized that one could adjust the actual buffer associated with an audio file. After a day or two of debugging, I was able to adapt the code from the AKAudioPlayer class into my PadModel class, with a few minor differences, and it works great.
However, I’m still getting those nasty little clicks whenever I stop one of my Pads from playing before the end of the file because the fades I apply are only in place at the start and the end of the file/buffer.
As far as my first question is concerned: looking through the AKAudioPlayer class, it appears that the only fades applied to the buffer occur at its beginning and end. The stop() method does not appear to apply any sort of on-the-spot fade.
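For illustration, an in-place fade over the tail of a buffer (similar in spirit to the edge fades AKAudioPlayer applies) can be sketched like this; the function name and the linear envelope are my assumptions, not AudioKit API.

```swift
import AVFoundation

// Hypothetical sketch: apply a short linear fade-out in place to the last
// `milliseconds` of an AVAudioPCMBuffer before scheduling it, to avoid a
// click when playback ends there.
func applyFadeOut(to buffer: AVAudioPCMBuffer, milliseconds: Double) {
    guard let channels = buffer.floatChannelData else { return }
    let sampleRate = buffer.format.sampleRate
    let fadeFrames = min(Int(sampleRate * milliseconds / 1000.0),
                         Int(buffer.frameLength))
    guard fadeFrames > 0 else { return }
    let start = Int(buffer.frameLength) - fadeFrames
    for ch in 0..<Int(buffer.format.channelCount) {
        for i in 0..<fadeFrames {
            // Linear ramp from 1.0 down to 0.0 across the fade region.
            let gain = Float(fadeFrames - i) / Float(fadeFrames)
            channels[ch][start + i] *= gain
        }
    }
}
```

The same idea applied to a freshly read 10 ms buffer is essentially the scheme described below.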
In my mind, the only way to have a fade out happen once a stop event happens is to apply it after said stop event, correct?
I have tried doing this: playing a 10 ms faded-out buffer, consisting of the 10 ms of audio after the stop position, immediately after calling stop on my player node. It does not have the desired effect. I did not have much confidence in this scheme from the outset, but it seemed worth a try.
To be clear: once my stop() method is called, before actually stopping the player node, I allocate the 10 ms fade buffer and read into it from the current play position, for the number of frames the fade buffer holds. I then apply the envelope to the newly allocated fade-out buffer, just as the fadeBuffer() method in the AKAudioPlayer class does. At that point I finally call stop() on the playing node, then schedule and play the fade-out buffer.
Obviously there is going to be a discontinuity between stopping the buffer and playing the fade-out buffer; e.g., by the time I apply the fade, the stop frame position I assigned to a local variable is no longer valid, and so on. And indeed, once I let go of a pad, the sound that plays can only be described as discontinuous.
The only other solution I can think of strikes me as a daunting task: continually applying the fade envelope in real time to the samples immediately ahead of the current play position as the buffer is being played. I currently do not believe I have the coding chops to pull this off.
Anyway, I looked through all the AudioKit questions on S.O. and this particular subject did not seem to come up, so anybody's thoughts on the matter would be greatly appreciated. Thanks in advance!
If anybody wants to look at my code, the PadModel class starts on line 223 of this file:
https://github.com/mike-normal13/pad/blob/master/Pad.swift
AudioKit lacks a fade-to-stop method. I would suggest requesting the feature, as it is a worthwhile endeavor. If you are using AVAudioUnitSampler, I believe you can set ADSR values to achieve the fading effect, though not in a very straightforward way: you have to create a preset using AU Lab, figure out how to get the release to work, then import it into your project.

What is the difference between storing data in plist vs setting up data in scene editor in Sprite Kit?

It may not seem a wise question to ask, but I have some confusion I want to clear up. In Apple's 2D game framework SpriteKit, it is normally suggested to store data related to nodes in a plist and then retrieve it in code when required. What I don't get is that the .sks scene editor also gives you the opportunity to set and store data related to nodes in the Attributes Inspector. Or am I misunderstanding this? What is the difference between the two approaches? Or does data set through the scene editor's Attributes Inspector not get stored anywhere and remain accessible only dynamically? I would appreciate it if someone could state the clear difference between these two approaches; I have tried to look for it but could not find any help.
I've recently started using the scene editor for all aspects of my game: not just the scenes, but also complex SKSpriteNode objects, modular popups, etc. It has mostly been great, and a lot quicker than trying to lay out the objects in code. So being able to keep all of the data in the .sks file is even better.
Yes, you are correct: you can set and store data relevant to the node in the scene editor. It's in the Attributes Inspector under User Data.
You can also store that info in a plist file. However, a big difference between the two methods is that with a plist you then have to sort through the data, find the relevant entries, and bind them to the node. With User Data, the data is already bound to the node and travels with it, so you don't have to find and associate it later.
From Apple:
You use this property to store your own data in a node. For example, you might store game-specific data about each node to use inside your game logic. This can be a useful alternative to creating your own node subclasses to hold game data. SpriteKit does not do anything with the data stored in the node. However, the data is archived when the node is archived.
Here is an example of how easy it is to access User Data stored from an .sks file:
if let userData = redVillain.userData {
    if let hitPoints = userData.object(forKey: "hitPoints") as? Int {
        redVillain.hitPoints = hitPoints
    }
    if let name = userData.object(forKey: "name") as? String {
        redVillain.name = name
    }
}
You can also set the User Data in code:
redVillain.userData = NSMutableDictionary(dictionary: ["hitPoints": 5, "name": "Herbert"])
or modify it in code:
redVillain.userData?.setValue("Eleanor", forKey: "name")
I think you should go simple on that matter. PLists are perfect to store data that is more general. Like game settings, etc. If you need to store data related directly to a node, then use the node's User Data.
It is important to understand that in a game where you have, let's say, spawning enemies, you could/should store the information about the different enemy types in a plist.
When the enemies are spawned, they will have attributes that make them unique even though a bunch of them are of the same type: color, health points, level, etc. If you need that data serialized and saved for later use, it should go in the User Data.
Based on that, I don't know, performance-wise, whether User Data is very efficient for data that changes a lot (like health points, for instance). I feel that in that situation, a custom class with properties to store the data might make things faster, but in that case, if you need the data serialized, you will have to implement the NSCoding protocol.
Edit
Based on your comments, here is some clarification.
If you only ever work on your scene in Xcode's scene editor, you might not see the real difference between User Data and a plist. If you never create sprites in code, it's not obvious.
Why plists?
If you create your characters in code, it can be interesting to use plists to store the "model information". As you mentioned, Apple's video presentation on SpriteKit best practices talks about storing information such as the anchor point, position, scale, and the like in a plist. For example, if you are making a spaceship game, you might want different spaceship models. They might have very different shapes, so you might not want them all to rotate around the same point for it to look good. They will also have different textures and different stats (max health points, max shield, etc.). In the plist you can store things like this:
ship model 1
    texture: ship1.png
    anchor: (0.5, 0.5)
    maxhealth: 100
    maxshield: 100
    ...
ship model 2
    texture: ship2.png
    anchor: (0.2, 0.5)
    maxhealth: 150
    maxshield: 200
    ...
...
If you are working with an artist, it's easy for them to edit a plist without getting involved with Xcode. So when the graphics are ready, they can fill in the information about the texture and the anchor (for example) themselves and let the game designer add the other stats.
Then, if you create a ship in code, simply retrieve the information from the plist to create the new ship with all the correct settings.
Over time, the artist can update the graphics and the related info in the plist, and you won't even need to recompile to test the new content. You can even use placeholder pictures (or a solid color when no picture is provided) to develop features before you get the final graphics.
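A minimal sketch of that plist lookup (the file name, key names, and value shapes are all assumptions): read one ship model's settings from a bundled "Ships.plist" and build a sprite from them.

```swift
import SpriteKit

// Hypothetical: build a ship sprite from a model entry in Ships.plist,
// whose root is a dictionary of model name -> settings dictionary.
func makeShip(model: String) -> SKSpriteNode? {
    guard let url = Bundle.main.url(forResource: "Ships", withExtension: "plist"),
          let data = try? Data(contentsOf: url),
          let root = try? PropertyListSerialization.propertyList(
              from: data, options: [], format: nil) as? [String: [String: Any]],
          let info = root[model]
    else { return nil }

    let ship = SKSpriteNode(imageNamed: info["texture"] as? String ?? "placeholder")
    if let anchor = info["anchor"] as? [String: Double] {
        ship.anchorPoint = CGPoint(x: anchor["x"] ?? 0.5, y: anchor["y"] ?? 0.5)
    }
    // Seed per-instance state into userData so it travels (and archives)
    // with the node, as described above.
    ship.userData = NSMutableDictionary(dictionary: [
        "hitPoints": info["maxhealth"] as? Int ?? 100
    ])
    return ship
}
```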
Why use UserData?
Your spaceships are created and you are fighting for your life. At some point, someone will get hurt and you will need to keep track of it. You could do this easily with variables, but then, if you want to use NSCoder to save your game state, you will have to handle it by hand.
If you store information specific to one sprite, then User Data is the key. It's something that belongs to THAT node only, and it also gets saved with everything else when you use NSCoder to save your game state.
This is point-in-time data: health varies greatly during the course of the game, but when you load your last save, you want the boss you were fighting to be exactly as hurt as it was when you saved. So it's transient, but persistent across saves, and it relates only to that node, so it makes sense to store it in the node itself.
Performance
As with anything, you might worry about performance. If you have hundreds of sprites all getting hurt and healed non-stop, updating the User Data constantly could become a bottleneck. If you ever get to that point, consider subclassing SKSpriteNode, adding variables to track health and the like, and instead of dealing with the NSCoder stuff directly, simply commit the contents of your variables to the User Data when you need to save.

Having multiple unity scenes open simultaneously

I've been developing a board-style game in Unity3D. The main scene is the board, and has the data about each player and the current (randomly-generated) board stored within it.
I intend to add minigames into the game, for example when landing on a particular space on the board. Naturally, I would like to code the minigame in a separate scene. Is there a way I can do this without losing the instance of the current scene, so that the current scene's state is maintained?
Thanks in advance :)
Short answer: no, but there may be another way to do what you want.
A basic call to Application.LoadLevel will destroy the current scene before loading the next one. This isn't what you want.
If your minigame is relatively simple, you could use Instantiate to bring in a prefab and spawn it far away from the rest of your scene. You can even use scripts to switch to another camera, toggle player controls and other interactions in the scene, and so on. Once the minigame is done, you can destroy or disable whatever you brought in, and re-enable whatever needs to be turned on in the main scene.
You could create a separate scene and call Application.LoadLevelAdditive to load that scene without destroying the current one. As above, you can then use scripts to manage which cameras and scene behaviors are active.
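A minimal sketch of the additive-load approach ("Minigame", the camera field, and the launcher class are all hypothetical; the scene must be added to the build settings). On Unity 5.3+ the equivalent call is SceneManager.LoadScene("Minigame", LoadSceneMode.Additive).

```csharp
using UnityEngine;

// Loads the minigame scene on top of the board scene, so the board's
// state (players, generated board, etc.) is kept alive.
public class MinigameLauncher : MonoBehaviour
{
    public Camera boardCamera;             // wired in the Inspector
    public MonoBehaviour[] boardControls;  // scripts to suspend during the minigame

    public void StartMinigame()
    {
        // Hand control to the minigame: the loaded scene's own scripts
        // can enable its camera and input once it finishes loading.
        foreach (var control in boardControls) control.enabled = false;
        boardCamera.enabled = false;
        Application.LoadLevelAdditive("Minigame");
    }
}
```

When the minigame ends, destroy its root objects and re-enable the board camera and controls.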
If you're careful, you don't really need two separate scenes. It may be enough to "fake" a scene switch.
Hard to give a complete answer without code, but you should look into the following, either in the Unity documentation or on YouTube:
PlayerPrefs: one way of saving data, although I believe it isn't entirely secure, i.e. it can be edited from a text file.
Serializable: this is arguably better than PlayerPrefs.
DontDestroyOnLoad: can carry objects (and their data) over to other scenes.
Static variables: again, not sure if this will help your particular problem.