I am working in Unreal 4.19 and am trying to create a Matinee. Every time I try to add keyframes, even though I have the items I want to add selected, the system tells me that I don't have them selected. Why?
Before you go too far down this path, I would recommend using Sequencer in place of Matinee - you'll find actor selection and interaction far more robust and easier to use than they were in Matinee. Matinee is a legacy tool that Epic has retained for backward compatibility, but it is no longer maintained or updated.
I'm working on a system that automatically generates and releases a ScriptableObject representing uniqueness for each GameObject that has a given component attached.
The generation process seems to work fine, but I'm running into a problem when I want to release the ScriptableObject.
The script in question is tagged "ExecuteInEditMode" and implements the OnDestroy method to notify a manager that its ScriptableObject should be deleted.
The problem is that OnDestroy is called in four situations, from what I can tell:
Click on play
Switching scene
Manual removal (the only one I want to work with)
On editor shutdown
I've been able to avoid the first two with this:
if (Application.isPlaying == false && Math.Abs(Time.timeSinceLevelLoad) > 0.00001f)
But I can't find a good solution for "On editor shutdown". I've seen that EditorApplication.quitting can be used for that case, but the documentation warns that this event is not called when Unity crashes or is forced to quit, and I don't want to lose all my ScriptableObjects if the editor crashes.
I'm looking for a robust solution to this problem; any help would be appreciated.
Thanks, have a nice day.
Just handle the application quitting event.
this event is not called when Unity crashes or is forced to quit, and I don't want to lose all my ScriptableObjects if the editor crashes.
When Unity crashes, so will your program, so it won't work anyway. Just make sure that under normal operating procedure (i.e. a normal shutdown) you save your modifications.
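As a minimal sketch of that, assuming Unity 2018.1 or newer (where EditorApplication.quitting exists) and a hypothetical manager holding your cleanup logic:

using UnityEditor;

[InitializeOnLoad]
static class ScriptableObjectCleanup
{
    static ScriptableObjectCleanup()
    {
        // Raised on a normal editor shutdown only, not on a crash or forced quit.
        EditorApplication.quitting += OnEditorQuitting;
    }

    static void OnEditorQuitting()
    {
        // Hypothetical manager call: release or persist the generated ScriptableObjects here.
        // ScriptableObjectManager.ReleaseAll();
        AssetDatabase.SaveAssets();
    }
}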
In Unity3D, if you edit your scene and don't save, you lose all your changes if the editor crashes. The same will happen for what you're building here. To mitigate this risk (since prevention is reasonably difficult, if not impossible), you can opt to save every minute, always save each change to a temporary location, or save on every play, for example.
For example, I added a script to my project that autosaves the scene(s) whenever I press Play, so when editing the scene I press Play now and then to test, and it all gets autosaved.
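A minimal version of such an autosave script could look like this (assuming Unity 2017.2+, which has the typed playModeStateChanged event; older versions use the untyped EditorApplication.playmodeStateChanged callback instead):

using UnityEditor;
using UnityEditor.SceneManagement;

[InitializeOnLoad]
static class AutoSaveOnPlay
{
    static AutoSaveOnPlay()
    {
        EditorApplication.playModeStateChanged += OnPlayModeChanged;
    }

    static void OnPlayModeChanged(PlayModeStateChange state)
    {
        // Save all open scenes and any dirty assets just before entering Play Mode.
        if (state == PlayModeStateChange.ExitingEditMode)
        {
            EditorSceneManager.SaveOpenScenes();
            AssetDatabase.SaveAssets();
        }
    }
}

Saving in ExitingEditMode means the scenes are written to disk before any play-mode changes can be lost.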
Apparently this is a beginner's question (and that's what I am), but it's driving me crazy and I couldn't find it mentioned anywhere:
I occasionally forget to exit Play Mode and go on building my UI, making objects and changes, only to realize that I'm still in Play Mode and that as soon as I un-press the Play button, those changes will be purged! I suppose the Unity Editor has its reasons for allowing editing of scripts/scenes while in Play Mode (I'd be happy to hear some examples; maybe testing scenes?), but my main question here is:
Is there some way to prevent this behavior? Or at least some trick you use to avoid making changes while in Play Mode? (Other than becoming paranoid about it and checking constantly...)
Thank you
P.S. Sigh, time to head back to Unity and rebuild that UI I lost...
Other Unity coders have had this problem before me and they came up with a neat solution.
Setting the editor UI to a different colour while in Play Mode, using the "Playmode tint" preference.
You can read the details here (originally posted in 2009, but I have checked that it still works in the latest Unity 5.3):
http://answers.unity3d.com/questions/9159/best-strategies-for-not-accidently-editing-whilst.html
There are no settings to prevent changing things during Play Mode, but there are ways to reduce the chances of losing changes made during Play Mode.
1. Edit -> Preferences... -> Colors. On the right, change Playmode Tint to red. That will remind you that you are making changes in Play Mode.
2. Click the gear icon of each component you change during play, then click Copy Component. When you are done with Play Mode, select the component whose changes you want to keep, click the gear icon again, and this time click Paste Component Values.
3. Write an editor plugin that does this for you. This is harder but possible: use an event to find out when Play Mode is entered, and store the important public properties of your GameObjects (such as Transform/Rigidbody values) in a list. Wait for the stop event to fire, then decide which GameObjects you want to overwrite settings on, and overwrite the properties of the selected GameObjects. That's it. (A rough sketch follows after the note below.)
Useful APIs for this:
EditorApplication.isPlaying
EditorApplication.isPaused
EditorApplication.isPlayingOrWillChangePlaymode
EditorApplication.playmodeStateChanged += callBackFunc;
EditorApplication.HierarchyWindowItemCallback
EditorApplication.ProjectWindowItemCallback
Note: according to the Unity roadmap, a feature that lets you save Play Mode changes is under construction and will be released soon, but the above seems to be the only way at this time.
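To make option 3 above more concrete, here is a rough sketch (assuming Unity 2017.2+ and that the objects of interest can be matched by name; the trackedNames array is an illustrative assumption). It records the positions of a few tracked objects when Play Mode ends and restores them once the editor is back in Edit Mode:

using System.Collections.Generic;
using UnityEditor;
using UnityEngine;

[InitializeOnLoad]
static class PlayModeChangeKeeper
{
    // Illustrative assumption: the objects whose positions should survive Play Mode.
    static readonly string[] trackedNames = { "Player", "Main Camera" };
    static readonly Dictionary<string, Vector3> savedPositions = new Dictionary<string, Vector3>();

    static PlayModeChangeKeeper()
    {
        EditorApplication.playModeStateChanged += OnStateChanged;
    }

    static void OnStateChanged(PlayModeStateChange state)
    {
        if (state == PlayModeStateChange.ExitingPlayMode)
        {
            // Record values while the play-mode objects still exist.
            foreach (string name in trackedNames)
            {
                GameObject go = GameObject.Find(name);
                if (go != null)
                    savedPositions[name] = go.transform.position;
            }
        }
        else if (state == PlayModeStateChange.EnteredEditMode)
        {
            // The scene has been reloaded; re-find the objects and restore their positions.
            foreach (KeyValuePair<string, Vector3> pair in savedPositions)
            {
                GameObject go = GameObject.Find(pair.Key);
                if (go != null)
                {
                    Undo.RecordObject(go.transform, "Restore play mode position");
                    go.transform.position = pair.Value;
                }
            }
            savedPositions.Clear();
        }
    }
}

In practice you would extend this to whatever components and properties you care about; the captured values must be stored outside the scene, since the play-mode objects are destroyed when the editor reloads the edit-mode scene.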
You can select the objects you want to keep while in Play Mode, copy them with Ctrl + C, and then just paste them back in with Ctrl + V after returning to Edit Mode. Then you can either delete the originals from the scene or copy values from individual components, like @Programmer suggested.
I have a very intriguing obstacle to overcome. I am trying to display the live contents of a UIView in another, separate UIView.
What I am trying to accomplish is very similar to Mission Control in Mac OS X. In Mission Control, there are large views in the center, displaying the desktop or an application. Above that, there are small views that can be reorganized. These small views display a live preview of their corresponding app. The preview is instant, and the framerate is exact. Ultimately, I am trying to recreate this effect, as cheaply as possible.
I have tried many possible solutions, and the one shown here is as close as I have gotten. It works; however, the - (void)drawLayer:(CALayer *)layer inContext:(CGContextRef)ctx method isn't called on every change. My solution was to call [cloneView setNeedsDisplay] from a CADisplayLink so that it is called on every screen refresh. This is very near my goal, but the framerate is extremely low. I think that [CALayer renderInContext:] is much too slow.
If it is possible to have two CALayers render the same source, that would be golden. However, I am not sure how to approach this. Luckily, this is simply a concept app and isn't destined for the App Store, so I can make use of the private APIs. I have looked into IOSurface and Quartz contexts, but I haven't been able to solve this puzzle so far. Any input would be greatly appreciated!
iOS and OS X are actually mostly the same underneath at the lowest level. (However, higher up the stack iOS is largely more advanced than OS X, as it is newer and had a fresh start.)
However, in this case they both use the same mechanism (I believe). You'll notice something about Mission Control: it isolates "windows" rather than views. On iOS, each UIWindow has a ".contentID" property that a CALayerHost can use to make the render server share the render context between the two of them (two layers, that is).
So my advice is to make your views separate UIWindows and get native mirroring for free(-ish). (In my experience, the CALayerHost takes over the target layer's place with the render server, so if both the CALayerHost and the window are visible, the window won't be visible any more; only the layer host will be. Given the way they are used on OS X and iOS, that isn't a problem.)
So if you are after true mirroring, with two live copies, you'll need to resort to the sort of thing you were thinking about.
One option is to create a UIView subclass that uses this private UIView method:
https://github.com/yyfrankyy/iOS5.1-Framework-Headers/blob/master/UIKit.framework/UIView-Rendering.h#L12
to get an IOSurface for a target view, and then use a CADisplayLink to fetch and draw the surface once per second.
Another option which may work (I'm not sure, as I don't know your setup or desired effect) is simply to use a CAReplicatorLayer, which displays a mirror of a CALayer using the same backing store (very fast and efficient, plus a public, stable API).
Sorry I couldn't give you a definitive "this is the answer" reply, but hopefully I've given you enough ideas and possibilities to get started.
I've also included some links to things you might find useful to read.
What's the magic behind CAReplicatorLayer?
http://aptogo.co.uk/2011/08/no-fuss-reflections/
http://iphonedevwiki.net/index.php/SBAppContextHostManager
http://iphonedevwiki.net/index.php/SBAppContextHostView
http://iphonedevwiki.net/index.php/CALayerHost
http://iky1e.tumblr.com/post/33109276151/recreating-remote-views-ios5
http://iky1e.tumblr.com/post/14886675036/current-projects-understanding-ios-app-rendering
I'm creating an AIR app, but I realized that it doesn't seem to natively support "fling" momentum. I thought I'd ask whether anyone out there has created an object or plugin that puts this back in. Currently, on the objects I want to fling, I'm recreating the momentum myself, but it's not perfect yet. Could anyone put me on the right path for doing this?
Thanks!
I ended up creating my own fling, and it works great.
I keep the last three touch-move locations and times in memory; then, on touch-end, I find the difference between the most recent and the oldest touch in memory to get the speed and direction. Then I use Actuate to ease the object a certain distance past that point if necessary. (A rough sketch of the calculation follows below.)
It took a lot of tweaking to make it feel natural, though.
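For illustration only, here is a rough sketch of that velocity calculation in C# rather than the original ActionScript (the TouchSample type and three-sample buffer are assumptions, and the tweening itself is left to whatever library you use, such as Actuate above):

using System.Collections.Generic;

struct TouchSample
{
    public float X, Y;     // touch position in pixels
    public double Time;    // timestamp in seconds
}

class FlingTracker
{
    const int MaxSamples = 3;  // keep only the last three touch-move samples
    readonly Queue<TouchSample> samples = new Queue<TouchSample>();

    // Call this from every touch-move event.
    public void AddSample(float x, float y, double time)
    {
        if (samples.Count == MaxSamples)
            samples.Dequeue();
        samples.Enqueue(new TouchSample { X = x, Y = y, Time = time });
    }

    // Call this on touch-end. Computes the fling velocity in pixels per second
    // from the oldest and newest buffered samples, giving both speed and direction.
    public bool TryGetVelocity(out float vx, out float vy)
    {
        vx = 0f;
        vy = 0f;
        if (samples.Count < 2)
            return false;

        TouchSample oldest = samples.Peek();
        TouchSample newest = oldest;
        foreach (TouchSample s in samples)
            newest = s;  // the last element enqueued

        double dt = newest.Time - oldest.Time;
        if (dt <= 0)
            return false;

        vx = (float)((newest.X - oldest.X) / dt);
        vy = (float)((newest.Y - oldest.Y) / dt);
        samples.Clear();
        return true;
    }
}

How far the object should coast from that velocity is the part that, as noted above, takes tweaking to feel natural.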
Greensock has a great plugin which works just like the iOS native flick and can be customized in a variety of ways. Performance in AIR is superb, even on older iPads. For best results use BitMask.