I want to save a video of the results of my AnyLogic simulation. Could you send me a guide?
I want to send my results to others who don't have AnyLogic installed, and I also want to use a video of the results in my PowerPoint presentation.
Thanks!
AnyLogic does not come with built-in video recording. Either you upload the model to AnyLogic Cloud and let your users play with it themselves, or you install screen-recording software such as the free ScreenCast-o-Matic.
You can set up a script to speed up, slow down, jump to a time, zoom, pan, change views, etc. during the run. You will then need screen-capture software such as Snagit (or similar) to actually capture the video and save it to a file.
With such a script you can create a more interesting video, moving around and focusing on different areas without having to record a whole bunch of smaller videos and piece them together.
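As a rough sketch of what such a script can look like (AnyLogic models are written in Java): put code like this in the action of a cyclic Event on Main, assuming two ViewArea elements named viewAreaOverview and viewAreaDetail. Those names are hypothetical, so rename them to match your model.

    // Run the model against wall-clock time so a screen recording plays back smoothly
    getEngine().setRealTimeMode(true);

    if (time() < 60) {
        // First 60 model time units: slow down and focus on the detail view
        getEngine().setRealTimeScale(0.5);   // half real-time speed
        viewAreaDetail.navigateTo();
    } else {
        // Afterwards: speed up and pull back to the overview
        getEngine().setRealTimeScale(10);    // 10x real-time speed
        viewAreaOverview.navigateTo();
    }

Because navigateTo() pans the presentation window to the given view area and setRealTimeScale() changes the speed relative to real time, the camera moves and pacing are reproducible from run to run, which makes the screen recording step much easier.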
I don't know if this is the right place to ask, but I have a question.
I am working on a student project where we want to stream a video from one server to multiple devices, but the video should only play on one display at a time; the other displays should be black. If you switch to another display, the video should continue seamlessly. (Please ask if you need clarification.)
The video outputs can be plain displays or displays with additional servers, so a wide range of protocols can be implemented. A Wi-Fi connection is possible, and web solutions that run the video in a browser are also possible.
I have thought of several alternatives such as DLNA, RTP, or Chromecast, but I don't know where to start or which is the right solution. It is important that the video streamed from the server can be continued seamlessly on any display.
It would mean a lot to me if you could give me a hint.
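I'm not sure about the right protocol either, but whichever one you pick, one way to make "continue seamlessly" concrete is to keep a single playback clock on the server: whichever display becomes active asks for the elapsed offset and seeks its local player to it. A rough Java sketch of that bookkeeping (class and method names are made up):

    // Hypothetical server-side helper: records when playback started so that any
    // display that becomes active can seek its local player to the current offset.
    public class PlaybackClock {
        private final long startedAtMillis = System.currentTimeMillis();
        private final long videoLengthMillis;

        public PlaybackClock(long videoLengthMillis) {
            this.videoLengthMillis = videoLengthMillis;
        }

        // Offset in milliseconds a newly activated display should seek to.
        // The modulo assumes the video loops; drop it if playback just ends.
        public long currentOffsetMillis() {
            long elapsed = System.currentTimeMillis() - startedAtMillis;
            return elapsed % videoLengthMillis;
        }
    }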
I was wondering if there was a way to record the sensor and video data from my iPhone, save it in some way, and then feed it into Unity to test an AR app.
I'd like to see how different algorithms behave on identical input, and that's hard to do when the only way to test is to pick up my phone and wave it around.
What you can do is capture the image buffer. I've done something similar using ARCore; I'm not sure whether ARKit has a similar implementation. I found this in a brief search: https://forum.unity.com/threads/how-to-access-arframe-image-in-unity-arkit.496372/
In ARCore, you can take this image buffer and, using ImageConversion.EncodeToPNG, create PNG files named with the timestamp. You can pull your sensor data in parallel and, depending on what you want, write it to a file using a similar approach: https://support.unity3d.com/hc/en-us/articles/115000341143-How-do-I-read-and-write-data-from-a-text-file-
After that, you can use FFmpeg to convert these PNGs into a video. If you want to try different algorithms, there's a good chance the PNGs alone will be enough; otherwise you can use a command like the one described here: http://freesoftwaremagazine.com/articles/assembling_video_png_stream_ffmpeg/
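For reference, here is a small standalone sketch (plain Java, not Unity code) of invoking FFmpeg over the numbered PNGs; the frame-name pattern, frame rate, and folder name are assumptions about how you saved the captures:

    import java.io.File;
    import java.io.IOException;

    public class AssembleVideo {
        public static void main(String[] args) throws IOException, InterruptedException {
            // Equivalent to: ffmpeg -framerate 30 -i frame_%05d.png -c:v libx264 -pix_fmt yuv420p capture.mp4
            ProcessBuilder pb = new ProcessBuilder(
                    "ffmpeg",
                    "-framerate", "30",         // rate at which the PNGs were captured
                    "-i", "frame_%05d.png",     // numbered input frames (assumed naming)
                    "-c:v", "libx264",          // H.264 output
                    "-pix_fmt", "yuv420p",      // broad player compatibility
                    "capture.mp4");
            pb.directory(new File("captures")); // folder holding the PNGs (assumed)
            pb.inheritIO();                     // show FFmpeg's output in the console
            int exitCode = pb.start().waitFor();
            System.out.println("ffmpeg exited with code " + exitCode);
        }
    }

The same FFmpeg arguments work directly from the command line if you'd rather not script it.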
You should be able to pass these images and the corresponding sensor data to your algorithm to check.
We're using the HoloLens' locatable camera (in Unity) to perform a number of image recognition tasks. We'd like to utilize the mixed reality capture feature (MRC) available in the HoloLens developer portal so that we can demo our app, but MRC crashes because we're hogging the camera in Photo Mode.
Does anyone have a good workaround for this? We've had some ideas, but none of them are without large downsides.
Solution: Put your locatable camera in Video Mode so that you can share the video camera with MRC.
Downside: Video Mode only allows us to save the video to disk, but we need real-time access to the buffer in memory (the way Photo Mode gives us access) so that we can do our detection in real time.
Solution: Capture the video in a C++ plugin, and pass the frame bytes to Unity. This allows MRC to work as expected.
Downside: We lose the 'locatable' part of the 'locatable camera' as we no longer get access to the cameraSpaceToWorldSpace transformation matrix, which we are utilizing in our UI to locate our recognized objects in world space.
Sub-solution: Recreate the locatable camera view's transformation matrix yourself.
Sub-downside: I don't have any insight into how Microsoft creates this transformation matrix. I imagine it involves some hardware complexities, such as accounting for lens distortions. If someone can guide me to how this matrix is created, that might be one solution.
Solution: Turn off object recognition while you create the MRC, then turn it back on when you're done recording.
Downside: Our recognition system runs in real time, n times per second. There would be no way to capture the recognitions on video.
We ended up creating a plugin for Unity that uses Microsoft's Media Foundation to get access to the video camera frames. We open-sourced it in case anyone else runs into this problem.
The plugin mimics Unity's VideoCapture class so that developers will find it easy to understand and implement.
Hopefully this is helpful to a few.
I tried to track motion in MATLAB using this tutorial (http://www.mathworks.com/help/vision/examples/motion-based-multiple-object-tracking.html) and it works fine, but it requires a video file as its source.
I want to know whether it's possible to do the same motion tracking with this tutorial in real time, using a camera as the source.
Everything is possible; just please try to find some of this by yourself before asking here.
I think you may find the information you need in this link:
http://www.matlabtips.com/realtime-processing/
Alternatively, you could of course just store the camera output as a (very short) video and continuously analyse that instead.
As of release R2014a, MATLAB includes support for USB webcams. If you have an older version, or if you want to use a high-end camera, you would need the Image Acquisition Toolbox.
Once you are able to get frames from the camera, you can reuse almost all of the code in the multiple object tracking example. You would only need to rewrite the readFrame function with code to get a frame from the camera.
I have a number of music tracks, and I would like the user to be able to preview a small clip of each.
These tracks are on a server.
How is media streamed into the app, and which player is used? Can a custom player be created to play the clips within the view, without, e.g., the QuickTime player opening?
Thanks
If you don't want to use QuickTime, the matter is rather complex, as far as I know. Fortunately, a lot of the work has already been done for you by Matt Gallagher. See this excellent post for further information. The code that he provides works perfectly in my application.