How do I create a test app to create many screenshots in UE4?

I'd like to create a test application for my Unreal Engine based game that generates screenshots. I want to place many (possibly thousands) of cameras throughout the maps and then have my test application enumerate them all and take a screen capture at each camera location.
I came across Taking Screenshots, but wanted to first check whether this is already built into UE4, either in the editor or in some tool. I'm also aware of the Screenshot Comparison Tool, but that doesn't seem to be what I need: I don't really want UE4 to do the image matching, I just want a directory full of images that I can do with as I please.
Any suggestions?

This is not exactly what you want to do, but I found this article very interesting: https://www.unrealengine.com/en-US/blog/capturing-stereoscopic-360-screenshots-videos-movies-unreal-engine-4?sessionInvalidated=true
It explains how the people at Ninja Theory Ltd produced their 360° video trailer, which in the end comes down to producing two 360° screenshots per frame.
What they did was export everything to a folder (as a sequence of images) and then do what they wanted with it (in this case, stitch the images together with ffmpeg to make a video).
They used a plugin; I do not know whether it can be tweaked to produce non-360 captures, but UE4's built-in "take screenshot" functionality could work for you.
More specifically to what you need: you could store all the positions/transforms in an array and loop over it when you want to take the screenshots. At each step, place your camera at the stored position, make sure it is the current active camera (so the "view" changes), and take a screenshot.
Taking screenshots and setting parameters such as the export folder, resolution, etc. can be done via console commands, and console commands can be executed from code or from Blueprint using the "Execute Console Command" node (there is an example in the article).
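A minimal sketch of that loop in UE4 C++ (untested; AMyScreenshotRunner is a hypothetical actor class, while HighResShot is UE4's built-in screenshot console command):

```cpp
#include "Camera/CameraActor.h"
#include "Kismet/GameplayStatics.h"
#include "Kismet/KismetSystemLibrary.h"

void AMyScreenshotRunner::CaptureAllCameras()
{
    // Gather every CameraActor placed in the map.
    TArray<AActor*> Cameras;
    UGameplayStatics::GetAllActorsOfClass(GetWorld(), ACameraActor::StaticClass(), Cameras);

    APlayerController* PC = UGameplayStatics::GetPlayerController(GetWorld(), 0);
    for (AActor* Camera : Cameras)
    {
        // Make this camera the active view target (instant cut, no blend).
        PC->SetViewTargetWithBlend(Camera, 0.0f);

        // HighResShot writes a screenshot to the project's Saved/Screenshots folder.
        // Caveat: in a real tool you must wait at least one rendered frame between
        // captures, e.g. by driving this loop from a timer rather than all at once.
        UKismetSystemLibrary::ExecuteConsoleCommand(GetWorld(), TEXT("HighResShot 1920x1080"), PC);
    }
}
```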
I hope it helps.

I think your best bet is rendering the cameras to a texture.
That way you can have multiple inactive cameras, then iterate through them: activate one, capture its view, and move on to the next.
For a basic tutorial, have a look at
https://www.youtube.com/watch?v=a9iho861SlY
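A hedged sketch of that capture-to-texture step in UE4 C++ (untested; assumes you have already created and wired up a USceneCaptureComponent2D and a UTextureRenderTarget2D):

```cpp
#include "Components/SceneCaptureComponent2D.h"
#include "Engine/TextureRenderTarget2D.h"
#include "ImageUtils.h"
#include "Misc/FileHelper.h"

void SaveCaptureToDisk(USceneCaptureComponent2D* Capture,
                       UTextureRenderTarget2D* RenderTarget,
                       const FString& FilePath)
{
    Capture->TextureTarget = RenderTarget;
    Capture->CaptureScene(); // render this camera's view into the target now

    // Read the pixels back from the render target...
    TArray<FColor> Pixels;
    RenderTarget->GameThread_GetRenderTargetResource()->ReadPixels(Pixels);

    // ...compress them to PNG and write the file to disk.
    TArray<uint8> Png;
    FImageUtils::CompressImageArray(RenderTarget->SizeX, RenderTarget->SizeY, Pixels, Png);
    FFileHelper::SaveArrayToFile(Png, *FilePath);
}
```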

Related

Why is the play button on my title screen not starting the game?

I managed to open the demo game that I need to see/play; however, it looks like the title screen isn't loading correctly. Clicking the "Play" button should let the user start the game, but when I click it, nothing happens.
I'm not sure why this is happening, because I downloaded exactly the same files as the ones used in the demo, and I have tried deleting and redownloading them a couple of times. I also double-checked the console messages, and there are no errors or warnings for any scripts. I'll attach a screenshot of what I see and a link to the game files themselves if anyone wants to try it on their end.
Also, if this helps, I'm using Unity version 2018.3.2f1.
Here is a link to the project if you want to try it out yourself (I'd post the code, but I don't want to put a giant block of code up without a clear direction; however, I believe the main menu content is in the "Manager.cs" file): https://drive.google.com/file/d/1ekXt948b612dmyT1AZReUOuzh2XbnSDG/view?usp=sharing
This is what the game looks like if it helps:
After reading through the code in the other scripts, I realized that the error was coming from the specific region used as a "hitbox" for the play button on the screen. Because I had set my aspect ratio differently from what the developer used, the "hitbox" did not line up correctly on my screen. Instead, I had to change the aspect ratio to a fixed resolution with specific canvas sizes (width and height).

Screen record in unity3d

How do I do screen recording in Unity?
I want to record my screen (gameplay) while my game is running.
It should support play/stop, replay, saving the recording locally on the device, and opening/loading previously recorded footage from the device.
My game has one camera that can capture the native camera feed, plus one 3D model.
I want to record both of them together and use this functionality whenever I want.
Thank you in advance.
This is hard to implement, but not impossible: every frame (or at some interval) you need to capture a screenshot of your camera view and store it in a list. You need a well-chosen interval value: a smaller interval gives a smoother replay but needs more memory, while a bigger interval makes the replay look laggy.
If you keep everything, your RAM fills up as you play and the OS will terminate the app, so you need to cover memory optimization thoroughly (a bounded buffer, sketched at the end of this answer, is one way). Another solution is ready-made assets from the Unity Asset Store.
EZ Replay Manager can be used (keep in mind: I haven't tried it yet); it comes in Free and Pro versions.
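A hedged, platform-neutral sketch of that bounded-buffer idea (hypothetical names, plain C++ rather than Unity C#, just to show the memory-capping logic):

```cpp
#include <deque>
#include <utility>
#include <vector>

struct CapturedFrame { double time; std::vector<unsigned char> pixels; };

class ReplayBuffer {
    std::deque<CapturedFrame> frames;
    std::size_t capacity; // max frames kept: the memory cap
    double interval;      // seconds between captures: smaller = smoother but heavier
    double lastCapture = -1e9;

public:
    ReplayBuffer(std::size_t capacity, double interval)
        : capacity(capacity), interval(interval) {}

    // Call once per rendered frame with the current time and screen pixels.
    void tick(double now, std::vector<unsigned char> screenPixels) {
        if (now - lastCapture < interval) return; // not time to capture yet
        lastCapture = now;
        frames.push_back({now, std::move(screenPixels)});
        if (frames.size() > capacity) frames.pop_front(); // drop the oldest frame
    }

    const std::deque<CapturedFrame>& replay() const { return frames; }
};
```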
Check out this open-source project: https://github.com/getsocial-im/getsocial-capture.
By default, the project records the Main Camera's rendered content. C# examples are in the repo.
You can record in two modes:
Continuous mode: capture the last X frames.
Manual mode: capture frames on your own when needed, for example to record a timelapse of the level.
Once the recording is done, you can generate a GIF, get the raw bytes, and do whatever you want with them, e.g. let your users share that GIF with friends.
Here's the recording of a game session from the test app. The recorded GIF shows up in the end:
Disclaimer: I worked at GetSocial at the time of writing.
Well, I know a guy who posted a similar project on GitHub: https://github.com/thanh-nguyen-kim/Unity_Android_Screen_Recorder
There is a limitation, though: this code only works on Android devices (Android only, not even iOS).
But it is a very powerful recorder: it captures whatever appears on the screen (so it is basically a screen recorder made with Unity), and it will also capture your microphone output. Give it a try.
And if you find any other solution, please tell me as well; it would be very helpful for me, because I want to record video with in-game audio and also save it to the gallery.
Unity now has a built-in screen recording tool. It's called Recorder and doesn't require any coding.
1. In Unity, go to the Window menu, then click on Package Manager.
2. By default, Packages might be set to "In Project". Select "Unity Registry" instead.
3. Type "Recorder" in the search box.
4. Select the Recorder and click Install in the lower right corner of the window.
That's about all you need to get everything set up, and hopefully the options make sense. The main thing to be aware of is that setting "Recording Mode" to "Single" will take a single screenshot (with F10).
NOTE: This is a copy of my answer from a Unity screenshots question

Which formats can I use for animated 3D objects

... in order to use them in an augmented reality application on the iPhone?
I have an app that works like this: the camera detects a marker and then places an object related to that marker. However, the object is not animated; it just stands there. Of course I could move the object programmatically, but I don't want to do that. What I want is for the object to carry its own animation. I searched but couldn't find the exact file format. There are .obj files (not animated themselves, is that true?), .mtl files, .anm files, etc. If the format is one of these, can you give me an example model?
Thanks
MD2 is definitely your best choice. I have integrated jPCT-AE with Vuforia and it works like a charm. AFAIK MD2 is best for animated models, mainly because it can store the keyframe animation in the file itself, and you can have any kind of animation with it at almost no cost.
Here is a test video:
http://youtu.be/chsHh0pEhzw
If you have more questions, do not hesitate ;)
You should specify which AR SDK/platform you are using to create the iPhone application you are talking about. That being said, many of the common AR SDKs use the MD2 format to display animated models (either with a built-in render engine or with example code that shows how to use the MD2 format with that SDK):
http://en.wikipedia.org/wiki/MD2_(file_format)
http://www.junaio.com/develop/docs/documenation/general/3dmodels/
OBJ (Wavefront) files do not support animation.

iOS frame by frame animation, by script

There are a few SO questions regarding frame-by-frame animation (such as frame by frame animation and other similar questions), but I feel mine is different, so here goes.
This is partially a design question, coming from someone with very little iOS experience.
I'm not sure "frame by frame" is the correct description of what I want to do, so let me describe it. Basically, I have a "script" of an animated movie and I'd like to play this script.
This script is a JSON file describing a set of scenes. In each scene there are a few elements such as a background image, a list of actors with their positions, and a background sound clip. Further, for each actor and background there's an image file that represents it. (It's a bit more complex: each actor also has a "behavior", such as how it blinks or how it talks.) So my job is to follow the given script, and on every frame place the actors at their designated positions, draw the correct background, and play the sound file.
The movie may be paused, or scrubbed forward or backward, similar to YouTube's movie player functionality.
Most of the questions I've seen that refer to frame-by-frame animation have different requirements than mine (I'll list some more requirements later). They usually suggest the animationImages property of a UIImageView. That is fine for animating a button or a checkbox, but they all assume a short, predefined set of images to be played.
If I were to go with animationImages I'd have to pre-create all the images up front, and my pure guess is that it won't scale (think 30 fps for one minute: you get 60 × 30 = 1800 images; plus the scrub and pause/play abilities seem challenging in this case).
So I'm looking for the right way to do this. My instinct, and I'm learning more as I go, is that there are three or four main ways to achieve this:
1. Use Core Animation, defining "keypoints" and animated transitions between those keypoints. For example, if an actor needs to be at point A at time t1 and at point B at time t2, then all I need to do is animate what's in between. I've done something similar in ActionScript in the past, and it was nice, but it was particularly challenging to implement the scrub action and keep everything in sync, so I'm not a big fan of this approach. Imagine having to implement a pause in the middle of an animation, or a scrub to the middle of an animation: doable, but not pleasant.
2. Set a timer for, say, 30 ticks a second, and on every tick consult the model (the script JSON file along with the descriptions of the actors and backgrounds) and draw what needs to be drawn at that time, using Quartz 2D's API and drawRect (a sketch of this model lookup follows the list). This is probably the simplest approach, but I don't have enough experience to tell how well it would perform on different devices, probably CPU-wise; it all depends on the amount of calculation I need to make on each tick and the amount of effort it takes iOS to draw everything. I don't have a hunch.
3. Similar to 2, but using OpenGL to draw. I prefer 2 because the API is easier, but perhaps OpenGL is more suitable resource-wise.
4. Use a game framework such as cocos2d, which I've never used before but which seems to solve more or less similar problems. They seem to have a nice API, so I'd be happy if I could find all my requirements answered by them.
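For option 2, the heart of the tick handler is a pure lookup from time to scene state, which is what makes pause and scrub trivial. A hedged, platform-neutral sketch (hypothetical names, plain C++ rather than iOS code):

```cpp
#include <algorithm>
#include <vector>

struct Keyframe { double t; double x; double y; };

struct Actor {
    std::vector<Keyframe> track; // sorted by t, loaded from the JSON script; assumed non-empty
};

// Position of an actor at time t: find the surrounding keyframes and
// linearly interpolate. Pausing is just "stop advancing t"; scrubbing is
// "jump t anywhere"; there is no hidden animation state to rewind.
Keyframe positionAt(const Actor& a, double t) {
    const std::vector<Keyframe>& k = a.track;
    if (t <= k.front().t) return k.front();
    if (t >= k.back().t)  return k.back();
    auto hi = std::lower_bound(k.begin(), k.end(), t,
        [](const Keyframe& kf, double time) { return kf.t < time; });
    auto lo = hi - 1;
    double u = (t - lo->t) / (hi->t - lo->t); // 0..1 between the two keyframes
    return { t, lo->x + u * (hi->x - lo->x), lo->y + u * (hi->y - lo->y) };
}
```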
On top of the requirements I've just described (play a movie given its "script" file and a description of the actors, backgrounds, and sounds), there is another set of requirements:
The movie needs to play in full-screen mode or in partial-screen mode (where the rest of the screen is dedicated to other controls).
I'm starting with the iPhone, but naturally the iPad should follow.
I'd like to be able to create a thumbnail of the movie for local use (to display in a gallery in my application). The thumbnail may just be the first frame of the movie.
I want to be able to "export" the result as a movie, something that could easily be uploaded to YouTube or Facebook.
So the big question here is whether any of the suggested implementations 1-4 (or others you might suggest) can somehow export such a movie.
If all four fail at the movie-export task, then I have an alternative in mind: use a server running ffmpeg that accepts a bundle of all the movie images (I'd have to draw them on the phone and upload them in sequence), and have the server compile the images and their soundtrack into a single movie.
Obviously, to keep things simple, I'd prefer to do this server-less, i.e. be able to export the movie from the iPhone itself; but if that's too much to ask, then the last requirement is to at least be able to export the set of all images (the keyframes of the movie) so I can bundle them and upload them to a server.
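For reference, the server-side ffmpeg step would be a one-liner; a hedged sketch, assuming numbered PNG frames and a WAV soundtrack (the file names here are hypothetical):

```
ffmpeg -framerate 30 -i frame%04d.png -i soundtrack.wav \
       -c:v libx264 -pix_fmt yuv420p -shortest movie.mp4
```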
The movie is supposed to be one or two minutes long. I hope the question wasn't too long and that it's clear...
Thanks!
Well-written question. For your video-export needs, check out AVFoundation (available as of iOS 4). If I were going to implement this, I'd try #1 or #4. I think #1 might be the quickest to just try out, but that's probably because I don't have any experience with cocos2d. I think you will be able to pause and scrub Core Animation: check out the CAMediaTiming protocol it adopts.
Ran, you do have a number of options. You are not going to find a "complete solution", but it is possible to make use of existing libraries in order to skip a bunch of implementation and performance issues. You can of course try to build this whole thing in OpenGL, but my advice is to go with another approach. What I suggest is that you render the entire "video" frame by frame on the device, based on your JSON settings. That basically comes down to setting up your scene elements and then determining the positions of each element for times [0, 1, 2, ...], where each number indicates a frame at some frame rate (15, 20, or 24 FPS would be more than enough).
First off, please have a look at my library for non-trivial iOS animations; in it you will find a class named AVOfflineComposition that does the "comp items and save to a file on disk" step. Obviously, this class does not do everything you need, but it is a good starting point for the basic logic of creating a comp of N elements and writing the results out to a video file. The point of creating a comp is that all of your code that reads settings and places objects at specific spots in the comp can run in an offline mode, and the result you get at the end is a video file. Compare this to all the details involved in maintaining all these elements in memory and then advancing more quickly or slowly depending on how fast everything is running.
The next step is to create one audio file that is the length of the "movie" of all the comped frames, and have it include any sounds at specific times. This basically means mixing the audio at runtime and saving the result to an output file, so that the result is easy to play with AVAudioPlayer. You can have a look at some very simple PCM mixer code that I wrote for this type of thing. But you might want to consider a more complete audio engine like theamazingaudioengine.
Once you have an audio file and a movie file, they can be played together and kept in sync quite easily using the AVAnimatorMedia class. Take a look at this AVSync example for source code that shows a tightly synced example of playing a video and showing a movie.
Your last requirement can be implemented with the AVAssetWriterConvertFromMaxvid class; it implements logic that reads a .mvid movie file and writes it as an h.264-encoded video using the h.264 encoder hardware on the iPhone or iPad. With this code, you will not need to write an ffmpeg-based server module. That would not work anyway, because it would take far too long to upload all the uncompressed video to your server; you need to compress the video to h.264 before it can be uploaded or emailed from the app.

iPhone Smooth Transition from One Video To Another

I have to figure out the best way to transition from one video to the next.
BASIC IDEA: For example, there is a video of a person walking; the user taps the video, and a seamless transition occurs to a video of a person running (an oversimplified example).
My first thought was to create two movie players and use transitions between the two view elements, but the movie player doesn't support that.
Stopping the current video, loading new content, and then starting it is a solution, but not a very elegant one. We are making an interactive sales tool for our reps, and we want this to look as professional as possible.
CURRENT THOUGHT: If there were some sample code for AVPlayer, it seems I could use AVVideoComposition to switch between videos? But details on how that might happen don't seem to be currently available.
POSSIBLE CLUE: I figured this would be easy, since I bought an app called Live Cams HD that shows 16 different video feeds at once.
Any ideas? Thanks in advance!
Steve, the short answer is that you are not going to be able to get the kind of results you want using AVPlayer. The h.264 video logic included in iOS is really great at playing video, and video/audio together, but it is really bad at starting/stopping and switching from one clip to another. The reason is that a lot of buffering needs to happen to load up and start playing an h.264 video in hardware. Basically, you need to roll your own code that sets the UIImage/CGImageRef for your views in a way that makes it easy to switch from one clip to another by simply switching from one array of UIImage objects to another. Of course, that is easy to say yet not so easy to implement.
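A hedged, platform-neutral sketch of that "switch between frame arrays" idea (hypothetical names, plain C++ rather than iOS code; Frame stands in for a decoded UIImage/CGImageRef):

```cpp
#include <cstddef>
#include <utility>
#include <vector>

struct Frame { /* decoded pixel data would live here */ };

class ClipSwitcher {
    std::vector<Frame> walking;  // clip A, fully decoded up front
    std::vector<Frame> running;  // clip B, fully decoded up front
    const std::vector<Frame>* active = &walking;
    std::size_t index = 0;

public:
    ClipSwitcher(std::vector<Frame> a, std::vector<Frame> b)
        : walking(std::move(a)), running(std::move(b)) {}

    // Called on tap: retarget the display at the other array. Because both
    // clips are already decoded, there is no hardware-decoder spin-up and
    // the switch takes effect on the very next displayed frame.
    void toggle() {
        active = (active == &walking) ? &running : &walking;
        index = 0;
    }

    // Called once per display tick (e.g. from a display-link callback).
    const Frame& nextFrame() {
        const Frame& f = (*active)[index];
        index = (index + 1) % active->size(); // loop the active clip
        return f;
    }
};
```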
What I would suggest is that you evaluate existing code that already implements this logic instead of rolling your own. For example, have a look at this StreetFighter demo app. It shows how a very simple game-like iPhone UI can be constructed from a series of clips that show a character doing a kick, a punch, or throwing a fireball. The result looks like this:
I also wrote a blog post about seamless-video-looping-on-ios. You can of course roll your own code to do all this, but I would suggest reading more about my library at the linked website, as it will save you a lot of time.
After the first video has played every frame except the last one, you quickly swap to a view with an image of the last frame (basically the last frame), and then you transition into a view with an image of the first frame of the next video and start that one up.
Or you could create an animation from all your frames (programming your videos). That would make it customizable, but the quality will probably not be as good and the CPU usage can spike, so you will have to make a judgment call on that one.