I have a video in Kdenlive and I'm adding two guides to mark where I want to cut my project:
http://www.netcreate.pl/kden1.png
As you can see, this video is not muted.
Next I render these parts using the guides:
netcreate.pl/kden1-2.png
First and second. But after export, the first frames are silent and the audio isn't synced. I have placed the exported videos on the same timeline to show the problem:
http://www.netcreate.pl/kden3.png
I have checked all the settings and can't find the cause.
How can I export an exact part with full, synced audio and no silence at the beginning?
Sorry for the inconvenience; I can't post images inline without more experience/reputation on Stack Overflow.
I would expect that what you are seeing is a result of compressed formats being difficult to seek accurately. I assume that your source clip is an MP4 or some similar compressed format. You may find that you have better luck if you operate on uncompressed sources.
It would be interesting to see the results if you started with a clip with PCM audio and exported the cut to the same format; I expect the synchronization would be better. Note: I haven't tried your exact scenario myself, but I have fought similar problems with Kdenlive in the past. Personally, before I edit any clips in Kdenlive, I transcode everything to .mov files with video encoded as dnxhd and audio encoded as pcm_s16le. Since doing this, I have seen far fewer problems with audio sync and seeking.
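As a sketch of that transcoding step, here is a small helper that builds the ffmpeg command line for it. This assumes ffmpeg is on your PATH; the filenames are illustrative, and the DNxHD bitrate must match your resolution and framerate (36M here is the value for 1080p25), so adjust it for your footage.

```python
import subprocess

def transcode_cmd(src: str, dst: str) -> list[str]:
    """Build an ffmpeg command that re-encodes a clip as DNxHD video
    plus uncompressed PCM audio in a .mov container (edit-friendly)."""
    return [
        "ffmpeg", "-i", src,
        "-c:v", "dnxhd", "-b:v", "36M",   # bitrate must match resolution/fps
        "-c:a", "pcm_s16le",              # 16-bit little-endian PCM audio
        dst,
    ]

cmd = transcode_cmd("clip.mp4", "clip.mov")
# subprocess.run(cmd, check=True)  # uncomment to actually transcode
```

With uncompressed intermediates like this, cuts land on exact frames because every frame is independently decodable.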
I reproduced it, and it appears to be a bug. It reproduces whether I use a zone/guide selection in the render dialog or simply trim a clip, add it to the beginning of the timeline, and render the entire project. Interestingly, I cannot reproduce it in Shotcut 15.03, either by encoding a trimmed clip or by adding a trimmed clip to the timeline and encoding that. For all of my tests I used a PAL MPEG-2 clip rendered to DV. A patch to fix the bug is welcome.
Desired output (pun intended)
Capture video (and audio) and then directly apply a timestamp and GPS coordinates as a layer on top of the video, to embed them in the recording instantly/directly (e.g. via a subview/layer or CIFilter). The text layer is dynamic; e.g. the timestamp keeps changing while the film is rolling.
How To Add Text to Video in Swift and AVFoundation Tutorial: Adding Overlays and Animations to Videos both describe how to do it in post-production (i.e. when the file is already saved). But this adds considerable time to the UX: if you record a 1-minute video, embedding the overlay afterwards would take another 30 seconds, which is not the desired UX.
Is there a way to embed the text/CIFilter directly, before saving the video?
The approach of editing each frame "manually" in the capture output is slow (only 5-10 frames per second).
Adding AVVideoCompositing seems like the right approach, but I don't know where to "hook it up". In the AVCaptureOutput? Or is it possible directly in the AVCaptureMovieFileOutput?
Does anyone have a GitHub repo or sample code showing how to approach this?
Any help appreciated. Thank you!
I have been asked to edit a few videos for a group, which is outside my developer skill set. I have settled on Blender 2.79 and have managed to make all of the changes; I have a finished item that I am happy with.
I entered the settings according to the excellent Mikeycal Meyers YouTube tutorials, but the output, rendered from HD-quality footage, was horrendously pixelated. I updated the settings according to the last 2.79 update video:
https://www.youtube.com/watch?v=CrpWJZsZpBk&index=31&list=PLjyuVPBuorqIhlqZtoIvnAVQ3x18sNev4
But now I do not seem to get an output file at all. Any thoughts, please?
Render settings
File without extension
I'm building an iOS app which requires me to let users record a 15-second clip (with UIImagePickerController, for example) and then convert it into an animated GIF. How can I achieve this? Is there any library/framework available for such a task?
Thanks!
This gist may help you:
https://gist.github.com/mayoff/4969104
It shows how to export frames into a GIF image.
I don't believe there is any existing library that will do a straight conversion for you. There are a lot of libraries for displaying animated GIFs, but far fewer native Objective-C libraries for creating them.
Fortunately, iOS does have support for saving as GIFs. There's an existing StackOverflow answer that covers how to create animated GIFs in-depth here:
Create and export an animated gif via iOS?
...there's also a library on GitHub that abstracts the lower-level stuff away, although it hasn't been maintained for a while (link here).
All you'll need to do is create an array of the frames you want to convert into your GIF. I strongly recommend you don't try to convert every single frame in your 15-second video, if only because you'll end up with a very large GIF at a framerate that's too high. You would be better off picking every other frame, or even every third or fourth frame, from your video sample. Capturing images from video is also pretty well documented on iOS.
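The frame-picking step above is just arithmetic; here is a minimal sketch of computing which timestamps to capture. The function name and parameters are illustrative, not from any library.

```python
def gif_frame_times(duration_s: float, source_fps: float, stride: int) -> list[float]:
    """Timestamps (in seconds) of the frames to capture: every `stride`-th
    source frame, so the GIF stays small and its framerate stays sane."""
    total = int(duration_s * source_fps)
    return [i / source_fps for i in range(0, total, stride)]

# Keep every 3rd frame of a 15 s, 30 fps clip:
times = gif_frame_times(15, 30, 3)   # 150 frames, an effective 10 fps
```

You would then grab an image at each of these timestamps (e.g. with an asset image generator) and feed the resulting array to the GIF writer.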
I recently created a library called Regift for converting videos to GIFs on iOS. Hopefully it will help anyone coming to this in the future :)
There are a few SO questions regarding frame-by-frame animation (such as frame by frame animation and other similar questions), but I feel mine is different, so here goes.
This is partially a design question, from someone with very little iOS experience.
I'm not sure "frame by frame" is the correct description of what I want to do, so let me describe it. Basically, I have a "script" of an animated movie and I'd like to play this script.
This script is a JSON file which describes a set of scenes. In each scene there are a few elements, such as a background image, a list of actors with their positions, and a background sound clip. Further, for each actor and background there's an image file that represents it. (It's a bit more complex: each actor has a "behavior", such as how it blinks, how it talks, etc.) So my job is to follow the given script, referencing the actors and background, and on every frame place the actors in their designated positions, draw the correct background, and play the sound file.
The movie may be paused and scrubbed forward or backward, similar to YouTube's movie-player functionality.
Most of the questions I've seen that refer to frame-by-frame animation have different requirements than mine (I'll list some more requirements later). They usually suggest using the animationImages property of a UIImageView. This is fine for animating a button or a checkbox, but they all assume there's a short, predefined set of images that need to be played.
If I were to go with animationImages, I'd have to pre-create all the images up front, and my pure guess is that it won't scale (think about 30 fps for one minute: you get 60×30 = 1800 images; plus, the scrub and pause/play abilities seem challenging in this case).
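A quick back-of-the-envelope check confirms that pre-rendering every frame doesn't scale. Assuming a 320×480 screen and 4-byte RGBA pixels (both assumptions, for illustration):

```python
fps, seconds = 30, 60
frames = fps * seconds            # 1800 images for one minute of animation
w, h, bytes_per_px = 320, 480, 4  # 320x480 screen, RGBA
raw_mb = frames * w * h * bytes_per_px / 1024**2
print(frames, round(raw_mb))      # prints: 1800 1055
```

Over a gigabyte of uncompressed pixel data for a single minute, which is far beyond what animationImages is meant to hold in memory.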
So I'm looking for the right way to do this. My instinct, and I'm learning more as I go, is that there are probably three or four main ways to achieve this.
By using Core Animation and defining "keypoints" and animated transitions between those keypoints. For example, if an actor needs to be at point A at time t1 and point B at time t2, then all I need to do is animate what's in between. I've done something similar in ActionScript in the past and it was nice, but it was particularly challenging to implement the scrub action and keep everything in sync, so I'm not a big fan of the approach. Imagine that you have to implement a pause in the middle of an animation, or scrub to the middle of an animation. It's doable but not pleasant.
Set a timer for, say, 30 ticks a second, and on every tick consult the model (the model is the script JSON file along with the description of the actors and the backgrounds) and draw what needs to be drawn at that time, using Quartz 2D's API and drawRect. This is probably the simplest approach, but I don't have enough experience to tell how well it will work on different devices, probably CPU-wise; it all depends on the amount of calculation I need to do on each tick and the effort it takes iOS to draw everything. I don't have a hunch.
Similar to 2, but using OpenGL to draw. I prefer 2 because the API is easier, but perhaps resource-wise OpenGL is more suitable.
Use a game framework such as cocos2d, which I've never used before but which seems to solve more or less similar problems. They seem to have a nice API, so I'd be happy if I could find all my requirements answered by it.
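Whichever option is chosen, the heart of the tick-driven approach is interpolating each actor's position from the script's keypoints, and this is also what makes pause/scrub cheap: any time t maps directly to a position. A minimal sketch, with hypothetical names and a simple linear easing:

```python
def position_at(keyframes, t):
    """keyframes: sorted list of (time, (x, y)) pairs from the script.
    Linear interpolation, clamped to the first/last keyframe.
    Scrubbing is free: any t maps straight to a position."""
    if t <= keyframes[0][0]:
        return keyframes[0][1]
    if t >= keyframes[-1][0]:
        return keyframes[-1][1]
    for (t0, (x0, y0)), (t1, (x1, y1)) in zip(keyframes, keyframes[1:]):
        if t0 <= t <= t1:
            f = (t - t0) / (t1 - t0)   # fraction of the way between keyframes
            return (x0 + f * (x1 - x0), y0 + f * (y1 - y0))

actor = [(0.0, (0, 0)), (2.0, (100, 50))]
position_at(actor, 1.0)   # -> (50.0, 25.0)
```

On each timer tick (or for each offline frame) you would call this for every actor and draw the results; pausing is just not advancing t.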
On top of the requirements I've just described (play a movie given its "script" file and a description of the actors, backgrounds, and sounds), there's another set of requirements:
The movie needs to be played in full-screen mode or partial-screen mode (where the rest of the screen is dedicated to other controls).
I'm starting with the iPhone, but naturally an iPad version should follow.
I'd like to be able to create a thumbnail of the movie for local use on the phone (to display in a gallery in my application). The thumbnail may just be the first frame of the movie.
I want to be able to "export" the result as a movie, something that can be easily uploaded to YouTube or Facebook.
So the big question here is whether any of the implementations 1-4 I have in mind (or others you might suggest) can somehow export such a movie.
If all four fail at the movie-export task, then I have an alternative in mind: use a server which runs ffmpeg and which accepts a bundle of all the movie images (I'd have to draw them on the phone and upload them to the server in sequence), and the server would then compile all the images with their soundtrack into a single movie.
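For what it's worth, the server-side step would amount to a single ffmpeg invocation. A sketch of the command it would run, where the filename pattern, framerate, and codecs are all assumptions:

```python
def mux_cmd(pattern: str, audio: str, out: str, fps: int = 30) -> list[str]:
    """Build an ffmpeg command that turns an image sequence
    (frame0001.png, frame0002.png, ...) plus a soundtrack into one
    H.264 movie, trimmed to the shorter of the two inputs."""
    return [
        "ffmpeg",
        "-framerate", str(fps), "-i", pattern,  # e.g. "frame%04d.png"
        "-i", audio,
        "-c:v", "libx264", "-pix_fmt", "yuv420p",  # widely playable output
        "-c:a", "aac", "-shortest",
        out,
    ]

cmd = mux_cmd("frame%04d.png", "soundtrack.m4a", "movie.mp4")
```

So the server logic itself is trivial; the cost of this route is uploading 1800+ uncompressed frames per minute of movie.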
Obviously, to keep things simple, I'd prefer to do this server-less, i.e. be able to export the movie from the iPhone; but if that's too much to ask, then the last requirement would be to at least be able to export the set of all images (keyframes of the movie) so I can bundle them and upload them to a server.
The movie is supposed to be one or two minutes long. I hope the question wasn't too long and that it's clear...
Thanks!
Well-written question. For your video-export needs, check out AVFoundation (available as of iOS 4). If I were going to implement this, I'd try #1 or #4. I think #1 might be the quickest to just try out, but that's probably because I don't have any experience with cocos2d. I think you will be able to pause and scrub with Core Animation: check out the CAMediaTiming protocol it adopts.
Ran, you do have a number of options. You are not going to find a "complete solution", but it is possible to make use of existing libraries in order to skip a bunch of implementation and performance issues. You can of course try to build this whole thing in OpenGL, but my advice is that you go with another approach. What I suggest is that you render the entire "video" frame by frame on the device, based on your JSON settings. That basically comes down to setting up your scene elements and then determining the position of each element for times [0, 1, 2, ...], where each number indicates a frame at some framerate (15, 20, or 24 FPS would be more than enough). First off, please have a look at my library for non-trivial iOS animations; in it you will find a class named AVOfflineComposition that does the "comp items and save to a file on disk" step. Obviously, this class does not do everything you need, but it is a good starting point for the basic logic of creating a comp of N elements and writing the results out to a video file. The point of creating a comp is that all of your code that reads settings and places objects at specific spots in the comp can be run in an offline mode, and the result you get at the end is a video file. Compare this to all the details involved in maintaining all these elements in memory and then advancing more quickly or slowly depending on how fast everything is running.
The next step will be to create one audio file that is the length of the "movie" of all the comped frames, and have it include any sounds at specific times. That basically means mixing the audio at runtime and saving the results to an output file, so that the results are easy to play with AVAudioPlayer. You can have a look at some very simple PCM mixer code that I wrote for this type of thing. But you might want to consider a more complete audio engine like theamazingaudioengine.
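Conceptually, that PCM mixing step is just summing samples with clipping. A minimal sketch (16-bit mono samples assumed, represented here as plain integer lists for illustration):

```python
def mix_pcm(a: list[int], b: list[int]) -> list[int]:
    """Mix two streams of signed 16-bit PCM samples by summation,
    clamping to the 16-bit range to avoid wrap-around distortion."""
    n = max(len(a), len(b))
    a = a + [0] * (n - len(a))   # pad the shorter stream with silence
    b = b + [0] * (n - len(b))
    return [max(-32768, min(32767, x + y)) for x, y in zip(a, b)]

mix_pcm([1000, 30000], [2000, 10000])  # -> [3000, 32767] (second sample clipped)
```

Placing a sound "at a specific time" is then just offsetting it with leading silence before mixing it into the master track.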
Once you have an audio file and a movie file, they can be played together and kept in sync quite easily using the AVAnimatorMedia class. Take a look at this AVSync example for source code that shows a tightly synced example of playing audio and showing a movie.
Your last requirement can be implemented with the AVAssetWriterConvertFromMaxvid class; it implements logic that will read a .mvid movie file and write it as an H.264-encoded video using the H.264 encoder hardware on the iPhone or iPad. With this code, you will not need to write an ffmpeg-based server module. That would not work well anyway, because it would take too long to upload all the uncompressed video to your server; you need to compress the video to H.264 before it can be uploaded or emailed from the app.
I have read several posts on both matters, but I haven't seen anyone compare them so far.
Suppose I just want a full-screen animation without any transparency etc., just a couple of seconds of animation (1-2 s) when the app starts. Does anyone know how video compares to a sequence of images (320×480 @ 30 fps) on the iPhone, regarding performance etc.?
I think there are a few points to think about here.
The size of the animation, as pointed out above. You could try a framerate of 15 images per second, so that would be 45 images for 3 s. That is quite a lot of data.
The video would be compressed, as mentioned before, in H.264 (Baseline Profile Level 3.0) format or MPEG-4 Part 2 (Simple Profile) format, which means it's going to be reasonably small.
I think you will need to go with video, because:
1. 45 full-screen PNG images are going to require a lot of RAM. I don't think that is going to work well.
Lastly, you will need to add the Media Player framework, which will have to be loaded into memory, and this is going to increase your load times.
MY ADVICE: It sounds like the animation is a bit superfluous to the app. I hate apps that take ages to load, and this is only going to increase your app's startup time. If you can avoid doing this, then don't. Make your app fast. If you can do this at some other time after load, then that's fine.
The video will be a lot more compressed than a sequence of images, because video compression takes previous-frame data into account to reduce the bitrate. It will take more power to decode; however, the iPhone has hardware for that, and the OS has APIs that use this hardware, so I wouldn't feel bad about making use of them.
Do not overlook the possibility of rendering the sequence in real time.