I have a user recording and another mp3 file in my app, and I want the user to be able to export both of these as one, meaning the two files will be merged or laid over each other in some way.
In case that wasn't clear: both mp3 files are to be played at the same time, just as in any app where a user can record, say, a song over an instrumental.
The recording and the instrumental are two separate mp3 files that need to be exported as one.
How do I go about doing this? From what I've read, I can't find the solution. I see a lot about concatenating two audio files, but I don't want them to play one after the other; they should play at the same time.
Thanks.
EDIT: I know this is late, but in case anyone stumbles by this and was looking for sample code, it's in my answer here: How can I overlap audio files and combine for iPhone in Xcode?
If I get you right, you are asking for an audio mixer feature. This is not a trivial task.
Take a look at Core Audio. A good book to start with is this one.
One solution would be to create a GUI-less Audio Unit (a mixer unit) that plays, mixes and renders both signals (the MP3s).
Besides the programming aspect, there is also an audio engineering aspect here: you have to take care of the signal levels. Imagine you have two identical MP3s peaking at 0 dB. If you sum them, the peaks go above 0 dB, which doesn't exist in the digital domain (0 dBFS is the maximum), so the result clips. Because of that, you have to reduce the input levels before mixing.
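For example, if you end up doing the mix with AVFoundation rather than a Core Audio mixer unit (as the answers further down do), a minimal sketch of that level reduction is to attenuate each input track before export. The two composition tracks here are assumed to come from your own code:

import AVFoundation

// Assumes trackA and trackB are AVMutableCompositionTracks that already contain the two MP3s.
func makeAttenuatedMix(trackA: AVMutableCompositionTrack, trackB: AVMutableCompositionTrack) -> AVAudioMix {
    let audioMix = AVMutableAudioMix()
    audioMix.inputParameters = [trackA, trackB].map { track -> AVMutableAudioMixInputParameters in
        let params = AVMutableAudioMixInputParameters(track: track)
        // Halve each input so the sum of two full-scale signals stays below 0 dBFS.
        params.setVolume(0.5, at: .zero)
        return params
    }
    return audioMix
}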
EDIT: sorry for the late input, but maybe this helps someone in the future: Apple has an example for the Audio Mixer that I just stumbled upon.
If you are reading this in 2016 and you're looking for a solution in Swift 2.x, I've got you. My solution uses a closure to return the output file only after it has been written, so you don't immediately read a zero-byte file, since the export is asynchronous. It is specifically for overlapping two audio tracks, using the duration of the first track as the total output duration.
// Shared store for the per-track volume settings (kept as a property in the original answer):
var audioMixParams: [AVMutableAudioMixInputParameters] = []

func setUpAndAddAudioAtPath(assetURL: NSURL, toComposition composition: AVMutableComposition, duration: CMTime) {
    let songAsset = AVURLAsset(URL: assetURL, options: nil)
    let track = composition.addMutableTrackWithMediaType(AVMediaTypeAudio, preferredTrackID: kCMPersistentTrackID_Invalid)
    let sourceAudioTrack = songAsset.tracksWithMediaType(AVMediaTypeAudio)[0]

    let startTime = CMTimeMakeWithSeconds(0, 1)
    let tRange = CMTimeRangeMake(startTime, duration)

    // Set volume for this track
    let trackMix = AVMutableAudioMixInputParameters(track: track)
    trackMix.setVolume(1.0, atTime: kCMTimeZero)
    audioMixParams.append(trackMix)

    // Insert audio into track
    try! track.insertTimeRange(tRange, ofTrack: sourceAudioTrack, atTime: CMTimeMake(0, 44100))
}
func saveRecording(audio1: NSURL, audio2: NSURL, callback: (url: NSURL?, error: NSError?) -> ()) {
    let composition = AVMutableComposition()

    // Add audio tracks to the composition, using the first track's duration for both
    let avAsset1 = AVURLAsset(URL: audio1, options: nil)
    let assetTrack1 = avAsset1.tracksWithMediaType(AVMediaTypeAudio)[0]
    let duration = assetTrack1.timeRange.duration

    setUpAndAddAudioAtPath(audio1, toComposition: composition, duration: duration)
    setUpAndAddAudioAtPath(audio2, toComposition: composition, duration: duration)

    let audioMix = AVMutableAudioMix()
    audioMix.inputParameters = audioMixParams

    // If you need to query what formats you can export to, here's a way to find out
    NSLog("compatible presets for composition: %@", AVAssetExportSession.exportPresetsCompatibleWithAsset(composition))

    let format = NSDateFormatter()
    format.dateFormat = "yyyy-MM-dd-HH-mm-ss"
    let currentFileName = "recording-\(format.stringFromDate(NSDate()))-merge.m4a"

    let documentsDirectory = NSFileManager.defaultManager().URLsForDirectory(.DocumentDirectory, inDomains: .UserDomainMask)[0]
    let outputUrl = documentsDirectory.URLByAppendingPathComponent(currentFileName)

    let assetExport = AVAssetExportSession(asset: composition, presetName: AVAssetExportPresetAppleM4A)!
    assetExport.outputFileType = AVFileTypeAppleM4A
    assetExport.outputURL = outputUrl
    assetExport.audioMix = audioMix   // apply the per-track volumes collected above

    assetExport.exportAsynchronouslyWithCompletionHandler({
        audioMixParams.removeAll()
        switch assetExport.status {
        case .Failed:
            print("failed \(assetExport.error)")
            callback(url: nil, error: assetExport.error)
        case .Cancelled:
            print("cancelled \(assetExport.error)")
            callback(url: nil, error: assetExport.error)
        default:
            print("complete")
            callback(url: outputUrl, error: nil)
        }
    })
}
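A call site might look like this (recordingURL and instrumentalURL are placeholders for your two source files, not names from the code above):

// Hypothetical usage: merge the user's recording with the instrumental.
saveRecording(recordingURL, audio2: instrumentalURL) { url, error in
    if let url = url {
        print("merged file written to \(url)")
    } else {
        print("export failed: \(error)")
    }
}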
Related
I'm trying to create an audio loop in Swift and it works very well 9/10 times. But then suddenly, the 10th time (or so), the insertTimeRange function seems to fail in some way.
I can see that the player has the correct length, but instead of taking the full 60 seconds of audio from every loop, it just seems to take a very, very short part of it and loop every minute. Illustration of the problem with five loops:
.----------.----------.----------.----------.----------
(short but audible audio = ., complete silence = -)
Unfortunately, it doesn't throw any error. Here's how I create the composition:
private func createComposition(audioURL: URL, minutesToLoop: UInt) -> AVMutableComposition? {
    // Initiate new composition
    let composition = AVMutableComposition()

    // Add one audio track (channel) to the composition
    let compositionAudioTrack: AVMutableCompositionTrack? = composition.addMutableTrack(withMediaType: AVMediaType.audio, preferredTrackID: 0)

    // Create AVAsset from audio file (mp3 -> React Native bundle string -> URL -> AVAsset)
    let asset = AVURLAsset(url: audioURL)

    // Extract (now compatible) audio track from AVAsset
    let track = asset.tracks(withMediaType: AVMediaType.audio)[0]

    // Create a time range from 0 - 60 seconds that we can use to cut that out from the track
    let timeRange = CMTimeRangeMake(start: .zero, duration: CMTimeMakeWithSeconds(60, preferredTimescale: 600))

    if (compositionAudioTrack != nil) {
        // Repeat for as many minutes as user specified in the app
        for _ in 0...(minutesToLoop - 1) {
            do {
                // Take first 60 seconds from the audio file (timeRange, of: track) and paste over and over again exactly at the end of the track (at: composition.duration)
                try compositionAudioTrack!.insertTimeRange(timeRange, of: track, at: composition.duration)
            } catch {
                print("FAILED TO MERGE AUDIO")
                return nil
            }
        }
    }
    return composition
}
For future AVMutableComposition lovers, I found the solution. You need to add a boolean to your asset options (presumably so that AVFoundation computes precise rather than estimated timing for the MP3, which is what let insertTimeRange occasionally grab the wrong range), like so:
let asset = AVURLAsset(url: audioURL, options: [AVURLAssetPreferPreciseDurationAndTimingKey: true])
How can I get a video from the Gallery (Photos) in a custom format and size?
For example, I want to read a video in 360p.
I used the code below to get the video data, but Apple says it isn't guaranteed to be delivered in the lowest quality.
It's a PHAsset extension, so self refers to a PHAsset object.
var fileData: Data? = nil
let manager = PHImageManager.default()
let options = PHVideoRequestOptions()
options.isNetworkAccessAllowed = true
options.deliveryMode = .fastFormat
manager.requestAVAsset(forVideo: self, options: options) { (asset: AVAsset?, audioMix: AVAudioMix?, _) in
    if let avassetURL = asset as? AVURLAsset {
        guard let video = try? Data(contentsOf: avassetURL.url) else {
            print("reading video failed")
            return
        }
        fileData = video
    }
}
There is a simple reason it can't be guaranteed: a 360p version of the file might simply not exist on the device or in the cloud. So the Photos framework delivers the format nearest to what you request. If you want exactly 360p, I would recommend re-encoding the video you get from the Photos framework yourself.
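A minimal sketch of that re-encoding, assuming you already have the AVAsset that requestAVAsset handed you and that one of Apple's fixed-size presets (AVAssetExportPreset640x480 here) is close enough to your target; the reencode helper and outputURL are placeholders, not part of the Photos API:

import AVFoundation

// Hypothetical helper: write the delivered asset out again at a smaller, fixed-size preset.
func reencode(asset: AVAsset, to outputURL: URL, completion: @escaping (Bool) -> Void) {
    guard let export = AVAssetExportSession(asset: asset, presetName: AVAssetExportPreset640x480) else {
        completion(false)
        return
    }
    export.outputURL = outputURL
    export.outputFileType = .mp4
    export.exportAsynchronously {
        completion(export.status == .completed)
    }
}

If you need exactly 360p, you would have to drop down to AVAssetReader/AVAssetWriter with explicit video settings, since the export presets only come in a fixed set of sizes.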
Question:
How do I add an external WebVTT file to my AVPlayer in tvOS?
Description:
I've been watching this "What's New in HTTP Live Streaming" session by Apple, where they talk about different ways of implementing an external WebVTT file.
The whole subtitle domain is quite new to me, so I'm having a hard time grasping the concepts. Within the video they talk about many things I don't quite understand, such as the subtitle playlist. But getting past all that, my main concern is simply adding a .vtt file to my AVPlayer.
In the video they talk about AVMediaSelectionGroup(), but I'm quite confused about how to use it and how to implement it with my AVPlayerViewController:
class PlayerViewController: AVPlayerViewController {

    override func viewDidLoad() {
        self.setupVideoPlayerView()
    }

    private func setupVideoPlayerView() {
        let path = "https://link.to.my.video.mp4"
        let subTitlePath = "https://link.to.my.webvtt.file.vtt"

        let nsURL = URL(string: path)

        let avPlayer = AVPlayer(url: nsURL!)
        self.player = avPlayer

        self.player!.seek(to: kCMTimeZero)
        self.player!.play()
    }
}
The AVMediaSelectionGroup doesn't seem to have any method for adding a subtitle. The closest things I was able to find (that mention subtitles) are the following properties:
//Where self is the instance of AVPlayerViewController
self.allowedSubtitleOptionLanguages
self.requiresFullSubtitles
While it is mentioned almost nowhere in the Apple documentation, I read in an article that subtitles need to be embedded in the HLS stream.
Subtitles are not intended to be added manually, though I did find a StackOverflow post showing a hack. Unsure if it works or not.
Since I use Vimeo Pro, the HLS stream it provides has the WebVTT subtitles (which I uploaded to Vimeo) embedded, which solved my problem.
This works for me:
let localVideoAsset = AVURLAsset(url: URL(string: url) ?? URL(string: "")!)
let videoPlusSubtitles = AVMutableComposition()

// Add the video track
let videoTrack = videoPlusSubtitles.addMutableTrack(withMediaType: .video, preferredTrackID: kCMPersistentTrackID_Invalid)
do {
    guard localVideoAsset.tracks.count > 0 else {
        // error msg
        return
    }
    try? videoTrack?.insertTimeRange(CMTimeRangeMake(start: CMTime.zero, duration: localVideoAsset.duration),
                                     of: localVideoAsset.tracks(withMediaType: .video)[0],
                                     at: CMTime.zero)
}

// Add the subtitle track
let subtitleURL = URL(fileURLWithPath: model.data?[self.selected].subtitlePath ?? "")
let subtitleAsset = AVURLAsset(url: subtitleURL)
let subtitleTrack = videoPlusSubtitles.addMutableTrack(withMediaType: .text, preferredTrackID: kCMPersistentTrackID_Invalid)
do {
    guard subtitleAsset.tracks.count > 0 else {
        // error msg
        return
    }
    try? subtitleTrack?.insertTimeRange(CMTimeRangeMake(start: CMTime.zero, duration: localVideoAsset.duration),
                                        of: subtitleAsset.tracks(withMediaType: .text)[0],
                                        at: CMTime.zero)
}

// Present the player with the combined composition
let playerViewController = AVPlayerViewController()
let player = AVPlayer(playerItem: AVPlayerItem(asset: videoPlusSubtitles))
playerViewController.player = player
self.present(playerViewController, animated: true) {
    self.videoPlaying = true
    playerViewController.player?.play()
}
Okay, I think I solved this issue. I was looking for existing solutions and couldn't find any, so I implemented one myself.
I made the solution available as a Swift package, and it is quite simple to use. I called it SimpleSubtitles.
It supports WebVTT, without styles, and could be improved to support other formats.
More info:
https://github.com/peantunes/SimpleSubtitles
How can I extract audio from a video file without using FFmpeg?
I want to use AVMutableComposition and AVURLAsset to solve it, e.g. a conversion from a .mov to an .m4a file.
The following Swift 5 / iOS 12.3 code shows how to extract audio from a movie file (.mov) and convert it to an audio file (.m4a) by using AVURLAsset, AVMutableComposition and AVAssetExportSession:
import UIKit
import AVFoundation

class ViewController: UIViewController {

    @IBAction func extractAudioAndExport(_ sender: UIButton) {
        // Create a composition
        let composition = AVMutableComposition()
        do {
            let sourceUrl = Bundle.main.url(forResource: "Movie", withExtension: "mov")!
            let asset = AVURLAsset(url: sourceUrl)
            guard let audioAssetTrack = asset.tracks(withMediaType: AVMediaType.audio).first else { return }
            guard let audioCompositionTrack = composition.addMutableTrack(withMediaType: AVMediaType.audio, preferredTrackID: kCMPersistentTrackID_Invalid) else { return }
            try audioCompositionTrack.insertTimeRange(audioAssetTrack.timeRange, of: audioAssetTrack, at: CMTime.zero)
        } catch {
            print(error)
        }

        // Get url for output
        let outputUrl = URL(fileURLWithPath: NSTemporaryDirectory() + "out.m4a")
        if FileManager.default.fileExists(atPath: outputUrl.path) {
            try? FileManager.default.removeItem(atPath: outputUrl.path)
        }

        // Create an export session
        let exportSession = AVAssetExportSession(asset: composition, presetName: AVAssetExportPresetPassthrough)!
        exportSession.outputFileType = AVFileType.m4a
        exportSession.outputURL = outputUrl

        // Export file
        exportSession.exportAsynchronously {
            guard case exportSession.status = AVAssetExportSession.Status.completed else { return }
            DispatchQueue.main.async {
                // Present a UIActivityViewController to share audio file
                guard let outputURL = exportSession.outputURL else { return }
                let activityViewController = UIActivityViewController(activityItems: [outputURL], applicationActivities: [])
                self.present(activityViewController, animated: true, completion: nil)
            }
        }
    }
}
In all common multimedia formats, audio is encoded separately from video, and their frames are interleaved in the file. So removing the video from a multimedia file does not require any messing with encoders and decoders: you can write a file-format parser that drops the video track, without using the multimedia APIs on the phone.
To do this without using a 3rd-party library, you need to write the parser from scratch, which can be simple or difficult depending on the file format you wish to use. For example, FLV is very simple, so stripping a track out of it is easy (just walk the stream, detect the frame beginnings, and drop the 0x09 = video frames). MP4 is a bit more complex: its header (the MOOV atom) has a hierarchical structure containing a header for each track (the TRAK atoms). You need to drop the video TRAK and then copy the interleaved bitstream atom (MDAT), skipping all the video data clusters as you copy.
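To make the MP4 layout concrete, here is a minimal sketch (my illustration, not part of the answer) that only walks the top-level boxes of an .mp4/.mov file and prints their types and sizes; a real track-stripping parser would additionally descend into moov, drop the video trak, and rewrite mdat:

import Foundation

// Each top-level MP4 box starts with a 4-byte big-endian size (including the 8-byte header)
// followed by a 4-byte ASCII type such as 'ftyp', 'moov' or 'mdat'.
func listTopLevelBoxes(of fileURL: URL) throws {
    let data = try Data(contentsOf: fileURL)
    var offset = 0
    while offset + 8 <= data.count {
        let size = data[offset..<offset + 4].reduce(0) { ($0 << 8) | Int($1) }
        let type = String(bytes: data[offset + 4..<offset + 8], encoding: .ascii) ?? "????"
        print("box '\(type)' at offset \(offset), \(size) bytes")
        // size == 0 means "extends to end of file", size == 1 means a 64-bit size follows; neither is handled here.
        guard size >= 8 else { break }
        offset += size
    }
}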
There are 3rd-party libraries you can use aside from FFmpeg. One that comes to mind is GPAC's MP4Box (LGPL license). If the LGPL is a problem, there are plenty of commercial SDKs you can use.
I am working on an iPhone app for school and need some help. The app should record video, make it slow motion (about 2x), then save it to the photo library. So far I have everything except how to make the video slow motion. I know it can be done as there is already an app in the App Store that does it.
How can I take a video I've saved to a temp url and adjust the speed before saving it to the photo library?
If you need to export your video, then you need to use the AVMutableComposition class.
Then add your video as an AVAsset to the AVMutableComposition and scale it with:
- (void)scaleTimeRange:(CMTimeRange)timeRange toDuration:(CMTime)duration
Finally, you export it using the AVAssetExportSession class.
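In Swift, the scaling step looks roughly like this (a minimal sketch, assuming a composition track that already contains the inserted video; stretching the range to twice its duration halves the playback speed):

import AVFoundation

// Sketch: stretch a composition track to 2x its duration, i.e. play it at half speed.
func makeSlowMotion(_ track: AVMutableCompositionTrack) {
    let originalDuration = track.timeRange.duration
    track.scaleTimeRange(
        CMTimeRange(start: .zero, duration: originalDuration),
        toDuration: CMTimeMultiply(originalDuration, multiplier: 2))
}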
I wrote code that turns your video into slow motion and saves it to the Photos library. The main thing: this code works in Swift 5. Creating a slow-motion video in Swift is not easy; many of the examples I came across either no longer work or use deprecated APIs, so I finally figured out a way that does.
This code can be used for 120 fps (or higher) footage too. Just pass in the URL of your video to slow it down.
Here is the code snippet I created for achieving slow motion.
func slowMotion(pathUrl: URL) {
    // Requires `import AVFoundation` and `import Photos`.
    let videoAsset = AVURLAsset(url: pathUrl, options: nil)
    let vdoTrack = videoAsset.tracks(withMediaType: .video)[0]

    let mixComposition = AVMutableComposition()
    let compositionVideoTrack = mixComposition.addMutableTrack(withMediaType: .video, preferredTrackID: kCMPersistentTrackID_Invalid)

    do {
        try compositionVideoTrack?.insertTimeRange(
            CMTimeRangeMake(start: .zero, duration: videoAsset.duration),
            of: vdoTrack,
            at: .zero)
    } catch {
        // handle error
        return
    }

    //MARK: This constant (videoScaleFactor) is what achieves the slow motion: it scales up the
    // time range of the video. Increase videoScaleFactor to play the video back even more slowly.
    let videoScaleFactor = 2.0
    let videoDuration = videoAsset.duration

    compositionVideoTrack?.scaleTimeRange(
        CMTimeRangeMake(start: .zero, duration: videoDuration),
        toDuration: CMTimeMake(value: videoDuration.value * Int64(videoScaleFactor), timescale: videoDuration.timescale))
    compositionVideoTrack?.preferredTransform = vdoTrack.preferredTransform

    // Build a unique output path in the Documents directory
    let dirPaths = FileManager.default.urls(for: .documentDirectory, in: .userDomainMask).map(\.path)
    let docsDir = dirPaths[0]
    let outputFilePath = URL(fileURLWithPath: docsDir).appendingPathComponent("slowMotion\(UUID().uuidString).mp4").path
    if FileManager.default.fileExists(atPath: outputFilePath) {
        do {
            try FileManager.default.removeItem(atPath: outputFilePath)
        } catch {
        }
    }
    let filePath = URL(fileURLWithPath: outputFilePath)

    // Export the scaled composition
    let assetExport = AVAssetExportSession(
        asset: mixComposition,
        presetName: AVAssetExportPresetHighestQuality)
    assetExport?.outputURL = filePath
    assetExport?.outputFileType = .mp4
    assetExport?.exportAsynchronously(completionHandler: {
        switch assetExport?.status {
        case .failed:
            print("asset output media url = \(String(describing: assetExport?.outputURL))")
            print("Export session failed with error: \(String(describing: assetExport?.error))")
            DispatchQueue.main.async(execute: {
                // completion(nil);
            })
        case .completed:
            print("Successful")
            let outputURL = assetExport!.outputURL
            print("url path = \(String(describing: outputURL))")

            // Save the exported file to the Photos library
            PHPhotoLibrary.shared().performChanges({
                PHAssetChangeRequest.creationRequestForAssetFromVideo(atFileURL: outputURL!)
            }) { saved, error in
                if saved {
                    print("video successfully saved to the Photos gallery")
                }
                if error != nil {
                    print("error in saving video \(String(describing: error?.localizedDescription))")
                }
            }
            DispatchQueue.main.async(execute: {
                // completion(_filePath);
            })
        default:
            break
        }
    })
}
slowmoVideo is an OSS project which appears to do this very nicely, though I don't know that it would work on an iPhone.
It does not simply make your videos play at 0.01× speed. You can smoothly slow down and speed up your footage, optionally with motion blur. How does slow motion work? slowmoVideo tries to find out where pixels move in the video (this information is called Optical Flow), and then uses this information to calculate the additional frames.