How do I strip initial offsets from OGG files?

=== BACKGROUND ===
Some time ago I ripped a lot of music from an internet radio station. Unfortunately something seems to have gone wrong: the length of most files is displayed as several hours, but playback starts at the correct position.
Example: if a file is really 3 minutes long but is displayed as 3 hours, playback starts at 2 hours and 57 minutes.
Before I upgraded my system, I had an older version of gstreamer that behaved as described above, so I didn't pay too much attention. Now I have a new version of gstreamer which cannot handle these files correctly: it "plays" the whole initial offset.
=== /BACKGROUND ===
So here is my question: How is it possible to modify an OGG/Vorbis file in order to get rid of useless initial offsets? Although I tried several tag-edit programs, none of them would allow me to edit these values. (Interestingly enough, easytag will display both times, but writes the wrong one...)

I finally found a solution! Although it wasn't quite what I expected...
After trying several other options I ended up with the following code:
#!/bin/sh
# Usage: pass the directory containing the ripped OGG files as $1.
cd "${1}" || exit 1
OUTDIR="../`basename "${1}"`.new"
find . -name '*.ogg' | while IFS= read -r filepath
do
    # Create the matching destination directory
    mkdir -p "${OUTDIR}/`dirname "${filepath}"`"
    # Re-encode the file; rewriting the stream drops the bogus initial offset
    avconv -i "${filepath}" -f ogg -acodec libvorbis -vn "${OUTDIR}/${filepath}"
    # Copy the Vorbis comments (tags) from the original to the new file
    vorbiscomment -el "${filepath}" | vorbiscomment -ew "${OUTDIR}/${filepath}"
done
This code recursively re-encodes all OGG files and then copies all Vorbis comments. It's not a very efficient solution, but it works nevertheless...
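Saved as, say, fix-offsets.sh (the name is just a placeholder), it takes the directory to process as its only argument:
# Re-encodes every .ogg under ~/Music/RadioRips into the sibling
# directory ~/Music/RadioRips.new, preserving the directory layout.
sh fix-offsets.sh ~/Music/RadioRips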
As for what the problem was: I guess it has something to do with this warning in the output of ogginfo:
...
New logical stream (#1, serial: 74a4ca90): type vorbis
WARNING: Vorbis stream 1 does not have headers correctly framed. Terminal header page contains additional packets or has non-zero granulepos
Vorbis headers parsed for stream 1, information follows...
Version: 0
Vendor: Xiph.Org libVorbis I 20101101 (Schaufenugget)
...
The warning disappears after re-encoding the file...
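To spot-check a converted file, ogginfo can simply be run on it again (the path is a placeholder):
# Prints nothing (and exits non-zero) when the warning is gone.
ogginfo "RadioRips.new/some-album/track.ogg" | grep -i warning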
At the rate at which I'm currently encoding, it will probably take several hours until my whole media library has been completely re-encoded... but at least I verified with several samples that it works :)

Related

How to correct/time shift subtitles in multiple SRT files?

Is there a way to batch edit multiple .srt files? We have a project where recent edits to the videos offset the .srt files by 5 seconds. I know how to time-shift a single .srt file, but I'm wondering if there is a way to time-shift thousands of .srt files by 5 seconds.
Most command-line tools I'm aware of can do it file by file, but I haven't seen one work on whole folders.
This is an interesting challenge. You'd almost certainly have to write a short script to do this. Command-line tools like sed and awk are great for text-processing tasks like this, but the challenge I think you'll face is the timecode. It's not as simple as adding 5 to the seconds field of each timecode, because you might tip over the edge of a minute (e.g. 00:00:59,000 + 5 = 00:01:04,000). You'll have to write some custom code to handle this part of the problem as far as I know.
The rest is pretty straightforward: you just need a command like find . -name "*.srt" | xargs the-custom-script-you-have-to-write.sh
Sorry it's not a more satisfying answer. I don't know of any existing utilities that do this.
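That said, here is a minimal sketch of such a script (assuming a POSIX shell and awk, and that every timecode line has the standard 00:00:59,000 --> 00:01:02,000 form; the script name is a placeholder):
#!/bin/sh
# shift-srt.sh: shift every timecode in one SRT file by N seconds.
# Usage: shift-srt.sh 5 input.srt > output.srt
OFFSET="$1" # seconds to add (may be negative)
FILE="$2"
awk -v off="$OFFSET" '
function shift(tc,    h, m, s, ms, total) {
    # tc looks like 00:00:59,000; convert to milliseconds, add the
    # offset, and convert back so minute/hour rollover is handled.
    h = substr(tc, 1, 2); m = substr(tc, 4, 2)
    s = substr(tc, 7, 2); ms = substr(tc, 10, 3)
    total = ((h * 60 + m) * 60 + s) * 1000 + ms + off * 1000
    if (total < 0) total = 0
    ms = total % 1000; total = int(total / 1000)
    s = total % 60; total = int(total / 60)
    m = total % 60; h = int(total / 60)
    return sprintf("%02d:%02d:%02d,%03d", h, m, s, ms)
}
/^[0-9][0-9]:[0-9][0-9]:[0-9][0-9],[0-9][0-9][0-9] --> / {
    printf "%s --> %s\n", shift($1), shift($3)
    next
}
{ print }
' "$FILE"
To process thousands of files, wrap it so it rewrites each file in place and feed it through the find | xargs pipeline above.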

Flutter FFmpeg moov atom not found whilst running ffmpeg command during recording

Hi, I am currently trying to retrieve 3-second clips of an audio file while it is recording in Flutter. I am using the recording module Flutter Sound together with Flutter FFmpeg.
I record the audio file with the default codec (.aac). The file is saved to the cache directory (getTemporaryDirectory()).
I then copy a section of the file using this Flutter FFmpeg code:
List<String> arguments = ["-ss", start.toString(), "-i", inPath, "-to", end.toString(), "-c", "copy", outPath];
await flutterFFmpeg.executeWithArguments(arguments);
where start is the start time (e.g. 0) and end is the end time (e.g. 3).
It then returns this error:
FFmpeg exited with rc: 1
[mov,mp4,m4a,3gp,3g2,mj2 @ 0x748964ea00] moov atom not found
Helpful information:
A moov atom is data about the file (e.g. timescale, duration)
I know the inPath exists because I check it before executing the ffmpeg command
The outPath is also in .aac format
This ffmpeg command is run while the recording is still in progress
An example inPath URI looks like this: /data/user/0/com.my.app/cache/output.aac
I have no problems when running on iOS, only on Android
I would be grateful for help; I have spent many days trying to fix this problem. If you need any more info, please leave a comment. Thanks
The default codec is not guaranteed to be AAC/ADTS.
It will depend on the Android version of your device.
You can do several things to understand the problem better:
Run ffprobe on your file to see what has actually been recorded by Flutter Sound (see the sketch after this list).
Use a specific codec instead of the default: AAC/ADTS is a good choice because it can be streamed (you want to process the audio data during the recording, not after closing the file).
Verify that your file contains something and that the data are not still sitting in internal buffers.
Record to a Dart PCM stream instead of a file. Working with a file and using FFmpeg to seek into it is complicated and perhaps does not fit your needs.
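A minimal sketch of that first diagnostic step, assuming the file has been pulled off the device and ffprobe is on the PATH (the path is the example from the question):
# Show the container format and codec that were actually recorded.
ffprobe -v error -show_format -show_streams /data/user/0/com.my.app/cache/output.aac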

Does exiftool require the complete file for extracting metadata

This question is about extraction of metadata only.
Does exiftool require the complete file in order to work properly?
Scenario:
I want to extract the metadata of a 20 GB video file. Do I need to provide exiftool with the complete file (via stdin), or is it enough to provide it with a certain number of bytes?
Motivation:
I am programmatically (golang) calling exiftool in a streaming context and want the extraction to be as fast as possible. Magic numbers for file types work with the first 33 bytes, and I am wondering if the same is possible with exiftool metadata as well.
The answer depends upon the file and the location of the metadata within that file.
There are a couple of threads on the subject on the ExifTool forums (link 1, link 2), and Phil Harvey, the author, says that in the case of MP4/MOV videos the metadata is at the end of the file about half the time.
Using the -fast option might help. I've done some quick tests using cURL and a large image file (see the second-to-last example under Piping Examples), and in that case cURL didn't download the whole image file, just enough to extract the metadata. It might be different with a video file, though, as I haven't tested that situation.
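One way to test this locally (a sketch, assuming a GNU userland and exiftool on the PATH; the file name is a placeholder) is to feed exiftool only the first part of the file via stdin and see whether the metadata comes back:
# Pipe only the first 10 MB of the video to exiftool.
# -fast stops scanning early; if the relevant atoms sit at the end
# of the file, this will return little or nothing.
head -c 10M bigvideo.mp4 | exiftool -fast -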

Low CPU Usage with dbPoweramp Powershell

I am using a program called dbPoweramp to convert music from within PowerShell. I am using the documentation here, which was all I could find when searching. Whenever I use the program itself to convert, I get 100% CPU usage and it fully utilizes all eight threads. However, whenever I launch it through the command line I only get around 13% CPU usage. It obviously isn't desirable to have to launch the program manually, because I am going for automation here. I have tried messing with the -processors argument but it has made no difference. Does anyone have any idea why that would be?
I have also tried using FFMPEG instead, but the CPU usage for FFMPEG is similarly low. If anyone could post code that would make FFMPEG utilize all eight cores that would work just as well.
Here is the section of code that does the actual conversion. Essentially it just searches for all flac, m4a, or mp3 files and then automatically converts them to variable-bitrate quality-1 MP3s for streaming.
$oldMusic = Get-ChildItem -Include @("*.flac", "*.m4a", "*.mp3") -Path $inProcessPath -Recurse # gets all of the music
cd 'C:\Program Files (x86)\Illustrate\dBpoweramp'
foreach ($oldSong in $oldMusic) {
    $newSong = [io.path]::ChangeExtension($oldSong.Name, '.mp3') # file name only, with new extension
    $oldSongPath = $oldSong.FullName
    $newSongPath = "E:\Temp\$newSong"
    .\CoreConverter.exe -infile= $oldSongPath -outfile= $newSongPath -convert_to= "mp3 (Lame)" -V $quality # converts the file
}
Thanks in advance!
I don't think the encoder runs on more than a single thread; rather, it encodes up to 8 tracks at a time, one on each core. In your example the encoding happens serially, meaning that you're only going to use one core at a time. The same will occur with FFmpeg.
I'm no PowerShell guy, but if you can get it to run up to 8 processes at once, you won't have this problem.
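To illustrate with the FFmpeg route mentioned in the question (a sketch, assuming a Unix-style shell with find and xargs; the same idea maps onto PowerShell background jobs): let xargs keep eight single-threaded encoders running at once:
# Encode every FLAC in the tree to VBR quality-1 MP3, 8 files at a time.
# Note the output names end in .flac.mp3; rename to taste.
find . -name '*.flac' -print0 |
  xargs -0 -P 8 -I{} ffmpeg -n -i {} -codec:a libmp3lame -q:a 1 {}.mp3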

How to use Media Segmenter to split video?

I have read many documents and am still very confused about HTTP Live Streaming, but I am still trying to find a solution. I have converted my video to .ts format with ffmpeg.
Now I know that I have to split my video and create a playlist using the media segmenter.
But I don't know where the media segmenter is or how to use it to split the video.
I am very new to this, so sorry for the silly question.
Any help would be appreciated!
Thanks in advance!
Here: 35703_streamingtools_beta.dmg or go to http://connect.apple.com/ and search for "HTTP Live Streaming", or download from https://developer.apple.com/streaming/. Usage:
mediafilesegmenter -t 10 myvideo-iphone.ts
This will generate one .ts file for each 10 seconds of the video plus a .m3u8 file pointing to all of them.
If you use FFmpeg, it's very easy to split files with it.
Don't use Media Segmenter.
Simply write something like this:
ffmpeg.exe -i YourFile.mp4 -ss 00:10:00 -t 00:05:00 OutFile.mp4
where -ss 00:10:00 is the time offset and -t 00:05:00 is the duration of OutFile.mp4.
This will create OutFile.mp4, which contains the 5-minute stretch of YourFile.mp4
from 00:10:00 to 00:15:00.
You can also create an .ASX playlist, which is able to cast streams and is very simple.
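For completeness: recent ffmpeg builds can also do the HLS segmenting themselves, without Apple's tool (a sketch, assuming an ffmpeg built with the hls muxer; file names are placeholders):
# Split the input into ~10-second .ts segments and write an .m3u8 playlist,
# copying the streams instead of re-encoding them.
ffmpeg -i myvideo-iphone.ts -c copy -f hls -hls_time 10 -hls_list_size 0 playlist.m3u8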