I'm trying to play a 24-bit audio file with my AutoHotkey app. It just uses SoundPlay. Windows 7 has no problem with it; Windows XP users, however, cannot play the 24-bit files.
The documentation says:
All Windows OSes should be able to play .wav files. However, other files (.mp3, .avi, etc.) might not be playable if the right codecs or features aren't installed on the OS.
The fixes mentioned in the article How to play 24bit WAV files in Windows Media Player solve the problem for Windows Media Player, but not for AutoHotkey:
Step-by-step guide
1. Download the Legacy HD Audio Filter.
2. Register the filter: regsvr32.exe AudioTypeConvert.ax
3. Play a 24-bit file in Windows Media Player (works) and in AHK (no sound).
4. To uninstall: regsvr32.exe /u AudioTypeConvert.ax
Expected behaviour: the audio file plays back without errors in both Windows Media Player and AutoHotkey apps.
Actual behaviour: the audio file plays back without errors only in Windows Media Player, and does not play back in AutoHotkey apps under Windows XP.
Further Investigation
As mentioned in the AutoHotkey forums, SoundPlay uses mciSendString under the hood, and more information on the nature of the error can be gained by calling it directly.
Using the mciSendString-based DllCall alternative, I get error code 320, which corresponds to MCIERR_WAVE_OUTPUTSINUSE:
All waveform devices that can play files in the current format are in use. Wait until one of these devices is free; then, try again.
How do I play 24bit audio files in Windows XP in my AutoHotkey app?
SoundPlay-based test app (download)
#NoEnv
SetWorkingDir %A_ScriptDir%
FileSelectFile, f
SoundPlay, %f%
MsgBox, You should hear audio - except for 24-bit WAV files under Windows XP.
MCI-based test app (download)
#NoEnv
SetWorkingDir %A_ScriptDir%
FileSelectFile, f
TryPlaySound(f)
MsgBox, You should hear audio - except for 24-bit WAV files under Windows XP.
; Fallback: if SoundPlay does not work, try mciSendString directly
TryPlaySound(mFile)
{
    ; Based on SKAN, www.autohotkey.com/forum/viewtopic.php?p=361791#361791
    DLLFunc := "winmm.dll\mciSendString" . (A_IsUnicode ? "W" : "A")
    ; Returns 0 on success, otherwise an MCI error code (e.g. 320 = MCIERR_WAVE_OUTPUTSINUSE)
    Return DllCall(DLLFunc, "Str", "play """ . mFile . """", "UInt", 0, "UInt", 0, "UInt", 0)
}
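To turn an MCI error code such as 320 into its textual description, winmm.dll also exports mciGetErrorString. A minimal sketch (the 256-character buffer size is an arbitrary choice):
; Translate an MCI error code into its textual description
MciErrorText(errCode)
{
    ; Reserve a 256-character output buffer for the description
    VarSetCapacity(msg, 256 * (A_IsUnicode ? 2 : 1), 0)
    DLLFunc := "winmm.dll\mciGetErrorString" . (A_IsUnicode ? "W" : "A")
    DllCall(DLLFunc, "UInt", errCode, "Str", msg, "UInt", 256)
    Return msg
}
; Example: MsgBox % MciErrorText(320)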
I would convert the 24-bit file to a 16-bit file, if that's at all feasible.
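For example, a command-line sketch with ffmpeg (input24.wav and output16.wav are placeholder names; sox can do the same with its -b 16 option):
ffmpeg -i input24.wav -c:a pcm_s16le output16.wav
pcm_s16le is plain 16-bit PCM, which the stock Windows XP waveform drivers handle.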
Related
Hi, I am currently trying to retrieve 3-second clips of an audio file whilst it is recording in Flutter. I am using the recording module flutter_sound and flutter_ffmpeg.
I record the audio file with the default codec (.aac). The file is saved to the cache directory (getTemporaryDirectory()).
I then copy the file using this flutter_ffmpeg code:
List<String> arguments = ["-ss", start.toString(), "-i", inPath, "-to", end.toString(), "-c", "copy", outPath];
await flutterFFmpeg.executeWithArguments(arguments);
start is the start time (e.g. 0) and end is the end time (e.g. 3).
It then returns this error:
FFmpeg exited with rc: 1 [mov,mp4,m4a,3gp,3g2,mj2 @ 0x748964ea00] moov atom not found
Helpful information:
A moov atom is data about the file (e.g. timescale, duration).
I know the inPath exists because I check that before executing the ffmpeg command.
The outPath is also in .aac format.
This ffmpeg function is being run whilst the recording is still occurring.
An example inPath URI looks like this: /data/user/0/com.my.app/cache/output.aac
I have no problems when running on iOS, only on Android.
I would be grateful for help; I have spent many days trying to fix this problem. If you need any more info, please leave a comment. Thanks.
The default codec is not guaranteed to be AAC/ADTS.
It will depend on the Android version of your device.
You can do several things to understand the problem better:
Run ffprobe on your file to see what has actually been recorded by Flutter Sound (see the example after this list).
Use a specific codec instead of the default: aac/adts is a good choice because it can be streamed (you want to process the audio data during the recording, not after closing the file).
Verify that your file contains something and that the data are not still sitting in internal buffers.
Record to a Dart PCM stream instead of a file. Working with a file and using FFmpeg to seek into it is complicated, and perhaps does not fit your needs.
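For the first suggestion, a quick check from a shell (the path below is the example path from the question):
ffprobe -hide_banner /data/user/0/com.my.app/cache/output.aac
If ffprobe reports an MP4/M4A container rather than raw ADTS AAC, that would explain the error: the moov atom of an MP4 file is only written when the recorder closes it, so a "-c copy" cut cannot succeed while recording is still in progress.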
After I create a new script and save the file to my Desktop, when I begin to type into the editor, the entire program hangs and I have to force quit. This happens even when there is text in the editor. I am running MATLAB R2014b on Windows 10. Any known solutions to this dilemma?
To summarize the comments for future users:
Windows 10 and MATLAB R2014b are not compatible. A Windows 10 user must use release R2015a or later.
I download lectures in mp4 format from Udacity, but they're often broken down into 2-5 minute chunks. I'd like to combine the videos for each lecture into one continuous stream, which I've had success with on Windows using AnyVideo Converter. I'm trying to do the same thing on Ubuntu 15, and most of my web search results suggest MP4Box, whose documentation and all the online examples I can find offer the following syntax:
MP4Box -cat vid1.mp4 -cat vid2.mp4 -cat vid3.mp4 -new combinedfile.mp4
This creates a new file with working audio, but the video doesn't work. When I open it with Ubuntu's native video player, I get the error "No valid frames decoded before end of stream." When I open it with VLC, I get the error "Codec not supported: VLC could not decode the format 'avc3' (No description for this codec)." I've tried using the -keepsys switch as well, but I get the same results.
All the documentation and online discussion make it sound as though what I'm trying to do is, and should be, really simple, but I can't seem to find info relevant to the specific errors I'm getting. What am I missing?
Use the -force-cat option.
For example,
MP4Box -force-cat -add in1.mp4 -cat in2.mp4 -cat in3.mp4 ... -new out.mp4
From the MP4Box documentation:
-force-cat
skips media configuration check when concatenating file.
It looks, by the presence of 'avc3', that these videos are encoded with H.264|AVC. There are several modes for the concatenation of such streams. Either the video streams have compatible encoder configurations (frame size, ...), in which case only one configuration description is used in the file (signaled by 'avc1'); or, if the configurations are not fully compatible, MP4Box uses the 'inband' storage of those configurations (signaled by 'avc3').
The other way would be to use multiple sample description entries (stream configurations), but that is not well supported by players and not yet possible with MP4Box. There is no other way unless you want to reencode your videos.
On Ubuntu, you should be able to play 'avc3' streams with the player that goes with MP4Box: MP4Client.
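If you do choose to reencode, a sketch with ffmpeg (file names match the question's example; the 1280x720 target size is an arbitrary assumption, use the actual size of your clips):
# Re-encode each clip to one common configuration, then concatenate
for f in vid1.mp4 vid2.mp4 vid3.mp4; do
  ffmpeg -i "$f" -c:v libx264 -vf scale=1280:720 -c:a aac "re_$f"
done
MP4Box -cat re_vid1.mp4 -cat re_vid2.mp4 -cat re_vid3.mp4 -new combinedfile.mp4
With identical encoder configurations, MP4Box stores a single 'avc1' sample description, which both the native player and VLC support.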
I am uploading many files to a server where I need to enter the frame resolution. It's very inconvenient to do this manually: open each file, then check its properties.
I would like to get a list of all movie file names (in .mpg and .swf format) with information about their screen resolution from the Windows command line (or PowerShell, or a Linux console). How can I do it?
Look at midentify (from the mplayer tools); the output is not pretty, but it can easily be parsed so you can present it in the desired format (see the sketch below).
Note: midentify is a wrapper around mplayer -identify, IIRC.
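A sketch of such parsing in a shell loop (midentify prints ID_VIDEO_WIDTH= and ID_VIDEO_HEIGHT= lines; the glob pattern is an assumption, adjust it to your files):
#!/bin/sh
# Print "filename: WIDTHxHEIGHT" for each movie in the current directory
for f in *.mpg *.swf; do
  w=$(midentify "$f" | sed -n 's/^ID_VIDEO_WIDTH=//p')
  h=$(midentify "$f" | sed -n 's/^ID_VIDEO_HEIGHT=//p')
  echo "$f: ${w}x${h}"
done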
I am working on improving Festival on Emacs. I need better control of Festival when it reads a sentence. Basically, I need two things:
Show what word is being read.
Change the speed (and maybe pitch) of what is being read.
Ideally, there would be some data structure output by Festival that would link offset/length (usually the start/length of a word) with an output WAV file (or even a location in a wav file). I could then use something like mplayer to build a playlist and somehow tell me when the next word is being played and where that word exists in the buffer.
I'm also hoping there's some simple command to change the speed of what is being read. However, mplayer can do that for me, so it's not a big deal if I can get #1 working.
See the manual here, especially the part about the text2wave script. I'm unclear whether this is a separate executable or just a Scheme script that you will have to call. In either case, it looks like it should give you some inspiration for how to do this.
It appears that you could send a whole buffer to this command, which would generate a .wav file that you could then control via mplayer. Of course, this would mean you wouldn't know which sentence was currently playing, so you could instead output each sentence as a .wav file, then queue them up in mplayer (or call mplayer repeatedly). If text2wave is an executable, I'm not sure it's available on Windows, but you should be able to accomplish the same thing with a Scheme script for Festival.
Edit: text2wave is indeed a script, but you should be able to easily modify it to call festival with the script as an argument (path/to/festival --script text2wave). I don't know if the Windows binaries include this, but it should be available either from the main Festival site or in a *nix distro (it's definitely in Ubuntu).
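As a sketch of the speed-control part (point 2): text2wave accepts an -eval option, and Festival's Duration_Stretch parameter stretches speech durations (the factor 1.2 and the file names below are arbitrary placeholders):
# Synthesize sentence.txt to sentence.wav, roughly 20% slower than normal
text2wave sentence.txt -o sentence.wav -eval "(Parameter.set 'Duration_Stretch 1.2)"
Rendering one sentence per .wav this way also gives you the per-sentence queue for mplayer described above, which covers point 1.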