Is there a way to play audio from ytdl-core while it's downloading? - web-audio-api

ytdl-core has a function that creates an internal.Readable stream that can be piped to a file, for example:
ytdl(url, { filter: "audioonly"}).pipe(this.fs.createWriteStream(`path/to/file.mp3`));
And then I return the file path after it's completely downloaded. However, I want to play the audio while the file is still being downloaded. Is there a way to achieve that?
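Since ytdl() returns an ordinary Node.js Readable stream, nothing forces you to wait for the file: the same stream can be piped to any consumer as the data arrives. A minimal sketch, assuming ffplay (from FFmpeg) is on the PATH; the URL is a placeholder:

const ytdl = require("ytdl-core");
const { spawn } = require("child_process");

const url = "<video url here>";

// Spawn ffplay reading from stdin ("pipe:0"); -nodisp hides the video
// window and -autoexit quits when the stream ends.
const player = spawn("ffplay", ["-nodisp", "-autoexit", "-i", "pipe:0"], {
    stdio: ["pipe", "ignore", "ignore"],
});

// Pipe the audio into the player while it is still downloading.
ytdl(url, { filter: "audioonly" }).pipe(player.stdin);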

Related

What is the best way to play audio through the VScode API?

Basically the title. I have tried play-sound, which of course does not work. I have tried creating child processes, as such:
require('child_process').exec('python music.py', (err, stdout) => {
    console.log('result', err, stdout)
})
but since the CWD of the debug VS Code window is the VS Code installation directory (created when VS Code was installed), it can never find my Python script. I also can't use any getCWD() style methods, as again, the CWD is not where the actual source files for the extension are.
Any input helps, and I know there are extensions to play audio, this is a special use case.
You can get it with ExtensionContext.extensionPath.
ExtensionContext can usually be received as context at activate().
For example, for ./scripts/x.py, you can get the path you want by
vscode.Uri.joinPath(context.extensionUri, "scripts", "x.py").fsPath
ExtensionContext docs: https://code.visualstudio.com/api/references/vscode-api#ExtensionContext
Uri docs: https://code.visualstudio.com/api/references/vscode-api#Uri
Note:
If saving the mp3 to a file is not a problem for you, using an audio playback extension is the easiest option.
https://marketplace.visualstudio.com/items?itemName=sukumo28.wav-preview

Canon EDSDK sample code - help to understand save file to location

I am new to the EDSDK, but so far have been very happy with the results. I have my program working just fine saving to the camera; however, when I set saveTo to Host, I'm unclear on where it thinks it's supposed to save.
Everything appears to work: the callback function gets called and the progress bar animates, but I have no idea where it's pointing the file.
The closest I get is finding where the "download" command is issued; the argument to this call should be getting cast as an (EdsDirectoryItemRef).
This all seems to come from the EDSCALLBACK handleObjectEvent, but I can't figure out how it gets constructed.
Ideally I'd like to be able to specify where on disk I want the images to go. Can someone provide some aid?
[edit]
Okay, I see the images are going into the build directory, but perhaps someone could help me understand why, or better yet, how to specify a path myself.
When you set saveTo to Host (kEdsSaveTo_Host), the image is stored in temporary memory on the camera. The camera then raises a DirItemRequestTransfer event, which invokes the callback function handleObjectEvent. A reference to the image, still in the camera's temporary memory, is passed to that callback.
Within the handleObjectEvent callback you would typically create a file stream and use EdsDownload to download the file to a location on the PC (the location is specified by the file stream).
When you create a file stream you need to specify a file name (the first argument). This file name determines where the image is stored. If you specify just a file name without a path, the image ends up in the build directory. If you would like to save the file in a particular location, specify the file name along with its full path, as in the sketch below.
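A minimal sketch of such a callback in C, following the pattern in the EDSDK samples; the destination path is illustrative and error handling is abbreviated:

#include "EDSDK.h"

EdsError EDSCALLBACK handleObjectEvent(EdsObjectEvent event,
                                       EdsBaseRef object,
                                       EdsVoid *context)
{
    EdsError err = EDS_ERR_OK;
    if (event == kEdsObjectEvent_DirItemRequestTransfer)
    {
        EdsDirectoryItemInfo dirItemInfo;
        EdsStreamRef stream = NULL;

        // Ask the camera for the size and name of the pending image.
        err = EdsGetDirectoryItemInfo((EdsDirectoryItemRef)object, &dirItemInfo);

        // The first argument of EdsCreateFileStream decides where the image
        // lands on disk; "C:\\photos\\shot.jpg" is illustrative.
        if (err == EDS_ERR_OK)
            err = EdsCreateFileStream("C:\\photos\\shot.jpg",
                                      kEdsFileCreateDisposition_CreateAlways,
                                      kEdsAccess_ReadWrite, &stream);

        // Pull the image out of camera memory into the file stream.
        if (err == EDS_ERR_OK)
            err = EdsDownload((EdsDirectoryItemRef)object, dirItemInfo.size, stream);
        if (err == EDS_ERR_OK)
            err = EdsDownloadComplete((EdsDirectoryItemRef)object);

        if (stream != NULL)
            EdsRelease(stream);
    }
    // Release the object reference passed in by the SDK.
    if (object != NULL)
        EdsRelease(object);
    return err;
}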
Hope this helps.

Realtime AudioQueue Record-Playback

Hey fellows,
I am trying to build an application for realtime voice changing.
As a first step, I managed to record audio data to a specified file and play it back after recording.
Now I am trying to change the code so it plays back the audio buffers in a loop right after recording them.
My question is: how can I read the audio data directly from the recording AudioQueue instead of (as shown in the documentation) from a file?
I am thankful for any ideas and can show code if needed.
Thanks in advance,
Lukas (from Germany)
Have a look at the SpeakHere example. This line sources the audio data:
OSStatus result = AudioFileReadPackets(THIS->GetAudioFileID(), false, &numBytes,
    inCompleteAQBuffer->mPacketDescriptions, THIS->GetCurrentPacket(), &nPackets,
    inCompleteAQBuffer->mAudioData);
So, rather than calling AudioFileReadPackets, you can just memcpy the recorded data buffer across, or alternatively supply the playback AudioQueue with a pointer into the audio data buffer. As playback continues, advance an mCurrentPacket pointer through the buffer.
To record, you'll do something very similar. Rather than writing out to a file, you'll write to a buffer in memory, which you first need to allocate with malloc. Then, as your input AudioQueue captures data, you copy that data into the buffer. As more data is copied, you advance the recording head (mCurrentPacket) to a new position, as in the sketch below.
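A minimal sketch of the two callbacks, assuming constant-bit-rate PCM (so no packet descriptions are needed); all names here are invented for illustration, and real code must guard against overruns and synchronize access between the two queues:

#include <AudioToolbox/AudioToolbox.h>
#include <string.h>

// Shared state between the recording and playback queues.
typedef struct {
    char   *buffer;     // malloc'd audio storage
    size_t  capacity;   // total bytes allocated
    size_t  writePos;   // recording head
    size_t  readPos;    // playback head
} BufferState;

// Input callback: copy the captured bytes into memory instead of a file.
static void recordCallback(void *inUserData, AudioQueueRef inAQ,
                           AudioQueueBufferRef inBuffer,
                           const AudioTimeStamp *inStartTime,
                           UInt32 inNumPackets,
                           const AudioStreamPacketDescription *inPacketDesc)
{
    BufferState *state = (BufferState *)inUserData;
    UInt32 bytes = inBuffer->mAudioDataByteSize;
    if (state->writePos + bytes <= state->capacity) {
        memcpy(state->buffer + state->writePos, inBuffer->mAudioData, bytes);
        state->writePos += bytes;                      // advance the recording head
    }
    AudioQueueEnqueueBuffer(inAQ, inBuffer, 0, NULL);  // keep capturing
}

// Output callback: fill the queue buffer from memory instead of calling
// AudioFileReadPackets.
static void playbackCallback(void *inUserData, AudioQueueRef inAQ,
                             AudioQueueBufferRef inBuffer)
{
    BufferState *state = (BufferState *)inUserData;
    size_t available = state->writePos - state->readPos;
    size_t bytes = available < inBuffer->mAudioDataBytesCapacity
                 ? available : inBuffer->mAudioDataBytesCapacity;
    memcpy(inBuffer->mAudioData, state->buffer + state->readPos, bytes);
    inBuffer->mAudioDataByteSize = (UInt32)bytes;
    state->readPos += bytes;                           // advance the playback head
    AudioQueueEnqueueBuffer(inAQ, inBuffer, 0, NULL);
}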

How to add metadata to WAV file?

I'm looking for some sample code to show me how to add metadata to the wav files we create.
Anyone?
One option is to add your own chunk with a unique id. Most WAV players will ignore it.
Another idea would be to use a 'labl' chunk associated with a cue point set at the beginning or end of the file. You'd also need a 'cue ' chunk. See here for a reference.
How to write the data is simple (a sketch follows below):
1. Write "RIFF".
2. Save the file position.
3. Write 4 bytes of 0's.
4. Write all the existing chunks. Keep count of bytes written.
5. Add your chunk. Be sure to get the chunk size right. Keep count of bytes written.
6. Rewind to the saved position. Write the new size (as a 32-bit number).
7. Close the file.
It's slightly more complicated if you are adding things to an existing LIST chunk, but the same principle applies.
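A minimal C# sketch of those steps, assuming a plain RIFF/WAVE input file; the chunk id "note" and the file paths are illustrative:

using System;
using System.IO;
using System.Text;

class WaveChunkWriter
{
    // Append a custom chunk and patch the RIFF size, following the steps above.
    static void AppendChunk(string inPath, string outPath, byte[] chunkData)
    {
        byte[] src = File.ReadAllBytes(inPath);
        using (var w = new BinaryWriter(File.Create(outPath)))
        {
            w.Write(Encoding.ASCII.GetBytes("RIFF"));
            long sizePos = w.BaseStream.Position;         // save the file position
            w.Write(0);                                   // 4 bytes of 0's for now
            w.Write(src, 8, src.Length - 8);              // all the existing chunks
            w.Write(Encoding.ASCII.GetBytes("note"));     // custom chunk id
            w.Write(chunkData.Length);                    // chunk size
            w.Write(chunkData);
            if (chunkData.Length % 2 == 1)
                w.Write((byte)0);                         // RIFF chunks are word-aligned
            long end = w.BaseStream.Position;
            w.BaseStream.Seek(sizePos, SeekOrigin.Begin); // rewind to the saved position
            w.Write((int)(end - 8));                      // write the new size
        }
    }
}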
Maybe the NIST file format will give you what you want:
NIST
Here is a lib that could help, but I'm afraid it looks old: NIST Lib
I can't find more useful information right now on exactly how to use it, and I'm afraid the information papers from my company must stay there. :/
Try the code below:
private void WaveTag()
{
    string fileName = "in.wav";
    WaveReadWriter wrw = new WaveReadWriter(File.Open(fileName, FileMode.Open, FileAccess.ReadWrite));
    // removes INFO tags from the audio stream
    wrw.WriteInfoTag(null);
    // writes INFO tags into the audio stream
    Dictionary<WaveInfo, string> tag = new Dictionary<WaveInfo, string>();
    tag[WaveInfo.Comments] = "Comments...";
    wrw.WriteInfoTag(tag);
    wrw.Close();
    // reads INFO tags from the audio stream
    WaveReader wr = new WaveReader(File.OpenRead(fileName));
    Dictionary<WaveInfo, string> dir = wr.ReadInfoTag();
    wr.Close();
    if (dir.Count > 0)
    {
        foreach (string val in dir.Values)
        {
            Console.WriteLine(val);
        }
    }
}
from http://alvas.net/alvas.audio,articles.aspx#id3-tags-for-wave-files
If you examine the wave file spec you'll see that there does not seem to be room for annotations of any kind. An option would be to wrap the wave file with your own format that includes custom information, but you would in effect be creating a whole new format that would not be readable by users who do not have your app. You might be OK with that, though.

Playing a Sound With Monotouch

No matter what I try (build action set to Content, NSUrl, plain filename), I get a 'null exception' (file not found) when I try to play a .caf sound file in MonoTouch.
//var path = NSBundle.MainBundle.PathForResource("MatchGame", "caf");
//var gameSong = SystemSound.FromFile( new NSUrl(path, false));
var gameSong = SystemSound.FromFile("MatchGame.caf");
gameSong.PlaySystemSound();
I also tried combinations using the folder name ("images/MatchGame.caf") and moving MatchGame.caf into the root folder.
What am I missing? Thanks a lot.
Here is a link to a video of adding the sound in MonoTouch: http://www.screencast.com/t/MmE0ZmFh What is wrong?
Bryan,
From looking at your screencast, you are trying to play an mp3 file and not a caf file. Mp3 files are encoded differently and will not play with the SystemSound class (I can't remember whether you can do this in Obj-C or not).
You'll want to use the AVFoundation Namespace and AVAudioPlayer class.
using MonoTouch.AVFoundation;
var mediaFile = NSUrl.FromFilename("myMp3.mp3");
var audioPlayer = AVAudioPlayer.FromUrl(mediaFile);
audioPlayer.FinishedPlaying += delegate { audioPlayer.Dispose(); };
audioPlayer.Play();
You might need to tweak the code above - I don't have MonoDevelop to hand but that should help you a little further.
Cheers,
ChrisNTR
You need the first line:
var path = NSBundle.MainBundle.PathForResource("MatchGame", "caf");
Then make sure your audio file is included in the application by flagging the CAF file as "Content" in the Properties pane; otherwise the file is not copied into the resulting application package (your .app). Combined, that looks like the sketch below.
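A minimal sketch combining the two steps (assuming MatchGame.caf is flagged as Content):

// Resolve the bundled resource path; this returns null if the file
// was not copied into the app bundle.
var path = NSBundle.MainBundle.PathForResource("MatchGame", "caf");
var gameSong = SystemSound.FromFile(new NSUrl(path, false));
gameSong.PlaySystemSound();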
You can follow the steps documented here (they are for images, but the same applies to audio files):
http://wiki.monotouch.net/HowTo/Images/Add_an_Image_to_your_Project
Additionally, this is a good resource for where you should store files:
http://wiki.monotouch.net/HowTo/Files/HowTo%3a_Store_Files