Audio metering levels with AVPlayer - iPhone

Is there a way to have the audio metering levels from the AVPlayer class?
I know that AVAudioPlayer does that, but it doesn't play streamed HTTP .m3u8 URLs like AVPlayer does.

I don't know specifically whether they've kept what you're looking for private, but I do know the various high-level classes are usually just the tops of the pyramid for the lower-level AV items. In any case, I do have good news...
Remember an AVPlayer uses an AVPlayerItem. The AVPlayerItem has assets (and an AVAudioMix). The AVAssets have the properties for modifying audio and video.
Unfortunately, the docs indicate these properties are "suggestions", not absolutes, but try this:
preferredVolume
If you follow the docs from AVPlayer to AVPlayerItem to AVAsset, voila. It might be what you're after.
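To illustrate that chain, here's a minimal Swift sketch (the stream URL is a placeholder; note that preferredVolume is a static hint baked into the asset, not a live metering level):

```swift
import AVFoundation

// Minimal sketch: walking AVPlayer -> AVPlayerItem -> AVAsset to read
// preferredVolume. The URL below is a placeholder.
let url = URL(string: "https://example.com/stream.m3u8")!
let player = AVPlayer(url: url)

if let asset = player.currentItem?.asset {
    asset.loadValuesAsynchronously(forKeys: ["preferredVolume"]) {
        var error: NSError?
        if asset.statusOfValue(forKey: "preferredVolume", error: &error) == .loaded {
            // A static "suggestion" from the asset, not a live level.
            print("preferredVolume:", asset.preferredVolume)
        }
    }
}
```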
Hope it's enough

There is a fork of audioStream (link) that does what you want.
I tried AVPlayer for days, but in the end it turned out it can't do it. Luckily I found the audioStream fork. The code is 4 years old but still works well on iOS 7 (change the demo target from iOS 3 to iOS 7).

I've recently found this GitHub project called SCWaveformView that helped me a lot, and I hope someone else out there could benefit from it as well.
Edit:
Actually, the extension suggested by a dev in the comments is quite nice; you can find it here: the ACBAVPlayerExtension.

Wwise, Resonance Audio and Unity Integration. Configure Wwise Resonance Audio Plugin

I tried to get a response on GitHub, but with no activity on this issue there, I will ask here.
I have been following the documentation, and I am stuck: I have imported the WwiseResonanceAudioRoom mixer effect on the bus in Wwise, and I do not see anything in the properties. I am not sure if I am supposed to. Right after that part, the documentation says: "Note: If room properties are not configured, the room effects bus outputs silence." I was wondering if this was the case, and yes, it outputs silence. I even switched the effect to see if it would just pass audio, and it does, just not with the Room effect, so at least I know my routing is correct.
So now this leads to my actual question: how do you configure the plugin? I know there is some documentation, but there is not one tutorial or step-by-step guide for us non-code-savvy audio folk. I have spent the better half of my week trying to figure this out because, frankly, for the time being this is the only audio spatialization plugin for Wwise that features audio occlusion, obstruction, and propagation.
Any help is appreciated,
Thank you.
I had Room Effects with Resonance Audio working in another project last year, under its former name, GVR. There are no properties on the Room Effect itself. These effect settings and properties reside in the Unity Resonance prefabs.
I presume you've followed the tutorial on Room Effects here:
https://developers.google.com/resonance-audio/develop/wwise/getting-started
Then what you need to do is add the Room Effect assets to your Unity project. The assets are found in the Resonance Audio zip package, next to the authoring and SDK files. Unzip the Unity files into your project, add a Room Effect to your scene, and you should be able to see the properties in the inspector of the room object.
Figured it out thanks to Egil Sandfeld, here: https://github.com/resonance-audio/resonance-audio-wwise-sdk/issues/2#issuecomment-367225550
To elaborate: I had the SDKs implemented, but I went ahead and replaced them anyway, and it worked!

How to get frame data in AppRTC iOS app for video modifications?

I am currently trying to make some modifications to the incoming WebRTC video stream in the AppRTC app for iOS in Swift (which in turn is based on this Objective-C version). To do so, I need access to the data stored in the frame objects of class RTCI420Frame (a basic class of the Objective-C implementation of libWebRTC). In particular, I need the frame data as an array of bytes ([UInt8]) and the size of the frames. This data is to be used for further processing and the addition of some filters.
The problem is that all the operations on RTCVideoTrack / RTCEAGLVideoView are done under the hood of the pre-compiled libWebRTC.a. It is compiled from the official WebRTC repository linked above, and it's fairly complicated to get a custom build of it, so I'd prefer to go with the build available in the example iOS project; in my understanding, it's supposed to have all the available functionality in it.
I was looking into RTCVideoChatViewController class and in particular, remoteView / remoteVideoTrack, but had no success in accessing the frames themselves, spent a lot of time researching the libWebRTC sources in official repo but still can't wrap my head around the problem of accessing the frames data for own manipulations with it. Would be glad for any help!
Just after posting the question, I had luck finding the sneaky data!
You have to add the following property to RTCEAGLVideoView.h file:
@property(atomic, strong) RTCI420Frame* i420Frame;
The original implementation file already has the i420Frame property, but it isn't exposed in the iOS project's header file for the class. Adding the property allows you to get the view's current frame.
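As a sketch of how the exposed property might then be consumed from Swift (the property names width, height, and yPlane are assumptions based on the old Objective-C libWebRTC API and may differ between builds):

```swift
// Hypothetical sketch: copy the luma (Y) plane of the view's current frame.
// Assumes RTCI420Frame exposes width, height, and yPlane, as in the old
// Objective-C libWebRTC API; the chroma (U/V) planes would be read similarly.
func lumaBytes(from view: RTCEAGLVideoView) -> (data: [UInt8], width: Int, height: Int)? {
    guard let frame = view.i420Frame else { return nil }
    let width = Int(frame.width)
    let height = Int(frame.height)
    let luma = UnsafeBufferPointer(start: frame.yPlane, count: width * height)
    return (Array(luma), width, height)
}
```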
I'm still in search of a more elegant way of getting the stream data directly, without the need to look into remoteView contents, will update the answer once I find it.

Writing id3 metadata to mp3 file in application bundle

I've asked this question twice before, with no real progress. After trawling countless forums, I've come to ask again. The only progress so far is accessing the metadata of an AVURLAsset (using [AVURLAsset commonMetadata]). There are C libraries out there for writing ID3 tags (such as id3lib), but none are configured for Cocoa Touch or even Xcode. Any sort of help would be appreciated, even a suggestion of the right direction to go. If you solve the problem, you'll get an honorable mention in my finished app...
I just ported id3lib to iOS, and you can use it to modify ID3 tags. An example project is also included. Check it here: https://github.com/rjyo/libid3-ios

AudioUnit render callback called on iPhone simulator, but not on phone

I have a fairly straightforward setup in which a RemoteIO unit is taking input, doing a bit of processing, sending it out the output, and writing the output to a file. Right now, I'm just generating test signals inside of my RemoteIO render callback, so I don't really care about anything coming from the 'actual' input. My render callback is called and works a treat in the simulator, but is never called at all when run on the phone. Any ideas where I should start looking? Am happy to post code--just not sure what everyone would like to see...
I knew that things had worked in the past, so I started digging through the repo. Foolishly, I had changed the kAudioSessionProperty_AudioCategory of my AudioSession from kAudioSessionCategory_PlayAndRecord to kAudioSessionCategory_RecordAudio and forgotten to change it back. Hope this helps someone else avoid the same stupid mistake...
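For reference, here is a rough Swift sketch of the same fix using the newer AVAudioSession API (the answer above uses the older C AudioSession API, but the idea is the same: a RemoteIO unit only renders output under a category that allows playback):

```swift
import AVFoundation

// RemoteIO renders output only if the session category allows playback,
// so use .playAndRecord rather than the record-only category.
do {
    let session = AVAudioSession.sharedInstance()
    try session.setCategory(.playAndRecord)
    try session.setActive(true)
} catch {
    print("Audio session setup failed:", error)
}
```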
Just an hour ago I was solving the same problem. In my case, I had defined an AudioUnit-type variable in the header file; after I used AudioComponentInstance instead of AudioUnit, it started to work on my devices as well.
So it could possibly be this.

iPhone - Writing a media player with lyrics

I want to write a simple media player which displays lyrics that are retrieved from the web.
I know LyricWiki was once such a source, but it no longer exists.
Does a new API or source for lyrics exist that I can use?
When I do get the lyrics, how do I sync them with the song?
I know the MPMediaItem class has the MPMediaItemPropertyLyrics property, but this is clearly not enough for me, because it only covers songs from iTunes, and not all of them have lyrics available.
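For what it's worth, reading that property looks roughly like this (a sketch; as noted, the lyrics will be empty for most items):

```swift
import MediaPlayer

// Sketch: scan the library for items that actually carry embedded lyrics.
// MPMediaItemPropertyLyrics is only populated for some iTunes purchases.
let songs = MPMediaQuery.songs().items ?? []
for item in songs {
    if let lyrics = item.value(forProperty: MPMediaItemPropertyLyrics) as? String,
       !lyrics.isEmpty {
        print(item.title ?? "Untitled", "has embedded lyrics")
    }
}
```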
I would appreciate any help or links that I can use to sort out this issue.
A little Googling and I found a few options that might solve your problem:
First, LyricWiki does still exist; it has moved to lyrics.wikia.com. It seems, however, that their API crops the lyrics due to licensing.
LyricsFly
ChartLyrics. Looks the most promising to me, though I haven't actually tried any of the services myself (yet).
I'd like to hear which of them works best for you, if any do.
https://developer.musixmatch.com
It's an official and authorized lyrics API.
Synced-lyrics APIs are expensive. The following do the job, but I don't know how expensive they are:
http://www.lyricfind.com
http://developer.echonest.com/sandbox/lyricfind.html