AudioKit: How to change tempo on the fly? - swift

I am new to AudioKit and to programming music apps. I'm building a metronome app using AudioKit's AKMetronome. I want a feature where a user can specify a sequence of beat patterns with different tempos, but I find that scheduling the tempo changes with Apple's DispatchQueue is inaccurate.
I'm thinking of rewriting the metronome using AKSequencer. Is there a way to use AudioKit's sequencer to change the tempo on the fly, or to generate a sequence with multiple different tempos? (Sequencer example: https://github.com/AudioKit/Cookbook/blob/main/Cookbook/Cookbook/Recipes/Shaker.swift)
metronome.tempo = 120
let first_interval = 60.0 / 120.0
let switchTime1 = DispatchTime.now() + (first_interval * 4.0)
metronome.play()
DispatchQueue.main.asyncAfter(deadline: switchTime1, execute: {
    self.metronome.tempo = 200
})
let second_inter = 60.0 / 200.0
let switchTime2 = switchTime1 + (second_inter * 8.0)
DispatchQueue.main.asyncAfter(deadline: switchTime2, execute: {
    self.metronome.tempo = 120
})
Update:
I figured out that you can assign a callback function to AKMetronome using AKMetronome.callback. (https://audiokit.io/docs/Classes/AKMetronome.html#/s:8AudioKit11AKMetronomeC8callbackyycvp) You can then update the tempo at the start of a new sequence.
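For example, a minimal sketch of that approach (the beat counter and the list of tempos here are my own illustration, not part of AKMetronome; the callback is assumed to fire once per beat, as described in the linked docs):
// Hypothetical pattern: tempos[i] is the BPM for bar i, with 4 beats per bar.
let tempos = [120.0, 200.0, 120.0]
var beat = 0

metronome.callback = {
    beat += 1
    if beat % 4 == 0 {
        // A new bar starts on the next tick; switch to that bar's tempo.
        let bar = (beat / 4) % tempos.count
        self.metronome.tempo = tempos[bar]
    }
}
metronome.play()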

A possible solution would be to create a tempo track containing tempo events that, when processed, change the sequencer's tempo.
This is an outline of what should be done:
Create a track to contain the tempo events, using AKSequencer's addTrack method. Connect this track to an AKCallbackInstrument. Please see this answer on how to connect an AKCallbackInstrument to an AKSequencer track.
Add the tempo events to the track at the time positions where the tempo changes occur. As far as I know, there is no standard MIDI channel event for indicating tempo changes (such as a control change for tempo). But since you will be interpreting the events yourself with a callback function, it doesn't really matter which type of event you use. I explain below how to represent the tempo.
Process the events in the callback function and set AKSequencer's tempo to the indicated tempo.
It's a little difficult to represent a tempo value inside a MIDI event because MIDI data bytes only go from 0 to 127. What I would do is use a Note On event, storing tempo / 128 (integer division) in the note's pitch and tempo % 128 in the note's velocity.
This is what your callback function would look like:
func tempoCallback(status: UInt8, note: MIDINoteNumber, vel: MIDIVelocity) -> () {
    guard let status = AKMIDIStatus(byte: status),
          let type = status.type,
          type == .noteOn else { return }
    // Decode the tempo that was packed into the note's pitch and velocity.
    let tempo = Int(note) * 128 + Int(vel)
    sequencer.tempo = Double(tempo)
}
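And this is roughly how the tempo events could be added to the callback-connected track. This is a hedged sketch: the tempoTrack name is my own, and the exact signature of the track's add method depends on your AudioKit version, so check it against the docs.
// Encode a BPM value into a Note On event, matching the decoding in tempoCallback.
func addTempoEvent(tempo: Int, position: Double) {
    let pitch = MIDINoteNumber(tempo / 128)
    let velocity = MIDIVelocity(tempo % 128)
    // add(noteNumber:velocity:position:duration:) is assumed here; adjust to your AudioKit version.
    tempoTrack.add(noteNumber: pitch,
                   velocity: velocity,
                   position: position,
                   duration: 0.1)
}

// e.g. start at 120 BPM, then switch to 200 BPM at beat 4:
addTempoEvent(tempo: 120, position: 0)
addTempoEvent(tempo: 200, position: 4)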

Related

Bevy - Have multiple Sprites Sheets per Entity

I have an Entity containing a Sprite Sheet and a class instance:
let texture_handle = asset_server.load("turret_idle.png");
let texture_atlas: TextureAtlas = TextureAtlas::from_grid(texture_handle, ...);
let texture_atlas_handle = texture_atlases.add(texture_atlas);
let mut turret = Turret::create(...);
commands.spawn_bundle(SpriteSheetBundle {
        texture_atlas: texture_atlas_handle,
        transform: Transform::from_xyz(pos),
        ..default()
    })
    .insert(turret)
    .insert(AnimationTimer(Timer::from_seconds(0.04, true)));
The AnimationTimer will then be used in a query together with the Texture Atlas Handle to render the next sprite
fn animate_turret(
    time: Res<Time>,
    texture_atlases: Res<Assets<TextureAtlas>>,
    mut query: Query<(
        &mut AnimationTimer,
        &mut TextureAtlasSprite,
        &Handle<TextureAtlas>,
    )>,
) {
    for (mut timer, mut sprite, texture_atlas_handle) in &mut query {
        timer.tick(time.delta());
        if timer.just_finished() {
            let texture_atlas = texture_atlases.get(texture_atlas_handle).unwrap();
            sprite.index = (sprite.index + 1) % texture_atlas.textures.len();
        }
    }
}
This works perfectly fine as long as the turret is idle and thus plays the idle animation. As soon as a target is found and attacked, I want to display another sprite sheet instead.
let texture_handle_attack = asset_server.load("turret_attack.png");
Unfortunately, it seems that I cannot add multiple TextureAtlas handles to a SpriteSheetBundle and decide later which one to render. How do I solve this? I have already thought about merging all animations into one sprite sheet, but that is very messy as their frames differ.
Maybe create a struct with all the different handles you need and add it as a resource? Then you need a component with an enum for the states ("idle", "attacked", etc.) and a system that sets the correct handle in texture_atlas from your resource's handles.

AudioKit tap skips over time intervals

I am building an app that uses microphone input to detect sounds and trigger events. I based my code on AKAmplitudeTap, but when I ran it, I found that I was only obtaining sample data intermittently, with sections missing.
The tap code looks like this (with the guts ripped out and simply keeping track of how many samples would have been processed):
open class MyTap {
    // internal let bufferSize: UInt32 = 1_024 // 8-9 kSamples/sec
    internal let bufferSize: UInt32 = 4096     // 39.6 kSamples/sec
    // internal let bufferSize: UInt32 = 16536 // 43.3 kSamples/sec

    var sampleCount: UInt32 = 0  // printed out elsewhere as sampleCount / elapsedTime

    public init(_ input: AKNode?) {
        input?.avAudioNode.installTap(onBus: 0, bufferSize: bufferSize, format: nil) { buffer, _ in
            self.sampleCount += self.bufferSize
        }
    }
}
I initialize the tap with:
func afterLoad() {
    assert(!loaded)
    AKSettings.audioInputEnabled = true
    do {
        try AKSettings.setSession(category: .playAndRecord, with: .allowBluetoothA2DP)
    } catch {
        print("Could not set session category.")
    }
    mic = AKMicrophone()
    myTap = MyTap(mic) // seriously, can it be that easy?
    loaded = true
}
The original tap code was capturing samples to a buffer, but I saw that big chunks of time were missing with a buffer size of 1024. I suspected that the processing time for the sample buffer might be excessive, so...
I simplified the code to just keep track of how many samples were being passed to the tap. In another part of the code, I print out sampleCount / elapsedTime, and, as noted in the comments next to bufferSize, I get different numbers of samples per second.
The sample rate converges on 43.1 kSamples/sec with a 16K buffer, but only about 20% of the samples are collected with a 1K buffer. I would prefer to use the small buffer size to obtain near real-time response to detected sounds. As I've been writing this, the 4K buffer version has been running and has stabilized at 39,678 samples/sec.
Am I missing something? Can a tap with a small buffer size actually capture 44.1 kHz sample data?
Problem resolved... the tap requires this line of code
buffer.frameLength = self.bufferSize
... and suddenly all the samples appear. I obviously stripped out a bit too much from code I didn't fully understand.
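For reference, a minimal sketch of the corrected tap closure from the MyTap class above, with the missing line restored:
input?.avAudioNode.installTap(onBus: 0, bufferSize: bufferSize, format: nil) { buffer, _ in
    // Setting the frame length explicitly is what made all the samples appear (see above).
    buffer.frameLength = self.bufferSize
    self.sampleCount += buffer.frameLength
}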

How can I synchronize AVCaptureDevice setFocusModeLockedWithLensPosition calls?

I want to synchronize the setFocusModeLockedWithLensPosition, setWhiteBalanceModeLockedWithDeviceWhiteBalanceGains and setExposureModeCustomWithDuration calls.
Is there a logical order in which to call those functions?
What I want to do is start running the session only when I am sure that focus, white balance and exposure are properly set (I want to set the values myself, not use the automatic modes).
I have tried locking the configuration, calling the three functions, unlocking, and then calling startRunning on the session, passing nil for the three completion handlers.
What I see in this case is that my image preview does not look right (it has a kind of blue tint), and I have to wait before the image quality becomes good. I want to display the image only when it is good, so I need to be notified.
So I tried chaining my three calls with completion handlers, but in some cases a completion handler is not called. I suppose this happens when I set the lens position to 0.4 and the current lens position is already 0.4.
So I don't know which is the best approach.
Thanks
You can set your camera options in the completion handlers like this. It waits until the focus has been set before setting the exposure, and the same principle applies to the white balance. You can read more about camera settings here.
var AVCGains: AVCaptureWhiteBalanceGains = AVCaptureWhiteBalanceGains()
AVCGains.redGain = 1.0
AVCGains.greenGain = 1.0
AVCGains.blueGain = 1.0

self.camera?.focusMode = .locked
self.camera?.exposureMode = .locked
self.camera?.whiteBalanceMode = .locked

self.camera?.setFocusModeLockedWithLensPosition(focus_point, completionHandler: { (timestamp: CMTime) -> Void in
    print("Focus applied")
    self.camera?.setExposureModeCustomWithDuration(CMTimeMake(1, 10), iso: 100, completionHandler: { (timestamp: CMTime) -> Void in
        print("Exposure applied")
        self.camera?.setWhiteBalanceModeLockedWithDeviceWhiteBalanceGains(AVCGains, completionHandler: { (timestamp: CMTime) -> Void in
            print("White Balance applied")
            // All settings have been applied; start running the session here.
        })
    })
})
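Also note that, as in your own attempt, these setters must be called while the device configuration is locked. A minimal sketch, assuming the same camera property as above:
do {
    try self.camera?.lockForConfiguration()
    // ... the chained focus / exposure / white balance calls from above go here ...
    self.camera?.unlockForConfiguration()
} catch {
    print("Could not lock the device for configuration: \(error)")
}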

AVPlayer seekToTime not working properly

I'm having a problem seeking with AVPlayer.seekToTime. I compute the time index that I want to seek to inside a scrollViewDidScroll method, like this:
func scrollViewDidScroll(scrollView: UIScrollView) {
    let offsetTime = scrollView.contentOffset.y * 0.1
    self.playerController.player?.seekToTime(CMTime(seconds: Double(offsetTime), preferredTimescale: 10),
                                             toleranceBefore: kCMTimePositiveInfinity,
                                             toleranceAfter: kCMTimeZero)
}
But the video does not flow nicely. For example, when you scroll I want the video to move forward by only about 0.01 of a second (the video is really short, only about 2.0 seconds long), but when you scroll far enough the video instead jumps forward by almost a whole second. It's really choppy, and I'm not sure why I can't seek from, say, 1.00 seconds to 1.01 seconds and have the frame shown by the player update. Is this possible? What am I doing wrong? Please help!
PS: I never call self.playerController.player?.play(), if that helps.
Maybe your toleranceBefore is not set right. Try the following instead of your code:
let offsetTime = scrollView.contentOffset.y * 0.1
let seekTime: CMTime = CMTime(seconds: Double(offsetTime), preferredTimescale: 1000)
self.playerController.player?.seekToTime(seekTime, toleranceBefore: kCMTimeZero, toleranceAfter: kCMTimeZero)
The timescale tells you how many units there are per second.
Also, toleranceBefore should be kCMTimeZero as well, not just toleranceAfter.
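Putting both changes together, a minimal sketch of the scroll handler (same Swift 2-era API as in the question) might look like this:
func scrollViewDidScroll(scrollView: UIScrollView) {
    let offsetTime = scrollView.contentOffset.y * 0.1
    // A timescale of 1000 gives millisecond resolution for the seek target.
    let seekTime = CMTime(seconds: Double(offsetTime), preferredTimescale: 1000)
    self.playerController.player?.seekToTime(seekTime,
                                             toleranceBefore: kCMTimeZero,
                                             toleranceAfter: kCMTimeZero)
}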
hope this Helps :-)
Hey, I tried the above code in a Xamarin project using Octane.Xam.VideoPlayer: https://components.xamarin.com/view/video-player
It worked really well!
Here is my sample code, which I put into a custom renderer that inherits from VideoPlayerRenderer (found in the namespace Octane.Xam.VideoPlayer.iOS.Renderers):
public void CustomSeek(int seekTime)
{
    CMTime tm = new CMTime(seekTime, 10000);
    if (NativeVideoPlayer.Player != null)
        NativeVideoPlayer.Player.Seek(tm, CMTime.Zero, CMTime.Zero);
}
Keep in mind that you will have to use Version 1.1.4 of the Octane.Xam.VideoPlayer to gain access to the NativeVideoPlayer property of the VideoPlayerRenderer. In the future this property may be renamed to PlayerControl.

CorePlot real-time plotting

I am new to Swift development. I recently used CorePlot to draw ECG data from a sensor in real time. The sampling rate of the sensor is 250 points/sec. At first, I read 5 points at a time, drew them, and refreshed the view until the scatter plot filled the view (about 1000 points). But I found that this was inefficient and data would get lost.
Afterwards I saw that I could insert the new points into the graph by using -insertDataAtIndex:numberOfRecords:. I have looked at the RealTimePlot example in CorePlot, but I don't know exactly how to do it in Swift. Can you give me a RealTimePlot example in Swift, or tell me how to do it?
Please help me.
typealias plotDataType = [CPTScatterPlotField: Double]

private var scatterGraph: CPTXYGraph? = nil
private var ecgDataForPlot = [plotDataType]()

func plotChart(ecg: [Int]) {
    let ecgLinePlot = CPTScatterPlot(frame: CGRectZero)
    var ecgContentArray = [plotDataType]()
    for i in 0 ..< ecg.count {                  // ecg.count = 1000
        let y = Double(ecg[i]) / 65535.0 + 0.05
        let x = Double(i) * 0.1
        let dataPoint: plotDataType = [.X: x, .Y: y]
        ecgContentArray.append(dataPoint)
    }
    ecgDataForPlot = ecgContentArray
}

// MARK: - Plot Data Source Methods

func numberOfRecordsForPlot(plot: CPTPlot) -> UInt {
    return UInt(self.ecgDataForPlot.count)
}

func numberForPlot(plot: CPTPlot, field: UInt, recordIndex: UInt) -> AnyObject? {
    let plotField = CPTScatterPlotField(rawValue: Int(field))
    if let ecgNum = self.ecgDataForPlot[Int(recordIndex)][plotField!] {
        return ecgNum as NSNumber
    }
    return nil
}
You follow the same process the "Real Time Plot" demo uses. The update action takes place in the -newData: method. The demo calls this method periodically using a timer, but you can supply new data from anywhere, e.g., a callback or delegate from the sensor device.
These are the basic steps to follow:
Remove old points from the beginning of the dataset if adding new ones will make the total dataset larger than what you want to display. Remove them from both the plot and from your local data cache (ecgDataForPlot). You can keep them around if you want the user to be able to scroll back into the history. The demo doesn't need to keep the old data since it scrolls automatically and there is no way to go back and see data from the past.
Update the plot range to include the new data points. The demo animates the change, but this part is optional.
Add the new data points to the local data cache (ecgDataForPlot).
Call -insertDataAtIndex:numberOfRecords: to tell the plot to load the new data. The demo always adds new data at the end of the existing dataset, which makes sense for a time series, but you can insert new data anywhere.
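As a rough sketch of those steps in Swift (hedged: ecgPlot, ecgDataForPlot and maxDataPoints are assumed names based on the code above, and the plot-range update in step 2 is left to you):
let maxDataPoints = 1000   // how many points to keep on screen
// ecgPlot is assumed to be the CPTScatterPlot that was added to the graph.

func newData(points: [Double]) {
    // 1. Drop old points if the dataset would grow past what we want to display.
    let overflow = ecgDataForPlot.count + points.count - maxDataPoints
    if overflow > 0 {
        ecgDataForPlot.removeFirst(overflow)
        ecgPlot?.deleteDataInIndexRange(NSMakeRange(0, overflow))
    }

    // 2. Optionally adjust the plot space's xRange here so the new points are visible.

    // 3. Append the new points to the local data cache.
    let firstNewIndex = ecgDataForPlot.count
    for (i, value) in points.enumerate() {
        let x = Double(firstNewIndex + i) * 0.1
        ecgDataForPlot.append([.X: x, .Y: value])
    }

    // 4. Tell the plot to load the new records at the end of the dataset.
    ecgPlot?.insertDataAtIndex(UInt(firstNewIndex), numberOfRecords: UInt(points.count))
}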