I am using Swift 5 with Charts 3.6.0 (line chart, cubic lines) to plot real-time Core Motion data from watchOS. The goal is to display watch movement as quickly as possible. Since my sample rate is high, I suspect the bottleneck will be in updating the view, so I would like to display only the N most recent entries.
Here is the watch function that sends the data immediately, as described in the Apple docs and numerous tutorials:
motion.deviceMotionUpdateInterval = 1.0 / 120.0
motion.startDeviceMotionUpdates(using: .xArbitraryZVertical, to: queue) { [self] (deviceMotion: CMDeviceMotion?, _: Error?) in
    guard let motion = deviceMotion else { return }
    self.sendDataToPhone(quaternion: motion.attitude.quaternion, time: Date().timeIntervalSince1970)
}
private func sendDataToPhone(quaternion: CMQuaternion, time: Double) {
    if WCSession.default.isReachable {
        let payload: [Double] = [quaternion.x, quaternion.y, quaternion.z, quaternion.w, time]
        WCSession.default.sendMessageData(try! NSKeyedArchiver.archivedData(withRootObject: payload, requiringSecureCoding: false), replyHandler: nil, errorHandler: nil)
    }
}
Once received, the packets are interpreted by the session() function on the iPhone:
override func viewDidLoad() {
    super.viewDidLoad()
    self.lineChartView.leftAxis.axisMinimum = -1
    self.lineChartView.leftAxis.axisMaximum = 1
}
func session(_ session: WCSession, didReceiveMessageData messageData: Data) {
    let record = try! NSKeyedUnarchiver.unarchivedObject(ofClasses: [NSArray.self], from: messageData) as! [Double]
    laggyFunction(quaternions: [simd_quatd(ix: record[0], iy: record[1], iz: record[2], r: record[3])], quaternionTimes: [record[4]])
}
private func laggyFunction(quaternions: [simd_quatd], quaternionTimes: [Double]) {
    DispatchQueue.main.sync {
        let dataset = self.lineChartView.data!.getDataSetByIndex(0)!
        var x = dataset.entryCount + 1
        for quaternion in quaternions {
            let _ = dataset.addEntry(ChartDataEntry(x: Double(x), y: quaternion.vector.w))
            x += 1
        }
        // Limit the number of points.
        while dataset.entryCount > 5 {
            let _ = dataset.removeFirst()
        }
        // Re-index so entries start from 1.
        for index in 0..<dataset.entryCount {
            dataset.entryForIndex(index)!.x = Double(index + 1)
        }
        self.lineChartView.data!.notifyDataChanged()
        self.lineChartView.notifyDataSetChanged()
    }
}
Logic flow:
As packets come in, new entries are appended to the initial dataset of the lineChartView. Whenever there are more than 5 entries, the oldest ones are removed, and the x values are then re-indexed so the chart keeps a sequential flow.
The problem:
The delay in updating the chart is very high. The elapsed time of both functions is plotted below. The laggy function keeps up with the sample rate of incoming packets (elapsed time < 1/120 ≈ 0.0083 s) only up to about the 73rd percentile, while the session function keeps up throughout. In my opinion, the CDF plot does not do the problem justice: visually, the chart is very "sluggish". As an experiment, if I throw the watch against a wall, I can see it hit the concrete well before the chart updates.
My goal is to update the chart as quickly as possible so the watch motion can be observed, and to discard new entries until the UI has been updated. What is the correct way to do this with my chart choice?
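For context on what "discard until updated" could look like: one common pattern is to coalesce bursts of samples so the chart redraws at most once per pass of the main run loop, keeping only the latest sample in between. Below is a minimal sketch of session(_:didReceiveMessageData:) restructured that way; pendingSample and updateScheduled are hypothetical properties, touched only on the main queue so no locking is needed, and the packet layout [x, y, z, w, time] is the one from above.

func session(_ session: WCSession, didReceiveMessageData messageData: Data) {
    let record = try! NSKeyedUnarchiver.unarchivedObject(ofClasses: [NSArray.self], from: messageData) as! [Double]
    DispatchQueue.main.async {
        self.pendingSample = record[3] // keep only the most recent w component
        guard !self.updateScheduled else { return } // a redraw is already queued; this sample is superseded
        self.updateScheduled = true
        DispatchQueue.main.async { // runs after the current burst of messages drains
            self.updateScheduled = false
            guard let sample = self.pendingSample else { return }
            self.pendingSample = nil
            // ...append `sample`, trim to the N most recent entries, re-index,
            // and call notifyDataSetChanged() exactly once...
        }
    }
}

Note that this also replaces DispatchQueue.main.sync with async, so the WCSession delegate queue is never blocked waiting on a chart redraw.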
Related
I am trying to figure out how to use Apple's Core Audio APIs to record and play back linear PCM audio without any file I/O. (The recording side seems to work just fine.)
The code I have is pretty short, and it works somewhat. However, I am having trouble with identifying the source of clicks and pops in the output. I've been beating my head against this for many days with no success.
I have posted a git repo with a command-line program that shows where I'm at: https://github.com/maxharris9/AudioRecorderPlayerSwift/tree/main/AudioRecorderPlayerSwift
I put in a couple of functions to prepopulate the recording. The tone generator (makeWave) and noise generator (makeNoise) are just in here as debugging aids. I'm ultimately trying to identify the source of the messed-up output when you play back a recording from audioData:
// makeWave(duration: 30.0, frequency: 441.0) // appends to `audioData`
// makeNoise(frameCount: Int(44100.0 * 30)) // appends to `audioData`
_ = Recorder() // appends to `audioData`
_ = Player() // reads from `audioData`
Here's the player code:
var lastIndexRead: Int = 0

func outputCallback(inUserData: UnsafeMutableRawPointer?, inAQ: AudioQueueRef, inBuffer: AudioQueueBufferRef) {
    guard let player = inUserData?.assumingMemoryBound(to: Player.PlayingState.self) else {
        print("missing user data in output callback")
        return
    }

    let sliceStart = lastIndexRead
    let sliceEnd = min(audioData.count, lastIndexRead + bufferByteSize - 1)
    print("slice start:", sliceStart, "slice end:", sliceEnd, "audioData.count", audioData.count)

    if sliceEnd >= audioData.count {
        player.pointee.running = false
        print("found end of audio data")
        return
    }

    let slice = Array(audioData[sliceStart ..< sliceEnd])
    let sliceCount = slice.count

    // doesn't fix it
    // audioData[sliceStart ..< sliceEnd].withUnsafeBytes {
    //     inBuffer.pointee.mAudioData.copyMemory(from: $0.baseAddress!, byteCount: Int(sliceCount))
    // }
    memcpy(inBuffer.pointee.mAudioData, slice, sliceCount)
    inBuffer.pointee.mAudioDataByteSize = UInt32(sliceCount)
    lastIndexRead += sliceCount + 1

    // enqueue the buffer, or re-enqueue it if it's a used one
    check(AudioQueueEnqueueBuffer(inAQ, inBuffer, 0, nil))
}
struct Player {
    struct PlayingState {
        var packetPosition: UInt32 = 0
        var running: Bool = false
        var start: Int = 0
        var end: Int = Int(bufferByteSize)
    }

    init() {
        var playingState: PlayingState = PlayingState()
        var queue: AudioQueueRef?

        // this doesn't help
        // check(AudioQueueNewOutput(&audioFormat, outputCallback, &playingState, CFRunLoopGetMain(), CFRunLoopMode.commonModes.rawValue, 0, &queue))
        check(AudioQueueNewOutput(&audioFormat, outputCallback, &playingState, nil, nil, 0, &queue))

        var buffers: [AudioQueueBufferRef?] = Array<AudioQueueBufferRef?>(repeating: nil, count: BUFFER_COUNT)

        print("Playing\n")
        playingState.running = true

        for i in 0 ..< BUFFER_COUNT {
            check(AudioQueueAllocateBuffer(queue!, UInt32(bufferByteSize), &buffers[i]))
            outputCallback(inUserData: &playingState, inAQ: queue!, inBuffer: buffers[i]!)
            if !playingState.running {
                break
            }
        }

        check(AudioQueueStart(queue!, nil))

        repeat {
            CFRunLoopRunInMode(CFRunLoopMode.defaultMode, BUFFER_DURATION, false)
        } while playingState.running

        // delay to ensure the queue emits all buffered audio
        CFRunLoopRunInMode(CFRunLoopMode.defaultMode, BUFFER_DURATION * Double(BUFFER_COUNT + 1), false)

        check(AudioQueueStop(queue!, true))
        check(AudioQueueDispose(queue!, true))
    }
}
I captured the audio with Audio Hijack, and noticed that the jumps are indeed correlated with the size of the buffer.
Why is this happening, and what can I do to fix it?
I believe you were beginning to zero in on, or at least suspect, the cause of the popping you are hearing: it's caused by discontinuities in your waveform.
My initial hunch was that you were generating the buffers independently (i.e. assuming that each buffer starts at time = 0), but I checked out your code and it wasn't that. I suspected some of the calculations in makeWave were at fault. To check this theory, I replaced your makeWave with the following:
func makeWave(offset: Double, numSamples: Int, sampleRate: Float64, frequency: Float64, numChannels: Int) -> [Int16] {
    var data = [Int16]()
    for sample in 0..<numSamples / numChannels {
        // time in seconds
        let t = offset + Double(sample) / sampleRate
        let value = Double(Int16.max) * sin(2 * Double.pi * frequency * t)
        for _ in 0..<numChannels {
            data.append(Int16(value))
        }
    }
    return data
}
This function removes the double loop in the original, accepts an offset so it knows which part of the wave is being generated, and makes some changes to the sampling of the sine wave.
When Player is modified to use this function, you get a lovely steady tone. I'll add the changes to Player soon; I can't in good conscience show the public the quick and dirty mess it is now.
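In case it helps, a driver along these lines exercises the function; the chunk size and names below are illustrative only, and the key point is that the offset advances by exactly the duration of each chunk, so the waveform stays continuous across buffer boundaries:

// Hypothetical test driver: build the tone in chunks whose offsets line up.
let sampleRate: Float64 = 44100
let numChannels = 2
let chunkSamples = 4096 // total Int16 values per chunk, across all channels

var offset = 0.0
for _ in 0..<100 {
    audioData += makeWave(offset: offset, numSamples: chunkSamples, sampleRate: sampleRate, frequency: 441.0, numChannels: numChannels)
    offset += Double(chunkSamples / numChannels) / sampleRate // advance by the chunk's duration in seconds
}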
Based on your comments below, I refocused on your player. The issue was that the audio buffers expect byte counts, but the slice count and some other calculations were based on Int16 counts. The following version of outputCallback fixes it; note the use of the new variable bytesPerChannel.
func outputCallback(inUserData: UnsafeMutableRawPointer?, inAQ: AudioQueueRef, inBuffer: AudioQueueBufferRef) {
    guard let player = inUserData?.assumingMemoryBound(to: Player.PlayingState.self) else {
        print("missing user data in output callback")
        return
    }

    let bytesPerChannel = MemoryLayout<Int16>.size
    let sliceStart = lastIndexRead
    let sliceEnd = min(audioData.count, lastIndexRead + bufferByteSize / bytesPerChannel)

    if sliceEnd >= audioData.count {
        player.pointee.running = false
        print("found end of audio data")
        return
    }

    let slice = Array(audioData[sliceStart ..< sliceEnd])
    let sliceCount = slice.count
    print("slice start:", sliceStart, "slice end:", sliceEnd, "audioData.count", audioData.count, "slice count:", sliceCount)

    // need to be careful to convert from counts of Int16s to bytes
    memcpy(inBuffer.pointee.mAudioData, slice, sliceCount * bytesPerChannel)
    inBuffer.pointee.mAudioDataByteSize = UInt32(sliceCount * bytesPerChannel)
    lastIndexRead += sliceCount

    // enqueue the buffer, or re-enqueue it if it's a used one
    check(AudioQueueEnqueueBuffer(inAQ, inBuffer, 0, nil))
}
I did not look at the Recorder code, but you may want to check whether the same sort of error crept in there.
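One more aside for future readers: mixing up element counts and byte counts is easiest to avoid if the sizes are derived in one place. A sketch, assuming audioFormat is the AudioStreamBasicDescription from the repo and BUFFER_DURATION is in seconds (both names come from the question's project; the derivation itself is just arithmetic):

// bufferByteSize must be a byte count; audioData.count is a count of Int16 values.
let bytesPerSample = MemoryLayout<Int16>.size                        // 2
let bytesPerFrame = Int(audioFormat.mBytesPerFrame)                  // e.g. 4 for stereo Int16
let framesPerBuffer = Int(audioFormat.mSampleRate * BUFFER_DURATION) // frames one buffer should hold
let bufferByteSize = framesPerBuffer * bytesPerFrame                 // what AudioQueueAllocateBuffer expects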
I am writing an application that needs to detect a frequency in the audio stream. I have read about a million articles and am having trouble crossing the finish line. My audio data comes to me in the function below via Apple's AVFoundation framework.
I am using Swift 4.2 and have tried playing with the FFT functions, but they are a little over my head at the moment.
Any thoughts?
// Gets the data as a callback from the AVFoundation framework.
public func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
    // Prints the whole sample buffer, which tells us a lot about what's inside.
    print(sampleBuffer)

    // Create a block buffer, and use CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer
    // to read the data out into an AudioBufferList.
    var buffer: CMBlockBuffer? = nil
    var audioBufferList = AudioBufferList(mNumberBuffers: 1,
                                          mBuffers: AudioBuffer(mNumberChannels: 1, mDataByteSize: 0, mData: nil))
    CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer(sampleBuffer, bufferListSizeNeededOut: nil, bufferListOut: &audioBufferList, bufferListSize: MemoryLayout<AudioBufferList>.size, blockBufferAllocator: nil, blockBufferMemoryAllocator: nil, flags: UInt32(kCMSampleBufferFlag_AudioBufferList_Assure16ByteAlignment), blockBufferOut: &buffer)

    let abl = UnsafeMutableAudioBufferListPointer(&audioBufferList)
    var sum: Int64 = 0
    var count: Int = 0
    var bufs: Int = 0
    var max: Int64 = 0
    var min: Int64 = 0

    // Loop through the samples and check for mins and maxes.
    for buff in abl {
        let samples = UnsafeMutableBufferPointer<Int16>(start: UnsafeMutablePointer(OpaquePointer(buff.mData)),
                                                        count: Int(buff.mDataByteSize) / MemoryLayout<Int16>.size)
        for sample in samples {
            let s = Int64(sample)
            sum = sum + s * s
            count += 1
            if s > max {
                max = s
            }
            if s < min {
                min = s
            }
            print(sample)
        }
        bufs += 1
    }

    // debug
    print("min - \(min), max = \(max)")

    // Update the interface.
    DispatchQueue.main.async {
        self.frequencyDataOutLabel.text = "min - \(min), max = \(max)"
    }

    // Stop the capture session.
    self.captureSession.stopRunning()
}
After much research, I found that the answer is to use an FFT (Fast Fourier Transform). It takes the raw input from the code above and converts it into an array of values representing the magnitude of each frequency band.
Much props to the open code at https://github.com/jscalo/tempi-fft, which implements a visualizer that captures the data and displays it. From there, it was a matter of manipulating it to meet my needs. In my case, I was looking for frequencies well above human hearing (the 20 kHz range). By scanning the latter half of the array in the tempi-fft code, I was able to determine whether the frequencies I was looking for were present and loud enough.
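For readers who would rather not pull in a library, the core of what tempi-fft does can be sketched directly on top of Accelerate's vDSP real FFT. This is a minimal illustration, not tempi-fft's actual code (which adds windowing and band averaging); the function name is mine, and the input length must be a power of two:

import Accelerate

// Returns the squared-magnitude spectrum of `samples` (count must be a power of two).
// Bin i corresponds to a frequency of i * sampleRate / Float(samples.count) Hz.
func magnitudeSpectrum(of samples: [Float]) -> [Float] {
    let n = samples.count
    let log2n = vDSP_Length(log2(Float(n)))
    guard let fftSetup = vDSP_create_fftsetup(log2n, FFTRadix(kFFTRadix2)) else { return [] }
    defer { vDSP_destroy_fftsetup(fftSetup) }

    var real = [Float](repeating: 0, count: n / 2)
    var imag = [Float](repeating: 0, count: n / 2)
    var magnitudes = [Float](repeating: 0, count: n / 2)

    real.withUnsafeMutableBufferPointer { realPtr in
        imag.withUnsafeMutableBufferPointer { imagPtr in
            var split = DSPSplitComplex(realp: realPtr.baseAddress!, imagp: imagPtr.baseAddress!)
            // Pack the real signal into split-complex form (even samples -> realp, odd -> imagp).
            samples.withUnsafeBufferPointer { bufPtr in
                bufPtr.baseAddress!.withMemoryRebound(to: DSPComplex.self, capacity: n / 2) {
                    vDSP_ctoz($0, 2, &split, 1, vDSP_Length(n / 2))
                }
            }
            vDSP_fft_zrip(fftSetup, &split, 1, log2n, FFTDirection(FFT_FORWARD))
            vDSP_zvmags(&split, 1, &magnitudes, 1, vDSP_Length(n / 2)) // squared magnitude per bin
        }
    }
    return magnitudes
}

To hunt for a near-ultrasonic tone at a 44.1 kHz sample rate, you would then inspect the bins around index 20000 * n / 44100, which is exactly the "latter half of the array" scan described above.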
I updated to the latest version of AudioKit, 4.5, and my AudioKit class, which is meant to listen to microphone amplitude, is now printing 55: EXCEPTION (-1): "" infinitely to the console. The app doesn't crash or anything, but it keeps logging that.
My app is a video camera app that records using the GPUImage library.
For some reason, the logs appear only when I start recording.
In addition, my onAmplitudeUpdate callback method no longer outputs anything, just 0.0 values. This didn't happen before updating AudioKit. Any ideas?
Here is my class:
//
//  G8Audiokit.swift
//  GenerateToolkit
//
//  Created by Omar Juarez Ortiz on 2017-08-03.
//  Copyright © 2017 All rights reserved.
//

import Foundation
import AudioKit

class G8Audiokit {
    // Variables for audio analysis
    var microphone: AKMicrophone!             // Device microphone
    var amplitudeTracker: AKAmplitudeTracker! // Tracks the amplitude of the microphone
    var signalBooster: AKBooster!             // Boosts the signal
    var audioAnalysisTimer: Timer?            // Continuously calls the audioAnalysis function
    let amplitudeBuffSize = 10                // A smaller buffer yields more amplitude responsiveness and instability; a higher value responds more slowly but is smoother
    var amplitudeBuffer: [Double]             // Stores a rolling window of amplitude values, used to get the average amplitude
    public var onAmplitudeUpdate: ((_ value: Float) -> ())?
    static let sharedInstance = G8Audiokit()

    private init() { // private so the class can only be initialized by itself (singleton)
        self.amplitudeBuffer = [Double](repeating: 0.0, count: amplitudeBuffSize)
        startAudioAnalysis()
    }

    // public override init() {
    //     // Initialize the audio buffer with zeros
    // }

    /**
     Set up the AudioKit processing pipeline and start the audio analysis.
     */
    func startAudioAnalysis() {
        stopAudioAnalysis()

        // Settings
        AKSettings.bufferLength = .medium // Sets the audio signal buffer size
        do {
            try AKSettings.setSession(category: .playAndRecord)
        } catch {
            AKLog("Could not set session category.")
        }

        // ----------------
        // Input + pipeline

        // Initialize the built-in microphone
        microphone = AKMicrophone()

        // Pre-processing
        signalBooster = AKBooster(microphone)
        signalBooster.gain = 5.0 // When video recording starts, the signal gets boosted to the equivalent of 5.0, so we set it to 5.0 here and change it to 1.0 when video recording starts.

        // Filter out anything outside the human voice range
        let highPass = AKHighPassFilter(signalBooster, cutoffFrequency: 55) // Lowered a bit to be more sensitive to bass drums
        let lowPass = AKLowPassFilter(highPass, cutoffFrequency: 255)

        // At this point there isn't much signal left, so balance it against the original signal
        let rebalanced = AKBalancer(lowPass, comparator: signalBooster)

        // Track the amplitude of the rebalanced signal; we use this value for audio reactivity
        amplitudeTracker = AKAmplitudeTracker(rebalanced)

        // Mute the audio that gets routed to the device output, preventing feedback
        let silence = AKBooster(amplitudeTracker, gain: 0)

        // Complete the chain by routing the silenced audio to the output
        AudioKit.output = silence

        // Start the chain and the timer callback
        do { try AudioKit.start() }
        catch {}

        audioAnalysisTimer = Timer.scheduledTimer(timeInterval: 0.01,
                                                  target: self,
                                                  selector: #selector(audioAnalysis),
                                                  userInfo: nil,
                                                  repeats: true)
        // Put the timer on the main run loop so UI updates don't interrupt it
        RunLoop.main.add(audioAnalysisTimer!, forMode: RunLoopMode.commonModes)
    }

    // Call this when closing the app or going to the background
    public func stopAudioAnalysis() {
        audioAnalysisTimer?.invalidate()
        AudioKit.disconnectAllInputs() // Disconnect all AudioKit components so they can be relinked when we call startAudioAnalysis()
    }

    // Called by audioAnalysisTimer
    @objc func audioAnalysis() {
        writeToBuffer(val: amplitudeTracker.amplitude) // Write an amplitude value to the rolling buffer
        let val = getBufferAverage()
        onAmplitudeUpdate?(Float(val))
    }

    // Writes amplitude values to a rolling window buffer: writes to index 0, pushes the previous values to the right, and removes the last value to preserve the buffer length.
    func writeToBuffer(val: Double) {
        for (index, _) in amplitudeBuffer.enumerated() {
            if index == 0 {
                amplitudeBuffer.insert(val, at: 0)
                _ = amplitudeBuffer.popLast()
            } else if index < amplitudeBuffer.count - 1 {
                amplitudeBuffer.rearrange(from: index - 1, to: index + 1) // `rearrange` is a custom Array extension
            }
        }
    }

    // Returns the average of the amplitudeBuffer, resulting in a smoother audio-reactivity signal
    func getBufferAverage() -> Double {
        var avg: Double = 0.0
        for val in amplitudeBuffer {
            avg = avg + val
        }
        avg = avg / Double(amplitudeBuffer.count)
        return avg
    }
}
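(Unrelated to the exception, but while reading the class: if I understand the intent of writeToBuffer correctly, i.e. newest value in front, oldest dropped, fixed window length, the rolling window and its average can be written much more simply. A sketch with the same observable behavior:

// Newest value at the front, oldest value dropped; average over the whole window.
func writeToBuffer(val: Double) {
    amplitudeBuffer.insert(val, at: 0)
    amplitudeBuffer.removeLast()
}

func getBufferAverage() -> Double {
    return amplitudeBuffer.reduce(0, +) / Double(amplitudeBuffer.count)
}
)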
Below is the code I have so far; I cannot figure out why I'm getting inaccurate data.
I'm not accounting for pause events yet, but that should not affect the inaccuracies in the first two kilometres.
The output should be the distance (1 km) and the duration that kilometre took.
Any ideas for improvement?
func getHealthKitWorkouts() {
    print("HealthKit Workout:")

    /* Boris here: Looks like we need some sort of HealthKit manager */
    let healthStore: HKHealthStore = HKHealthStore()
    let durationFormatter = NSDateComponentsFormatter()
    var workouts = [HKWorkout]()

    // Predicate to read only running workouts
    let predicate = HKQuery.predicateForWorkoutsWithWorkoutActivityType(HKWorkoutActivityType.Running)
    // Order the workouts by date
    let sortDescriptor = NSSortDescriptor(key: HKSampleSortIdentifierStartDate, ascending: false)
    // Create the query
    let sampleQuery = HKSampleQuery(sampleType: HKWorkoutType.workoutType(), predicate: predicate, limit: 0, sortDescriptors: [sortDescriptor])
    { (sampleQuery, results, error) -> Void in
        if let queryError = error {
            print("There was an error while reading the samples: \(queryError.localizedDescription)")
        }
        workouts = results as! [HKWorkout]
        let target: Int = 0
        print(workouts[target].workoutEvents)
        print("Energy ", workouts[target].totalEnergyBurned)
        print(durationFormatter.stringFromTimeInterval(workouts[target].duration))
        print(workouts[target].totalDistance!.doubleValueForUnit(HKUnit.meterUnit()))
        self.coolMan(workouts[target])
        self.coolManStat(workouts[target])
    }
    // Execute the query
    healthStore.executeQuery(sampleQuery)
}
func coolMan(workout: HKWorkout) {
    let expectedOutput = [
        NSTimeInterval(293),
        NSTimeInterval(359),
        NSTimeInterval(359),
        NSTimeInterval(411),
        NSTimeInterval(810)
    ]
    let healthStore: HKHealthStore = HKHealthStore()
    let distanceType = HKObjectType.quantityTypeForIdentifier(HKQuantityTypeIdentifierDistanceWalkingRunning)
    let workoutPredicate = HKQuery.predicateForObjectsFromWorkout(workout)
    let startDateSort = NSSortDescriptor(key: HKSampleSortIdentifierStartDate, ascending: true)
    let query = HKSampleQuery(sampleType: distanceType!, predicate: workoutPredicate,
                              limit: 0, sortDescriptors: [startDateSort]) {
        (sampleQuery, results, error) -> Void in
        // Process the detailed samples...
        if let distanceSamples = results as? [HKQuantitySample] {
            var count = 0.00, countPace = 0.00, countDistance = 0.0, countPacePerMeterSum = 0.0
            var countSplits = 0
            var firstStart = distanceSamples[0].startDate
            let durationFormatter = NSDateComponentsFormatter()

            print("🕒 Time Splits: ")
            for (index, element) in distanceSamples.enumerate() {
                count += element.quantity.doubleValueForUnit(HKUnit.meterUnit())

                /* Calculate pace */
                let duration = element.endDate.timeIntervalSinceDate(element.startDate)
                let distance = distanceSamples[index].quantity
                let pacePerMeter = distance.doubleValueForUnit(HKUnit.meterUnit()) / duration
                countPace += duration
                countPacePerMeterSum += pacePerMeter

                if count > 1000 {
                    /* Account for extra bits */
                    let percentageUnder = (1000 / count)
                    //countPace = countPace * percentageUnder
                    print("👣 Reached Kilometer \(count) ")

                    // MARK: Testing
                    let testOutput = durationFormatter.stringFromTimeInterval(NSTimeInterval.init(floatLiteral: countPace)),
                        testOutputExpected = durationFormatter.stringFromTimeInterval(expectedOutput[countSplits])
                    print(" Output Accuracy (", round(countPace - expectedOutput[countSplits]), "): expected \(testOutputExpected) versus \(testOutput)")
                    print(" ", firstStart, " until ", element.endDate)

                    /* Print the split time taken */
                    firstStart = distanceSamples[index].endDate
                    count = (count % 1000)
                    countPace = (count % 1000) * pacePerMeter
                    countSplits++

                    /* Noise
                    \(countSplits) – \(count) – Pace \(countPace) – Pace Per Meter \(pacePerMeter) – Summed Pace Per Meter \(countPacePerMeterSum) – \(countPacePerMeterSum / Double.init(index))"
                    */
                }

                /* Account for the last entry */
                if (distanceSamples.count - 1) == index {
                    print("We started a kilometer \(countSplits + 1) – \(count)")
                    let pacePerKM = (count / countPace) * 1000
                    print(durationFormatter.stringFromTimeInterval(NSTimeInterval.init(floatLiteral: pacePerKM)))
                }
            }
        } else {
            // Perform proper error handling here...
            print("*** An error occurred while adding a sample to " + "the workout: \(error!.localizedDescription)")
            abort()
        }
    }
    healthStore.executeQuery(query)
}
func coolManStat(workout: HKWorkout) {
    let healthStore: HKHealthStore = HKHealthStore()
    let distanceType = HKQuantityType.quantityTypeForIdentifier(HKQuantityTypeIdentifierDistanceWalkingRunning)
    let sumOption = HKStatisticsOptions.CumulativeSum
    let statisticsSumQuery = HKStatisticsQuery(quantityType: distanceType!, quantitySamplePredicate: HKQuery.predicateForObjectsFromWorkout(workout),
                                               options: sumOption)
    { (query, result, error) in
        if let sumQuantity = result?.sumQuantity() {
            let totalKilometres = Int(sumQuantity.doubleValueForUnit(HKUnit.meterUnit())) / 1000
            print("👣 Right -O: ", totalKilometres)
        }
    }
    healthStore.executeQuery(statisticsSumQuery)
}
I'm sure you're past this problem by now, more than two years later! But I'm sure someone else will come across this thread in the future, so I thought I'd share the answer.
I started off with a version of your code (many thanks!!) and encountered the same problems. I had to make a few changes. Not all of those changes are related to the issues you were seeing, but in any case, here are all of the considerations I've thought of so far:
Drift
You don't handle the 'drift', although this isn't what's causing the big inaccuracies in your output. What I mean is that your code says:
if count > 1000
But you don't do anything with the remainder over 1000, so your kilometre time isn't for 1000 m; it's for, let's say, 1001 m. Your time is therefore inaccurate for the current km, and it also includes some of the running from the next km, so that time will be wrong too. Over a long run this can start to cause noticeable problems; it's not a big deal over short runs, since the difference isn't significant at small distances, but it's definitely worth fixing. In my code I assume the runner was moving at a constant pace during the current sample (obviously not perfect, but I don't think there's a better way). I then find the fraction of the current sample's distance that puts the split distance over 1000 m, take that same fraction of the current sample's duration, remove it from the current km's time, and add it (and the distance) to the next split, as sketched below.
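To make the carry-over concrete, here is a minimal sketch of that logic. The names are hypothetical, and samples stands for the (distance, duration) pairs extracted from the HKQuantitySamples:

struct Split { var distance = 0.0; var duration = 0.0 }

// Assumes constant pace within each sample; hands any overshoot past 1000 m
// (the extra distance and the matching fraction of time) to the next split.
func kilometreSplits(from samples: [(distance: Double, duration: Double)]) -> [Split] {
    var splits: [Split] = []
    var current = Split()
    for sample in samples {
        var distance = sample.distance
        var duration = sample.duration
        while current.distance + distance >= 1000 {
            let needed = 1000 - current.distance
            let fraction = needed / distance        // share of this sample inside the current km
            let usedDuration = duration * fraction
            current.distance = 1000
            current.duration += usedDuration
            splits.append(current)
            current = Split()
            distance -= needed
            duration -= usedDuration
        }
        current.distance += distance
        current.duration += duration
    }
    return splits // `current` still holds the partial final kilometre if you need it
}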
GPS drops
The real problem with your results is that you don't handle GPS drops. The way I'm currently handling this is to compare the startDate of the current sample with the endDate of the previous sample. If they're not the same, then there was a GPS drop, and you need to add the difference between the previous endDate and the current startDate to the current split. Edit: you also need to do this with the startDate of the activity and the startDate of the first sample, since there will be a gap between those two dates while GPS is connecting.
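In sketch form (the variable names are mine, not HealthKit's):

// If consecutive samples are not contiguous, GPS dropped out (or the user paused).
// The missing wall-clock time belongs to the current split.
let gap = sample.startDate.timeIntervalSince(previousSample.endDate)
if gap > 0 {
    currentSplitDuration += gap // unless a pause event covers this interval; see "Pauses" below
}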
Pauses
There's a slight complication to the GPS-drop problem above: if the user has paused the workout, there will also be a difference between the current sample's startDate and the previous sample's endDate. So you need to be able to detect that case and not adjust the split. However, if the user's GPS dropped and they also paused during that time, then you'll need to subtract the pause time from the missing time before adding it to the split.
Unfortunately, my splits are still not 100% in sync with the Apple Workouts app. But they've gone from being potentially minutes off to being mostly within 1 second. The worst I've seen is 3 seconds. I've only been working on this for a couple of hours, so I plan to continue trying to get 100% accuracy. I'll update this answer if I get that. But I believe I've covered the major problems here.
I'm working on a task to determine when the iPhone 6 is moving (the smallest possible movement, not even a shake!) in any direction (x, y, or z).
What is the best way to achieve that?
I used this code and found it useful. It contains four functions:
- Start the motion manager
- Stop the motion manager
- Update the motion manager
- magnitudeFromAttitude
import UIKit
import CoreMotion

let motionManager: CMMotionManager = CMMotionManager()
var initialAttitude: CMAttitude!

// Start the motion manager
func startMotionManager() {
    if !motionManager.deviceMotionActive {
        motionManager.deviceMotionUpdateInterval = 1
        motionManager.startDeviceMotionUpdates()
    }
}

// Stop the motion manager
func stopMotionManager() {
    if motionManager.deviceMotionActive {
        motionManager.stopDeviceMotionUpdates()
    }
}

// Update the motion manager
func updateMotionManager(x: UIViewController) {
    if motionManager.deviceMotionAvailable {
        initialAttitude = motionManager.deviceMotion.attitude
        motionManager.startDeviceMotionUpdatesToQueue(NSOperationQueue.currentQueue(), withHandler: {
            [weak x] (data: CMDeviceMotion!, error: NSError!) in
            // Express the current attitude relative to the initial attitude
            data.attitude.multiplyByInverseOfAttitude(initialAttitude)
            // Calculate the magnitude of the change from our initial attitude
            let magnitude = magnitudeFromAttitude(data.attitude)
            if magnitude > 0.1 { // threshold
                // Device has moved!
                // Put the code that should fire when the device moves here,
                // then reset the reference attitude.
                initialAttitude = motionManager.deviceMotion.attitude
            }
        })
        println(motionManager.deviceMotionActive) // prints false
    }
}

// Get the magnitude of the attitude change via the Pythagorean theorem
func magnitudeFromAttitude(attitude: CMAttitude) -> Double {
    return sqrt(pow(attitude.roll, 2) + pow(attitude.yaw, 2) + pow(attitude.pitch, 2))
}