Calculate the quality of the GPS signal on iPhone

I am developing an application very similar to Runtastic.
I'm having trouble understanding how to measure the quality of the GPS signal.
If I understand correctly, you need to use the horizontalAccuracy property, but I don't understand what ranges let the signal be considered excellent, good, or no signal.
Can you help me?
I have found several examples on the internet, but when I apply those values I don't get what I want.
This is my code:
dispatch_async(dispatch_get_main_queue()) {
    // assign new image
    if newLocation.horizontalAccuracy < 0 {
        // no signal
        self.qualitaSegnale.text = "\(newLocation.horizontalAccuracy)"
        self.imageGPS.image = UIImage(named: "gps_signal_ko")
    } else if newLocation.horizontalAccuracy > 163 {
        self.qualitaSegnale.text = "\(newLocation.horizontalAccuracy)"
        self.imageGPS.image = UIImage(named: "gps_signal_peer")
    } else if newLocation.horizontalAccuracy > 48 {
        self.qualitaSegnale.text = "\(newLocation.horizontalAccuracy)"
        self.imageGPS.image = UIImage(named: "gps_signal_peer")
    } else {
        self.qualitaSegnale.text = "\(newLocation.horizontalAccuracy)"
        self.imageGPS.image = UIImage(named: "gps_signal_ok")
    }
}
Thanks to everyone for your time.
Vincenzo

horizontalAccuracy is a radius in metres, so anything below 48 is considered good; I'd say anything below 10 is excellent.
Source: I've been messing with similar stuff recently, and the best reading I can get is a radius of 5 metres. I would consider 5 metres the best possible reading at this point in time.
In addition to:
if newLocation.horizontalAccuracy < 0 {
I'd also check:
func locationManager(_ manager: CLLocationManager, didFailWithError error: Error) {
Because a negative horizontalAccuracy means the fix is invalid, and in that situation the delegate will probably also receive errors.
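Putting the thresholds together, here is a minimal sketch of a classification helper. The cut-offs are the ones suggested above, not Apple-defined tiers, and the function name signalQuality is just for illustration:

import CoreLocation

// Rough mapping from horizontalAccuracy (a radius in metres; negative means
// the fix is invalid) to a human-readable quality level. The thresholds
// follow the answer above and are not official values.
func signalQuality(for location: CLLocation) -> String {
    switch location.horizontalAccuracy {
    case ..<0:
        return "No signal"   // invalid fix
    case ..<10:
        return "Excellent"
    case ..<48:
        return "Good"
    case ..<163:
        return "Fair"
    default:
        return "Poor"
    }
}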

Related

AudioKit AKMicrophone not outputting any data

I am trying to capture FFT data from a microphone. I managed to get this working before with a similar codebase, but since macOS Mojave it's been broken: the FFT data stays at 0.
Relevant Code:
var fft: AKFFTTap?
var inputDevice: AKDevice? {
    didSet {
        inputNode = nil
        updateAudioNode()
    }
}
var inputNode: AKNode? {
    didSet {
        if fft != nil {
            // According to the AKFFTTap class reference, it will always be on tap 0
            oldValue?.avAudioNode.removeTap(onBus: 0)
        }
        fft = inputNode.map { AKFFTTap($0) }
    }
}
[...]
guard let device = inputDevice else {
    inputNode = ViewController.shared.player.mixer
    return
}
do {
    try AudioKit.setInputDevice(device)
} catch {
    print("Error setting input device: \(error)")
    return
}
let microphoneNode = AKMicrophone()
do {
    try microphoneNode.setDevice(device)
} catch {
    print("Failed setting node input device: \(error)")
    return
}
microphoneNode.start()
microphoneNode.volume = 3
print("Switched Node: \(microphoneNode), started: \(microphoneNode.isStarted)")
inputNode = microphoneNode
try! AudioKit.start()
All the code is called and no errors are printed, but the FFT simply stays blank. With some reordering of the code I get varying errors.
A full version of the class, for completeness, is here.
Finally, I also tried implementing the examples from the playground one-to-one. Since Xcode playgrounds seem to crash with AudioKit, I tried them in my own codebase, but there's no difference there either. AKFrequencyTracker, for example, gets 0s for both amplitude and frequency.
I am not 100% positive about this, but I'd like you to try out AudioKit v4.5.1. We definitely fixed a bug in AKMicrophone, and that could have downstream consequences. I'll withdraw this answer and keep looking if that doesn't fix it. Let me know.
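One more thing worth ruling out, since the breakage coincided with Mojave: macOS 10.14 introduced per-app microphone privacy permission, and an app that never obtains (or is denied) mic access will typically capture pure silence, which shows up as all-zero FFT data. A minimal check, assuming NSMicrophoneUsageDescription is present in Info.plist:

import AVFoundation

// Ask for (or confirm) microphone access on macOS 10.14+. If this reports
// false, every capture node upstream will only ever see silence.
AVCaptureDevice.requestAccess(for: .audio) { granted in
    print("Microphone access granted: \(granted)")
}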

CIDetector face detection works in real time, but memory consumption increases linearly; how do I avoid this issue?

I have a question about how to call CIDetector correctly. I'm running face detection in real time, and it works very well. However, the app's memory consumption increases linearly over time, as you can see in the image below. I'm thinking this is due to objects being created but never released. Can anyone advise how to do this correctly?
I have pinpointed the issue to the function below: every time it is invoked, memory increases linearly, and when it terminates, memory quickly drops back down to almost 80 MB instead of the 11 GB it was climbing to. I also checked for memory leaks, but none were found.
My target platform is macOS. I'm trying to extract the mouth position from the CIDetector and then use it to compute a delta in a mouse function for a game.
I also looked at this post and tried their approach, but it did not work for me:
CIDetector isn't releasing memory
fileprivate func faceDetection() {
    dispatchQueue.async {
        // unwrap the current sample buffer, if there is one
        if let sample = self.sampleBuffers {
            // bail out if no features were detected in this sample
            guard let features = self.allFeatures(sample: sample) else { return }
            for feature in features {
                // only face features are of interest here
                if let faceFeature = feature as? CIFaceFeature {
                    if !self.hasEnded {
                        if self.calX.count > 30 {
                            // calibration finished: record the observed ranges
                            self.sens.append(self.calX.max()! - self.calX.min()!)
                            self.sens.append(self.calY.max()! - self.calY.min()!)
                            print(self.calX.max()! - self.calX.min()!)
                            self.hasEnded = true
                        } else {
                            // still calibrating: collect mouth positions
                            self.calX.append(faceFeature.mouthPosition.x)
                            self.calY.append(faceFeature.mouthPosition.y)
                        }
                    } else {
                        self.mouse(position: CGPoint(x: (faceFeature.mouthPosition.x - 300) * 2,
                                                     y: (faceFeature.mouthPosition.y + 20) * 2),
                                   faceFeature: faceFeature)
                    }
                }
            }
        }
        if !self.faceTrackingEnds {
            self.faceDetection()
        }
    }
}
This problem was caused by repeatedly calling the function without waiting for its completion. The fix was implementing a dispatch group and only calling the function again in its completion handler, like this. Now the CIDetector runs comfortably at about 200 MB of memory:
fileprivate func faceDetection() {
    let group = DispatchGroup()
    group.enter()
    dispatchQueue.async {
        // make sure the group is always left, even on the early guard return,
        // otherwise notify never fires and tracking stops
        defer { group.leave() }
        // unwrap the current sample buffer, if there is one
        if let sample = self.sampleBuffers {
            // bail out if no features were detected in this sample
            guard let features = self.allFeatures(sample: sample) else { return }
            for feature in features {
                // only face features are of interest here
                if let faceFeature = feature as? CIFaceFeature {
                    self.mouse(position: faceFeature.mouthPosition, faceFeature: faceFeature)
                }
            }
        }
    }
    group.notify(queue: .main) {
        if !self.faceTrackingEnds {
            self.faceDetection()
        }
    }
}
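As a side note, not from the original answer: Core Image allocates a lot of autoreleased intermediates per frame, so another pattern that often tames this kind of growth is wrapping each detection pass in autoreleasepool, so buffers are released after every iteration rather than whenever the queue drains. A minimal sketch on the same helpers:

fileprivate func faceDetection() {
    dispatchQueue.async {
        // drain autoreleased Core Image intermediates after every pass
        autoreleasepool {
            if let sample = self.sampleBuffers,
               let features = self.allFeatures(sample: sample) {
                for case let faceFeature as CIFaceFeature in features {
                    self.mouse(position: faceFeature.mouthPosition, faceFeature: faceFeature)
                }
            }
        }
        // recurse only after this pass has fully completed
        if !self.faceTrackingEnds {
            self.faceDetection()
        }
    }
}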

How to read byte array of AKMicrophone

My project currently uses AudioKit to handle the audio recording side. Today I am able to use AKAmplitudeTracker for the amplitude, and it's working fine.
I was trying to attach a tap to get a determined range of bytes to send to a server over UDP, but I don't know how to do it. Any ideas?
AKAudioFile.cleanTempDirectory()
AKSettings.bufferLength = .medium
AKSettings.defaultToSpeaker = true

mic = AKMicrophone()
guard let unwrappedMic: AKMicrophone = self.mic else {
    return
}
fft = AKFFTTap(unwrappedMic)

amplitudeTracker = AKAmplitudeTracker(unwrappedMic)
guard let unwrappedAmplitudeTracker: AKAmplitudeTracker = self.amplitudeTracker else {
    return
}

// Turn the volume all the way down on the output of the amplitude tracker
let noAudioOutput = AKMixer(unwrappedAmplitudeTracker)
noAudioOutput.volume = 0
AudioKit.output = noAudioOutput
AudioKit.start()
Could it be done by setting a buffer size limit so the data gets flushed, together with delegates?
Thank you.
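For what it's worth, here is a minimal sketch of one way to pull raw samples out, assuming AudioKit 4.x, where every AKNode wraps an AVAudioNode. Note that AVAudioNode allows only one tap per bus, so this would conflict with the AKFFTTap above (which also sits on bus 0); you'd tap a different node, such as the mixer, or drop the AKFFTTap. The sendOverUDP call is hypothetical:

// Install a tap on the underlying AVAudioNode and convert channel 0 of each
// AVAudioPCMBuffer into a Data payload of raw Float32 samples.
if let micNode = mic?.avAudioNode {
    micNode.installTap(onBus: 0, bufferSize: 1024, format: nil) { buffer, _ in
        guard let channelData = buffer.floatChannelData else { return }
        let frameCount = Int(buffer.frameLength)
        let samples = UnsafeBufferPointer(start: channelData[0], count: frameCount)
        let payload = Data(buffer: samples)
        // sendOverUDP(payload)   // hypothetical: your UDP send goes here
    }
}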

How to create files with different names using the least amount of processing power in Swift?

I am writing a video app that records video only when triggered and, at the end of all the recordings, merges them together into one.
I was just wondering if there is a process in Swift to make sure the name of the next recording's file is different from the previous one. I know ways of doing this that are fine, but I am a bit of a memory freak and was wondering if Swift has a built-in answer to this problem.
Edit:
This works with the variable filenamechanger. I was just wondering if there is an even better way.
var filenamechanger = 0
if motiondetected == true {
    do {
        let documentsDir = try FileManager.default.url(for: .documentDirectory,
                                                       in: .userDomainMask,
                                                       appropriateFor: nil,
                                                       create: true)
        filenamechanger += 1 // name changer
        let fileURL = URL(string: "test\(filenamechanger).mp4", relativeTo: documentsDir)!
        do {
            // remove any leftover file with the same name
            try FileManager.default.removeItem(at: fileURL)
        } catch {
            // ignore: the file may simply not exist yet
        }
        self.movieOutput = try MovieOutput(URL: fileURL, size: Size(width: 480, height: 640), liveVideo: true)
        self.camera.audioEncodingTarget = self.movieOutput
        self.camera --> self.movieOutput!
        self.movieOutput!.startRecording()
        sleep(3)
        self.camera.audioEncodingTarget = nil
        self.movieOutput = nil
        motiondetected = false
    } catch {
        print("recording didn't work")
    }
}
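If the goal is just a built-in way to get unique names with nothing extra to keep in memory, Foundation already has one: UUID. A minimal sketch (the "test-" prefix is arbitrary):

import Foundation

// UUID().uuidString is unique for practical purposes on every call, so no
// counter needs to be stored or incremented between recordings.
do {
    let documentsDir = try FileManager.default.url(for: .documentDirectory,
                                                   in: .userDomainMask,
                                                   appropriateFor: nil,
                                                   create: true)
    let fileURL = documentsDir.appendingPathComponent("test-\(UUID().uuidString).mp4")
    print(fileURL)
} catch {
    print("couldn't resolve the documents directory: \(error)")
}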

CoreMotion deviceMotionUpdateInterval ignored

I was playing around with deviceMotionUpdateInterval, and I can't really see any change when I set it; the handler is still called at the maximum rate. Am I doing something wrong? Is this a bug?
var counter = 0
if motionManager.accelerometerAvailable == true {
    self.motionManager.deviceMotionUpdateInterval = 1
    let handler: CMAccelerometerHandler = { (data: CMAccelerometerData?, error: NSError?) -> Void in
        counter++
        print(counter)
    }
    self.motionManager.startAccelerometerUpdatesToQueue(NSOperationQueue.currentQueue()!, withHandler: handler)
}
The problem is that you're setting the wrong property. The deviceMotionUpdateInterval is for device motion updates. That's not what you've asked for; you've asked for accelerometer updates. For accelerometer updates, you would want to set the accelerometerUpdateInterval.
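A minimal sketch of the fix, kept in the same Swift 2-era style as the question and reusing its handler (one update per second):

// accelerometerUpdateInterval is the property that throttles
// startAccelerometerUpdatesToQueue(_:withHandler:).
self.motionManager.accelerometerUpdateInterval = 1
self.motionManager.startAccelerometerUpdatesToQueue(NSOperationQueue.currentQueue()!, withHandler: handler)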