CoreMotion - Recognizing a jump (Swift)

I'm pretty new to CoreMotion, but I'm trying to build an app that uses CoreMotion to see if a user is jumping up and down (think skipping rope). It also needs to let the user hold their phone (no Apple Watch) however they want (landscape, portrait, some strange tilted way, etc.) and still be able to tell whether they are traveling up and down. I guess it's a bit like a "throw your phone" app, but without measuring the distance.
So I'm using userAcceleration and gravity to see which way is "down" by checking the values of the different gravity axes, and it works okay, but it feels like a clumsy way of going about it.
Is there a better way of doing what I'm doing? Basically, all the app currently needs is to be able to tell whether there is an acceleration perpendicular to the ground (regardless of how you're holding the phone).
import UIKit
import CoreMotion

class TestViewController: UIViewController {

    override func viewDidLoad() {
        super.viewDidLoad()
        startJump()
    }

    func startJump() {
        // CMMotionManager.shared is a singleton defined elsewhere in this project;
        // CoreMotion itself does not provide a shared instance.
        if CMMotionManager.shared.isDeviceMotionAvailable {
            CMMotionManager.shared.deviceMotionUpdateInterval = 0.01
            CMMotionManager.shared.startDeviceMotionUpdates(to: OperationQueue.current!) { (data, error) in
                guard let myData = data else {
                    return
                }
                // Absolute gravity and user acceleration on each device axis
                let gravX = abs(myData.gravity.x)
                let gravY = abs(myData.gravity.y)
                let gravZ = abs(myData.gravity.z)
                let userX = abs(myData.userAcceleration.x)
                let userY = abs(myData.userAcceleration.y)
                let userZ = abs(myData.userAcceleration.z)
                // Whichever axis carries most of gravity is treated as "down";
                // a large user acceleration on that same axis counts as a jump.
                if gravZ > 0.9 && userZ > 1 {
                    print("Jump; Z is up")
                }
                if gravX > 0.9 && userX > 1 {
                    print("Jump; X is up")
                }
                if gravY > 0.9 && userY > 1 {
                    print("Jump; Y is up")
                }
            }
        }
    }

    override func viewDidDisappear(_ animated: Bool) {
        super.viewDidDisappear(animated)
        CMMotionManager.shared.stopDeviceMotionUpdates()
    }
}
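For what it's worth, a less clumsy, fully orientation-independent alternative is to project userAcceleration onto the gravity vector with a dot product: gravity always points toward the ground in device coordinates and has a magnitude of about 1 g, so the projection gives the vertical component of the user's acceleration no matter how the phone is held. A minimal sketch (the threshold and update interval are illustrative, not tuned values):
import CoreMotion

let motionManager = CMMotionManager() // plain instance; no shared singleton assumed here

func startVerticalDetection() {
    guard motionManager.isDeviceMotionAvailable else { return }
    motionManager.deviceMotionUpdateInterval = 0.01
    motionManager.startDeviceMotionUpdates(to: .main) { data, _ in
        guard let motion = data else { return }
        let g = motion.gravity
        let a = motion.userAcceleration
        // Dot product of user acceleration with gravity: the acceleration
        // component along "down", regardless of device orientation.
        let vertical = a.x * g.x + a.y * g.y + a.z * g.z
        // Negative values mean accelerating away from the ground, positive toward it.
        if abs(vertical) > 1.0 { // illustrative threshold, in g
            print("Vertical acceleration spike: \(vertical)")
        }
    }
}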


AudioKit v5 - Why is the audible 'Release Duration' longer than the Amplitude Envelope's programmed Release Duration?

Using the AudioKit Cookbook as a reference, I am trying to build a simple application to test AudioKit features before building a larger application.
There are two buttons, two oscillators, an amplitude envelope for each oscillator, a fader for each envelope, and one mixer.
Using the "prepare_audio" function, I instantiate the objects, assign oscillator frequencies, assign envelope parameters, create the mixer, set the engine output, and start the engine.
Inside the two gesture recognizer handler functions, I open and close the gates of the amplitude envelopes when the gesture begins and ends, respectively.
When I run the application and press the buttons for the first time, there is an unwanted pause in the application, then the sound starts.
The release duration is programmed to be 1 second, but the sound remains audible for much longer, roughly 4 seconds.
Any help would be greatly appreciated - either on the extended release, or the unwanted pause. Thanks.
class ViewController {
    var engine = AudioEngine()
    var oscillator1: Oscillator?
    var oscillator2: Oscillator?
    var envelope1: AmplitudeEnvelope?
    var envelope2: AmplitudeEnvelope?
    var fader1: Fader?
    var fader2: Fader?
    var mixer: Mixer?
    var button = UIView()
    var button2 = UIView()
    ...
}
func prepare_audio() {
    self.oscillator1 = Oscillator()
    self.oscillator1?.frequency = 440
    self.oscillator1?.start()
    self.oscillator2 = Oscillator()
    self.oscillator2?.frequency = 550
    self.oscillator2?.start()
    self.envelope1 = AmplitudeEnvelope(self.oscillator1!)
    self.envelope1?.attackDuration = 0.25
    self.envelope1?.decayDuration = 0
    self.envelope1?.sustainLevel = 1
    self.envelope1?.releaseDuration = 1
    self.envelope2 = AmplitudeEnvelope(self.oscillator2!)
    self.envelope2?.attackDuration = 0.25
    self.envelope2?.decayDuration = 0
    self.envelope2?.sustainLevel = 1
    self.envelope2?.releaseDuration = 1
    self.fader1 = Fader(self.envelope1!)
    self.fader2 = Fader(self.envelope2!)
    let faders = [self.fader1!, self.fader2!]
    self.mixer = Mixer(faders)
    self.engine.output = self.mixer
    try? self.engine.start()
}

@objc func handlebutton1(gesture: UIGestureRecognizer) {
    if gesture.state == .began {
        self.envelope1?.openGate()
    }
    if gesture.state == .ended {
        self.envelope1?.closeGate()
    }
}

@objc func handlebutton2(gesture: UIGestureRecognizer) {
    if gesture.state == .began {
        self.envelope2?.openGate()
    }
    if gesture.state == .ended {
        self.envelope2?.closeGate()
    }
}
UPDATE: I was able to eliminate the unwanted pause by opening and then immediately closing the envelope gates right after starting the audio engine, effectively warming the envelopes up. See below:
...
try? self.engine.start()
self.envelope1?.openGate()
self.envelope2?.openGate()
self.envelope1?.closeGate()
self.envelope2?.closeGate()
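For completeness, the handlers above assume each button view has a long-press recognizer attached so that .began and .ended fire on touch-down and touch-up. A minimal wiring sketch (the setup location, e.g. viewDidLoad, is an assumption; this is not shown in the original post):
// Assumed wiring for the gesture handlers above.
let press1 = UILongPressGestureRecognizer(target: self, action: #selector(handlebutton1(gesture:)))
press1.minimumPressDuration = 0 // deliver .began immediately on touch-down
button.addGestureRecognizer(press1)

let press2 = UILongPressGestureRecognizer(target: self, action: #selector(handlebutton2(gesture:)))
press2.minimumPressDuration = 0
button2.addGestureRecognizer(press2)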

Slider doesn't update the value unless it's moved

I've been playing around with a basic betting app and I can't seem to figure out this problem.
The slider sets the wager as a percentage of the bank, but the wager only updates when the slider is moved. If the slider is left at its previous position, it keeps using the previous value.
For example:
My bank is 1000. I move the slider to 10% to bet 100. I win and now my bank is 1100.
I want to bet 10% again, but the wager doesn't update to the new value, which should be 110. It keeps the same value of 100 until the slider is moved. How can I fix it so that, even if the slider doesn't move, the wager is a true 10% of the bank?
@IBAction func slider(_ sender: UISlider)
{
    betAmount.text = String(format: "%.0f%%", sender.value)
    wager = Int(sender.value) * bank / 100
}

@IBAction func flipCoin(_ sender: UIButton)
{
    let coins = [1, 2]
    winnings = wager + ((wager * 90) / 100)
    if let coin = coins.randomElement()
    {
        if coin == 1 && wager <= bank && wager > 0
        {
            youBet.text = "$\(wager)"
            bank += winnings
            bankLabel.text = "$\(bank)"
        }
        else if coin == 2 && wager <= bank && wager > 0
        {
            youBet.text = "$\(wager)"
            bank -= wager
            bankLabel.text = "$\(bank)"
        }
    }
}
The easiest way is to do the following:
1. If you don't already have an IBOutlet for the UISlider, create one (e.g. called sliderOutlet).
2. Extract the contents of the slider func into a new private function called (e.g.) updateWager(), referencing sliderOutlet instead of sender.
3. Change the slider func to call your new function.
4. Also call this function at the end of flipCoin (i.e. after the bank has been updated).
This will make the wager calculation occur automatically when the flip is complete. You can also call the new function should any other event (now or in the future) update the bank.
I.e.:
@IBOutlet weak var sliderOutlet: UISlider! // Create this from the storyboard

func updateWager()
{
    betAmount.text = String(format: "%.0f%%", sliderOutlet.value)
    wager = Int(sliderOutlet.value) * bank / 100
}

@IBAction func slider(_ sender: UISlider)
{
    updateWager()
}

@IBAction func flipCoin(_ sender: UIButton)
{
    let coins = [1, 2]
    winnings = wager + ((wager * 90) / 100)
    if let coin = coins.randomElement()
    {
        if coin == 1 && wager <= bank && wager > 0
        {
            youBet.text = "$\(wager)"
            bank += winnings
            bankLabel.text = "$\(bank)"
        }
        else if coin == 2 && wager <= bank && wager > 0
        {
            youBet.text = "$\(wager)"
            bank -= wager
            bankLabel.text = "$\(bank)"
        }
        updateWager()
    }
}
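A variation on the same idea: since the wager is always derivable from the slider and the bank, it could be a computed property instead of stored state, so it can never go stale. A sketch, assuming bank is an Int:
// Sketch: derive the wager on demand instead of caching it.
var wager: Int {
    Int(sliderOutlet.value) * bank / 100
}
With this, flipCoin reads an up-to-date wager automatically; the label update can stay in the slider action.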

Observer is causing lag in AVFoundation captureOutput method

I have quite a specific problem, but hopefully someone can help me. I'm using AVFoundation to create a video camera with a live preview. I use AVCaptureVideoDataOutput to get individual frames and AVCaptureMetadataOutput to detect a face. I'm also using Dlib's facial landmarks predictor to show the landmark points on the user's face and to measure the interocular distance between their eyes. Finally, I'm using AVAssetWriter so that a video can be recorded.
The view controller has an ellipse shape on it so the user knows where to put their face. When the interocular distance is within a certain range, I want the ellipse to turn green so the user knows their face is in the right place.
At the minute I've achieved this by sending a notification from my SessionHandler class to the view controller. This works, however it's causing the video's frame rate to drop badly. I was getting 25 fps (manually set by me) and now it ranges between 8 and 16.
Is there another way to notify the view controller that the ellipse should be turned green?
Here's my code where the problem is occurring. I know there's a lot going on.
// MARK: AVCaptureVideoDataOutputSampleBufferDelegate, AVCaptureAudioDataOutputSampleBufferDelegate
func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
    if !currentMetadata.isEmpty {
        let boundsArray = currentMetadata
            .compactMap { $0 as? AVMetadataFaceObject }
            .map { (faceObject) -> NSValue in
                let convertedObject = output.transformedMetadataObject(for: faceObject, connection: connection)
                return NSValue(cgRect: convertedObject!.bounds)
            }
        if user.hasDlib {
            wrapper?.doWork(on: sampleBuffer, inRects: boundsArray)
            // Get the interocular distance so the face is in the correct place in the oval
            let interocularDistance = wrapper?.getEyeDistance()
            if user.hasInterocularDistance {
                if interocularDistance! < 240 || interocularDistance! > 315 {
                    let name = Notification.Name(rawValue: setRemoveGreenEllipse)
                    NotificationCenter.default.post(name: name, object: nil)
                    if videoRecorder.isRecording {
                        eyeDistanceCounter += 1
                        if eyeDistanceCounter == 30 {
                            cancelledByUser = false
                            cancelledByEyeDistance = true
                            videoRecorder.cancel()
                            eyeDistanceCounter = 0
                        }
                    }
                } else {
                    eyeDistanceCounter = 0
                    let name = Notification.Name(rawValue: setGreenEllipse)
                    NotificationCenter.default.post(name: name, object: nil)
                }
            }
        }
    } else {
        // Check whether a face is detected during recording. If it isn't, cancel the recording
        if videoRecorder.isRecording {
            noFaceCount += 1
            if noFaceCount == 50 {
                cancelledByUser = false
                videoRecorder.cancel()
                noFaceCount = 0
            }
        }
    }
    if layer.status == .failed {
        layer.flush()
    }
    layer.enqueue(sampleBuffer)
    let writable = videoRecorder.canWrite()
    if writable {
        if videoRecorder.sessionAtSourceTime == nil {
            // Start writing
            videoRecorder.sessionAtSourceTime = CMSampleBufferGetPresentationTimeStamp(sampleBuffer)
            videoRecorder.videoWriter.startSession(atSourceTime: videoRecorder.sessionAtSourceTime!)
            print("video session started")
        }
        if videoRecorder.videoWriterInput.isReadyForMoreMediaData {
            // Write the video buffer
            videoRecorder.videoWriterInput.append(sampleBuffer)
        }
    }
}
You could post the notification once every 30 frames, for example, instead of on every frame.
You could also call the color-changing function directly if it's in the same view controller. If not, you could define a delegate method and call it directly, as opposed to sending notifications; a sketch follows.
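A minimal sketch of the delegate approach (the protocol and property names are illustrative, not from the original project). It also only fires when the in-range state actually changes, and hops to the main queue for the UI work:
// Illustrative delegate protocol; names are assumptions.
protocol FaceDistanceDelegate: AnyObject {
    func faceDistanceDidChange(isInRange: Bool)
}

// In SessionHandler: a weak delegate plus the last reported state.
weak var faceDelegate: FaceDistanceDelegate?
private var lastInRange: Bool?

// Inside captureOutput, in place of the NotificationCenter posts:
let inRange = !(interocularDistance! < 240 || interocularDistance! > 315)
if inRange != lastInRange { // only notify on a state change
    lastInRange = inRange
    DispatchQueue.main.async { [weak self] in
        self?.faceDelegate?.faceDistanceDidChange(isInRange: inRange)
    }
}
The view controller conforms to the protocol, assigns itself as faceDelegate, and recolors the ellipse in faceDistanceDidChange. Reporting only on state changes removes nearly all of the per-frame overhead, whichever mechanism you keep.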

Detect object proximity to walls

I am trying to write something really simple, using SpriteKit in this instance.
The way I did it on other platforms is by having an invisible child "stick" protruding a bit. By detecting collisions between the invisible "sticks" and the walls, I can tell whether the object is close to a wall or not.
I am trying to replicate the same thing using SpriteKit. Of course, I'd prefer an invisible "beam" coming out of the object that gives me a distance - but that's probably too much hassle.
I'd appreciate any ways to improve on the silly project I have so far.
My Project so far
Thanks.
Here is what I came up with that doesn't involve physics...
Drag the mouse to move the car, and the label in the center updates to tell you the distance to the closest wall. Release the mouse to reset the car.
Very simple example, can be updated to give more accurate measurements.
class GameScene: SKScene {

    let car = SKSpriteNode(color: .blue, size: CGSize(width: 50, height: 100))
    let label = SKLabelNode(text: "")

    // Assumes the scene's anchorPoint is (0.5, 0.5), so the origin is at the center
    func findNearestWall() -> CGFloat {
        // You can make this more advanced by using CGPoints on the car's frame
        // borders, or even 6+ points, for more accuracy
        let closestX: CGFloat = {
            if car.position.x < 0 { // left wall
                return abs(frame.minX - car.position.x)
            } else { // right wall
                return abs(frame.maxX - car.position.x)
            }
        }()
        let closestY: CGFloat = {
            if car.position.y < 0 { // bottom wall
                return abs(frame.minY - car.position.y)
            } else { // top wall
                return abs(frame.maxY - car.position.y)
            }
        }()
        if closestX < closestY {
            return closestX.rounded() // distance to the closest wall
        } else {
            return closestY.rounded() // distance to the closest wall
        }
    }

    override func didMove(to view: SKView) {
        removeAllChildren()
        label.fontSize *= 2
        addChild(car)
        addChild(label)
    }

    override func mouseDown(with event: NSEvent) {
    }

    override func mouseDragged(with event: NSEvent) {
        let location = event.location(in: self)
        car.position = location
    }

    override func mouseUp(with event: NSEvent) {
        car.position = CGPoint.zero
    }

    override func didEvaluateActions() {
        label.text = String(describing: findNearestWall())
    }
}
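For the "invisible beam" the question mentions, SpriteKit can actually do this directly: SKPhysicsWorld can cast a ray and report the bodies it crosses. A sketch, assuming the walls carry physics bodies (e.g. an edge loop created with SKPhysicsBody(edgeLoopFrom: frame)); note the answer above deliberately avoids physics:
import SpriteKit

// Sketch: measure the distance from a point to the nearest physics body
// hit along a ray in the given direction.
func distanceToWall(from start: CGPoint, direction: CGVector, in scene: SKScene) -> CGFloat? {
    // End the ray well outside the scene along the chosen direction
    let reach: CGFloat = 10_000
    let end = CGPoint(x: start.x + direction.dx * reach, y: start.y + direction.dy * reach)
    var nearest: CGFloat?
    scene.physicsWorld.enumerateBodies(alongRayStart: start, end: end) { _, point, _, _ in
        let d = hypot(point.x - start.x, point.y - start.y)
        if nearest == nil || d < nearest! {
            nearest = d // hit order isn't guaranteed, so keep the minimum
        }
    }
    return nearest
}

// Usage, e.g.: distanceToWall(from: car.position, direction: CGVector(dx: 1, dy: 0), in: self)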

How to use this class from another

I have a question about using another class in my main GameplayScene. What I am trying to do is make motion along the phone's X-axis move the character left and right. Here is what I have in my MotionClass.swift:
import SpriteKit
import CoreMotion

class MotionClass: SKScene {

    var player: Player?
    var motionManager = CMMotionManager()
    var destX: CGFloat = 0.0

    override func sceneDidLoad() {
        motionManager.accelerometerUpdateInterval = 0.2
        motionManager.startAccelerometerUpdates(to: OperationQueue.current!) { (data, error) in
            if let myData = data {
                let currentX = self.player?.position.x
                if myData.acceleration.x > 0.2 {
                    self.destX = currentX! + CGFloat(myData.acceleration.x * 100)
                    print("Tilted Right")
                } else if myData.acceleration.x < -0.2 {
                    self.destX = currentX! + CGFloat(myData.acceleration.x * 100)
                    print("Tilted Left")
                }
            }
        }
    }

    override func update(_ currentTime: TimeInterval) {
        let action = SKAction.moveTo(x: destX, duration: 1)
        self.player?.run(action)
    }
}
Now I'm trying to call this class in my GameplayScene.swift in the motionBegan function, but I don't know how to go about doing that. I have the variable 'grapple' declared as MotionClass?, but I don't know where to go from there. Could anyone give a good example of how to do this?
I think you may be confused about the purpose of an SKScene subclass, which is what your MotionClass currently is. The main idea is to use only one SKScene at a time: if you need stuff from MotionClass, then you should just make it a plain class, not an SKScene subclass.
I think you may also need to familiarize yourself a bit more with OOP... Outside of static properties / functions, you don't "call" a class, you instantiate it (you make an object :] )
So, if you have goodies in MotionClass that you want to access in GamePlayClass, you need a reference to a MotionClass object.
This can be done with a simple global variable... I suggest putting it into your GameViewController.swift:
// Here is a global reference to an 'empty' motion class object..
var global_motionClassObject = MotionClass()

class GameViewController: UIViewController {

    override func viewDidLoad() {
        super.viewDidLoad()
        // ...
        if let view = self.view as? SKView {
            let scene = MotionClass(size: view.frame.size)
            // Assign our global to the new scene just made:
            global_motionClassObject = scene
            scene.scaleMode = .aspectFit
            view.presentScene(scene)
        }
        // ...
    }
}
Now inside of your GamePlayClass, or wherever else, you can access the MotionClass instance through global_motionClassObject.
However, this may not give the desired results, because I'm worried you may need to restructure your MotionClass into something other than an SKScene :) A sketch of that restructuring follows.
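To illustrate, here is a minimal sketch of the motion code as a plain helper class that owns the CMMotionManager, with the gameplay scene polling it each frame. The names and the integration details are assumptions, not the original project's API:
import CoreMotion
import SpriteKit

// Sketch: motion handling as a plain class rather than an SKScene subclass.
class MotionHelper {
    private let motionManager = CMMotionManager()
    private(set) var offsetX: CGFloat = 0.0 // horizontal offset driven by tilt

    func start() {
        guard motionManager.isAccelerometerAvailable else { return }
        motionManager.accelerometerUpdateInterval = 0.2
        motionManager.startAccelerometerUpdates(to: .main) { [weak self] data, _ in
            guard let x = data?.acceleration.x, abs(x) > 0.2 else { return }
            self?.offsetX = CGFloat(x * 100)
        }
    }

    func stop() {
        motionManager.stopAccelerometerUpdates()
    }
}

// The gameplay scene then owns a MotionHelper and applies its offset per frame:
//   let motion = MotionHelper() // call motion.start() in sceneDidLoad
//   override func update(_ currentTime: TimeInterval) {
//       guard let player = player else { return }
//       player.run(SKAction.moveTo(x: player.position.x + motion.offsetX, duration: 1))
//   }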