TERMINATION: "Stack is not empty at destruction" error in Swift

I am using Google Mobile Vision to detect face landmarks and the distances between them. The weird thing about this error is that it is inconsistent and does not show up every time I run the same function.
Here is the error message...
TERMINATION
ert_Error ebs_ObjectStack::~ebs_ObjectStack():
Stack is not empty at destruction.
This can be an indiaction for a stack leak.
Please check the code where this instance was used.
libc++abi.dylib: terminating with uncaught exception of type ert_Error
(lldb)
Usually the first run of the app in 24 hours goes smoothly, and occasionally later runs do too, but for the most part the error appears. As I said earlier, I am using Google Mobile Vision to detect faces and store certain values.
Here is the function that is causing the error...
func train2(image2: UIImage) {
    print("TRAIN2 has been called")
    // let options1 = [GMVDetectorFaceLandmarkType: GMVDetectorFaceLandmark.all.rawValue, GMVDetectorFaceClassificationType: GMVDetectorFaceClassification.all.rawValue, GMVDetectorFaceTrackingEnabled: true, GMVDetectorFaceMinSize: 0.09] as [String: Any]
    let deviceOrientation = UIDevice.current.orientation
    var devicePosition = AVCaptureDevice.Position.back // .Position(rawValue: (currentCameraPosition?.hashValue)!)
    if CameraController().currentCameraPosition == .rear {
        devicePosition = .back
    } else if CameraController().currentCameraPosition == .front {
        devicePosition = .front
    }
    let lastKnownDeviceOrientation = UIDeviceOrientation(rawValue: 0)
    let orientation = GMVUtility.imageOrientation(from: deviceOrientation, with: devicePosition, defaultDeviceOrientation: lastKnownDeviceOrientation!)
    let options3 = [GMVDetectorImageOrientation: orientation.rawValue]
    // var sampbuff = cvPixelBufferRef(from: image2)
    // var AImage = GMVUtility.sampleBufferTo32RGBA(sampbuff as! CMSampleBuffer)
    let happyFaces = GfaceDetector.features(in: image2, options: options3) as! [GMVFaceFeature] // THE ERROR SEEMS TO ORIGINATE FROM THIS LINE
    print("The Amount Of Happy Faces is: \(happyFaces.count)")
    for face: GMVFaceFeature in happyFaces {
        print(face.smilingProbability)
        print("Go Google")
        let YAngle = Float(face.headEulerAngleY)
        let ZAngle = Float(face.headEulerAngleZ)
        print("Y Angle \(YAngle), Z Angle \(ZAngle)")
        if YAngle > -2.0 && YAngle < 4.0 && ZAngle > -2.0 && ZAngle < 2.0 {
            let proportion = ((face.bounds.width * face.bounds.width) + (face.bounds.height * face.bounds.height))
            let ratio = sqrtf(Float(proportion))
            DBetweenEyes = (Float(distanceBetween(face.leftEyePosition, face.rightEyePosition)) * 1000) / ratio
            DBetweenLEyeAndLEar = (Float(distanceBetween(face.leftEyePosition, face.leftEarPosition)) * 1000) / ratio
            DBetweenREyeAndREar = (Float(distanceBetween(face.rightEyePosition, face.rightEarPosition)) * 1000) / ratio
            DBetweenLEarAndREar = (Float(distanceBetween(face.leftEarPosition, face.rightEarPosition)) * 1000) / ratio
            DBetweenLEyeAndNose = (Float(distanceBetween(face.leftEyePosition, face.noseBasePosition)) * 1000) / ratio
            DBetweenREyeAndNose = (Float(distanceBetween(face.rightEyePosition, face.noseBasePosition)) * 1000) / ratio
            DBetweenLEyeAndMouth = (Float(distanceBetween(face.leftEyePosition, face.bottomMouthPosition)) * 1000) / ratio
            DBetweenREyeAndMouth = (Float(distanceBetween(face.rightEyePosition, face.bottomMouthPosition)) * 1000) / ratio
            DBetweenLEarAndMouth = (Float(distanceBetween(face.leftEarPosition, face.bottomMouthPosition)) * 1000) / ratio
            DBetweenREarAndMouth = (Float(distanceBetween(face.rightEarPosition, face.bottomMouthPosition)) * 1000) / ratio
            print("Distances Established")
        }
    }
}
I have just discovered that the error is very rare when the function is called from a button press after the view loads, but it occurs far more frequently when called in the viewDidLoad method. Could this indicate some sort of pattern? How would I fix this error?
Any help or suggestions are appreciated.
Thanks in advance!
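Given that pattern, one hedged guess is that train2 runs before the view (and the camera/detector setup behind it) has finished initializing, and the button-press path rarely crashes simply because everything is ready by then. A sketch of a possible mitigation, deferring the call until the view is actually on screen; FaceViewController and capturedImage are hypothetical names, and train2(image2:) is assumed to be a method on this controller:

```swift
import UIKit

class FaceViewController: UIViewController {
    var capturedImage: UIImage? // hypothetical source of the image to analyze

    override func viewDidAppear(_ animated: Bool) {
        super.viewDidAppear(animated)
        // Defer detection until the view hierarchy is fully set up,
        // mirroring the button-press timing that rarely crashes.
        if let image = capturedImage {
            train2(image2: image)
        }
    }
}
```

If the crash disappears with this change, it would support the theory that the detector was being invoked too early rather than being leaked.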

Related

Metal Command Buffer Internal Error: What is Internal Error (IOAF code 2067)?

Attempting to run a compute kernel results in the following message:
Execution of the command buffer was aborted due to an error during execution. Internal Error (IOAF code 2067)
To get more specific information I query the command buffer error's user info and manage to extract more details. I followed the instructions from this video to yield the following message:
[Metal Diagnostics] __message__: MTLCommandBuffer execution failed: The commands
associated with the encoder were affected by an error, which may or may not have been
caused by the commands themselves, and failed to execute in full __:::__
__delegate_identifier__: GPUToolsDiagnostics
The breakpoint triggered by API Validation and Shader Validation yields a recorded stack frame, not a GPU backtrace. The breakpoint does not indicate any new information apart from the message above.
I cannot find any reference to the mentioned IOAF code in documentation. The additional information printed reveals nothing of assistance. The kernel is quite divergent and I am speculating that may be causing the GPU to take too much time to complete. That may be to blame but I have nothing supporting this apart from a gut feeling.
Here is the thread setup for the group:
let threadExecutionWidth = pipeline.threadExecutionWidth
let threadgroupsPerGrid = MTLSize(width: (Int(pixelCount) + threadExecutionWidth - 1) / threadExecutionWidth, height: 1, depth: 1)
let threadsPerThreadgroup = MTLSize(width: threadExecutionWidth, height: 1, depth: 1)
commandEncoder.dispatchThreadgroups(threadgroupsPerGrid, threadsPerThreadgroup: threadsPerThreadgroup)
The GPU commands are being committed and waited upon for completion:
commandEncoder.endEncoding()
commandBuffer.commit()
commandBuffer.waitUntilCompleted()
Here is my application-side code in its entirety:
import Metal
import Foundation
import simd
typealias Float4 = SIMD4<Float>
struct SimpleFileWriter {
    var fileHandle: FileHandle

    init(filePath: String, append: Bool = false) {
        if !FileManager.default.fileExists(atPath: filePath) {
            FileManager.default.createFile(atPath: filePath, contents: nil, attributes: nil)
        }
        fileHandle = FileHandle(forWritingAtPath: filePath)!
        if !append {
            fileHandle.truncateFile(atOffset: 0)
        }
    }

    func write(content: String) {
        fileHandle.seekToEndOfFile()
        guard let data = content.data(using: String.Encoding.ascii) else {
            fatalError("Could not convert \(content) to ascii data!")
        }
        fileHandle.write(data)
    }
}
var imageWidth = 480
var imageHeight = 270
var sampleCount = 16
var bounceCount = 3
let device = MTLCreateSystemDefaultDevice()!
let library = try! device.makeDefaultLibrary(bundle: Bundle.module)
let primaryRayFunc = library.makeFunction(name: "ray_trace")!
let pipeline = try! device.makeComputePipelineState(function: primaryRayFunc)
var pixelData: [Float4] = (0..<(imageWidth * imageHeight)).map{ _ in Float4(0, 0, 0, 0)}
var pixelCount = UInt(pixelData.count)
let pixelDataBuffer = device.makeBuffer(bytes: &pixelData, length: Int(pixelCount) * MemoryLayout<Float4>.stride, options: [])!
let pixelDataMirrorPointer = pixelDataBuffer.contents().bindMemory(to: Float4.self, capacity: Int(pixelCount))
let pixelDataMirrorBuffer = UnsafeBufferPointer(start: pixelDataMirrorPointer, count: Int(pixelCount))
let commandQueue = device.makeCommandQueue()!
let commandBufferDescriptor = MTLCommandBufferDescriptor()
commandBufferDescriptor.errorOptions = MTLCommandBufferErrorOption.encoderExecutionStatus
let commandBuffer = commandQueue.makeCommandBuffer(descriptor: commandBufferDescriptor)!
let commandEncoder = commandBuffer.makeComputeCommandEncoder()!
commandEncoder.setComputePipelineState(pipeline)
commandEncoder.setBuffer(pixelDataBuffer, offset: 0, index: 0)
commandEncoder.setBytes(&pixelCount, length: MemoryLayout<Int>.stride, index: 1)
commandEncoder.setBytes(&imageWidth, length: MemoryLayout<Int>.stride, index: 2)
commandEncoder.setBytes(&imageHeight, length: MemoryLayout<Int>.stride, index: 3)
commandEncoder.setBytes(&sampleCount, length: MemoryLayout<Int>.stride, index: 4)
commandEncoder.setBytes(&bounceCount, length: MemoryLayout<Int>.stride, index: 5)
// We have to calculate the sum `pixelCount` times
// => amount of threadgroups is `resultsCount` / `threadExecutionWidth` (rounded up)
// because each threadgroup will process `threadExecutionWidth` threads
let threadExecutionWidth = pipeline.threadExecutionWidth;
let threadgroupsPerGrid = MTLSize(width: (Int(pixelCount) + threadExecutionWidth - 1) / threadExecutionWidth, height: 1, depth: 1)
// Here we set that each threadgroup should process `threadExecutionWidth` threads
// the only important thing for performance is that this number is a multiple of
// `threadExecutionWidth` (here 1 times)
let threadsPerThreadgroup = MTLSize(width: threadExecutionWidth, height: 1, depth: 1)
commandEncoder.dispatchThreadgroups(threadgroupsPerGrid, threadsPerThreadgroup: threadsPerThreadgroup)
commandEncoder.endEncoding()
commandBuffer.commit()
commandBuffer.waitUntilCompleted()
if let error = commandBuffer.error as NSError? {
    if let encoderInfo = error.userInfo[MTLCommandBufferEncoderInfoErrorKey] as? [MTLCommandBufferEncoderInfo] {
        for info in encoderInfo {
            print(info.label + info.debugSignposts.joined())
        }
    }
}
let sfw = SimpleFileWriter(filePath: "/Users/pprovins/Desktop/render.ppm")
sfw.write(content: "P3\n")
sfw.write(content: "\(imageWidth) \(imageHeight)\n")
sfw.write(content: "255\n")
for pixel in pixelDataMirrorBuffer {
    sfw.write(content: "\(UInt8(pixel.x * 255)) \(UInt8(pixel.y * 255)) \(UInt8(pixel.z * 255)) ")
}
sfw.write(content: "\n")
Additionally, here is the shader being run. I have not included all function definitions for brevity's sake:
kernel void ray_trace(device float4 *result [[ buffer(0) ]],
                      const device uint& dataLength [[ buffer(1) ]],
                      const device int& imageWidth [[ buffer(2) ]],
                      const device int& imageHeight [[ buffer(3) ]],
                      const device int& samplesPerPixel [[ buffer(4) ]],
                      const device int& rayBounces [[ buffer(5) ]],
                      const uint index [[thread_position_in_grid]]) {
    if (index >= dataLength) {
        return;
    }
    const float3 origin = float3(0.0);
    const float aspect = float(imageWidth) / float(imageHeight);
    const float3 vph = float3(0.0, 2.0, 0.0);
    const float3 vpw = float3(2.0 * aspect, 0.0, 0.0);
    const float3 llc = float3(-(vph / 2.0) - (vpw / 2.0) - float3(0.0, 0.0, 1.0));
    float3 accumulatedColor = float3(0.0);
    thread float seed = getSeed(index, index % imageWidth, index / imageWidth);
    float row = float(index / imageWidth);
    float col = float(index % imageWidth);
    for (int aai = 0; aai < samplesPerPixel; ++aai) {
        float ranX = fract(rand(seed));
        float ranY = fract(rand(seed));
        float u = (col + ranX) / float(imageWidth - 1);
        float v = 1.0 - (row + ranY) / float(imageHeight - 1);
        Ray r(origin, llc + u * vpw + v * vph - origin);
        float3 color = float3(0.0);
        HitRecord hr = {0.0, 0.0, false};
        float attenuation = 1.0;
        for (int bounceIndex = 0; bounceIndex < rayBounces; ++bounceIndex) {
            testForHit(sceneDistance, r, hr);
            if (hr.h) {
                float3 target = hr.p + hr.n + random_f3_in_unit_sphere(seed);
                attenuation *= 0.5;
                r = Ray(hr.p, target - hr.p);
            } else {
                color = default_atmosphere_color(r) * attenuation;
                break;
            }
        }
        accumulatedColor += color / samplesPerPixel;
    }
    result[index] = float4(sqrt(accumulatedColor), 1.0);
}
Oddly enough, it occasionally does run. Setting the number of samples to 16 or above always results in the mentioned IOAF code; with fewer than 16 samples, the code runs roughly 25% of the time. The more samples, the more likely the error code becomes.
Is there any way to get additional information on IOAF code 2067?
Determining the error code with Metal API + Shader Validation was not possible.
By testing individual portions of the kernel, the particular error was narrowed down to a while loop that caused the GPU to hang.
The problem can essentially be boiled down to code that looks like:
while (true) {
    // ad infinitum
}
or, in the case of the code above in the call to random_f3_in_unit_sphere(seed):
while (randNum(seed) < threshold) {
    // The while loop is not "bounded"
    // in any sense. Whoops.
    ++seed;
}
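A common fix is to give such rejection-sampling loops a hard iteration cap, so a degenerate random sequence cannot hang the GPU. Here is a minimal sketch of the pattern in plain C++ (MSL is C++-based, so it transfers almost directly); the function name, the RNG callback, and the fallback value are illustrative, not from the original shader:

```cpp
#include <array>
#include <functional>

using float3 = std::array<float, 3>;

// Rejection sampling with an explicit iteration cap: instead of looping
// until a candidate lands inside the unit sphere, give up after a fixed
// number of attempts and return a safe fallback value.
float3 random_in_unit_sphere_bounded(const std::function<float3()>& next,
                                     int max_attempts = 16) {
    for (int i = 0; i < max_attempts; ++i) {
        float3 c = next();
        // Map the candidate from [0, 1)^3 into the [-1, 1)^3 cube.
        float3 p = {2 * c[0] - 1, 2 * c[1] - 1, 2 * c[2] - 1};
        float len2 = p[0] * p[0] + p[1] * p[1] + p[2] * p[2];
        if (len2 < 1.0f) {
            return p; // accept: strictly inside the sphere
        }
    }
    // Bounded fallback instead of an unbounded loop.
    return {0.0f, 1.0f, 0.0f};
}
```

In the kernel above, the same cap would bound the loop inside random_f3_in_unit_sphere, trading a tiny amount of sampling bias for a guaranteed upper bound on execution time.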

Error with Dispatch Queue in Swift

I am trying to create a genetic algorithm for running race cars around a race track. Each car gets random instructions that apply a force to the car and rotate it by a certain number of degrees. In order to space out the new instructions given to each car, I schedule them on a dispatch queue with a deadline that adds 0.2 seconds to the previous instruction's deadline.
e.g.
0 seconds - first instruction
0.2 seconds - second instruction
0.4 seconds -third instruction
and so on...
The problem I have is that after several instructions have been carried out I start to notice a longer delay between instructions, to the point where a new instruction is applied after say 2 seconds.
Here is my code below.
func carAction(newCar: [[CGFloat]], racecar: SKSpriteNode) {
    var caralive = true
    let max = 1000
    var count = 0
    let delayTime = 200000000
    var deadlineTime = DispatchTime.now()
    while count < max {
        let angleChange = newCar[count][1]
        let speedChange = newCar[count][0]
        count += 1
        deadlineTime = deadlineTime + .nanoseconds(delayTime)
        DispatchQueue.main.asyncAfter(deadline: deadlineTime) {
            if caralive == true {
                print(DispatchQueue.main)
                racecar.physicsBody?.angularVelocity = 0
                let rotate = SKAction.rotate(byAngle: (angleChange * .pi / 180), duration: 0.2)
                racecar.run(rotate)
                let racecarRotation: CGFloat = racecar.zRotation
                let calcRotation: Float = Float(racecarRotation) + Float.pi / 2
                let Vx = speedChange * CGFloat(cosf(calcRotation))
                let Vy = speedChange * CGFloat(sinf(calcRotation))
                let force = SKAction.applyForce(CGVector(dx: Vx, dy: Vy), duration: 0.2)
                racecar.run(force)
                let total = self.outerTrack.count
                var initial = 0
                while initial < total {
                    if racecar.intersects(self.outerTrack[initial]) {
                        racecar.removeFromParent()
                        self.numberOfCars -= 1
                        initial += 1
                        caralive = false
                        break
                    } else {
                        initial += 1
                    }
                }
            } else {
                // print(self.numberOfCars)
            }
        }
    }
}
The 2D array newCar is a list of all the instructions.
Any help would be massively appreciated as I have been trying to figure this out for ages now!!
Many thanks in advance, any questions just feel free to ask!
You should do something like this instead:
func scheduledTimerWithTimeInterval() {
    // Schedule a timer to call "moveCarsFunction" with an interval of 1 second
    timer = Timer.scheduledTimer(timeInterval: 1, target: self, selector: #selector(moveCarsFunction), userInfo: nil, repeats: true)
}
And call scheduledTimerWithTimeInterval() once.
originally answered here

Have Leaflet panTo not center

I have a trail on a map that the user can "follow" by mousing over a graph (time and speed). If the user zooms in a lot, part of the trail may not be visible. When the user wants to see the part of the trail that is not showing I use the panTo method...
The panTo method of leaflet is currently also centering. I don't want to center, I want the map to move just enough to show a point. (The problem with panTo is it causes excessive map scrolling and a harsh user experience.)
I have tried changing the bounds, but that has an (unwanted) side effect of sometimes zooming out.
Any way I can do a "minimal" panTo?
This is a (working but unpolished) solution; map is our own map wrapper utility class, lmap is a Leaflet map object in TypeScript, and toxy() is a method to convert lat/lngs to x/y values.
if (!this.lmap.getBounds().contains(latlng)) {
    let target = this.map.toxy(latlng);
    let nw = this.map.toxy(this.lmap.getBounds().getNorthWest());
    let se = this.map.toxy(this.lmap.getBounds().getSouthEast());
    let x = 0, y = 0;
    let margin = 75;
    if (target.y < nw.y)
        y = (-1 * (nw.y - target.y)) - margin;
    else if (target.y > se.y)
        y = (target.y - se.y) + margin;
    if (target.x < nw.x)
        x = (-1 * (nw.x - target.x)) - margin;
    else if (target.x > se.x)
        x = (target.x - se.x) + margin;
    this.lmap.panBy(new L.Point(x, y));
}
First, fetch the bounds of the map (measured in pixels from the CRS origin) with map.getPixelBounds(). Then, use map.project(latlng, map.getZoom()) to get the coordinates (in pixels from the CRS origin) of the point you're interested in.
If you're confused about this "pixels from the CRS origin" thing, read the "Pixel Origin" section at http://leafletjs.com/examples/extending/extending-2-layers.html .
Once you have these pixel coordinates, it should be a simple matter of checking whether the point is inside the viewport, and if not, how far away on each direction it is.
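That check can be sketched as a small pure function, independent of Leaflet itself; the bounds/point shapes are assumed to mirror what getPixelBounds() and project() return, and the result is the offset you would hand to panBy:

```javascript
// Given the viewport's pixel bounds and a target point (both in pixels
// from the CRS origin), compute the smallest pan offset that brings the
// point into view, with an optional margin. Returns {dx, dy} suitable
// for something like map.panBy([dx, dy]).
function minimalPanOffset(bounds, point, margin = 0) {
  let dx = 0;
  let dy = 0;
  if (point.x < bounds.min.x + margin) {
    dx = point.x - (bounds.min.x + margin); // negative: pan left
  } else if (point.x > bounds.max.x - margin) {
    dx = point.x - (bounds.max.x - margin); // positive: pan right
  }
  if (point.y < bounds.min.y + margin) {
    dy = point.y - (bounds.min.y + margin);
  } else if (point.y > bounds.max.y - margin) {
    dy = point.y - (bounds.max.y - margin);
  }
  return { dx, dy };
}
```

A point already inside the (padded) bounds yields {dx: 0, dy: 0}, so the pan becomes a no-op in that case.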
http://jsfiddle.net/jcollin6/b131tobj/2/
Should give you what you want
function clamp(n, lower, upper) {
    return Math.max(lower, Math.min(upper, n));
}

// Shamelessly stolen from https://gist.github.com/dwtkns/d5b9b60285b8b0067c53
function getNearestPointInPerimeter(l, t, w, h, x, y) {
    var r = l + w,
        b = t + h;
    var x = clamp(x, l, r),
        y = clamp(y, t, b);
    var dl = Math.abs(x - l),
        dr = Math.abs(x - r),
        dt = Math.abs(y - t),
        db = Math.abs(y - b);
    var m = Math.min(dl, dr, dt, db);
    return (m === dt) ? {x: x, y: t} :
           (m === db) ? {x: x, y: b} :
           (m === dl) ? {x: l, y: y} : {x: r, y: y};
}
L.ExtendedMap = L.Map.extend({
    panInside: function (latlng, pad, options) {
        var padding = pad ? pad : 0;
        var pixelBounds = this.getPixelBounds();
        var center = this.getCenter();
        var pixelCenter = this.project(center);
        var pixelPoint = this.project(latlng);
        var sw = pixelBounds.getBottomLeft();
        var ne = pixelBounds.getTopRight();
        var topLeftPoint = L.point(sw.x + padding, ne.y + padding);
        var bottomRightPoint = L.point(ne.x - padding, sw.y - padding);
        var paddedBounds = L.bounds(topLeftPoint, bottomRightPoint);
        if (!paddedBounds.contains(pixelPoint)) {
            this._enforcingBounds = true;
            var nearestPoint = getNearestPointInPerimeter(
                sw.x + padding,
                ne.y + padding,
                ne.x - sw.x - padding * 2,
                sw.y - ne.y - padding * 2,
                pixelPoint.x,
                pixelPoint.y
            );
            var nearestPixelPoint = L.point(nearestPoint.x, nearestPoint.y);
            var diffPixelPoint = nearestPixelPoint.subtract(pixelPoint);
            var newPixelCenter = pixelCenter.subtract(diffPixelPoint);
            var newCenter = this.unproject(newPixelCenter);
            if (!center.equals(newCenter)) {
                this.panTo(newCenter, options);
            }
            this._enforcingBounds = false;
        }
        return this;
    }
});
Use it this way:
map.panTo([lat, lng]);
map.setZoom(Zoom);

How can I plot data from a Swift sandbox?

I am practicing with Swift 3.x and I need to plot some data. The problem is that I only really have IBM's online Swift Sandbox to work with. The purpose of the plotting is to understand how single-precision code is affected by summations.
I wrote some code to do this, but now I have no clue how to plot this. I doubt Swift can somehow bring up a window for plotting, let alone do so when run through the online sandbox.
Side note: I might be able to VNC into a Mac computer at my university to use Xcode. If I paste the same code into an Xcode project, could it make plots?
Here is the code in case you want to see it. I now need to run this code for N = 1 to N = 1,000,000.
import Foundation
func sum1(N: Int) -> Float {
    var sum1_sum: Float = 0.0
    var n_double: Double = 0.0
    for n in 1...(2*N) {
        n_double = Double(n)
        sum1_sum += Float(pow(-1.0, n_double) * (n_double/(n_double + 1.0)))
    }
    return sum1_sum
}
func sum2(N: Int) -> Float {
    var sum2_sum: Float = 0.0
    var n_double: Double = 0.0
    var sum2_firstsum: Float = 0.0
    var sum2_secondsum: Float = 0.0
    for n in 1...N {
        n_double = Double(n)
        sum2_firstsum += Float((2.0*n_double - 1)/(2.0*n_double))
        sum2_secondsum += Float((2.0*n_double)/(2.0*n_double + 1))
    }
    sum2_sum = sum2_secondsum - sum2_firstsum // This is where the subtractive cancellation occurs
    return sum2_sum
}
func sum3(N: Int) -> Float {
    var sum3_sum: Float = 0.0
    var n_double: Double = 0.0
    for n in 1...N {
        n_double = Double(n)
        sum3_sum += Float(1/(2.0*n_double*(2.0*n_double + 1)))
    }
    return sum3_sum
}
print("Sum 1:", sum1(N: 1000000))
print("Sum 2:", sum2(N: 1000000))
print("Sum 3:", sum3(N: 1000000))
Yes, @TheSoundDefense is right: there is no plotting output from the Swift Sandbox directly. However, I recommend that you still use the Swift Sandbox. Just run the code, then copy and paste the output in comma-delimited format into Excel or MATLAB to plot it. I did some tweaking to your sum2 as an example, while also making it a bit more functional in the process:
func sum2(N: Int) -> Float {
    let a: Float = (1...N).reduce(0) {
        let nDouble = Double($1)
        return Float((2.0 * nDouble - 1) / (2.0 * nDouble)) + $0
    }
    let b: Float = (1...N).reduce(0) {
        let nDouble = Double($1)
        return Float((2.0 * nDouble) / (2.0 * nDouble + 1)) + $0
    }
    return b - a
}
let N = 10
let out = (1...N).map { sum2(N: $0) }
let output = out.reduce("") { $0 + "\($1), " }
print(output)
0.166667, 0.216667, 0.240476, 0.254365, 0.263456, 0.269867, 0.274629, 0.278306, 0.28123, 0.283611,

Swift for-loop won't accept index as a variable multiplied with a size.width value

The issue: I'm not allowed to multiply the index with size.width. I know it's a CGFloat value, but I am allowed to operate with literals. However, when I try to use a declared Int instead, it doesn't allow that either.
size.width/5 * (index + 1) gives "Cannot invoke '+' with an argument list of type '($T6, ($T10))'".
Code:
func addBricks(size: CGSize) {
    for var index = 0; index < 4; index++ {
        var brick = SKSpriteNode(imageNamed: "brick")
        brick.physicsBody = SKPhysicsBody(rectangleOfSize: brick.frame.size)
        var xPos = size.width/5 * (index + 1)
        var yPos = size.height - 50
        brick.position = CGPointMake(xPos, yPos)
        self.addChild(brick)
    }
}
What could possibly be wrong?
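This looks like the usual Int/CGFloat mismatch: index is an Int while size.width is a CGFloat, and Swift performs no implicit numeric conversions, so the type checker rejects the mixed expression with that cryptic '$T6/$T10' message. A minimal sketch of the fix, with the arithmetic pulled into a hypothetical helper so it stands alone; the real change is simply wrapping the index in CGFloat(...):

```swift
import CoreGraphics

// Convert the Int loop index to CGFloat before mixing it into
// CGFloat arithmetic; Swift never does this conversion implicitly.
func brickXPositions(width: CGFloat, count: Int) -> [CGFloat] {
    var positions: [CGFloat] = []
    for index in 0..<count {
        let xPos = width / 5 * CGFloat(index + 1)
        positions.append(xPos)
    }
    return positions
}
```

For example, brickXPositions(width: 500, count: 4) returns [100.0, 200.0, 300.0, 400.0].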