Extract Float array from Data - Swift

I have a function that receives a Data object together with a width and height. The data is a so-called "normalised" image in binary form.
Each pixel consists of 3 Float values for the R, G, B colors, ranging from 0 to 1.0 (instead of the typical 0-255 for UInt8 values; here we have Floats). In effect it's a sort of two-dimensional array with a width and a height.
How can I extract the R, G, B values as Swift Floats from the Data object? This is what I came up with so far:
func convert(data: Data, width: Int, height: Int) {
    let sizeFloat = MemoryLayout.size(ofValue: CGFloat.self)
    for x in 0...width {
        for y in 0...height {
            let index = ( x + (y * width) ) * sizeFloat * 3
            let dataIndex = Data.Index(???)
            data.copyBytes(to: <#T##UnsafeMutableBufferPointer<DestinationType>#>??, from: <#T##Range<Data.Index>?#>???)
            let Red = ....
            let Green = ...
            let Blue = ...
        }
    }
}

You can use withUnsafeBytes() to access the raw data as an array of Float values:
func convert(data: Data, width: Int, height: Int) {
    data.withUnsafeBytes { (floatPtr: UnsafePointer<Float>) in
        for x in 0..<width {
            for y in 0..<height {
                let index = (x + y * width) * 3
                let red = floatPtr[index]
                let green = floatPtr[index+1]
                let blue = floatPtr[index+2]
                // ...
            }
        }
    }
}
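On newer Swift versions (roughly Swift 5 and later) the UnsafePointer-based overload of withUnsafeBytes is deprecated; a sketch of the same idea using the raw-buffer API and bindMemory(to:) might look like this:
func convert(data: Data, width: Int, height: Int) {
    data.withUnsafeBytes { (rawBuffer: UnsafeRawBufferPointer) in
        // View the raw bytes as a buffer of Floats
        let floats = rawBuffer.bindMemory(to: Float.self)
        for y in 0..<height {
            for x in 0..<width {
                let index = (x + y * width) * 3
                let red = floats[index]
                let green = floats[index + 1]
                let blue = floats[index + 2]
                _ = (red, green, blue) // use the values here
            }
        }
    }
}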

var r = data[0];
var g = data[1];
var b = data[2];
element.background-color = rgba(r, g, b, 1)
Perhaps you're looking for something like this?

Related

Generate random Gaussian noise MTLTexture or MTLBuffer of size (width, height)

I am writing a real-time video filter application, and for one of the algorithms I want to try out I need to generate a random buffer (or texture) with a univariate Gaussian distribution, based on the input source.
Coming from a Python background, the following few lines are running in about 0.15s (which is not real-time worthy but a lot faster than the Swift code I tried below):
h = 1170
w = 2532
with Timer():
    noise = np.random.normal(size=w * h * 3)
plt.imshow(noise.reshape(w, h, 3))
plt.show()
My Swift code try:
private func generateNoiseTextureBuffer(width: Int, height: Int) -> [Float] {
    let w = Float(width)
    let h = Float(height)
    var noiseData = [Float](repeating: 0, count: width * height * 4)
    for xi in (0 ..< width) {
        for yi in (0 ..< height) {
            let index = yi * width + xi
            let x = Float(xi)
            let y = Float(yi)
            let random = GKRandomSource()
            let gaussianGenerator = GKGaussianDistribution(randomSource: random, mean: 0.0, deviation: 1.0)
            let randX = gaussianGenerator.nextUniform()
            let randY = gaussianGenerator.nextUniform()
            let scale = sqrt(2.0 * min(w, h) * (2.0 / Float.pi))
            let rx = floor(max(min(x + scale * randX, w - 1.0), 0.0))
            let ry = floor(max(min(y + scale * randY, h - 1.0), 0.0))
            noiseData[index * 4 + 0] = rx + 0.5
            noiseData[index * 4 + 1] = ry + 0.5
            noiseData[index * 4 + 2] = 1
            noiseData[index * 4 + 3] = 1
        }
    }
    return noiseData
}
...
let noiseData = self.generateNoiseTextureBuffer(width: context.sourceColorTexture.width, height: context.sourceColorTexture.height)
let noiseDataSize = noiseData.count * MemoryLayout.size(ofValue: noiseData[0])
self.noiseBuffer = device.makeBuffer(bytes: noiseData, length: noiseDataSize)
How can I accomplish this fast and easily in Swift?
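One straightforward improvement worth noting here (a sketch, not from the original post): the inner loop above re-creates a GKRandomSource and a GKGaussianDistribution for every pixel, so hoisting them, along with the constant scale factor, out of the loops removes most of the per-pixel overhead while keeping the same output layout.
import GameplayKit

private func generateNoiseTextureBufferFast(width: Int, height: Int) -> [Float] {
    let w = Float(width)
    let h = Float(height)
    let scale = sqrt(2.0 * min(w, h) * (2.0 / Float.pi))
    // Created once and reused for every pixel instead of once per iteration
    let random = GKRandomSource()
    let gaussian = GKGaussianDistribution(randomSource: random, mean: 0.0, deviation: 1.0)
    var noiseData = [Float](repeating: 0, count: width * height * 4)
    for yi in 0 ..< height {
        for xi in 0 ..< width {
            let index = yi * width + xi
            let rx = floor(max(min(Float(xi) + scale * gaussian.nextUniform(), w - 1.0), 0.0))
            let ry = floor(max(min(Float(yi) + scale * gaussian.nextUniform(), h - 1.0), 0.0))
            noiseData[index * 4 + 0] = rx + 0.5
            noiseData[index * 4 + 1] = ry + 0.5
            noiseData[index * 4 + 2] = 1
            noiseData[index * 4 + 3] = 1
        }
    }
    return noiseData
}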

Procedural mesh not rendering lighting [SceneKit - Xcode]

I am quite new to Swift and Xcode; however, I have been programming in other languages for several years. I am trying to procedurally create a 3D mesh in SceneKit (iOS). My code works as expected; however, when running the application the generated object renders as a flat black colour, ignoring all lighting. I have also added a cube to the scene to show that the scene lighting is working.
I would imagine that there is either a problem with the shader or that I need to define the normals of the geometry to fix this. I have tried playing around with a few properties of the SCNMaterial, but they don't seem to change anything.
If it is just a case of defining the normals, please could you advise how I would do this in Swift / SceneKit. Or perhaps I have missed something else, any help would be much appreciated.
Screenshot below:
My code below:
public static func CreateMesh (size: CGFloat, resolution: CGFloat) -> SCNNode? {
    let axisCount = Int(floor(size / resolution))
    let bottomLeft = CGVector(
        dx: CGFloat(-(axisCount / 2)) * resolution,
        dy: CGFloat(-(axisCount / 2)) * resolution
    )
    var verts = Array(
        repeating: Array(
            repeating: (i: Int(0), pos: SCNVector3.init(x: 0, y: 0, z: 0)),
            count: axisCount),
        count: axisCount
    )
    var vertsStream = [SCNVector3]()
    var i : Int = 0
    for x in 0...axisCount-1 {
        for y in 0...axisCount-1 {
            verts[x][y] = (
                i,
                SCNVector3(
                    x: Float(bottomLeft.dx + CGFloat(x) * resolution),
                    y: Float.random(in: 0..<0.1),
                    z: Float(bottomLeft.dy + CGFloat(y) * resolution)
                )
            )
            vertsStream.append(verts[x][y].pos)
            i += 1
        }
    }
    var tris = [(a: Int, b: Int, c: Int)]()
    var trisStream = [UInt16]()
    for x in 0...axisCount - 2 {
        for y in 0...axisCount - 2 {
            // Quad
            tris.append((
                a: verts[x][y].i,
                b: verts[x][y+1].i,
                c: verts[x+1][y+1].i
            ))
            tris.append((
                a: verts[x+1][y+1].i,
                b: verts[x+1][y].i,
                c: verts[x][y].i
            ))
        }
    }
    for t in tris {
        trisStream.append(UInt16(t.a))
        trisStream.append(UInt16(t.b))
        trisStream.append(UInt16(t.c))
    }
    // Create scene element
    let geometrySource = SCNGeometrySource(vertices: vertsStream)
    let geometryElement = SCNGeometryElement(indices: trisStream, primitiveType: .triangles)
    let geometryFinal = SCNGeometry(sources: [geometrySource], elements: [geometryElement])
    let node = SCNNode(geometry: geometryFinal)
    ////////////////////////
    // FIX MATERIAL
    ////////////////////////
    let mat = SCNMaterial()
    mat.diffuse.intensity = 1
    mat.lightingModel = .blinn
    mat.blendMode = .replace
    node.geometry?.materials = [mat]
    return node
}
After a lot of searching I managed to find a post with a line of code that looks something like this:
let gsNormals = SCNGeometrySource(normals: normalStream)
So from there I managed to work out how to set the surface normals. It seems like there really isn't a lot of online content / learning material when it comes to the more advanced topics like this in Xcode / Swift, which is quite unfortunate.
I have set it up to create a parabolic plane, just for testing, but this code will be used to generate a mesh from a height map, which should now be easy to implement. I think it's pretty useful code, so I have included it below in case anyone else ever has the same issue that I did.
public static func CreateMesh (size: CGFloat, resolution: CGFloat) -> SCNNode? {
    let axisCount = Int(floor(size / resolution))
    let bottomLeft = CGVector(
        dx: CGFloat(-(axisCount / 2)) * resolution,
        dy: CGFloat(-(axisCount / 2)) * resolution
    )

    /// Vertices ///
    var verts = Array(
        repeating: Array(
            repeating: (i: Int(0), pos: SCNVector3.init(x: 0, y: 0, z: 0)),
            count: axisCount),
        count: axisCount
    )
    var vertsStream = [SCNVector3]()
    var i = 0
    for x in 0...axisCount - 1 {
        for y in 0...axisCount - 1 {
            var dx = axisCount / 2 - x
            dx = dx * dx
            var dy = axisCount / 2 - y
            dy = dy * dy
            let yVal = Float(Double(dx + dy) * 0.0125)
            verts[x][y] = (
                i: i,
                pos: SCNVector3(
                    x: Float(bottomLeft.dx + CGFloat(x) * resolution),
                    //y: Float.random(in: 0..<0.1),
                    y: yVal,
                    z: Float(bottomLeft.dy + CGFloat(y) * resolution)
                )
            )
            vertsStream.append(verts[x][y].pos)
            i += 1
        }
    }
    ///

    /// Triangles ///
    var tris = [(a: Int, b: Int, c: Int)]()
    var trisStream = [UInt32]()
    for x in 0...axisCount - 2 {
        for y in 0...axisCount - 2 {
            // Quad
            tris.append((
                a: verts[x][y].i,
                b: verts[x][y+1].i,
                c: verts[x+1][y].i
            ))
            tris.append((
                a: verts[x+1][y].i,
                b: verts[x][y+1].i,
                c: verts[x+1][y+1].i
            ))
        }
    }
    for t in tris {
        trisStream.append(UInt32(t.a))
        trisStream.append(UInt32(t.b))
        trisStream.append(UInt32(t.c))
    }
    ///

    /// Normals ///
    var normalStream = [SCNVector3]()
    for x in 0...axisCount - 1 {
        for y in 0...axisCount - 1 {
            // calculate normal vector perp to average plane
            let leftX = x == 0 ? 0 : x - 1
            let rightX = x == axisCount - 1 ? axisCount - 1 : x + 1
            let leftY = y == 0 ? 0 : y - 1
            let rightY = y == axisCount - 1 ? axisCount - 1 : y + 1
            let avgXVector = float3(verts[rightX][y].pos) - float3(verts[leftX][y].pos)
            let avgYVector = float3(verts[x][rightY].pos) - float3(verts[x][leftY].pos)
            // If you are unfamiliar with how to calculate normals
            // search for vector cross product, this is used to find
            // a vector that is orthogonal to two other vectors, in our
            // case perpendicular to the surface
            let normal = cross(
                normalize(avgYVector),
                normalize(avgXVector)
            )
            normalStream.append(SCNVector3(normal))
        }
    }
    ///

    // Create scene element
    let gsGeometry = SCNGeometrySource(vertices: vertsStream)
    let gsNormals = SCNGeometrySource(normals: normalStream)
    let geometryElement = SCNGeometryElement(indices: trisStream, primitiveType: .triangles)
    let geometryFinal = SCNGeometry(sources: [gsGeometry, gsNormals], elements: [geometryElement])
    let node = SCNNode(geometry: geometryFinal)

    let mat = SCNMaterial()
    mat.isDoubleSided = true
    mat.lightingModel = .blinn
    node.geometry?.materials = [mat]
    return node
}
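For completeness, calling the function is straightforward. A minimal usage sketch, assuming the method lives on a type named MeshFactory (a name used here purely for illustration) and that a scene is already set up, might look like this:
import SceneKit

let scene = SCNScene()
// Generate a 10 x 10 plane sampled every 0.5 units and attach it to the scene
if let meshNode = MeshFactory.CreateMesh(size: 10.0, resolution: 0.5) {
    meshNode.name = "proceduralMesh"
    scene.rootNode.addChildNode(meshNode)
}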

Swift Inverse FFT (IFFT) Via Chirp Z-Transform (CZT)

For arbitrary sample sizes (samples not equal to 2^N), I have been able to implement the FFT via the chirp Z-transform (CZT) using iOS Accelerate's FFT function (that only works for samples equal to 2^N).
The results are good and match the Matlab FFT output for any arbitrary length sequence (signal). I paste the code below.
My next challenge is to use the same Accelerate FFT function for an inverse FFT on arbitrary sample sizes (samples not equal to 2^N).
Since my CZT now accomplishes an arbitrary-length FFT (see below), I am hoping that an inverse CZT (ICZT) would accomplish an arbitrary-length IFFT, again using Accelerate's 2^N-only FFT function.
Any suggestions/guidance?
// FFT IOS ACCELERATE FRAMEWORK (works only for 2^N samples)
import Accelerate

public func fft(x: [Double], y: [Double], type: String) -> ([Double], [Double]) {
    var real = [Double](x)
    var imaginary = [Double](y)
    var splitComplex = DSPDoubleSplitComplex(realp: &real, imagp: &imaginary)
    let length = vDSP_Length(floor(log2(Float(real.count))))
    let radix = FFTRadix(kFFTRadix2)
    let weights = vDSP_create_fftsetupD(length, radix)
    switch type.lowercased() {
    case ("fft"): // CASE FFT
        vDSP_fft_zipD(weights!, &splitComplex, 1, length, FFTDirection(FFT_FORWARD))
        vDSP_destroy_fftsetup(weights)
    case ("ifft"): // CASE INVERSE FFT
        vDSP_fft_zipD(weights!, &splitComplex, 1, length, FFTDirection(FFT_INVERSE))
        vDSP_destroy_fftsetup(weights)
        real = real.map({ $0 / Double(x.count) })           // Normalize IFFT by sample count
        imaginary = imaginary.map({ $0 / Double(x.count) }) // Normalize IFFT by sample count
    default: // DEFAULT CASE (FFT)
        vDSP_fft_zipD(weights!, &splitComplex, 1, length, FFTDirection(FFT_FORWARD))
        vDSP_destroy_fftsetup(weights)
    }
    return (real, imaginary)
}
// END FFT IOS ACCELERATE FRAMEWORK (works only for 2^N samples)
// DEFINE COMPLEX NUMBERS
struct Complex<T: FloatingPoint> {
    let real: T
    let imaginary: T
    static func +(lhs: Complex<T>, rhs: Complex<T>) -> Complex<T> {
        return Complex(real: lhs.real + rhs.real, imaginary: lhs.imaginary + rhs.imaginary)
    }
    static func -(lhs: Complex<T>, rhs: Complex<T>) -> Complex<T> {
        return Complex(real: lhs.real - rhs.real, imaginary: lhs.imaginary - rhs.imaginary)
    }
    static func *(lhs: Complex<T>, rhs: Complex<T>) -> Complex<T> {
        return Complex(real: lhs.real * rhs.real - lhs.imaginary * rhs.imaginary,
                       imaginary: lhs.imaginary * rhs.real + lhs.real * rhs.imaginary)
    }
}

extension Complex: CustomStringConvertible {
    var description: String {
        switch (real, imaginary) {
        case (_, 0):
            return "\(real)"
        case (0, _):
            return "\(imaginary)i"
        case (_, let b) where b < 0:
            return "\(real) - \(abs(imaginary))i"
        default:
            return "\(real) + \(imaginary)i"
        }
    }
}
// END DEFINE COMPLEX NUMBERS
// DFT BASED ON CHIRP Z TRANSFORM (CZT)
public func dft(x: [Double]) -> ([Double], [Double]) {
    let m = x.count // number of samples
    var N: [Double] = Array(stride(from: Double(0), through: Double(m - 1), by: 1.0))
    N = N.map({ $0 + Double(m) })
    var NM: [Double] = Array(stride(from: Double(-(m - 1)), through: Double(m - 1), by: 1.0))
    NM = NM.map({ $0 + Double(m) })
    var M: [Double] = Array(stride(from: Double(0), through: Double(m - 1), by: 1.0))
    M = M.map({ $0 + Double(m) })
    let nfft = Int(pow(2, ceil(log2(Double(m + m - 1))))) // fft pad
    var p1: [Double] = Array(stride(from: Double(-(m - 1)), through: Double(m - 1), by: 1.0))
    p1 = (zip(p1, p1).map(*)).map({ $0 / Double(2) }) // W = WR + j*WI has to be raised to power p1
    var WR = [Double]()
    var WI = [Double]()
    for i in 0 ..< p1.count { // Use De Moivre's formula to raise to power p1
        WR.append(cos(p1[i] * 2.0 * M_PI / Double(m)))
        WI.append(sin(-p1[i] * 2.0 * M_PI / Double(m)))
    }
    var aaR = [Double]()
    var aaI = [Double]()
    for j in 0 ..< N.count {
        aaR.append(WR[Int(N[j] - 1)] * x[j])
        aaI.append(WI[Int(N[j] - 1)] * x[j])
    }
    let la = nfft - aaR.count
    let pad: [Double] = Array(repeating: 0, count: la) // 1st zero padding
    aaR += pad
    aaI += pad
    let (fgr, fgi) = fft(x: aaR, y: aaI, type: "fft") // 1st FFT
    var bbR = [Double]()
    var bbI = [Double]()
    for k in 0 ..< NM.count {
        bbR.append((WR[Int(NM[k] - 1)]) / (((WR[Int(NM[k] - 1)])) * ((WR[Int(NM[k] - 1)])) + ((WI[Int(NM[k] - 1)])) * ((WI[Int(NM[k] - 1)])))) // take reciprocal
        bbI.append(-(WI[Int(NM[k] - 1)]) / (((WR[Int(NM[k] - 1)])) * ((WR[Int(NM[k] - 1)])) + ((WI[Int(NM[k] - 1)])) * ((WI[Int(NM[k] - 1)])))) // take reciprocal
    }
    let lb = nfft - bbR.count
    let pad2: [Double] = Array(repeating: 0, count: lb) // 2nd zero padding
    bbR += pad2
    bbI += pad2
    let (fwr, fwi) = fft(x: bbR, y: bbI, type: "fft") // 2nd FFT
    let fg = zip(fgr, fgi).map { Complex<Double>(real: $0, imaginary: $1) } // complexN 1
    let fw = zip(fwr, fwi).map { Complex<Double>(real: $0, imaginary: $1) } // complexN 2
    let cc = zip(fg, fw).map { $0 * $1 } // multiply above 2 complex numbers fg * fw
    var ccR = cc.map { $0.real }      // real part (vector) of complex multiply
    var ccI = cc.map { $0.imaginary } // imag part (vector) of complex multiply
    let lc = nfft - ccR.count
    let pad3: [Double] = Array(repeating: 0, count: lc) // 3rd zero padding
    ccR += pad3
    ccI += pad3
    let (ggr, ggi) = fft(x: ccR, y: ccI, type: "ifft") // 3rd FFT (IFFT)
    var GGr = [Double]()
    var GGi = [Double]()
    var W2r = [Double]()
    var W2i = [Double]()
    for v in 0 ..< M.count {
        GGr.append(ggr[Int(M[v] - 1)])
        GGi.append(ggi[Int(M[v] - 1)])
        W2r.append(WR[Int(M[v] - 1)])
        W2i.append(WI[Int(M[v] - 1)])
    }
    let ggg = zip(GGr, GGi).map { Complex<Double>(real: $0, imaginary: $1) }
    let www = zip(W2r, W2i).map { Complex<Double>(real: $0, imaginary: $1) }
    let y = zip(ggg, www).map { $0 * $1 }
    let yR = y.map { $0.real }      // FFT real part (output vector)
    let yI = y.map { $0.imaginary } // FFT imag part (output vector)
    return (yR, yI)
}
// END DFT BASED ON CHIRP Z TRANSFORM (CZT)

// CHIRP DFT (CZT) TEST
let x: [Double] = [1, 2, 3, 4, 5] // arbitrary sample size
let (fftR, fftI) = dft(x: x)
print("DFT Real Part:", fftR)
print(" ")
print("DFT Imag Part:", fftI)
// Matches Matlab FFT Output
// DFT Real Part: [15.0, -2.5000000000000018, -2.5000000000000013, -2.4999999999999991, -2.499999999999996]
// DFT Imag Part: [-1.1102230246251565e-16, 3.4409548011779334, 0.81229924058226477, -0.81229924058226599, -3.4409548011779356]
// END CHIRP DFT (CZT) TEST
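As a quick aside (not from the original post), the 2^N-only fft() routine above can be sanity-checked with a short round trip before building an ICZT on top of it:
// Sanity-check sketch: a forward FFT followed by the "ifft" branch should
// reproduce the input for a power-of-two-length signal, confirming the 1/N normalisation.
let signal: [Double] = [1, 0, -1, 0]                    // 4 samples (2^2)
let zeros = [Double](repeating: 0, count: signal.count)
let (re, im) = fft(x: signal, y: zeros, type: "fft")    // forward transform
let (backRe, backIm) = fft(x: re, y: im, type: "ifft")  // inverse, normalised by count
print(backRe, backIm)                                   // expect ~[1, 0, -1, 0] and ~[0, 0, 0, 0]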
Posting my comment as an answer to close this question—
If you’re sure you want to use an ICZT as an equivalent of IFFT, then make your dft function accept a type: String argument like your fft. When type is ifft, all you need is to flip the sign here:
WI.append(sin(-p1[i] * 2.0 * M_PI / Double(m)))
Leave it negative for forward FFT, and positive for inverse FFT (IFFT).
Here’s some Octave/Matlab code I wrote to demonstrate CZT: gist.github.com/fasiha/42a21405de92ea46f59e. The demo shows how to use czt2 to do fft. The third argument to czt2 (called w in the code) is exp(-2j * pi / Nup) for FFT. Just conjugate it to exp(+2j * pi / Nup) to get IFFT.
That’s what flipping the sign in the sin in WI does.
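To make the sign flip concrete, here is a small helper (an illustrative sketch, not the poster's final code) that builds the chirp factors for either direction; for the inverse transform you would likely also need a final 1/m scaling of the output, which the inner fft(type: "ifft") call does not provide for the outer CZT.
import Foundation

// Chirp factors W^(p1) used in dft(), with the exponent's sign selected by direction.
// inverse == false gives the forward-FFT chirp (negative sign, as in the original dft);
// inverse == true flips the sign for the IFFT case described in the answer.
func chirpFactors(p1: [Double], m: Int, inverse: Bool) -> (real: [Double], imag: [Double]) {
    let sign: Double = inverse ? 1.0 : -1.0
    var WR = [Double]()
    var WI = [Double]()
    for p in p1 {
        WR.append(cos(p * 2.0 * Double.pi / Double(m)))        // cos is even, so unchanged
        WI.append(sin(sign * p * 2.0 * Double.pi / Double(m))) // sign flip lives here
    }
    return (WR, WI)
}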

Cannot invoke '_' with an argument list of type '_' - Which of the two choices should I use?

I have this error. I think I have two choices. Which one is best for my code? What do the differences mean?
cannot invoke 'RGBtoHSV' with an argument list of type '(Float,Float,Float)'
RGBtoHSV(CGFloat(r), CGFloat(g), CGFloat(b))
RGBtoHSV(CGFloat(), CGFloat(), CGFloat())
Also, if you take a look at the screenshot and could give me some pointers regarding the other couple of errors, that would be great too. I know I have to match the types but I don't know the syntax order. http://i.imgur.com/sAckG6h.png
Thanks
func RGBtoHSV(r : CGFloat, g : CGFloat, b : CGFloat) -> (h : CGFloat, s : CGFloat, v : CGFloat) {
    var h : CGFloat = 0.0
    var s : CGFloat = 0.0
    var v : CGFloat = 0.0
    let col = UIColor(red: r, green: g, blue: b, alpha: 1.0)
    col.getHue(&h, saturation: &s, brightness: &v, alpha: nil)
    return (h, s, v)
}

// process the frame of video
func captureOutput(captureOutput:AVCaptureOutput, didOutputSampleBuffer sampleBuffer:CMSampleBuffer, fromConnection connection:AVCaptureConnection) {
    // if we're paused don't do anything
    if currentState == CurrentState.statePaused {
        // reset our frame counter
        self.validFrameCounter = 0
        return
    }
    // this is the image buffer
    var cvimgRef:CVImageBufferRef = CMSampleBufferGetImageBuffer(sampleBuffer)
    // Lock the image buffer
    CVPixelBufferLockBaseAddress(cvimgRef, 0)
    // access the data
    var width: size_t = CVPixelBufferGetWidth(cvimgRef)
    var height:size_t = CVPixelBufferGetHeight(cvimgRef)
    // get the raw image bytes
    let buf = UnsafeMutablePointer<UInt8>(CVPixelBufferGetBaseAddress(cvimgRef))
    var bprow: size_t = CVPixelBufferGetBytesPerRow(cvimgRef)
    var r:Float = 0.0
    var g:Float = 0.0
    var b:Float = 0.0
    for var y = 0; y < height; y++ {
        for var x:UInt8 = 0; x < width * 4; x += 4 { // error: '<' cannot be applied to operands of type 'UInt8' and 'Int'
            b += buf[x]
            g += buf[x + 1]
            r += buf[x + 2]
        }
        buf += bprow(UnsafeMutablePointer(UInt8)) // error: '+=' cannot be applied to operands of type 'UnsafeMutablePointer<UInt8>' and 'size_t'
    }
    r /= 255 * (width*height)
    g /= 255 * (width*height)
    b /= 255 * (width*height)
    //}
    // convert from rgb to hsv colourspace
    var h:Float = 0.0
    var s:Float = 0.0
    var v:Float = 0.0
    RGBtoHSV(r, g, b) // error
You have a lot of type mismatch errors.
The type of x should not be UInt8, because x increases up to width * 4, which can exceed the range of UInt8.
for var x:UInt8 = 0; x < width * 4; x += 4 { // error: '<' cannot be applied to operands of type 'UInt8' and 'Int'
So fix it like below:
for var x = 0; x < width * 4; x += 4 {
To increment the pointer address, you can use the advancedBy() function.
buf += bprow(UnsafeMutablePointer(UInt8)) // error: '+=' cannot be applied to operands of type 'UnsafeMutablePointer<UInt8>' and 'size_t'
Like below:
var pixel = buf.advancedBy(y * bprow)
And this line,
RGBtoHSV(r, g, b) // error
There are no implicit casts in Swift between CGFloat and Float unfortunately. So you should cast explicitly to CGFloat.
RGBtoHSV(CGFloat(r), g: CGFloat(g), b: CGFloat(b))
The whole edited code is here:
func RGBtoHSV(r: CGFloat, g: CGFloat, b: CGFloat) -> (h: CGFloat, s: CGFloat, v: CGFloat) {
    var h: CGFloat = 0.0
    var s: CGFloat = 0.0
    var v: CGFloat = 0.0
    let col = UIColor(red: r, green: g, blue: b, alpha: 1.0)
    col.getHue(&h, saturation: &s, brightness: &v, alpha: nil)
    return (h, s, v)
}

// process the frame of video
func captureOutput(captureOutput:AVCaptureOutput, didOutputSampleBuffer sampleBuffer:CMSampleBuffer, fromConnection connection:AVCaptureConnection) {
    // if we're paused don't do anything
    if currentState == CurrentState.statePaused {
        // reset our frame counter
        self.validFrameCounter = 0
        return
    }
    // this is the image buffer
    var cvimgRef = CMSampleBufferGetImageBuffer(sampleBuffer)
    // Lock the image buffer
    CVPixelBufferLockBaseAddress(cvimgRef, 0)
    // access the data
    var width = CVPixelBufferGetWidth(cvimgRef)
    var height = CVPixelBufferGetHeight(cvimgRef)
    // get the raw image bytes
    let buf = UnsafeMutablePointer<UInt8>(CVPixelBufferGetBaseAddress(cvimgRef))
    var bprow = CVPixelBufferGetBytesPerRow(cvimgRef)
    var r: Float = 0.0
    var g: Float = 0.0
    var b: Float = 0.0
    for var y = 0; y < height; y++ {
        var pixel = buf.advancedBy(y * bprow)
        for var x = 0; x < width * 4; x += 4 {
            b += Float(pixel[x])
            g += Float(pixel[x + 1])
            r += Float(pixel[x + 2])
        }
    }
    r /= 255 * Float(width * height)
    g /= 255 * Float(width * height)
    b /= 255 * Float(width * height)
    // convert from rgb to hsv colourspace
    var h: Float = 0.0
    var s: Float = 0.0
    var v: Float = 0.0
    RGBtoHSV(CGFloat(r), g: CGFloat(g), b: CGFloat(b))
}
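For readers on current Swift (a side note, not part of the original answer): C-style for loops and advancedBy() have since been removed, so the same byte traversal would need stride(from:to:by:) and advanced(by:). A self-contained sketch of just the averaging part:
// Averages the B, G, R channels of a BGRA pixel buffer.
// `buf` is assumed to point at the locked base address of the pixel buffer.
func averageRGB(buf: UnsafeMutablePointer<UInt8>, width: Int, height: Int, bytesPerRow: Int) -> (r: Float, g: Float, b: Float) {
    var r: Float = 0, g: Float = 0, b: Float = 0
    for y in 0 ..< height {
        let row = buf.advanced(by: y * bytesPerRow)      // replaces buf.advancedBy(y * bprow)
        for x in stride(from: 0, to: width * 4, by: 4) { // replaces the C-style inner loop
            b += Float(row[x])
            g += Float(row[x + 1])
            r += Float(row[x + 2])
        }
    }
    let count = 255 * Float(width * height)
    return (r / count, g / count, b / count)
}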

Bezier and b-spline arc-length algorithm giving me problems

I'm having a bit of a problem calculating the arc-length of my bezier and b-spline curves. I've been banging my head against this for several days, and I think I'm almost there, but can't seem to get it exactly right. I'm developing in Swift, but I think its syntax is clear enough that anyone who knows C/C++ would be able to read it. If not, please let me know and I'll try to translate it into C/C++.
I've checked my implementations against several sources over and over again, and, as far as the algorithms go, they seem to be correct, although I'm not so sure about the B-spline algorithm. Some tutorials use the degree, and some use the order, of the curve in their calculations, and I get really confused. In addition, in using the Gauss-Legendre quadrature, I understand that I'm supposed to sum the integration of the spans, but I'm not sure I'm understanding how to do that correctly. From what I understand, I should be integrating over each knot span. Is that correct?
When I calculate the length of a Bezier curve with the following control polygon, I get 28.2842712474619, while 3D software (Cinema 4D and Maya) tells me the length should be 30.871.
let bezierControlPoints = [
    Vector(-10.0, -10.0),
    Vector(0.0, -10.0),
    Vector(0.0, 10.0),
    Vector(10.0, 10.0)
]
The length of the b-spline is similarly off. My algorithm produces 5.6062782185353, while it should be 7.437.
let splineControlPoints = [
    Vector(-2.0, -1.0),
    Vector(-1.0, 1.0),
    Vector(-0.25, 1.0),
    Vector(0.25, -1.0),
    Vector(1.0, -1.0),
    Vector(2.0, 1.0)
]
I'm not a mathematician, so I'm struggling with the math, but I think I have the gist of it.
The Vector class is pretty straightforward, but I've overloaded some operators for convenience/legibility, which makes the code quite lengthy, so I'm not posting it here. I'm also not including the Gauss-Legendre weights and abscissae. You can download the source and Xcode project from here (53K).
Here's my bezier curve class:
class Bezier
{
    var c0:Vector
    var c1:Vector
    var c2:Vector
    var c3:Vector

    init(ic0 _ic0:Vector, ic1 _ic1:Vector, ic2 _ic2:Vector, ic3 _ic3:Vector) {
        c0 = _ic0
        c1 = _ic1
        c2 = _ic2
        c3 = _ic3
    }

    // Calculate curve length using Gauss-Legendre quadrature
    func curveLength()->Double {
        let gl = GaussLegendre()
        gl.order = 3 // Good enough for a quadratic polynomial
        let xprime = gl.integrate(a:0.0, b:1.0, closure:{ (t:Double)->Double in return self.dx(atTime:t) })
        let yprime = gl.integrate(a:0.0, b:1.0, closure:{ (t:Double)->Double in return self.dy(atTime:t) })
        return sqrt(xprime*xprime + yprime*yprime)
    }

    // I could vectorize this, but correctness > efficiency
    // The derivative of the x-component
    func dx(atTime t:Double)->Double {
        let tc = (1.0-t)
        let r0 = (3.0 * tc*tc) * (c1.x - c0.x)
        let r1 = (6.0 * tc*t) * (c2.x - c1.x)
        let r2 = (3.0 * t*t) * (c3.x - c2.x)
        return r0 + r1 + r2
    }

    // The derivative of the y-component
    func dy(atTime t:Double)->Double {
        let tc = (1.0-t)
        let r0 = (3.0 * tc*tc) * (c1.y - c0.y)
        let r1 = (6.0 * tc*t) * (c2.y - c1.y)
        let r2 = (3.0 * t*t) * (c3.y - c2.y)
        return r0 + r1 + r2
    }
}
Here is my b-spline class:
class BSpline
{
    var spanLengths:[Double]! = nil
    var totalLength:Double = 0.0
    var cp:[Vector]
    var knots:[Double]! = nil
    var o:Int = 4

    init(controlPoints:[Vector]) {
        cp = controlPoints
        calcKnots()
    }

    // Method to return length of the curve using Gauss-Legendre numerical integration
    func cacheSpanLengths() {
        spanLengths = [Double]()
        totalLength = 0.0
        let gl = GaussLegendre()
        gl.order = o-1 // The derivative should be quadratic, so o-2 would suffice?
        // Am I doing this right? Piece-wise integration?
        for i in o-1 ..< knots.count-o {
            let t0 = knots[i]
            let t1 = knots[i+1]
            let xprime = gl.integrate(a:t0, b:t1, closure:self.dx)
            let yprime = gl.integrate(a:t0, b:t1, closure:self.dy)
            let spanLength = sqrt(xprime*xprime + yprime*yprime)
            spanLengths.append(spanLength)
            totalLength += spanLength
        }
    }

    // The b-spline basis function
    func basis(i:Int, _ k:Int, _ x:Double)->Double {
        var r:Double = 0.0
        switch k {
        case 0:
            if (knots[i] <= x) && (x <= knots[i+1]) {
                r = 1.0
            } else {
                r = 0.0
            }
        default:
            var n0 = x - knots[i]
            var d0 = knots[i+k]-knots[i]
            var b0 = basis(i,k-1,x)
            var n1 = knots[i+k+1] - x
            var d1 = knots[i+k+1]-knots[i+1]
            var b1 = basis(i+1,k-1,x)
            var left = Double(0.0)
            var right = Double(0.0)
            if b0 != 0 && d0 != 0 { left = n0 * b0 / d0 }
            if b1 != 0 && d1 != 0 { right = n1 * b1 / d1 }
            r = left + right
        }
        return r
    }

    // Method to calculate and store the knot vector
    func calcKnots() {
        // The number of knots in the knot vector = number of control points + order (i.e. degree + 1)
        let knotCount = cp.count + o
        knots = [Double]()
        // For an open b-spline where the ends are incident on the first and last control points,
        // the first o knots are the same and the last o knots are the same, where o is the order
        // of the curve.
        var k = 0
        for i in 0 ..< o {
            knots.append(0.0)
        }
        for i in o ..< cp.count {
            k++
            knots.append(Double(k))
        }
        k++
        for i in cp.count ..< knotCount {
            knots.append(Double(k))
        }
    }

    // I could vectorize this, but correctness > efficiency
    // Derivative of the x-component
    func dx(t:Double)->Double {
        var p = Double(0.0)
        let n = o
        for i in 0 ..< cp.count-1 {
            let u0 = knots[i + n + 1]
            let u1 = knots[i + 1]
            let fn = Double(n) / (u0 - u1)
            let thePoint = (cp[i+1].x - cp[i].x) * fn
            let b = basis(i+1, n-1, Double(t))
            p += thePoint * b
        }
        return Double(p)
    }

    // Derivative of the y-component
    func dy(t:Double)->Double {
        var p = Double(0.0)
        let n = o
        for i in 0 ..< cp.count-1 {
            let u0 = knots[i + n + 1]
            let u1 = knots[i + 1]
            let fn = Double(n) / (u0 - u1)
            let thePoint = (cp[i+1].y - cp[i].y) * fn
            let b = basis(i+1, n-1, Double(t))
            p += thePoint * b
        }
        return Double(p)
    }
}
And here is my Gauss-Legendre implementation:
class GaussLegendre
{
    var order:Int = 5

    init() {
    }

    // Numerical integration of arbitrary function
    func integrate(a _a:Double, b _b:Double, closure f:(Double)->Double)->Double {
        var result = 0.0
        let wgts = gl_weights[order-2]
        let absc = gl_abscissae[order-2]
        for i in 0..<order {
            let a0 = absc[i]
            let w0 = wgts[i]
            result += w0 * f(0.5 * (_b + _a + a0 * (_b - _a)))
        }
        return 0.5 * (_b - _a) * result
    }
}
And my main logic:
let bezierControlPoints = [
    Vector(-10.0, -10.0),
    Vector(0.0, -10.0),
    Vector(0.0, 10.0),
    Vector(10.0, 10.0)
]
let splineControlPoints = [
    Vector(-2.0, -1.0),
    Vector(-1.0, 1.0),
    Vector(-0.25, 1.0),
    Vector(0.25, -1.0),
    Vector(1.0, -1.0),
    Vector(2.0, 1.0)
]
var bezier = Bezier(controlPoints:bezierControlPoints)
println("Bezier curve length: \(bezier.curveLength())\n")
var spline:BSpline = BSpline(controlPoints:splineControlPoints)
spline.cacheSpanLengths()
println("B-Spline curve length: \(spline.totalLength)\n")
UPDATE: PROBLEM (PARTIALLY) SOLVED
Thanks to Mike for his answer!
I verified that I am correctly remapping the numerical integration from the interval a..b to -1..1 for the purposes of Legendre-Gauss quadrature. The math is here (apologies to any real mathematicians out there, it's the best I could do with my long-forgotten calculus).
I've increased the order of the Legendre-Gauss quadrature from 5 to 32 as Mike suggested.
Then after a lot of floundering around in Mathematica, I came back and re-read Mike's code and discovered that my code was NOT equivalent to his.
I was taking the square root of the sum of the squared integrals of the derivative components, i.e. sqrt( (∫ x′(t) dt)² + (∫ y′(t) dt)² ),
when I should have been taking the integral of the magnitude of the derivative vector, i.e. ∫ sqrt( x′(t)² + y′(t)² ) dt.
In terms of code, in my Bezier class, instead of this:
// INCORRECT
func curveLength()->Double {
    let gl = GaussLegendre()
    gl.order = 3 // Good enough for a quadratic polynomial
    let xprime = gl.integrate(a:0.0, b:1.0, closure:{ (t:Double)->Double in return self.dx(atTime:t) })
    let yprime = gl.integrate(a:0.0, b:1.0, closure:{ (t:Double)->Double in return self.dy(atTime:t) })
    return sqrt(xprime*xprime + yprime*yprime)
}
I should have written this:
// CORRECT
func curveLength()->Double {
    let gl = GaussLegendre()
    gl.order = 32
    return gl.integrate(a:0.0, b:1.0, closure:{ (t:Double)->Double in
        let x = self.dx(atTime:t)
        let y = self.dy(atTime:t)
        return sqrt(x*x + y*y)
    })
}
My code calculates the arc length as: 3.59835872777095
Mathematica: 3.598358727834686
So, my result is pretty close. Interestingly, there is a discrepancy between a plot in Mathematica of my test Bezier curve, and the same rendered by Cinema 4D, which would explain why the arc lengths calculated by Mathematica and Cinema 4D are different as well. I think I trust Mathematica to be more correct, though.
In my B-Spline class, instead of this:
// INCORRECT
func cacheSpanLengths() {
    spanLengths = [Double]()
    totalLength = 0.0
    let gl = GaussLegendre()
    gl.order = o-1 // The derivative should be quadratic, so o-2 would suffice?
    // Am I doing this right? Piece-wise integration?
    for i in o-1 ..< knots.count-o {
        let t0 = knots[i]
        let t1 = knots[i+1]
        let xprime = gl.integrate(a:t0, b:t1, closure:self.dx)
        let yprime = gl.integrate(a:t0, b:t1, closure:self.dy)
        let spanLength = sqrt(xprime*xprime + yprime*yprime)
        spanLengths.append(spanLength)
        totalLength += spanLength
    }
}
I should have written this:
// CORRECT
func cacheSpanLengths() {
    spanLengths = [Double]()
    totalLength = 0.0
    let gl = GaussLegendre()
    gl.order = 32
    // Am I doing this right? Piece-wise integration?
    for i in o-1 ..< knots.count-o {
        let t0 = knots[i]
        let t1 = knots[i+1]
        let spanLength = gl.integrate(a:t0, b:t1, closure:{ (t:Double)->Double in
            let x = self.dx(t)
            let y = self.dy(t)
            return sqrt(x*x + y*y)
        })
        spanLengths.append(spanLength)
        totalLength += spanLength
    }
}
Unfortunately, the B-Spline math is not as straight-forward, and I haven't been able to test it in Mathematica as easily as the Bezier math, so I'm not entirely sure my code is working, even with the above changes. I will post another update when I verify it.
UPDATE 2: PROBLEM SOLVED
Eureka, I discovered an off-by one error in my code to calculate the B-Spline derivative.
Instead of
// Derivative of the x-component
func dx(t:Double)->Double {
    var p = Double(0.0)
    let n = o // INCORRECT (should be one less)
    for i in 0 ..< cp.count-1 {
        let u0 = knots[i + n + 1]
        let u1 = knots[i + 1]
        let fn = Double(n) / (u0 - u1)
        let thePoint = (cp[i+1].x - cp[i].x) * fn
        let b = basis(i+1, n-1, Double(t))
        p += thePoint * b
    }
    return Double(p)
}

// Derivative of the y-component
func dy(t:Double)->Double {
    var p = Double(0.0)
    let n = o // INCORRECT (should be one less)
    for i in 0 ..< cp.count-1 {
        let u0 = knots[i + n + 1]
        let u1 = knots[i + 1]
        let fn = Double(n) / (u0 - u1)
        let thePoint = (cp[i+1].y - cp[i].y) * fn
        let b = basis(i+1, n-1, Double(t))
        p += thePoint * b
    }
    return Double(p)
}
I should have written
// Derivative of the x-component
func dx(t:Double)->Double {
    var p = Double(0.0)
    let n = o-1 // CORRECT
    for i in 0 ..< cp.count-1 {
        let u0 = knots[i + n + 1]
        let u1 = knots[i + 1]
        let fn = Double(n) / (u0 - u1)
        let thePoint = (cp[i+1].x - cp[i].x) * fn
        let b = basis(i+1, n-1, Double(t))
        p += thePoint * b
    }
    return Double(p)
}

// Derivative of the y-component
func dy(t:Double)->Double {
    var p = Double(0.0)
    let n = o-1 // CORRECT
    for i in 0 ..< cp.count-1 {
        let u0 = knots[i + n + 1]
        let u1 = knots[i + 1]
        let fn = Double(n) / (u0 - u1)
        let thePoint = (cp[i+1].y - cp[i].y) * fn
        let b = basis(i+1, n-1, Double(t))
        p += thePoint * b
    }
    return Double(p)
}
My code now calculates the length of the B-Spline curve as 6.87309971722132.
Mathematica: 6.87309884638438.
It's probably not scientifically precise, but good enough for me.
The Legendre-Gauss procedure is specifically defined for the interval [-1,1], whereas Beziers and B-Splines are defined over [0,1], so that's a simple conversion. At least while you're trying to make sure your code does the right thing, it's easiest to bake that interval in rather than supplying a dynamic one (as you say, accuracy over efficiency; once it works, we can worry about optimising).
So, given weights W and abscissae A (both of same length n), you'd do:
z = 0.5
for i in 1..n
    w = W[i]
    a = A[i]
    t = z * a + z
    sum += w * arcfn(t, xpoints, ypoints)
return z * sum
with the pseudo-code assuming list indexing from 1. The arcfn would be defined as:
arcfn(t, xpoints, ypoints):
    x = derive(xpoints, t)
    y = derive(ypoints, t)
    c = x*x + y*y
    return sqrt(c)
But that part looks right already.
Your derivatives look correct too, so the main question is: "are you using enough slices in your Legendre-Gauss quadrature?". Your code suggests you're using only 5 slices, which isn't nearly enough to get a good result. Using http://pomax.github.io/bezierinfo/legendre-gauss.html as term data, you generally want a set for n of 16 or higher (for cubic Bezier curves, 24 is generally safe, although still underperformant for curves with cusps or lots of inflections).
I can recommend taking the "unit test" approach here: test your bezier and bspline code (separately) for known base and derivative values. Do those check out? One problem ruled out. On to your LG code: if you perform Legendre-Gauss on a parametric function for a straight line using:
fx(t) = t
fy(t) = t
fx'(t) = 1
fy'(t) = 1
over interval t=[0,1], we know the length should be exactly the square root of 2, and the derivatives are the simplest possible. If those work, do a non-linear test using:
fx(t) = sin(t)
fy(t) = cos(t)
fx'(t) = cos(t)
fy'(t) = -sin(t)
over interval t=[0,1]; we know the length should be exactly 1. Does your LG implementation yield the correct value? Another problem ruled out. If it doesn't, check your weights and abscissae. Do they match the ones from the linked page (generated with a verifiably correct Mathematica program, so pretty much guaranteed to be correct)? Are you using enough slices? Bump the number up to 10, 16, 24, 32; increasing the number of slices will show a stabilising summation, where adding more slices doesn't change digits before the 2nd, 3rd, 4th, 5th, etc decimal point as you increase the count.
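To make that concrete, here is a small sanity-test sketch (using the GaussLegendre class from the question, and assuming its weight/abscissa tables contain an order-24 entry) for the two parametric curves suggested above:
import Foundation

let gl = GaussLegendre()
gl.order = 24

// Straight line (x(t), y(t)) = (t, t): the arc length over t in [0,1] is sqrt(2).
let lineLength = gl.integrate(a: 0.0, b: 1.0, closure: { _ in sqrt(1.0 * 1.0 + 1.0 * 1.0) })
print("line length:", lineLength, "expected:", sqrt(2.0))

// Circle arc (sin t, cos t): |(cos t, -sin t)| = 1, so the length over [0,1] is exactly 1.
let arcLength = gl.integrate(a: 0.0, b: 1.0, closure: { t in
    let dx = cos(t)
    let dy = -sin(t)
    return sqrt(dx * dx + dy * dy)
})
print("arc length:", arcLength, "expected: 1.0")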
Are the curves you're testing with known to be problematic curves? Plot them, do they have cusps or lots of inflections? That's going to be a problem for LG, try simpler curves to see if the values you get back for those, at least, are correct.
Finally, check your types: are you using the highest-precision datatype possible? 32-bit floats will run into mysterious losses of FPU precision and wonderful rounding errors at the values we need to use when doing LG with a reasonable number of slices.