I'm trying to compute the autospectrum of a recorded signal using the function vDSP_zaspec. As I understand it, the output array should be real.
class func autoSpectrum(input: [Float]) -> [Float] {
    var real = [Float](input)
    var imaginary = [Float](repeating: 0.0, count: input.count)
    var Output: [Float] = [Float](repeating: 0, count: input.count)
    let length = vDSP_Length(real.count / 2)
    real.withUnsafeMutableBufferPointer { realBP in
        imaginary.withUnsafeMutableBufferPointer { imaginaryBP in
            var splitComplex = DSPSplitComplex(realp: realBP.baseAddress!, imagp: imaginaryBP.baseAddress!)
            vDSP_zaspec(&splitComplex, &Output, vDSP_Length(input.count))
        }
    }
    let value = vDSP.rootMeanSquare(Output)
    Output = vDSP.divide(Output, value)
    return Output
}
I did a test with a 500 Hz sine wave, and this is what the Output array looks like:
The chart is far from the expected result... it looks like the absolute value of the recorded audio file.
If someone could help me, that would be great!
vDSP_zaspec returns, for each element, the sum of the squared real part and the squared imaginary part. The Apple documentation describes it as "Compute the element-wise sum of the squares of the real and imaginary parts of a complex vector". It does not perform an FFT, so applying it directly to the time-domain samples (with an all-zero imaginary part, as in your code) simply squares the input, which is why your output looks like the absolute value of the recorded audio.
The following pieces of code compute the same result:
var real: [Float] = [-2, 7, -3]
var imaginary: [Float] = [4, -1, -4]
var vDSPresult: [Float] = [0, 0, 0]
var scalarResult: [Float] = [0, 0, 0]
real.withUnsafeMutableBufferPointer { realBP in
    imaginary.withUnsafeMutableBufferPointer { imaginaryBP in
        var splitComplex = DSPSplitComplex(realp: realBP.baseAddress!,
                                           imagp: imaginaryBP.baseAddress!)
        vDSP_zaspec(&splitComplex,
                    &vDSPresult,
                    vDSP_Length(vDSPresult.count))
    }
}
print(vDSPresult)
for i in 0 ..< scalarResult.count {
    scalarResult[i] = pow(real[i], 2) + pow(imaginary[i], 2)
}
print(scalarResult)
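To get an actual autospectrum rather than squared time-domain samples, run an FFT first and then apply vDSP_zaspec to the frequency-domain data. Below is a minimal sketch of that pipeline using the classic vDSP FFT calls; the function name powerSpectrum(of:) is just illustrative, and it assumes the input length is a power of two.
import Foundation
import Accelerate

// Sketch only: FFT of the real input, then vDSP_zaspec on the split-complex result.
func powerSpectrum(of input: [Float]) -> [Float] {
    let n = input.count            // assumed to be a power of two
    let halfN = n / 2
    let log2n = vDSP_Length(log2(Float(n)))

    var realp = [Float](repeating: 0, count: halfN)
    var imagp = [Float](repeating: 0, count: halfN)
    var output = [Float](repeating: 0, count: halfN)

    guard let fftSetup = vDSP_create_fftsetup(log2n, FFTRadix(kFFTRadix2)) else {
        return output
    }
    defer { vDSP_destroy_fftsetup(fftSetup) }

    realp.withUnsafeMutableBufferPointer { realBP in
        imagp.withUnsafeMutableBufferPointer { imagBP in
            var splitComplex = DSPSplitComplex(realp: realBP.baseAddress!,
                                               imagp: imagBP.baseAddress!)

            // Pack the real signal into split-complex form (even samples -> realp,
            // odd samples -> imagp), which is what vDSP_fft_zrip expects.
            input.withUnsafeBufferPointer { inputBP in
                inputBP.baseAddress!.withMemoryRebound(to: DSPComplex.self,
                                                       capacity: halfN) { complexPtr in
                    vDSP_ctoz(complexPtr, 2, &splitComplex, 1, vDSP_Length(halfN))
                }
            }

            // In-place forward real-to-complex FFT (results are scaled by 2).
            vDSP_fft_zrip(fftSetup, &splitComplex, 1, log2n, FFTDirection(FFT_FORWARD))

            // Now the sum of squared real and imaginary parts is the power per bin.
            vDSP_zaspec(&splitComplex, &output, vDSP_Length(halfN))
        }
    }
    return output
}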
I've got the RGB data of a CGImage, but the processing to get the average color is very slow.
Any ideas? Appreciated.
let screenShot: CGImage = CGDisplayCreateImage(activeDisplays[Int(index)], rect: myrect)!
let dp: UnsafePointer<UInt8> = CFDataGetBytePtr(screenShot.dataProvider?.data)
var bsum: Int = 0
var rsum: Int = 0
var gsum: Int = 0
for j in 0..<(oneH * oneW) {
    rsum += Int(dp[j*4])
    gsum += Int(dp[j*4+1])
    bsum += Int(dp[j*4+2])
}
rsum /= onepack
gsum /= onepack
bsum /= onepack
https://gist.github.com/jeffrafter/ad8516d4ed7221a5cfd4b66d2f7f4ca1
This is the right answer, but I found it costs too much time, so I decided to sample the pixels by skipping a few rows instead. It's faster; a sketch of that idea is below.
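A minimal sketch of that row-skipping approach (the function name averageColor(of:rowStep:), the rowStep parameter and its default are mine; it assumes 4 bytes per pixel, and the component order depends on how the CGImage was created):
import CoreGraphics

// Sketch only: average color by sampling every rowStep-th row.
func averageColor(of image: CGImage, rowStep: Int = 4) -> (r: Int, g: Int, b: Int)? {
    guard let data = image.dataProvider?.data,
          let ptr = CFDataGetBytePtr(data) else { return nil }

    // Use bytesPerRow rather than width * 4 so row padding is handled correctly.
    let bytesPerRow = image.bytesPerRow
    let bytesPerPixel = image.bitsPerPixel / 8
    var rSum = 0, gSum = 0, bSum = 0, sampleCount = 0

    for y in stride(from: 0, to: image.height, by: rowStep) {
        for x in 0..<image.width {
            let offset = y * bytesPerRow + x * bytesPerPixel
            rSum += Int(ptr[offset])
            gSum += Int(ptr[offset + 1])
            bSum += Int(ptr[offset + 2])
            sampleCount += 1
        }
    }
    guard sampleCount > 0 else { return nil }
    return (rSum / sampleCount, gSum / sampleCount, bSum / sampleCount)
}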
I'm just trying to render a red square using metal, and I'm creating a vertex buffer from an array of Vertex structures that look like this:
struct Vertex {
    var position: SIMD3<Float>
    var color: SIMD4<Float>
}
This is where I'm rendering the square:
var vertices: [Vertex] = [
    Vertex(position: [-0.5, -0.5, 0], color: [1, 0, 0, 1]),
    Vertex(position: [-0.5, 0.5, 0], color: [1, 0, 0, 1]),
    Vertex(position: [0.5, -0.5, 0], color: [1, 0, 0, 1]),
    Vertex(position: [0.5, 0.5, 0], color: [1, 0, 0, 1])
]

var vertexBuffer: MTLBuffer?

func render(using renderCommandEncoder: MTLRenderCommandEncoder) {
    if self.vertexBuffer == nil {
        self.vertexBuffer = self.device.makeBuffer(
            bytes: self.vertices,
            length: MemoryLayout<Vertex>.stride * self.vertices.count,
            options: []
        )
    }

    if let vertexBuffer = self.vertexBuffer {
        renderCommandEncoder.setRenderPipelineState(RenderPipelineStates.defaultState)
        renderCommandEncoder.setVertexBuffer(vertexBuffer, offset: 0, index: 0)
        renderCommandEncoder.drawPrimitives(type: .triangleStrip, vertexStart: 0, vertexCount: vertexBuffer.length / MemoryLayout<Vertex>.stride)
    }
}
This is what my render pipeline state looks like:
let library = device.makeDefaultLibrary()!
let vertexShader = library.makeFunction(name: "basicVertexShader")
let fragmentShader = library.makeFunction(name: "basicFragmentShader")
let renderPipelineDescriptor = MTLRenderPipelineDescriptor()
renderPipelineDescriptor.colorAttachments[0].pixelFormat = .bgra8Unorm
renderPipelineDescriptor.vertexFunction = vertexShader
renderPipelineDescriptor.fragmentFunction = fragmentShader
renderPipelineDescriptor.sampleCount = 4
let vertexDescriptor = MTLVertexDescriptor()
vertexDescriptor.attributes[0].format = .float3
vertexDescriptor.attributes[0].bufferIndex = 0 // Position
vertexDescriptor.attributes[0].offset = 0
vertexDescriptor.attributes[1].format = .float4
vertexDescriptor.attributes[1].bufferIndex = 0 // Color
vertexDescriptor.attributes[1].offset = MemoryLayout<SIMD3<Float>>.stride
vertexDescriptor.layouts[0].stride = MemoryLayout<Vertex>.stride
renderPipelineDescriptor.vertexDescriptor = vertexDescriptor
self.defaultState = try! device.makeRenderPipelineState(descriptor: renderPipelineDescriptor)
The vertex and fragment shaders just pass through the position and color. For some reason, when this is rendered the first float of the color of the first vertex comes into the vertex shader as an extremely small value, effectively showing black. It only happens for the red value of the first vertex in the array.
Red square with one black vertex
I can see from debugging the GPU frame that the first vertex has a red color component of 5E-41 (essentially 0).
I have no idea why this is the case; it seems to happen at the point where the vertices are copied into the vertex buffer. I'm guessing it has something to do with my render pipeline's vertex descriptor, but I haven't been able to figure out what's wrong. Thanks for any help!
This is, with high likelihood, a duplicate of this question. I'd encourage you to consider the workarounds there, and also to file your own feedback to raise visibility of this bug. - warrenm
Correct, this appears to be a driver bug of some sort. I fixed it by adding the cpuCacheModeWriteCombined option to makeBuffer and have filed feedback:
self.vertexBuffer = self.device.makeBuffer(
    bytes: self.vertices,
    length: MemoryLayout<Vertex>.stride * self.vertices.count,
    options: [.cpuCacheModeWriteCombined]
)
I need to create a script that will calculate the overlap integral of two 1s orbitals. The integral is given by S = (1/pi) * Integral[ exp(-r_A) * exp(-r_B) dV ] over all space, where r_A and r_B are the distances from the two nuclei (in units of the Bohr radius) and R is the internuclear separation.
I tried calculating this in code, but my answer is nowhere near the analytic result S = (1 + R + R^2/3) * exp(-R). Could someone help me figure out where I went wrong?
The code:
import Foundation
var sum: Double = 0.0 //The integral result
var step_size: Double = 0.0000025
var a: Double = 0.0
var R: Double = 5.0
var next_point: Double = 0.0
var midpoint: Double = 0.0
var height: Double = 0.0
var r_val: Double = 0.0
func psi_func(r_val: Double) -> Double {
    return exp(-r_val)
}
//Integration
while next_point < R {
    next_point = a + step_size
    midpoint = a + step_size/2
    height = psi_func(r_val: midpoint)
    sum += psi_func(r_val: midpoint)*step_size
    a = a + step_size
}
print("S = ", 2*3.14159*3.14159*sum) // This is a 3-D orbital, so I multiply by 2*pi*pi
For R = 5.0
My answer: 19.61
Analytic answer: 0.097
Two problems I can see:
1. Your code contains only a single wavefunction, not the product of the two.
2. It is not correct to do a 1-D integral and then multiply by 2*pi^2 at the end.
Try doing a proper 3-D integral with the correct integrand; a sketch of one way to set that up follows.
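For illustration only, here is a minimal sketch of such a 3-D integral (my own code, not part of the original answer). It places one nucleus at the origin and the other at z = R, uses the azimuthal symmetry about that axis so the phi integration contributes a factor of 2*pi, and evaluates the remaining r and theta integrals with a midpoint rule; the grid sizes are arbitrary choices.
import Foundation

// The two 1/sqrt(pi) normalisation factors give 1/pi, and the phi integral
// gives 2*pi, so the overall prefactor is 2.
let R = 5.0                      // internuclear distance (Bohr radii)
let rMax = 30.0                  // effectively infinity for exp(-r)
let nR = 3000
let nTheta = 600
let dr = rMax / Double(nR)
let dTheta = Double.pi / Double(nTheta)

var S = 0.0
for i in 0..<nR {
    let r = (Double(i) + 0.5) * dr                 // midpoint rule in r
    for j in 0..<nTheta {
        let theta = (Double(j) + 0.5) * dTheta     // midpoint rule in theta
        // Distance from the second nucleus, which sits at z = R.
        let rB = sqrt(r * r + R * R - 2.0 * r * R * cos(theta))
        S += exp(-r) * exp(-rB) * r * r * sin(theta) * dr * dTheta
    }
}
S *= 2.0    // 2*pi from phi times 1/pi from the normalisation
print("S =", S)   // close to the analytic (1 + R + R^2/3) * exp(-R) ≈ 0.097 for R = 5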
I'm trying to create a game in which a projectile is launched at a random angle.
To do this I need to be able to generate two random Ints. I looked up some tutorials and came up with this:
var random = CGFloat(Int(arc4random()) % 1500)
var random2 = CGFloat(Int(arc4random()) % -300)
self.addChild(bullet)
bullet.physicsBody!.velocity = CGVectorMake((random2), (random))
It worked for a while but now it just crashes.
Any help would be appreciated.
What I find I use the most is arc4random_uniform(upperBound), which returns a random integer ranging from zero to upperBound - 1. Your original code most likely crashes because arc4random() returns a UInt32, and Int(arc4random()) traps when the value doesn't fit in Int (which can happen on 32-bit devices); arc4random_uniform avoids both that trap and the modulo bias of %.
let random = Int(arc4random_uniform(1500))
let random2 = -Int(arc4random_uniform(300))
//pick a number between 1 and 10
let pick = arc4random_uniform(10) + 1
The lowdown on the arc4 functions: arc4random man page
GameplayKit has a nice class wrapping the arc4 functions: GameplayKit Randomness
and a handy reference: Random Hipster
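Applied to the bullet code from the question, a minimal sketch could look like this (the helper name applyRandomLaunchVelocity(to:) is made up; the ranges mirror the original 1500 and -300):
import Foundation
import SpriteKit

// Sketch only: random launch velocity with dy in 0...1499 and dx in -299...0.
func applyRandomLaunchVelocity(to bullet: SKSpriteNode) {
    let dy = CGFloat(arc4random_uniform(1500))
    let dx = -CGFloat(arc4random_uniform(300))
    bullet.physicsBody?.velocity = CGVector(dx: dx, dy: dy)
}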
I don't fully understand your issue, but I think this could be useful:
func getRandomPointFromCircle(radius: Float, center: CGPoint) -> CGPoint {
    let randomAngle = Float(arc4random()) / Float(UInt32.max - 1) * Float(M_PI) * 2.0
    // polar => cartesian
    let x = radius * cosf(randomAngle)
    let y = radius * sinf(randomAngle)
    return CGPointMake(CGFloat(x) + center.x, CGFloat(y) + center.y)
}
I am trying to implement this particular MATLAB command in OpenCV. I am working on Ubuntu Linux. Can you help me figure out the OpenCV code for this?
out_vector=hist(G_vector,0:17:255);
G_vector is an array of size 1x100 which represents one component of an image.
I am using the following code:
vector<Mat> rgb_planes;
split(image,rgb_planes);
int histSize = 255;
/// Set the ranges ( for R,G,B) )
float range[] = { 0,17, 255 } ;
const float* histRange = { range };
bool uniform = true; bool accumulate = false;
Mat r_hist, g_hist, b_hist,g_hist1;
/// Compute the histograms:
calcHist( &rgb_planes[0], 1, 0, Mat(), r_hist, 1, &histSize, &histRange, uniform, accumulate );
calcHist( &rgb_planes[1], 1, 0, Mat(), g_hist, 1, &histSize, &histRange, uniform, accumulate );
calcHist( &rgb_planes[2], 1, 0, Mat(), b_hist, 1, &histSize, &histRange, uniform, accumulate );
// Draw the histograms for R, G and B
int hist_w = 400; int hist_h = 400;
int bin_w = cvRound( (double) hist_w/histSize );
Mat histImage( hist_w, hist_h, CV_8UC3, Scalar( 0,0,0) );
/// Normalize the result to [ 0, histImage.rows ]
normalize(r_hist, r_hist, 0, histImage.rows, NORM_MINMAX, -1, Mat() );
normalize(g_hist, g_hist, 0, histImage.rows, NORM_MINMAX, -1, Mat() );
normalize(b_hist, b_hist, 0, histImage.rows, NORM_MINMAX, -1, Mat() );
Can you suggest where I should make changes here?
histSize controls the number of bins in each dimension. If you want 16 bins in those one dimensional histograms, use:
int histSize[] = { 16 };
or I guess just int histSize = 16; would work.
Set the bin boundaries to be from 0 to 255 like this:
float intensity_range[] = { 0, 256 }; //upper bound is exclusive
const float * histRange[] = { intensity_range };
OpenCV's calcHist offers more configurability than most people will ever need. Check out the method description and example code here.