I am trying to implement this code, which I got from an Apple WWDC video. However, the video is from 2016 and I think the syntax has changed. How do I call sizeof(Float)? This produces an error.
func render(buffer: AudioBuffer) {
    let nFrames = Int(buffer.mDataByteSize) / sizeof(Float)
    var ptr = UnsafeMutableRawPointer(buffer.mData)
    var j = self.counter
    let cycleLength = self.sampleRate / self.frequency
    let halfCycleLength = cycleLength / 2
    let amp = self.amplitude, minusAmp = -amp
    for _ in 0..<nFrames {
        if j < halfCycleLength {
            ptr.pointee = amp
        } else {
            ptr.pointee = minusAmp
        }
        ptr = ptr.successor()
        j += 1.0
        if j > cycleLength {
            j -= cycleLength // wrap the phase counter at the end of each cycle
        }
    }
    self.counter = j
}
The sizeof() function is no longer supported in Swift.
As Leo Dabus said in his comment, you want MemoryLayout<Type>.size, or in your case, MemoryLayout<Float>.size.
Note that this tells you the abstract size of an item of that type. However, due to alignment and padding, you should not assume that the size of a struct is the sum of the sizes of its members. Also, you need to consider the device the code runs on: on a 64-bit device, Int is 8 bytes; on a 32-bit device, it's 4 bytes.
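For example, here is a minimal sketch (the Sample struct and its members are purely illustrative) showing the reported sizes, and how the failing line from the question would look with MemoryLayout:
print(MemoryLayout<Float>.size) // 4
print(MemoryLayout<Int>.size)   // 8 on a 64-bit device, 4 on a 32-bit device

struct Sample {
    let flag: Bool   // 1 byte
    let value: Float // 4 bytes
}
// Not 5 bytes: 3 bytes of padding align `value` to a 4-byte boundary.
print(MemoryLayout<Sample>.size)      // 8
print(MemoryLayout<Sample>.alignment) // 4

// The failing line from the question becomes:
// let nFrames = Int(buffer.mDataByteSize) / MemoryLayout<Float>.size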
See the article on MemoryLayout at SwiftDoc.org for more information.
Hi, I need to decompose a number into powers of 2 in Swift 5, for an iOS app I'm writing for a click-and-collect system.
The backend of this system is written in C# and uses the following scheme to save a multi-pick list of options as a single number in the database.
For example, choosing salads for a filled roll on an order system works like this:
lettuce = 1
cucumber = 2
tomato = 4
sweetcorn = 8
onion = 16
Using this method, the choice (lettuce + tomato + onion) is saved in the database as 21 (1 + 4 + 16).
At the other end, I use a C# function to decode it, like this:
for (int j = 0; j < 32; j++)
{
    int mask = 1 << j;
    if ((value & mask) != 0)
    {
        // the option with this bit value was selected
    }
}
I need to convert this function into Swift 5 to integrate the decoder into my iOS app.
Any help would be greatly appreciated.
In Swift, these bit fields are expressed as option sets, which are types that conform to the OptionSet protocol. Here is an example for your use case:
struct Veggies: OptionSet {
    let rawValue: UInt32

    static let lettuce   = Veggies(rawValue: 1 << 0)
    static let cucumber  = Veggies(rawValue: 1 << 1)
    static let tomato    = Veggies(rawValue: 1 << 2)
    static let sweetcorn = Veggies(rawValue: 1 << 3)
    static let onion     = Veggies(rawValue: 1 << 4)
}
let someVeggies: Veggies = [.lettuce, .tomato]
print(someVeggies) // => Veggies(rawValue: 5)
print(Veggies.onion.rawValue) // => 16
OptionSets are better than just using their raw values, for two reasons:
1) They standardize the names of the cases, and give you a consistent and easy way to interact with these values.
2) OptionSet derives from the SetAlgebra protocol, and provides default implementations for many useful methods like union, intersection, subtract, and contains.
I would caution against this design, however. Option sets are useful only when there is a really small number of flags (fewer than 64) that you can't foresee expanding. They're really basic, can't store any payload besides "x exists, or it doesn't", and they're primarily intended for use cases that are highly sensitive to performance and memory use, which are quite rare these days. I would recommend using regular objects (a Veggie class storing a name and any other relevant data) instead.
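That said, decoding the stored database value with the option set is straightforward. Here is a minimal sketch (the stored value and the name table are illustrative assumptions, not part of OptionSet):
let stored: UInt32 = 21 // (lettuce + tomato + onion) from the database
let decoded = Veggies(rawValue: stored)

print(decoded.contains(.tomato))   // true
print(decoded.contains(.cucumber)) // false

// OptionSet cannot enumerate its own cases, so a lookup table
// pairs each option with a display name:
let names: [(Veggies, String)] = [
    (.lettuce, "lettuce"), (.cucumber, "cucumber"), (.tomato, "tomato"),
    (.sweetcorn, "sweetcorn"), (.onion, "onion"),
]
let picked = names.filter { decoded.contains($0.0) }.map { $0.1 }
print(picked.joined(separator: ", ")) // lettuce, tomato, onion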
You can just use a while loop, like this (here value stands for the number loaded from the database):
var j = 0
while j < 32 {
    let mask = 1 << j
    if (value & mask) != 0 {
        // the option whose bit is `mask` is part of `value`
    }
    j += 1
}
Here is a link about loops and control flow in Swift 5.
Hi, I figured it out. This is my final solution:
var salads = ""
let value = 127
var j = 0
while j < 32 { // 32 bits is all the C# encoder uses
    let mask = 1 << j
    if (value & mask) != 0 {
        salads.append(String(mask) + ",")
    }
    j += 1
}
salads = String(salads.dropLast()) // removes the final ","
print(salads) // 1,2,4,8,16,32,64
This now feeds nicely into the IN clause in my SQL query. Thank you all for your help! :)
I have two values in bytes in two different variables. I want to perform a certain action whenever the values are nearly equal to each other.
Is there any method in Swift with which I can perform an action when two variables' values are nearly equal?
Please recommend some code, a tutorial, or an article to achieve this.
I am new to Swift, so please avoid downvoting.
let string1 = "Hello World"
let string2 = "Hello"

// String.utf8 already yields UInt8 values, so Array(...) is all that is needed.
let byteArrayOfString1: [UInt8] = Array(string1.utf8) // "Hello World" as a byte array
let byteArrayOfString2: [UInt8] = Array(string2.utf8) // "Hello" as a byte array

if byteArrayOfString1 == byteArrayOfString2 {
    print("Match")
} else {
    print("Not Match")
}
For more help, visit https://medium.com/@gorjanshukov/working-with-bytes-in-ios-swift-4-de316a389a0c
I don't think there is a built-in method that compares approximate values, but if you explain what exactly you want to do, we can find a better alternative solution.
Here is the solution:
func nearlyEqual(a: Float, b: Float, epsilon: Float) -> Bool {
    let absA = abs(a)
    let absB = abs(b)
    let diff = abs(a - b)

    if a == b {
        // shortcut; also handles infinities
        return true
    } else if a == 0 || b == 0 || absA + absB < Float.leastNormalMagnitude {
        // a or b is zero, or both are extremely close to it;
        // relative error is less meaningful here.
        // leastNormalMagnitude (the counterpart of Java's Float.MIN_NORMAL)
        // keeps epsilon * Float.leastNormalMagnitude from underflowing to 0.
        return diff < epsilon * Float.leastNormalMagnitude
    } else {
        // use relative error
        return diff / (absA + absB) < epsilon
    }
}
Then you can use it like this:
print(nearlyEqual(a: 1.2, b: 1.4, epsilon: 0.2))
This will return true.
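Applied to the byte counts from the question, it might look like this (the variable names and the 1% tolerance are illustrative assumptions):
let bytes1: Float = 1024
let bytes2: Float = 1030
// Nearly equal here means the difference is under 1% of the combined magnitude.
if nearlyEqual(a: bytes1, b: bytes2, epsilon: 0.01) {
    print("Close enough, perform the action") // printed: 6 / 2054 < 0.01
}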
The code below shows two ways of building a spreadsheet:
by using:
str = str + "\(number) ; "
or
str.append("\(number)");
Both are really slow because, I think, they discard both strings and make a third one, which is the concatenation of the first two.
Now, if I repeat this operation hundreds of thousands of times to grow a spreadsheet, that makes a lot of allocations.
For instance, the code below takes 11 seconds to execute on my MacBook Pro 2016:
let start = Date()
var str = ""
for i in 0 ..< 86400 {
    for j in 0 ..< 80 {
        // Use either one, no difference
        // str = str + "\(Double(j) * 1.23456789086756 + Double(i)) ; "
        str.append("\(Double(j) * 1.23456789086756 + Double(i)) ; ")
    }
    str.append("\n")
}
let duration = Date().timeIntervalSinceReferenceDate - start.timeIntervalSinceReferenceDate
print(duration)
How can I solve this issue without having to convert the doubles to strings myself? I have been stuck on this for 3 days... my programming skills are pretty limited, as you can probably see from the code above...
I tried:
var str = NSMutableString(capacity: 86400*80*20);
but the compiler tells me:
Variable 'str' was never mutated; consider changing to 'let' constant
despite the
str.append("\(Double(j) * 1.23456789086756 + Double(i)) ; ");
So apparently, calling append does not mutate the string...
I tried writing it to an array, and the limiting factor seems to be the conversion of a double to a string. The code below takes 13 seconds or so on my Air. Doing this:
arr[i][j] = "1.23456789086756"
drops the execution time to 2 seconds, so about 11 seconds are taken up converting Double to String. You might be able to shave off some time by writing your own conversion routine, but that seems to be the limiting factor. I tried using memory streams and that seems even slower.
var start = Date()
var arr = Array(repeating: Array(repeating: "1.23456789086756", count: 80), count: 86400)
var duration = Date().timeIntervalSinceReferenceDate - start.timeIntervalSinceReferenceDate
print(duration) // 0.007

start = Date()
let a = 1.23456789086756
for i in 0 ..< 86400 {
    for j in 0 ..< 80 {
        arr[i][j] = "\(a)" // or "1.23456789086756", or String(a)
    }
}
duration = Date().timeIntervalSinceReferenceDate - start.timeIntervalSinceReferenceDate
print(duration) // 13.46, or 2.3 with the constant string
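As a rough illustration of the "write your own conversion routine" idea, here is a minimal sketch of a fixed-precision conversion (the function name, the hard-coded 6 decimal places, and the assumption of non-negative input are all illustrative; this is not a vetted replacement for the standard conversion):
func fixedString(_ value: Double) -> String {
    // Scale to 6 decimal places and round once, skipping the general
    // shortest-representation algorithm used by "\(value)".
    // Assumes value >= 0; a sign would need separate handling.
    let scale = 1_000_000
    let scaled = Int((value * Double(scale)).rounded())
    let whole = scaled / scale
    var frac = String(scaled % scale)
    // Left-pad the fractional digits to a fixed width of 6.
    while frac.count < 6 { frac = "0" + frac }
    return "\(whole).\(frac)"
}

print(fixedString(1.23456789086756)) // 1.234568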
This is a LeetCode question. I wrote 4 answers for different versions of that question, but when I tried to use bit manipulation, I got an error. Since no one on LeetCode could answer my question, and I can't find any Swift doc about this, I thought I would try asking here.
The question is to get the majority element (> n/2 occurrences) in a given array. The following code works in other languages like Java, so I think it might be a general question about Swift.
func majorityElement(nums: [Int]) -> Int {
    var bit = Array(repeating: 0, count: 32)
    for num in nums {
        for i in 0..<32 {
            if (num >> (31 - i) & 1) == 1 {
                bit[i] += 1
            }
        }
    }
    var ret = 0
    for i in 0..<32 {
        bit[i] = bit[i] > nums.count / 2 ? 1 : 0
        ret += bit[i] * (1 << (31 - i))
    }
    return ret
}
When the input is [-2147483648], the output is 2147483648, but in Java the same code successfully outputs the right negative number.
Swift doc says :
Even on 32-bit platforms, Int can store any value between -2,147,483,648 and 2,147,483,647, and is large enough for many integer ranges.
Well, that is 2,147,483,647, and the input is 1 larger than that number. When I ran pow(2.0, 31.0) in a playground, it showed 2147483648. I got confused. What's wrong with my code, or what did I miss about Swift's Int?
A Java int is a 32-bit integer. The Swift Int is 32-bit or 64-bit depending on the platform. In particular, it is 64-bit on all OS X platforms where Swift is available.
Your code handles only the lower 32 bits of the given integers, so that
-2147483648 = 0xffffffff80000000
becomes
2147483648 = 0x0000000080000000
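You can see the truncation directly in Swift with a small sketch (String(_:radix:) produces the hex dump):
let num = -2147483648
// On a 64-bit platform, Int is 64 bits wide.
print(String(UInt64(bitPattern: Int64(num)), radix: 16)) // ffffffff80000000
// Keeping only the lower 32 bits reads back as the positive value above.
print(num & 0xffffffff) // 2147483648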
To solve the problem, you can either change the function to take 32-bit integers as arguments:
func majorityElement(nums: [Int32]) -> Int32 { ... }
or make it work with arbitrarily sized integers by computing the actual size (via MemoryLayout, since sizeof is no longer available) and using that instead of the constant 32:
func majorityElement(nums: [Int]) -> Int {
    let numBits = MemoryLayout<Int>.size * 8
    var bit = Array(repeating: 0, count: numBits)
    for num in nums {
        for i in 0..<numBits {
            if (num >> (numBits - 1 - i) & 1) == 1 {
                bit[i] += 1
            }
        }
    }
    var ret = 0
    for i in 0..<numBits {
        bit[i] = bit[i] > nums.count / 2 ? 1 : 0
        ret += bit[i] * (1 << (numBits - 1 - i))
    }
    return ret
}
A more Swifty way would be to use map() and reduce():
func majorityElement(nums: [Int]) -> Int {
    let numBits = MemoryLayout<Int>.size * 8
    let bitCounts = (0 ..< numBits).map { i in
        nums.reduce(0) { $0 + ($1 >> i) & 1 }
    }
    let major = (0 ..< numBits).reduce(0) {
        $0 | (bitCounts[$1] > nums.count / 2 ? 1 << $1 : 0)
    }
    return major
}
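A quick check with the input from the question (the expected output on a 64-bit platform):
print(majorityElement(nums: [-2147483648])) // -2147483648
print(majorityElement(nums: [3, 3, 4]))     // 3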
I'm back again with what is likely a simple issue; however, it has me stumped.
I've written a very small, very basic piece of code in an Xcode playground.
My code simply calls a function 10 times in a loop, printing the output each time.
var start = 0
var x = 0
var answer = 2 * x

func spin() {
    print(answer)
}

while start < 10 {
    spin()
    x++
    start++
}
Now for my issue: my code properly increments the start variable, running and printing 10 times. However, it prints out a list of 0's, so for some reason the x variable isn't incrementing.
I've consulted the few ebooks I have for Swift, as well as the documentation, and as far as I can see my code should work.
Any ideas?
P.S. As per the documentation, I have also tried ++x, to no avail.
edit
Updated, working code thanks to the answers below:
var start = 0
var x = 0
var answer = 2 * x

func spin() {
    print("The variable is", x, "and doubled it is", answer)
}

while start <= 10 {
    spin()
    x++
    start++
    answer = 2 * x
}
You assigned 2 * x to answer just once, at the beginning of the program, when x == 0, and answer keeps that initial value throughout the program. That's how value types work in Swift, as in almost any other language.
If you want answer to always be 2 times x, you should write it like this:
var start = 0
var x = 0
var answer = 2 * x

func spin() {
    print(answer)
}

while start < 10 {
    spin()
    x++
    start++
    answer = 2 * x
}
And thanks to Leo Dabus's answer, you may also define a computed property that calculates 2 * x each time you read answer. This way, answer becomes read-only and you cannot assign other values to it; every access to answer performs the 2 * x calculation.
var start = 0
var x = 0
var answer: Int {
    return 2 * x
}

func spin() {
    print(answer)
}

while start < 10 {
    spin()
    x++
    start++
}
What you need is a read-only computed property. Try it like this:
var answer: Int { return 2 * x }
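A quick illustration of the difference (a minimal sketch; the printed values follow from the definition above):
var x = 0
var answer: Int { return 2 * x }

print(answer) // 0
x = 5
print(answer) // 10, because the body runs again on every access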