How can I read system input in Swift easily

I'm a beginner in Swift and am having a hard time dealing with Swift's String type.
I think it has many differences from other languages.
So, can somebody tell me why the statements below are incorrect?
I want to read a line and store each integer in the variables n and l,
as in C, like this -> scanf("%d %d", &n, &l);
var n, l : Int?
var read : String = readLine()!
n = Int(read[read.startIndex])
l = read[read.index(read.startIndex, offsetBy : 2)]

The best way to handle input for a cli tool in Swift is probably by using the official ArgumentParser library.
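For reference, a minimal sketch of the ArgumentParser approach could look like the following (this assumes the swift-argument-parser package has been added as a dependency; the command name ReadTwo is made up for illustration):

import ArgumentParser

struct ReadTwo: ParsableCommand {
    // Two positional integer arguments; the library parses and validates them
    @Argument var n: Int
    @Argument var l: Int

    func run() throws {
        print(n, l)
    }
}

ReadTwo.main()

With this, the two integers are passed on the command line (e.g. readtwo 3 5) rather than read from standard input.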
But a super naive implementation would involve something like:
Read the input
Split it using spaces
Try to parse into Ints
The following example is of course not something that could be used for anything other than learning...:
print("Please input 2 numbers separated by space:")
let read = readLine()
if let inputs = read?.split(separator: " ") // Split using space
.map(String.init) // Convert substring to string
.compactMap(Int.init), // Try to convert to Ints (get rid of nils)
inputs.count > 1 { // Ensure that we got at least 2 elements
let (n, l) = (inputs[0], inputs[1])
print(n, l)
} else {
// Handle the case
}
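As a side note, on recent Swift versions Int can be initialized directly from a Substring (via StringProtocol), so the map(String.init) step can presumably be dropped; a shorter sketch of the same idea:

let numbers = readLine()?
    .split(separator: " ")          // [Substring]
    .compactMap { Int($0) } ?? []   // parse each piece, dropping failures

if numbers.count > 1 {
    print(numbers[0], numbers[1])
}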

Related

Adding elements to a MLMultiArray

I have a CoreML model (created using TF and converted to CoreML). For it:
input is: MultiArray (Double 1 x 40 x 3)
output is: MultiArray (Double)
I will be getting these [a,b,c] tuples and need to collect 40 of them before sending them to the model for prediction. I am looking through the MLMultiArray documentation and am stuck, maybe because I am new to Swift.
I have a variable called modelInput that I want to initialize and then as the tuples come in, add them to the modelInput variable.
modelInput = try MLMultiArray(shape: [1, 40, 3], dataType: MLMultiArrayDataType.double)
The modelInput.count is 120 after this call. So I am guessing an empty array is created.
However now I want to add the tuples as they come in. I am not sure how to do this.
For this I have a currCount variable which is updated after every call. The following code however gives me an error.
"Value of type 'UnsafeMutableRawPointer' has no subscripts"
var currPtr : UnsafeMutableRawPointer = modelInput.dataPointer + currCount
currPtr[0] = a
currPtr[1] = b
currPtr[2] = c
currCount = currCount + 3
How do I update the multiArray?
Is my approach even correct? Is this the correct way to create a multi array for the prediction input?
I would also like to print the contents of the MLMultiArray. There doesn't appear to be any helper functions to do that though.
You can use pointers, but you have to change the raw pointer into a typed one. For example:
let ptr = UnsafeMutablePointer<Float>(OpaquePointer(multiArray.dataPointer))
ptr[0] = a
ptr[1] = b
ptr[2] = c
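Note that the array in the question was created with dataType .double, so the typed pointer would presumably need to match that element type (a Float pointer would reinterpret the Double storage incorrectly). Assuming a, b, c and currCount are the variables from the question, that would look roughly like:

let ptr = UnsafeMutablePointer<Double>(OpaquePointer(modelInput.dataPointer))
// Write one [a, b, c] tuple at the current flat offset
ptr[currCount + 0] = a
ptr[currCount + 1] = b
ptr[currCount + 2] = c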
I figured it out. I have to do this --
modelInput[currCount+0] = NSNumber(floatLiteral: a)
modelInput[currCount+1] = NSNumber(floatLiteral: b)
modelInput[currCount+2] = NSNumber(floatLiteral: c)
I cannot use the raw pointer to access elements.
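Regarding the last part of the question (printing the contents of the MLMultiArray), there doesn't seem to be a built-in helper, but a small sketch that walks the flat buffer through the NSNumber subscript might look like this:

import CoreML

// Prints every element of the multi-array in flat (row-major) order.
func printContents(of array: MLMultiArray) {
    for i in 0..<array.count {
        print(array[i].doubleValue, terminator: " ")
    }
    print()
}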

convert number string into float with specific precision (without getting rounding errors)

I have a vector of cells (say, of size 50x1, called tokens), each of which is a struct with fields x, f1, f2 that are strings representing numbers. For example, tokens{15} gives:
x: "-1.4343429"
f1: "15.7947111"
f2: "-5.8196158"
and I am trying to put those numbers into 3 vectors (each is also 50x1) whose type is float. So I create 3 vectors:
x = zeros(50,1,'single');
f1 = zeros(50,1,'single');
f2 = zeros(50,1,'single');
and that works fine (why wouldn't it?). But then when I try to populate those vectors: (L is a for loop index)
x(L)=tokens{L}.x;
.. also for the other 2
I get :
The following error occurred converting from string to single:
Conversion to single from string is not possible.
Which I can understand; implicit conversion doesn't work for single. It does work if x, f1 and f2 are of type 50x1 double.
The reason I am doing it with floats is because the data I get comes from a C program which writes some floats into a file to be read by MATLAB. If I try to convert the values into doubles in the C program I get rounding errors...
So (after what I hope is a good question), how might I get the numbers in those strings at the right precision? (All the strings have the same number of decimal places: 7.)
The MCVE:
filedata = fopen('fname1.txt','rt');
%fname1.txt is created by a C program. I am quite sure that the problem isn't there.
scanned = textscan(filedata,'%s','Delimiter','\n');
raw = scanned{1};
stringValues = strings(50,1);
for K=1:length(raw)
    stringValues(K)=raw{K};
end
clear K %purely for convenience
regex = 'x=(?<x>[\-\.0-9]*),f1=(?<f1>[\-\.0-9]*),f2=(?<f2>[\-\.0-9]*)';
tokens = regexp(stringValues,regex,'names');
x = zeros(50,1,'single');
f1 = zeros(50,1,'single');
f2 = zeros(50,1,'single');
for L=1:length(tokens)
    x(L)=tokens{L}.x;
    f1(L)=tokens{L}.f1;
    f2(L)=tokens{L}.f2;
end
Use the function str2double before assigning into your arrays (and then cast to single if you want). Strings (char arrays) must be explicitly converted to numbers before using them as numbers.

Is it possible to write a macro that expands an expression N times? (Where N is a constant) [duplicate]

This question already has answers here:
Is there a way to count with macros? (4 answers)
Counting length of repetition in macro (3 answers)
Using a macro to initialize a big array of non-Copy elements (3 answers)
Closed 6 years ago.
Say we need to declare a fixed-size array with values, where the size of the array is defined by a constant that may change depending on compile-time settings.
So for example:
let my_array = expand_into_array!(j, ARRAY_SIZE, -v0[j] * f);
Where ARRAY_SIZE is a constant, for example:
const ARRAY_SIZE: usize = 3;
Could expand into something like...
let my_array = [
    {let j = 0; {-v0[j] * f}},
    {let j = 1; {-v0[j] * f}},
    {let j = 2; {-v0[j] * f}},
];
Since the expression is a fixed-size array, it may be possible to use pattern matching for a limited number of items ... up to 32, for example.
Is it possible to write a macro that expands an expression N times, based on a constant integer?
Details...
Looking into this, I wrote a macro which defines an array, then fills it in, e.g.:
const ARRAY_SIZE: usize = 3;
macro_rules! expand_into_array {
    ($index_var:ident, $const_size:expr, $body:expr) => {
        {
            let mut tmp: [_; $const_size] = [0.0; $const_size];
            for $index_var in 0..$const_size {
                tmp[$index_var] = $body;
            }
            // TODO, check $body _never_ breaks.
            tmp
        }
    }
}
pub fn negated_array(v0: &[f64; ARRAY_SIZE]) -> [f64; ARRAY_SIZE] {
    expand_into_array!(j, ARRAY_SIZE, {
        -v0[j]
    })
}
This works as expected, apart from the wrinkle that the $body expression could include a break.
However, initializing the array to 0.0 isn't getting optimized out (changing this value shows up as a change in the output of cargo rustc --release -- --emit asm).
I'd rather not use unsafe { std::mem::uninitialized }.
Update: from asking another question, it seems macros can only match against literals, not constants.
So this is simply not possible with macros in Rust.

string format in Scala

I'm new to Scala and see people using the sign f ahead of a string; here is an example I tried which works. I'm wondering what the function of the f sign is. Does it need to be combined with %s? I tried to search some tutorials but failed. Thanks.
object HelloWorld {
  def main(args: Array[String]) {
    var start = "Monday";
    var end = "Friday";
    var palindrome = "Dot saw I was Tod";
    println(f"date >= $start%s and date <= $end%s" + palindrome);
    // output date >= Monday and date <= FridayDot saw I was Tod
  }
}
http://docs.scala-lang.org/overviews/core/string-interpolation.html
The f Interpolator
Prepending f to any string literal allows the creation of simple formatted strings, similar to printf in other languages. When using the f interpolator, all variable references should be followed by a printf-style format string, like %d.
PS. another somewhat related feature is http://docs.scala-lang.org/overviews/quasiquotes/expression-details
See the explanation here. For people coming from C, the f interpolator is a printf-style formatter. % denotes the type of the data, and with a $ you may refer to a previously defined variable.
The % is not mandatory. It's just that you will get a format that is decided by the compiler at compile time. But you may want to change the output format sometimes.
So if I take an example:
var start = "Monday";
var end = "Friday";
val age = 33
var palindrome = "Dot saw I was Tod";
println(f"date >= $start and date <= $end and age<= $age%f" + palindrome);
I could omit the %f and I would see an output of 33, as it will be inferred as an Int. However, I could use %f if I wanted to format it as a float. Also, if you use an incompatible format you will receive an error at compile time.

Get letter corresponding to number in e (IEEE 1647)

I want to convert from integer values to string characters as follows:
0 to "a"
1 to "b"
and so forth up to
26 to "z"
Is there a way to do this in e without a big case statement?
Note: e is strongly typed and it isn't possible to do any type of arithmetic on string values. There also isn't any char-like type.
Another note: To all you C/C++ hotshots who keep down-voting my question, this isn't as easy a problem as you might think.
You can do something like this:
{0c"a"+my_num}.as_a(string)
0c"a" denotes the ASCII value of the letter 'a'. And an as_a() conversion of a list of numbers (actually, bytes) into a string creates a string where each character has the ASCII value of the corresponding list element.
You can define a new enum type to correspond to the alphabet, and use the fact that enum values are backed by int values to transform a list of ints to a list of enums, or to a string.
Consider the following example:
<'
type chars : [a, b, c, d, e, f, g];
extend sys {
    run() is also {
        var l : list of int[0..6];
        var s: string = "";
        gen l keeping {it.size() == 5};
        print l;
        for each in l { print it.as_a(chars); };
        for each in l { s = append(s, it.as_a(chars)); };
        print s;
    };
};
'>
The output of this example will be:
l =
0. 4
1. 0
2. 6
3. 4
4. 5
it.as_a(chars) = e
it.as_a(chars) = a
it.as_a(chars) = g
it.as_a(chars) = e
it.as_a(chars) = f
s = "eagef"
Note that you can assign custom values to elements in the enum. In that way, you can assign standard ASCII values to enum elements.
type chars : [a=10, b=11, c=12, d=13, e=14, f=15, g=16];