I'm pretty new to Swift and I was wondering what the difference is between this (which compiles successfully and returns "A"):
var label = "Apoel"
label[label.startIndex]
and the following, for which the compiler is complaining:
label[0]
I know that label is not an array of chars like in C, but the first approach suggests that string manipulation in Swift is similar to C's.
Also, I understand that the word finishes with something like C's "\0" because
label[label.endIndex]
gives an empty character while
label[label.endIndex.predecessor()]
returns "l" which is the last letter of the String.
startIndex is of type Index, which is a struct and not a simple integer.
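For illustration, here is a minimal sketch (hedged, using the same Swift 2-era Index API as the snippets above) of moving an Index instead of passing an Int:
let label = "Apoel"
label[label.startIndex]                 // "A"
label[label.startIndex.advancedBy(1)]   // "p": advance the Index; label[1] does not compile
label[label.endIndex.predecessor()]     // "l": endIndex points one past the last character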
Is there any way to convert any object to its string representation in MATLAB?
I tried
matlab.unittest.diagnostics.ConstraintDiagnostic.getDisplayableString
but sometimes it produces HTML code like this
0×0 empty char array
Is it possible to get only plain text in result?
It's not clear exactly what you want, but I use this kind of call for generating general purpose (text) error messages when the object type can vary. It calls disp() and captures the text output:
x = containers.Map({'A','B'}, [1,2]); % Example object - could be anything
s = evalc('disp(x)');
Now, this uses evalc(), which is rather clumsy and is never going to be quick, and the 'x' is buried in a string. But it is convenient...
I'm just beginning Scala, coming from Java.
So I know that in Scala, all things are objects, and Scala matches the longest token (source: http://www.scala-lang.org/docu/files/ScalaTutorial.pdf), so if I understand correctly:
var b = 1.+(2)
then b is a Double plus an Int, which in Java would be a Double.
But when I check its type via println(b.isInstanceOf[Int]) I see that it is an Int. Why is it not a Double like in Java?
According to the specification:
1. is not a valid floating point literal because the mandatory digit after the . is missing.
I believe it's done like that, exactly because expressions like 1.+(2) should be parsed as an integer 1, method call ., method name + and method argument (2).
The compiler would treat 1 and 2 as Ints by default. You could force either one of them to be a Double using 1.toDouble, and the result (b) would be a Double.
Btw, did you mean to write 1.0+2, in which case b would be a Double?
Is there any difference between the following?
var array1_OfStrings = [String]()
var array2_OfStrings: [String] = []
var array3_OfStrings: [String]
Testing in Playground shows that 1 and 2 are the same but 3 behaves differently.
Can someone please explain the difference to me? And also, what is the preferred way to declare an empty array of String?
The first two have the same effect.
The first declares a variable array1_OfStrings and lets the compiler infer the type itself. When it sees [String](), it knows the type is an array of String.
The second declares array2_OfStrings explicitly as an array of String, then initializes it as empty with [].
The third is different because you only tell the compiler that you want array3_OfStrings to be an array of String, but you don't give it an initial value.
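As a small, hedged sketch of that difference: the third variable exists, but it cannot be used until it has been assigned a value.
var array3_OfStrings: [String]
// array3_OfStrings.append("a")   // compile-time error: used before being initialized
array3_OfStrings = []             // assign a value first
array3_OfStrings.append("a")      // now this works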
I think the first one is recommended, as The Swift Programming Language uses it more often.
While I might be late to the party, there is one thing that needs to be said.
The first option sets array1_OfStrings to an array of Strings.
The other option declares that array2_OfStrings is an array of Strings and then sets it to an empty one.
While this might seem like a really small difference, you will notice it while compiling. With the first option, the compiler has to work out the type of array1_OfStrings automatically. The second option doesn't require that: you tell the compiler up front that this is an array of Strings, and it's a done deal.
Why is this important? Take a look at the following link:
https://thatthinginswift.com/debug-long-compile-times-swift/
As you can see, not declaring the type of your variable can impact build performance a lot.
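As a hedged illustration of that point (the array values here are just made up for the example), an explicit annotation spares the type checker from inferring the result of an expression:
let inferred = ["a", "b", "c"].map { $0.uppercaseString }            // compiler infers [String]
let annotated: [String] = ["a", "b", "c"].map { $0.uppercaseString } // same value, type stated up front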
I'm pulling my hair out trying to generate a valid NSRange; it doesn't seem like it should be this complicated, so I'm guessing I'm using the wrong approach. Here is what I'm trying to do:
I have a string with some Unicode characters in it:
"The quick brown fox\n❄jumped\n❄over the lazy dog"
I want to create an NSRange from that character to the end of the string, and while I can get the corresponding index for the first occurrence of the character:
text.rangeOfString("❄")?.startIndex
I can't seem to get the end of the string in a consistent format (something that I can pass to NSMakeRange) to actually generate the range. This seems like it should be pretty simple, yet I've been stuck for over an hour trying to figure out how to get it to work. I keep ending up with Index types that I can't cast to integers to get the length that NSMakeRange requires for its second element.
Ideally I'd do something like this (which is invalid due to incompatible and non-castable types (Index vs Int)):
let start = text.rangeOfString("❄")?.startIndex
NSMakeRange(start, text.endIndex - start)
I am using Swift, so I have the ability to use Swift's Range<String.Index>, if it will make things easier, although it seems to be yet another range representation different from NSRange and I'm not sure how compatible the two are (don't want to run into another dimension of Index vs Int).
Cast your String as NSString.
You will be able to use Foundation's .rangeOfString instead of Swift's .rangeOfString.
Foundation's one will return an NSRange.
Be careful though: it doesn't handle Unicode the same way as Swift's method, and NSRange and Range are not compatible (although there are ways to convert them).
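A minimal, hedged sketch of that approach, using the same Swift 2-era Foundation API that appears in the question:
import Foundation

let text = "The quick brown fox\n❄jumped\n❄over the lazy dog"
let nsText = text as NSString
let hit = nsText.rangeOfString("❄")          // Foundation's version, returns an NSRange
if hit.location != NSNotFound {
    // from the first ❄ to the end of the string, measured in NSString (UTF-16) units
    let tail = NSMakeRange(hit.location, nsText.length - hit.location)
    print(NSStringFromRange(tail))
}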
I see in Swift examples values like 123_456_789, numbers with underscores. What type do these values have by default?
Does it depend on the type of the variable I assign them to? They look quite funny and new to me, so I wonder how they are treated if they are used just as they are, without defining a type.
From the documentation (The Swift Programming Language -> Language Guide -> The Basics -> Numeric Literals):
Numeric literals can contain extra formatting to make them easier to read. Both integers and floats can be padded with extra zeros and can contain underscores to help with readability. Neither type of formatting affects the underlying value of the literal:
let paddedDouble = 000123.456
let oneMillion = 1_000_000
let justOverOneMillion = 1_000_000.000_000_1
So your 123_456_789 is an integer literal, identical to 123456789.
You can insert the underscores wherever you want, not only as a "thousands separator": 1_2_3_4_5_6_7_8_9 or 1_23_4567_89 are also valid, if you like to write obfuscated code.
123_456_789 is an "integer literal" just like 123456789. "integer literal" is a type separate from Int or Int32 or Int8 or whatever. An "integer literal" can be assigned to any integer type (unlike for example an Int value which can only be assigned to an Int).
If you ask "can I treat them as integers", that doesn't make sense. It's a different type. For every type there are rules how it can be used. The rules for Int and "integer literal" are different.
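As a hedged sketch of how the same literal adapts to the type it's assigned to (the variable names are made up for the example):
let a = 123_456_789             // with no annotation, Swift infers Int
let b: Int64 = 123_456_789      // the same integer literal also satisfies Int64
let c: Double = 123_456_789     // and even Double, since floating-point types accept integer literals
print(a, b, c)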