Does Swift have an equivalent to Rust's `uninitialized`?

The Rust standard library has the std::mem::uninitialized function, which allows you to manually create an undefined (and possibly invalid) value of any type. This essentially maps to LLVM's undef. I was wondering if Swift had an equivalent, as I haven't been able to find one skimming through the standard library documentation.
Motivation
The primary use for this is to bypass the compiler's normal memory-initialization checks when they prove too imprecise. For instance, you might want to initialize some members of a structure using methods (or in Swift's case, property setters), which the compiler usually would not allow you to do. Using dummy values works, but that's potentially less efficient (if the optimizer cannot prove that the dummy is meaningless).
In Rust, uninitialized values are treated as initialized by the initialization checker, but as uninitialized by LLVM, leading to more reliable optimization. Since the Swift compiler is also built atop LLVM, it seems like something the Swift compiler should also be able to support.
My Specific Use Case
I have a struct which encapsulates a set of bit fields compacted into one machine word (an unsigned integer type). The struct type provides a safe and convenient interface for reading and modifying the individual fields, through computed properties.
I also have an initializer that takes the relevant field values as parameters. I could initialize the underlying machine word using the same bitwise operations that the computed properties use (in which case I would be repeating myself), or I could initialize it simply by setting the values of the computed properties.
Currently, Swift does not support using computed properties before self has been fully initialized. It's also unlikely Swift will ever support this case, since the computed property setters modify the existing machine word, which would require it to already be initialized, at least as far as Swift is concerned. Only I know that all my setters, in concert, will fully initialize that machine word.
My current solution is to simply initialize the machine word to 0 and then run the setters. Since a constant 0 is trivially absorbed into the bitwise | chain that combines the fields, there's no CPU time lost here, but that won't always be the case. This is the kind of situation where, in Rust, I would have used an uninitialized value to express my intent to the optimizer.
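For concreteness, here is a minimal sketch of the pattern described above; the field names and bit layout are invented for illustration:

struct Fields {
    // The packed machine word backing all of the bit fields.
    private var bits: UInt

    // Low 8 bits.
    var first: UInt {
        get { return bits & 0xFF }
        set { bits = (bits & ~0xFF) | (newValue & 0xFF) }
    }

    // Next 8 bits.
    var second: UInt {
        get { return (bits >> 8) & 0xFF }
        set { bits = (bits & ~(0xFF << 8)) | ((newValue & 0xFF) << 8) }
    }

    init(first: UInt, second: UInt) {
        // Swift insists that `bits` be initialized before the computed
        // setters may run, so a dummy 0 is paid for up front.
        bits = 0
        self.first = first
        self.second = second
    }
}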


What is the benefit of defining enumerators as a DUT?

As I understand it, the main purpose of defining enumerations is to map a variable to a set of named values (numbers and their equivalent strings).
We can declare a variable a as a local enum anywhere in the declaration section of a Program or Function Block, like this:
a : (start, stop, operate);
though I don't know why it doesn't show up in the tabular view. But there is a bigger question:
What is the benefit of defining enumerators as a DUT?
There are 3 main benefits for me:
You can use the same enum in multiple function blocks.
You can use TO_STRING on enums declared as DUTs (after enabling it with {attribute 'to_string'}; see Infosys).
You can use refactoring to rename each component, which is impossible with local enums.
When defining an enum as a DUT it is available everywhere in your code (global scope).
This is helpful in many cases, but in general it is not good programming practice to have a lot of stuff available in the global scope.
In addition to the above, one benefit is that if you are using an enumeration for something like FB states, you will be able to see the descriptive status name when the program is running (READING, WRITING, WAITING, ERROR, etc.).
You can see it in the variable declarations section, in-line with your code, or in the watch window. You don’t have to remember what status number was defined in your state machine.
This benefit applies to both local enumerations and DUT (global) enumerations.
In addition to the other good points already made, there is another big advantage to enumerations: you can use the enumeration as the type of a variable, and when you do, the compiler will refuse an assignment to that variable of a value that is not allowed by the enumeration (provided {attribute 'strict'} is used in the enumeration declaration, which it probably should be).
In other words, you get rid of a whole class of failure modes where the variable ends up having an invalid value due to some coding mistake the compiler cannot catch.
It takes a trivial amount of time to create an enumeration, and it has benefits on many levels. I would say the real question is why not use them whenever a fixed list of meaningful values needs to be expressed.

How does one create constants in Tcl?

I have often seen the set command used in Tcl. It cannot be used to create constants. How does one create a constant in Tcl that can then be used by other procedures?
Generally speaking, most of the uses of constants fall into several categories:
enumeration values
magical numbers
looping control factors
scaling factors
In Tcl, for the first case you'd usually just use the name instead of mapping it to an integer, with integer mappings only being applied in the cases that need them. Even bit sets can be handled that way, substituting a list of names for the set of bits (and the name being present in the list is equivalent to the bit being set). Tcl's C API has relevant functions for helping with this, specifically Tcl_GetIndexFromObj().
Magical values are usually best locked away close to the code that handles them. If I was interfacing to hardware, I'd not let the magic values appear at all at the script level (since I'd have the binding code written in C).
Looping control factors are often best represented as default values for procedure arguments, as they are things that you want to sometimes override. But they're often not as needed once custom control structures are available, and they fit a lot more into the Tcl style of working.
Scaling factors are the case where constants might be useful. I tend to simulate those by just using a global or namespace variable and plain old not assigning to it from elsewhere. I'd be quite interested in having code to allow constants (specifically variables that can't be assigned to) as a standard feature, but we don't have that right now.
Once those cases are covered, what remain tend to be unimportant constants. After all, there's almost no need to calculate the sizes of things for allocation and stuff like that, and things like positional binding in SQL statements are discouraged within TDBC in favour of binding by name (an awful lot easier to get right).
A simple way of making a constant is to put a write trace on a variable so that whenever it is written to, it is reset back to its constant value.
set CONSTANT 123
trace add variable CONSTANT write {apply {args {
    global CONSTANT
    # Reset to the constant value; write traces are after the fact
    set CONSTANT 123
    # Make the write give an error
    error "may not change constant"
}}}
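With this trace in place, a later set CONSTANT 456 fails (reported as: can't set "CONSTANT": may not change constant), and because the trace body restores the old value before raising the error, $CONSTANT still reads 123 afterwards.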

How to synthesize programs with multiple types in Rosette?

I am looking to write a DSL in Rosette with the goal of synthesizing a function. The DSL is based on a very small subset of Haskell, and is strongly typed. I therefore want to make sure that anything Rosette generates is well-typed.
One way I could consider going about this is writing an "is-well-typed-code" function (something I've already done in Julia, which I'm porting from), and then asserting that it must hold true of the synthesized function. However, with define-synthax and synthesize, I do not see how to view the code both as a syntactic object (i.e., an S-expression that can have any function applied to it) and as a semantic function when synthesizing.
I could also imagine using a cond inside the define-synthax form over all possible types of the arguments (a finite set for this small DSL), defining the possible forms separately for each type. However, including conds inside define-synthax does not seem to be possible.
Any ideas?
Thanks!

Subclass a Struct or Similar

I understand how structs and classes (and protocols) work on the basic level. I have a rather common situation:
I need generic value types with operators, which really must copy on assignment.
These types have a complex structure, and I would like to be able to specialise by subclassing; otherwise there will be duplicated code everywhere, which is poor programming.
I have tried protocols and extensions, but because the protocol wasn't generic I was unable to define the (generic) operators I wanted.
If I use classes I will not get copy-on-assignment.
Today's example: I have Matrix, and under it SquareMatrix with square-matrix-specific functions. There are operators, and the matrices can be populated by anything conforming to my ring protocol. I tried defining almost all the functionality in a protocol with an associated type, plus an extension.
Edit: I am really wondering what I should be coding. In the matrix situation I need to be able to pass a square matrix wherever any other matrix is accepted, so subclassing is the only option? Maybe I'm wrong. The main issue is that when I have to write a function which talks about internal values, I have to know the generic type argument to do anything useful. For example, when defining addition, I have to create a new matrix and declare its generic type, but where do I get that from when all I know is that something conforms to a (non-generic) protocol? Its real type is generic, but despite the protocol having this associated type, I have no way of getting it out.
Solution thanks to alexander momchliov: essentially, more work was needed to move the code fully into the protocol extension and use 'Self' for all the relevant types. In the extension, the compiler was happy with what the generic types were.
The code was private, I am sorry I was unable to paste any during this question. Thanks for your patience and help.
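Since the original code could not be shared, here is a rough sketch of the shape that solution takes; all names here are invented for illustration:

protocol Ring {
    static var zero: Self { get }
    static func + (lhs: Self, rhs: Self) -> Self
    static func * (lhs: Self, rhs: Self) -> Self
}

protocol MatrixType {
    associatedtype Element: Ring
    var rows: Int { get }
    var columns: Int { get }
    subscript(row: Int, column: Int) -> Element { get set }
    init(rows: Int, columns: Int)
}

extension MatrixType {
    // Addition is written once against Self, so every conforming matrix
    // type (square or otherwise) gets it back at its own concrete type.
    static func + (lhs: Self, rhs: Self) -> Self {
        var result = Self(rows: lhs.rows, columns: lhs.columns)
        for r in 0..<lhs.rows {
            for c in 0..<lhs.columns {
                result[r, c] = lhs[r, c] + rhs[r, c]
            }
        }
        return result
    }
}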
Struct inheritance/polymorphism wouldn't be possible for at least 2 reasons (that I can think of).
Structs are stored and moved around by value. This requires the compiler to know, at compile time, the exact size of the struct, in order to know how many bytes to copy after the start of a struct instance.
Suppose there were a struct A, and a struct B that inherits from A. Whenever the compiler sees a variable of type A, it has no way to be sure whether the runtime type will really be an A, or whether a B was used instead. If B added new stored properties that A didn't have, then B's size would differ from (be bigger than) A's. The compiler would be unable to determine the runtime type and size of these structs.
Polymorphism would require a function table. A function table would be stored as a static member of the struct type. But to access this static member, every struct instance would need an instance member which encodes the type of the instance. This is usually called the "isa" pointer (as in, this instance "is a" A). This would be 8 bytes of overhead (on 64-bit systems) for every instance. Considering Int, Bool, Double, and many other common types are all implemented as structs, this would be an unacceptable amount of overhead. Just think: a Bool is a one-byte value, which would need 8 bytes of overhead. That's 11% efficiency!
For these reasons, protocols play a huge part in Swift, because they allow you to introduce inheritance-like behaviour without these issues.
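To make the size point concrete, here is a small illustration (sizes are for a 64-bit platform; BoxedBool is an invented example type):

final class BoxedBool { var value = false }

// Struct sizes are known exactly at compile time; no per-instance header.
print(MemoryLayout<Bool>.size)   // 1
print(MemoryLayout<Int>.size)    // 8
print(MemoryLayout<Double>.size) // 8

// A class value is just a reference; the instance itself lives on the
// heap with an extra object header (isa pointer plus reference counts).
print(MemoryLayout<BoxedBool>.size) // 8: the size of the reference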
First of all, in Swift a protocol can be generic if you use an associatedtype.
On the other hand, if you need value semantics, you can use the copy-on-write technique. This gives value semantics to your classes (so they will be as safe as structs).
Both of these approaches can be used to solve your problem.
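A minimal sketch of the copy-on-write technique mentioned above, with invented names; the struct stays a value type while sharing its class-backed storage until a mutation occurs:

final class MatrixStorage {
    var values: [Double]
    init(values: [Double]) { self.values = values }
}

struct COWMatrix {
    private var storage: MatrixStorage

    init(values: [Double]) {
        storage = MatrixStorage(values: values)
    }

    subscript(i: Int) -> Double {
        get { return storage.values[i] }
        set {
            // Clone the storage only if another copy is still sharing it.
            if !isKnownUniquelyReferenced(&storage) {
                storage = MatrixStorage(values: storage.values)
            }
            storage.values[i] = newValue
        }
    }
}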

Why NOT use optionals in Swift?

I was reading up on how to program in Swift, and the concept of optionals bugged me a little bit. Not really in terms of why to use optionals, that makes sense, but more so as to in what case would you not want to use optionals. From what I understand, an optional just allows an object to be set to nil, so why would you not want that feature? Isn't setting an object to nil the way you tell ARC to release an object? And don't most of the functions in Foundation and Cocoa return optionals? So outside of having to type an extra character each time to refer to an object, is there any good reason NOT to use an optional in place of a regular type?
There are tons of reasons not to use optionals. The main reason: you want to express that a value MUST be available. For example, when you open a file, you want the file name to be a string, not an optional string. Using nil as a filename simply makes no sense.
I will consider two main use cases: Function arguments and function return values.
For function arguments, the following holds: if the argument needs to be provided, an optional should not be used. If handing in nothing is okay and a valid (documented!) input, then make it an optional.
For function return values, not returning an optional is especially nice: you promise the caller that they will receive an object, not either an object or nothing. When you do not return an optional, the caller knows they may use the value right away instead of checking for nil first.
For example, consider a factory method. Such method should always return an object, so why should you use optional here? There are a lot of examples like this.
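To illustrate the difference (the Configuration type and both functions are invented for this example):

struct Configuration {
    let name: String
}

let knownConfigurations = ["default": Configuration(name: "default")]

// Non-optional return: a factory that always succeeds, so the caller
// may use the result immediately.
func makeDefaultConfiguration() -> Configuration {
    return Configuration(name: "default")
}

// Optional return: the lookup can fail, so the caller must unwrap.
func loadConfiguration(named name: String) -> Configuration? {
    return knownConfigurations[name]
}

let config = makeDefaultConfiguration()
print(config.name)                       // usable right away

if let loaded = loadConfiguration(named: "debug") {
    print(loaded.name)                   // only reachable after unwrapping
}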
Actually, most APIs should use non-optionals rather than optionals. Most of the time, simply passing/receiving possibly nothing is just not what you want. There are rather few cases where nothing is an option.
Each case where an optional is used must be thoroughly documented: under which circumstances will a method return nothing? When is it okay to hand nothing to a method, and what will the consequences be? That is a lot of documentation overhead.
Then there is also conciseness: if you use an API that uses optionals all over the place, your code will be cluttered with nil checks. Of course, if every use of an optional is intentional, then these checks are fine and necessary. However, if the API only uses optionals because its author was lazy, then the checks are unnecessary, pure boilerplate.
But beware!
My answer may sound as if the concept of optionals is quite crappy. The opposite is true! By having a concept like optionals, the programmer is able to declare whether handing in/returning nothing is okay. The caller of a function is always aware of that, and the compiler enforces safety. Compare that to plain old C: you could not declare whether a pointer could be null. You could add documentation comments stating whether it may be null, but such comments were not enforced by the compiler. If the caller forgot to check a return value for null, you got a segfault. With optionals, you can be sure that no one dereferences a nil pointer anymore.
So in conclusion, a null-safe type system is one of the major advances in modern programming languages.
The original idea of optionals (which existed long before Swift) is to force the programmer to check a value for nil before using it, or to prevent outside code from passing nil where it is not allowed. A huge part of crashes in software, maybe even most of them, happen at address 0x00000000 (or with a NullPointerException, or the like) precisely because it is way too easy to forget about the nil-pointer scenario. (In 2009, Tony Hoare apologized for inventing null pointers.)
Not using optionals is as valid and widespread a use case as using them: when the value absolutely cannot be missing, there should be a non-optional type; when it can, there should be an optional.
But currently the existing frameworks are written in Obj-C, without optionals in mind, so the automatically generated bridges between Swift and Obj-C just have to take and return optionals, because it is impossible to automatically and deeply analyze each method and figure out which arguments and return values should be optional. I'm sure that over time Apple will manually fix every case where they got it wrong; right now you should not use those frameworks as an example, because they are definitely not a good one. (For good examples, you could check a popular functional language like Haskell, which has had optionals since the beginning.)