Unclear about use of undefined SDK enum constants at runtime

It might be best to start with an example: In OS X the following enum constants are defined in Foundation/NSString.h:
NSCaseInsensitiveSearch = 1,
NSLiteralSearch = 2,
NSRegularExpressionSearch NS_ENUM_AVAILABLE(10_7, 3_2) = 1024
Questions:
At compile time, does the compiler simply replace NSRegularExpressionSearch with its constant value (1024)?
Or, is the constant value found at runtime, and if so, then what value is the constant when running on pre 10.7?
Is it advisable to conditionally check at runtime which environment the program is running in before using the enum constant?
Is it always safe to put NSRegularExpressionSearch in my code even if it will run on a pre-10.7 runtime? (By safe I mean the presence of the constant alone won't cause a crash or exception; obviously I must account for the program's behavior when I use a constant value that an older API doesn't recognize.)
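For reference, here is a minimal plain-C sketch of the mechanism the question is reasoning about (the My... names are made-up stand-ins for the Foundation constants): an enum constant is a compile-time integer constant, so the compiler folds the name into its value and nothing is looked up at run time.
#include <stdio.h>

enum {
    MyCaseInsensitiveSearch   = 1,
    MyLiteralSearch           = 2,
    MyRegularExpressionSearch = 1024   /* stands in for NSRegularExpressionSearch */
};

int main(void) {
    /* The compiler replaces the name with the literal 1024 here; merely
       mentioning the constant cannot fail at run time, although an older
       API may simply not recognise the value. */
    printf("%d\n", MyRegularExpressionSearch);
    return 0;
}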

Instantiation in MiniZinc

I am reading through "A MiniZinc Tutorial" by Kim Marriott, and it says that
the combination of variable instantiation and type is called type-inst. As you start to use MiniZinc, you will undoubtedly see examples of type-inst errors.
What exactly are type-inst errors?
I believe the terminology is not often used in the MiniZinc literature these days, but for every value in MiniZinc the compiler keeps track of two things: its type (int, bool, float, etc.) and whether it is a decision variable (not known at solve time) or a problem parameter (must be known when rewriting the model for the solver). Together these two things are called the type instantiation, or type-inst.
A type-inst error is an error given by the type checker of the compiler. These errors can occur in many places: when the declared type-inst in a declaration doesn't match its right-hand side, when the two sides of an if-then-else have different type-insts, or when the arguments of a call do not match the declared type-insts of the function declaration.
The mismatch that causes these errors can come from either part of the type-inst: either the types are incompatible (e.g., a float was used where a bool was expected), or a decision variable was used where only a problem parameter is allowed. These issues are usually caused by mistakes in the model and can generally be resolved by changing the value used or by using different language constructs.
Note that MiniZinc does allow some sub-typing: you are allowed to use a bool instead of an int (it is converted to a 0/1 value), you can use an integer value instead of a float, and you can use a parameter in place of a variable.
The newest version of the MiniZinc Tutorial can be found with its documentation: https://www.minizinc.org/doc-latest/en/part_2_tutorial.html

What is the use of minizinc fix function?

I see that the fix documentation says:
http://www.minizinc.org/doc-lib/doc-builtins-reflect.html#Ifunction-dd-T-cl-fix-po-var-opt-dd-T-cl-x-pc
function array [$U] of $T: fix(array [$U] of var opt $T: x)
Check if the value of every element of the array x is fixed at this point in evaluation. If all are fixed, return an array of their values, otherwise abort.
I am thinking it can be used to coerce a var to a par.
Here is the code.
array [1..num] of var int: value;
%% generate random numbers from 0..num-1; this should fix the value of the var "value", or so I think
constraint forall(i in index_set(value)) (
  let { int: temp_value = discrete_distribution([1 | i in index_set(value)]); }
  in value[i] = trace(show(temp_value) ++ "\n", temp_value)
);
%%% this I was expecting to work, as the "value" elements are fixed above
array [1..num] of int: value__ = [ trace(show(fix(value[i])), fix(value[i])) | i in index_set(value) ];
But I get:
MiniZinc: evaluation error:
with i = 1
in call 'trace'
in call 'fix'
expression is not fixed
My questions are:
1) Should I expect this error, given that MiniZinc is not a sequential execution language?
2) The examples of fix in the user guide only appear in output statements. Is that the only place to use fix?
3) How would I coerce a var to a par?
By the way, I am trying this var-to-par conversion because I am having a problem with an array generator expression. Here is the code:
int: num__ = 200;
int: seed = 134;
int: two_m = 2097184;
%% prepare weights for generating numbers from 1..(two_m div 64), basically the same weight
array [1..(two_m div 64)] of int: value_6_wt = [ seed+1 | i in 1..(two_m div 64) ];
%% generate numbers. this does not work; it gives:
%% in variable declaration for 'value6'
%% parameter value out of range
array [1..num__] of int: value6 = [ discrete_distribution(value_6_wt) | j in 1..num__ ];
In the MiniZinc language the difference between a parameter and a variable is only the fact that a parameter must have a value at compile time. Within the compiler we turn as many variables into parameters as we can. This saves the solver from having to do some work. When we know that a variable has been turned into a parameter, then we can use the fix function to convince the type system that we really can use this variable as a parameter and see its value.
The problem here, however, is that fix is defined to abort when the variable is not fixed to one value. Using it without testing requires some (almost magic) knowledge about the compilation process. In your case it seems that the second array is evaluated before the optimisation stage, in which all aliasing is resolved. This is why it does not work. (This is indeed a consequence of MiniZinc being a declarative language.)
Although fix might only appear in output statements in the examples (where it's guaranteed to work), it is used in many locations in the MiniZinc libraries. If we for example look at the library that is used for MIP solvers, there are many constraints that can be encoded more efficiently if one of the arguments is a parameter. Therefore, you will often see that a constraint in this library first tests its arguments with is_fixed, and then uses a better encoding if this returns true.
The output statement and a true result from is_fixed both guarantee that a variable is fixed, so the compilation won't abort. There is no other way to coerce a variable to a parameter, but unless you are dealing with dependent predicate definitions, you can just trust the MiniZinc compiler to ensure that the resulting FlatZinc will contain a parameter instead of a variable.

How can single and double type variables work in the same copy of code in MATLAB, like templates in C++?

I am writing a signal processing program in MATLAB. I know there are two floating-point variable types, single and double. Considering memory usage, I want my code to work with only single-precision variables when the system's memory is small, while also being adaptable to double precision when necessary, without significant modification (a simple, light modification before running is OK, i.e., I don't need a runtime-check technique). I know this can be done with macros in C and with templates in C++. I haven't found practical techniques that can do this in MATLAB. Do you have any experience with this?
I have a simple idea: I define a global string containing "single" or "double", then I pass this string to any memory allocation method called in my code to indicate what type I need. I think this can work; I just want to know which technique you use and which is widely accepted.
I cannot see how a template would help here. The types used with C++ templates are still determined at compile time (std::vector vec ...). Also note that MATLAB defines all variables as double by default unless something else is stated. You basically want runtime checks for your code. One solution I can think of is a function with a persistent variable. The variable is set once per run. You would then have to create every variable you want as a float through this function. This will slow down assignment, though, since you have to call a function to assign variables.
This example is somewhat like the singleton pattern (but not exactly). The persistent variable type is set at first use and cannot change later in the program (assuming that you do not do anything unwise such as clearing the variable explicitly). I would recommend hardcoding single in case performance is an issue, instead of having runtime checks, assignment functions, classes, or whatever else you can come up with.
function c = assignFloat(a, b)
% Cast a to the floating-point type chosen on the first call (default 'single').
persistent type;
if (isempty(type) && nargin == 2)
    type = b;              % the first call may pass 'single' or 'double'
elseif (isempty(type))
    type = 'single';       % default when no type has been supplied
% elseif (nargin == 2), error('Do not set twice!') % Optional code, imo unnecessary.
end
if (strcmp(type, 'single'))
    c = single(a);
    return;
end
c = double(a);
end

Initialiser element is not a compile time constant

In my constants file, I have included the line below:
NSString * ALERT_OK = NSLocalizedString(@"Ok", @"Ok");
After this, when I try to compile, I receive the error below:
Initialiser element is not a compile time constant
How can I debug this?
The problem is that NSLocalizedString is a function which returns different values depending on the language. It is not a constant that can be figured out until the program is running.
Instead, use:
#define ALERT_OK NSLocalizedString(@"Ok", @"Ok")
The preprocessor will now simply replace ALERT_OK with the function call, and you will be fine. (Note that you should use some kind of prefix for global names like this so that you don't accidentally clash with something of the same name defined somewhere else.)
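For what it is worth, plain C has the same rule: an initializer at file scope must be a compile-time constant, and a macro simply defers the call to each use site. A minimal sketch (the UNLUCKY name is made up for illustration):
#include <stdlib.h>

/* int unlucky = rand();   -- error: initializer element is not a compile-time
                               constant, because file-scope initializers are
                               evaluated before the program runs              */

#define UNLUCKY rand()     /* a macro defers the call to each use site */

int main(void) {
    int n = UNLUCKY;       /* fine: evaluated at run time, inside main */
    return n % 2;
}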

What is the difference between a strongly typed language and a statically typed language?

Also, does one imply the other?
What is the difference between a strongly typed language and a statically typed language?
A statically typed language has a type system that is checked at compile time by the implementation (a compiler or interpreter). The type check rejects some programs, and programs that pass the check usually come with some guarantees; for example, the compiler guarantees not to use integer arithmetic instructions on floating-point numbers.
There is no real agreement on what "strongly typed" means, although the most widely used definition in the professional literature is that in a "strongly typed" language, it is not possible for the programmer to work around the restrictions imposed by the type system. This term is almost always used to describe statically typed languages.
Static vs dynamic
The opposite of statically typed is "dynamically typed", which means that
Values used at run time are classified into types.
There are restrictions on how such values can be used.
When those restrictions are violated, the violation is reported as a (dynamic) type error.
For example, Lua, a dynamically typed language, has a string type, a number type, and a Boolean type, among others. In Lua every value belongs to exactly one type, but this is not a requirement for all dynamically typed languages. In Lua, it is permissible to concatenate two strings, but it is not permissible to concatenate a string and a Boolean.
Strong vs weak
The opposite of "strongly typed" is "weakly typed", which means you can work around the type system. C is notoriously weakly typed because any pointer type is convertible to any other pointer type simply by casting. Pascal was intended to be strongly typed, but an oversight in the design (untagged variant records) introduced a loophole into the type system, so technically it is weakly typed.
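As a concrete illustration of that loophole, here is a minimal C sketch: the cast is accepted without complaint, even though reinterpreting the bits this way is exactly what a strong type system would forbid (and is undefined behaviour under strict aliasing, which is rather the point):
#include <stdio.h>

int main(void) {
    int n = 42;
    /* Any object pointer converts to any other with a cast; the compiler
       happily lets us read the bits of an int as if they were a float. */
    float *f = (float *)&n;
    printf("%f\n", *f);   /* prints whatever those bits mean as a float */
    return 0;
}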
Examples of truly strongly typed languages include CLU, Standard ML, and Haskell. Standard ML has in fact undergone several revisions to remove loopholes in the type system that were discovered after the language was widely deployed.
What's really going on here?
Overall, it turns out to be not that useful to talk about "strong" and "weak". Whether a type system has a loophole is less important than the exact number and nature of the loopholes, how likely they are to come up in practice, and what are the consequences of exploiting a loophole. In practice, it's best to avoid the terms "strong" and "weak" altogether, because
Amateurs often conflate them with "static" and "dynamic".
Apparently "weak typing" is used by some people to talk about the relative prevalence or absence of implicit conversions.
Professionals can't agree on exactly what the terms mean.
Overall you are unlikely to inform or enlighten your audience.
The sad truth is that when it comes to type systems, "strong" and "weak" don't have a universally agreed on technical meaning. If you want to discuss the relative strength of type systems, it is better to discuss exactly what guarantees are and are not provided.
For example, a good question to ask is this: "is every value of a given type (or class) guaranteed to have been created by calling one of that type's constructors?" In C the answer is no. In CLU, F#, and Haskell it is yes. For C++ I am not sure—I would like to know.
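To make the C answer concrete, here is a minimal sketch; the Point type and make_point "constructor" are made up for illustration, and nothing forces a Point to come from make_point:
#include <string.h>

struct Point { int x, y; };

/* The intended "constructor": the only sanctioned way to build a Point. */
struct Point make_point(int x, int y) {
    struct Point p = { x, y };
    return p;
}

int main(void) {
    /* But nothing stops us from conjuring a Point out of raw bytes,
       bypassing make_point entirely. */
    unsigned char raw[sizeof(struct Point)] = { 0 };
    struct Point p;
    memcpy(&p, raw, sizeof p);
    return p.x;
}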
By contrast, static typing means that programs are checked before being executed, and a program might be rejected before it starts. Dynamic typing means that the types of values are checked during execution, and a poorly typed operation might cause the program to halt or otherwise signal an error at run time. A primary reason for static typing is to rule out programs that might have such "dynamic type errors".
Does one imply the other?
On a pedantic level, no, because the word "strong" doesn't really mean anything. But in practice, people almost always do one of two things:
They (incorrectly) use "strong" and "weak" to mean "static" and "dynamic", in which case they (incorrectly) are using "strongly typed" and "statically typed" interchangeably.
They use "strong" and "weak" to compare properties of static type systems. It is very rare to hear someone talk about a "strong" or "weak" dynamic type system. Except for FORTH, which doesn't really have any sort of a type system, I can't think of a dynamically typed language where the type system can be subverted. Sort of by definition, those checks are built into the execution engine, and every operation gets checked for sanity before being executed.
Either way, if a person calls a language "strongly typed", that person is very likely to be talking about a statically typed language.
This is often misunderstood so let me clear it up.
Static/Dynamic Typing
Static typing is where the type is bound to the variable. Types are checked at compile time.
Dynamic typing is where the type is bound to the value. Types are checked at run time.
So in Java for example:
String s = "abcd";
s will "forever" be a String. During its life it may point to different Strings (since s is a reference in Java). It may have a null value but it will never refer to an Integer or a List. That's static typing.
In PHP:
$s = "abcd"; // $s is a string
$s = 123; // $s is now an integer
$s = array(1, 2, 3); // $s is now an array
$s = new DOMDocument; // $s is an instance of the DOMDocument class
That's dynamic typing.
Strong/Weak Typing
(Edit alert!)
Strong typing is a phrase with no widely agreed upon meaning. Most programmers who use this term to mean something other than static typing use it to imply that there is a type discipline that is enforced by the compiler. For example, CLU has a strong type system that does not allow client code to create a value of abstract type except by using the constructors provided by the type. C has a somewhat strong type system, but it can be "subverted" to a degree because a program can always cast a value of one pointer type to a value of another pointer type. So for example, in C you can take a value returned by malloc() and cheerfully cast it to FILE*, and the compiler won't try to stop you—or even warn you that you are doing anything dodgy.
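To make the malloc() example concrete, here is a minimal C sketch; it compiles without a single diagnostic on a typical compiler:
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    /* malloc returns void*, which converts to any object pointer type,
       so the compiler accepts this even though the memory it points to
       holds nothing resembling a FILE. */
    FILE *not_really_a_file = (FILE *)malloc(64);
    free(not_really_a_file);
    return 0;
}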
(The original answer said something about a value "not changing type at run time". I have known many language designers and compiler writers and have not known one that talked about values changing type at run time, except possibly some very advanced research in type systems, where this is known as the "strong update problem".)
Weak typing implies that the compiler does not enforce a typing discipline, or perhaps that enforcement can easily be subverted.
The original version of this answer conflated weak typing with implicit conversion (sometimes also called "implicit promotion"). For example, in Java:
String s = "abc" + 123; // "abc123";
This code is an example of implicit promotion: 123 is implicitly converted to a string before being concatenated with "abc". It can be argued that the Java compiler rewrites that code as:
String s = "abc" + new Integer(123).toString();
Consider a classic PHP "starts with" problem:
if (strpos('abcdef', 'abc') == false) {
// not found
}
The error here is that strpos() returns the index of the match, being 0. 0 is coerced into boolean false and thus the condition is actually true. The solution is to use === instead of == to avoid implicit conversion.
This example illustrates how a combination of implicit conversion and dynamic typing can lead programmers astray.
Compare that to Ruby:
val = "abc" + 123
which is a runtime error because in Ruby the object 123 is not implicitly converted just because it happens to be passed to a + method. In Ruby the programmer must make the conversion explicit:
val = "abc" + 123.to_s
Comparing PHP and Ruby is a good illustration here. Both are dynamically typed languages but PHP has lots of implicit conversions and Ruby (perhaps surprisingly if you're unfamiliar with it) doesn't.
Static/Dynamic vs Strong/Weak
The point here is that the static/dynamic axis is independent of the strong/weak axis. People probably confuse them in part because strong vs weak typing is not only less clearly defined, but there is also no real consensus on exactly what is meant by strong and weak. For this reason strong/weak typing is far more a shade of grey than black or white.
So to answer your question: another way to look at this that's mostly correct is to say that static typing is compile-time type safety and strong typing is runtime type safety.
The reason for this is that variables in a statically typed language have a type that must be declared and can be checked at compile time. A strongly-typed language has values that have a type at run time, and it's difficult for the programmer to subvert the type system without a dynamic check.
But it's important to understand that a language can be Static/Strong, Static/Weak, Dynamic/Strong or Dynamic/Weak.
Both are poles on two different axes:
strongly typed vs. weakly typed
statically typed vs. dynamically typed
Strongly typed means that a variable will not be automatically converted from one type to another. Weakly typed is the opposite: Perl can use a string like "123" in a numeric context by automatically converting it into the int 123. A strongly typed language like Python will not do this.
Statically typed means that the compiler figures out the type of each variable at compile time. Dynamically typed languages only figure out the types of variables at runtime.
Strongly typed means that there are restrictions on conversions between types.
Statically typed means that the types are not dynamic - you cannot change the type of a variable once it has been created.
An answer is already given above; here I try to differentiate between the strong vs. weak and static vs. dynamic concepts.
What is Strongly typed VS Weakly typed?
Strongly Typed: Will not be automatically converted from one type to another
In strongly typed languages like Go or Python, "2" + 8 will raise a type error, because they don't allow "type coercion".
Weakly (loosely) Typed: Will be automatically converted from one type to another:
Weakly typed languages like JavaScript or Perl won't throw an error; in this case JavaScript results in '28' and Perl results in 10.
Perl Example:
my $a = "2" + 8;
print $a,"\n";
Save it as main.pl, run perl main.pl, and you will get the output 10.
What is Static VS Dynamic type?
In programming, static typing and dynamic typing are defined with respect to the point at which variable types are checked. Statically typed languages are those in which type checking is done at compile time, whereas dynamically typed languages are those in which type checking is done at run time.
Static: Types checked before run-time
Dynamic: Types checked on the fly, during execution
What does this mean?
Go checks types before run time (a static check). This means it does not just translate and type-check the code it is executing; it scans through all the code, and type errors are thrown before the code is even run. For example,
package main

import "fmt"

func foo(a int) {
    if (a > 0) {
        fmt.Println("I am feeling lucky (maybe).")
    } else {
        fmt.Println("2" + 8)
    }
}

func main() {
    foo(2)
}
Save this file as main.go and run it; you will get a compilation failure message:
go run main.go
# command-line-arguments
./main.go:9:25: cannot convert "2" (type untyped string) to type int
./main.go:9:25: invalid operation: "2" + 8 (mismatched types string and int)
But this is not the case for Python. For example, the following block of code will execute for the first foo(2) call and fail for the second foo(0) call. This is because Python is dynamically typed: it only translates and type-checks the code it is executing. The else block never executes for foo(2), so "2" + 8 is never even looked at; for the foo(0) call it tries to execute that block and fails.
def foo(a):
    if a > 0:
        print 'I am feeling lucky.'
    else:
        print "2" + 8
foo(2)
foo(0)
You will see the following output:
python main.py
I am feeling lucky.
Traceback (most recent call last):
File "pyth.py", line 7, in <module>
foo(0)
File "pyth.py", line 5, in foo
print "2" + 8
TypeError: cannot concatenate 'str' and 'int' objects
Data coercion does not necessarily mean weak typing, because sometimes it is syntactic sugar:
The example above of Java being weakly typed because of
String s = "abc" + 123;
is not an example of weak typing, because it is really doing:
String s = "abc" + new Integer(123).toString()
Data coercion is also not weak typing if you are constructing a new object.
Java is a very bad example of weak typing (and any language that has good reflection will most likely not be weakly typed), because the runtime of the language always knows what the type is (the exception might be native types).
This is unlike C. C is one of the best examples of weak typing. The runtime has no idea whether 4 bytes is an integer, a struct, a pointer, or 4 characters.
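A minimal C sketch of that point, using type punning through a union: the same four bytes can be read as an int, a float, or four characters, and nothing at run time records which reading is the "right" one:
#include <stdio.h>

int main(void) {
    union { int i; float f; char c[4]; } four_bytes;
    four_bytes.i = 0x41414141;
    printf("%d\n", four_bytes.i);    /* 1094795585     */
    printf("%f\n", four_bytes.f);    /* roughly 12.078 */
    printf("%.4s\n", four_bytes.c);  /* "AAAA"         */
    return 0;
}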
The runtime of the language really defines whether or not it is weakly typed; otherwise it is really just opinion.
EDIT:
After further thought, this is not necessarily true, as the runtime does not have to have all types reified in the runtime system to be a strongly typed system.
Haskell and ML have such complete static analysis that they can potentially omit type information from the runtime.
One does not imply the other. For a language to be statically typed it means that the types of all variables are known or inferred at compile time.
A strongly typed language does not allow you to use one type as another. C is a weakly typed language and is a good example of what strongly typed languages don't allow. In C you can pass a data element of the wrong type and the compiler will not complain. In strongly typed languages you cannot.
Strong typing probably means that variables have a well-defined type and that there are strict rules about combining variables of different types in expressions. For example, if A is an integer and B is a float, then the strict rule about A+B might be that A is cast to a float and the result returned as a float. If A is an integer and B is a string, then the strict rule might be that A+B is not valid.
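In C, for example, the integer-plus-float rule plays out exactly as described; a minimal sketch:
#include <stdio.h>

int main(void) {
    int   a = 3;
    float b = 0.5f;
    float c = a + b;   /* a is implicitly converted to float; the result is the float 3.5 */
    printf("%f\n", c);
    return 0;
}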
Static typing probably means that types are assigned at compile time (or its equivalent for non-compiled languages) and cannot change during program execution.
Note that these classifications are not mutually exclusive, indeed I would expect them to occur together frequently. Many strongly-typed languages are also statically-typed.
And note that when I use the word 'probably' it is because there are no universally accepted definitions of these terms. As you will already have seen from the answers so far.
IMHO, it is better to avoid these definitions altogether. Not only is there no agreed-upon definition of the terms, but the definitions that do exist tend to focus on technical aspects, for example: are operations on mixed types allowed, and if not, is there a loophole that bypasses the restrictions, such as working around them with pointers?
Instead, and emphasizing again that this is an opinion, one should focus on the question: does the type system make my application more reliable? That is an application-specific question.
For example: if my application has a variable named acceleration, then clearly if the way the variable is declared and used allows the assignment of the value "Monday" to acceleration it is a problem, as clearly an acceleration cannot be a weekday (or a string).
Another example: in Ada one can define subtype Month_Day is Integer range 1..31;. The type Month_Day is weak in the sense that it is not a separate type from Integer (because it is a subtype); however, it is restricted to the range 1..31. In contrast, type Month_Day is new Integer; will create a distinct type, which is strong in the sense that it cannot be mixed with integers without explicit casting - but it is not restricted and can receive the value -17, which is senseless. So technically it is stronger, but less reliable.
Of course, one can declare type Month_Day is new Integer range 1..31; to create a type which is distinct and restricted.