Why are JSON-RPC error codes negative integers?

As we can see from the JSON-RPC specification, the error object has a required member code that must be an integer.
But why are the codes reserved for the pre-defined errors negative integers?
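For reference, a minimal error response carrying one of the reserved codes might look like this (a sketch; -32601 is the spec's pre-defined "Method not found" code, and the id is illustrative):
{
  "jsonrpc": "2.0",
  "error": { "code": -32601, "message": "Method not found" },
  "id": 1
}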

Related

Instantiation in MiniZinc

I am reading through "A MiniZinc Tutorial" by Kim Marriott, and it says that
the combination of variable instantiation and type is called type-inst. As you start to use MiniZinc, you will undoubtedly see examples of type-inst errors.
What exactly are type-inst errors?
I believe the terminology is not often used in the MiniZinc literature these days, but for every value in MiniZinc the compiler keeps track of two things: its type (int, bool, float, etc.) and whether it is a decision variable (not known at solve time) or a problem parameter (must be known when rewriting the model for the solver). Together these two things are called the type instantiation, or type-inst.
A type-inst error is an error given by the type checker of the compiler. These errors can occur in many places, such as when the declared type instantiation in a declaration doesn't match its right-hand side, when the two sides of an if-then-else have different type instantiations, or when the arguments of a call do not match the declared type instantiation of the function declaration.
The mismatch that causes these errors can come from either part of the type-inst: either the types are incompatible (e.g., a float was used where a bool was expected), or a decision variable was used where only a problem parameter is allowed. These issues are usually caused by mistakes in the model and are usually resolved easily by changing the value used or by using different language constructs.
Note that MiniZinc does allow sub-typing: you are allowed to use a bool instead of an int and it is converted to a 0/1 value. Similarly, you can use an integer value instead of a float, and you can use a parameter in place of a variable.
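For illustration, a minimal sketch of both kinds of mismatch (these declarations are invented for this example, not taken from the tutorial):
var int: x;          % decision variable: its value is only known at solve time
int: n = x;          % type-inst error: a parameter (par int) cannot be defined by a var int
var int: y = true;   % allowed: bool is sub-typed into int, so true becomes 1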
The newest version of the MiniZinc Tutorial can be found with its documentation: https://www.minizinc.org/doc-latest/en/part_2_tutorial.html

Precondition failed: Negative count not allowed

Error:
Precondition failed: Negative count not allowed: file /BuildRoot/Library/Caches/com.apple.xbs/Sources/swiftlang/swiftlang-900.0.74.1/src/swift/stdlib/public/core/StringLegacy.swift, line 49
Code:
String(repeating: "a", count: -1)
Thinking:
Well, it doesn't make sense to repeat a string a negative number of times. Since we have types in Swift, why not use a UInt?
Here we have some documentation about it.
Use UInt only when you specifically need an unsigned integer type with the same size as the platform’s native word size. If this isn’t the case, Int is preferred, even when the values to be stored are known to be nonnegative. A consistent use of Int for integer values aids code interoperability, avoids the need to convert between different number types, and matches integer type inference, as described in Type Safety and Type Inference.
Apple Docs
OK, so Int is preferred and the API is just following the rules, but why is the String API designed like that? Why isn't this initializer private, with a public one taking a UInt or something like that? Is there a "real" reason? Is this some "undefined behavior" kind of thing?
Also: https://forums.developer.apple.com/thread/98594
This isn't undefined behavior; in fact, a precondition indicates the exact opposite: an explicit check was made to ensure that the given count is nonnegative.
As to why the parameter is an Int and not a UInt, this is a consequence of two decisions made early in the design of Swift:
1. Unlike C and Objective-C, Swift does not allow implicit (or even explicit) casting between integer types. You cannot pass an Int to a function which takes a UInt, and vice versa, nor will the following cast succeed: myInt as? UInt. Swift's preferred method of converting is using initializers: UInt(myInt).
2. Since Ints are more generally applicable than UInts, they would be the preferred integer type.
As such, since converting between Ints and UInts can be cumbersome and verbose, the easiest way to interoperate between the largest number of APIs is to write them all in terms of the common integer currency type: Int. As the docs you quote mention, this "aids code interoperability, avoids the need to convert between different number types, and matches integer type inference"; trapping at runtime on invalid input is a tradeoff of this decision.
In fact, Int is so strongly ingrained in Swift that when Apple framework interfaces are imported into Swift from Objective-C, NSUInteger parameters and return types are converted to Int and not UInt, for significantly easier interoperability.
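For example, here is a sketch of the caller-side guard these rules lead to (the names n and validCount are illustrative):
let n = -1

// No implicit conversion exists, and UInt(n) would trap for a negative n.
// UInt(exactly:) returns nil instead, so it doubles as a validity check.
if let validCount = UInt(exactly: n) {
    // Convert back to Int, the currency type the String API expects.
    print(String(repeating: "a", count: Int(validCount)))
} else {
    print("\(n) is not a valid count")
}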

What are the semantics of Long.toInt in Scala?

If long.isValidInt, then obviously, it evaluates to the corresponding Int value.
But what if it's not? Is it equivalent to simply dropping the leading bits?
Is it equivalent to simply dropping the leading bits?
Yes. To verify this you can either just try it or refer to the following section of the Scala specification:
Conversion methods toByte, toShort, toChar, toInt, toLong, toFloat, toDouble which convert the receiver object to the target type, using the rules of Java's numeric type cast operation. The conversion might truncate the numeric value (as when going from Long to Int or from Int to Byte) or it might lose precision (as when going from Double to Float or when converting between Long and Float).
And the corresponding section of the Java specification:
A narrowing conversion of a signed integer to an integral type T simply discards all but the n lowest order bits, where n is the number of bits used to represent type T. In addition to a possible loss of information about the magnitude of the numeric value, this may cause the sign of the resulting value to differ from the sign of the input value.
Why this isn't just described in the ScalaDocs for the toInt method, I don't know.
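A quick check makes the truncation concrete (the values are chosen for illustration):
val big = 0x100000001L      // bit 32 is set; the low 32 bits equal 1
println(big.toInt)          // prints 1: the leading 32 bits are simply dropped
println(4294967295L.toInt)  // prints -1: the low 32 bits are all ones, so the sign flips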

What does to_unsigned do?

Could someone please explain to me how VHDL's to_unsigned works, or confirm that my understanding is correct?
For example:
C(30 DOWNTO 0) <= std_logic_vector (to_unsigned(-30, 31))
Here is my understanding:
-30 is a signed value, represented in bits as 1111111111100010
all bits should be inverted and '1' added to build the value of C
0000000000011101 + 0000000000000001 == 0000000000011110
In IEEE package numeric_std, the declaration for TO_UNSIGNED:
-- Id: D.3
function TO_UNSIGNED (ARG, SIZE: NATURAL) return UNSIGNED;
-- Result subtype: UNSIGNED(SIZE-1 downto 0)
-- Result: Converts a non-negative INTEGER to an UNSIGNED vector with
-- the specified SIZE.
You won't find a declared function to_unsigned with an argument or size that are declared as type integer. What is the consequence?
Let's put that in a Minimal, Complete, and Verifiable example:
library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;
entity what_to_unsigned is
end entity;
architecture does of what_to_unsigned is
signal C: std_logic_vector (31 downto 0);
begin
C(30 DOWNTO 0) <= std_logic_vector (to_unsigned(-30, 31));
end architecture;
A VHDL analyzer will give us an error:
ghdl -a what_to_unsigned.vhdl
what_to_unsigned.vhdl:12:53: static constant violates bounds
ghdl: compilation error
And it tells us -30 (line 12, character 53) has a bounds violation, meaning in this case that the numeric literal, converted to universal_integer, doesn't convert to type natural in the call to to_unsigned.
A different tool might tell us a bit more graphically:
nvc -a what_to_unsigned.vhdl
** Error: value -30 out of bounds 0 to 2147483647 for parameter ARG
File what_to_unsigned.vhdl, Line 12
C(30 DOWNTO 0) <= std_logic_vector (to_unsigned(-30, 31));
^^^
And actually tells us where in the source code the error is found.
It's safe to say what you think to_unsigned does is not what the analyzer thinks it does.
VHDL is a strongly typed language; you tried to provide a value in a place where that value is out of range for the argument ARG in function TO_UNSIGNED declared in IEEE package numeric_std.
The type NATURAL is declared in package STANDARD and is made visible by an implicit context clause, library std; use std.standard.all; (see IEEE Std 1076-2008, 13.2 Design libraries):
Every design unit except a context declaration and package STANDARD is assumed to contain the following implicit context items as part of its context clause:
library STD, WORK; use STD.STANDARD.all;
The declaration of NATURAL is found in 16.3 Package STANDARD:
subtype NATURAL is INTEGER range 0 to INTEGER'HIGH;
NATURAL is a subtype of INTEGER with a constrained range that excludes negative numbers.
And about here you can see that you could have answered this question yourself, given access to a standard-compliant VHDL tool and a copy of IEEE Std 1076-2008, the IEEE Standard VHDL Language Reference Manual.
The TL;DR detail
You could note that 9.4 Static expressions, 9.4.1 General gives permission to evaluate locally static expressions during analysis:
Certain expressions are said to be static. Similarly, certain discrete ranges are said to be static, and the type marks of certain subtypes are said to denote static subtypes.
There are two categories of static expression. Certain forms of expression can be evaluated during the analysis of the design unit in which they appear; such an expression is said to be locally static.
Certain forms of expression can be evaluated as soon as the design hierarchy in which they appear is elaborated; such an expression is said to be globally static.
There may be some standard-compliant tools that do not evaluate locally static expressions during analysis: "can be" is permissive, not mandatory. The two VHDL tools demonstrated on the above code example take advantage of that permission. In both tools the command line argument -a tells the tool to analyze the provided file, which, if successful, is inserted into the current working library (WORK by default; see 13.5 Order of analysis, 13.2 Design libraries).
Tools that evaluate bounds checking at elaboration for locally static expressions are typically purely interpretive and even that can be overcome with a separate analysis pass.
The VHDL language can be used for formal specification of a design model used in formal proofs within the bounds specified by Annex D Potentially nonportable constructs and when relying on pure functions only (see 4. Subprograms and packages, 4.1 General).
Standard-compliant VHDL tools are guaranteed to give the same results, although there is no standardization of error messages, nor are limitations placed on tool implementation methodology.
to_unsigned is for converting between different types:
signal i : integer := 2;
signal u : unsigned(3 downto 0);
...
u <= i; -- Error, incompatible types
u <= to_unsigned(i, 4); -- OK, conversion function does the right thing
If you try to convert a negative integer, this is an error.
u <= to_unsigned(-2, 4); -- Error, does not work with negative numbers
If you simply want to negate an integer, i.e. 2 becomes -2, 5 becomes -5, just use the - operator:
u <= to_unsigned(-i, 4); -- OK as long as `i` was negative or zero
If you want the absolute value, VHDL's predefined abs operator does the job:
u <= to_unsigned(abs(i), 4);
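If the goal was the two's-complement bit pattern of -30, numeric_std's to_signed is the conversion that accepts negative arguments (a sketch based on the question's 31-bit slice):
C(30 downto 0) <= std_logic_vector(to_signed(-30, 31)); -- OK: ARG of to_signed is INTEGER, so -30 is in range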

JSONKit changing float values when decoding a JSON object from a web service on iPhone

Float values are getting changed after parsing with JSONKit. The problem occurs after calling objectFromJSONString or mutableObjectFromJSONString.
The JSON response is fine before this method is triggered in JSONKit.m:
static id _NSStringObjectFromJSONString(NSString *jsonString, JKParseOptionFlags parseOptionFlags, NSError **error, BOOL mutableCollection)
Original response:
"value":"1002.65"
Response after calling objectFromJSONString:
"value":"1002.6500000001" or sometimes "value":"1002.649999999 "
Thanks.
This is not an issue.
The value 1002.65 cannot be represented exactly as an IEEE 754 floating-point number.
JSONKit converts floating-point numbers to their decimal representation using the printf format conversion specifier %.17g.
From the Docs:
The C double primitive type, or IEEE 754 Double 64-bit floating-point, is used to represent floating-point JSON Number values. JSON that contains floating-point Number values that can not be represented as a double (i.e., due to over or underflow) will fail to parse and optionally return a NSError object. The function strtod() is used to perform the conversion. Note that the JSON standard does not allow for infinities or NaN (Not a Number). The conversion and manipulation of floating-point values is non-trivial. Unfortunately, RFC 4627 is silent on how such details should be handled. You should not depend on or expect that when a floating-point value is round tripped that it will have the same textual representation or even compare equal. This is true even when JSONKit is used as both the parser and creator of the JSON, let alone when transferring JSON between different systems and implementations.
Source: See this thread https://github.com/johnezang/JSONKit/issues/110
Solution: You can specify a precision when converting the float to a string for output. NSNumberFormatter is a good choice, or use a printf-style solution like the one in the other answer.
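A minimal sketch of the NSNumberFormatter approach (two fraction digits, grouping disabled; the input value is taken from the question):
NSNumberFormatter *formatter = [[NSNumberFormatter alloc] init];
formatter.numberStyle = NSNumberFormatterDecimalStyle;
formatter.usesGroupingSeparator = NO;
formatter.minimumFractionDigits = 2;
formatter.maximumFractionDigits = 2;
NSString *display = [formatter stringFromNumber:@(1002.6500000001)];
// display is now @"1002.65"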
Use a fixed-point format specifier when printing the float, like:
NSLog(@"value = %.2f", floatvalue);
Now it will show value = 1002.65.