Dart equivalent of long? [duplicate] - flutter

This question already has answers here:
How do I declare 64bit unsigned int in dart/flutter?
(2 answers)
Closed 3 years ago.
What is the equivalent of Java's long datatype in Dart? Should int or long be used?

In Java:
long: The long data type is a 64-bit two's complement integer. The signed long has a minimum value of -2^63 and a maximum value of 2^63-1. In Java SE 8 and later, you can use the long data type to represent an unsigned 64-bit long, which has a minimum value of 0 and a maximum value of 2^64-1. Use this data type when you need a range of values wider than those provided by int. The Long class also contains methods like compareUnsigned, divideUnsigned etc to support arithmetic operations for unsigned long.
In Dart:
int
Integer values no larger than 64 bits, depending on the platform. On the Dart VM, values can be from -2^63 to 2^63 - 1. Dart that’s compiled to JavaScript uses JavaScript numbers, allowing values from -2^53 to 2^53 - 1.
So you can use int in Dart as the equivalent of long in Java. But beware of the caveat when compiling to JavaScript.

Memory occupied by a string variable in Swift [duplicate]

This question already has answers here:
Swift: How to use sizeof?
(5 answers)
Calculate the size in bytes of a Swift String
(1 answer)
Closed 5 years ago.
I want to find the memory occupied by a string variable in bytes.
Let's suppose we have a variable named test
let test = "abvd"
I want to know how to find the size of test at runtime.
I have checked the details in Calculate the size in bytes of a Swift String
But this question is different.
According to apple, "Behind the scenes, Swift’s native String type is built from Unicode scalar values. A Unicode scalar is a unique 21-bit number for a character or modifier, such as U+0061 for LATIN SMALL LETTER A ("a"), or U+1F425 for FRONT-FACING BABY CHICK ("🐥")." This can be found in https://developer.apple.com/library/content/documentation/Swift/Conceptual/Swift_Programming_Language/StringsAndCharacters.html
So if that's the case, Apple is actually using a fixed-size representation for Unicode code points instead of the dynamic UTF-8 encoding.
I wanted to verify this claim.
Thanks in advance.
To your real goal: neither understanding is correct. Strings do not promise an internal representation. They can hold a variety of representations, depending on how they're constructed. In principle they can even take zero real memory if they are statically defined in the binary and memory mapped (I can't remember if the StaticString type makes use of this fully yet). The only way you're going to answer this question is to look at the current implementation, starting in String.swift, and then moving to StringCore.swift, and then reading the rest of the string files.
To your particular question, this is probably the beginning of the answer you're looking for (but again, this is current implementation; it's not part of any spec):
/// The core implementation of a highly-optimizable String that
/// can store both ASCII and UTF-16, and can wrap native Swift
/// _StringBuffer or NSString instances.
The end of the answer you're looking for is "it's complicated."
Note that if you ask for MemoryLayout.size(ofValue: test), you're going to get a surprising result (24), because that's just measuring the container. There is a reference type inside the container (which takes one word of storage for a pointer). There's no mechanism to determine "all the storage used by this value" because that's not very well defined when pointers get involved.
String only has one property:
var _core: _StringCore
And _StringCore has the following properties:
public var _baseAddress: UnsafeMutableRawPointer?
var _countAndFlags: UInt
public var _owner: AnyObject?
Each of those takes one word (8 bytes on a 64-bit platform) of storage, so 24 bytes total. It doesn't matter how long the string is.
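If you want to see the container-versus-contents distinction for yourself, here is a minimal sketch (the exact container size depends on the Swift version: 24 bytes in the Swift 3/4 _StringCore layout described above, 16 bytes since Swift 5):
let test = "abvd"
let longer = String(repeating: "abvd", count: 1_000)
// Size of the String struct itself (the container), independent of the contents.
print(MemoryLayout<String>.size)          // 24 on Swift 3/4, 16 on Swift 5 and later
print(MemoryLayout.size(ofValue: test))   // same value
print(MemoryLayout.size(ofValue: longer)) // still the same value
// The encoded length of the characters is a separate question entirely.
print(test.utf8.count)                    // 4
print(longer.utf8.count)                  // 4000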

Using signed integers instead of unsigned integers [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 6 years ago.
In software development, it's usually a good idea to take advantage of compiler errors. Letting the compiler work for you by checking your code makes sense. In strongly-typed languages, if a variable only has two valid values, you'd make it a boolean or define an enum for it. Swift takes this further with the Optional type.
In my mind, the same would apply to unsigned integers: if you know a negative value is impossible, program in a way that enforces it. I'm talking about high-level APIs; not low-level APIs where the negative value is usually used as a cryptic error signalling mechanism.
And yet Apple suggests avoiding unsigned integers:
Use UInt only when you specifically need an unsigned integer type with the same size as the platform’s native word size. If this is not the case, Int is preferred, even when the values to be stored are known to be non-negative. [...]
Here's an example: Swift's Array.count returns an Int. How can one possibly have a negative number of items?!
Why?!
Apple states that:
A consistent use of Int for integer values aids code interoperability, avoids the need to convert between different number types, and matches integer type inference, as described in Type Safety and Type Inference.
But I don't buy it! Using Int wouldn't aid "interoperability" any more than UInt, since Int could resolve to Int32 or Int64 (on 32-bit and 64-bit platforms respectively).
If you care about robustness at all, using signed integers where it makes no logical sense essentially forces you to do an additional check (what if the value is negative?).
I can't see the act of casting between signed and unsigned as being anything other than trivial. Wouldn't that simply tell the compiler whether to use signed or unsigned instructions in the generated machine code?!
Casting back and forth between signed and unsigned integers is extremely bug-prone on one side, while adding little value on the other.
One reason to have unsigned int that you suggest, an implicit guarantee that an index never gets a negative value, is a bit speculative. Where would the potential negative value come from? From the code, of course: either from a static value or from a computation. But in both cases, for a static or a computed value to be able to go negative, it must be handled as a signed integer. Therefore, it is the language implementation's responsibility to introduce all sorts of checks every time you assign a signed value to an unsigned variable (or vice versa). This means we are not talking about being forced "to do an additional check" or not, but about having this check made for us implicitly by the language every time we feel too lazy to bother with corner cases.
Conceptually, signed and unsigned integers come into the language from the low level (machine code). In other words, unsigned integers are in the language not because the language needs them, but because they are directly bridgeable to machine instructions and hence allow a performance gain just for being native. There is no other big reason behind them. Therefore, if one has even a hint of portability in mind, one would say "Be it Int and that is it. Let developers write clean code; we bring the rest."
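For what it's worth, Swift already surfaces exactly that check: there is no implicit conversion between Int and UInt, and the explicit conversions either trap, fail, or reinterpret the bits. A minimal sketch (someComputation is a hypothetical stand-in for code that might produce a negative value):
func someComputation() -> Int { return -1 }    // hypothetical stand-in
let count = someComputation()
// let index: UInt = count                     // compile error: no implicit Int -> UInt conversion
// let trapped = UInt(count)                   // would trap at runtime for a negative value
let checked = UInt(exactly: count)             // nil when the value is negative, no trap
let bits = UInt(bitPattern: count)             // reinterprets the bits: -1 becomes UInt.max
print(checked as Any, bits)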
As long as we have an opinion-based question...
Basing programming language mathematical operations on machine register size is one of the great travesties of Computer Science. There should be Integer*, Rational, Real and Complex - done and dusted. You need something that maps to a U8 Register for some device driver? Call it a RegisterOfU8Data - or whatever - just not 'Int(eger)'
*Of course, calling it an 'Integer' means it 'rolls over' to an unlimited range, aka BigNum.
Sharing what I've discovered which indirectly helps me understand... at least in part. Maybe it ends up helping others?!
After a few days of digging and thinking, it seems part of my problem boils down to the usage of the word "casting".
As far back as I can remember, I've been taught that casting was very distinct and different from converting in the following ways:
Converting kept the meaning but changed the data.
Casting kept the data but changed the meaning.
Casting was a mechanism allowing you to inform the compiler how both it and you would be manipulating some piece of data (no changing of data, thus no cost). "Me to the compiler: Okay! Initially I told you this byte was a number because I wanted to perform math on it. Now, let's treat it as an ASCII character."
Converting was a mechanism for transforming the data into different formats. "Me to the compiler: I have this number, please generate an ASCII string that represents that value."
My problem, it seems, is that in Swift (and most likely other languages) the line between casting and converting is blurred...
Case in point, Apple explains that:
Type casting in Swift is implemented with the is and as operators. […]
var x: Int = 5
var y: UInt = x as UInt // Casting... Compiler refuses claiming
// it's not "convertible".
// I don't want to convert it, I want to cast it.
If "casting" is not this clearly defined action it could explain why unsigned integers are to be avoided like the plague...

count(*) type compatibility error with Database.PostgreSQL.Simple?

The error is
*** Exception: Incompatible {errSQLType = "int8", errHaskellType = "Int", errMessage = "types incompatible"}
It looks like any value returned by count(*) in the query must be converted into Integer rather than Int. If I change those specific variables to type Integer, the queries work.
But this error wasn't being raised on another machine with the exact same code. The first machine was 32-bit and the other one 64-bit. That's the only difference I could discern.
Does anyone have any insight into what is going on?
The PostgreSQL count() function returns a bigint, see
http://www.postgresql.org/docs/9.2/static/functions-aggregate.html
A bigint is 8 bytes,
see http://www.postgresql.org/docs/9.2/static/datatype-numeric.html
Haskell's Int is only guaranteed to cover the range -2^29 .. 2^29-1, which on a 32-bit build makes it effectively a 4-byte integer.
http://www.haskell.org/ghc/docs/latest/html/libraries/base/Data-Int.html
So it's expected that PostgreSQL (or its Haskell binding) will not do an implicit narrowing conversion and lose precision.
So use Haskell's Int64 type, or cast count(*) to integer in the query.
As documented in the FromField module, postgresql-simple will only do client-side conversions between numerical types when there isn't any possibility of overflow or loss of precision. Note especially the list of types in the haddocks for the instance FromField Int: "int2, int4, and if compiled as 64-bit code, int8 as well. This library was compiled as 32-bit code." The latter part of that comment is of course specific to the build that hackage itself performs.
On 32-bit platforms, Int is a 32-bit integer, and on 64-bit platforms, Int is a 64-bit integer. If you use Int32 you'll get the same exception. You can use Int64 or the arbitrary-precision Integer type to avoid this problem on both kinds of platform.

atoi() is not converting properly

I was trying to call atoi on the strings 509951644 and 4099516441. The first one got converted without any problem. The second one is giving me the decimal value 2,147,483,647 (0x7FFFFFFF). Why is this happening?
Your second integer is creating an overflow. The maximum 32-bit signed integer is 2147483647.
It's generally not recommended to use atoi anyway; use strtol instead, which actually tells you if your value is out of range. (The behavior of atoi is undefined when the input is out of range; yours seems to be simply spitting out the maximum int value.)
You could also check if your compiler has something like an atoi64 function, which would let you work with 64-bit values.
2147483647 is the maximum value of a signed int in C (on platforms where int is 32 bits). atoi is giving you the largest value it can; the original number is too large to fit in a signed int. I suggest looking up how to convert it into an unsigned int (or a wider type) instead.
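As a side note, the same out-of-range behaviour is easy to observe in languages whose parsers report range errors rather than clamping; purely as an analogy to the strtol advice above, a minimal Swift sketch:
let first = "509951644"
let second = "4099516441"
print(Int32(first) as Any)    // Optional(509951644): fits in a 32-bit signed int
print(Int32(second) as Any)   // nil: out of range for a 32-bit signed int
print(Int64(second) as Any)   // Optional(4099516441): fits once you move to 64 bits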

How should I declare a long in Objective-C? Is NSInteger appropriate?

I see NSInteger used quite often, and the typedef for it on the iPhone is a long, so technically I could use it when I expect 64-bit int values. But should I be more explicit and use something like int64_t or long directly? What would be the downside of just using long?
IIRC, long on the iPhone/ARM is 32 bits. If you want a guaranteed 64-bit integer, you should (indeed) use int64_t.
Integer Data Types Sizes
short - ILP32: 2 bytes; LP64: 2 bytes
int - ILP32: 4 bytes; LP64: 4 bytes
long - ILP32: 4 bytes; LP64: 8 bytes
long long - ILP32: 8 bytes; LP64: 8 bytes
It may be useful to know that:
The compiler defines the __LP64__ macro when compiling for the 64-bit runtime.
NSInteger is a typedef that is 32 bits wide in a 32-bit environment (where it is an int) and 64 bits wide in a 64-bit environment (where it is a long).
When converting to 64-bit you can simply replace all your ints and longs with NSInteger and you should be good to go.
Important: pay attention to data alignment. LP64 uses natural alignment for all integer data types, but ILP32 uses 4-byte alignment for all integer data types whose size is 4 bytes or greater.
You can read more about 32 to 64 bit conversion in the Official 64-Bit Transition Guide for Cocoa Touch.
Answering your questions:
How should I declare a long in Objective-C? Is NSInteger appropriate?
You can use either long or NSInteger, but NSInteger is more idiomatic IMHO.
But should I be more explicit and use something like int64_t or long directly?
If you expect consistent 64-bit sizes, neither long nor NSInteger will do; you'll have to use int64_t (as Wevah said).
What would be the downside of just using long?
It's not idiomatic and you may have problems if Apple rolls out a new architecture again.
If you need a type of known specific size, use the type that has that known specific size: int64_t.
If you need a generic integer type and the size is not important, go ahead and use int or NSInteger.
NSInteger's length depends on whether you are compiling for 32-bit or 64-bit: it's defined as int for 32-bit and long for 64-bit.
So on iPhone the length of NSInteger is the same as the length of a long, which is compiler dependent. Most compilers make long the same length as the native word, i.e. 32 bits on 32-bit architectures and 64 bits on 64-bit architectures.
Given the uncertainty over the width of NSInteger, I use it only for variables that will be passed to Cocoa APIs where NSInteger is specified. If I need a fixed-width type, I go for the ones defined in stdint.h. If I don't care about the width, I use the built-in C types.
If you want to declare something as long, declare it as long. Be aware that long can be 32 or 64 bits, depending on the compiler.
If you want to declare something to be as efficient as possible, and big enough to count items, use NSInteger or NSUInteger. Note that both can be 32 or 64 bits, and can actually be different underlying types (int or long) depending on the compiler, which protects you from mixing up types in some cases.
If you want exactly 32 or 64 bits, and nothing else, use int32_t, uint32_t, int64_t, or uint64_t. Be aware that either width can be unnecessarily inefficient on some compilers.
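If you end up calling these APIs from Swift, the same distinction carries over: NSInteger bridges to Int, which is word-sized, while the stdint.h types correspond to the fixed-width Int32/Int64. A minimal sketch:
print(MemoryLayout<Int>.size)     // word-sized, like NSInteger: 8 on 64-bit platforms, 4 on 32-bit
print(MemoryLayout<Int64>.size)   // always 8, the analogue of int64_t
print(MemoryLayout<Int32>.size)   // always 4, the analogue of int32_t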