Is a credit/debit card number numeric or an integer?

Since a number can also be a decimal, this makes me think that a CC number should be an integer. That would make sense, as I don't think any credit cards start with 0, and they all seem to follow the same sort of pattern:
4444333322221111
So I guess they're integers, but I'm not sure what international cards are like. Do any start with 0?
Update
Thanks for all your responses. It's less about storing them (in fact I'd only store the last 4 digits) and more about doing a quick validation check. Regardless, I'd just treat it as an integer for validation, i.e. making sure that it's between 13 and 16 digits long and never a decimal.

Credit card numbers are not strictly numbers. They are strings, but the digits that make up the long 16-digit number can be split apart in order to validate the number against its checksum.
You aren't going to be doing any multiplication or division on a CC number, so it should be a string, in my opinion.
Quick Prefix, Length, and Check Digit Criteria
CARD TYPE  | Prefix    | Length | Check digit algorithm
-----------|-----------|--------|----------------------
MASTERCARD | 51-55     | 16     | mod 10
VISA       | 4         | 13, 16 | mod 10
AMEX       | 34/37     | 15     | mod 10
Discover   | 6011      | 16     | mod 10
enRoute    | 2014/2149 | 15     | any
JCB        | 3         | 16     | mod 10
JCB        | 2131/1800 | 15     | mod 10
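For example, here is a minimal Luhn ("mod 10") check sketched in Java; the method name and structure are just illustrative:

public class LuhnCheck {
    // Returns true if the card number (a string of digits) passes the
    // Luhn mod-10 checksum used by most of the card types above.
    static boolean passesLuhn(String cardNumber) {
        int sum = 0;
        boolean doubleIt = false; // double every second digit, from the right
        for (int i = cardNumber.length() - 1; i >= 0; i--) {
            int digit = cardNumber.charAt(i) - '0';
            if (doubleIt) {
                digit *= 2;
                if (digit > 9) digit -= 9; // same as adding the two digits
            }
            sum += digit;
            doubleIt = !doubleIt;
        }
        return sum % 10 == 0;
    }

    public static void main(String[] args) {
        System.out.println(passesLuhn("4444333322221111")); // true
    }
}

Note that the input is a string, so leading zeros and per-digit access both come for free.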

Don't use an integer for this.
Depending on the size of your language's integers (which is language- and machine-dependent), card numbers may be too large to store in one.
Credit card numbers are also not used as integers: there's no reason to do arithmetic with them.
You should generally regard them as arrays of decimal digits, which might most easily be stored as strings, but might merit an actual array, again depending on your programming language.
They also contain encoded banking authority information, as described in the Wikipedia article on bank card numbers, and are a special case of ISO/IEC 7812 numbers, which in fact can start with zero (though I don't think any credit cards do). If you need this information, a CC number might actually merit its own data type, and likely some banks implement one.
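If you do go that route, a minimal sketch in Java of what such a type might look like (the class and method names are purely hypothetical):

public final class CardNumber {
    private final String digits; // a string keeps leading zeros, unlike an integer

    public CardNumber(String digits) {
        // assuming 13-19 digits, the usual card-number lengths
        if (!digits.matches("\\d{13,19}")) {
            throw new IllegalArgumentException("expected 13-19 decimal digits");
        }
        this.digits = digits;
    }

    // The leading digits form the issuer identification number
    // (traditionally the first 6 digits under ISO/IEC 7812).
    public String issuerIdentifier() { return digits.substring(0, 6); }

    public String lastFour() { return digits.substring(digits.length() - 4); }
}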

Better to use an array of single-digit ints. Often the individual digits are used in some type of checksum to validate the credit card number. It would also take care of the case that a CC# actually starts with 0.
For example,
int[] cc = { 4, 3, 2, 1 };
bool Validate(int[] cc)
{
    // toy checksum for illustration only
    return ((cc[0] + 2 * cc[1] + 6 * cc[2]) % 5) == 0;
}
Something like that, although the equations actually used are more complex. This would be a lot harder (i.e. it would require division and truncation, as sketched below) with just
int cc = 4321;
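For illustration, here is what pulling digits back out of a plain integer looks like, sketched in Java; note the repeated remainder and truncating division, and that any leading zeros are already gone:

public class DigitExtraction {
    // Splits an integer into its decimal digits, least significant first.
    static int[] digitsOf(long n) {
        java.util.List<Integer> digits = new java.util.ArrayList<>();
        while (n > 0) {
            digits.add((int) (n % 10)); // remainder: the last digit
            n /= 10;                    // truncating division drops it
        }
        int[] result = new int[digits.size()];
        for (int i = 0; i < result.length; i++) result[i] = digits.get(i);
        return result;
    }

    public static void main(String[] args) {
        for (int d : digitsOf(4321)) System.out.print(d + " "); // 1 2 3 4
    }
}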
Edit:
Keep in mind also that each digit in a credit card number means something. For example, the leading digits form the issuer identification number (IIN): the first digit identifies the card network or industry, and the remaining IIN digits identify the particular bank that issued the card.

Personally, I always store them as a string... a card number is a series of digits, much like a telephone number, not one big integer itself.

I would imagine that they are integers as they (almost certainly) do not contain any alphabetic characters and never have any decimal places.

No credit/debit card numbers start with a zero (probably due to discussions like this).
All credit/debit card numbers have a check digit that is calculated using the Luhn algorithm.
As it happens, 4444333322221111 passes the Luhn check digit test.

Related

Efficiently Store Decimal Numbers with Many Leading Zeros in Postgresql

A number like:
0.000000000000000000000000000000000000000123456
is difficult to store without a large performance penalty with the available numeric types in postgres. This question addresses a similar problem, but I don't feel like it came to an acceptable resolution. Currently one of my colleagues landed on rounding numbers like this to 15 decimal places and just storing them as:
0.000000000000001
So that the double precision numeric type can be used which prevents the penalty associated with moving to a decimal numeric type. Numbers that are this small for my purposes are more or less functionally equivalent, because they are both very small (and mean more or less the same thing). However, we are graphing these results and when a large portion of the data set would be rounded like this it looks exceptionally stupid (flat line on the graph).
Because we are storing tens of thousands of these numbers and operating on them, the decimal numeric type is not a good option for us as the performance penalty is too large.
I am a scientist, and my natural inclination would just be to store these types of numbers in scientific notation, but it doesn't appear that postgres has this kind of functionality. I don't actually need all of the precision in the number, I just want to preserve 4 digits or so, so I don't even need the 15 digits that the float numeric type offers. What are the advantages and disadvantages of storing these numbers in two fields like this:
1.234 (real)
-40 (smallint)
where this is equivalent to 1.234*10^-40? This would allow for ~32000 leading decimal zeros with only 2 bytes used to store them and 4 bytes to store the real value, for a total of at most 6 bytes per number (gives me the exact number I want to store and takes less space than the existing solution, which consumes 8 bytes). It also seems like sorting these numbers would be much improved, as you'd only need to sort on the smallint field first and the real field second.
You and/or your colleague seem to be confused about what numbers can be represented using the floating point formats.
A double precision (aka float) number can store at least 15 significant digits, in the range from about 1e-307 to 1e+308. You have to think of it as scientific notation: strip the leading zeroes and fold them into the exponent. If what you have, once in scientific notation, has fewer than 15 significant digits and an exponent between -307 and +308, it can be stored as is.
That means that 0.000000000000000000000000000000000000000123456 can definitely be stored as a double precision, and you'll keep all the significant digits (123456). No need to round that to 0.000000000000001 or anything like that.
Floating point numbers have a well-known issue with exact representation of decimal values (decimal fractions in base 10 do not necessarily have an exact representation in base 2), but that's probably not an issue for you (it only matters if you need to do exact comparisons on such numbers).
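As a quick check, here is a minimal sketch in Java; its double is the same IEEE 754 type that underlies Postgres's double precision:

public class TinyDouble {
    public static void main(String[] args) {
        // The value from the question: 6 significant digits, exponent ~ -40,
        // comfortably within double's ~1e-308 to ~1e+308 range.
        double tiny = 0.000000000000000000000000000000000000000123456;
        System.out.println(tiny); // 1.23456E-40 : every significant digit kept
    }
}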
What are the advantages and disadvantages of storing these numbers in two fields like this
You'll have to manage 2 columns instead of one.
Roughly, what you'll be doing is saving space by storing lower-precision floats. If you only need 4 digits of precision, you can go further and save 2 more bytes by using smallint + smallint (a 1000-9999 mantissa plus an exponent). Using that format, you could even cram the two smallints into one 32-bit int (exponent*2^16 + mantissa); that should work too, as sketched below.
That's assuming that you need to save storage space and/or need to go beyond the +/-308 digits exponent limit of the double precision float. If that's not the case, the standard format is fine.
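For completeness, a sketch in Java of that packing scheme (the layout and helper names are just assumptions):

public class PackedDecimal {
    // Pack a 4-digit mantissa (1000-9999) and a base-10 exponent into one
    // 32-bit int: high 16 bits = exponent (offset to keep it non-negative),
    // low 16 bits = mantissa.
    static final int EXP_OFFSET = 32768;

    static int pack(int mantissa, int exponent) {
        return ((exponent + EXP_OFFSET) << 16) | mantissa;
    }

    static double unpack(int packed) {
        int exponent = (packed >>> 16) - EXP_OFFSET;
        int mantissa = packed & 0xFFFF;
        return mantissa * Math.pow(10, exponent);
    }

    public static void main(String[] args) {
        int packed = pack(1234, -43);       // represents 1234 * 10^-43 = 1.234e-40
        System.out.println(unpack(packed)); // ~1.234E-40
    }
}

A side effect of this layout: with mantissas normalized to 1000-9999, the packed values compared as unsigned ints (Integer.compareUnsigned) sort in the same order as the numbers they represent, since the exponent occupies the high bits.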

How to round the result of a division in intersystems cache?

What's the best way to round the result of a division in intersystems cache?
Thanks.
There are several functions used to format numbers, and they will also round the value if necessary:
$justify(expression,width[,decimal]) - Caché rounds or pads the number of fractional digits in expression to this value.
write $justify(5/3,0,3)
1.667
$fnumber(inumber,format,decimal)
write $fnumber(5/3,"",3)
1.667
$number(num,format,min,max)
write $number(5/3,3)
1.667
$normalize(num,scale)
w $normalize(5/3,3)
1.667
You can just choose whichever of them is most suitable for you. They do different things, but the result can be the same.
In standard MUMPS (which Cache Object Script is backwards compatible with)
there are three "division"-related operators. The first is the single character "/" (i.e. forward slash). This is a real-number divide. 5/2 is 2.5, 10.5/5 is 2.1, etc. It takes two numbers (each possibly including a decimal point and a fraction) and returns a number, possibly with a fraction. A useful thing to remember is that this numeric divide yields results that are as simple as they can be. If there are leading zeros in front of the decimal point, like 0007, it will treat the number as 7.
If there are trailing zeros after the decimal point, they will be trimmed as well.
So 2.000 gets trimmed to 2 (notice no decimal point) and 00060.0100 would be trimmed to just 60.01
In the past, many implementors would guarantee that 3/3 would always be 1 (not .99999) and that math was done as exactly as could be done. This is not an emphasis now, but there used to be special libraries to handle Binary Coded Decimal, (BCD) to guarantee as close to possible that fractions of a penny were never generated.
The next division operator is the single character "\" (i.e. backward slash).
This operator is called integer division, or "div" by some folks. It
does the division and throws away any remainder. The interesting thing is that it always results in an integer, but the inputs don't have to be integers. So 10\2 is 5, but 23\2.3 is 10, and so is 23.3\2.33. If there is a fraction left over, it is simply dropped, so 23.3\2.3 is 10 as well. The full divide operator would give you the fraction: 23.3/2.3 is 10.130434 and so on.
The final division operator is remainder (or "mod" or "modulo"), symbolized by the single character "#" (sometimes called hash, pound sign, or octothorpe). To get the answer for this one, the integer division "\" is calculated, and whatever is left over after that integer division is the result. In our example of 23\2 the answer is 11 and the remaining value is 1, so 23#2 is 1,
and 23.3#2.3 is .3. You may notice that (number#divisor)+((number\divisor)*divisor) always gives you your original number back.
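To make the semantics concrete, here is a quick sketch in Java, using BigDecimal to mimic MUMPS's exact decimal behavior (assuming non-negative operands):

import java.math.BigDecimal;
import java.math.MathContext;

public class MumpsDivision {
    public static void main(String[] args) {
        BigDecimal a = new BigDecimal("23.3");
        BigDecimal b = new BigDecimal("2.3");

        // "/" : real divide, fraction kept
        System.out.println(a.divide(b, MathContext.DECIMAL64)); // 10.13043478260870

        // "\" : integer divide, remainder thrown away
        BigDecimal intDiv = a.divideToIntegralValue(b);
        System.out.println(intDiv); // 10

        // "#" : modulo, what is left over after the integer division
        BigDecimal mod = a.remainder(b);
        System.out.println(mod); // 0.3

        // The identity from the text: (a#b) + ((a\b)*b) gives back a
        System.out.println(mod.add(intDiv.multiply(b))); // 23.3
    }
}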
Hope this helps you make this idea clear in your programming.

MIDI division in the header chunk

The last word of a MIDI header chunk specifies the division. It contains information about whether delta times should be interpreted as ticks per quarter note or ticks per frame (where a frame is a subdivision of a second). If bit 15 of this word is set, the information is in ticks per frame. The next 7 bits (bit 14 through bit 8) specify the number of frames per second and can contain one of four values: -24, -25, -29, or -30 (they are negative).
Does anyone know whether bit 15 counts towards this negative value? In other words, are the values which specify fps actually 8 bits long (15 through 8), or are they 7 bits long (14 through 8)? The documentation I am reading is very unclear about this, and I cannot find the info anywhere else.
Thanks
The MMA's Standard MIDI-File Format Spec says:
The third word, <division>, specifies the meaning of the delta-times.
It has two formats, one for metrical time, and one for time-code-based
time:
+---+-----------------------------------------+
| 0 |         ticks per quarter-note          |
+---+-----------------------+-----------------+
| 1 | negative SMPTE format | ticks per frame |
+---+-----------------------+-----------------+
|15 |14                    8|7               0|
[...]
If bit 15 of <division> is a one, delta times in a file correspond
to subdivisions of a second, in a way consistent with SMPTE and MIDI
Time Code. Bits 14 thru 8 contain one of the four values -24, -25, -29,
or -30, corresponding to the four standard SMPTE and MIDI Time Code
formats (-29 corresponds to 30 drop frame), and represents the
number of frames per second. These negative numbers are stored in
two's complement form. The second byte (stored positive) is the
resolution within a frame [...]
Two's complement representation allows negative values to be sign-extended without changing their value, by prepending additional bits of value 1.
So it does not matter whether you take 7 or 8 bits.
In practice, this value is designed to be interpreted as a signed 8-bit value, because otherwise it would have been stored as a positive value.
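A minimal parsing sketch in Java (the variable names are just assumptions):

public class MidiDivision {
    public static void main(String[] args) {
        int division = 0xE728; // example: high byte 0xE7 = -25 as a signed byte, low byte 0x28 = 40

        if ((division & 0x8000) != 0) {
            // SMPTE format: casting the high byte to a signed 8-bit value
            // sign-extends bits 14-8 through bit 15, which is exactly why
            // reading 7 or 8 bits makes no difference.
            int framesPerSecond = -((byte) (division >> 8)); // 25
            int ticksPerFrame = division & 0xFF;             // 40
            System.out.println(framesPerSecond + " fps, " + ticksPerFrame + " ticks per frame");
        } else {
            System.out.println((division & 0x7FFF) + " ticks per quarter-note");
        }
    }
}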

Why does my UILabel start counting backward at 2 billion?

I have a UILabel in my iPhone app simulator. It displays a coin count and I have an action that adds 1 hundred million to the count. I want the number to keep going up but for some reason once the count hits 2 billion, it adds a minus sign and starts counting down, then counts back up to 2 billion and back down again and so on.
I want to be able to display a much greater number of digits, i.e. trillions and beyond. Does anyone know what's going on with this and how to fix it, so the label digits will keep going up as high as I want?
I'm using Xcode and Interface Builder and running through the simulator. I'm storing the number in an int variable, if that matters.
You store your coin count in an int; that's the problem. A 4-byte int can't store numbers higher than 2,147,483,647. If you add 1 to 2,147,483,647 you will get −2,147,483,648, which is the smallest possible int.
If you want to store bigger numbers you have to use a long which can store numbers between −(2^63) and 2^63−1 (or −9,223,372,036,854,775,808 to 9,223,372,036,854,775,807).
See this for additional details.
This is occurring because, as @DrummerB pointed out, your int variable only has enough bits to store integral values in the range of -2,147,483,648 to 2,147,483,647. The reason this gets "reset" or "rolls over" back to a negative value has to do with how computers store data, which is in binary.
For example, an 8-bit integer (otherwise known as a byte) can store integral values from 0 to 255 if it is unsigned (meaning it can only store non-negative values) and -128 to 127 if it is signed (meaning it can also store negative numbers). When an integer reaches its maximum value, it is represented in memory by all ones, as you see here with the unsigned value 255:
255 = 11111111
So the maximum number that can be stored in an 8-bit int (byte) is 255. If you add 1 to this value, all the 1 bits flip to zeroes, and since storing the value 256 would require a 9th bit, that 9th bit is lost entirely and the integer value appears to "roll over" to the minimum value.
Now, as stated above, the result of the addition yields the value 256, but we only have 8 bits of storage in our integer, so the most significant bit (the 9th bit) is lost. You can picture it kind of like this, with the pipes | marking your storage area:
        only 8 bits of storage total
            v
255 = 0|11111111|
+ 1 = 0|00000001|
      -----------
256 = 1|00000000|
      ^
      9th bit is lost
In a signed int, much the same is true; however, the first bit is used to determine whether the value is negative, so you gain a sign but lose 1 bit of storage, leaving enough space to store the magnitudes 0 to 127 plus 1 bit for the sign.
Now that we understand what's going on, it should be noted that iOS is, at the time of this writing, a 32-bit operating system, and while it can handle larger integers, you probably don't want to use them all over the place, as it's not optimized to work with those values.
If you just want to increase the range of values you can store in this variable, I would recommend changing it to an unsigned int, which can be done using the NSUInteger typedef.
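The wrap-around itself is easy to reproduce. A quick sketch in Java, whose 32-bit int overflows the same way:

public class Overflow {
    public static void main(String[] args) {
        int coins = Integer.MAX_VALUE;     // 2,147,483,647
        coins += 1;
        System.out.println(coins);         // -2147483648: wrapped to the minimum

        byte b = 127;                      // the 8-bit signed maximum from the example above
        b += 1;
        System.out.println(b);             // -128: the same roll-over with fewer bits

        long bigCoins = Integer.MAX_VALUE; // a 64-bit long has plenty of headroom
        bigCoins += 100_000_000L;
        System.out.println(bigCoins);      // 2247483647
    }
}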

Why do numbers with 17 and more digits turn EVEN automatically?

I'm testing a photo application for Facebook. I'm getting object IDs from the Facebook API, but I received some incorrect ones, which doesn't make sense - why would Facebook send wrong IDs? I investigated a bit and found out that numbers with 17 or more digits are automatically turning into even numbers!
For example, let's say the ID I'm supposed to receive from Facebook is 12345678912345679. In the debugger, I've noticed that Flash Player automatically turns it into 12345678912345678. And I even tried to manually set it back to an odd number, but it keeps changing back to even.
Is there any way to stop Flash Player from rounding the numbers? BTW the variable is defined as Object, I receive it like that from Facebook.
This is related to the implementation of data types:
int is a 32-bit number, with a roughly even distribution of positive and negative values, including 0. So the maximum value is (2^32 / 2) - 1 == 2,147,483,647.
uint is also a 32-bit number, but it doesn't have negative values. So the maximum value is 2^32 - 1 == 4,294,967,295.
When you use a numerical value greater than the maximum value of int or uint, it is automatically cast to Number. From the Adobe Doc:
The Number data type is useful when you need to use floating-point values. Flash runtimes handle int and uint data types more efficiently than Number, but Number is useful in situations where the range of values required exceeds the valid range of the int and uint data types. The Number class can be used to represent integer values well beyond the valid range of the int and uint data types. The Number data type can use up to 53 bits to represent integer values, compared to the 32 bits available to int and uint.
53 bits have a maximum value of:
2^53 - 1 == 9,007,199,254,740,991 => 16 digits
So when you use any value greater than that, the inner workings of floating point numbers apply.
You can read about floating point numbers here, but in short: for any floating point value, some of the bits are used to store an exponent, which determines the location of the point. This allows for a greater range of values than could otherwise be represented with the number of bits available - at the cost of reduced precision.
When you have a value greater than the maximum integer a Number can represent exactly, the least significant bit (the one that distinguishes odd from even) can no longer be stored, because the exponent shifts the 53 available bits one place up => hence, you lose the odd numbers.
There is a simple way to get around this: Keep all your IDs as Strings - they can have as many digits as your system has available bytes ;) It's unlikely you're going to do any calculations with them, anyway.
By the way, for values above 2^54 the numbers get rounded to a multiple of 4; above 2^55, to a multiple of 8; and so on.
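You can see the effect directly. A quick sketch in Java, whose double is the same IEEE 754 format as ActionScript's Number:

public class DoublePrecision {
    public static void main(String[] args) {
        long id = 12345678912345679L;          // a 17-digit odd ID
        double asNumber = (double) id;         // what Flash's Number would hold
        System.out.printf("%.0f%n", asNumber); // prints an even neighbor: the odd value cannot be stored

        // Above 2^53, consecutive doubles are 2 apart; above 2^54, 4 apart.
        System.out.println(Math.ulp(Math.pow(2, 53))); // 2.0
        System.out.println(Math.ulp(Math.pow(2, 54))); // 4.0

        String idString = "12345678912345679"; // a String keeps every digit
        System.out.println(idString);
    }
}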