I'm trying to write a program that calculates the ISBN-10 check digit from an ISBN-13. Can anyone give some advice on how to carry it out?
Firstly, how do I actually loop through a 13-digit ISBN and remove the prefixed 978 before I proceed to calculate the check digit of the ISBN-10? Thank you in advance! :)
This is how you can remove the first 3 digits:
NSString *str = @"978XXXXXXXXX";
NSString *newStr = [str substringFromIndex:3];
And as for your ISBN10:
The final character of a ten digit International Standard Book Number is a check digit computed so that multiplying each digit by its position in the number (counting from the right) and taking the sum of these products modulo 11 is 0. The digit the farthest to the right (which is multiplied by 1) is the check digit, chosen to make the sum correct. It may need to have the value 10, which is represented as the letter X. For example, take the ISBN 0-201-53082-1. The sum of products is 0×10 + 2×9 + 0×8 + 1×7 + 5×6 + 3×5 + 0×4 + 8×3 + 2×2 + 1×1 = 99 ≡ 0 modulo 11. So the ISBN is valid.
While this may seem more complicated than the first scheme, it can be validated simply by adding all the products together then dividing by 11. The sum can be computed without any multiplications by initializing two variables, t and sum, to 0 and repeatedly performing t = t + digit; sum = sum + t; (which can be expressed in C as sum += t += digit;). If the final sum is a multiple of 11, the ISBN is valid.
Taken from here.
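Putting the two pieces above together, here is a minimal sketch in Swift (the function name is mine, and the sample string should be the ISBN-13 form of the 0-201-53082-1 example; the same steps translate directly to the Objective-C snippet above):

// Sketch: derive the ISBN-10 check digit from an ISBN-13.
// Assumes a bare 13-digit string with a 978 prefix and no hyphens.
func isbn10CheckDigit(fromISBN13 isbn13: String) -> String? {
    guard isbn13.count == 13, isbn13.hasPrefix("978") else { return nil }
    // Drop the 978 prefix and the ISBN-13 check digit, keeping 9 digits.
    let core = isbn13.dropFirst(3).dropLast()
    var sum = 0
    // Weights run 10 down to 2 for the nine body digits.
    for (i, ch) in core.enumerated() {
        guard let d = ch.wholeNumberValue else { return nil }
        sum += d * (10 - i)
    }
    // Choose the check digit so the weighted sum is 0 modulo 11; 10 becomes "X".
    let check = (11 - sum % 11) % 11
    return check == 10 ? "X" : String(check)
}

print(isbn10CheckDigit(fromISBN13: "9780201530827") ?? "invalid")  // "1"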
I don't understand how floating point numbers are represented in hex notation in Swift. Apple's documentation shows that 0xC.3p0 is equal to 12.1875 in decimal. Can someone walk me through how to do that conversion? I understand that before the decimal hex value 0xC = 12. The 3p0 after the decimal is where I am stumped.
From the documentation:
Floating-Point Literals
...
Hexadecimal floating-point literals consist of a 0x prefix, followed
by an optional hexadecimal fraction, followed by a hexadecimal
exponent. The hexadecimal fraction consists of a decimal point
followed by a sequence of hexadecimal digits. The exponent consists of
an upper- or lowercase p prefix followed by a sequence of decimal
digits that indicates what power of 2 the value preceding the p is
multiplied by. For example, 0xFp2 represents 15 × 2^2, which evaluates
to 60. Similarly, 0xFp-2 represents 15 × 2^-2, which evaluates to 3.75.
In your case
0xC.3p0 = (12 + 3/16) * 2^0 = 12.1875
Another example:
0xAB.CDp4 = (10*16 + 11 + 12/16 + 13/16^2) * 2^4 = 2748.8125
This format is very similar to the %a printf-format (see for example
http://pubs.opengroup.org/onlinepubs/009695399/functions/fprintf.html).
It can be used to specify a floating point number directly in its
binary IEEE 754 representation, see Why does Swift use base 2 for the exponent of hexadecimal floating point values?
for more information.
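If you want to check these values in code, a quick sketch for a Swift playground (the constants are just illustrative):

import Foundation

// Hexadecimal floating-point literals evaluated by the compiler:
let a: Double = 0xC.3p0    // (12 + 3/16) * 2^0
let b: Double = 0xFp2      // 15 * 2^2
let c: Double = 0xFp-2     // 15 * 2^-2
let d: Double = 0xAB.CDp4  // (171 + 205/256) * 2^4

print(a, b, c, d)  // 12.1875 60.0 3.75 2748.8125

// The same value written out by hand, following the formula above:
let manual = (12.0 + 3.0/16.0) * pow(2.0, 0.0)
print(manual == a)  // true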
Interpret 0xC.3p0 using the place value system:
C (or 12) is in the 16^0 place
3 is in the 16^-1 place (and 3/16 == 0.1875)
p says the exponent follows (like the e in 6.022e23 in base 10)
0 is the exponent (in base 10) that is the power of 2 (2^0 == 1)
So putting it all together
0xC.3p0 = (12 + (3/16)) * 2^0 = 12.1875
To sum up what I've read, you can see those representations as follows:
0xC.3p0 = (12*16^0 + 3*16^-1) * 2^0 = 12.1875
From Martin R's example above:
0xAB.CDp4 = (10*16^1 + 11*16^0 + 12*16^-1 + 13*16^-2) * 2^4 = 2748.8125
The 0xC is 12, as you said. The fractional part is (3 * (1/16)) * 2^0.
So you take each hex digit after the point and divide it by the corresponding power of 16, then multiply the whole value by 2 raised to the power of the number after the p.
Hexadecimal digits run 0-9, A=10, B=11, C=12, D=13, E=14, F=15, and p0 means 2^0.
ex: 0xC = 12 (the 0x prefix indicates hexadecimal)
For the digits after the point, as in 0xC.3p0, we divide each one by the corresponding power of 16.
So here it's 3/16 = 0.1875,
so 0xC.3p0 = (12 + 3/16) * 2^0.
If it were 0xC.43p0, then for the 4 we would use 4/16, for the 3 we would use 3/16^2, and so on as the fractional part grows.
ex: 0xC.231p1 = (12 + 2/16 + 3/16^2 + 1/16^3) * 2^1 = 24.27392578125
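Just to sanity-check those last two examples in Swift:

let x: Double = 0xC.43p0   // 12 + 4/16 + 3/256                 = 12.26171875
let y: Double = 0xC.231p1  // (12 + 2/16 + 3/256 + 1/4096) * 2  = 24.27392578125
print(x, y)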
Given that a number can contain only digits from 1 to 8 (with no repetition), and is of length 8, how can we hash such numbers without using a hashSet?
We can't just directly use the value of the number as the hashing value, as the stack size of the program is limited. (By this I mean that we can't directly use our number as the index of an array.)
Therefore, this 8-digit number needs to be mapped to, at maximum, a 5-digit number.
I saw this answer. The hash function there returns an 8-digit number for an input that is an 8-digit number.
So, what can I do here?
There are a few things you can do. You could subtract 1 from each digit and parse the result as an octal number, which will map every number from your domain one-to-one into the range [0, 16777216) with no gaps. The resulting number can be used as an index into a very large array. An example of this could work as below:
function hash(num) {
  // Subtract 1 from each digit and read the result as a base-8 number.
  return parseInt(num
    .toString()
    .split('')
    .map(x => x - 1)
    .join(''), 8);
}

const set = new Array(8**8);
set[hash(12345678)] = true;
// 12345678 is in the set
Or, if you want to conserve some space and grow the data structure as you add elements, you can use a tree structure with 8 branches at every node and a maximum depth of 8. I'll leave it up to you to decide whether it's worth the trouble; a rough sketch of the idea follows.
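For reference, one possible shape of such a tree, sketched in Swift (the type and method names are my own, not from the answer above):

// An 8-ary trie: each node has up to 8 children, one per digit 1-8,
// and a full path of 8 digits from the root marks one stored number.
final class DigitTrie {
    private var children = [DigitTrie?](repeating: nil, count: 8)
    private var isEnd = false

    func insert(_ num: Int) {
        var node = self
        for d in String(num) {
            let i = d.wholeNumberValue! - 1          // digit 1-8 -> index 0-7
            if node.children[i] == nil { node.children[i] = DigitTrie() }
            node = node.children[i]!
        }
        node.isEnd = true
    }

    func contains(_ num: Int) -> Bool {
        var node = self
        for d in String(num) {
            guard let next = node.children[d.wholeNumberValue! - 1] else { return false }
            node = next
        }
        return node.isEnd
    }
}

let trie = DigitTrie()
trie.insert(12345678)
print(trie.contains(12345678))  // true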
Edit:
After seeing the updated question, I began thinking about how you could probably map the number to its position in a lexicographically sorted list of the permutations of the digits 1-8. That would be optimal because it gives you the theoretical 5-digit hash you want (under 40320). I had some trouble formulating the algorithm to do this on my own, so I did some digging. I found this example implementation that does just what you're looking for. I've taken inspiration from this to implement the algorithm in JavaScript for you.
function hash(num) {
  const digits = num
    .toString()
    .split('')
    .map(x => x - 1);
  const len = digits.length;
  const seen = new Array(len);
  let rank = 0;
  for (let i = 0; i < len; i++) {
    seen[digits[i]] = true;
    rank += numsBelowUnseen(digits[i], seen) * fact(len - i - 1);
  }
  return rank;
}

// count unseen digits less than n
function numsBelowUnseen(n, seen) {
  let count = 0;
  for (let i = 0; i < n; i++) {
    if (!seen[i]) count++;
  }
  return count;
}

// factorial function
function fact(x) {
  return x <= 0 ? 1 : x * fact(x - 1);
}
kamoroso94 gave me the idea of representing the number in octal. Since each of the digits 1-8 appears exactly once, the number remains unique if we remove its first digit. So we can make an array of length 8^7 = 2097152 and use the 7-digit octal version as the index.
If this array size is bigger than the stack allows, we can use only 6 digits of the input and convert them to their octal value. 8^6 = 262144, which is pretty small. We can make a 2D array of length 8^6, so the total space used is on the order of 2*(8^6). The first index of the second dimension represents that the number starts with the smaller of the two remaining digits, and the second index represents that it starts with the bigger one.
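A minimal sketch of the 7-digit variant in Swift (the helper name and array layout are mine): drop the first digit and treat the remaining 7 digits, each reduced by 1, as a base-8 number.

// Maps a permutation of the digits 1-8 to an index in [0, 8^7),
// relying on the first digit being implied by the other seven.
func octalIndex(_ num: Int) -> Int {
    var index = 0
    for ch in String(num).dropFirst() {          // skip the leading digit
        index = index * 8 + (ch.wholeNumberValue! - 1)
    }
    return index
}

var present = [Bool](repeating: false, count: 2_097_152)  // 8^7
present[octalIndex(12345678)] = true
print(present[octalIndex(12345678)])  // true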
I want to convert the decimal number 27 into binary in such a way that first the digit 2 is converted and its binary value is placed in an array, and then the digit 7 is converted and its binary value is placed in that array. What should I do?
Thanks in advance.
That's called binary-coded decimal. It's easiest to work right-to-left. Take the value modulo 10 (% operator in C/C++/ObjC) and put it in the array. Then integer-divide the value by 10 (/ operator in C/C++/ObjC). Continue until your value is zero. Then reverse the array if you need most-significant digit first.
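A minimal sketch of that loop in Swift (the function name and the choice to return 4-bit strings are mine):

// Binary-coded decimal: peel off decimal digits right-to-left, then reverse
// so the most significant digit comes first.
func bcd(_ value: Int) -> [String] {
    var v = value
    var digits: [String] = []
    repeat {
        let nibble = String(v % 10, radix: 2)                    // e.g. 7 -> "111"
        digits.append(String(repeating: "0", count: 4 - nibble.count) + nibble)
        v /= 10
    } while v > 0
    return Array(digits.reversed())
}

print(bcd(27))  // ["0010", "0111"]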
If I understand your question correctly, you want to go from 27 to an array that looks like {0010, 0111}.
If you understand how base systems work (specifically the decimal system), this should be simple.
First, you find the remainder of your number when divided by 10. Your number 27 in this case would result with 7.
Then you integer divide your number by 10 and store it back in that variable. Your number 27 would result in 2.
How many times do you do this?
You do this until you have no more digits.
How many digits can you have?
Well, if you think about the number 100, it has 3 digits because it needs a digit for the 10^2 place; 99, on the other hand, does not.
The answer to the previous question is 1 + floor of Log base 10 of the input number.
Log of 100 is 2, plus 1 is 3, which equals number of digits.
Log of 99 is a little less than 2, but flooring it is 1, plus 1 is 2.
In Java it is like this:
int input = 27;
int number = 0;
int numDigits = (int) Math.floor(Log(10, input)) + 1;
int[] digitArray = new int[numDigits];
for (int i = 0; i < numDigits; i++) {
    number = input % 10;
    digitArray[numDigits - i - 1] = number;
    input = input / 10;
}
return digitArray;
Java's Math class doesn't have a log function for an arbitrary base (only base e and base 10), but it is trivial to write one:
double Log(double base, double value) {
    return Math.log(value) / Math.log(base);
}
Good luck.
The thing is that the 1st number is already an Oracle LONG,
the second one a Date (SQL DATE, no extra timestamp info), and the last one a Short value in the range 1000-100'000.
How can I create a sort of hash value that will be unique for each combination, optimally?
String concatenation and converting to a long later:
I don't want this, because, for example:
Day Month
12 1 --> 121
1 12 --> 121
When you have a few numeric values and need to have a single "unique" (that is, statistically improbable duplicate) value out of them you can usually use a formula like:
h = (a*P1 + b)*P2 + c
where P1 and P2 are either well-chosen numbers (e.g. if you know 'a' is always in the 1-31 range, you can use P1=32) or, when you know nothing particular about the allowable ranges of a, b and c, the best approach is to use big prime numbers for P1 and P2 (they have the least chance of generating values that collide).
For an optimal solution the math is a bit more complex than that, but using prime numbers you can usually have a decent solution.
For example, the Java implementation of .hashCode() for an array (or a String) is something like:
h = 0;
for (int i = 0; i < a.length; ++i)
    h = h * 31 + a[i];
Personally, though, I would have chosen a prime bigger than 31, as values inside a String can easily collide, since a delta of 31 places can be quite common, e.g.:
"BB".hashCode() == "Aa".hashCode() == 2112
Your
12 1 --> 121
1 12 --> 121
problem is easily fixed by zero-padding your input numbers to the maximum width expected for each input field.
For example, if the first field can range from 0 to 10000 and the second field can range from 0 to 100, your example becomes:
00012 001 --> 00012001
00001 012 --> 00001012
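A small zero-padding sketch in Swift (the field widths 5 and 3 are the hypothetical ones from the example above):

// Pad each field to its maximum width before concatenating, so the
// boundary between fields is unambiguous.
func padded(_ value: Int, width: Int) -> String {
    let s = String(value)
    return String(repeating: "0", count: max(0, width - s.count)) + s
}

func paddedKey(_ first: Int, _ second: Int) -> String {
    return padded(first, width: 5) + padded(second, width: 3)
}

print(paddedKey(12, 1))   // "00012001"
print(paddedKey(1, 12))   // "00001012"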
In python, you can use this:
# pip install pairing
import pairing as pf

n = [12, 6, 20, 19]
print(n)

key = pf.pair(pf.pair(n[0], n[1]),
              pf.pair(n[2], n[3]))
print(key)

m = [pf.depair(pf.depair(key)[0]),
     pf.depair(pf.depair(key)[1])]
print(m)
Output is:
[12, 6, 20, 19]
477575
[(12, 6), (20, 19)]
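The key 477575 above is exactly what the classic Cantor pairing function produces for these inputs, so I assume that is what the pairing package implements. The same idea, sketched in Swift with its inverse:

// Cantor pairing: a reversible bijection from pairs of non-negative
// integers to a single non-negative integer.
func pair(_ x: Int, _ y: Int) -> Int {
    return (x + y) * (x + y + 1) / 2 + y
}

func depair(_ z: Int) -> (Int, Int) {
    let w = Int(((8.0 * Double(z) + 1.0).squareRoot() - 1.0) / 2.0)
    let t = w * (w + 1) / 2
    let y = z - t
    return (w - y, y)
}

let key = pair(pair(12, 6), pair(20, 19))
print(key)                                            // 477575
print(depair(depair(key).0), depair(depair(key).1))   // (12, 6) (20, 19)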