I have a big integer in JavaScript consisting of 128 digits, which I generated from the hex digest of SHA3-512.
I would like to derive a password from this big integer, following the rules for a strong password:
At least 8 characters long
Has capital letters
Has small letters
Has numbers
Has special characters
Now, I would like to generate a password of at least 20 characters from this big integer. How can I do that? I would like to make this function so that whenever I pass the same big integer, it gives me the same password every time (just like a hashing algorithm).
So you have this huge number X in the range [0, 2^512). Surely you can get a password from it, and you can use something like base conversion.
What you first want to do is to create a range for each character position, say [0, M_i), where M_i is the number of characters allowed at position i of the password; index 0 within that range could e.g. map to the character A. The set of characters allowed at a certain position is called an alphabet (the English ABC is one specific alphabet, usually called the alphabet).
Now create the product over all the M_i, giving you the value N. Then take the remainder of X modulo N; let's call this Y. Y is a number that is reasonably unbiased as long as 2^512 is much larger than N, and it represents all of the possible passwords by index. Nice, but how do you get a password from an index? You cannot list them all.
This is where we need to do some more number magic. For each position i, calculate the remainder of Y modulo M_i, giving you C_i, then divide Y by M_i, keeping the quotient as the new Y. You then look up character C_i within the alphabet for position i and put it into the password string.
Here's an example I created before in Java (sorry, conversion necessary):
import java.math.BigInteger;
import java.security.MessageDigest;

public class CreatePasswordFromLargeRandom {

    private static final String ALPHABET_UPPER = "ABCDEFGHIJKLMNOPQRSTUVWXYZ";
    private static final String ALPHABET_LOWER = "abcdefghijklmnopqrstuvwxyz";
    private static final String ALPHABET_UPPERLOWER = ALPHABET_UPPER + ALPHABET_LOWER;
    private static final String ALPHABET_DIGITS = "0123456789";
    private static final BigInteger SHA512_RANGE = BigInteger.TWO.pow(512);

    public static char[] createPassword(BigInteger rangeOfX, BigInteger x, char[]... alphabets) {
        // n is the amount of passwords possible
        var n = BigInteger.ONE;
        for (int i = 0; i < alphabets.length; i++) {
            n = n.multiply(BigInteger.valueOf(alphabets[i].length));
        }
        if (rangeOfX.bitLength() - n.bitLength() < 64) {
            throw new IllegalArgumentException(
                    "Range of X value too small compared with the number of possible passwords, bias would be introduced");
        }

        // y is an index into all the possible passwords
        var y = x.remainder(n);
        var password = new char[alphabets.length];
        // using least significant values for digits on the right
        for (int i = alphabets.length - 1; i >= 0; i--) {
            var yc = y.divideAndRemainder(BigInteger.valueOf(alphabets[i].length));
            y = yc[0];
            // c is the index in the current alphabet
            var c = yc[1].intValueExact();
            password[i] = alphabets[i][c];
        }
        return password;
    }

    public static void main(String[] args) throws Exception {
        var sha512 = MessageDigest.getInstance("SHA-512");
        var digest = sha512.digest(new byte[0]);
        var x = new BigInteger(1, digest);

        var upperLower = ALPHABET_UPPERLOWER.toCharArray();
        var digits = ALPHABET_DIGITS.toCharArray();
        var password = createPassword(SHA512_RANGE, x, upperLower, digits, upperLower, digits);
        System.out.println(new String(password));
    }
}
which simply outputs h5g6.
To get an idea how this works: if you input zero for X then you'd get password "A0A0", the password that has the lowest index. You'd get the same value if you'd input K times N for any K, as that will produce the same index. Any X that is N - 1 (mod N) will produce the last password possible, "z9z9" in this case.
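For reference, here is the same construction sketched in Python, whose built-in integers are arbitrary precision; treat it as a sketch to translate into JavaScript rather than a drop-in solution. The special-character set, the 20-position layout, and the use of hashlib's sha3_512 are just example choices to satisfy the question's requirements:

import hashlib

UPPER = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
LOWER = "abcdefghijklmnopqrstuvwxyz"
DIGITS = "0123456789"
SPECIAL = "!@#$%^&*"  # example special-character set, adjust as needed

def create_password(range_of_x, x, alphabets):
    # n is the number of possible passwords
    n = 1
    for a in alphabets:
        n *= len(a)
    if range_of_x.bit_length() - n.bit_length() < 64:
        raise ValueError("range of X too small, bias would be introduced")
    # y is an index into all the possible passwords
    y = x % n
    chars = []
    # walk the positions from right to left, peeling off one "digit" per alphabet
    for a in reversed(alphabets):
        y, c = divmod(y, len(a))
        chars.append(a[c])
    return "".join(reversed(chars))

digest = hashlib.sha3_512(b"your input here").digest()
x = int.from_bytes(digest, "big")
# five groups of upper/lower/digit/special give a 20-character password
# with every required character class present
alphabets = [UPPER, LOWER, DIGITS, SPECIAL] * 5
print(create_password(2 ** 512, x, alphabets))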
Related
Given that a number can contain only digits from 1 to 8 (with no repetition), and is of length 8, how can we hash such numbers without using a hashSet?
We can't just directly use the value of the number as the hash value, as the stack size of the program is limited. (By this, I mean that we can't directly use our number as the index of an array.)
Therefore, this 8-digit number needs to be mapped to, at maximum, a 5-digit number.
I saw this answer. The hash function returns an 8-digit number for an input that is an 8-digit number.
So, what can I do here?
There are a few things you can do. You could subtract 1 from each digit and parse the result as an octal number, which maps every number from your domain one-to-one into the range [0, 8^8) = [0, 16777216). The resulting number can be used as an index into a very large array. An example of this could work as below:
function hash(num) {
  return parseInt(num
    .toString()
    .split('')
    .map(x => x - 1)
    .join(''), 8);
}
const set = new Array(8**8);
set[hash(12345678)] = true;
// 12345678 is in the set
Or, if you want to conserve some space and grow the data structure as you add elements, you can use a tree structure with 8 branches at every node and a maximum depth of 8. I'll leave that up to you to figure out if you think it's worth the trouble.
Edit:
After seeing the updated question, I began thinking about how you could probably map the number to its position in a lexicographically sorted list of the permutations of the digits 1-8. That would be optimal because it gives you the theoretical 5-digit hash you want (under 40320). I had some trouble formulating the algorithm to do this on my own, so I did some digging. I found this example implementation that does just what you're looking for. I've taken inspiration from this to implement the algorithm in JavaScript for you.
function hash(num) {
  const digits = num
    .toString()
    .split('')
    .map(x => x - 1);
  const len = digits.length;
  const seen = new Array(len);
  let rank = 0;
  for (let i = 0; i < len; i++) {
    seen[digits[i]] = true;
    rank += numsBelowUnseen(digits[i], seen) * fact(len - i - 1);
  }
  return rank;
}

// count unseen digits less than n
function numsBelowUnseen(n, seen) {
  let count = 0;
  for (let i = 0; i < n; i++) {
    if (!seen[i]) count++;
  }
  return count;
}

// factorial function
function fact(x) {
  return x <= 0 ? 1 : x * fact(x - 1);
}
kamoroso94 gave me the idea of representing the number in octal. The number remains unique even if we remove its first digit. So we can make an array of length 8^7 = 2097152 and use the 7-digit octal version as the index.
If this array size is bigger than the stack allows, then we can use only 6 digits of the input, converted to their octal values. 8^6 = 262144, which is pretty small. We can make a 2D array of length 8^6, so the total space used will be on the order of 2*(8^6). In the second dimension, the first index means the number starts with the smaller of the two remaining digits, and the second index means it starts with the bigger one.
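To make the 7-digit variant concrete, here is a small sketch (in Python; the names are illustrative, and the same base-8 indexing carries over directly to JavaScript):

# Since all eight digits 1-8 occur exactly once, the first digit is implied
# by the remaining seven, so those seven (each reduced by 1) form a unique
# base-8 index in the range [0, 8**7).
def octal_index(num):
    index = 0
    for d in str(num)[1:]:  # drop the first digit
        index = index * 8 + (int(d) - 1)
    return index

table = [False] * (8 ** 7)  # 2097152 slots
table[octal_index(12345678)] = True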
I am trying to read the values of cells as a String (as one would see them in Excel). I read from an xlsx (XSSFWorkbook) using Apache POI 3.15.
My goal is, e.g., to omit the decimal point and trailing zeros if the cell contains an integer. This works for CellType.NUMERIC:
val dataFormatter = new DataFormatter(true) // set emulateCsv to true
val stringValue = dataFormatter.formatCellValue(cell)
If I use the same code for a CellType.FORMULA cell (e.g. a cell which references another "integer" cell), it just gives me the formula as a string instead of its computed value.
How can I get the value of the formula cell as it is displayed in Excel?
You need to "evaluate" cells in order to get the result of formulas. This is not done automatically by POI as it can be a heavy operation and often will not be necessary.
See http://poi.apache.org/spreadsheet/eval.html for details; basically, you create a FormulaEvaluator and retrieve a CellValue for the Cell in question:
FormulaEvaluator evaluator = wb.getCreationHelper().createFormulaEvaluator();
...
CellValue cellValue = evaluator.evaluate(cell);
Thanks to Centic and Raphael, I ended up using the NumberFormat concept to fix an issue. This is Java, but I am sure it can easily be converted to Scala.
The issue is around numbers with decimal places, which produce values in scientific notation.
This was only required when converting Apache POI XLS / XLSX to CSV format.
// Create an evaluator from the current workbook
FormulaEvaluator evaluator = wb.getCreationHelper().createFormulaEvaluator();
// Cell cell2 = evaluator.evaluateInCell(cell);
// As per above, get the CellValue
CellValue cellValue = evaluator.evaluate(cell);
// Get the double value of the formula, which may be in scientific (E) notation
Double value = cellValue.getNumberValue();
// getNumberFormat (below) assigns the correct formatting for the value
NumberFormat formatter = getNumberFormat(value);
// This should now be the string value of the number with the correct decimal places (non-scientific)
formatter.format(value);

/**
 * getNumberFormat takes a number and either assigns "#0" if it has no
 * decimal places, or assigns the correct format depending on how many
 * digits follow the decimal point.
 */
public static NumberFormat getNumberFormat(Double value) {
    String v = value.toString();
    String format = "#0";
    // This fixes the scientific value issue
    if (v.contains(".")) {
        int decimals = v.substring(v.indexOf(".") + 1).length();
        // Calls generateNumberSigns based on the decimal places in the given double
        String numberSigns = generateNumberSigns(decimals);
        format = "0." + numberSigns;
    }
    return new DecimalFormat(format);
}

/**
 * This generates the correct format pattern for the given number of decimal places.
 */
public static String generateNumberSigns(int n) {
    String s = "";
    for (int i = 0; i < n; i++) {
        s += "#";
    }
    return s;
}
I am trying to do an assignment in JES, a student Jython program. I need to convert our student number, taken as a string input variable and passed to our function, i.e.
def assignment(stringID), into integers. The exact instructions are:
Step 1
Define an array called id which will store your 7-digit number as integers (the numbers you set in the array do not matter; they will be overwritten with your student number in the next step).
Step 2
Your student number has been passed into your function as a String. You must separate the digits and assign them to your array id. You can do this manually line by line or using a loop. You will need to cast each character from stringID to an integer before storing it in id.
I have tried so many different ways using the int and float functions but I am really stuck.
Thanks in advance!
>>> a = "545.2222"
>>> float(a)
545.22220000000004
>>> int(float(a))
545
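For the digit-splitting step the assignment actually asks for, a minimal sketch could look like the following (plain Python; the hard-coded length of 7 and the sample student number are just illustrative):

def assignment(stringID):
    id = [0] * 7                    # Step 1: an array of 7 integers
    for i in range(len(stringID)):  # Step 2: cast each character to an int
        id[i] = int(stringID[i])
    return id

print(assignment("1234567"))  # prints [1, 2, 3, 4, 5, 6, 7]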
I had to do some Jython scripting for a WebSphere server. It must be a really old version of Python; it didn't have the ** operator or the len() function. I had to use an exception to find the end of a string.
Anyway, I hope this saves someone else some time.
def pow(x, y):
    total = 1
    if (y > 0):
        rng = y
    else:
        rng = -1 * y
    print ("range", rng)
    for itt in range(rng):
        total *= x
    if (y < 0):
        total = 1.0 / float(total)
    return total

# This will return an int if the precision restricts it from parsing decimal places
def parseNum(string, precision):
    decIndex = string.index(".")
    total = 0
    print("decIndex: ", decIndex)
    index = 0
    string = string[0:decIndex] + string[decIndex + 1:]
    try:
        while string[index]:
            if (ord(string[index]) >= ord("0") and ord(string[index]) <= ord("9")):
                times = pow(10, decIndex - index - 1)
                val = ord(string[index]) - ord("0")
                print(times, " X ", val)
                if (times < precision):
                    break
                total += times * val
            index += 1
    except:
        print "broke out"
    return total
Warning: make sure the string is a number. The function will not fail, but you will get strange and almost assuredly useless output.
I have two matrices a = [120.23, 255.23669877, ...] and b = [125.000083, 800.0101010, ...] with double numbers in [0, 999]. I want to use bitxor for a and b. I cannot use bitxor with round like this:
result = bitxor(round(a(1,j)), round(b(1,j)))
because the decimal parts 0.23, 0.000083, ... are very important to me. I thought maybe I could do a = a*10^k and b = b*10^k, use bitxor, and after that divide the result by 10^k (because I want my result's range to also be [0, 999]). But I do not know the maximum length of the number after the decimal point. Does k = 16 support the max range of double numbers in Matlab? Does bitxor support two 19-digit numbers? Is there a better solution?
This is not really an answer, but a very long comment with embedded code. I don't have a current matlab installation, and in any case don't know enough to answer the question in that context. Instead, I've written a Java program that I think may do what you are asking for. It uses two Java classes, BigInteger and BigDecimal. BigInteger is an extended integer format. BigDecimal is the combination of a BigInteger and a decimal scale.
Conversion from double to BigDecimal is exact. Conversion in the opposite direction may require rounding.
The function xor in my program converts each of its operands to BigDecimal. It finds a number of decimal digits to move the decimal point by to make both operands integers. After scaling, it converts to BigInteger, does the actual xor, and converts back to BigDecimal undoing the scaling.
The main point of this is for you to look at the results, and see whether they are what you want, and would be useful to you if you could do the same thing in Matlab. Explaining any ways in which the results are not what you want may help clarify your requirements for the Matlab experts.
Here is some test output. The top and bottom rows of each block are in decimal. The middle row is the scaled integer versions of the inputs, in hex.
Testing operands 1.100000000000000088817841970012523233890533447265625, 2
2f0a689f1b94a78f11d31b7ab806d40b1014d3f6d59 xor 558749db77f70029c77506823d22bd0000000000000 = 7a8d21446c63a7a6d6a61df88524690b1014d3f6d59
1.1 xor 2.0 = 2.8657425494106605
Testing operands 100, 200.0004999999999881765688769519329071044921875
2cd76fe086b93ce2f768a00b22a00000000000 xor 59aeee72a26b59f6380fcf078b92c4478e8a13 = 7579819224d26514cf676f0ca932c4478e8a13
100.0 xor 200.0005 = 261.9771865509636
Testing operands 120.3250000000000028421709430404007434844970703125, 120.75
d2c39898113a28d484dd867220659fbb45005915 xor d3822c338b76bab08df9fee485d1b00000000000 = 141b4ab9a4c926409247896a5b42fbb45005915
120.325 xor 120.75 = 0.7174277813579485
Testing operands 120.2300000000000039790393202565610408782958984375, 120.0000830000000036079654819332063198089599609375
d298ff20fbed5fd091d87e56002df79fc7007cb7 xor d231e5f39e1db18654cb8c43d579692616a16a1f = a91ad365f0ee56c513f215d5549eb9d1a116a8
120.23 xor 120.000083 = 0.37711627930683345
Here is the Java program:
import java.math.BigDecimal;
import java.math.BigInteger;

public class Test {

    public static double xor(double a, double b) {
        BigDecimal ad = new BigDecimal(a);
        BigDecimal bd = new BigDecimal(b);
        /*
         * Shifting the decimal point right by scale will make both operands
         * integers.
         */
        int scale = Math.max(ad.scale(), bd.scale());
        /*
         * Scale both operands by, in effect, multiplying by the same power of 10.
         */
        BigDecimal aScaled = ad.movePointRight(scale);
        BigDecimal bScaled = bd.movePointRight(scale);
        /*
         * Convert the operands to integers, treating any rounding as an error.
         */
        BigInteger aInt = aScaled.toBigIntegerExact();
        BigInteger bInt = bScaled.toBigIntegerExact();
        BigInteger resultInt = aInt.xor(bInt);
        System.out.println(aInt.toString(16) + " xor " + bInt.toString(16) + " = "
                + resultInt.toString(16));
        /*
         * Undo the decimal point shift, in effect dividing by the same power of 10
         * as was used to scale to integers.
         */
        BigDecimal result = new BigDecimal(resultInt, scale);
        return result.doubleValue();
    }

    public static void test(double a, double b) {
        System.out.println("Testing operands " + new BigDecimal(a) + ", " + new BigDecimal(b));
        double result = xor(a, b);
        System.out.println(a + " xor " + b + " = " + result);
        System.out.println();
    }

    public static void main(String arg[]) {
        test(1.1, 2.0);
        test(100, 200.0005);
        test(120.325, 120.75);
        test(120.23, 120.000083);
    }
}
"But I do not know the max length of number after point ..."
In double-precision floating point you have 15–17 significant decimal digits. If you give bitxor double inputs, these must be less than intmax('uint64'): 1.844674407370955e+19. The largest double, realmax (= 1.797693134862316e+308), is much bigger than this, so you can't represent everything in the way you're proposing. For example, this means that your value of 800.0101010*10^17 won't work.
If your range is [0, 999], one option is to solve for the largest fractional exponent k and use that: log(double(intmax('uint64'))/999)/log(10) (= 16.266354234268810).
This is the code I have:
int resultInt = [ja.resultCount intValue];
float pages = resultInt / 10;
NSLog(@"%d", resultInt);
NSLog(@"%.2f", pages);
The resultInt comes back from a PHP script with the value 3559, so the pages result should be 355.9, but I get the result as 355.00, which isn't right.
Use
float pages = resultInt / 10.0f;
int/int is int
but int/float or float/int is float
Edited for more explanation
It is important to remember that the resultant value of a mathematical operation is subject to the rules of the receiving variable's data type. The result of a division operation may yield a floating point value. However, if assigned to an integer the fractional part will be lost. Equally important, and less obvious, is the effect of an operation performed on several integers and assigned to a non-integer. In this case, the result is calculated as an integer before being implicitly converted. This means that although the resultant value is assigned to a floating point variable, the fractional part is still truncated unless at least one of the values is explicitly converted first. The following examples illustrate this:
int a = 7;
int b = 3;
int integerResult;
float floatResult;
integerResult = a / b; // integerResult = 2 (truncated)
floatResult = a / b; // floatResult = 2.0 (truncated)
floatResult = (float)a / b; // floatResult = 2.33333325
This has to do with the fact that you're using an integer and not a float.
Declare the variables you are using as floats (or cast them), and you are done.
int resultInt = [ja.resultCount intValue];
float pages = (float)resultInt / 10.f;