Integer encoding format - encoding

I've run across some PIN encoding which I'm trying to figure out so I can improve upon a web application used at my place of work.
When I reset users' PINs (in this case, just my own for testing purposes), I'm seeing the following:
PIN VALUE
000000 = 7F55858585858585
111111 = 7F55868686868686
222222 = 7F55878787878787
999999 = 7F558E8E8E8E8E8E
000001 = 7F01313131313132
000011 = 7F55858585858686
000111 = 7F01313131323232
001111 = 7F55858586868686
011111 = 7F01313232323232
000002 = 7F02323232323234
100000 = 7F01323131313131
111112 = 7F03343434343435
123456 = 7F0738393A3B3C3D
654321 = 7F073D3C3B3A3938
1357924680 = 7F01323436383A3335373931
1111111111 = 7F5586868686868686868686
1234567890 = 7F0132333435363738393A31
It's clearly just hex, and always starts with 7F (1111111 or 127), but I'm not seeing a pattern for how the next two characters are chosen. Those two characters seem to be the determining value for converting the PIN.
For example:
000000 = 7F 55 858585858585
7F (hex) = 127 (dec) or 1111111 (bin) ## appears to not be used in the calculation?
55 (hex) = 85 (dec) or 1010101 (bin)
0 (PIN) + 85 = 85
000000 = 858585858585
111111 = 7F 55 868686868686
7F (hex) = 127 (dec) or 1111111 (bin) ## appears to not be used in the calculation?
55 (hex) = 85 (dec)
1 (PIN) + 85 = 86
111111 = 868686868686
But then also:
1357924680 = 7F 01 323436383A3335373931
01 (hex) = 1 (dec), yet each digit looks like it's offset by 31 (hex)?
1 (PIN) + 31 = 32
1357924680 = 323436383A3335373931
Any help pointing me in the right direction would be greatly appreciated.

I don't see enough data in your minimal reproducible example to uncover an algorithm for how the pinshift value (the value supplied to the pin_to_hex function) should be determined, so a random value is used in the following solution:
import random

def hex_to_pin(pinhex: str) -> list:
    '''
    decode a PIN from a particular hexadecimal-formatted string
        hex_to_pin('7F0738393A3B3C3D')
    inverse of the "pin_to_hex" function (any of the following):
        hex_to_pin(pin_to_hex('123456', 7))
        pin_to_hex(*hex_to_pin('7F0738393A3B3C3D'))
    '''
    xxaux = bytes.fromhex(pinhex)
    return [bytes([x - xxaux[1] for x in xxaux[2:]]).decode(),
            xxaux[1]]

def pin_to_hex(pindec: str, pinshift: int, upper=False) -> str:
    '''
    encode a PIN to a particular hexadecimal-formatted string
        pin_to_hex('123456', 7)
    inverse of the "hex_to_pin" function (any of the following):
        pin_to_hex(*hex_to_pin('7F0738393A3B3C3D'), True)
        hex_to_pin(pin_to_hex('123456', 7))
    '''
    shift_ = max(1, pinshift % 199)          ## 134 for alpha-numeric PIN code
    retaux = [b'\x7F', shift_.to_bytes(1, byteorder='big')]
    for digit_ in pindec.encode():
        retaux.append((digit_ + shift_).to_bytes(1, byteorder='big'))
    if upper:
        return (b''.join(retaux)).hex().upper()
    else:
        return (b''.join(retaux)).hex()

def get_pin_shift(pindec: str) -> int:
    '''
    determine "pinshift" parameter for the "pin_to_hex" function
    currently returns a random number
    '''
    return random.randint(1, 198)            ## (1,133) for alpha-numeric PIN code

hexes = [
    '7F01323436383A3335373931',
    '7F0738393A3B3C3D',
    '7F558E8E8E8E8E8E'
]

print("hex_to_pin:")
maxlen = len(max(hexes, key=len))
deces = []
for xshex in hexes:
    xsdec = hex_to_pin(xshex)
    print(f"{xshex:<{maxlen}} ({xsdec[1]:>3}) {xsdec[0]}")
    deces.append(xsdec[0])

print("pin_to_hex:")
for xsdec in deces:
    xsshift = get_pin_shift(xsdec)
    xshex = pin_to_hex(xsdec, xsshift)
    print(f"{xshex:<{maxlen}} ({xsshift:>3}) {xsdec}")
Output SO\71875753.py
hex_to_pin:
7F01323436383A3335373931 ( 1) 1357924680
7F0738393A3B3C3D ( 7) 123456
7F558E8E8E8E8E8E ( 85) 999999
pin_to_hex:
7f1041434547494244464840 ( 16) 1357924680
7f4e7f8081828384 ( 78) 123456
7f013a3a3a3a3a3a ( 1) 999999
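In other words, the observations in the question are consistent with each PIN character being stored as its ASCII code plus the shift byte at offset 1, which is exactly what hex_to_pin undoes. A quick check against one of the question's examples (Python 3, illustration only):
shift = 0x55                                               ## second byte of 7F55858585858585
print(''.join(f'{ord(c) + shift:02X}' for c in '000000'))  ## prints 858585858585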

Related

Convert octal value to letters and join together - JAVA

I have an octal value: 0110 0145 0154 0154 0157 054 040 0110 0151, and the result must be Hello, Hi.
Here is my code:
String octal = "0110 0145 0154 0154 0157 054 040 0110 0151 ";
List<String> result = Arrays.asList(octal.split("\\s*,\\s*"));
long item = 1;
String res = "";
while (item < result.size()) {
    char re = (char) Integer.parseInt(result.get((int) item), 8);
    res = res + " " + re;
    item += 1;
}
System.out.println("Its" + res);
But the output:
Its e
Expected
Hello, Hi
I tried everything, but failed ):
Why did you think the split pattern "\\s*,\\s*" was a solution for your needs? Your values are separated by spaces, so split with "\\s+".
And in order to start at the first letter, you should initialize item = 0.
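For illustration only (this shows the idea, not the Java fix itself): splitting on whitespace and decoding each token from base 8 yields the expected text. A quick sketch in Python:
octal = "0110 0145 0154 0154 0157 054 040 0110 0151"
print(''.join(chr(int(tok, 8)) for tok in octal.split()))  # prints: Hello, Hi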

SBData is wrong when SBValue comes from a Swift Dictionary

I'm trying to write a Python function to format a Foundation.Decimal, for use as a type summarizer. I posted it in this answer. I'll also include it at the bottom of this question, with extra debug prints.
I've now discovered a bug, but I don't know if the bug is in my function, or in lldb, or possibly in the Swift compiler.
Here's a transcript that demonstrates the bug. I load my type summarizer in ~/.lldbinit, so the Swift REPL uses it.
:; xcrun swift
registering Decimal type summaries
Welcome to Apple Swift version 4.2 (swiftlang-1000.11.37.1 clang-1000.11.45.1). Type :help for assistance.
1> import Foundation
2> let dec: Decimal = 7
dec: Decimal = 7
Above, the 7 in the debugger output is from my type summarizer and is correct.
3> var dict = [String: Decimal]()
dict: [String : Decimal] = 0 key/value pairs
4> dict["x"] = dec
5> dict["x"]
$R0: Decimal? = 7
Above, the 7 is again from my type summarizer, and is correct.
6> dict
$R1: [String : Decimal] = 1 key/value pair {
  [0] = {
    key = "x"
    value = 0
  }
}
Above, the 0 (in value = 0) is from my type summarizer, and is incorrect. It should be 7.
So why is it zero? My Python function is given an SBValue. It calls GetData() on the SBValue to get an SBData. I added debug prints to the function to print the bytes in the SBData, and also to print the result of sbValue.GetLoadAddress(). Here's the transcript with these debug prints:
:; xcrun swift
registering Decimal type summaries
Welcome to Apple Swift version 4.2 (swiftlang-1000.11.37.1 clang-1000.11.45.1). Type :help for assistance.
1> import Foundation
2> let dec: Decimal = 7
dec: Decimal = loadAddress: ffffffffffffffff
data: 00 21 00 00 07 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
7
Above, we can see that the load address is bogus, but the bytes of the SBData are correct (byte 1, 21, contains the length and flags; byte 4, '07', is the first byte of the significand).
3> var dict = [String: Decimal]()
dict: [String : Decimal] = 0 key/value pairs
4> dict["x"] = dec
5> dict
$R0: [String : Decimal] = 1 key/value pair {
  [0] = {
    key = "x"
    value = loadAddress: ffffffffffffffff
     data: 00 00 00 00 00 21 00 00 07 00 00 00 00 00 00 00 00 00 00 00
    0
  }
}
Above, we can see that the load address is still bogus, and now the bytes of the SBData are incorrect. The SBData still contains 20 bytes (the correct number for a Foundation.Decimal, aka NSDecimal), but now four 00 bytes have been inserted at the front and the last four bytes have been dropped.
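A quick byte-level comparison of the two dumps confirms the shift (a small Python check built from the data above, added here for clarity):
good = bytes.fromhex('0021000007000000000000000000000000000000')  # standalone dec
bad  = bytes.fromhex('0000000000210000070000000000000000000000')  # dict value
print(bad[4:] == good[:-4])  # True: same payload, shifted right by four bytes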
So here are my specific questions:
Am I using the lldb API incorrectly, and thus getting wrong answers? If so, what am I doing wrong and how should I correct it?
If I'm using the lldb API correctly, then is this a bug in lldb, or is the Swift compiler emitting incorrect metadata? How can I figure out which tool has the bug? (Because if it's a bug in one of the tools, I'd like to file a bug report.)
If it's a bug in lldb or Swift, how can I work around the problem so I can format a Decimal correctly when it's part of a Dictionary?
Here is my type formatter, with debug prints:
# Decimal / NSDecimal support for lldb
#
# Put this file somewhere, e.g. ~/.../lldb/Decimal.py
# Then add this line to ~/.lldbinit:
# command script import ~/.../lldb/Decimal.py

import lldb

def stringForDecimal(sbValue, internal_dict):
    from decimal import Decimal, getcontext

    print(' loadAddress: %x' % sbValue.GetLoadAddress())

    sbData = sbValue.GetData()
    if not sbData.IsValid():
        raise Exception('unable to get data: ' + sbError.GetCString())
    if sbData.GetByteSize() != 20:
        raise Exception('expected data to be 20 bytes but found ' + repr(sbData.GetByteSize()))

    sbError = lldb.SBError()
    exponent = sbData.GetSignedInt8(sbError, 0)
    if sbError.Fail():
        raise Exception('unable to read exponent byte: ' + sbError.GetCString())

    flags = sbData.GetUnsignedInt8(sbError, 1)
    if sbError.Fail():
        raise Exception('unable to read flags byte: ' + sbError.GetCString())
    length = flags & 0xf
    isNegative = (flags & 0x10) != 0

    debugString = ''
    for i in range(20):
        debugString += ' %02x' % sbData.GetUnsignedInt8(sbError, i)
    print(' data:' + debugString)

    if length == 0 and isNegative:
        return 'NaN'
    if length == 0:
        return '0'

    getcontext().prec = 200
    value = Decimal(0)
    scale = Decimal(1)
    for i in range(length):
        digit = sbData.GetUnsignedInt16(sbError, 4 + 2 * i)
        if sbError.Fail():
            raise Exception('unable to read memory: ' + sbError.GetCString())
        value += scale * Decimal(digit)
        scale *= 65536

    value = value.scaleb(exponent)
    if isNegative:
        value = -value
    return str(value)

def __lldb_init_module(debugger, internal_dict):
    print('registering Decimal type summaries')
    debugger.HandleCommand('type summary add Foundation.Decimal -F "' + __name__ + '.stringForDecimal"')
    debugger.HandleCommand('type summary add NSDecimal -F "' + __name__ + '.stringForDecimal"')
This looks like an lldb bug. Please file a bug about this against lldb at http://bugs.swift.org.
For background: there is some magic going on behind your back in the Dictionary case. I can't show this in the REPL, but if you have a [String : Decimal] dictionary as a local variable in some real code and do:
(lldb) frame variable --raw dec_array
(Swift.Dictionary<Swift.String, Foundation.Decimal>) dec_array = {
  _variantBuffer = native {
    native = {
      _storage = 0x0000000100d05780 {
        Swift._SwiftNativeNSDictionary = {}
        bucketCount = {
          _value = 2
        }
        count = {
          _value = 1
        }
        initializedEntries = {
          values = {
            _rawValue = 0x0000000100d057d0
          }
          bitCount = {
            _value = 2
          }
        }
        keys = {
          _rawValue = 0x0000000100d057d8
        }
        values = {
          _rawValue = 0x0000000100d057f8
        }
        seed = {
          0 = {
            _value = -5794706384231184310
          }
          1 = {
            _value = 8361200869849021207
          }
        }
      }
    }
    cocoa = {
      cocoaDictionary = 0x00000001000021b0
    }
  }
}
A swift Dictionary doesn't actually contain the dictionary elements anywhere obvious, and certainly not as ivars. So lldb has a "Synthetic child provider" for Swift Dictionaries that makes up SBValues for the keys and values of the Dictionary, and it is one of those synthetic children that your formatter is being handed.
That's also why the load address is -1. That really means "this is a synthetic thing whose data lldb is directly managing, not a thing at an address somewhere in your program." The same thing is true of REPL results, they are more a fiction lldb maintains. But if you looked at a local variable of type Decimal, you would see a valid load address, because it is a thing that lives somewhere in memory.
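For what it's worth, that ffffffffffffffff is lldb's invalid-address sentinel, so a formatter can at least detect that it has been handed one of these lldb-materialized values. A minimal sketch using the lldb Python module (the helper name is my own; it doesn't fix the shifted bytes, it just makes the situation visible):
import lldb

def is_materialized_by_lldb(sbValue):
    # GetLoadAddress() returns LLDB_INVALID_ADDRESS (0xffffffffffffffff) for
    # values lldb manages itself, such as synthetic children and REPL results,
    # rather than for objects living at a real address in the target.
    return sbValue.GetLoadAddress() == lldb.LLDB_INVALID_ADDRESS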
Anyway, so apparently the Synthetic children Decimal objects we are making up to represent the values of the dictionary don't set the start of the data correctly. Interestingly enough, if you make a [Decimal : String] dictionary, the key field's SBData is correct, and your formatter works. It's just the values that aren't right.
I tried the same thing with Dictionaries that have Strings as values, and the SBData looks correct there. So there's something funny about Decimal. Anyway, thanks for pursuing this, and please do file a bug.

The code is not returning any value, and it is being stored as a user-defined function instead of a stored procedure

CREATE PROCEDURE bank(
    IN  bk_cd_in  CHAR(4),
    OUT bk_cd_out CHAR(4),
    OUT bk_nm     CHAR(40),
    OUT brh_cd    CHAR(8),
    OUT bak_hnm   CHAR(40),
    OUT ur_cd     CHAR(18),
    OUT updt      DATE,
    OUT updt_flag CHAR(1),
    OUT brh_nm    CHAR(40),
    OUT cty_nm    CHAR(40))
SELECT bank_cd, bank_nm, branch_cd, bank_hnm, user_cd, update_dt, update_flag, branch_nm, city_nm
FROM bankmst
WHERE bank_cd = bk_cd_in
INTO bk_cd_out, bk_nm, brh_cd, bak_hnm, ur_cd, updt, updt_flag, brh_nm, cty_nm;
The calling code is written in JSP; the out parameters come back empty, and running the JSP page ends with the error: com.mysql.jdbc.exceptions.jdbc4.MySQLSyntaxErrorException: Unknown column 'b1' in 'field list'
try
{
    cs = con.prepareCall("{call bank(?,?,?,?,?,?,?,?,?,?)}");
    String s1 = para;
    cs.setString(1, para);
    cs.registerOutParameter(2, java.sql.Types.VARCHAR);
    cs.registerOutParameter(3, java.sql.Types.VARCHAR);
    cs.registerOutParameter(4, java.sql.Types.CHAR);
    cs.registerOutParameter(5, java.sql.Types.CHAR);
    cs.registerOutParameter(6, java.sql.Types.CHAR);
    cs.registerOutParameter(7, java.sql.Types.DATE);
    cs.registerOutParameter(8, java.sql.Types.CHAR);
    cs.registerOutParameter(9, java.sql.Types.CHAR);
    cs.registerOutParameter(10, java.sql.Types.CHAR);
    cs.registerOutParameter(11, java.sql.Types.INTEGER);
    rs = cs.executeQuery();
    out.println("\nexecuted 3\n");
    String b1 = cs.getString(2);
    String b2 = cs.getString(3);
    String b3 = cs.getString(4);
    String b4 = cs.getString(5);
    String b5 = cs.getString(6);
    java.util.Date b6 = cs.getDate(7);
    String b7 = cs.getString(8);
    String b8 = cs.getString(9);
    String b9 = cs.getString(10);
    System.out.println(b1);
    System.out.println(b2);
    System.out.println(b3);
    System.out.println(b4);
    System.out.println(b5);
    System.out.println(b6);
    System.out.println(b7);
    System.out.println(b8);
    System.out.println(b9);
}
I am using Eclipse 3.6 and the code is written for MySQL 5.1.

Matlab - read a specific format line

I have a file which contains data in the following format: 0,"20 300 40 12".
How can I read this data with the sscanf function such that I store 0 in one variable and 20 300 40 12 in another? The problem is that the array within the " " changes its size, so I cannot use a fixed-length array. So I can have something like this within my file:
0,"20 300 40 12"
0,"20 300 43 40 12"
1,"22 40 12"
Can you give me a hint of how to read this?
Have you tried this:
fid = fopen(filename,'r');
A = textscan(fid,'%d,%q','Delimiter','\n');
Here's another way to do it:
[a,b] = textread('ah.txt','%d,"%[^"]"');
fun = @(x) split(' ',x);
resb = cellfun(fun,b,'UniformOutput',false)
res = {a resb};
function l = split(d,s)
%split string s on string d
out = textscan(s,'%s','delimiter',d,'multipleDelimsAsOne',1);
l = out{1};

T-SQL Decimal Division Accuracy

Does anyone know why, using SQL Server 2005
SELECT CONVERT(DECIMAL(30,15),146804871.212533)/CONVERT(DECIMAL (38,9),12499999.9999)
gives me 11.74438969709659,
but when I increase the decimal places on the denominator to 15, I get a less accurate answer:
SELECT CONVERT(DECIMAL(30,15),146804871.212533)/CONVERT(DECIMAL (38,15),12499999.9999)
gives me 11.74438969
For multiplication we simply add the number of decimal places in each argument together (using pen and paper) to work out output dec places.
But division just blows your head apart. I'm off to lie down now.
In SQL terms though, it's exactly as expected.
--Precision = p1 - s1 + s2 + max(6, s1 + p2 + 1)
--Scale = max(6, s1 + p2 + 1)

--Scale = 15 + 38 + 1 = 54
--Precision = 30 - 15 + 9 + 54 = 78
--Max P = 38, P & S are linked, so (78,54) -> (38,14)
--So, we have 38,14 output = 11.74438969709659
SELECT CONVERT(DECIMAL(30,15),146804871.212533)/CONVERT(DECIMAL(38,9),12499999.9999)

--Scale = 15 + 38 + 1 = 54
--Precision = 30 - 15 + 15 + 54 = 84
--Max P = 38, P & S are linked, so (84,54) -> (38,8)
--So, we have 38,8 output = 11.74438969
SELECT CONVERT(DECIMAL(30,15),146804871.212533)/CONVERT(DECIMAL(38,15),12499999.9999)
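To sanity-check the arithmetic above, the quoted formula plus the cap at 38 digits can be scripted; a small Python sketch (the helper name is my own, and the scale-reduction step follows the reasoning spelled out further down):
def division_type(p1, s1, p2, s2, max_p=38, min_s=6):
    # raw result type of DECIMAL(p1,s1) / DECIMAL(p2,s2)
    scale = max(min_s, s1 + p2 + 1)
    precision = p1 - s1 + s2 + scale
    if precision > max_p:
        # keep the integral digits; shrink the scale, but never below 6
        scale = max(min_s, min(scale, max_p - (precision - scale)))
        precision = max_p
    return precision, scale

print(division_type(30, 15, 38, 9))    # (38, 14) -> 11.74438969709659
print(division_type(30, 15, 38, 15))   # (38, 8)  -> 11.74438969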
You can do the same math by hand if you follow this rule and treat each pair of arguments as they are actually stored:
146804871.212533000000000 and 12499999.999900000
146804871.212533000000000 and 12499999.999900000000000
To put it shortly: use DECIMAL(25,13) and you'll be fine with all calculations; you'll get the precision as declared: 12 digits before the decimal point and 13 after.
The rule is: p + s must equal 38 and you will be on the safe side!
Why is this?
Because of the very poor implementation of decimal arithmetic in SQL Server!
Until they fix it, follow that rule.
I've noticed that if you cast the dividing value to float, it gives you the correct answer, i.e.:
select 49/30 (result = 1)
would become:
select 49/cast(30 as float) (result = 1.63333333333333)
We were puzzling over the magic transition,
P & S are linked, so:
(78,54) -> (38,14)
(84,54) -> (38,8)
The following is the math:
i. 78 - 38 = 40,
ii. 54 - 40 = 14
i. 84 - 38 = 46,
ii. 54 - 46 = 8
And this is the reasoning:
i. Output precision less the maximum precision is the number of digits we're going to throw away.
ii. Output scale less what we're going to throw away gives us the remaining digits in the output scale.
Hope this helps anyone else trying to make sense of this.
Convert the expression, not the arguments.
select CONVERT(DECIMAL(38,36),146804871.212533 / 12499999.9999)
Using the following may help:
SELECT COL1 * 1.0 / COL2