How can I extrapolate the reversed hex string with Flutter? - flutter

Through Flutter I am reading a series of beacons with nfc_manager. The data I read is reported in the log below. How can I extract the reversed hex string from the identifier shown in the log so that I can print it?
Flutter Code:
NfcManager.instance.startSession(onDiscovered: (NfcTag tag) async {
log(tag.data.toString());
});
Log print:
{nfca: {identifier: [4, 7, 255, 82, 190, 78, 129], atqa: [68, 0],
maxTransceiveLength: 253, sak: 0, timeout: 618},
mifareultralight: {identifier: [4, 7, 255, 82, 190, 78, 129],
maxTransceiveLength: 253, timeout: 618, type: 1}, ndefformatable: {identifier: [4, 7, 255, 82, 190, 78, 129]}}

Note that tag.data is already a Map, so there is nothing to JSON-decode (json.decode(tag.data.toString()) would fail, because Map.toString() does not produce valid JSON). You can read the identifier bytes directly and then use String.fromCharCodes like the following:
List<int> charCodes = List<int>.from(tag.data['nfca']['identifier']);
print(String.fromCharCodes(charCodes));
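String.fromCharCodes treats those bytes as character codes, though; if what you actually want is the identifier printed as a reversed hex string, a minimal Dart sketch could look like this (the helper name toReversedHex is just illustrative):
String toReversedHex(List<int> bytes) {
  // Format each byte as two hex digits, then reverse the byte order.
  return bytes.reversed
      .map((b) => b.toRadixString(16).padLeft(2, '0'))
      .join();
}
// Example with the identifier from the log:
// toReversedHex([4, 7, 255, 82, 190, 78, 129]) == '814ebe52ff0704'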

Related

Flutter_Blue how to convert service data to Eddystone Namespace ID?

I am still having trouble converting [113, 28, 115, 246, 74, 253, 206, 7, 183, 227] to 711c73f64afdce07b7e3
I did
var step1 = r.advertisementData.serviceData['0000feaa-0000-1000-8000-00805f9b34fb'];
final step2 = Uint8List.fromList(step1);
final step3 = ByteData.sublistView(step2, 2, 11);
and I get TypedDataView(cid: 153) :/
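If those ten bytes are the namespace portion of the service data, the ID is just their hex encoding; a minimal Dart sketch, assuming the list from the comment above:
final namespaceBytes = [113, 28, 115, 246, 74, 253, 206, 7, 183, 227];
final namespaceId = namespaceBytes
    .map((b) => b.toRadixString(16).padLeft(2, '0'))
    .join();
print(namespaceId); // 711c73f64afdce07b7e3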

Extracting Temperatures from .ravi file in Matlab

My Problem
Much like the post here: How can I get data from 'ravi' file?, I have a .ravi file (a radiometric video file, which is rather similar to an .avi) and I am trying to extract the temperatures in it, to use them together with additional sensor data.
A sample file can be found in the documentation (http://infrarougekelvin.com/en/optris-logiciel-eng/) when you download the "PIX Connect Software". Unfortunately, according to the documentation, the temperature information is stored in a 16-bit format that MATLAB seems to be rather unhappy with.
How I tried to solve my problem
I tried to follow the instructions from the aforementioned post, but I struggle to get results that are even close to the correct temperatures. (Image: original picture with temperatures in the Optris software)
I tried to read the video with different methods:
At first I hoped to use the VideoReader feature in MATLAB:
video = VideoReader(videoPath);
frame1 = video.read(1);
imagesc(frame1)
But it only resulted in this poor picture, which is exactly what I can see when I try to play the .ravi file in a media player like VLC. (Image: first try with the VideoReader function)
Then I tried to look at the binary representation of my file and noticed that I could separate the frames at a certain marker. (Image: beginning of a new frame in the binary representation)
So I tried to read the file with the MATLAB fread function:
fileID = fopen(videoPath);
[headerInfo, ~] = fread(fileID, [1, 123392], 'uint8');  % skip the file header
[imageMatrix, count] = fread(fileID, [video.Width, video.Height], 'uint16', 'b');  % read one frame as big-endian uint16
imagesc(imageMatrix')
Now the image looks better, and you can at least see the brake disc, but it seems as if the higher temperatures have some kind of offset that is still missing for the picture to be right.
Also, the values that I read from the file are nowhere near actual temperatures, which the other post and the documentation suggest they should be.
(Image: getting somewhere!)
My Question
Am I somehow missing something important? Could someone point me in the right direction, where to look or how to get the actual temperatures from my video? As it worked with the C++ code in the other post, I am guessing this might be a MATLAB problem.
A relatively simple solution for getting the raw frame data is converting the RAVI video file to raw video file format.
You can use FFmpeg (command line tool) for converting the RAVI to RAW format.
Example:
ffmpeg -y -f avi -i "Sequence_LED_Holder.ravi" -vcodec rawvideo "Sequence_LED_Holder.yuv"
The YUV (raw binary data) file can then simply be read by MATLAB using the fread function.
Note: the .yuv extension is just a convention (used by FFmpeg) for raw video files - the actual pixel format is not YUV, but int16.
You can try parsing the RAVI file manually, but using FFmpeg is much simpler.
The raw file format is composed of raw video frames one after the other, with no headers.
In our case, each frame is width*height*2 bytes.
The pixel type is int16 (it may include negative values).
The IR video frames have no color information.
The colors are just "false colors" created using a palette and used for visualization.
The code sample uses a palette from a different IR camera manufacturer.
Getting the temperature:
I could not find a way to convert the pixel value to the equivalent temperature.
I didn't read the documentation - there is a chance that the conversion is documented somewhere.
The MATLAB code sample applies the following stages:
Convert RAVI file format to RAW video file format using FFmpeg.
Read video frames as [cols, rows] size int16 matrix.
Remove the first line that probably contains data (not pixels).
Use linear contrast stretch - for visualization.
Apply false colors - for visualization.
Display the processed video frame.
Here is the code sample:
%ravi_file_name = 'Brake disc.ravi';
%ravi_file_name = 'Combustion process.ravi';
%ravi_file_name = 'Electronic board.ravi';
%ravi_file_name = 'Sequence_carwheels.ravi';
%ravi_file_name = 'Sequence_drop.ravi';
ravi_file_name = 'Sequence_LED_Holder.ravi';
%ravi_file_name = 'Steel workpiece with hole.ravi';
yuv_file_name = strrep(ravi_file_name, '.ravi', '.yuv'); % Same file name with .yuv extension.
% Get video resolution.
vidinfo = mmfileinfo(ravi_file_name);
cols = vidinfo.Video.Width;
rows = vidinfo.Video.Height;
% Execute ffmpeg (in the system shell) for converting RAVI to raw data file.
% Remark: download FFmpeg if needed, and make sure ffmpeg executable is in the execution path.
if ~exist(yuv_file_name, 'file')
    % Remark: For some of the video files, cmdout returns a string with lots of meta-data
    [status, cmdout] = system(sprintf('ffmpeg -y -f avi -i "%s" -vcodec rawvideo "%s"', ravi_file_name, yuv_file_name));
    if (status ~= 0)
        fprintf(cmdout);
        error(['Error: ffmpeg status = ', num2str(status)]);
    end
end
% Get the number of frames according to file size.
filesize = getfield(dir(yuv_file_name), 'bytes');
n_frames = filesize / (cols*rows*2);
f = fopen(yuv_file_name, 'r');
% Iterate the frames (skip the last frame).
for i = 1:n_frames-1
    % Read frame as cols x rows and int16 type.
    % The data is signed (int16) and not uint16.
    I = fread(f, [cols, rows], '*int16')';
    % It looks like the first line contains some data (not pixels).
    data_line = I(1, :);
    I = I(2:end, :);
    % Apply linear stretch - in order to "see something"...
    J = imadjust(I, stretchlim(I, [0.02, 0.98]));
    % Apply false colors - just for visualization.
    K = ColorizeIr(J);
    if (i == 1)
        figure;
        h = imshow(K, []); %h = imshow(J, []);
        impixelinfo
    else
        if ~isvalid(h)
            break;
        end
        h.CData = K; %h.CData = J;
    end
    pause(0.05);
end
fclose(f);
imwrite(uint16(J+2^15), 'J.tif'); % Write J as uint16 image.
imwrite(K, 'K.png'); % Write K image (last frame).
% Colorize the IR video frame with "false colors".
function J = ColorizeIr(I)
% The palette belongs to a different IR camera manufacturer - don't expect the result to resemble the OPTRIS output.
% https://groups.google.com/g/flir-lepton/c/Cm8lGQyspmk
colormapIronBlack = uint8([...
255, 255, 255, 253, 253, 253, 251, 251, 251, 249, 249, 249, 247, 247,...
247, 245, 245, 245, 243, 243, 243, 241, 241, 241, 239, 239, 239, 237,...
237, 237, 235, 235, 235, 233, 233, 233, 231, 231, 231, 229, 229, 229,...
227, 227, 227, 225, 225, 225, 223, 223, 223, 221, 221, 221, 219, 219,...
219, 217, 217, 217, 215, 215, 215, 213, 213, 213, 211, 211, 211, 209,...
209, 209, 207, 207, 207, 205, 205, 205, 203, 203, 203, 201, 201, 201,...
199, 199, 199, 197, 197, 197, 195, 195, 195, 193, 193, 193, 191, 191,...
191, 189, 189, 189, 187, 187, 187, 185, 185, 185, 183, 183, 183, 181,...
181, 181, 179, 179, 179, 177, 177, 177, 175, 175, 175, 173, 173, 173,...
171, 171, 171, 169, 169, 169, 167, 167, 167, 165, 165, 165, 163, 163,...
163, 161, 161, 161, 159, 159, 159, 157, 157, 157, 155, 155, 155, 153,...
153, 153, 151, 151, 151, 149, 149, 149, 147, 147, 147, 145, 145, 145,...
143, 143, 143, 141, 141, 141, 139, 139, 139, 137, 137, 137, 135, 135,...
135, 133, 133, 133, 131, 131, 131, 129, 129, 129, 126, 126, 126, 124,...
124, 124, 122, 122, 122, 120, 120, 120, 118, 118, 118, 116, 116, 116,...
114, 114, 114, 112, 112, 112, 110, 110, 110, 108, 108, 108, 106, 106,...
106, 104, 104, 104, 102, 102, 102, 100, 100, 100, 98, 98, 98, 96, 96,...
96, 94, 94, 94, 92, 92, 92, 90, 90, 90, 88, 88, 88, 86, 86, 86, 84, 84,...
84, 82, 82, 82, 80, 80, 80, 78, 78, 78, 76, 76, 76, 74, 74, 74, 72, 72,...
72, 70, 70, 70, 68, 68, 68, 66, 66, 66, 64, 64, 64, 62, 62, 62, 60, 60,...
60, 58, 58, 58, 56, 56, 56, 54, 54, 54, 52, 52, 52, 50, 50, 50, 48, 48,...
48, 46, 46, 46, 44, 44, 44, 42, 42, 42, 40, 40, 40, 38, 38, 38, 36, 36,...
36, 34, 34, 34, 32, 32, 32, 30, 30, 30, 28, 28, 28, 26, 26, 26, 24, 24,...
24, 22, 22, 22, 20, 20, 20, 18, 18, 18, 16, 16, 16, 14, 14, 14, 12, 12,...
12, 10, 10, 10, 8, 8, 8, 6, 6, 6, 4, 4, 4, 2, 2, 2, 0, 0, 0, 0, 0, 9,...
2, 0, 16, 4, 0, 24, 6, 0, 31, 8, 0, 38, 10, 0, 45, 12, 0, 53, 14, 0,...
60, 17, 0, 67, 19, 0, 74, 21, 0, 82, 23, 0, 89, 25, 0, 96, 27, 0, 103,...
29, 0, 111, 31, 0, 118, 36, 0, 120, 41, 0, 121, 46, 0, 122, 51, 0, 123,...
56, 0, 124, 61, 0, 125, 66, 0, 126, 71, 0, 127, 76, 1, 128, 81, 1, 129,...
86, 1, 130, 91, 1, 131, 96, 1, 132, 101, 1, 133, 106, 1, 134, 111, 1,...
135, 116, 1, 136, 121, 1, 136, 125, 2, 137, 130, 2, 137, 135, 3, 137,...
139, 3, 138, 144, 3, 138, 149, 4, 138, 153, 4, 139, 158, 5, 139, 163,...
5, 139, 167, 5, 140, 172, 6, 140, 177, 6, 140, 181, 7, 141, 186, 7,...
141, 189, 10, 137, 191, 13, 132, 194, 16, 127, 196, 19, 121, 198, 22,...
116, 200, 25, 111, 203, 28, 106, 205, 31, 101, 207, 34, 95, 209, 37,...
90, 212, 40, 85, 214, 43, 80, 216, 46, 75, 218, 49, 69, 221, 52, 64,...
223, 55, 59, 224, 57, 49, 225, 60, 47, 226, 64, 44, 227, 67, 42, 228,...
71, 39, 229, 74, 37, 230, 78, 34, 231, 81, 32, 231, 85, 29, 232, 88,...
27, 233, 92, 24, 234, 95, 22, 235, 99, 19, 236, 102, 17, 237, 106, 14,...
238, 109, 12, 239, 112, 12, 240, 116, 12, 240, 119, 12, 241, 123, 12,...
241, 127, 12, 242, 130, 12, 242, 134, 12, 243, 138, 12, 243, 141, 13,...
244, 145, 13, 244, 149, 13, 245, 152, 13, 245, 156, 13, 246, 160, 13,...
246, 163, 13, 247, 167, 13, 247, 171, 13, 248, 175, 14, 248, 178, 15,...
249, 182, 16, 249, 185, 18, 250, 189, 19, 250, 192, 20, 251, 196, 21,...
251, 199, 22, 252, 203, 23, 252, 206, 24, 253, 210, 25, 253, 213, 27,...
254, 217, 28, 254, 220, 29, 255, 224, 30, 255, 227, 39, 255, 229, 53,...
255, 231, 67, 255, 233, 81, 255, 234, 95, 255, 236, 109, 255, 238, 123,...
255, 240, 137, 255, 242, 151, 255, 244, 165, 255, 246, 179, 255, 248,...
193, 255, 249, 207, 255, 251, 221, 255, 253, 235, 255, 255, 24]);
lutR = colormapIronBlack(1:3:end);
lutG = colormapIronBlack(2:3:end);
lutB = colormapIronBlack(3:3:end);
% Convert I to uint8
I = im2uint8(I);
R = lutR(I+1);
G = lutG(I+1);
B = lutB(I+1);
J = cat(3, R, G, B);
end
Sample output:
Update:
Python code sample using OpenCV (without colorizing):
Using Python and OpenCV, we may skip the FFmpeg conversion part.
Instead of converting the RAVI file to YUV file, we may fetch undecoded RAW video from the RAVI file.
Open the video file and set the CAP_PROP_FORMAT property to fetch undecoded RAW video:
cap = cv2.VideoCapture(ravi_file_name)
cap.set(cv2.CAP_PROP_FORMAT, -1) # Format of the Mat objects. Set value -1 to fetch undecoded RAW video streams (as Mat 8UC1).
When reading a video frame (using ret, frame = cap.read()), the undecoded frame is read as a "long" row vector of uint8 elements.
Converting the frame to int16 type and reshaping it into a rows x cols matrix:
First, we have to "view" the vector elements as int16 elements (as opposed to uint8 elements): frame.view(np.int16)
Second, we have to reshape the vector into a matrix.
Conversion and reshaping code:
frame = frame.view(np.int16).reshape(rows, cols)
Complete Python code sample:
import numpy as np
import cv2
ravi_file_name = 'Sequence_LED_Holder.ravi'
cap = cv2.VideoCapture(ravi_file_name) # Opens a video file for capturing
# Fetch undecoded RAW video streams
cap.set(cv2.CAP_PROP_FORMAT, -1) # Format of the Mat objects. Set value -1 to fetch undecoded RAW video streams (as Mat 8UC1). [Using cap.set(cv2.CAP_PROP_CONVERT_RGB, 0) is not working]
cols = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH)) # Get video frames width
rows = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT)) # Get video frames height
while True:
    ret, frame = cap.read()  # Read next video frame (undecoded frame is read as long row vector).
    if not ret:
        break  # Stop reading frames when ret = False (after the last frame is read).
    # View frame as int16 elements, and reshape to rows x cols (each pixel is signed 16 bits)
    frame = frame.view(np.int16).reshape(rows, cols)
    # It looks like the first line contains some data (not pixels).
    # data_line = frame[0, :]
    frame_roi = frame[1:, :]  # Ignore the first row.
    # Normalize frame to range [0, 255], and get the result as type uint8 (this part is used just for making the data visible).
    normed = cv2.normalize(frame_roi, None, 0, 255, cv2.NORM_MINMAX, cv2.CV_8U)
    cv2.imshow('normed', normed)  # Show the normalized video frame
    cv2.waitKey(10)
cap.release()
cv2.destroyAllWindows()
Sample output:
Note:
In case colorization is required, you may use the following example: Thermal Image Processing
In most RAVI files processed with FFmpeg, there are non-pixel values on the first line of the raw image.
This first line stores some redundant information such as the image width and height.
We have to skip this line, which corresponds to one image width of values. Since the data values are 16-bit, we must multiply by 2 to get the exact offset of the binary data. We also have to calculate the exact size of the image: imageLength = frame size - (image width * 2).
In the other case, the data start at the beginning of the file and we can use the frame size (w * h * 2) to copy the binary data and update the offset.
To know whether it is necessary to calculate the data offset, we just look at the image height. If this value is odd, that means there is a supplementary first line and we apply the correction; if the value is even, no correction of the data offset is needed (a sketch follows below).
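As a rough sketch of that first-line correction (assuming width and height come from the AVI header, as in the samples above):
def first_line_correction(width, height):
    # Return (data_offset, image_bytes) for one frame of 16-bit pixels.
    frame_bytes = width * height * 2
    if height % 2 == 1:
        # Odd height: the first line holds metadata, so skip one line of 16-bit values.
        return width * 2, frame_bytes - width * 2
    # Even height: pixel data starts right away.
    return 0, frame_bytes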
The same story applies when parsing the original RAVI files. First we have to find the offset of the movi tag in the file. If the movi tag is followed by the ix00 tag, a series of values comes right after it giving the offset and the size of each frame relative to the offset of the movi tag; the real data are elsewhere in the file. If the ix00 tag is not present, the data are inside the movi chunk itself, after the 00db flag, frame by frame. In this last case, we can also look for the idx1 tag (at the end of the file), which gives the exact offset and size of each frame.
Both approaches allow a rather correct image representation in grayscale or in pseudo-color, but the temperature formula provided by the libirimager toolkit (float t = (float)data[i] / 10.f - 100.f) is incorrect and I do not understand why, since the formula was correct when I was using raw data produced by the PI-160 camera.
(Image: FFmpeg test)
I found an alternative way. In recent Optris RAVI files we can get the temperature range from the INFO chunk. Then it is easy to find the minimal and maximal values in the raw data and to interpolate with reference to that temperature scale. (Image: with correct temperatures)
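A minimal sketch of that interpolation, assuming temp_min and temp_max have been read from the INFO chunk and frame is the int16 pixel matrix from the samples above:
import numpy as np

def frame_to_temperatures(frame, temp_min, temp_max):
    # Map the raw values linearly onto the temperature range taken from the INFO chunk.
    raw = frame.astype(np.float32)
    return temp_min + (raw - raw.min()) * (temp_max - temp_min) / (raw.max() - raw.min())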
Each frame holds 16-bit values per pixel, with the low byte first and the high byte after. To find the temperature you have to apply this formula: temp = (hi * 256.0 + lo) / 10.0 - 100.0.
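Since the low byte comes first, this is the same as reading each pixel as a little-endian 16-bit integer and then scaling it; a small NumPy sketch (raw_bytes, width and height are assumed inputs):
import numpy as np

def raw_frame_to_temperatures(raw_bytes, width, height):
    # Interpret the byte pairs as little-endian 16-bit values (low byte first, high byte after).
    raw = np.frombuffer(raw_bytes, dtype='<u2').reshape(height, width)
    # temp = (hi * 256 + lo) / 10 - 100
    return raw.astype(np.float32) / 10.0 - 100.0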
With the low value you can create a grayscale image. I used this approach with the old Optris PI-160 camera with success. However, with the new PI-450 it is more difficult, since PI Connect no longer supports binary export.
I tested the solution with FFmpeg without success. You get a 16-bit data file, but the offset of the real data is incorrect, and thus the temperatures are aberrant.
Did you succeed?
(Image: sample of binary reading)

How can I convert a compressed .gz file that I'm getting from a bluetooth Low-energy device to an actual decompressed file in Swift 4.2, using gzip?

I am very new to Xcode and iOS. I have a device, let's call it Brains, that I'm connecting to via Bluetooth LE using an app I built with Swift 4 and Xcode 10 on my iPhone 5; call the phone Body. Brains is similar to an Arduino board, but not exactly. I can connect and get all the data over BLE with no problems, until I tried to get a compressed file filled with JSON strings.
I am receiving the compressed bytes but I can't seem to know what to do next. How can I get the compressed file, decompress it and read the data inside?
I have tried many things, using the modules GzipSwift, DataCompression and SSZipArchive.
I have used gunzipped(), gunzip() and decompress(), but none of them seem to work.
I have read this thread: iOS :: How to decompress .gz file using GZIP Utility?, and it says that I have to get the whole compressed byte stream, convert that to NSData and then decompress it. The trouble is that it uses Objective-C and I can't seem to translate it into Swift 4.
I'm getting the bytes from the Bluetooth LE characteristic in a [UInt8] array, in this function:
func received_logs(data: [UInt8]) {
let data_array_example = [31, 139, 8, 8, 16, 225, 156, 92, 2, 255, 68, 97, 116, 97, 0, 181, 157, 107, 110, 220, 56, 16, 6, 175, 226, 3, 248, 71, 63, 73, 234, 44, 193, 222, 255, 26, 171, 30, 35, 192, 90, 20, 18, 121, 182, 11, 112, 16, 35, 48, 10, 31, 154, 197, 22, 135, 34, 227, 95, 191, 76, 244, 16, 183, 248, 252, 48, 137, 229, 38, 242, 249, 161, 231, 87, 156, 127, 207, 113, 126, 227, 159, 31, 231, 183, 110, 223, 255, 200, 239, 47, 203, 252, 253, 173, 255, 231, 159, 235, 235, 108, 105, 110, 101, 48, 47, 50, 48]
for data_byte in stride(from: 0, to: data_array_example.count, by: 1) {
let byte = String(data_array_example[data_byte])
sourceString = sourceString + byte //getting all the bytes and converting to string to store in a variable
}
/******************************************************************/
let text = sourceBuffer
do {
try text.write(to: path!, atomically: false, encoding: String.Encoding.utf8)
}
catch {
print("Failed writing")
} //dump the var into a txt file
/**********UPDATED**********/
var file_array : [UInt8] = []
let byte2 = NSData(data: data_array_example.data)
let asc_array = Data(bytes: byte2.data)
let decompressedData: Data
do {
try decompressedData = asc.gunzipped()
print("Decom: ", String(data: decompressedData, encoding: .utf8))
}
catch {
print(error) //Gives me the "unknown compression method error"
}
}
I expect to see the uncompressed file's contents, but I only get:
GzipError(kind: Gzip.GzipError.Kind.data, message: "incorrect header check")
Maybe I'm just making it more complicated than It needs to be. Any help would be greatly appreciated!
Thank you very much :)
UPDATE:
I created a .gz file and used the both the gunzipped() and gunzip() functions and both of them worked.
UPDATE:
Tried to directly convert the data to NSData and then gunzip() but now getting the error:
GzipError(kind: Gzip.GzipError.Kind.data, message: "unknown compression method")
The updated example data has a correct gzip header, and so would not be giving you an incorrect header check if you are feeding the data correctly to the gunzipper.
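For reference, a minimal sketch of feeding the received bytes to GzipSwift once the byte order is correct (assuming the [UInt8] array holds the complete gzip stream):
import Gzip // GzipSwift

func decompress(_ bytes: [UInt8]) -> String? {
    let compressed = Data(bytes) // wrap the raw BLE bytes in a Data value
    guard let decompressed = try? compressed.gunzipped() else {
        return nil // header/CRC problems surface here
    }
    return String(data: decompressed, encoding: .utf8) // the payload is JSON text
}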
I solved my issue. It turns out I was miscounting the bytes and some of them were in the wrong order. Thank you guys for your help!

snmpv3 getone fails while trying via pysnmp (WrongValueError)

While trying to run a GET for SNMPv3 with pysnmp, I get the error below:
pysnmp.smi.error.WrongValueError: WrongValueError({'msg': WrongValueError(), 'name': (1, 3, 6, 1, 6, 3, 15, 1, 2, 2, 1, 5, 24, 48, 48, 48, 48, 49, 100, 51, 98, 48, 48, 48, 48, 55, 53, 100, 49, 97, 99, 49, 48, 48, 49, 48, 49, 5, 107, 107, 48, 51, 48), 'idx': 3})
from pysnmp.hlapi import *

errorIndication, errorStatus, errorIndex, varBinds = next(
    getCmd(SnmpEngine(),
           UsmUserData('USERNAME', authKey='AUTHKEY', privKey='PRIVKEY',
                       authProtocol='usmHMACSHAAuthProtocol', privProtocol='usmAESCfb256Protocol',
                       securityEngineId=OctetString(hexValue='0000303010')),
           UdpTransportTarget(('<IP-ADDR>', <PORT>)),
           ContextData(),
           ObjectType(ObjectIdentity('<MIB-FILE-NAME>', '<MIB-NAME>', <INDEX>))))
The same code works for SNMPv2 with a community string instead of UsmUserData; however, it does not work for SNMPv3.
The traceback is long and gives no clue:
File "supy.py", line 15, in <module>
ObjectType(ObjectIdentity('<MIB-FILE-NAME>','<MIB-NAME>',<INDEX>)))
File "/usr/lib/python2.7/site-packages/pysnmp/hlapi/asyncore/sync/cmdgen.py", line 111, in getCmd
lookupMib=options.get('lookupMib', True)))
File "/usr/lib/python2.7/site-packages/pysnmp/hlapi/asyncore/cmdgen.py", line 124, in getCmd
addrName, paramsName = lcd.configure(snmpEngine, authData, transportTarget)
File "/usr/lib/python2.7/site-packages/pysnmp/hlapi/lcd.py", line 60, in configure
securityName=authData.securityName
File "/usr/lib/python2.7/site-packages/pysnmp/entity/config.py", line 159, in addV3User
(usmUserEntry.name + (13,) + tblIdx1, 'createAndGo'))
File "/usr/lib/python2.7/site-packages/pysnmp/smi/instrum.py", line 256, in writeVars
return self.flipFlopFsm(self.fsmWriteVar, varBinds, acInfo)
File "/usr/lib/python2.7/site-packages/pysnmp/smi/instrum.py", line 239, in flipFlopFsm
raise origExc
pysnmp.smi.error.WrongValueError: WrongValueError({'msg': WrongValueError(), 'name': (1, 3, 6, 1, 6, 3, 15, 1, 2, 2, 1, 5, 24, 48, 48, 48, 48, 49, 100, 51, 98, 48, 48, 48, 48, 55, 53, 100, 49, 97, 99, 49, 48, 48, 49, 48, 49, 5, 107, 107, 48, 51, 48), 'idx': 3})
Please help us with some clue here.
Make sure your authentication and privacy keys comply with the minimum length required by the underlying crypto algorithms; the keys should be at least 8 characters for any of the algorithms.
Please see the link below:
https://pysnmp.readthedocs.io/en/latest/docs/api-reference.html#high-level-v3arch-asyncore
Please check and ensure that authProtocol and privProtocol are not given as strings but as OID tuples; for example, instead of using 'des', use (1, 3, 6, 1, 6, 3, 10, 1, 2, 2) or the corresponding named constant, as shown below.
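For illustration, a sketch of the UsmUserData call using pysnmp's named protocol constants instead of strings (the user name and keys are placeholders):
from pysnmp.hlapi import UsmUserData, usmHMACSHAAuthProtocol, usmAesCfb256Protocol

# Pass the protocol OID constants, not their names as strings.
auth_data = UsmUserData(
    'USERNAME',
    authKey='AUTHKEY',   # at least 8 characters
    privKey='PRIVKEY',   # at least 8 characters
    authProtocol=usmHMACSHAAuthProtocol,
    privProtocol=usmAesCfb256Protocol,
)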

Loading RSA private key

I'm trying to load a private key (generated with RSA in an external application) into a Java Card. I've written some normal Java code to generate a key pair and to print the exponent and modulus of the private key:
import java.math.BigInteger;
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.NoSuchAlgorithmException;
import java.security.interfaces.RSAPrivateKey;
import java.util.Arrays;

public class Main {
    public static void main(String[] args) throws NoSuchAlgorithmException {
        KeyPairGenerator keyGen = KeyPairGenerator.getInstance("RSA");
        keyGen.initialize(512);
        KeyPair kp = keyGen.generateKeyPair();
        RSAPrivateKey privateKey = (RSAPrivateKey) kp.getPrivate();
        BigInteger modulus = privateKey.getModulus();
        BigInteger exponent = privateKey.getPrivateExponent();
        System.out.println(Arrays.toString(modulus.toByteArray()));
        System.out.println(Arrays.toString(exponent.toByteArray()));
    }
}
I then copied the byte arrays into the Java Card code:
try {
    RSAPrivateKey rsaPrivate = (RSAPrivateKey) KeyBuilder.buildKey(KeyBuilder.TYPE_RSA_PRIVATE, KeyBuilder.LENGTH_RSA_512, false);
    byte[] exponent = new byte[]{113, 63, 80, -115, 103, 13, -90, 75, 85, -31, 83, 84, -15, -8, -73, -68, -67, -27, -114, 48, -103, -10, 27, -77, -27, 70, 61, 102, 17, 36, 0, -112, -10, 111, 40, -117, 116, -120, 76, 35, 54, -109, 115, 70, -11, 118, 92, -43, -15, -38, -67, 112, -13, -115, 7, 65, -41, 89, 127, 62, -48, -66, 8, 17};
    byte[] modulus = new byte[]{0, -92, -30, 28, -59, 41, -57, 95, -61, 2, -50, -67, 0, 6, 67, -13, 22, 61, -96, -15, -95, 20, -86, 113, -31, -91, -92, 77, 124, 26, -67, -24, 40, -42, -41, 115, -66, 109, -115, -111, -6, 33, -51, 63, -72, 113, -36, 22, 99, 116, 18, 108, 106, 97, 95, -69, -118, 49, 9, 83, 67, -43, 50, -36, -55};
    rsaPrivate.setExponent(exponent, (short) 0, (short) exponent.length);
    rsaPrivate.setModulus(modulus, (short) 0, (short) modulus.length);
}
catch (Exception e) {
    short reason = 0x88;
    if (e instanceof CryptoException)
        reason = ((CryptoException) e).getReason();
    ISOException.throwIt(reason);
}
Now for some reason, a CryptoException is thrown when setting the modulus with reason 1. According to the API, this means CryptoException.ILLEGAL_VALUE if the input modulus data length is inconsistent with the implementation or if input data decryption is required and fails.
I really have no clue why this is failing. Generating the keys on the card is not an option in this project.
And I know 512 bits is not safe any more; it's just for testing purposes. It will be replaced by 2048 bits in the end.
I figured out that the RSAPrivateKey API expects unsigned values, while BigInteger.toByteArray() returns the signed representation. This post (BigInteger to byte[]) helped me figure out that I could simply remove the leading zero byte in the modulus byte array. It's working OK now.
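For completeness, a small sketch of stripping that sign byte on the host side before copying the arrays to the card (the helper name toUnsignedBytes is just illustrative):
// Drop BigInteger's leading 0x00 sign byte, if present, so the array holds exactly the magnitude bytes.
static byte[] toUnsignedBytes(java.math.BigInteger value) {
    byte[] signed = value.toByteArray();
    if (signed.length > 1 && signed[0] == 0) {
        return java.util.Arrays.copyOfRange(signed, 1, signed.length);
    }
    return signed;
}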