Trying to save ECDSA private and public keys to file using mbedtls

I'm working on an embedded bootloader that needs to verify ECDSA signatures for programs being loaded. To accomplish this, I am trying to do the following:
First time - generate the private and public keys. Sign any application using the private key, and place the public key in the bootloader for verification.
Subsequent times - read the private and public keys from file. Sign any application using the private key; the public key is already in the bootloader, so nothing needs to change there.
My issue is with saving the private key. The first run of the signing software doesn't find the key files, so it calls mbedtls_ecdsa_genkey, which works and gives me both keys. I tried writing them to files like this:
Attempt 1) For both keys, calls to
mbedtls_ecp_point_write_binary(&ctx->MBEDTLS_PRIVATE(grp), &key->MBEDTLS_PRIVATE(Q), MBEDTLS_ECP_PF_UNCOMPRESSED, &len, buf, sizeof buf);
and
mbedtls_ecp_point_write_binary(&ctx->MBEDTLS_PRIVATE(grp), &key->MBEDTLS_PRIVATE(d), MBEDTLS_ECP_PF_UNCOMPRESSED, &len, buf, sizeof buf);
and fwriting them to their own files.
On the second run, I read them both back in with
mbedtls_ecp_point_read_binary(...)
and this appears to work. However, although the keys look identical under the debugger, signing fails with a crash somewhere in mbedtls_internal_aes_encrypt.
So for the private key I instead tried mbedtls_ecp_write_key/mbedtls_ecp_read_key. The key came out half the size (using MBEDTLS_ECP_DP_SECP521R1): the first method gave me keys of 133 bytes each, the second made the private key 66 bytes, since I did this when writing:
len = (key->MBEDTLS_PRIVATE(grp).nbits + 7) / 8; // = 66
mbedtls_ecp_write_key(key, buf, len);
Same issue, though, crashing in mbedtls_internal_aes_encrypt.
I've been digging through mbedtls_ecdsa_genkey to see what else it does that I am obviously missing, but I have been unable to spot it yet.
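For reference, this is roughly the save/restore flow I am aiming for (a sketch only, with error handling and the fwrite/fread calls left out; I'm assuming d should be written as a scalar with mbedtls_ecp_write_key rather than as a point, and that mbedtls_ecp_read_key loads the group again when reading it back):
unsigned char pub[133]; // uncompressed SECP521R1 point: 2*66 + 1 bytes
unsigned char prv[66];  // SECP521R1 private scalar
size_t pub_len = 0;
size_t prv_len = sizeof prv;

// Save: Q is a point, d is a scalar.
mbedtls_ecp_point_write_binary(&key->MBEDTLS_PRIVATE(grp), &key->MBEDTLS_PRIVATE(Q),
                               MBEDTLS_ECP_PF_UNCOMPRESSED, &pub_len, pub, sizeof pub);
mbedtls_ecp_write_key(key, prv, prv_len);

// Restore into a freshly initialised context.
mbedtls_ecdsa_context key2;
mbedtls_ecdsa_init(&key2);
mbedtls_ecp_read_key(MBEDTLS_ECP_DP_SECP521R1, &key2, prv, prv_len);  // also loads grp
mbedtls_ecp_point_read_binary(&key2.MBEDTLS_PRIVATE(grp), &key2.MBEDTLS_PRIVATE(Q),
                              pub, pub_len);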


How to test data transport between two local sockets

I have two classes: ReceiverSocket, which receives data, and SenderSocket, which sends data.
explicit SenderSocket(const std::string &receiver_ip, const int port_num);
In debug mode, I pass receiver_ip = "127.0.0.1". Here is the declaration of its SendPacket method:
void SendPacket(const std::vector<unsigned char> &data);
Similar to SenderSocket, ReceiverSocket can receive data on a port.
explicit ReceiverSocket(int port_number);
const std::vector<unsigned char> GetPacket() const;
//The method returns true on success, false otherwise.
const bool BindSocketToListen() const;
In order to test these two classes, I have to create two executable files, one for SenderSocket and another for ReceiverSocket.
Is there any way to write a test for the data transport?
For manual tests, have you tried this using Wireshark? This topic explains how:
https://superuser.com/questions/508623/how-can-i-see-127-0-0-1-traffic-on-windows-using-wireshark?answertab=votes#tab-top
Another option, which might be more suitable for automated testing, could be to create a third executable which acts as a 'man in the middle'. Both of your executables would have to communicate through this third executable, which just passes all data through, but also allows you to listen to and validate the data.
Of course you could also use a testing framework to run the 'man-in-the-middle pass-through server'.
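If the two classes can run in the same process, you may not even need separate executables for an automated check. A single-process loopback test along these lines could be enough (a sketch only, assuming the constructors and methods declared above, and assuming GetPacket() blocks until a packet has arrived):
#include <cassert>
#include <vector>

void TestLoopbackTransport() {
    const int port = 9000;                  // any free local port
    ReceiverSocket receiver(port);
    assert(receiver.BindSocketToListen());  // must succeed before sending

    SenderSocket sender("127.0.0.1", port);
    const std::vector<unsigned char> payload = {'h', 'e', 'l', 'l', 'o'};
    sender.SendPacket(payload);

    const std::vector<unsigned char> received = receiver.GetPacket();
    assert(received == payload);            // data arrived unchanged
}
If GetPacket() turns out to be non-blocking, you would need to poll or run the receive side on its own thread, but the idea of exercising both classes over 127.0.0.1 inside one test binary stays the same.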

Converting between JavaScript keycodes and libGDX keycodes

I am porting an existing web game to libGDX. The game is controlled by a script I would rather not change.
The script specifies actions to run on certain keypress events using their JavaScript keycode values.
For example, "OnKeyPress=32:" would define actions to run when space is pressed, and "OnKeyPress=40:" would define actions to run when down is pressed, etc.
Now, as libGDX uses a different keycode system, I need some way to fire my existing events when the correct key is pressed.
@Override
public boolean keyDown(int keycode) {
    // convert the input processor's keycode to JavaScript's?
    return false;
}
The only thing I can think of is that I have to create some sort of large static hashmap mapping between libGDX's Input.Keys and com.google.gwt.event.dom.client.KeyCodes.
But before going to that bother, I thought I'd ask around in case there's a better way.
Thanks,
In the end I just created a class to associate one keycode (com.badlogic.gdx.Input.Keys) with another (com.google.gwt.event.dom.client.KeyCodes).
Fairly easy, but anyone doing the same should pay careful attention that not everything has a 1:1 mapping.
For example:
ALT_LEFT and ALT_RIGHT both map to ALT in GWT/JavaScript.
Likewise CTRL left and right, and SHIFT left and right.
libGDX, meanwhile, treats BACKSPACE and DELETE as the same key.
This is natural, as libGDX needs to be cross-platform and so has different requirements.
Other than that, an ImmutableMap seemed to do the job:
/**
 * Static mapping between libGDX keycodes and JavaScript/GWT keycodes.
 **/
static ImmutableMap<Integer, Integer> keyCodeConversion = ImmutableMap.<Integer, Integer>builder()
    .put(Input.Keys.UP, GwtKeyCodeCopy.UP)
    .put(Input.Keys.DOWN, GwtKeyCodeCopy.DOWN)
    .put(Input.Keys.LEFT, GwtKeyCodeCopy.LEFT)
    .put(Input.Keys.RIGHT, GwtKeyCodeCopy.RIGHT)
    // ... (lots more)
    .build();

public static int getJavascriptKeyCode(int gdxKeycode) {
    return keyCodeConversion.get(gdxKeycode);
}

How do you define backdoor access for fields which span two registers?

I have a register map which has 16-bit-wide registers. I have a field which is wider than 16 bits, so it must span two addresses. How do I define the backdoor access to this field?
This is what I tried for my field test_pattern[23:0]:
register_a.add_hdl_path_slice("path.to.regmap.test_pattern[15:0]", 0, 16);
register_b.add_hdl_path_slice("path.to.regmap.test_pattern[23:16]", 0, 8);
This fails with this error:
ERROR: VPI TYPERR
vpi_handle_by_name() cannot get a handle to a part select.
It is not clear if this is a constraint of my tool, or of how the UVM code uses the VPI. After poking around inside the UVM code I see the code that should handle part-selects, but it is inside #ifdef QUESTA directives so I think this is a tool constraint.
Is there a good work around for this?
According to the UVM Class Reference:
function void add_hdl_path_slice(string name,
                                 int offset,
                                 int size,
                                 bit first = 0,
                                 string kind = "RTL")
I'm guessing the solution should use the offset to select the starting index.
register_a.add_hdl_path_slice("path.to.regmap.test_pattern", 0, 16);
register_b.add_hdl_path_slice("path.to.regmap.test_pattern", 16, 8);
A possible alternative is a bit-select in a for loop:
for (int i = 0; i < 16; i++) begin
    string tmp_path_s;
    tmp_path_s = $sformatf("path.to.regmap.test_pattern[%0d]", i);
    register_a.add_hdl_path_slice(tmp_path_s, i, 1);
end
for (int i = 0; i < 8; i++) begin
    string tmp_path_s;
    tmp_path_s = $sformatf("path.to.regmap.test_pattern[%0d]", i+16);
    register_b.add_hdl_path_slice(tmp_path_s, i, 1);
end
It's a great pity that whoever contributed this code (presumably Mentor?) felt it necessary to wrap a useful feature of a Universal library in ifdefs. In fact it's even worse on the UVM_1_2 branch, where the whole DPI/PLI interface file is split into simulator-specific implementations!
Looking at distrib/src/dpi/uvm_hdl.c on the master branch of git://git.code.sf.net/p/uvm/code, it looks like the only QUESTA-specific code is these two functions:
static int uvm_hdl_set_vlog_partsel(char *path, p_vpi_vecval value, PLI_INT32 flag);
static int uvm_hdl_get_vlog_partsel(char *path, p_vpi_vecval value, PLI_INT32 flag);
which use the following DPI-defined types and functions:
svLogic logic_bit;
svGetBitselLogic(&bit_value,0);
svLogicVecVal bit_value;
svGetPartselLogic(&bit_value,value,i,1);
svPutPartselLogic(value,bit_value,i,1);
In theory, if both your simulator and the Mentor code are compliant with the standard, you could remove the ifdefs and it should still work.
You could also do this by detecting the part-select in the path and using vpi_handle_by_index to read the individual bits, which should also be supported in any simulator.
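For illustration, that bit-by-bit VPI approach might look something like this (a sketch only, not the actual uvm_hdl.c code; it assumes the part-select has already been stripped from the path and the bit range is passed in separately):
#include <vpi_user.h>

// Read path[hi:lo] one bit at a time via bit-selects, avoiding
// vpi_handle_by_name() on a part-select. Returns 1 on success.
static int read_partsel(char *path, int lo, int hi, unsigned int *result)
{
    vpiHandle vec = vpi_handle_by_name(path, NULL);  // handle to the whole vector
    if (vec == NULL)
        return 0;

    *result = 0;
    for (int i = lo; i <= hi; i++) {
        vpiHandle bit = vpi_handle_by_index(vec, i); // single bit-select handle
        if (bit == NULL)
            return 0;
        s_vpi_value val;
        val.format = vpiScalarVal;
        vpi_get_value(bit, &val);
        if (val.value.scalar == vpi1)
            *result |= 1u << (i - lo);
    }
    return 1;
}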
NB: my original answer was wrong about the code being Mentor-specific - thanks to @dave_59 for setting me straight, and apologies to Mentor.
Is there some reason why you aren't splitting this into 2 registers? Since your register size is 16 bits, it doesn't make sense to declare a register that is larger than that.
The way I've seen large fields like this defined is to declare 2 registers with a separate field in each. For example, if you needed a 32-bit pointer you'd have:
addr_high with a 16 bit field
addr_low with a 16 bit field
For convenience, you could add a task that would access both in sequence.

Encrypting 16 bytes of UTF8 with SecKeyWrapper breaks (ccStatus == -4304)

I'm using Apple's SecKeyWrapper class from the CryptoExercise sample code in the Apple docs to do some symmetric encryption with AES128. For some reason, when I encrypt 1-15 characters or 17 characters, it encrypts and decrypts correctly. With 16 characters, I can encrypt, but on decrypt it throws an exception after the CCCryptorFinal call with ccStatus == -4304, which indicates a decode error. (Go figure.)
I understand that AES128 uses 16 bytes per encrypted block, so I get the impression that the error has something to do with the plaintext length falling on the block boundary. Has anyone run into this issue using CommonCryptor or SecKeyWrapper?
The following lines...
// We don't want to toss padding on if we don't need to
if (*pkcs7 != kCCOptionECBMode) {
    if ((plainTextBufferSize % kChosenCipherBlockSize) == 0) {
        *pkcs7 = 0x0000;
    } else {
        *pkcs7 = kCCOptionPKCS7Padding;
    }
}
... are the culprits of my issue. To solve it, I simply had to comment them out.
As far as I can tell, no padding was being applied on the encryption side, but padding was still expected on the decryption side, causing decryption to fail (which is exactly what I was experiencing).
Always using kCCOptionPKCS7Padding to encrypt/decrypt is working for me so far, for strings that satisfy length % 16 == 0 and those that don't. And, again, this is a modification to the SecKeyWrapper class of the CryptoExercise example code. Not sure how this impacts those of you using CommonCrypto with home-rolled wrappers.
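For anyone using CommonCrypto directly rather than SecKeyWrapper, "always pad" just means passing kCCOptionPKCS7Padding to CCCrypt for both the encrypt and the decrypt call. A minimal sketch (key and IV handling elided; both are assumed to be valid 16-byte buffers):
#include <CommonCrypto/CommonCryptor.h>
#include <vector>

std::vector<unsigned char> aes128(CCOperation op, const unsigned char *key,
                                  const unsigned char *iv,
                                  const std::vector<unsigned char> &in)
{
    // With PKCS#7 padding the output may be up to one block larger than the input.
    std::vector<unsigned char> out(in.size() + kCCBlockSizeAES128);
    size_t moved = 0;
    CCCryptorStatus status = CCCrypt(op, kCCAlgorithmAES128,
                                     kCCOptionPKCS7Padding,   // pad in both directions
                                     key, kCCKeySizeAES128, iv,
                                     in.data(), in.size(),
                                     out.data(), out.size(), &moved);
    if (status != kCCSuccess)  // e.g. kCCDecodeError (-4304) on a padding mismatch
        out.clear();
    else
        out.resize(moved);
    return out;
}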
I too have encountered this issue using CommonCrypto, but for ANY string whose length was a multiple of 16.
My solution is a total hack since I have not yet found a real solution to the problem.
I pad my string with a space at the end if its length is a multiple of 16. It works for my particular scenario, since the extra space does not affect how the data is received on the other side, but I doubt it would work for anyone else's scenario.
Hopefully somebody smarter can point us in the right direction to a real solution.

I'm sending a command to a serial COM port in C# and not getting data back, but when I use PuTTY I get data - what am I doing wrong?

I have a C# application which I'm writing to try to automate data extraction from a serial device. As the title of my question says, I have tried the exact same commands in PuTTY and I get data back. Could somebody please tell me what I have missed, so that I can get the same data out of my C# application?
Basically, I need to connect to COM6 at a baud rate of 57600 and send the command "UH" (without quotes). I should then be presented with a few lines of text data, which currently only happens in PuTTY.
As a quick test, I threw this together:
private SerialPort serialPort = new SerialPort();

private void getHistory_Click(object sender, EventArgs e)
{
    serialPort.DataReceived += new SerialDataReceivedEventHandler(serialPort_DataReceived);
    serialPort.PortName = "COM6";
    serialPort.BaudRate = 57600;
    serialPort.Open();
    if (serialPort.IsOpen)
    {
        serialPort.Write("UH");
    }
}

private void serialPort_DataReceived(object sender, SerialDataReceivedEventArgs e)
{
    string result = serialPort.ReadExisting();
    Invoke(new MethodInvoker(delegate { textbox1.AppendText(result); }));
}
The DataReceived event does get fired, but it only returns the "UH" I sent, and no further data. Any help with this problem would be highly appreciated!
Justin
Well, without further detail of the device in question, it is hard to say for sure, but two things spring to mind:
Firstly, what comms protocol does the device require? You have set up the baud rate, but make no mention of data bits, parity, or stop bits. I think the .NET SerialPort class defaults to 8,N,1. If your device expects the same, then you should be fine; if not, it won't work.
Secondly, does the device require any kind of termination on the data to mark a complete packet? Commonly the data sent is appended with a carriage return and a line feed (0x0D and 0x0A), or perhaps it has a prefix of STX (0x02) and a suffix of ETX (0x03).
Any message that the device responds with is likely to be in the same format too.
I don't know how PuTTY works, but check its setup and see whether it is appending anything to the message you type, and what protocol settings it uses. HyperTerminal does this too, so you could test with that as well.