Unable to send 128 bytes of data from a Java Card applet as a response to an APDU command using sendBytesLong(), but 127 bytes works

While sending data from the Java Card as an APDU response using the apdu.sendBytesLong() function, I am able to send 127 bytes of data, but 128 bytes gives error code 6F00 (SW_UNKNOWN).
Why is this happening, and can anybody suggest a workaround that does not split the data into two APDU commands?
le = apdu.setOutgoing();
if (le != 128)
    ISOException.throwIt(ISO7816.SW_WRONG_LENGTH);
apdu.setOutgoingLength((byte)le);
apdu.sendBytesLong(mod_PkAIKR, (short)0, le);
where mod_PkAIKR is a byte array of 128 bytes.
Thank you

Change
apdu.setOutgoingLength((byte)le);
to
apdu.setOutgoingLength(le);
apdu.setOutgoing() already returns a short, and apdu.setOutgoingLength() takes a short parameter, so no type conversion is needed.
If you cast le to byte, the value becomes negative: (byte)128 is -128, which is not a valid outgoing length, hence the error.

Related

STUN MESSAGE-INTEGRITY dummy definition

In RFC 5389, the MESSAGE-INTEGRITY calculation includes the attribute itself, but with dummy content.
The dummy content is not defined.
How can MESSAGE-INTEGRITY be verified without knowing the dummy content value?
Why would the MESSAGE-INTEGRITY calculation include itself?
Isn't it faster, and equally secure, to calculate MESSAGE-INTEGRITY without including itself?
Since the MESSAGE-INTEGRITY attribute itself is not part of the hash, you can append whatever you want for the last 20 bytes; you just replace them afterwards with the hash of all the bytes leading up to the attribute itself.
The algorithm is basically this:
Let L be the original size of the STUN message byte stream. It should be the same as the MESSAGE LENGTH value in the STUN message header.
Append a 4-byte attribute header onto the STUN message, followed by 20 null bytes.
Adjust the LENGTH field of the STUN message to account for these 24 new bytes.
Compute the HMAC-SHA1 of the first L bytes of the message (all but the 24 bytes you just appended).
Replace the 20 null bytes with the 20 bytes of the computed hash.
And as discussed in the comments, the bytes don't have to be null bytes; they can be anything, since they aren't included in the hash computation.
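To make those steps concrete, here is a minimal C sketch using OpenSSL's HMAC; msg (the raw STUN message of len bytes, with at least 24 spare bytes of room) and key (the credential-derived HMAC key) are assumed placeholders, not part of any particular implementation:

#include <stdint.h>
#include <string.h>
#include <openssl/evp.h>
#include <openssl/hmac.h>

#define STUN_ATTR_MESSAGE_INTEGRITY 0x0008

/* msg: raw STUN message of len bytes, with room for 24 more bytes.
 * key: HMAC key (e.g. the short-term credential password). */
void append_message_integrity(uint8_t *msg, size_t len,
                              const uint8_t *key, size_t key_len)
{
    /* Adjust the 16-bit LENGTH field (bytes 2-3 of the header, which
     * excludes the 20-byte STUN header) to account for the 24 new bytes. */
    uint16_t new_len = (uint16_t)(len - 20 + 24);
    msg[2] = (uint8_t)(new_len >> 8);
    msg[3] = (uint8_t)(new_len & 0xff);

    /* Append the 4-byte attribute header followed by 20 dummy bytes. */
    msg[len]     = STUN_ATTR_MESSAGE_INTEGRITY >> 8;
    msg[len + 1] = STUN_ATTR_MESSAGE_INTEGRITY & 0xff;
    msg[len + 2] = 0;
    msg[len + 3] = 20;                 /* attribute value length */
    memset(msg + len + 4, 0, 20);      /* dummy content, value irrelevant */

    /* HMAC-SHA1 over the first len bytes only, then overwrite the dummy
     * bytes with the computed hash. */
    unsigned int out_len = 20;
    HMAC(EVP_sha1(), key, (int)key_len, msg, len, msg + len + 4, &out_len);
}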
There's an implementation of MESSAGE-INTEGRITY for both short-term and long-term credentials on my Github: here and here

Sendto- forcing sending a UDP datagram of X bytes

I have a basic question on sendto:
Suppose we want the destination to receive a UDP packet of exactly X bytes; that is, it must not receive a packet of fewer than X bytes (which seems possible if sendto returns less than X). Is it possible to force the sender to send exactly X bytes, or to return an error if that is not possible? (I.e., the receiver either gets a packet of X bytes or gets no packet at all.)
Edit:
If the number of bytes sent is always X, why might the return value (the number of bytes sent) be non-negative yet less than the length of the data passed in (as described in
https://learn.microsoft.com/en-us/windows/win32/api/winsock/nf-winsock-sendto
)?
That is, it must not receive a packet of fewer than X bytes (which seems possible if sendto returns less than X).
This will never happen on a UDP socket. From the send(2) manual page:
If the message is too long to pass atomically through the underlying protocol, the error EMSGSIZE is returned, and the message is not transmitted.
In short, the behavior you are asking for is already present by default.
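A minimal sketch of what that means in code; sock, dest, and the buffer are placeholders. The only failure to handle is a -1 return (for example EMSGSIZE); a short positive count does not occur for UDP:

#include <errno.h>
#include <stdio.h>
#include <sys/types.h>
#include <sys/socket.h>

/* Send exactly X bytes as one datagram, or nothing at all. */
ssize_t send_exact(int sock, const void *buf, size_t X,
                   const struct sockaddr *dest, socklen_t dest_len)
{
    ssize_t n = sendto(sock, buf, X, 0, dest, dest_len);
    if (n < 0) {
        if (errno == EMSGSIZE)
            fprintf(stderr, "datagram of %zu bytes is too large\n", X);
        return -1;              /* nothing was transmitted */
    }
    /* For SOCK_DGRAM, n == X here: the datagram went out in one piece. */
    return n;
}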

Writing partial packet to SSL BIO

I have a socket application that reads and writes data. I'm using OpenSSL to do the encryption/decryption. My question is whether the "BIO_write" method can buffer data internally or if I have to append to a growing buffer as I read more from the socket. Here's what I'm doing.
I read from the socket and I use the class method below to write all the bytes that were read into the BIO:
int CSslConnectionContext::Store(BYTE* pbData, DWORD dwDataLength)
{
    int bytes = 0;
    if (dwDataLength > 0)
    {
        bytes = BIO_write(bio[BIO_RECV], pbData, dwDataLength);
    }
    return bytes;
}
I then immediately call the SSL_read method to get decrypted data:
int CSslConnectionContext::Read(BYTE* pbBuffer, DWORD dwBufferSize)
{
    int bytes = SSL_read(ssl, pbBuffer, dwBufferSize);
    return bytes;
}
If SSL_read returns a positive number then I've got decrypted data available for my application.
What I'm not sure of is what happens when my socket read doesn't capture all of the data required for decryption in a single read.
So if I need 100 bytes to be able to decrypt the data and the first read only gets 80, can I call BIO_write() with those 80 bytes, do another socket read to get the next 20, and then call BIO_write() with just those 20 bytes?
Or do I need to write my code so when I read 80 I do something like this:
call BIO_write() with the 80 bytes.
if that returns a failure indicator, hold onto those 80 bytes.
read the next 20 bytes from the socket and append it to the buffered 80 bytes.
call BIO_write() with 100 bytes
OpenSSL holds an internal buffer, call it the SSL stack, on top of the TCP stack, and the OpenSSL library manages that stack. The BIO_xxx() functions can operate on different endpoints, e.g. memory or sockets.
The behaviour differs depending on what the BIO actually operates on. For instance, if BIO_write() writes to a memory BIO (BIO_s_mem), it never fails except on insufficient memory. But if it writes to a socket, and the socket is non-blocking, it can return an error, or it can write fewer bytes than requested when the socket buffer is full.
So how to use and handle the buffer depends on many factors, but the most notable ones are:
Blocking or non-blocking I/O
The kind of BIO object it operates on (memory, socket, etc.)
For instance, if you're using BIO_s_mem and non-blocking socket operations, the following technique can be applied:
Write the buffer using BIO_write() and check whether it failed. If it did not fail, you can be sure that you've written the whole buffer to the SSL stack.
Call SSL_read() and check for errors; if the error is WANT_READ or WANT_WRITE, you need to write more data to the SSL stack before a valid record can be read.
For the question and example:
You can write partial data (as much as you have, even 1 byte). For instance, if you read 80 bytes from the socket, write those using BIO_write(). The following SSL_read() may fail (WANT_READ, WANT_WRITE, or another error). Then you receive 20 bytes from the socket, write those using BIO_write(), and call SSL_read() again. Whenever SSL_read() returns without error, the SSL stack has decoded a valid record.
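A minimal sketch of that feed-and-retry loop, assuming a memory BIO set up as the read BIO; ssl, rbio, and the buffer names are placeholders for whatever the surrounding socket code provides:

#include <openssl/ssl.h>

/* Feed whatever arrived from the socket into the SSL stack, then try to
 * read one decrypted record. Returns >0 bytes of plaintext, 0 if more
 * network data is needed, -1 on error. */
int feed_and_read(SSL *ssl, BIO *rbio,
                  const unsigned char *net_buf, int n_read,
                  unsigned char *plain, int plain_cap)
{
    /* A memory BIO accepts partial records: 80 bytes now, 20 bytes later. */
    if (BIO_write(rbio, net_buf, n_read) != n_read)
        return -1;                       /* only fails on lack of memory */

    int n = SSL_read(ssl, plain, plain_cap);
    if (n > 0)
        return n;                        /* a complete record was decrypted */

    int err = SSL_get_error(ssl, n);
    if (err == SSL_ERROR_WANT_READ)
        return 0;                        /* record still incomplete: feed the
                                            next socket bytes and call again */
    return -1;                           /* real error */
}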
But it is quite important to understand that waiting on non-blocking sockets with select() while handling SSL reads and writes is quite cumbersome: one SSL_write() can result in multiple writes to the socket while you are already waiting for a READ event on that socket.
Please use BIO_pending() to find out how many bytes are available inside OpenSSL, and loop using its return value. It should be called before BIO_read().
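A sketch of that loop, with out_bio and the destination buffer as placeholders:

#include <openssl/bio.h>

/* Drain everything OpenSSL currently has buffered in this BIO. */
void drain_bio(BIO *out_bio, unsigned char *buf, int buf_cap)
{
    while (BIO_pending(out_bio) > 0) {
        int n = BIO_read(out_bio, buf, buf_cap);
        if (n <= 0)
            break;
        /* hand n bytes to the application / transport here */
    }
}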

How can I fetch a value from response of web_custom_request in LoadRunner

I have a LoadRunner script that I am using to make a call to a REST API to download a file. The file downloads successfully, but I also need the size of the downloaded file for verification purposes. Here is what I see in the LoadRunner console.
Action.c(50): web_custom_request("GetImage") was successful, 2373709 body bytes, 528 header bytes, 99 chunking overhead bytes.
How can I get the value 2373709? I tried the code below, but the size it returns is a little different from the one mentioned above, so it does not solve the problem.
HttpDownLoadSize=web_get_int_property(HTTP_INFO_DOWNLOAD_SIZE);
lr_output_message("File Size %i", HttpDownLoadSize);
Any help would be appreciated. Thanks in advance.
The HTTP_INFO_DOWNLOAD_SIZE property stores the total download size of the last HTTP response. This includes the total size of the headers and bodies of all responses, plus possible communication overhead. The "2373709 body bytes" is the total body size of all responses received in a particular step, so if there are several requests/responses in your custom request step, this number will be greater than the actual file size.
I'd suggest validating your response body size instead. There is no standard API to retrieve it, though (at least in LR 12.53, the latest available version). As far as I can see, your response is chunked, so I cannot suggest an efficient method; here is a rather inefficient one, based on storing the whole body in a temporary buffer (twice!):
unsigned long length = 0;
char* tmp = 0;
web_reg_save_param_ex(
    "ParamName=Body",
    "LB=",
    "RB=",
    SEARCH_FILTERS,
    "Scope=Body",
    "RelFrameID=1",
    LAST);
web_custom_request(...);
lr_eval_string_ext("{Body}", strlen("{Body}"), &tmp, &length, 0, 0, -1);
lr_output_message("body length is %lu", length);
lr_eval_string_ext_free(&tmp);
You might also need to increase the maximum HTML parameter length using web_set_max_html_param_len().
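For example (the limit value here is only an assumption; it just has to exceed the expected body size):

web_set_max_html_param_len("3000000");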
However, if you had a non-chunked, non-compressed response containing a Content-Length header, you could validate it more efficiently:
web_reg_find("Text=Content-Length: 2373709",
    "Search=Headers",
    "RelFrameID=1",
    LAST);
web_custom_request(...);

Examine data at in callout driver for FWPM_LAYER_EGRESS_VSWITCH_TRANSPORT_V4 layer in WFP

I am writing a callout driver for Hyper-V 2012 where I need to filter packets sent from virtual machines.
I added a filter at the FWPM_LAYER_EGRESS_VSWITCH_TRANSPORT_V4 layer in WFP. The callout function receives a packet buffer which I cast to NET_BUFFER_LIST. I do the following to get the data pointer:
pNetBuffer = NET_BUFFER_LIST_FIRST_NB((NET_BUFFER_LIST*)pClassifyData->pPacket);
pContiguousData = NdisGetDataBuffer(pNetBuffer, NET_BUFFER_DATA_LENGTH(pNetBuffer), 0, 1, 0);
I have a simple client-server application to test the packet data. The client is on a VM and the server is another machine. As far as I can observe, data sent from the client to the server is truncated and some garbage is added at the end. There is no issue when sending messages from the server to the client. If I don't add this layer filter, the client and server work without any issue.
The callout function receives metadata which includes ipHeaderSize and transportHeaderSize. Both of these values are zero. Are these correct values, or should they be non-zero?
Can somebody help me extract the data from the packet in the callout function and forward it safely to further layers?
Thank you.
These are TCP packets. I looked into the size and offset information; the problem seems consistent across packets.
I checked the values below in (NET_BUFFER_LIST*)pClassifyData->pPacket:
NET_BUFFER_LIST->NetBufferListHeader->NetBufferListData->FirstNetBuffer->NetBufferHeader->NetBufferData->CurrentMdl->MappedSystemVa
Only the first 24 bytes are sent correctly; the remaining bytes are garbage.
For example, the total size of the packet is 0x36 + 0x18 = 0x4E. I don't know what is in the first 0x36 bytes, which are constant for all the packets. Is it a TCP/IP header? The second part, 0x18 bytes, is the actual data which I sent.
I even tried the NdisQueryMdl() API to retrieve the data from the MDL list.
So on the receiver side I get only the first 24 bytes correct and the rest is garbage. How do I read the full buffer from a NET_BUFFER_LIST?
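For reference, a minimal sketch of the documented NdisGetDataBuffer() usage with a caller-supplied storage buffer, into which NDIS copies the data when it is not stored contiguously; the names are placeholders and this is only an illustration of the API discussed above, not a confirmed fix for the truncation described:

#include <ndis.h>

/* Obtain the full payload of one NET_BUFFER as a flat buffer. */
static PVOID GetContiguousData(NET_BUFFER *pNetBuffer,
                               UCHAR *pStorage, ULONG storageSize)
{
    ULONG dataLength = NET_BUFFER_DATA_LENGTH(pNetBuffer);
    if (dataLength > storageSize)
        return NULL;

    /* With a non-NULL Storage argument, NdisGetDataBuffer copies the data
     * into pStorage when it spans multiple MDLs; otherwise it returns a
     * pointer directly into the buffer. Either way, the returned pointer
     * covers all dataLength bytes. */
    return NdisGetDataBuffer(pNetBuffer, dataLength, pStorage, 1, 0);
}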