SmartCard MUTUAL AUTHENTICATION - applet

Hi, I'm looking for information about how MUTUAL AUTHENTICATION works with SmartCards; I wonder if you can help me.
I'm reverse engineering APDU commands and would like to know how to calculate the MAC.
For example I have the following:
APPLET: A4 00 04 0C XX XX XX
APDU: 00 84 00 00 08 C9
TRX: 00 82 00 00 28 [seed bytes Transformed enc mac ...] 00
TRX: B0 0C 0D 81 00 97 01 5D 8E 08 [MAC] 00
I understand the first two commands perfectly, but starting with the third one (00 82) I would like to know how the MAC and what follows it are calculated, so that I can make the readings.
I would also like pointers to information and documents to read to learn more about MUTUAL AUTHENTICATION.

A short summary of Mutual Authenticate in general (I agree, ISO 7816-4 is a bit terse on the meaning concentrating on the interface):
Mutual Authenticate is the combination of an Internal Authenticate and an External Authenticate command.
First a host application requests a random number from the card.
Then this random number is encrypted in some way using a secret key, typically by applying a MAC algorithm.
The computation result is sent back to the card in the command data field of Mutual Authenticate, with another random number generated outside.
The card verifies the MAC result and if successful grants an access right. It also computes a MAC from the externally supplied random number using a different key, and sends this as command response. If unsuccessful some provision against brute force is made, either by an error counter blocking the key or by a substantial delay.
The host application verifies the MAC from the card. If the result is correct, the host application can be sure it's a "legal" card.
Two points are critical:
- how to separate and encode the MAC and the externally provided random number - e.g. as a TLV structure using two data objects
- how to identify both keys, since only one can be specified in P2.
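The flow above can be sketched end-to-end. This is a minimal illustration only: the keys are made up, and HMAC-SHA256 truncated to 8 bytes stands in for whatever MAC algorithm the card actually uses (often a 3DES Retail MAC or AES-CMAC), so it will not reproduce the bytes of a real trace:

```python
import hashlib
import hmac
import os

# Hypothetical shared secrets; a real card holds these in the applet and
# identifies them via P1/P2 or key references.
K_HOST = bytes.fromhex("00112233445566778899AABBCCDDEEFF")
K_CARD = bytes.fromhex("FFEEDDCCBBAA99887766554433221100")

def mac(key, data):
    # Stand-in MAC algorithm; real cards typically use 3DES Retail MAC
    # or AES-CMAC, truncated to 8 bytes.
    return hmac.new(key, data, hashlib.sha256).digest()[:8]

# 1. GET CHALLENGE (INS 0x84): host asks the card for a random number.
card_rnd = os.urandom(8)

# 2. Host generates its own random and MACs both challenges with its key.
host_rnd = os.urandom(8)
host_cryptogram = mac(K_HOST, card_rnd + host_rnd)

# 3. EXTERNAL/MUTUAL AUTHENTICATE (INS 0x82): host sends
#    host_rnd || host_cryptogram; the card recomputes and compares.
card_ok = mac(K_HOST, card_rnd + host_rnd) == host_cryptogram

# 4. The card answers with a cryptogram over the host's random, computed
#    with a *different* key; the host verifies it in turn.
card_cryptogram = mac(K_CARD, host_rnd + card_rnd)
host_ok = mac(K_CARD, host_rnd + card_rnd) == card_cryptogram
```

To match a real trace you would have to identify the actual key set and MAC algorithm of the applet (ISO 7816-4 leaves both open; GlobalPlatform SCP02/SCP03 are common concrete instances).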


CoinGecko get all pages at once with one call

I see that the following endpoint is paginated:
https://api.coingecko.com/api/v3/coins/bitcoin/tickers?page=1
Each page has 100 results, but in my app I need all the results at once. Since this is a free API, I researched and found that there are 65 pages available, which would exceed my quota, since I only have 10-50 requests per minute and here I'd be making 65 requests. Is there a way or format I can use to get all pages' results in one call?
I tried
https://api.coingecko.com/api/v3/coins/bitcoin/tickers?page=1,2,3,4,5
but it always returns 100 elements, although the order differs.
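As far as I know the API has no parameter that returns every page in one call, so the usual workaround is to fetch the pages sequentially and pace the requests to stay under the quota. A minimal sketch (the 65-page count and 10 requests/minute figure are taken from the question, not from CoinGecko's documentation):

```python
import json
import time
import urllib.request

BASE = "https://api.coingecko.com/api/v3/coins/bitcoin/tickers"

def fetch_page(page):
    # One network request per page; each page holds up to 100 tickers.
    with urllib.request.urlopen(f"{BASE}?page={page}") as resp:
        return json.load(resp)

def fetch_all_tickers(pages=65, per_minute=10, fetch=fetch_page):
    # Pace the requests so the quota is never exceeded. `fetch` is
    # injectable so the pacing logic can be exercised without the network.
    delay = 60.0 / per_minute
    tickers = []
    for page in range(1, pages + 1):
        tickers.extend(fetch(page).get("tickers", []))
        if page < pages:
            time.sleep(delay)
    return tickers
```

At 10 requests/minute the full 65 pages take about six and a half minutes, so caching the combined result on your side is probably worthwhile.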

AWS Route 53 CNAME with long destination URL not allowed

I'm adding a CNAME entry into Route 53. The URL I'm trying to redirect to (i.e. destination) has a large number of characters. When I try to add it I get this error:
Error occurred
Bad request.
(InvalidChangeBatch 400: DomainLabelTooLong (Domain label is too long) encountered with '<my-url>', Unparseable CNAME encountered)
After some fiddling, it looks like it gives me this error if the URL is longer than 70 characters; anything shorter works fine. I can't find this limit documented anywhere, so is it a bug? Is there any way to increase this limit?
Here is the relevant information from the Route53 documentation:
Domain names (including the names of domains, hosted zones, and
records) consist of a series of labels separated by dots. Each label
can be up to 63 bytes long. The total length of a domain name cannot
exceed 255 bytes, including the dots.
Wikipedia provides similar information:
A label may contain zero to 63 characters. The null label, of length
zero, is reserved for the root zone. The full domain name may not
exceed the length of 253 characters in its textual representation. In
the internal binary representation of the DNS the maximum length
requires 255 octets of storage, as it also stores the length of the
name.
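The two limits quoted above are easy to check programmatically. A small sketch of a validator implementing them (63 bytes per label, 253 characters for the full textual name):

```python
def valid_dns_name(name):
    """Check the limits quoted above: each dot-separated label may be at
    most 63 bytes, and the whole textual name at most 253 characters."""
    name = name.rstrip(".")  # ignore a trailing root dot
    if not name or len(name) > 253:
        return False
    return all(0 < len(label) <= 63 for label in name.split("."))
```

Note that the error in the question is DomainLabelTooLong, i.e. a single label (the part between two dots) exceeded 63 characters; the total length of the name is allowed to be much longer than 70.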

How to avoid leaks when paging in a RESTful web service?

We have a RESTful web service that returns a collection of tickets. Because it's possible for the collection returned to be too large to be processed in a single gulp, we've added offset and limit query parameters. The idea is that we run the query, then skip the first offset records, then return the next limit records.
The problem is that this can leak tickets.
Suppose, for example, that there are eight tickets that need work, at the time the client first queries:
ID STATUS
00 needs work
01 needs work
02 needs work
03 needs work
04 needs work
05 needs work
06 needs work
07 needs work
If the client requests the tickets that need work, with an offset of 0 and a limit of 4, we'll return:
ID STATUS
00 needs work
01 needs work
02 needs work
03 needs work
If someone else then does some work, changing some tickets to:
01 doesn't need work
02 doesn't need work
If the client then requests the tickets that need work, with an offset of 4 and a limit of 4, the results of the query will be:
ID STATUS
00 needs work
03 needs work
04 needs work
05 needs work
06 needs work
07 needs work
And after we skip the first four records, we'll return:
06 needs work
07 needs work
And tickets 04 and 05 will have been skipped.
If we go back to the ticket table on every subsequent paging request, we'll leak tickets whenever tickets on earlier pages have changed so that they fall out of the query results.
Part of me is wondering how important this is.
The client is going to request the needs work tickets on some sort of schedule. When there are more tickets than the limit, it will then page through the rest in multiple calls, incrementing offset on each call. If we do nothing, we will sometimes leak needs work tickets, but they will be picked up the next time the client requests new needs work tickets.
That is, the leaked tickets will only be leaked on this pass, they'll show up on the next.
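The scenario above can be reproduced in a few lines; this toy simulation uses plain offset/limit paging over an in-memory list standing in for the query:

```python
def page(rows, offset, limit):
    # Naive offset/limit paging over the *current* query result.
    return rows[offset:offset + limit]

needs_work = ["00", "01", "02", "03", "04", "05", "06", "07"]

first = page(needs_work, offset=0, limit=4)    # 00, 01, 02, 03

# Between the two requests, someone works tickets 01 and 02:
needs_work = [t for t in needs_work if t not in ("01", "02")]

second = page(needs_work, offset=4, limit=4)   # 06, 07

# Tickets 04 and 05 appear on neither page -- they were leaked.
leaked = {"04", "05"} - set(first) - set(second)
```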
But if it is important that we not leak tickets, I don't see any way of resolving it other than saving the identifiers of all of the needs work tickets during the first call, and then paging through the collection of identifiers, rather than through the tickets themselves.
We could, for example, when the client requests needs work tickets with an offset of zero, populate a second table with the ids of all of the tickets that need work, then return the first limit tickets that are in the second table. The next call, we use offset and limit against the second table, to determine which tickets to return.
The problem with this is we need to deal with multiple clients running simultaneously. So we need a primary key on the second table that we can match against a specific client, based on what is in the request.
I'd like to be able to manage this without putting additional burden on the client programmers. But I don't see how.
Is there any way for me to tell, by examining a request and its headers, that it came from the same client as an earlier request? I've not been able to find one.
We're currently returning paging information in the response headers:
Paging-offset: 0
Paging-limit: 4
Paging-returnedCount: 4
Paging-totalRecordCount: 54
What I'm thinking is that we might return a Paging-collection value, when we're paging, which would provide a key value into the second table. We could then require the client to provide the collection value when they make a request with offset != 0.
Does this seem reasonable? Do you think that this would put too great a burden on the client programmers?
How have other people solved this problem? Or do they just ignore it?
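The Paging-collection idea from the question might look roughly like this on the server side: the first request freezes the matching IDs under a fresh token, and later requests page through that snapshot. The names here are invented for illustration:

```python
import uuid

snapshots = {}  # Paging-collection token -> frozen list of ticket IDs

def start_collection(ticket_ids):
    # First request (offset 0): freeze the result set and hand the
    # client a token to send back on subsequent pages.
    token = str(uuid.uuid4())
    snapshots[token] = list(ticket_ids)
    return token

def get_page(token, offset, limit):
    # Later requests (offset != 0) page through the frozen snapshot,
    # so concurrent status changes cannot shift the page boundaries.
    ids = snapshots[token]
    return ids[offset:offset + limit], len(ids)
```

In practice the snapshots would need an expiry policy, and a shared store if requests can hit different servers.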
Is there any way for me to tell, by examining a request and its headers, that it came from the same client as an earlier request? I've not been able to find one.
You're not supposed to be able to - stateless protocol. In particular, if you are trying to do REST, you want the request to have all of the necessary information so that a new server can answer the request when the original server is busy.
But possibilities include giving each client its own resource to work with. There are a number of different ways you can match the request to the unique resource.
The problem with this is we need to deal with multiple clients running simultaneously.
As a rule, REST works much better if you can provide multiple clients with a common understanding of resources, rather than trying to tailor your representations to each.
Consider: Alice queries for a pile of work, Bob changes something, then Charlie queries for a pile of work. Can you live with it if Charlie gets a representation of the pile that was cached by Alice's query (ie, before Bob's change)? Cuz that's kind of how the web is designed to work....
(It doesn't have to - you can have each response set a bunch of no-cache headers. But it's something you should be thinking about, because it may be trying to tell you that the REST architectural constraints are not a great fit for your problem.)
How have other people solved this problem? Or do they just ignore it?
Well, it's a concurrent modification that's invalidating your iterator, right? Maybe you just pitch a fit and force the client to start over....
You might look into AtomSyndication, and how some services use it
For your case, I'd probably look at turning the problem around; instead of asking the server for N tickets that have some property, I'd look into asking the server for all tickets in some range that have the property. The client can just keep navigating through ranges until it fills its own bucket.
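The range-based idea above is essentially keyset ("seek") pagination: instead of an offset, the client passes the last ID it saw, so concurrent changes can no longer shift tickets across page boundaries. A sketch:

```python
def page_after(rows, last_id, limit):
    # Return up to `limit` tickets whose ID is greater than the last
    # one the client has seen (rows assumed sorted by ID).
    return [r for r in rows if r > last_id][:limit]

needs_work = ["00", "01", "02", "03", "04", "05", "06", "07"]
first = page_after(needs_work, last_id="", limit=4)    # 00..03

# Tickets 01 and 02 are worked between the requests:
needs_work = [t for t in needs_work if t not in ("01", "02")]
second = page_after(needs_work, last_id=first[-1], limit=4)
# second == ["04", "05", "06", "07"]: nothing is skipped.
```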
Another way of describing your problem is that you are trying to page through a mutable collection.
If you drop the paging constraint -- each request always fetches N unworked tickets starting from the first one, that's pretty straight forward.
If you drop the mutable constraint -- paging through an immutable list is straight forward. Making a mutable list immutable may be easy -- instead of asking for the latest version of the list, you ask for the version of the list as of some particular point in time. This is a very happy problem to have when using event sourcing.
One thing we've discussed is having one query return a list of ticket IDs, which would be small enough to return all of them all of the time, and then having a second query that returns a single ticket given its ID.
Another good answer; that's fundamentally the way web pages work -- a (relatively) small payload of HTML, with hyperlinks to the JavaScript, media, and so on that extend the representation.

IMAP fetch command race condition on sequence number change

I'm trying to work my way through RFC 3501 to determine what happens when you fetch by sequence number, but an untagged EXPUNGE or EXISTS response arrives before the FETCH response, e.g.
> C: t fetch 32 rfc822.size
> S: * 32 FETCH (RFC822.SIZE 4085)
is easy, but what about:
> C: t fetch 32 rfc822.size
> S: * 12 EXPUNGE
> S: * 32 EXISTS
> S: * 31 FETCH (RFC822.SIZE 4085)
Does the 31 refer to the new sequence number, or the sequence number referenced in the fetch?
Section 7.4.1 of RFC 3501 specifically contains this language:
An EXPUNGE response MUST NOT be sent when no command is in
progress, nor while responding to a FETCH, STORE, or SEARCH
command. This rule is necessary to prevent a loss of
synchronization of message sequence numbers between client and
server. A command is not "in progress" until the complete command
has been received; in particular, a command is not "in progress"
during the negotiation of command continuation.
This specifically forbids the example. It cannot have been sent unilaterally ("MUST NOT be sent when no command is in progress"), and it could not have been sent as a response to FETCH ("nor while responding to a FETCH, STORE, or SEARCH command").
Also see 5.5 which contains some information about race conditions when multiple commands are in progress. The client is forbidden from sending plain FETCH, STORE, or SEARCH while other types of commands are in progress, and vice versa.
Your answer should be obvious: for the 31 in the response following the EXPUNGE to reference something other than the message currently at sequence number 31 would mean the IMAP server maintains an index of sequence numbers for each command's point in time. Obviously the IMAP protocol requires no such work on the part of the server.
Furthermore, note that strictly speaking the untagged responses have nothing to do with the FETCH command; the association is merely a convention.
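Bookkeeping-wise, this means a client tracking messages by sequence number must renumber on every untagged EXPUNGE it receives; a minimal sketch:

```python
def apply_expunge(tracked, expunged):
    # After "* N EXPUNGE", message N is gone and every sequence number
    # above N shifts down by one (RFC 3501, section 7.4.1).
    return [s - 1 if s > expunged else s
            for s in tracked if s != expunged]

# The question's trace: the client was tracking message 32 when
# "* 12 EXPUNGE" arrived, so the later "* 31 FETCH" response refers
# to the same message.
tracked = apply_expunge([32], expunged=12)  # -> [31]
```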

How to get MIDI instrument model from device ID code?

Is there any public database that lets you look up the model name from the device ID code (returned in reply to the f0 7e 7f 06 01 f7 SysEx Identity Request)?
The MIDI Manufacturers Association maintains a list of IDs on their site; it seems you'll probably have to scrape it to get it into a database you can query. There is no authoritative database of devices; making your own would likely take quite a bit of time. Also, the list on the MMA's website is not updated terribly often...
Keep in mind that not all manufacturers bother to register their IDs, but at least this list is better than nothing.
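Whatever database you end up building, keying it means pulling the manufacturer ID (and family/model codes) out of the Identity Reply. A sketch of the parsing, assuming the standard layout F0 7E <dev> 06 02 <manufacturer> <family x2> <model x2> <version x4> F7:

```python
def parse_identity_reply(msg):
    # Universal Non-realtime Identity Reply: F0 7E <dev> 06 02 ...
    if msg[0] != 0xF0 or msg[1] != 0x7E or msg[3:5] != b"\x06\x02":
        raise ValueError("not an Identity Reply")
    # Manufacturer IDs are one byte, except 0x00, which introduces a
    # three-byte extended ID.
    if msg[5] == 0x00:
        mfr, rest = msg[5:8], msg[8:]
    else:
        mfr, rest = msg[5:6], msg[6:]
    family, model = rest[0:2], rest[2:4]
    return bytes(mfr).hex(), bytes(family).hex(), bytes(model).hex()
```

The manufacturer ID is then the lookup key into whatever list you scraped; family and model codes are vendor-specific, so those mappings would have to come from each vendor's documentation.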