Custom content types: XLink vs. Atom

I'm trying to design a RESTful interface for a filesystem-like web service. To provide hyperlinkability among the various resources (files, directories, etc.), I thought I would use XLink. However, it seems there is a strange omission from XLink: content types.
Atom provides an attribute to specify the content type of a link's target as well as the linked resource's relationship to the current document, as in:
<link rel="alternate" type="text/html" href="http://example.org"/>
Because I am creating a custom content type for each of my resources' representations, this seems like an important bit of information to include in my hyperlinks.
I can kind of make out an analog to rel in the XLink spec (label, from, and to, I guess?), but why is content type missing from XLink? Is role somehow meant to convey what a client will find at the end of a link? Have I missed the purpose of XLink?

It appears XLink has purposely ignored this; the only mention of media types or representations has to do with how fragment identifiers are to be interpreted. XLink actually only defines links to be between resources, not their representations.
This means that if you use XLink you have to define your own way of specifying the expected media type of the link target, whereas if you use Atom's link you get the target media type, but not the versatility of XLink.
Since you're probably defining your own media type, it's not extremely important unless you want generic clients that don't know your media type to be able to parse the embedded links. Any client that knows about your media type can read your documentation, and will know to use XLink, Atom, HTML (the link element) or your own proprietary link semantics.
Just as an example of the latter: the Sun Cloud API uses a JSON list of objects with rel and href attributes for outgoing links.
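Purely as an illustration of the first option, a home-grown hyperlink built on XLink might end up looking something like this; the unprefixed type attribute and the media type name are made up for this sketch, since XLink itself only standardizes attributes such as href, role, arcrole, title, show, actuate, label, from and to:
<file xmlns:xlink="http://www.w3.org/1999/xlink"
      xlink:type="simple"
      xlink:href="/files/report"
      xlink:title="a file resource"
      type="application/vnd.example.file+xml"/>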

Related

How to find NXP type of NFC Tag in flutter nfc_manager

I am using the Flutter nfc_manager package for read, write, and protect operations.
Now I need to find the tag type. For example, I need to know whether the scanned tag is an NTAG213 or an NTAG216.
Is there any way to check that using nfc_manager?
There is no standard method to identify an NFC tag exactly. There are some methods that can help you deduce which tag you are dealing with, but some of them are family-specific methods that only tell you which member of a family of tags it is.
As these methods are a lower-level type of access, how to use them differs between iOS and Android.
At the low level, the different tag technologies have different methods for getting more details about the tag.
From the Flutter side you can use the from method to at least narrow down which tag technology you are working with. This from method will return null if the tag is not the right type.
Then, for example, if it is NfcA tag hardware (as the NTAG21x tags are), different NFC tag families will quite often give different low-level ATQA and SAK responses (this data is actually used to work out some of the tag technologies, but not all, e.g. MIFARE). These ATQA and SAK responses are available via different methods on Android and iOS.
Then there is the tag's UID: the first byte is supposed to be a manufacturer identifier, so for NXP all UIDs should begin with 04h.
Then there are things specific to a family of tags; for example, for the NTAG21x series you can transceive or sendMiFareCommand the GET_VERSION command (60h) to get back product information and decode it as per the datasheet.
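A rough sketch of what those steps can look like with nfc_manager on Android (this assumes the 3.x API; on iOS you would go through MiFare.from(tag) and sendMiFareCommand instead of transceive):
import 'dart:typed_data';
import 'package:nfc_manager/nfc_manager.dart';
import 'package:nfc_manager/platform_tags.dart'; // NfcA lives here in 3.x

Future<void> identifyTag() async {
  await NfcManager.instance.startSession(onDiscovered: (NfcTag tag) async {
    // Narrow down the tag technology; from() returns null if it doesn't apply.
    final nfcA = NfcA.from(tag);
    if (nfcA == null) {
      await NfcManager.instance.stopSession();
      return;
    }

    // Low-level hints: ATQA and SAK often differ between tag families,
    // and a UID starting with 0x04 indicates an NXP-manufactured tag.
    print('ATQA: ${nfcA.atqa}  SAK: ${nfcA.sak}');
    print('NXP UID prefix: ${nfcA.identifier.isNotEmpty && nfcA.identifier[0] == 0x04}');

    // NTAG21x-specific: GET_VERSION (0x60) returns product info bytes
    // that can be decoded as per the NTAG213/215/216 datasheet.
    final version = await nfcA.transceive(data: Uint8List.fromList([0x60]));
    print('GET_VERSION: $version');

    await NfcManager.instance.stopSession();
  });
}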
But is knowing the exact tag type of any use? Are you just trying to find out how much data you can store on it?
So as well as using the NTAG21x GET_VERSION command, there is the more generic approach of using the NDEF size method to get the more useful information of how much data this tag can store. The NDEF size should work on any tag that conforms to one of the NFC Forum tag standards, e.g. Type 2 in the NTAG21x case.
(Also, the size of any Type 2 tag is stored in the capability container at page 03h, in byte 02, and you can transceive or sendMiFareCommand a READ command for page 03h to any Type 2 tag to get this data.)
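Again as a sketch against the same assumed nfc_manager API (the 0x30 READ command byte and the byte-2-times-8 scaling come from the NFC Forum Type 2 Tag spec):
import 'dart:typed_data';
import 'package:nfc_manager/nfc_manager.dart';
import 'package:nfc_manager/platform_tags.dart';

Future<void> checkCapacity(NfcTag tag) async {
  // Generic route: any NFC Forum compliant tag reports its usable NDEF capacity.
  final ndef = Ndef.from(tag);
  if (ndef != null) {
    print('NDEF capacity: ${ndef.maxSize} bytes');
  }

  // Type 2 route: READ (0x30) page 0x03 returns the capability container;
  // byte 2 multiplied by 8 is the size of the data area in bytes.
  final nfcA = NfcA.from(tag);
  if (nfcA != null) {
    final cc = await nfcA.transceive(data: Uint8List.fromList([0x30, 0x03]));
    print('Data area: ${cc[2] * 8} bytes');
  }
}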
NXP provides a library called TapLinx, which provides simple APIs to interact with NXP-manufactured tags. You can use that if you don't want to get into the details.

Need feedback on the quality of a REST URL

For getting the latest valid address (of the logged in user), how RESTful is the following URL?
GET /addresses/valid/latest
Probably
GET /addresses?valid=true&limit=1
is the best, but it should then return a list. And I'd like to return an object rather than a list.
Any other suggestions?
Your url structure doesn't have much to do with how RESTful something is.
So let's instead ask which one is the 'best'. That's also a bit hard to say; it's pretty subjective.
I would generally avoid a pattern like /addresses/valid/latest. This kind of suggests that there is a 'latest' resource in the 'valid' collection, in the 'addresses' collection.
So I like your other suggestion a bit better, because it suggests that you're using an 'addresses' collection, filtering by valid items and only showing 1.
If you don't want all kinds of parameters, I would be more inclined to find a url pattern that's not literally 'addresses, but only the valid, but only the latest', but think about what the purpose is of the endpoint. Maybe something that's easier to remember like /fresh-address =)
how RESTful is the following URL?
Any identifier that satisfies the production rules described by RFC 3986 is RESTful.
General-purpose components are not supposed to derive semantics from identifiers; they are opaque. That means the server is free to encode information into those identifiers at its own discretion.
Consider Google search: does your browser care what URI is used as the target of the search form? Does your browser care about the href provided by Google with each search result? In both cases, the browser just does what it is told, which is to say it creates an HTTP request based on the representation of application state that was provided by the server.
URIs are in the same broad category as variable names in a programming language: the machines don't care, so long as the spellings are consistent with some simple constraints. People do care, so there are some benefits to having a locally consistent and logical scheme.
But there are contexts in which easily guessed URI are not what you want. See Mark Seemann 2013.
Since the semantic content of the URI is reserved for use by the server only, it follows that the server can choose to encode that information into path segments or the query part. Or both.
Spellings that can be described by a URI Template can be very powerful. The most familiar URI template is probably an HTML form using the GET method, which encodes key value pairs onto the query part of the URI; so you should think about whether that's a use case you want to support.
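As a small illustration using the URI from the question (the template notation is RFC 6570; submitting the form yields the same expanded URI):
/addresses{?valid,limit}   expands to, e.g.,   /addresses?valid=true&limit=1

<form method="GET" action="/addresses">
  <input type="checkbox" name="valid" value="true"> valid only
  <input type="number" name="limit" value="1">
  <button>Find addresses</button>
</form>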

What mime type should be used for squashfs files?

In my API, I need to provide a file/directory resource (call it a thing) in different formats including a tar.gz and as a squashfs file. I have been looking at the "official" mime types and it looks like application/x-compressed-tar is appropriate for a thing.tar.gz file.
But what about if thing is created using mksquashfs? I am not sure if the vendor-specific mime types are the answer, because I don't think there is a vendor to specify.
Also, the output of mksquashfs is usually a compressed file (the default is gzip). So I could use application/x-gzip, but since there are multiple options for compression, I don't want to have to know which compression was used: the API is focused on serving up a previously created squashfs thing, not on creating a squashfs with a specific compression as requested by the user.
Is it okay to just make your own mime type?
application/x-squashfs?
application/x-sqsh?
application/vnd.???.squashfs?
application/vnd.???.sqsh+gzip?
The vnd. namespace is reserved for registered vendor types, so don't use that (or go through the long and arduous process of registering this type with IANA before you can use it). In theory, registering it could be useful and you don't have to be a "vendor" really (though I suppose Linux or the SquashFS community could be named as the responsible governing entity).
The x- prefix is now also discouraged (and subsumed by the x. prefix) and never really provided good semantics anyway: either you have an "unofficial standard" which nobody specifies but many people know of, or you have an unknown, undocumented thing which doesn't help specify anything beyond application/octet-stream at all.
If you want to go by the book, and don't want to go through defining a MIME type via IANA (though they define a lightweight process to encourage this), the tried-and-true application/octet-stream is still good for random byte streams.
If you do want to go for a lightweight registration, something like application/filesystem with suffixes like +ext2, +ext3, +dmg (for Mac images), +ntfs etc would be my proposal, but this is without much thinking about it.
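Purely to illustrate the two routes (the filesystem type with a squashfs suffix is hypothetical and unregistered; octet-stream is the by-the-book fallback), the responses might carry headers like:
Content-Type: application/octet-stream
Content-Type: application/filesystem+squashfs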

Difference between WebPage and Article - Schema.org

As there are a limited number of options available in Schema.org, I wonder what's the best schema to use when the content doesn't fit into the other categories. For example, if I'm writing about a car (assuming there is no car schema, as I've not seen one), should I use the Article or WebPage schema?
Official documentation suggests three options:
If you publish content of an unsupported type, you have three options:
Do nothing (don't mark up the content in any way). However, before you decide to do this, check to see if any of the types supported by schema.org - such as reviews, comments, images, or breadcrumbs - are relevant.
Use a less-specific markup type. For example, schema.org has no "Professor" type. However, if you have a directory of professors in your university department, you could use the "person" type to mark up the information for every professor in the directory.
If you are feeling ambitious, use the schema.org extension system to define a new type.
Also, if you do not explicitly declare the type of a web page, it is considered to be http://schema.org/WebPage; that is the most general type you can use in this case.
Quote source
(Schema.org has a type for cars, Car, which is a Product. I’m using a parrot as example in this answer.)
You might want to differentiate between the thing the page is about and the page.
You can mark up your page with WebPage, but that doesn’t convey what the page is about / what it contains. To denote that, you need another item that can be used as value for the about / mainEntity property.
If Schema.org doesn’t offer a specific type, go up in the type hierarchy. There’s always a type that works: Thing. Or in other words: start at Thing and go down until you find the most specific type. See my answer on Webmasters SE with more details.
So a page (WebPage) about a specific parrot (Thing) could be marked up like this:
<body typeof="schema:WebPage">
<article property="schema:mainEntity" typeof="schema:Thing">
</article>
</body>
And if possible, it can be a good idea to use suitable specific types from other vocabularies (e.g., from animal or even parrot ontologies) in addition to the Schema.org types. For example, you could use the Parrot type from DBpedia:
<body typeof="schema:WebPage" prefix="dbpedia: http://dbpedia.org/resource/">
<article property="schema:about" typeof="schema:Thing dbpedia:Parrot">
</article>
</body>

Is my understanding of Media Types correct?

1) Assume that when media type name is set to "X/xml", the software agent SA is capable of identifying Hypermedia controls contained in representation format RF
a) If SA receives the following HTTP reply ( which contains RF ), then the "text" part of the media type name text/xml informs SA that it should process RF as plain XML ( thus it shouldn't try to identify Hypermedia controls )?
HTTP/1.1 200 OK
...
Content-Type: text/xml
...
<order xmlns="...">
...
<link rel="..."
href="..."/>
</order>
b) But if instead SA receives the following HTTP reply, then the "X" part in the media type name X/xml informs SA that while processing RF it should also identify Hypermedia controls?
HTTP/1.1 200 OK
...
Content-Type: X/xml
...
<order xmlns="...">
...
<link rel="..."
href="..."/>
</order>
2) If my understanding in 1) is correct, then I assume all values preceding "xml" ( say "X/vnd.Y" in the media type name X/vnd.Y+xml ) are used to inform the software agent which processing model it should use with the XML document?
EDIT:
I apologize in advance for being so verbose
1)
To clarify, XML has no hypermedia controls. I assume you meant a
hypermedia-capable XML format such as Atom (application/atom+xml).
I know XML is just a markup language and thus has no Hypermedia controls. But my understanding is that if my custom media type MyMedia/xml identifies the BeMyLink element ( defined within the XML Schema namespace MySchemaNamespace ) as a Hypermedia control, then when processing the following XML document in accordance with MyMedia/xml ( thus when the Content-Type header is set to MyMedia/xml ), the BeMyLink element is considered a Hypermedia control:
<MyCreation xmlns="MySchemaNamespace">
<BeMyLink rel="..."
href="..."/>
</MyCreation>
?
2)
Assuming "X" is "application ..."
Could you clarify what you meant to say here? Perhaps that the media type name is application/xml?
3)
If "X" is not "application" but some other type, it may not be safe
for your agent to parse the document as such.
Doesn't X just describe ( in terms of the processing model ) how a resource representation should be interpreted/parsed? As such, couldn't I name the X part of the media type name X/xml anything I want ( say blahblah/xml ), as long as agents trying to process this representation ( received via an HTTP response with the Content-Type header set to blahblah/xml ) are aware of the media type blahblah/xml ( i.e. know how to process this representation according to the instructions given by blahblah/xml )?
2. EDIT
1)
This is why you should be using standard media types, rather than
custom media types -- because in the end, the media type itself is not
a driver of application behavior
Isn't a downside to using standard media types that when an agent receives a resource representation, it won't know just by checking the media type value whether it semantically understands the representation, while with custom media types an agent can figure out just by looking at the media type value whether it knows the semantic meaning of the representation?
2)
This is why you should be using standard media types ...
Then I also assume we should only be using standard ( ie those registered with IANA ) rel values?
3)
Higher level application semantics are communicated through external
documentation, not the media type. Through the documentation of rels
and links and what the named values within the representation mean.
Perhaps I'm nitpicking, but ... you say higher level semantics should be communicated through external documentation and not media type, but then you note that documentation of rel values should convey higher level semantics. But aren't rel values in lots of cases ( not always, as rel values can also be independently defined, such as those registered with IANA ) considered as being part of a media type?
Thank you
To clarify, XML has no hypermedia controls. I assume you meant a hypermedia-capable XML format such as Atom (application/atom+xml).
1a) From RFC 2046 section 3:
Plain text is intended to be displayed "as-is". ... Other subtypes are to be used for enriched text in forms where application software may enhance the appearance of the text, but such software must not be required in order to get the general idea of the content.
In your example, your software agent receiving a response of text/xml may choose to enhance the display of the document (clickable links, syntax highlighting, etc). See note 1.
1b) Assuming "X" is "application" then yes, your agent may freely parse the document for hypermedia controls and use them for deciding future operation. If "X" is not "application" but some other type, it may not be safe for your agent to parse the document as such.
You're basically right. For more information, check out RFC 6839 section 4.1 and RFC 3023.
Response to edits:
A media type takes the form type/[vnd.]subtype[+suffix]. Since "type" is a little ambiguous in this context, we usually call it the "top-level media type." There's only a small handful of reasons to ever declare a new top-level media type, so unless you're absolutely sure you need it, stick with the standards: text, image, audio, video, and application. The [vnd.] is the optional vendor prefix which is used to denote non-standard subtypes. The [+suffix] is used to denote when your custom subtype is a specialization of an existing standard subtype.
If you want to define a custom XML format, use application/vnd.mymedia+xml. Using application indicates that the document is intended to be used in conjunction with programs as opposed to human display. Using vnd. indicates that it's non-standard. Using +xml means, well, that it's an XML document. And mymedia is the name of your subtype.
It's the subtype, not the top-level media type, that indicates what you refer to as the processing model. If an agent knew how to parse vnd.mymedia+xml it wouldn't (theoretically) matter if it was application/vnd.mymedia+xml or audio/vnd.mymedia+xml as both types refer to the same document format. Whether audio/vnd.mymedia+xml makes any sense or not is a separate issue.
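Tying that back to the BeMyLink example from the earlier edit, a response using this naming could look something like this (mymedia is just the placeholder subtype from above):
HTTP/1.1 200 OK
...
Content-Type: application/vnd.mymedia+xml
...
<MyCreation xmlns="MySchemaNamespace">
<BeMyLink rel="..."
href="..."/>
</MyCreation>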
Note 1: In practice, you can probably treat text/xml as application/xml without issue. However, you can't treat application/xml as text/xml due to the possibility of non-printable data.
You're basically on track. The media type represents the processing model for the representation. The processing model is what "knows" about hypermedia elements.
So, as mentioned, text/xml is not a hypermedia media type because raw XML has no concept of hypermedia controls. Whereas, XHTML is clearly a hypermedia media type.
The processing model based on the media type effectively represents the syntax of the representation, as well as some level of processing semantics at the MEDIA TYPE level.
What they do not do, is represent semantics at an APPLICATION level.
This is why you should be using standard media types, rather than custom media types -- because in the end, the media type itself is not a driver of application behavior.
Consider application/vnd.invoice+xml. This implies (based on the name) that this media type represents some kind of typed resource, such as an invoice, in contrast to simply application/xhtml+xml. The HTML format clearly has no "invoice" semantics; they're not even implied.
By using generic media types, the clients and applications can layer their own semantics upon the representation as the application requires. Higher level application semantics are communicated through external documentation, not the media type. Through the documentation of rels and links and what the named values within the representation mean.
Addenda:
The media type is syntax, not semantics. When you get an HTML form from a request, for example, is the form for submitting address information? airport reservations? tax information? The HTML media type can represent all of these, but none of these are related to the HTML media type at all.
If you followed the "registerCar" rel, the odds are pretty high that the form you got back has something to do with registering a car; HTML has no say in it. But the details surrounding the "registerCar" rel are documented externally. There may well be markers in the representation, such as <title>Car Registration Form</title>, but that, too, is something delegated to the documentation. And there are no criteria that guarantee a resource representation is able to convey, by itself, the semantics under which it is used.
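For instance (the URI and markup below are invented for illustration; only the rel name carries the application meaning, and that meaning lives in your documentation, not in HTML):
<a rel="registerCar" href="/car-registrations/new">Register a car</a>

<form action="/car-registrations" method="POST">
  <input name="vin">
  <input name="owner">
  <button>Register</button>
</form>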
If you got a form back with a name and an address, whose name is it? Which address? If you do "GET /person/1/homeaddress", then it's fair to say that you got their home address. But introspection of the payload doesn't necessarily tell you that. It's all a matter of context.
As for media types that document rels, some do, some don't. Atom has some, HTML doesn't. HAL doesn't. IANA has a list of generic standardized rels, but those are independent of media type. They would be applicable in either HTML or HAL. They're orthogonal.
If you're going to use standardized rels, then you should stick close to the semantics as standardized, thus leveraging the standard and the wide knowledge that they bring. But in the end how your application processes different rels is up to your application.