Can REST URLs send back UI rendering hints as metadata? - rest

I could not find a definitive document mentioning if UI rendering hints can be sent back as REST metadata or not. First of all, what all can be classified as REST metadata? Surely, data type of attribute can be metadata but can presentation hints like whether an attribute is single-valued or multi-valued also be described metadata? What about "ishidden" and "isReadOnly"? As per my understanding information like min/max/regexp/fixed value are OK as metadata but not sure if anything related to presentation hints like the one I mentioned above are good candidates for REST metadata? Any pointers would be of utmost help.
Thanks,
Paddy

Honestly, you are free to send back whatever you want to the REST client in your payload. However, I'm not sure it's a good idea to always include these UI-oriented metadata; after all, you can also have applications that consume the data without any UI concerns.
You could implement a mechanism that lets you select the metadata level to return to the REST client through content negotiation (conneg based on the Accept header), similar to what OData does. Here is a sample:
GET serviceRoot/People
Accept: application/json;odata.metadata=minimal
You could imagine the following values for the header Accept:
No metadata: application/json;metadata=none
Structural metadata (property type, ...): application/json;metadata=minimal
Validation metadata (useful to determine the expected values for properties): application/json;metadata=validation
UI rendering metadata (readonly, ...): application/json;metadata=rendering
You can then structure the content as described below:
{
"property1": "value",
// Structural
"property1#metadata": {
"type": "string"
},
"property2": 10,
// Structural + validation
"property2#metadata": {
"type": "integer"
"minValue": 2,
"maxValue": 15
},
"property3": 10,
// Structural + ui rendering
"property3#metadata": {
"type": "integer"
"minValue": 2,
"maxValue": 15,
"readOnly": true,
"hidden": false
}
}
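To make this concrete, here is a minimal, hypothetical Express sketch of how a server could pick the metadata level out of the Accept header's parameter and decorate the payload accordingly (the route, property names and metadata levels are just the ones from the example above, not a standard):
const express = require("express");
const app = express();

// e.g. "application/json;metadata=rendering" -> "rendering"
function metadataLevel(req) {
  const accept = req.get("Accept") || "";
  const match = /metadata=(none|minimal|validation|rendering)/.exec(accept);
  return match ? match[1] : "none";
}

app.get("/people/:id", (req, res) => {
  const payload = { property3: 10 };
  const level = metadataLevel(req);
  if (level !== "none") {
    // structural metadata
    payload["property3#metadata"] = { type: "integer" };
  }
  if (level === "validation" || level === "rendering") {
    // validation metadata
    Object.assign(payload["property3#metadata"], { minValue: 2, maxValue: 15 });
  }
  if (level === "rendering") {
    // UI rendering metadata
    Object.assign(payload["property3#metadata"], { readOnly: true, hidden: false });
  }
  res.json(payload);
});

app.listen(8080);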
If you want to have a look at how metadata are handled within OData v4, you can use the following links from odata.org:
Basic tutorial - http://www.odata.org/getting-started/basic-tutorial/
Advanced tutorial - http://www.odata.org/getting-started/advanced-tutorial/
EDIT: in a comment, inf3rno points out that the Prefer header could also be used to specify the metadata level required.
Here is a sample:
GET serviceRoot/People
Accept: application/json
Prefer: metadata=rendering
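For completeness, a small client-side sketch (using fetch; the URL is the placeholder from above) that requests the rendering metadata via Prefer; a server that honours the preference can echo it back in the standard Preference-Applied response header from RFC 7240:
fetch("serviceRoot/People", {
  headers: {
    Accept: "application/json",
    Prefer: "metadata=rendering"
  }
})
  .then(function (response) {
    // the server may report which preference it actually applied
    console.log(response.headers.get("Preference-Applied"));
    return response.json();
  })
  .then(function (people) {
    console.log(people);
  });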
Hope it helps you,
Thierry

Related

Designing rest api for nested resources

I have the following resources in my system: 1. Services, 2. Features, where a feature has the following JSON structure:
{
id: "featureName",
state: "active",
allowList: [serviceID1, serviceID2],
denyList: [serviceID3, serviceID4]
}
I am trying to update the allowList or denyList, which consist of serviceIDs, and I'm thinking of using the PATCH method to do it like below:
/features/{featureId}/allowlist
/features/{featureId}/denylist
/features/{featureName}/state/{state}
My first question is: should I even include allowlist, state, and denylist in the URL, since my resources are services and features, not the allowlist or denylist?
What should the REST endpoint look like?
After reading the thread mentioned below, I was thinking about restructuring the URLs as below:
/features/{featureId}
[
{ "op": "add", "path": "/allowList", "value": [ "serviceA", "serviceB"]},
{ "op": "update", "path": "/state", "value": false}
]
Lastly, is the use of PATCH even justified here, or is there a better way to design the API?
Note: I got some help from the thread REST design for update/add/delete item from a list of subresources, but I have not used PATCH often.
What should the REST endpoint look like?
The URI that you use to edit (PUT, PATCH) a resource should look the same as the URI that you use to read (GET) the resource. The motivation for this design is cache-invalidation; your successful writes automatically invalidate previously cached reads of the same resource (same URI).
Lastly, is the use of PATCH even justified here, or is there a better way to design the API?
In this example, the representation of the document is small compared to the HTTP headers, and the size of your patch document is close to the size of the resource representation. If that's the typical case, I'd be inclined to use PUT rather than PATCH. PUT has idempotent semantics, which general purpose components can take advantage of (for example, automatically resending requests when the response to an earlier request has been lost on the network).
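As a rough sketch of that suggestion (the URL and payload are adapted from the question; this is not the only way to do it), the client just sends the whole feature document back with PUT:
// client-side edit of the complete representation
var feature = {
  id: "featureName",
  state: "active",
  allowList: ["serviceID1", "serviceID2", "serviceA"],
  denyList: ["serviceID3", "serviceID4"]
};

fetch("/features/featureName", {
  method: "PUT",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify(feature)
}).then(function (response) {
  // 200/204 on success; the same PUT can safely be retried if the response is lost
  console.log(response.status);
});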
GET /features/1 by user1
PUT /features/1 //done by user 2
PUT /features/1 //done by user1
The PUT by user2 will not be visible to user1, so user1 will make an update based on the old state of the object (with id=1). What can be done in this situation?
Conditional Requests.
You arrange things such that (a) the GET response from the server includes validators that identify the representation, (b) the server responds 428 Precondition Required when a request lacks conditional headers, (c) clients know to read the validators from the resource metadata and use the correct conditional headers when submitting the PUT request, and (d) the server compares the validator against the current representation before accepting the new one.
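A small sketch of that flow using ETags (the URL is from the example above; whether the server uses ETag or Last-Modified as the validator is its own choice):
async function updateFeature(edit) {
  // (a) read the current representation together with its validator
  const getRes = await fetch("/features/1");
  const etag = getRes.headers.get("ETag");
  const feature = await getRes.json();

  // (c) send the change back with the validator as a precondition
  const putRes = await fetch("/features/1", {
    method: "PUT",
    headers: { "Content-Type": "application/json", "If-Match": etag },
    body: JSON.stringify(Object.assign({}, feature, edit))
  });

  // (d) the server rejects the write if someone else changed the resource meanwhile
  if (putRes.status === 412) {
    throw new Error("Precondition failed: re-read the resource and retry");
  }
  // (b) a 428 response would mean the server insists on conditional headers
  return putRes;
}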

REST request validation rules for a model

I am writing a REST API and a WEB on top. I would really like the API to provide the WEB with validation and default value information for input models.
Here is a fictional example:
{
"name": "string", // 1 to 50 characters.
"gender": "string", // Must be one of 'Male', 'Female', 'Legal Entity'
"BirthYear": "int" // [1900, 2019] - Default 1999
"weight": "decimal" // numeric(10, 2) Precision=10, Scale=2
"deceased": "bool" // Default = false.
}
I know I can use EnumDataType to list enumerations in Swagger but sometimes I have dynamic enumerations based on values in the database. Gender could for example be dynamic as people identify with new genders all the time :)
So in REST, is there a known pattern for passing such information from the API to the client, for example via the OPTIONS verb?
Can anyone point to a good article or information about something like this?
I think you should validate the type and check that the field is not empty; if the gender does not exist in the database, you return a 422 status code, which indicates a semantic error (the client wants to POST/UPDATE with the right media type and the right syntax, but with an incorrect semantic value).
The OPTIONS method, according to RFC 2616, is used to discover what a resource or server supports (allowed methods and headers, for example):
The HTTP OPTIONS method is used to describe the communication options
for the target resource.
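OPTIONS is not commonly used to carry validation rules, and there is no standard body format for it, but if you went that route, a hypothetical Express handler could return something like the description from the question (the shape of the response is entirely made up here; the gender list could be loaded from the database to keep the enum dynamic):
const express = require("express");
const app = express();

app.options("/people", (req, res) => {
  // hypothetical rules mirroring the fictional model above
  const genders = ["Male", "Female", "Legal Entity"]; // could come from the database
  res.set("Allow", "GET,POST,OPTIONS");
  res.json({
    name: { type: "string", minLength: 1, maxLength: 50 },
    gender: { type: "string", enum: genders },
    birthYear: { type: "int", minimum: 1900, maximum: 2019, default: 1999 },
    weight: { type: "decimal", precision: 10, scale: 2 },
    deceased: { type: "bool", default: false }
  });
});

app.listen(3000);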

How to model a progress "resource" in a REST api?

I have the following data structure that contains an array of sectionIds. They are stored in the order in which they were completed:
applicationProgress: ["sectionG", "sectionZ", "sectionA"]
I’d like to be able to do something like:
GET /application-progress - expected: sectionG, sectionZ, sectionA
GET /application-progress?filter[first]=true - expected: sectionG
GET /application-progress?filter[current]=true - expected: sectionA
GET /application-progress?filter[previous]=sectionZ - expected: sectionG
I appreciate the above URLs are incorrect, but I'm not sure how to name/structure them to get the expected data, e.g. are the resources here "sectionIds"?
I'd like to adhere to the JSON:API specification.
UPDATE
I'm looking to adhere to JSON:API v1.0
In terms of resources I believe I have "Section" and "ProgressEntry". Each ProgressEntry will have a one-to-one relationship with a Section.
I'd like to be able to query within the collection e.g.
Get the first item in the collection:
GET /progress-entries?filter[first]
Returns:
{
"data": {
"type": "progress-entries",
"id": "progressL",
"attributes": {
"sectionId": "sectionG"
},
"relationships": {
"section": {
"links": {
"related": "http://example.com/sections/sectionG"
}
}
}
},
"included": [
{
"links": {
"self": "http://example.com/sections/sectionG"
},
"type": "sections",
"id": "sectionG",
"attributes": {
"id": "sectionG",
"title": "Some title"
}
}
]
}
Get the previous ProgressEntry given a relative ProgressEntry. So in the following example find a ProgressEntry whose sectionId attribute equals "sectionZ" and then get the previous entry (sectionG). I wasn't clear before that the filtering of this is based on the ProgressEntry's attributes:
GET /progress-entries?filter[attributes][sectionId]=sectionZ&filterAction=getPreviousEntry
Returns:
{
"data": {
"type": "progress-entries",
"id": "progressL",
"attributes": {
"sectionId": "sectionG"
},
"relationships": {
"section": {
"links": {
"related": "http://example.com/sections/sectionG"
}
}
}
},
"included": [
{
"links": {
"self": "http://example.com/sections/sectionG"
},
"type": "sections",
"id": "sectionG",
"attributes": {
"id": "sectionG",
"title": "Some title"
}
}
]
}
I started to comment on jelhan's reply, though my answer was just too long for a reasonable comment on his objection, hence I include it here, as it more or less provides a good introduction to the answer anyway.
A resource is identified by a unique identifier (URI). A URI is in general independent of any representation format; otherwise content-type negotiation would be useless. json-api is a media-type that defines the structure and semantics of representations exchanged for a specific resource. A media-type SHOULD NOT force any constraints on the URI structure of a resource, as it is independent of it. One can't deduce the media-type to use from a given URI, even if the URI contains something like vnd.api+json, as this might just be a Web page talking about json:api. A client may as well request application/hal+json instead of application/vnd.api+json on the same URI and receive the same state information, just packaged in a different representation syntax, if the server supports both representation formats.
Profiles, as mentioned by jelhan, are just extension mechanisms for the actual media-type that allow a general media-type to be specialized through further constraints, conventions or extensions. Such profiles use URIs similar to XML namespaces, and those URIs NEED NOT but SHOULD BE dereferenceable to allow access to further documentation. There is no talk about the URI of the actual resource, other than the note in Web Linking that URIs may hint to a client which media-type to use, which I would not recommend, as this requires the client to have certain knowledge about that hint.
As mentioned in my initial comments, URIs shouldn't convey semantics; that is what link relations are for!
Link-relations
With that said, your outlined resource seems to be a collection of further resources, sections in your domain language. While pagination as defined in json:api does not map perfectly here, unless you have so many sections that you want to split them across multiple pages, the same concept can be applied using the standardized link relations defined by IANA.
Here, at one point a server may provide you a link to the collection resource which may look like this:
{
"links": {
"self": "https://api.acme.org/section-queue",
"collection": "https://api.acme.org/app-progression",
...
},
...
}
Due to the collection link relation standardized by IANA, you know that this resource may hold a collection of entries which, upon invocation, may return a json:api representation such as:
{
"links": {
"self": "https://api.acme.org/app-progression",
"first": "https://api.acme.org/app-progression/sectionG",
"last": "https://api/acme.org/app-progression/sectionA",
"current": "https://api.acme.org/app-progression",
"up": "https://api.acme.org/section-queue",
"https://api/acme.org/rel/section": "https://api.acme.org/app-progression/sectionG",
"https://api/acme.org/rel/section": "https://api.acme.org/app-progression/sectionZ",
"https://api/acme.org/rel/section": "https://api.acme.org/app-progression/sectionA",
...
},
...
}
where you have further links to go up or down the hierarchy or select the first or last section that finished. Note the last 3 sample URIs, which leverage the extension relation types mechanism defined by RFC 5988 (Web Linking).
On drilling down the hierarchy further you might find links such as
{
"links": {
"self": "https://api.acme.org/app-progression/sectionZ",
"first": "https://api.acme.org/app-progression/sectionG",
"prev": "https://api.acme.org/app-progression/sectionG",
"next": "https://api.acme.org/app-progression/sectionA",
"last": "https://api.acme.org/app-progression/sectionA",
"current": "https://api.acme.org/app-progression/sectionA",
"up": "https://api.acme.org/app-progression",
...
},
...
}
This example should just showcase how a server provides a client with all the options it may need to progress through its task. The client will simply follow the links it is interested in. Based on the link relation names provided, a client can make an informed choice on whether a given link is of interest or not. If it knows, for example, that a resource is a collection, it might traverse all the elements and process them one by one (or with multiple threads concurrently).
This approach is quite common on the Internet and allows the server to easily change its URI scheme over time as clients will only act upon the link relation name and only invoke the URI without attempting to deduce any logic from it. This technique is also easily usable for other media-types such as application/hal+json or the like and allows each of the respective resources to be cached and reused by default, which might take away load from your server, depending on the amount of queries it has to deal with.
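To illustrate the point, a client that only acts on link-relation names might walk the sections like this (a sketch; the URIs come from the sample documents above and are only ever taken from the server's links, never constructed by the client):
async function walkSections(collectionUrl) {
  const headers = { Accept: "application/vnd.api+json" };

  // GET the collection resource and follow its "first" relation
  const collection = await (await fetch(collectionUrl, { headers })).json();
  let url = collection.links && collection.links.first;

  // keep following "next" until the server stops offering it
  while (url) {
    const section = await (await fetch(url, { headers })).json();
    console.log("visited", section.links.self);
    url = section.links && section.links.next;
  }
}

walkSections("https://api.acme.org/app-progression");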
Note that nothing has yet been said about the actual content of such a section. It might be a complex summary of things typical of sections, or it might just be a word. We could classify it and give it a name; as such, even a simple word is a valid target for a resource. Further, as Jim Webber mentioned, the resources you expose via HTTP (REST) and your domain model are not identical and usually do not map one-to-one.
Filtering
json:api allows parameters to be grouped together semantically by defining a customized x-www-form-urlencoded parsing. If content-type negotiation is used to agree on json:api as the representation format, the likelihood of interoperability issues is rather low, though if such a representation is sent unsolicited, parsing of such query parameters might fail.
It is important to mention that in a REST architecture clients should only use links provided by the server and not generate one on their own. A client usually is not interested in the URI but in the content of the URI, hence the server needs to know how to act upon the URI.
The outlined suggestions can be used but also URIs of the form
.../application-progress?filter=first
.../application-progress?filter=current
.../application-progress?filter=previous&on=sectionZ
can be used instead. This approach should also work on almost all clients without the need to change their URL-encoded parsing mechanism, and the management overhead of generating URIs for other media-types may be minimized as well. Note that each of the URIs in the example above represents its own resource, and a cache will store responses to such resources based on the URI used to retrieve them. Queries like .../application-progress?filter=next&on=sectionG and .../application-progress?filter=previous&on=sectionA, which retrieve basically the same representation, are two distinct resources and will be processed twice by your API, as the response to the first query can't be reused because the cache key (URI) is different. According to Fielding, caching is one of the few constraints REST has that must be respected; otherwise you are violating it.
How you design such URIs is completely up to you here. The important thing is, how you teach a client when to invoke such URIs and when it should not. Here, again, link-relations can and should be used.
Summary
In summary, which approach you prefer is up to you, as is which URI style you choose. Clients, especially in a REST environment, do not care about the structure of the URI. They operate on link relations and use the URI just for invoking it to progress with their task. As such, a server API should help a client by teaching it what it needs to know, like in a text-based computer game of the 70s/80s, as mentioned by Jim Webber. It is helpful to think of the interaction model you design in terms of affordances and a state machine, as explained by Asbjørn Ulsberg.
While you could apply filtering on grouped parameters provided by json:api, such links may only be usable within the json:api representation. If you copy & paste such a link into a browser or some other channel, it might not be processable by that client. Therefore this would not be my first choice, TBH. Whether you design sections to be their own resources or just properties you want to retrieve is your choice here as well. We don't really know what sections are in your domain model; IMO it sounds like a valid resource, though it may or may not have further properties.

Breeze failing silently while parsing metadata

Trying to get going on Breeze but encountering the worst kind of error, which is none at all. It appears the metadata that I am producing is not being accepted by Breeze. I know there are currently some issues with the metadata, such as 'foreignKeyNamesOnServer' having incorrect values in it, and a bunch of others. The metadata I am producing can be viewed here (too large to include inline):
http://pastebin.com/ycP4jXxn
var serviceName = 'http://www.dockyard.com:8080/rest';
var entityManager = new breeze.EntityManager({serviceName: serviceName});
var entityQuery = new breeze.EntityQuery();
var query = breeze.EntityQuery.from("application");
entityManager.executeQuery(query)
.then(function (data) {
console.log(data);
}, function (error) {
console.log(error);
});
So the behaviour I am seeing: no JavaScript errors related to metadata parsing, and the metadata request returns 200 OK. The hit to /rest/application is also returning 200 OK with the following data:
[{"#id":1,"id":1,"name":"dsad","deploymentStrategies":null,"versions":null,"groups":null},{"#id":2,"id":2,"name":"sss","deploymentStrategies":null,"versions":null,"groups":null},{"#id":3,"id":3,"name":"fdsfs","deploymentStrategies":null,"versions":null,"groups":null},{"#id":4,"id":4,"name":"fdsa","deploymentStrategies":null,"versions":null,"groups":null},{"#id":5,"id":5,"name":"dasda","deploymentStrategies":null,"versions":null,"groups":null}]
Promise is calling the error callback with: cannot execute _executeQueryCore until metadataStore is populated
The contents of the metadata store:
{"namingConvention":{"name":"camelCase"},"localQueryComparisonOptions":{"name":"caseInsensitiveSQL","isCaseSensitive":false,"usesSql92CompliantStringComparison":true},"dataServices":[{"serviceName":"http://www.dockyard.com:8080/rest/","hasServerMetadata":true,"jsonResultsAdapter":"webApi_default","useJsonp":false}],"_resourceEntityTypeMap":{"platform":"Platform:#com.psidox.dockyard.controller.model.dockyard","application":"Application:#com.psidox.dockyard.controller.model.application","host":"Host:#com.psidox.dockyard.controller.model.host","groupdeploymentstrategy":"GroupDeploymentStrategy:#com.psidox.dockyard.controller.model.application","dockyard":"Dockyard:#com.psidox.dockyard.controller.model.dockyard","configurationentry":"ConfigurationEntry:#com.psidox.dockyard.controller.model","hoststrategy":"HostStrategy:#com.psidox.dockyard.controller.model.application","dockerimage":"DockerImage:#com.psidox.dockyard.controller.model.docker","version":"Version:#com.psidox.dockyard.controller.model.application","docker":"Docker:#com.psidox.dockyard.controller.model.docker","hostproviderconfig":"HostProviderConfig:#com.psidox.dockyard.controller.model.host","hostprovider":"HostProvider:#com.psidox.dockyard.controller.model.host","metadataimpl":"MetadataImpl:#com.psidox.dockyard.controller.model","deployment":"Deployment:#com.psidox.dockyard.controller.model.dockyard","hosttype":"HostType:#com.psidox.dockyard.controller.model.host","group":"Group:#com.psidox.dockyard.controller.model.application","groupimplementation":"GroupImplementation:#com.psidox.dockyard.controller.model.application","deploymentstrategy":"DeploymentStrategy:#com.psidox.dockyard.controller.model.dockyard","groupdeployment":"GroupDeployment:#com.psidox.dockyard.controller.model.application","metadata":"Metadata:#com.psidox.dockyard.controller.model"},"_structuralTypeMap":{},"_shortNameMap":{},"_ctorRegistry":{},"_incompleteTypeMap":{},"_incompleteComplexTypeMap":{},"_id":0,"_deferredTypes":{}}"
I am pretty sure this error is related to the MetadataStore not being populated correctly from my metadata. I'm just wondering why Breeze is not throwing any kind of error when it encounters invalid metadata.
Edit:
After debugging the metadata parsing call, it appears that the Breeze Metadata Schema documentation is out of date. At a quick glance, this is what appears to have changed:
The key name "structuralTypeMap" has changed to "structuralTypes".
"structuralTypeMap" used to be an object with the EntityTypeName as the key and the entity definition as the value. Now it appears that "structuralTypes" is an array of the entity definitions.
As a suggestion, should an exception perhaps be thrown if the metadata doesn't contain any structuralTypes? Currently it fails silently, which isn't very helpful for debugging.
I fear that you've jumped into the deep end of the pool before learning to swim. I admire your bravery, but I'm not surprised that you're struggling to stay afloat. You're not following any of the easy paths we've set out for you. I assume that is because none of these paths is suitable for your situation.
On the bright side, you've reinforced my sense that we must soon make it easier for developers who get their data from a custom REST service.
Problem #1
The query results do not identify the EntityType, and you didn't mention writing a custom JsonResultsAdapter to cope with that. Your question and your MetadataStore contents below suggest that you are using the out-of-the-box Web API adapter, which wouldn't know what to do with the JSON query results.
Here is one item in the JSON payload from your query, reformatted for readability
{
"#id": 1,
"id": 1,
"name": "dsad",
"deploymentStrategies": null,
"versions": null,
"groups": null
}
There's nothing in there to indicate to which EntityType this data belongs. Just looking I have no idea what type this is. Breeze won't know either.
You'll need to learn about the "JsonResultsAdapter" which is how Breeze interprets JSON data arriving from the server and maps it into instances of EntityTypes.
The Ruby Sample has a custom JsonResultsAdapter. It depends upon the fact that the server is explicit about the type of each object it returns; see how every Rails view adds a $type node (for example, the sessions:index view). This is the approach to take if you can control what the server sends the client.
The Edmunds Sample has a custom JsonResultsAdapter that has to infer the type by examining the characteristics of the JSON data. It's kind of a forensic exercise that you only want to indulge in if you have to.
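To give you a feel for it, here is a rough sketch of a custom JsonResultsAdapter in the Ruby-sample style, i.e. it assumes you change your server to tag every object with a $type node carrying the full type name (e.g. "Application:#com.psidox.dockyard.controller.model.application"). The exact hook names and return shape are worth double-checking against the current Breeze documentation:
var restAdapter = new breeze.JsonResultsAdapter({
  name: "dockyardRest",
  visitNode: function (node, mappingContext, nodeContext) {
    if (node == null || node.$type == null) return {};
    // look the type up in the MetadataStore by the name the server sent
    var entityType = mappingContext.entityManager.metadataStore
      .getEntityType(node.$type, true);
    return entityType ? { entityType: entityType } : {};
  }
});

// plug the adapter into the data service / entity manager configuration
var dataService = new breeze.DataService({
  serviceName: "http://www.dockyard.com:8080/rest",
  jsonResultsAdapter: restAdapter
});
var entityManager = new breeze.EntityManager({ dataService: dataService });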
Problem #2
The MetadataStore you serialized is empty of all type information. Here it is reformatted for legibility
{
"namingConvention": {
"name": "camelCase"
},
"localQueryComparisonOptions": {
"name": "caseInsensitiveSQL",
"isCaseSensitive": false,
"usesSql92CompliantStringComparison": true
},
"dataServices": [
{
"serviceName": "http:\/\/www.dockyard.com:8080\/rest\/",
"hasServerMetadata": true,
"jsonResultsAdapter": "webApi_default",
"useJsonp": false
}
],
"_resourceEntityTypeMap": {
"platform": "Platform:#com.psidox.dockyard.controller.model.dockyard",
"application": "Application:#com.psidox.dockyard.controller.model.application",
... a bunch more ...
},
"_structuralTypeMap": {
},
"_shortNameMap": {
},
... more emptiness ...
}
I'm not really surprised, having discovered problem #3.
Problem #3
Your raw metadata doesn't match a format that Breeze understands. It looks like you cobbled it together by hand. It sure doesn't look like anything I recognize. It doesn't match the CSDL format from Entity Framework. It doesn't match the "Breeze Metadata Format" that you'd see when you exported a MetadataStore.
It's in trouble almost immediately. Here is how you start the definition of your first type:
"structuralTypeMap": {
"Group:#com.psidox.dockyard.controller.model.application": {
"shortName": "Group",
"namespace": "com.psidox.dockyard.controller.model.application",
Here is how it should begin:
"structuralTypes": [
{
"shortName": "Group",
"namespace": "com.psidox.dockyard.controller.model.application",
I accept your point that the Breeze Metadata Schema Documentation is incorrect. We should fix that.
I'm sympathetic with your argument that Breeze should have thrown an exception. I can see why it didn't throw. It simply ignored all the nodes that it didn't understand. A lot of parsers do that, not that that is a sufficient excuse.
In this case, it ignored the "structuralTypeMap" node and everything it had to say about types. When the parser was done, it had learned nothing at all about the types. Breeze can't know how many types you'll specify but it could act suspicious if there are none.
I confess I personally never thought to use this metadata schema description as my guide. That would be just about the hardest possible way to write metadata.
I suggest that you look at the documentation topic "Metadata by hand".
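For example, a hand-written definition of just your Application type might look something like this. This is a very rough sketch built with the client-side MetadataStore/EntityType API; the property names and types are guessed from your query payload, so treat it as a starting point rather than a drop-in:
var DT = breeze.DataType;
var store = new breeze.MetadataStore();

var applicationType = new breeze.EntityType({
  shortName: "Application",
  namespace: "com.psidox.dockyard.controller.model.application"
});
applicationType.addProperty(new breeze.DataProperty({
  name: "id", dataType: DT.Int32, isPartOfKey: true
}));
applicationType.addProperty(new breeze.DataProperty({
  name: "name", dataType: DT.String
}));
store.addEntityType(applicationType);

// map the resource name used in EntityQuery.from("application") to the type
store.setEntityTypeForResourceName("application", applicationType);

var dataService = new breeze.DataService({
  serviceName: "http://www.dockyard.com:8080/rest",
  hasServerMetadata: false // the client supplies the metadata itself
});
var manager = new breeze.EntityManager({ dataService: dataService, metadataStore: store });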
In sum
Please examine a simple example first. Maybe Edmunds. Maybe Ruby.
Learn to write metadata by hand; it's not hard.
Learn about the JsonResultsAdapter
We do hope soon to offer specific guidance for the developer who has a "vanilla" REST data service.

How to use one REST API for both machines and humans?

I'm interested in building a web service with a REST API. I've been reading about HATEOAS and many of the examples explain the concept by comparing it to what humans do when they surf the web. This has me thinking, why not build the REST API in such a way that it can be easily used by both humans and machines?
For example, I have an internal model of a widget, and this widget has properties like part number, price, etc. When a machine asks for a list of widgets, I can return a JSON representation.
{
"widgets": [
{
"id": 1,
"part_number": "FOO123",
"price": 100,
"url": "/widget/1"
},
{
"id": 2,
"part_number": "FOO456",
"price": 150,
"url": "/widget/2"
},
{
"id": 3,
"part_number": "FOO789",
"price": 200,
"url": "/widget/3"
},
...
]
}
When a human requests the same list through his/her web browser, it seems like I should be able to take the same internal model and apply a different view to it to generate an HTML response. (Of course, I would decorate the HTML response with other page elements, like a header, footer, etc.)
Is this a sensible design? Why or why not? Are there any popular sites actually doing it?
The biggest drawback that I see is there is no obvious way for a user to delete a resource. In my use case, I'm not going to let users modify or delete resources, so this is not a deal-breaker, but in general how might you handle that?
#mehaase
First of all, I'd suggest using one of the registered JSON hypermedia formats:
Collection+JSON: http://amundsen.com/media-types/collection/format/
Collection.next+JSON: http://code.ge/media-types/collection-next-json/
HAL - Hypertext Application Language: http://stateless.co/hal_specification.html
All of them offer explicit semantics for creating links with semantic link relations.
For example with Collection(.next)+JSON you can express your widgets like this:
{"collection": {
"version": 1.0,
"items": [{
"href": "/widget/1",
"data": [{
"name": "id",
"value": 1,
"prompt": "ID"
}, {
"name": "part_number",
"value": "FOO123",
"prompt": "Part number"
}, {
"name": "price",
"value": 100,
"prompt": "Price"
}],
"links": [{
"rel": "self",
"href": "http://...",
}, {
"rel": "edit",
"href": "http://..."
}]
}]
}}
This gives you several advantages:
You do not need to reinvent the wheel for specifying links
You can freely use all registered link relation types:
http://www.iana.org/assignments/link-relations/link-relations.xml
Based on your data structure, you can easily use the collection/item semantics of the mentioned format
If need be you can describe input forms as well
As you can see from the example, it has enough information for transforming to HTML (or other formats).
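As a quick sketch of that last point (purely illustrative; "doc" would be the collection document shown above), turning the items into an HTML table is only a few lines:
function collectionToHtml(doc) {
  var rows = (doc.collection.items || []).map(function (item) {
    // one cell per data entry, using the prompt as a human-friendly label
    var cells = item.data.map(function (d) {
      return "<td>" + (d.prompt || d.name) + ": " + d.value + "</td>";
    }).join("");
    return "<tr>" + cells + "</tr>";
  });
  return "<table>" + rows.join("") + "</table>";
}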
The biggest drawback that I see is there is no obvious way for a user
to delete a resource. In my use case, I'm not going to let users
modify or delete resources, so this is not a deal-breaker, but in
general how might you handle that?
For this, read the "edit" link relation specification; it implies that the resource can be deleted.
There are a couple of things you can do, but the first premise is simply that the modern "generic" web browser is a really crummy REST client.
If most of your interaction is guarded and managed by JavaScript, that is, if you write a "rich client" where you rely more on JS-generated requests than on simple links, forms, and the back button, then it can be a better REST client.
If you're stuck with the generic browser experience of forms and links, you can route around the lack of the other HTTP verbs by overloading POST. You lose some guarantees from intermediaries: DELETE is idempotent, POST is not. This has repercussions, but it's not devastating, and you just have to work around it. You can do idempotent things with POST, but intermediaries won't "know" that they are, so they can't assume they're idempotent.
If you end up having to go "POST über alles", you will either restrict your machine clients to the same API, or you offer up parallel services: those used by POST-only clients, and others that have the full gamut of verbs available to them.
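One common workaround for the form-and-link-only browser (a sketch, not part of the original answer) is to tunnel the missing verbs through POST, e.g. with a hidden _method field in the HTML form that the server rewrites before routing:
const express = require("express");
const app = express();
app.use(express.urlencoded({ extended: false }));

// rewrite POST + _method=DELETE (from a hidden form field) into a real DELETE
app.use(function (req, res, next) {
  if (req.method === "POST" && req.body && req.body._method) {
    req.method = req.body._method.toUpperCase();
    delete req.body._method;
  }
  next();
});

app.delete("/widget/:id", function (req, res) {
  // delete the widget; note that intermediaries still saw a POST,
  // so they cannot rely on idempotency for this request
  res.sendStatus(204);
});

app.listen(3000);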
That said, if you choose an XML based hypermedia format, then what you can do is add XSL transforms to the XML payloads. The browsers will run the XSL on the payloads creating as pretty a page as you like (headers, footers, enough JS to choke a horse, etc.), while machines will ignore that aspect of it and focus solely on data as given.
Gives you a "best of both worlds" in that respect.
You can always build a REST API and then build your own, human-friendly web app around it. This is a common practice because you have out-of-the-box functionality and an extendable system for developers.
It is possible to do so simply by using HTML with RDFa, so humans can read the HTML and machines can read the RDFa annotations. You need a REST vocab like Hydra to annotate the links, and other vocabs, like schema.org, to annotate the content.
Another option is to use different media types for different types of clients, like HTML for humans and JSON-LD for machines.