I'm in the midst of finalising the set of cmdlets for a server application. Part of the application includes security principal management and data object management, and "expiration" of both (timed and manual). After the expiration date, login and access for the security principal are refused, and access to the data owned by that principal is optionally prevented (either immediately by deletion, or as part of automatic maintenance by marking it as expired).
From the output of Get-Verb, I cannot see an obvious synonym for Expire, which is the most natural choice of verb for the action being undertaken here. Expiring a security principal expires the principal and may also expire all their stored data, while expiring a data object is restricted to that object.
Set- is already in use for both object types and has a partial overlap in functionality (Expire- forces a date in the past and removes data, while Set- allows future or past dates but does NOT remove the data).
In this fashion Expire combines two operations (Set + Remove), and for data-security reasons we wouldn't want to force separation into the two operations (that's already possible).
For this reason, I also consider that Disable- is not appropriate since it suggests the possibility of reversal with Enable-.
I also think Remove- by itself is inappropriate since there are data records specifically not deleted as part of the operation.
Unpublish seems very close at least for the data, but again it seems that the intent is for Unpublish and Publish to be paired, and in this case it would not be reversible. It also does not make sense when applied to the security principal.
So which (if any) standard verb would you expect to use, if you wanted to expire something?
Looking at the list of approved verbs, two jump out at me:
Deny (dn): Refuses, objects, blocks, or opposes the state of a resource or process.
Revoke (rk): Specifies an action that does not allow access to a resource. This verb is paired with Grant.
I wouldn't worry too much if there is not a paired operation, since that happens with some of the built-in cmdlets. Stop-Computer, for example, has no paired Start-Computer. There is Remove-Variable, but no Add-Variable (there is New-Variable). I think that it is only important if a paired command exists that it is named consistently.
Another option may be to use something like Set-ObjectExpiration/Get-ObjectExpiration, especially if it makes sense to want to query when objects are going to expire.
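For illustration, a minimal sketch of what that could look like; the cmdlet name, parameters, and the comment placeholder are all hypothetical, and the real work would go to your application's API:

function Set-ObjectExpiration {
    [CmdletBinding(SupportsShouldProcess)]
    param(
        [Parameter(Mandatory)][string]$Id,
        [Parameter(Mandatory)][datetime]$ExpirationDate
    )
    if ($PSCmdlet.ShouldProcess($Id, "Set expiration to $ExpirationDate")) {
        # Call into the application's own API here to store the date.
    }
}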
What about Invoke? It could be Invoke-ExpireAppObject or something like that.
There really isn't an approved verb that fits your scenario based on MS recommendations.
Related
I have data that supports being Archived and Unarchived but none of the Approved Verbs for PowerShell Commands for data management or resource lifecycle seem to be a good fit.
Technically, the relevant data items are actually available over RESTful API and are referenced by ID. The Cmdlets I'm building speak to said API.
EDIT: These data items are more accurately described as records with the act of archiving being some form of recategorisation or relabelling of said records as being in an archived state.
Which verbs are most appropriate and what are some of the implementation factors and considerations that should be taken into account when choosing?
New-DataArchive and Remove-DataArchive
Not sure of the particulars of the underlying API, but often there's a POST (new) and a DELETE (remove).
I'm also a big fan of adding [Alias()] attributes when there's not a great match. For example, I was recently working in a git domain where Fork is a well-known concept, so I picked the "closest" approved verb, but added an alias to provide clarity (aliases can be whatever you want):
function Copy-GithubProject {
    [Alias("Fork-GithubProject")]
    [CmdletBinding()]
    param([Parameter(Mandatory)][string]$Name)
    # fork logic for $Name would go here
}
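With the alias in place, both spellings resolve to the same command (the project name here is just an example):

Copy-GithubProject -Name 'octocat/Hello-World'
Fork-GithubProject -Name 'octocat/Hello-World'   # same command, via the alias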
I think this comes down to user experience (of the user using said cmdlets) versus actual implementation. The Approved Verbs for PowerShell Commands article refers mostly to the actual implementation when describing verbs, not so much the user experience of those using the cmdlets. I think choosing PowerShell verbs based on the actual implementation, instead of abstracting that away and focusing on the common-sense user experience, is how the approved PowerShell verb list should be used.
Set (s): Replaces data on an existing resource or creates a resource that contains some data...
Get (g): Specifies an action that retrieves a resource. This verb is paired with Set.
Although the user may be archiving something, they may only actually be changing a label or an archive bit on the resource. In my case, the 'archiving' is actually just a flag on a row in a backend database, meaning it is replacing data on an existing resource, so Set-ArchiveState (or equivalent), as Seth suggested, is the most appropriate here.
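A rough sketch of that shape against a REST backend; the endpoint, field name, and parameters below are placeholders for whatever the real API exposes:

function Set-ArchiveState {
    [CmdletBinding()]
    param(
        [Parameter(Mandatory)][string]$Id,
        [Parameter(Mandatory)][bool]$Archived
    )
    # PATCH just the archive flag on the record, since archiving is
    # modelled as replacing data on an existing resource.
    $body = @{ archived = $Archived } | ConvertTo-Json
    Invoke-RestMethod -Method Patch -Uri "https://example.com/api/records/$Id" `
        -ContentType 'application/json' -Body $body
}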
New vs. Set
Use the New verb to create a new resource. Use the Set verb to modify an existing resource, optionally creating it if it does not exist, such as the Set-Variable cmdlet.
...
New (n): Creates a resource. (The Set verb can also be used when creating a resource that includes data, such as the Set-Variable cmdlet.)
I think New would only be applicable if you are creating a new resource based off the old one, with the new resource representing an archived copy. In my use case, archival of a resource is represented by a flag; I am primarily changing data on an existing resource, thus New isn't suitable here.
Publish (pb): Makes a resource available to others. This verb is paired with Unpublish.
Unpublish (ub): Makes a resource unavailable to others. This verb is paired with Publish.
There is an argument to be made that, if Archiving/Unarchiving restricts availability of the resource, Publish/Unpublish would be appropriate, but I think this negatively impacts the user experience even more than Set/Get does, by using terminology in an uncommon way.
Compress (cm): Compacts the data of a resource. Pairs with Expand.
Expand (en): Restores the data of a resource that has been compressed to its original state. This verb is paired with Compress.
This is quite implementation specific and I think would only be suitable if the main purpose of the Archive/Unarchive action is for data compression and not for resource lifecycle management.
The OData V4 spec says that actions MAY have observable side effects and should be invoked using HTTP POST. But we do have scenarios where we need to use actions that just modify some status.
For example:
1. You might want to mark the status of a document, identified by an id, as locked.
Endpoint - .../Documents({id})/lock()
Since I am doing a partial update here, in my opinion PATCH is more suitable.
2. You might want to offer two ways of deleting a document:
a) Just hide. Endpoint - .../Documents({id}) with HTTP DELETE (no disputes)
b) Delete permanently. Endpoint - .../Documents({id})/permanentDelete() as an OData action. In my opinion, HTTP DELETE would be more appropriate here than HTTP POST.
What is the recommended way to do this from Odata standpoint? Any help here is much appreciated.
Below is the information from SPEC.
SPEC
11.5.4 Actions
Actions are operations exposed by an OData service that MAY have side effects when invoked. Actions MAY return data but MUST NOT be further composed with additional path segments.
11.5.4.1 Invoking an Action
To invoke an action bound to a resource, the client issues a POST request to an action URL. An action URL may be obtained from a previously returned entity representation or constructed by appending the namespace- or alias-qualified action name to a URL that identifies a resource whose type is the same as, or derives from, the type of the binding parameter of the action. The value for the binding parameter is the value of the resource identified by the URL prior to appending the action name, and any non-binding parameter values are passed in the request body according to the particular format.
From an OData standpoint, you always have to invoke an action with a POST; with any other verb, it won't work.
Both of your examples are things that I personally don't think are well suited to an action, as there are already ways to do these things in OData, using the verbs that you mention. Updating a property is supported with a PATCH, and deleting an object is supported with a DELETE. You mention that you have two different types of delete operations; this is more difficult, but you could use a custom header to distinguish between them.
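For example, a client could signal the permanent variant with a custom header; the header name below is made up, any agreed-upon name would do:

# Soft delete: the plain DELETE everyone expects.
Invoke-RestMethod -Method Delete -Uri 'https://example.com/odata/Documents(42)'
# Permanent delete: same verb, distinguished by a custom header.
Invoke-RestMethod -Method Delete -Uri 'https://example.com/odata/Documents(42)' `
    -Headers @{ 'X-Delete-Mode' = 'permanent' }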
An action tends to be something that doesn't fit into the normal CRUD operations so it isn't always clear which HTTP verb should be used for this.
In your examples, it does feel right to want to use DELETE and PATCH. However, the problem comes because we need a standard to follow: bear in mind that OData actions are all discoverable through the metadata, so they could be consumed by a client with no knowledge of what the actions actually do, and in that case we need to have something defined. Since we know that actions affect the server in some way, POST seems to me the least bad option that is consistent for all actions.
At the end of the day, as the author, your API is yours to command. Just as we shouldn't use properties in C# class design that cause side effects in other properties, that doesn't mean we can't, and it can be pretty common.
From an OData standpoint, PATCH is analogous to using property accessors, while actions (and POST) are analogous to calling methods.
Making the decision between PATCH or POST to effect change is therefore the same design decision as choosing between property mutators (making them writable) and forcing the caller to set their values through related methods.
You might want to mark the status of a document, identified by an id, as locked.
Endpoint - .../Documents({id})/lock()
Since I am doing a partial update here, in my opinion PATCH is more suitable.
If your resource has a property called Locked, and your current lock() Action only sets the Locked property to true, then you could simply use PATCH to update just that Locked field.
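A sketch of that plain-PATCH alternative; the service URL is hypothetical, and the payload assumes the entity exposes a Locked property:

$body = @{ Locked = $true } | ConvertTo-Json
Invoke-RestMethod -Method Patch -Uri 'https://example.com/odata/Documents(42)' `
    -ContentType 'application/json' -Body $body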
Some common reasons to use an Action specifically for a lock process:
You want to execute specific logic when lock() is called, and you want to maintain this logic in its own method rather than having complex conditional logic inside your PATCH handler.
You don't want the reverse logic, unlock(), to be available to everyone: when a resource is locked, presumably only the user who locked it can release the lock, or other conditions need to be satisfied.
As with the first point, this sort of logic is generally easier to maintain in its own method, and therefore an Action.
You want to use security attributes to restrict access to lock() to certain security groups, yet you want all users to have read access to the Locked state field.
When a resource is locked other fields are also set, like LockedBy and perhaps DateLocked.
While all of that logic could be managed in your single PATCH endpoint logic for the controller, I can't stress enough how this can easily make your solution unmanageable over time or as the complexity of your resource increases.
More importantly: the documented and accepted convention is that PATCH will NOT have side effects, and as such there is no need to execute a GET after a PATCH because the same change on the client has been accepted on the server. Conversely, because a POST MAY have side effects, it is reasonable for the client to expect to execute a GET to get all the related changes if the response from the POST does not contain the updated resource.
In the case of Delete vs PermanentlyDelete now things become personal / opinionated...
By convention, the expectation of DELETE is two fold:
After a delete the resource should no longer appear in collection query results from a GET
After a delete the resource should no longer appear in item requests via GET
From a server point of view, if my resource has a soft delete, that simply sets a flag, and a permanent delete that deletes the record from the underlying store, then by convention I would use an Action called Delete() for the soft delete, and the permanent delete should use the DELETE http verb.
The general reasoning for this is the same as for the discussion about lock().
However, if the intention of soft vs. permanent delete is to intercept the client's standard delete workflow, such that the permanent delete concept is hidden or otherwise abstracted away from the usual workflow (the user has to go somewhere different, like the recycle bin, to view deleted records and restore or permanently delete them), then use the HTTP verb DELETE for the soft delete, and make a specific, perhaps collection-bound, Action to accept requests for permanent deletion.
This action may have to be unbound, or bound to the collection, depending on how you implement your filtering: if we have deleted record 'xyz' such that GET: ~/document('xyz') returns NOT-FOUND, then we can't really expect POST: ~/document('xyz')/delete() to execute...
We all know the 'standard' way of deleting a single item via REST is to send a single DELETE request to a URI example.com/Items/666. Grand, let's move on to deleting many at once. As we do not require atomic deletion (a true transaction, i.e. all or nothing), we could just tell the client 'tough luck, make many requests', but that's not very nice, is it? So we need a way to allow a client to request many 'Items' be deleted at once.
From my understanding, the 'typical' solution to this problem is a 'two step' approach. First the client POSTs a list of item IDs and is returned a URI such as example.com/Items/Collection/1. Once that collection is created, they call DELETE on it.
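Sketched as HTTP calls (the endpoints and the response shape are assumptions based on the description above):

# Step 1: POST the list of IDs; the server creates a collection resource.
$ids = @{ ids = 4, 8, 15, 16 } | ConvertTo-Json
$collection = Invoke-RestMethod -Method Post -Uri 'https://example.com/Items/Collection' `
    -ContentType 'application/json' -Body $ids
# Step 2: DELETE the collection resource the server handed back.
Invoke-RestMethod -Method Delete -Uri "https://example.com/Items/Collection/$($collection.id)"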
Now, I see that this works just fine, except that to me it is a bad solution. Firstly, you are forcing the client to make two requests to accommodate the server. Secondly, 'I thought DELETE was supposed to delete an Item?' Shouldn't calling DELETE on this URI effectively cancel the transaction (it's not a true transaction though)? How would we even cancel it? It really would be better if there were some form of 'EXECUTE' action, but I can't rock the boat that much. It also forces the server to consider 'the JSON that was POSTed looks more like a request to modify these Items, but the request was DELETE... so I guess I will delete them'. This approach also starts to impose a sort of state on the client/server; not a true state, I will admit, but it is sort of.
In my opinion, a better solution would be to simply call DELETE on example.com/Items (or maybe example.com/Items/Collection to imply this is a multiple delete) and pass JSON data containing a list of the IDs you wish to delete. As far as I can see, this solves all the problems the first method had: it is easier to use as a client, reduces the work the server has to do, is truly stateless, and is more semantic.
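The same operation as a single request might look like this (note that a body on DELETE is legal HTTP but not universally honoured by servers and intermediaries):

$ids = @{ ids = 4, 8, 15, 16 } | ConvertTo-Json
Invoke-RestMethod -Method Delete -Uri 'https://example.com/Items' `
    -ContentType 'application/json' -Body $ids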
I would really appreciate feedback on this: am I missing something about REST that makes my solution to this problem unrealistic? I would also appreciate links to articles, especially if they compare these two methods; I am aware this is not normally approved of on SO. I need to be able to disprove that only the first method is truly RESTful and prove that the second approach is a viable solution. Of course, if I am barking up the wrong tree, do tell me.
I have spent the last week or so reading a fair bit on REST, and to the best of my understanding, it would be wrong to describe either of these solutions as 'RESTful'; rather you should say that 'neither solution goes against what REST means'.
The short answer is simply that REST, as laid out in Roy Fielding's dissertation (see chapter 5), does not cover the topic of how to go about deleting resources, singular or multiple, in a REST manner. That's right, there is no 'correct RESTful way to delete a resource'... well, not quite.
REST itself does not define how to delete a resource, but it does define that whatever protocol you are using (remember that REST is not a protocol) will dictate how to perform these actions. The protocol will usually be HTTP; 'usually' being the key word, as Fielding points out that REST is not synonymous with HTTP.
So we look to HTTP to say which method is 'right'. Sadly, as far as HTTP is concerned, both approaches are viable. Yes, 'viable'. HTTP will allow a client to send a POST request with a payload (to create a collection resource) and then call a DELETE method on this new collection to delete the resources; it will also allow you to send the data within the payload of a single DELETE method to delete the list of resources. HTTP is simply the medium by which you send requests to the server; it is up to the server to respond appropriately. To me, the HTTP protocol seems rather open to interpretation in places, but it does lay down fairly clear guidelines for what actions mean, how they should be dealt with, and what response should be given; it's just that it is a 'you should do this' rather than a 'you must do this', but perhaps I am being a little pedantic on the wording.
Some people would argue that the 'two stage' approach cannot possibly be 'REST', as the server has to store a 'state' for the client to perform the second action. This is simply a misunderstanding somewhere. It must be understood that neither the client nor the server stores any 'state' information about the other between the list being POSTed and then subsequently being DELETEd. Yes, the list must have been created before it can be deleted, but the server does not remember that it was client alpha that made this list (such an approach would allow the client to simply call 'DELETE' as the next request and have the server remember to use that list, which would not be stateless at all); as such, the client must tell the server to DELETE that specific list, the list it was given a specific URI for. If the client attempted to DELETE a collection list that did not already exist, it would simply be told 'the resource cannot be found' (the classic 404 error, most likely). If you wish to claim that this two step approach does maintain a state, you must also claim that simply GETting a URI requires a state, as the URI must first exist. To claim that there is some 'state' persisting here is to misunderstand what 'state' means. And as further 'proof' that such a two stage approach is indeed stateless: you could quite happily have client alpha POST the list and later have client beta (without having had any communication with the other client) call DELETE on the list resource.
I think it stands rather self-evident that the second option, of just sending the list in the payload of the DELETE request, is stateless: all the information required to complete the request is contained within that one request.
It could be argued, though, that the DELETE action should only be called on a 'tangible' resource, but in doing so you are blatantly ignoring the REpresentational part of REST; it's in the name! It is the representational aspect that 'permits' URIs such as http://example.com/myService/timeNow, a URI that when 'got' will return, dynamically, the current time, without having to load some file or read from some database. It is a key concept that URIs do not map directly to some 'tangible' piece of data.
There is, however, one aspect of that stateless nature that must be questioned. Describing the 'client-stateless-server' in section 5.1.3, Fielding states:
We next add a constraint to the client-server interaction: communication must
be stateless in nature, as in the client-stateless-server (CSS) style of
Section 3.4.3 (Figure 5-3), such that each request from client to server must
contain all of the information necessary to understand the request, and
cannot take advantage of any stored context on the server. Session state is
therefore kept entirely on the client.
The key part here, in my eyes, is "cannot take advantage of any stored context on the server". Now, I will grant you that 'context' is somewhat open to interpretation. But I find it hard to see how storing a list (either in memory or on disk) that gives the subsequent DELETE its actual meaning would not violate this 'rule'. Without this 'list context' the DELETE operation makes no sense. As such, I can only conclude that a two step approach to an action such as deleting multiple resources cannot and should not be considered 'RESTful'.
I also begrudge somewhat the effort that has had to be put into finding arguments either way here. The Internet at large seems to have been swept up with the idea that the two step approach is the 'RESTful' way of doing such actions, with the reasoning 'it is the RESTful way to do it'. If you step back for a moment from what everybody else is doing, you will see that either approach requires sending the same list, so it can be ignored from the argument. Both approaches are 'representational' and 'stateless'. The only real difference is that for some reason one approach has decided to require two requests. These two requests then come with follow-up questions, such as 'how long do you keep that data for?' and 'how does a client tell a server that it no longer wants that collection, but wishes to keep the actual resources it refers to?'
So I am, to a point, answering my question with the same question, 'Why would you even consider a two step approach?'
IMO:
HTTP DELETE on an existing collection, to delete all of its members, seems fine. Creating the collection just to delete all of its members sounds odd. As you yourself suggest, just pass the IDs of the to-be-deleted items using JSON (or any other payload format). I do think the server should try to make the multiple deletes an internal transaction, though.
I would argue that HTTP already provides a method of deleting multiple items in the form of persistent connections and pipelining. At the HTTP protocol level it is absolutely fine to request idempotent methods like DELETE in a pipelined way - that is, send all the DELETE requests at once on a single connection and wait for all the responses.
This may be problematic for an AJAX client running in a browser since few browsers have pipelining support enabled by default. This is not the fault of HTTP, though, it is the fault of those specific clients.
I have been using POST in a REST API to create objects. Every once in a while, the server will create the object, but the client will be disconnected before it receives the 201 Created response. The client only sees a failed POST request, and tries again later, and the server happily creates a duplicate object...
Others must have had this problem, right? But I google around, and everyone just seems to ignore it.
I have 2 solutions:
A) Use PUT instead, and create the (GU)ID on the client.
B) Add a GUID to all objects created on the client, and have the server enforce their UNIQUE-ness.
A doesn't match existing frameworks very well, and B feels like a hack. How do other people solve this in the real world?
Edit:
With Backbone.js, you can set a GUID as the id when you create an object on the client. When it is saved, Backbone will make a PUT request. Make your REST backend handle PUT to non-existing ids, and you're set.
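The same idea outside Backbone, sketched with hypothetical endpoints: the client mints the id, so a retry after a dropped response is a harmless idempotent replay:

$id = [guid]::NewGuid().ToString()
$body = @{ name = 'widget' } | ConvertTo-Json
# PUT to the id we chose; repeating this exact request cannot create a duplicate.
Invoke-RestMethod -Method Put -Uri "https://example.com/objects/$id" `
    -ContentType 'application/json' -Body $body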
Another solution that's been proposed for this is POST Once Exactly (POE), in which the server generates single-use POST URIs that, when used more than once, will cause the server to return a 405 response.
The downsides are that 1) the POE draft was allowed to expire without any further progress on standardization, and thus 2) implementing it requires changes to clients to make use of the new POE headers, and extra work by servers to implement the POE semantics.
By googling you can find a few APIs that are using it though.
Another idea I had for solving this problem is that of a conditional POST, which I described and asked for feedback on here.
There seems to be no consensus on the best way to prevent duplicate resource creation in cases where the client cannot generate the unique URI (so PUT is not an option) and POST is therefore needed.
I always use B -- detection of dups due to whatever problem belongs on the server side.
Detection of duplicates is a kludge, and can get very complicated. Genuinely distinct but similar requests can arrive at the same time, perhaps because a network connection was restored. And repeat requests can arrive hours or days apart if a network connection drops out.
All of the discussion of identifiers in the other answers has the goal of returning an error in response to duplicate requests, but this will normally just incite a client to get or generate a new id and try again.
A simple and robust pattern to solve this problem is as follows: server applications should store all responses to unsafe requests; then, if they see a duplicate request, they can repeat the previous response and do nothing else. Do this for all unsafe requests and you will solve a bunch of thorny problems: repeat DELETE requests will get the original confirmation, not a 404 error; repeat POSTs do not create duplicates; repeated updates do not overwrite subsequent changes; etc.
"Duplicate" is determined by an application-level id (that serves just to identify the action, not the underlying resource). This can be either a client-generated GUID or a server-generated sequence number. In this second case, a request-response should be dedicated just to exchanging the id. I like this solution because the dedicated step makes clients think they're getting something precious that they need to look after. If they can generate their own identifiers, they're more likely to put this line inside the loop and every bloody request will have a new id.
Using this scheme, all POSTs are empty, and POST is used only for retrieving an action identifier. All PUTs and DELETEs are fully idempotent: successive requests get the same (stored and replayed) response and cause nothing further to happen. The nicest thing about this pattern is its Kung-Fu (Panda) quality. It takes a weakness: the propensity for clients to repeat a request any time they get an unexpected response, and turns it into a force :-)
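A minimal in-memory sketch of that replay pattern; the names are illustrative, and a real server would persist the response log rather than keep it in a hashtable:

$script:ResponseLog = @{}

function Invoke-UnsafeRequest {
    param([string]$ActionId, [scriptblock]$Handler)
    if ($script:ResponseLog.ContainsKey($ActionId)) {
        # Duplicate request: repeat the stored response, do nothing else.
        return $script:ResponseLog[$ActionId]
    }
    $response = & $Handler                    # first time: do the real work
    $script:ResponseLog[$ActionId] = $response
    $response
}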
I have a little Google doc here if anyone cares.
You could try a two-step approach. You request an object to be created, which returns a token. Then, in a second request, ask for its status using the token. Until the status is requested using the token, the object is left in a "staged" state.
If the client disconnects after the first request, they won't have the token and the object stays "staged" indefinitely or until you remove it with another process.
If the first request succeeds, you have a valid token and you can grab the created object as many times as you want without it recreating anything.
There's no reason why the token can't be the ID of the object in the data store. You can create the object during the first request. The second request really just updates the "staged" field.
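As HTTP calls, the flow might look like this (the endpoints and response shape are assumptions):

# Request 1: create the object; the server stores it as 'staged'.
$staged = Invoke-RestMethod -Method Post -Uri 'https://example.com/objects' `
    -ContentType 'application/json' -Body (@{ name = 'widget' } | ConvertTo-Json)
# Request 2: ask for its status using the token; this clears the 'staged' flag.
Invoke-RestMethod -Method Get -Uri "https://example.com/objects/$($staged.token)/status"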
Server-issued Identifiers
If you are dealing with the case where it is the server that issues the identifiers, create the object in a temporary, staged state. (This is an inherently non-idempotent operation, so it should be done with POST.) The client then has to do a further operation on it to transfer it from the staged state into the active/preserved state (which might be a PUT of a property of the resource, or a suitable POST to the resource).
Each client ought to be able to GET a list of their resources in the staged state somehow (maybe mixed with other resources) and ought to be able to DELETE resources they've created if they're still just staged. You can also periodically delete staged resources that have been inactive for some time.
You do not need to reveal one client's staged resources to any other client; they need exist globally only after the confirmatory step.
Client-issued Identifiers
The alternative is for the client to issue the identifiers. This is mainly useful where you are modeling something like a filestore, as the names of files are typically significant to user code. In this case, you can use PUT to do the creation of the resource as you can do it all idempotently.
The down-side of this is that clients are able to create IDs, and so you have no control at all over what IDs they use.
There is another variation of this problem. Having the client generate a unique id means we are asking the customer to solve this problem for us. Consider an environment with publicly exposed APIs and hundreds of clients integrating with them. Practically, we have no control over the client code or the correctness of its implementation of uniqueness. Hence, it would probably be better to have the intelligence to recognise whether a request is a duplicate. One simple approach here is to calculate and store a checksum of every request based on attributes from the user input, define some time threshold (x minutes), and compare every new request from the same client against those received in the past x minutes. If the checksum matches, it could be a duplicate request, so add some challenge mechanism for the client to resolve this.
If a client makes two different requests with the same parameters within x minutes, it might be worth ensuring that this is intentional, even if the requests come with unique request ids.
This approach may not be suitable for every use case; however, I think it will be useful for cases where the business impact of executing the second call is high and can potentially cost the customer. Consider a payment processing engine where an intermediate layer ends up retrying failed requests, or a customer double-clicks, resulting in the client layer submitting two requests.
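An illustrative sketch of that checksum check; the window length, the in-memory storage, and the way inputs are canonicalised are all application-level choices, not a fixed recipe:

$script:SeenRequests = @{}

function Test-DuplicateRequest {
    param([string]$ClientId, [hashtable]$Params, [int]$WindowMinutes = 5)
    # Canonicalise the input attributes so equivalent requests hash identically.
    $canonical = ($Params.GetEnumerator() | Sort-Object Key |
        ForEach-Object { "$($_.Key)=$($_.Value)" }) -join '&'
    $sha = [System.Security.Cryptography.SHA256]::Create()
    $bytes = [System.Text.Encoding]::UTF8.GetBytes("$ClientId|$canonical")
    $hash = [System.BitConverter]::ToString($sha.ComputeHash($bytes))
    $cutoff = (Get-Date).AddMinutes(-$WindowMinutes)
    if ($script:SeenRequests[$hash] -and $script:SeenRequests[$hash] -gt $cutoff) {
        return $true                  # likely duplicate: challenge the client
    }
    $script:SeenRequests[$hash] = Get-Date
    return $false
}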
Design
Automatic (without the need to maintain a manual black list)
Memory optimized
Disk optimized
Algorithm [solution 1]
1. REST request arrives with a UUID
2. Web server checks if the UUID is in the Memory Cache blacklist table (if yes, answer 409)
3. Server writes the request to the DB (if it was not filtered by ETS)
4. DB checks whether the UUID is repeated before writing
5. If yes, answer 409 to the server, and add the UUID to the Memory Cache and disk blacklists
6. If not repeated, write to the DB and answer 200
Algorithm [solution 2]
1. REST request arrives with a UUID
2. Save the UUID in the Memory Cache table (expires after 30 days)
3. Web server checks if the UUID is in the Memory Cache blacklist table [return HTTP 409]
4. Server writes the request to the DB [return HTTP 200]
In solution 2, the blacklist used as the duplicate threshold is held ONLY in the Memory Cache, so the DB is never checked for duplicates. The definition of 'duplicate' is "any request that comes in within a given period of time". We also replicate the Memory Cache table on disk, so we can fill it before starting up the server.
In solution 1, there will never be a duplicate, because we always check the disk ONLY once before writing, and if it's a duplicate, the next roundtrips will be handled by the Memory Cache. This solution is better for BigQuery, because requests there are not idempotent, but it's also less optimized.
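A stripped-down sketch of solution 2's request path; the persistence call is a placeholder, and the 30-day eviction and disk replication are left out:

$script:UuidBlacklist = @{}

function Receive-Request {
    param([string]$Uuid, [object]$Payload)
    if ($script:UuidBlacklist.ContainsKey($Uuid)) {
        return 409                            # duplicate within the window
    }
    $script:UuidBlacklist[$Uuid] = Get-Date   # blacklist first, then persist
    Write-RequestToDb $Payload                # hypothetical DB write
    return 200
}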
HTTP response code for POST when resource already exists
In our system we have accounts which contain items. An item is always associated with a single account but also has a globally unique id in the system. Sometimes it is desirable to work with an item when only its id is known.
Is it incorrect to allow access to a subordinate resource (the item) from outside its owner (the account)? In other words, is it wrong to have two URIs to the same resource? This is a little tricky to explain, so here is an example:
POST /inventory/accountId
#Request Body contains new item
#Response body contains new item's id
GET|PUT|DELETE /inventory/accountId/guid #obviously works and makes sense
GET|PUT|DELETE /inventory/guid #does this make sense?
Perhaps I should rethink my resource layout and not use accounts to create items, but instead take the account as a query string parameter or a field on the item?
POST /inventory
# Request body contains item w/ account name set on it
GET|POST|DELETE /inventory/uuid #makes sense
GET|POST|DELETE /inventory/accountId/uuid #not allowed
I think having two URIs point to the same item is asking for trouble. In my experience, these sorts of things lead to craziness as you scale out (caching, multiple nodes in a cluster going out of sync, and so on). As long as the item's ID is indeed globally unique, there's no reason not to simply refer to it as /inventory/guid.
POST /inventory/accountId
GET|PUT|DELETE /inventory/accountId/guid #obviously works and makes sense
GET|PUT|DELETE /inventory/guid #does this make sense?
It makes the most sense when /inventory/guid redirects to /inventory/accountId/guid (or, I'd argue, vice versa). Having a single canonical entity, with multiple URIs redirecting to it, keeps your caching scheme the most straightforward. If the two URIs instead return the same data, then a user is inevitably going to PUT a new representation to one and then be confused when a GET of the other returns an old copy, because the cache was only invalidated for the former. A similar problem can occur for subsequent GETs on the two. Redirects keep that a lot cleaner (not perfectly synchronous, but cleaner).
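A bare-bones sketch of such a redirect using .NET's HttpListener from PowerShell; the URL layout is from the question above, while the lookup function is hypothetical:

$listener = [System.Net.HttpListener]::new()
$listener.Prefixes.Add('http://localhost:8080/')
$listener.Start()
while ($listener.IsListening) {
    $ctx = $listener.GetContext()
    # Redirect /inventory/<guid> to the canonical /inventory/<accountId>/<guid>.
    if ($ctx.Request.Url.AbsolutePath -match '^/inventory/(?<guid>[^/]+)$') {
        $accountId = Get-AccountIdForItem $Matches.guid   # hypothetical lookup
        $ctx.Response.StatusCode = 301
        $ctx.Response.AddHeader('Location', "/inventory/$accountId/$($Matches.guid)")
    }
    $ctx.Response.Close()
}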
Whether to make items subordinate to accounts depends on whether items can exist without an account. If the data of an item is a subset of the data of an account, then go ahead and make it subordinate. If you find that an account is just one kind of container, or that some items exist without any container, then promote them to the top level.
In other words, is it wrong to have 2 URI's to the same resource?
No. It is not wrong to have multiple URIs identifying the same resource. I don't see anything wrong with your first approach either. Remember that URIs are unique identifiers and should be opaque to clients. If they uniquely identify a resource, then you don't have to worry too much about making your URLs look pretty. I am not saying resource modeling is not important, but IMO we shouldn't spend too much time on it. If your business needs mean that you have the guid directly under inventory and also under individual accounts, so be it.
Are you concerned about this because of a potential security hole in letting data be available to unauthorized users? Or is your concern purely design driven?
If security is not your concern, I agree that it is perfectly fine to have two URIs pointing to the same resource.