REST: update a resource with different fields requiring different user permissions

I have an endpoint /groups
I can create a group by POSTing some info to /groups
A single group can be read by /groups/{id}
I can update some fields in the group by POSTing to /group/{id}
HOWEVER, different fields need to be updated by users with different permissions. For instance, a group might have the following structure:
{
  "id": 1,
  "name": "some name",
  "members": [
    {
      "user_id": 456,
      "known_as": "Name 1",
      "user": { /* some user object */ },
      "status": "accepted",
      "role": "admin",
      "shared": "something"
    },
    {
      "user_id": 999227,
      "known_as": "Name 1",
      "user": { /* some user object */ },
      "status": "accepted",
      "role": "basic",
      "shared": "something"
    },
    {
      "user_id": 9883,
      "known_as": "Name 1",
      "user": { /* some user object */ },
      "status": "requested",
      "role": "basic",
      "shared": "something"
    }
  ],
  "link": "https://some-link"
}
As an example, I have the following three requirements for the /group/{id}/members/{id} endpoint:
I want only the user to be able to update their own known_as field.
I want only group admins to be able to update each member's role and status fields.
I want both the user and the admin to be able to update the shared field.
My options are these:
Option 1: Allow all updates to be done by POSTing to /group/{id}/members/{id} with a subset of the fields for a member, and return an unauthorized error if the caller tries to update a field they aren't allowed to update.
Option 2: Break each operation out into, say, /group/{id}/members/{id}/role, /group/{id}/members/{id}/shared and /group/{id}/members/{id}/status. The problem with this is that I don't want to have to make lots of requests to update all the fields (I imagine there will end up being quite a lot of them).
So, just for clarification, my question is: is it considered proper REST to go with option 1, where I can POST updates to an endpoint that may fail if I try to change a field I'm not allowed to?
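To make the two options concrete, the requests might look roughly like this (illustrative payloads and ids, not from the original post):

Option 1 - one request carrying a subset of fields:
POST /group/1/members/9883
{ "status": "accepted", "role": "basic", "shared": "something else" }

Option 2 - one request per field:
POST /group/1/members/9883/status  with body  { "status": "accepted" }
POST /group/1/members/9883/role    with body  { "role": "basic" }
POST /group/1/members/9883/shared  with body  { "shared": "something else" }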

In my opinion, option 1 is much better than option 2.
As you said, option 2 is a waste of bandwidth.
More importantly, with option 1 you can easily implement an atomic update (update "all-or-nothing"). It should either complete successfully or fail entirely. There should never be a partial update.
With option 2, it's much more likely that the update ends up partially applied: one request may succeed while another is rejected, even though the two requests logically form a single operation.
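For illustration, here is a minimal sketch of option 1 in TypeScript/Express (my own example, not from the original answer): a single endpoint accepts a partial member update and rejects the whole request if it touches any field the caller may not change. The auth middleware, admin check, and storage layer are assumed.

import express from "express";

const app = express();
app.use(express.json());

type MemberUpdate = Partial<{ known_as: string; role: string; status: string; shared: string }>;

// Which fields may this caller change on this member? Mirrors the three rules above.
function allowedFields(callerId: number, targetUserId: number, callerIsGroupAdmin: boolean): Set<string> {
  const allowed = new Set<string>(["shared"]);             // both the user and the admin
  if (callerId === targetUserId) allowed.add("known_as");  // only the member themselves
  if (callerIsGroupAdmin) {                                // only group admins
    allowed.add("role");
    allowed.add("status");
  }
  return allowed;
}

app.post("/group/:groupId/members/:userId", (req, res) => {
  const update = req.body as MemberUpdate;
  const caller = (req as any).user;                        // assumed: set by auth middleware
  const targetUserId = Number(req.params.userId);

  const allowed = allowedFields(caller.id, targetUserId, caller.isGroupAdmin);
  const forbidden = Object.keys(update).filter((field) => !allowed.has(field));
  if (forbidden.length > 0) {
    // All-or-nothing: reject the whole update, apply no partial changes.
    return res.status(403).json({ error: `not allowed to update: ${forbidden.join(", ")}` });
  }

  // Apply every permitted field in a single atomic write (storage layer omitted here).
  return res.status(200).json({ updated: Object.keys(update) });
});

Returning 403 Forbidden (rather than 401 Unauthorized) is the usual choice when the caller is authenticated but lacks permission for the specific field.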

Related

What is the best model to save data in Elasticsearch?

I have a Rails application that uses Elasticsearch as its search engine. The app collects data from mobile applications (it could collect from any kind of mobile app). A mobile app sends two types of data: user profile details and user action details. My app admins can search over this data with multiple conditions and operators and fetch specific results, which are user profile details. After that, the admins can communicate with those profiles, for example by email, SMS, or online chat. I have two options for saving the user data. The first option is to save user profile details and user action details in separate documents, with this structure. Profile document:
POST profilee-2022-06-09/_doc
{
  "profile": {
    "app_id": "abbccddeeff",
    "profile_id": "2faae1d6-5875-4b36-b119-74a14589c841",
    "whatsapp_number": "whatsapp:+61478421940",
    "phone": "+61478421940",
    "email": "user#mail.com",
    "first_name": "john",
    "last_name": "doe"
  }
}
User action details:
POST events_app_id_2022-05-17/_doc
{
  "app_id": "9vlgwrr6rg",
  "event": "Email_Sign_Up",
  "profile_id": "2faae1d6-5875-4b36-b119-74a14589c840",
  "media": "x1z1",
  "date_time": "2022-05-17T11:48:02.511Z",
  "device_id": "2faae1d6-5875-4b36-b119-74a14589c840",
  "lib": "android",
  "lib_version": "1.0.0",
  "os": "Android",
  "os_version": "12",
  "manufacturer": "Google",
  "brand": "google",
  "model": "sdk_gphone64_arm64",
  "google_play_services": "available",
  "screen_dpi": 440,
  "screen_height": 2296,
  "screen_width": 1080,
  "app_version_string": "1.0",
  "app_build_number": 1,
  "has_nfc": false,
  "has_telephone": true,
  "carrier": "T-Mobile",
  "wifi": true,
  "bluetooth_version": "ble",
  "session_id": "b1ad31ab-d440-435f-ac12-3d03c30ac44f",
  "insert_id": "1e285b51-abcf-46ae-8359-9a9d58970cdf"
}
As I said before, app admins search over these documents to fetch specific profiles and use the results to communicate with them. The problem is that a mobile user can create a profile and then create actions days or months later, so the profile details and the action details are generated on different days. If admins want to fetch a specific result with a complex query over this data, my application needs at least two Elasticsearch queries. That doesn't work for me, because each query must be saved for later reuse by the admin, so this approach fails my business logic. In addition, some cases would need a join-style query, which, according to the Elasticsearch documentation, is expensive, so that is not an option either. In the second scenario, I decided to save both the user profile and the actions in one document, something like this:
POST profilee-2022-06-09/_doc
{
  "profile": {
    "app_id": "abbccddeeff",
    "profile_id": "urm-2faae1d6-5875-4b36-b119-74a14589c841",
    "whatsapp_number": "whatsapp:+61478421940",
    "phone": "+61478421940",
    "email": "user#mail.com",
    "first_name": "john",
    "last_name": "doe",
    "events": [
      {
        "app_id": "abbccddeeff",
        "event": "sign_in",
        "profile_id": "urm-2faae1d6-5875-4b36-b119-74a14589c841",
        "media": "x1z1",
        "date_time": "2022-06-06T11:52:02.511Z"
      },
      {
        "app_id": "abbccddeeff",
        "event": "course_begin",
        "profile_id": "urm-2faae1d6-5875-4b36-b119-74a14589c841",
        "media": "x1z1",
        "date_time": "2022-06-06T11:56:02.511Z"
      },
      {
        "app_id": "abbccddeeff",
        "event": "payment",
        "profile_id": "urm-2faae1d6-5875-4b36-b119-74a14589c841",
        "media": "x1z1",
        "date_time": "2022-06-06T11:58:02.511Z"
      }
    ]
  }
}
In this case I have to do the same as before: generate a profile index per day and append user actions to it, which means updating continuously every day. Assume I have 100,000 profiles and each one has 50 actions; that is 100,000 * 50 updates per day, which puts a heavy load on my server, so this is also not workable. Could you please help me figure out the best model for saving my data in Elasticsearch, based on this description?
Update: Is Elasticsearch even suitable for my requirements? Would switching to another database such as MongoDB, or adding Hadoop, be more useful in my case?
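For what it's worth, the "at least two queries" issue with the first layout (separate profile and event indices) can be sketched as an application-side join. This is only an illustration in TypeScript, assuming the v8 @elastic/elasticsearch client and the index/field names from the question; the wildcard index patterns, size limits, and node URL are placeholders:

import { Client } from "@elastic/elasticsearch";

const client = new Client({ node: "http://localhost:9200" });

async function profilesWithEvent(eventName: string) {
  // 1) Find matching events and collect their profile_ids.
  const events = await client.search<{ profile_id: string }>({
    index: "events_app_id_*",
    size: 1000,
    query: { term: { event: eventName } },
    _source: ["profile_id"],
  });
  const ids = events.hits.hits
    .map((hit) => hit._source?.profile_id)
    .filter((id): id is string => Boolean(id));

  // 2) Fetch the corresponding profiles with a terms query (the "join" done in the app).
  const profiles = await client.search({
    index: "profilee-*",
    size: ids.length,
    query: { terms: { "profile.profile_id": ids } },
  });
  return profiles.hits.hits.map((hit) => hit._source);
}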

Is creating a multi-entity topic better than having all entities in separate topics?

I have looked into this article and still have some confusion about merging separate topics into one comprehensive topic: https://www.confluent.io/blog/put-several-event-types-kafka-topic
So I have two entities, Account & Client, as given below:
Account:
"Account": {
  "AccountId": "JKB123456",
  "ClientId": "1234567",
  "Type": "Savings",
  "Currency": "USD"
}
Client:
"Client": {
  "ClientId": "1234567",
  "Name": "John Doe",
  "PhoneNo": "777777777"
}
A Client can have one or more Accounts associated with it. The events that will operate on them are pretty basic, i.e. Create and Update.
This results in four use cases:
Create Client - A client entity with multiple Account entities is provided. If the client and all of its accounts are created, the use case is considered successfully executed.
Update Client - Only the client has some of its details updated, so a client object with the new values is provided. Account is not part of this.
Create Account - An account entity is provided to be created and associated with an existing client.
Update Account - An existing account's fields can be updated in this use case.
A Create Client JSON structure will look like this:
{
  "Client": {
    "ClientId": "1234567",
    "Name": "John Doe",
    "PhoneNo": "777777777",
    "Accounts": [
      {
        "AccountId": "JKB123456",
        "ClientId": "1234567",
        "Type": "Savings",
        "Currency": "USD"
      },
      {
        "AccountId": "HKB123456",
        "ClientId": "1234567",
        "Type": "Savings",
        "Currency": "EUR"
      }
    ]
  }
}
So should I have a single Client topic schema with the list of accounts as part of it? I believe I could then publish both create and update events for the client on the same topic.
But would it be possible to publish the Account create/update events on the same topic as well? I think that if I key the messages properly (probably by ClientId), I can also have all events for the same client land on the same partition.
Do you think this is a good approach?
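To picture the keying idea in the previous paragraph, here is a minimal TypeScript sketch (assuming kafkajs; the topic name, event envelope, and broker address are my own placeholders) that publishes both Client and Account events to a single topic keyed by ClientId, so all events for one client land on the same partition:

import { Kafka } from "kafkajs";

const kafka = new Kafka({ clientId: "client-service", brokers: ["localhost:9092"] });
const producer = kafka.producer();

// Hypothetical envelope distinguishing the event types sharing one topic.
type ClientTopicEvent =
  | { type: "ClientCreated" | "ClientUpdated"; Client: object }
  | { type: "AccountCreated" | "AccountUpdated"; Account: object };

async function publish(clientId: string, event: ClientTopicEvent) {
  await producer.send({
    topic: "client-events", // hypothetical multi-entity topic
    messages: [{ key: clientId, value: JSON.stringify(event) }], // key => same partition per client
  });
}

async function main() {
  await producer.connect();
  await publish("1234567", { type: "ClientCreated", Client: { ClientId: "1234567", Name: "John Doe" } });
  await publish("1234567", { type: "AccountCreated", Account: { AccountId: "JKB123456", ClientId: "1234567" } });
  await producer.disconnect();
}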

How to Create a Ticket That Belongs To a Process OTRS 5 - REST

Context
I'm developing a custom service in a .Net MVC Web Application that will connect to an OTRS web service to create/list/update tickets.
We are implementing many process workflows to make things work more efficiently.
Problem
I cannot find a way to "attach" a new ticket to a process, I know how to create a normal ticket, but not a process ticket.
I found a Perl function that seems to do what I need, but I cannot see how to connect it to my problem.
Perl Script
ProcessTicketProcessSet() - set a ticket's ProcessEntityID:

my $Success = $ProcessObject->ProcessTicketProcessSet(
    ProcessEntityID => 'P1',
    TicketID        => 123,
    UserID          => 123,
);

Returns:
$Success = 1;    # or undef
# 1 if setting the Activity was executed
# undef if setting failed
Normal Ticket
URL:
http://someDomain.com.br/otrs/nph-genericinterface.pl/Webservice/SomeWebServiceName/Ticket?UserLogin=user&Password=abcd
Method: POST
Body:
{
  "UserLogin": "user",
  "Password": "abcd",
  "Ticket": {
    "Title": "REST - To Create Ticket",
    "Type": "Unclassified",
    "QueueID": "5",
    "State": "new",
    "Priority": "3 normal",
    "CustomerUser": "someuser#someemail.com.br"
  },
  "DynamicField": [
    {
      "Name": "CustomFieldOne",
      "Value": "value1"
    },
    {
      "Name": "CustomFieldTwo",
      "Value": "value2"
    }
  ],
  "Article": {
    "Subject": "Rest - Article Ticket",
    "Body": "Test Article Creation",
    "ContentType": "text/plain; charset=utf8"
  }
}
How can I create a ticket that belongs to a process?
To create a ticket which belongs to a process you need to set two dynamic fields of a ticket.
ProcessManagementProcessID (which is representing the process)
ProcessManagementActivityID (which is representing the activity step of the process)
You can also set both dynamic fields later to attach an existing ticket to the process.
If you do not know which values to set, just create a process ticket via the UI and check in the ticket history which values are set for both dynamic fields.
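The question uses a .NET client, but as a language-neutral illustration, here is the same TicketCreate body as the "normal ticket" above with the two process-related dynamic fields added, sent with fetch in TypeScript. The 'P1'/'A1' values are placeholders (P1 is taken from the Perl example; A1 is a hypothetical activity id); look the real values up as described above.

const url =
  "http://someDomain.com.br/otrs/nph-genericinterface.pl/Webservice/SomeWebServiceName/Ticket?UserLogin=user&Password=abcd";

const body = {
  UserLogin: "user",
  Password: "abcd",
  Ticket: {
    Title: "REST - To Create Process Ticket",
    Type: "Unclassified",
    QueueID: "5",
    State: "new",
    Priority: "3 normal",
    CustomerUser: "someuser#someemail.com.br",
  },
  DynamicField: [
    { Name: "ProcessManagementProcessID", Value: "P1" },  // placeholder process entity id
    { Name: "ProcessManagementActivityID", Value: "A1" }, // placeholder activity entity id
  ],
  Article: {
    Subject: "Rest - Article Ticket",
    Body: "Test Article Creation",
    ContentType: "text/plain; charset=utf8",
  },
};

async function createProcessTicket() {
  const response = await fetch(url, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(body),
  });
  return response.json();
}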

MongoDB - how to properly model relations

Let's assume we have the following collections:
Users
{
  "id": MongoId,
  "username": "jsloth",
  "first_name": "John",
  "last_name": "Sloth",
  "display_name": "John Sloth"
}
Places
{
  "id": MongoId,
  "name": "Conference Room",
  "description": "Some longer description of this place"
}
Meetings
{
  "id": MongoId,
  "name": "Very important meeting",
  "place": <?>,
  "timestamp": "1506493396",
  "created_by": <?>
}
Later on, we want to return (e.g. from a REST web service) a list of upcoming meetings like this:
[
  {
    "id": MongoId(Meetings),
    "name": "Very important meeting",
    "created_by": {
      "id": MongoId(Users),
      "display_name": "John Sloth"
    },
    "place": {
      "id": MongoId(Places),
      "name": "Conference Room"
    }
  },
  ...
]
It's important to return the basic information that needs to be displayed on the main page of the web UI (so no additional calls are needed to render the table). That's why each entry contains the display_name of the user who created it and the name of the place. I think that's a pretty common scenario.
Now my question is: how should I store this information in the db (the question-mark values in the Meeting document)? I see 2 options:
1) Store references to other collections:
place: MongoId(Places)
(+) data is always consistent
(-) additional calls to db have to be made in order to construct the response
2) Denormalize data:
"place": {
"id": MongoId(Places),
"name": "Conference room",
}
(+) no need for additional calls (response can be constructed based on one document)
(-) data must be updated each time related documents are modified
What is the proper way of dealing with such scenario?
If I use option 1), how should I query the other documents? Asking for each related document separately seems like overkill. How about getting the last 20 meetings, aggregating the list of related ids, and then performing a query like db.users.find({_id: { $in: <id list> }})?
If I go for option 2), how should I keep the data in sync?
Thanks in advance for any advice!
You can keep the DB model you already have and still do only a single query, because MongoDB introduced the $lookup aggregation stage in version 3.2. It is similar to a join in an RDBMS.
$lookup
Performs a left outer join to an unsharded collection in the same database to filter in documents from the “joined” collection for processing. The $lookup stage does an equality match between a field from the input documents with a field from the documents of the “joined” collection.
So instead of storing a reference to other collections, just store the document ID.
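Here is a minimal sketch of that $lookup approach with the official Node.js mongodb driver, assuming the Meetings documents store the raw ObjectIds of the user and place; the database name, sort field, and projection are my own assumptions based on the question:

import { MongoClient } from "mongodb";

async function upcomingMeetings(uri: string) {
  const client = await MongoClient.connect(uri);
  try {
    const db = client.db("app"); // placeholder database name
    return await db
      .collection("meetings")
      .aggregate([
        { $sort: { timestamp: 1 } },
        { $limit: 20 },
        // Join the referenced user and place documents.
        { $lookup: { from: "users", localField: "created_by", foreignField: "_id", as: "created_by" } },
        { $lookup: { from: "places", localField: "place", foreignField: "_id", as: "place" } },
        { $unwind: "$created_by" },
        { $unwind: "$place" },
        // Keep only the fields the web UI needs.
        {
          $project: {
            name: 1,
            "created_by._id": 1,
            "created_by.display_name": 1,
            "place._id": 1,
            "place.name": 1,
          },
        },
      ])
      .toArray();
  } finally {
    await client.close();
  }
}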

Validate referential integrity of object arrays with Joi

I'm trying to validate that the data I am returned is sensible. Validating data types is done. Now I want to validate that I've received all of the data needed to perform a task.
Here's a representative example:
{
  "things": [
    {
      "id": "00fb60c7-520e-4228-96c7-13a1f7a82749",
      "name": "Thing 1",
      "url": "https://lolagons.com"
    },
    {
      "id": "709b85a3-98be-4c02-85a5-e3f007ce4bbf",
      "name": "Thing 2",
      "url": "https://lolfacts.com"
    }
  ],
  "layouts": {
    "sections": [
      {
        "id": "34f10988-bb3d-4c38-86ce-ed819cb6daee",
        "name": "Section 1",
        "content": [
          {
            "type": 2,
            "id": "00fb60c7-520e-4228-96c7-13a1f7a82749" // Ref to Thing 1
          }
        ]
      }
    ]
  }
}
So every Section references 0+ Things, and I want to validate that every id value returned in the Content of Sections also exists as an id in Things.
The docs for Object.assert(..) imply that I need a concrete reference. Even if I do the validation within Object.keys or Array.items, I can't resolve the reference at the other end.
Not that it matters, but my context is that I'm validating HTTP responses within IcedFrisby, a Frisby.js fork.
This wasn't really solvable in the way I asked (i.e. with Joi).
I solved this for my context by writing a plugin for icedfrisby (published on npm here) which uses jsonpath to fetch each id in Content and each id in Things. The plugin will then assert that all of the first set exist within the second.
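For anyone who wants the idea without the plugin, here is a rough TypeScript sketch of the same check, assuming the jsonpath npm package and the response shape from the question:

import jp from "jsonpath";

function assertContentIdsExistInThings(body: unknown): void {
  // Collect every id declared under "things".
  const thingIds = new Set<string>(jp.query(body, "$.things[*].id"));
  // Collect every id referenced from section content.
  const contentIds: string[] = jp.query(body, "$.layouts.sections[*].content[*].id");

  // Every referenced id must exist in the set of declared ids.
  const missing = contentIds.filter((id) => !thingIds.has(id));
  if (missing.length > 0) {
    throw new Error(`content ids not present in things: ${missing.join(", ")}`);
  }
}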