POST request for a JOIN table - REST

Let's say I have 3 tables:
session
-------------------------
id | name | date
-------------------------
speaker
-------------------------
id | name
-------------------------
session_speaker
-------------------------
session_id | speaker_id
-------------------------
I already have endpoints in place to do the insertions:
POST /session
POST /speaker
What kind of REST request should I create to express the intention of inserting into the JOIN table, using POST /session or some other method (passing session_id and speaker_id)?
Note: I already have a PATCH request in place to activate or deactivate a session.
Question:
Basically, I'm seeking an ideal REST-based solution to handle CRUD operations for the JOIN table; please advise.

You could use the following REST operation for creating the relationship:
PUT speakers/speaker/{speakerId}/session/{sessionId}/
I don't advise using plural names in URLs (e.g. speakers); I'd recommend a singular name such as SessionSpeaker. But since you can't change it from "speakers", I've used it as requested.
You should also use PUT instead of POST for inserting this data, as PUT is idempotent, i.e. it guards against inserting the same speaker for a session twice.
To then retrieve speakers information you could use:
GET speakers/session/{sessionId}
GET speakers/speaker/{speakerId}
Another good answer regarding REST and entity multiplicity is here.
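As a rough illustration of those routes, here is a minimal Flask sketch; the app, handler bodies, and the in-memory stand-in for the join table are assumptions made for the example, not part of the original answer:

from flask import Flask, jsonify

app = Flask(__name__)

# Stand-in for the session_speaker join table.
session_speaker = set()

@app.route("/speakers/speaker/<int:speaker_id>/session/<int:session_id>", methods=["PUT"])
def link_speaker_to_session(speaker_id, session_id):
    # PUT is idempotent: repeating the request still leaves exactly one link.
    session_speaker.add((session_id, speaker_id))
    return "", 204

@app.route("/speakers/session/<int:session_id>", methods=["GET"])
def speakers_for_session(session_id):
    speaker_ids = [sp for (se, sp) in session_speaker if se == session_id]
    return jsonify({"sessionId": session_id, "speakerIds": speaker_ids})

Because the PUT handler only adds the (session_id, speaker_id) pair to a set, repeating the request has no further effect, which is exactly the idempotency the answer relies on.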

Related

Best approach to connect a user to their client schema?

I'm building an application where I would like to use the multi-tenant strategy of creating a schema for each client. Would it be appropriate to store all users in a users table within a single schema that includes a reference column for their respective schemas?
Db_app_01
/schema_public
/schema_public/table_users
/schema_client_1
Where in table_users I have:
| user_id | username | password | schema_id |
---------------------------------------------
| 1       | user1    | *        | 1         |
My thinking is that this would let me easily query the correct schema, since the schema_id would be available in the main users table, which is used for authentication.
Your approach looks fine to me, as long as there are not too many different users. When the number of tables and schemas goes into the 10000s, metadata queries will become sluggish, and it won't be much fun any more.
I wouldn't construct dynamic queries out of the schema_id, explicitly referencing the appropriate schema.
Rather, I would set search_path appropriately.
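For example, here is a minimal psycopg2 sketch of the search_path approach; the connection string, schema naming convention, and lookup query are assumptions for illustration, not part of the answer:

import psycopg2

conn = psycopg2.connect("dbname=db_app_01")

def use_tenant_schema(conn, user_id):
    with conn.cursor() as cur:
        # Look up the authenticated user's schema in the shared public schema.
        cur.execute(
            "SELECT schema_id FROM schema_public.table_users WHERE user_id = %s",
            (user_id,),
        )
        row = cur.fetchone()
        if row is None:
            raise ValueError("unknown user")
        # Point unqualified table references at the tenant's schema for this session
        # (assumed naming convention: schema_client_<schema_id>).
        cur.execute(
            "SELECT set_config('search_path', %s, false)",
            (f"schema_client_{row[0]}, schema_public",),
        )

After this, the application's queries can reference tenant tables without interpolating the schema name into each SQL statement.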

Should I validate relationships in DynamoDB?

Let's say I have an app where users can make posts. I store these in a single DynamoDB table using the following design:
+--------+--------+---------------------------+
| PK     | SK     | (Attributes)              |
+--------+--------+---------------------------+
| UserId | UserId | username, profile, etc... |  <-- user item
| UserId | PostId | body, timestamp, etc...   |  <-- post item
+--------+--------+---------------------------+
When a user makes a post, my Lambda function receives the following data:
{
  "userId": "<UserId>",
  "body": "<Body>",
  etc...
}
My question is: should I first verify that the user exists before adding the post to the table, by using dynamodb.get({PK: userId, SK: userId})? This would ensure there won't be any orphaned posts, but the function would then consume both a read and a write unit.
One idea I have is to just write the post, potentially allowing orphaned posts. Then, I could have another Lambda function that runs periodically to find and remove any orphans.
This is obviously a simple case, but imagine a more complex system where objects have multiple relationships. It seems it could easily get very costly to check for relationship existence in these cases.
"Then, I could have another Lambda function that runs periodically to find and remove any orphans." <-- This could get very expensive over time, especially if you plan to do this by scanning the table.
I develop a system built on DynamoDB that has similar relationships, and I validate relationships before saving data because I do not want to have garbage data in my tables.
One option to consider is implicitly testing for the existence of a valid user via authentication & authorization. If a user has passed your auth tests, then you know that they exist, so you can add their posts with confidence.
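As a minimal sketch of validating the relationship before the write (the table name, key attribute names, and error handling here are assumptions, not from the answer):

import boto3

table = boto3.resource("dynamodb").Table("AppTable")

def create_post(user_id, post_id, body):
    # One read unit: confirm the user item exists before writing the post.
    resp = table.get_item(Key={"PK": user_id, "SK": user_id})
    if "Item" not in resp:
        raise ValueError(f"User {user_id} does not exist; refusing to create an orphaned post")
    # One write unit: store the post under the same partition key.
    table.put_item(Item={"PK": user_id, "SK": post_id, "body": body})

This costs the extra read unit the question mentions, but it keeps orphaned posts out of the table without a periodic cleanup job.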

How to specify multiple query parameters instead of a request body with the HTTP DELETE verb?

I have an HTTP DELETE endpoint, and I would like to know how to specify multiple query parameters for it.
Based on my research, I've learned that it is better to specify parameters as part of the query string than in the request body for an HTTP DELETE endpoint.
I am dynamically creating endpoints, one of which is an HTTP DELETE endpoint. Let's say I have a DELETE endpoint like the one below:
https://root/Customer
Now I have an interface for delete configuration, where the user selects the columns required to delete a record from a table.
So let's say I have the table below, in which 2 columns (Col1, Col2) uniquely identify a record:
Customer
Col1
Col2
Col3
The user selects this Customer table and the columns used to delete a record via the HTTP DELETE endpoint.
So in the case of a single unique column, I am generating a URL like this:
https://root/Customer/Col1
But when 2 or more columns uniquely identify a record, and the user selects 2 or more columns from Customer, is the URL below correct per HTTP standards?
https://root/Customer/Col1?Col2={value1}&Col3={Value2}
Can anybody please guide me on this?
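For reference, here is how a URL of the proposed shape could be composed in Python; the host and column names come straight from the question, and this sketch says nothing about whether the scheme itself is the right choice:

from urllib.parse import urlencode

base = "https://root/Customer/Col1"               # path segment for the first key column
params = {"Col2": "value1", "Col3": "Value2"}      # remaining key columns as query parameters
url = f"{base}?{urlencode(params)}"
# -> https://root/Customer/Col1?Col2=value1&Col3=Value2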

Exposing temporal data - is this RESTful?

Background/Context:
I have a very simple use case that I am attempting to use as an opportunity to build support within my company for hypermedia-rich API development. What I'm trying to determine - with the help of the wonderful stackoverflow community :) - is whether my proposed implementation ticks all the boxes to call itself "RESTful".
As promised, the use case is quite simple - we have a data table where we track some sort of location classification code on a per zipcode basis. The only thing remotely interesting about this table is that we have - via the from/thru dates - the concept of effectivity:
Table State 1
 ID | ZIP   | CODE | FROM       | THRU
====|=======|======|============|============
548 | 90210 | R    | 2013-01-01 | null
We get updated data every fall that takes effect Jan 1st of the following year. For example, if the above zipcode was being updated to a new "CODE" value, we'd make the following table changes:
Table State 2
 ID | ZIP   | CODE | FROM       | THRU
====|=======|======|============|============
548 | 90210 | R    | 2013-01-01 | 2014-12-31
777 | 90210 | U    | 2015-01-01 | null
Of course, one could expose this interaction as a CRUD service in their sleep, call it "RESTful", and be done with it in short order. PUT to update the old record's "THRU" date; POST to insert a record reflecting the new data.
Easy enough, but as I mentioned, I'm trying to use this opportunity to do a hypermedia case study - here is my proposed design (conveyed via some sample requests). Here is how I would propose getting from Table State 1 to Table State 2:
Request 1:
GET /zips/90210
Response 1:
200 OK
{"zipCode":90210,"pickupLocationCode":"R","fromDate":"2013-01-01","thruDate":null,
"_links":{"self":{"href":"/zips/90210"},
"latest":{"href":"/zips/90210"}}}
Request 2:
PUT /zips/90210
{"zipCode":90210,"pickupLocationCode":"U","fromDate":"2015-01-01","thruDate":null}
Response 2:
204 No Content
Request 3:
GET /zips/90210
Response 3:
200 OK
{"zipCode":90210,"pickupLocationCode":"U","fromDate":"2015-01-01","thruDate":null,
"_links":{"self":{"href":"/zips/90210"},
"previous":{"href":"/zips/90210/0"},
"latest":{"href":"/zips/90210"}}}
Request 4:
GET /zips/90210/0
Response 4:
200 OK
{"zipCode":90210,"pickupLocationCode":"R","fromDate":"2013-01-01","thruDate":"2014-12-31",
"_links":{"self":{"href":"/zips/90210/0"},
"next":{"href":"/zips/90210"},
"latest":{"href":"/zips/90210"}}}
Note that the PUT in Request 2 automagically "expired" the old row (i.e. set the thru date) and then inserted a new row.
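As a rough sketch of what that PUT handler might do behind the scenes, closing out the active row and inserting its replacement in one transaction, assuming a PostgreSQL table whose columns follow the question's example (the table name and connection handling are assumptions):

import psycopg2

def replace_zip_classification(conn, zip_code, new_code, from_date):
    # Hypothetical table name "zip_classification" with the question's columns.
    with conn, conn.cursor() as cur:
        # "Expire" the currently active row by setting its THRU date to the day
        # before the new FROM date.
        cur.execute(
            "UPDATE zip_classification SET thru = %s::date - 1 "
            "WHERE zip = %s AND thru IS NULL",
            (from_date, zip_code),
        )
        # Insert the row that takes effect on the new FROM date.
        cur.execute(
            'INSERT INTO zip_classification (zip, code, "from", thru) '
            "VALUES (%s, %s, %s, NULL)",
            (zip_code, new_code, from_date),
        )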
I'm attempting to allow clients to (1) locate a given zipcode via a URL using a well-known business key (e.g. "90210") and then (2) via hypermedia ("next", "previous", "latest" link rels), allow the client to navigate through the zipcode's various states at different points in time. Am I headed down the right path here or am I on a fool's errand?
Using this design, if you ever wanted to "delete" a zipcode (I'm guessing this rarely - if ever? - happens, but again, trying to use this as more of an academic exercise than anything), you could do some interesting stuff:
Delete Request:
DELETE /zips/90210
Delete Response:
204 No Content
A subsequent GET to /zips/90210 would give you:
303 See Other
Location: /zips/90210/0
This would allow you to make a valuable distinction between (1) a zipcode for which we no longer have any "active" data and (2) a zipcode that never existed to begin with - for instance:
GET /zips/99999
Which would give you a good old 404:
404 Not Found
Inserting a new zipcode (a use case that probably does happen semi-frequently) would simply be a matter of making an exploratory GET to confirm that zip doesn't currently exist (i.e. either a 303 See Other or 404 Not Found response) and then making a POST to the base "/zips" URL.
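A client-side sketch of that insert flow might look like the following; the base URL and payload fields mirror the earlier examples and are assumptions rather than a spec:

import requests

BASE = "https://api.example.com"   # placeholder host
zip_code = 99999

probe = requests.get(f"{BASE}/zips/{zip_code}", allow_redirects=False)
if probe.status_code in (303, 404):
    # Nothing active exists for this zip, so create it.
    payload = {"zipCode": zip_code, "pickupLocationCode": "U",
               "fromDate": "2015-01-01", "thruDate": None}
    requests.post(f"{BASE}/zips", json=payload).raise_for_status()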
Any feedback would be much appreciated!

Can web2py serve REST data for many-to-many tables via parse_as_rest?

I need to serve REST data about a many-to-many relationship. I've been playing with web2py's lovely parse_as_rest functionality, but can't quite get the many-to-many thing working.
As an example, let's take standard users and groups.
Tables:
user
id
user_name
group
id
group_name
membership
id
user_id
group_id
What pattern do I need to use to serve a URL that will give me all the group_name values for the groups a user belongs to?
patterns = [
    "/user[user]",
    "/user[user]/id/{user.id}",
    "/user[user]/id/{user.id}/membership[membership.user_id]",
    # This is the line that I can't make yet:
    #"/user[user]/id/{user.id}/membership[membership.user_id]/group<WHAT GOES HERE>",
    "/group[group]",
    "/group[group]/id/{group.id}",
]
parser = db.parse_as_rest(patterns, args, vars)
With the non-commented lines above, I can get to these URLs:
1. .../user
2. .../user/id/1
3. .../user/id/1/membership
4. .../group
5. .../group/id/3
URL #3 shows me all my memberships, and I can then make several separate calls to URL #5 to get the group_name values, but there's got to be a way to do this with one call.
Help me StackOverflow! You're my only hope.
This question is at the top of Google search results for this topic.
Just start building the query from the many-to-many table.
/user[membership]/id/{membership.user_id}/groups[group.id]
You don't really need the 'user' table for this request.
A request to "/user/id/22/groups" will then give you the full group records, not only their IDs.
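Putting it together, the question's pattern list with the answer's route added might look like this; the controller idiom at the end follows web2py's documented parse_as_rest usage and is a sketch rather than a drop-in controller:

patterns = [
    "/user[user]",
    "/user[user]/id/{user.id}",
    "/user[user]/id/{user.id}/membership[membership.user_id]",
    # Start from the join table so the full group rows come back in one call:
    "/user[membership]/id/{membership.user_id}/groups[group.id]",
    "/group[group]",
    "/group[group]/id/{group.id}",
]
parser = db.parse_as_rest(patterns, args, vars)
if parser.status == 200:
    return dict(content=parser.response)
else:
    raise HTTP(parser.status, parser.error)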