Exposing temporal data - is this RESTful?

Background/Context:
I have a very simple use case that I am attempting to use as an opportunity to build support within my company for hypermedia-rich API development. What I'm trying to determine - with the help of the wonderful stackoverflow community :) - is whether my proposed implementation ticks all the boxes to call itself "RESTful".
As promised, the use case is quite simple - we have a data table where we track a location classification code on a per-zipcode basis. The only thing remotely interesting about this table is that - via the from/thru dates - it has the concept of effectivity:
Table State 1
ID  | ZIP   | CODE | FROM       | THRU
====|=======|======|============|===========
548 | 90210 | R    | 2013-01-01 | null
We get updated data every fall that takes effect Jan 1st of the following year. For example, if the above zipcode was being updated to a new "CODE" value, we'd make the following table changes:
Table State 2
ID  | ZIP   | CODE | FROM       | THRU
====|=======|======|============|===========
548 | 90210 | R    | 2013-01-01 | 2014-12-31
777 | 90210 | U    | 2015-01-01 | null
Of course, one could expose this interaction as a CRUD service in their sleep, call it "RESTful", and be done with it in short order. PUT to update the old record's "THRU" date; POST to insert a record reflecting the new data.
Easy enough, but as I mentioned, I'm trying to use this opportunity to do a hypermedia case study. Here is my proposed design, conveyed via some sample requests showing how I would get from Table State 1 to Table State 2:
Request 1:
GET /zips/90210
Response 1:
200 OK
{"zipCode":90210,"pickupLocationCode":"R","fromDate":"2013-01-01","thruDate":null,
"_links":{"self":{"href":"/zips/90210"},
"latest":{"href":"/zips/90210"}}}
Request 2:
PUT /zips/90210
{"zipCode":90210,"pickupLocationCode":"U","fromDate":"2015-01-01","thruDate":null}
Response 2:
204 No Content
Request 3:
GET /zips/90210
Response 3:
200 OK
{"zipCode":90210,"pickupLocationCode":"U","fromDate":"2015-01-01","thruDate":null,
"_links":{"self":{"href":"/zips/90210"},
"previous":{"href":"/zips/90210/0"},
"latest":{"href":"/zips/90210"}}}
Request 4:
GET /zips/90210/0
Response 4:
200 OK
{"zipCode":90210,"pickupLocationCode":"R","fromDate":"2013-01-01","thruDate":"2014-12-31",
"_links":{"self":{"href":"/zips/90210/0"},
"next":{"href":"/zips/90210"},
"latest":{"href":"/zips/90210"}}}
Note that the PUT in Request 2 automagically "expired" the old row (i.e. set its thru date) and then inserted a new row.
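For illustration, here is a minimal sketch of what that PUT handler might do server-side - a transaction that expires the active row and inserts its successor. This assumes a SQL table shaped like the ones above and better-sqlite3; the table and column names are hypothetical:

import Database from "better-sqlite3";

const db = new Database("zips.db");
db.exec(`CREATE TABLE IF NOT EXISTS zips (
  id INTEGER PRIMARY KEY AUTOINCREMENT,
  zip TEXT, code TEXT, "from" TEXT, thru TEXT)`);

// Apply a PUT to /zips/{zip}: expire the active row, then insert the new one.
// Wrapped in a transaction so readers never observe a half-applied update.
const applyPut = db.transaction((zip: string, code: string, fromDate: string) => {
  // The old row's thru date is the day before the new row takes effect.
  const d = new Date(fromDate);
  d.setUTCDate(d.getUTCDate() - 1);
  const thru = d.toISOString().slice(0, 10);

  db.prepare("UPDATE zips SET thru = ? WHERE zip = ? AND thru IS NULL").run(thru, zip);
  db.prepare('INSERT INTO zips (zip, code, "from", thru) VALUES (?, ?, ?, NULL)')
    .run(zip, code, fromDate);
});

applyPut("90210", "U", "2015-01-01"); // takes the table from State 1 to State 2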
I'm attempting to let clients (1) locate a given zipcode via a URL built from a well-known business key (e.g. "90210") and then (2) navigate through that zipcode's states at different points in time via hypermedia ("next", "previous", "latest" link rels). Am I headed down the right path here, or am I on a fool's errand?
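To make that link-following concrete, here is a minimal client sketch. It assumes Node 18+ (for the global fetch), a hypothetical base URL, and that every historical state carries a "previous" rel until the oldest one:

// Walk a zipcode's history from latest to oldest by following "previous" links.
const BASE = "http://localhost:8080"; // hypothetical host for the API above

async function history(zip: string): Promise<void> {
  let href: string | undefined = `/zips/${zip}`;
  while (href) {
    const res = await fetch(BASE + href);
    if (!res.ok) throw new Error(`GET ${href} -> ${res.status}`);
    const doc = await res.json();
    console.log(`${doc.fromDate} .. ${doc.thruDate}: ${doc.pickupLocationCode}`);
    href = doc._links.previous?.href; // absent on the oldest state, ending the loop
  }
}

history("90210").catch(console.error);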
Using this design, if you ever wanted to "delete" a zipcode (I'm guessing this rarely - if ever? - happens, but again, trying to use this as more of an academic exercise than anything), you could do some interesting stuff:
Delete Request:
DELETE /zips/90210
Delete Response:
204 No Content
A subsequent GET to /zips/90210 would give you:
303 See Other
Location: /zips/90210/0
This would allow you to make a valuable distinction between (1) a zipcode for which we no longer have any "active" data and (2) a zipcode that never existed to begin with - for instance:
GET /zips/99999
Which would give you a good old 404:
404 Not Found
Inserting a new zipcode (a use case that probably does happen semi-frequently) would simply be a matter of making an exploratory GET to confirm that zip doesn't currently exist (i.e. either a 303 See Other or 404 Not Found response) and then making a POST to the base "/zips" URL.
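A sketch of that insert flow, under the same assumptions as the client snippet above:

// Create a zipcode only if it currently has no active state.
const BASE = "http://localhost:8080"; // hypothetical host

async function createZip(zip: number, code: string, fromDate: string): Promise<void> {
  // redirect: "manual" keeps fetch from silently following the 303.
  const probe = await fetch(`${BASE}/zips/${zip}`, { redirect: "manual" });
  // 404: never existed; 303: existed but was deleted - either way, no active data.
  if (probe.status !== 404 && probe.status !== 303) {
    throw new Error(`zipcode ${zip} already has active data`);
  }
  await fetch(`${BASE}/zips`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ zipCode: zip, pickupLocationCode: code, fromDate, thruDate: null }),
  });
}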
Any feedback would be much appreciated!

Related

Should I validate relationships in DynamoDB?

Let's say I have an app where users can make posts. I store these in a single DynamoDB table using the following design:
+--------+--------+---------------------------+
| PK     | SK     | (Attributes)              |
+--------+--------+---------------------------+
| UserId | UserId | username, profile, etc... | <-- user item
| UserId | PostId | body, timestamp, etc...   | <-- post item
+--------+--------+---------------------------+
When a user makes a post, my Lambda function receives the following data:
{
  "userId": <UserId>,
  "body": <Body>,
  etc...
}
My question is: should I first verify that the user exists before adding the post to the table, using dynamodb.get({PK: userId, SK: userId})? This would make sure there won't be any orphaned posts, but it also means the function requires both a read unit and a write unit.
One idea I have is to just write the post, potentially allowing orphaned posts. Then, I could have another Lambda function that runs periodically to find and remove any orphans.
This is obviously a simple case, but imagine a more complex system where objects have multiple relationships. It seems it could easily get very costly to check for relationship existence in these cases.
"Then, I could have another Lambda function that runs periodically to find and remove any orphans." <-- This could get very expensive over time, especially if you plan to do this by scanning the table.
I develop a system built on DynamoDB that has similar relationships, and I validate relationships before saving data because I do not want to have garbage data in my tables.
One option to consider is implicitly testing for the existence of a valid user via authentication & authorization. If a user has passed your auth tests, then you know that they exist, so you can add their posts with confidence.
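If you do want the validate-before-save behavior without a separate read round-trip, one option (sketched here with the AWS SDK v3 document client; the table name is an assumption, the key schema follows the question) is a transactional write that condition-checks the user item and puts the post atomically:

import { DynamoDBClient } from "@aws-sdk/client-dynamodb";
import { DynamoDBDocumentClient, TransactWriteCommand } from "@aws-sdk/lib-dynamodb";

const doc = DynamoDBDocumentClient.from(new DynamoDBClient({}));

// The whole transaction fails with TransactionCanceledException if the
// user item is missing, so an orphaned post is never written.
async function addPost(userId: string, postId: string, body: string): Promise<void> {
  await doc.send(new TransactWriteCommand({
    TransactItems: [
      {
        ConditionCheck: {
          TableName: "AppTable", // hypothetical table name
          Key: { PK: userId, SK: userId },
          ConditionExpression: "attribute_exists(PK)",
        },
      },
      {
        Put: {
          TableName: "AppTable",
          Item: { PK: userId, SK: postId, body, timestamp: Date.now() },
        },
      },
    ],
  }));
}

Note the trade-off: transactional writes consume roughly twice the write capacity of a plain put, so this replaces the extra read unit with transaction overhead rather than eliminating the cost.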

POST request for JOIN table

Let's say we have 3 tables:
session
-------------------------
id | name | date
-------------------------
speaker
-------------------------
id | name
-------------------------
session_speaker
-------------------------
session_id | speaker_id
-------------------------
I already have endpoints in place to do the insertion:
POST /session
POST /speaker
What kind of REST request should I create to insert into the JOIN table (passing session_id and speaker_id) - via POST /session or some other method?
Note: I have a PATCH request already in place to activate or deactivate a session.
Question:
Basically I'm seeking an ideal REST-based solution to handle CRUD operations for the JOIN table - please advise.
You could use the following REST operation for creating the relationship:
PUT speakers/speaker/{speakerId}/session/{sessionId}/
I don't advise using plural names in URLs (e.g. speakers); I'd recommend a singular name such as SessionSpeaker, but as you can't change it from "speakers" I've used it as requested.
You should also use PUT instead of POST for inserting this data, as PUT is idempotent - i.e. it guards against inserting the same speaker/session pair twice.
To then retrieve speakers information you could use:
GET speakers/session/{sessionId}
GET speakers/speaker/{speakerId}
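As an illustration of how those routes could hang together, here is a minimal Express sketch with an in-memory stand-in for the session_speaker table (the storage and names are illustrative, not part of the question):

import express from "express";

const app = express();
// Join rows keyed as "sessionId:speakerId"; a real app would hit the database.
const sessionSpeaker = new Set<string>();

// Idempotent link: PUTting the same pair twice still leaves exactly one row.
app.put("/speakers/speaker/:speakerId/session/:sessionId", (req, res) => {
  sessionSpeaker.add(`${req.params.sessionId}:${req.params.speakerId}`);
  res.sendStatus(204);
});

// Unlink.
app.delete("/speakers/speaker/:speakerId/session/:sessionId", (req, res) => {
  const existed = sessionSpeaker.delete(`${req.params.sessionId}:${req.params.speakerId}`);
  res.sendStatus(existed ? 204 : 404);
});

// All speakers for a session.
app.get("/speakers/session/:sessionId", (req, res) => {
  const speakerIds = [...sessionSpeaker]
    .filter((key) => key.startsWith(`${req.params.sessionId}:`))
    .map((key) => key.split(":")[1]);
  res.json(speakerIds);
});

app.listen(3000);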
Another good answer regarding REST and entity multiplicity is here.

ItemInventoryQuery is not returning all the available fields

We are using the Web Connector on the machine where QBPOS 10.0 is installed.
On the server end, we issue an ItemInventoryQuery request using QBPOSFC 3.0 (the QB POS Foundation Classes).
The response we receive from QuickBooks contains most of the fields available on an inventory item, but some fields are not being returned; specifically, "Unit of Measure" is missing from the XML we receive.
Per the Onscreen Reference, "UnitOfMeasure" is a field available on the response of an ItemInventoryQuery:
https://member.developer.intuit.com/qbsdk-current/Common/newOSR/index.html
Nonetheless, I am unable to obtain these values - the "UnitOfMeasure" nodes do not even exist in the XML response we get from QuickBooks. Everything else in the response is good (e.g. item ListID, name, vendor, etc.).
What am I missing here?
Here is a sample of the XML response we receive:
http://pastebin.com/pA6KDr0k
I just checked some of my old source code and found that I was explicitly telling it which fields to return. For example:
query.IncludeRetElementList.Add("UnitOfMeasure1");
query.IncludeRetElementList.Add("UnitOfMeasure2");
query.IncludeRetElementList.Add("UnitOfMeasure3");
I don't remember if I did this because of the same problem you're having, but I do know I got the UOM fields in the response. Hope this helps!
Check that Unit of Measure is enabled for the company file under Preferences -> Items & Inventory -> Company Preferences tab. It is disabled by default in new companies.
You are missing other fields too, such as time created.
If you included any IncludeRetElementList lines in your request, that will limit your results.
So you will have to add IncludeRetElementList entries for the UOM fields, as Mike suggested.
If that doesn't work, I'd suggest posting your request.

SCardTransmit() always returns 6d00

I'm trying to read the name, card number, expiry date, etc. on a credit card, but SCardTransmit always returns 6d00.
I'm using pre-defined AIDs, which I have googled to be valid (correct me if I'm wrong). Here they are:
AID_LIST = {
"A0000000421010",
"A0000000422010",
"A0000000031010",
"A0000000032010",
"A0000000041010",
"A0000000042010",
"A00000006900",
"A0000001850002"
}
Thanks in advance.
I am not familiar with this API you are using, but you will have to send the following sequence of APDU commands:
SELECT PSE (for contact cards), specified by EMV in Book 1, 11.3. An example is "00A404000E315041592E5359532E444446303100".
With the SFI returned, you can read the records to find out the supported AIDs. Alternatively, you can do this by "trial and error": take the pre-defined AIDs you listed and call SELECT AID for each, following the guidelines in Book 1, 12.3.3.
You may either call GET PROCESSING OPTIONS to see which records are available to read, or scan all possible records with READ RECORD. One of those records will contain the data you are looking for.
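To make the SELECT commands concrete, here is a small sketch that only builds the APDU byte arrays - transmission itself would still go through SCardTransmit or whatever PC/SC binding you are using:

// Build a SELECT-by-name APDU: CLA=00 INS=A4 P1=04 P2=00, Lc, data, Le=00.
function buildSelect(data: Uint8Array): Uint8Array {
  return Uint8Array.from([0x00, 0xa4, 0x04, 0x00, data.length, ...data, 0x00]);
}

const hex = (bytes: Uint8Array): string =>
  [...bytes].map((b) => b.toString(16).padStart(2, "0")).join("").toUpperCase();

// SELECT PSE: the "file name" is the ASCII string "1PAY.SYS.DDF01".
const selectPse = buildSelect(new TextEncoder().encode("1PAY.SYS.DDF01"));
console.log(hex(selectPse)); // 00A404000E315041592E5359532E444446303100

// SELECT AID for one of the candidates from the question.
const aid = Uint8Array.from("A0000000031010".match(/../g)!.map((h) => parseInt(h, 16)));
console.log(hex(buildSelect(aid))); // 00A4040007A000000003101000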
Usually the same record stores the cardholder name, PAN, and Track 2 discretionary data (which contains the expiration date).
The tags are listed in Book 3.
Application Primary Account Number (PAN) - 5A
Cardholder name - 5F20
Track 2 Discretionary Data - 9F20
Useful info about Track 2:
http://en.wikipedia.org/wiki/Magnetic_stripe_card
A sample of the sequence above:
http://code.google.com/p/javaemvreader/wiki/ExampleOutput
EMV Specs:
http://www.emvco.com/specifications.aspx?id=223
The possible return codes, such as 61XX, 9000, etc., are listed in ISO 7816 (6D00 means "instruction code not supported or invalid"). Here's a good overview: http://www.cardwerk.com/smartcards/smartcard_standard_ISO7816-4_5_basic_organizations.aspx
You need to look up/buy ISO 7816, the EMV specifications, and your vendor's card specifications; otherwise you don't know what you are doing.

Storing Embedded Comments vs. Avoiding overhead in MongoDB

Let me explain my problem, and hopefully someone can offer some good advice.
I am currently working on a web app that stores information and metadata for a large number of applications. For each application there could be anywhere from 10 to hundreds of comments tied to the application and an application version id. I am using MongoDB because of a need for easy future scalability and speed. I have read that comments should be embedded in a collection for read-performance reasons, but I'm not sure that this works in my case. I read on another post:
In general, if you need to work with a given data set on its own, make it a collection.
By: #kb
In my case, however, I don't need to work with the comments on their own. Let me explain further. I will have a table of apps (that can be filtered) and will dynamically load entries as you scroll or filter through the list of apps. If I embed the comments within each application document, I am sending ALL the comments whenever I dynamically load an application entry into the table. However, I would like to do "lazy loading": only load the comments when the user requests to see them (by clicking on the entry in the table).
As an example, my table might look like the following
| app name | version | rating | etc. | view comments |
-------------------------------------------------------
| app1     | v.1.0   | 4 star | etc. | click me!     |
| app2     | v.2.4.5 | 3 star | etc. | click me!     |
| ...
My question is: which would be more efficient? Are reads fast enough in MongoDB that it really doesn't matter that I am pulling all the comments with each application? If a user did not filter the applications at all and scrolled all the way to the bottom, they might load somewhere between 125k and 250k entries/applications.
I would suggest being more specific in your query - you can specify which parts of an object you'd like returned. This should allow you to avoid the overhead of fetching a bunch of embedded comments when you're only interested in displaying specific bits of information about the application.
You can do something like: db.collection.find({ appName : 'Foo'}, {comments : 0 }); to retrieve the application object with appName Foo, but specifically exclude the comments object (more likely array of objects) embedded within it.
From the MongoDB docs
Retrieving a Subset of Fields
By default on a find operation, the entire document/object is returned. However we may also request that only certain fields are returned. Note that the _id field is always returned automatically.
// select z from things where x=3
db.things.find( { x : 3 }, { z : 1 } );
You can also remove specific fields that you know will be large:
// get all posts about mongodb without comments
db.posts.find( { tags : 'mongodb' }, { comments : 0 } );
EDIT
Also remember the limit(n) function to retrieve only n apps at a time. For instance, getting n=50 apps without their comments would be:
db.collection.find({}, {comments : 0 }).limit(50);
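Putting the lazy-loading idea together with the Node.js driver - the collection, field, and connection names below are assumptions that follow the examples above:

import { MongoClient } from "mongodb";

const client = new MongoClient("mongodb://localhost:27017"); // hypothetical
const apps = client.db("appdb").collection("apps");

// Table view: one page of apps, with comments excluded from the wire entirely.
async function listApps(page: number, pageSize = 50) {
  return apps.find({}, { projection: { comments: 0 } })
    .skip(page * pageSize)
    .limit(pageSize)
    .toArray();
}

// "click me!": fetch only the comments for a single app, on demand.
async function getComments(appName: string) {
  const app = await apps.findOne({ appName }, { projection: { comments: 1, _id: 0 } });
  return app?.comments ?? [];
}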