How to create HTTP GET and POST methods in kdb

What is the best way to set up HTTP GET and POST methods with a kdb database?
I'd like to be able to extract the column names from a kdb table to create a simple form with fillable fields in the browser, allow users to input text into the fields, and then upsert and save that text to my table.
For example, if I had the following table:
t:([employeeID:`$()]fName:`$(); mName:`$(); lName:`$())
So far I know how to open a port with \p 9999 and then view that table in the browser by connecting to http://localhost:9999, and I know how to get only the column names: cols t.
I'm unsure, though, how to build a useful REST API from this table that achieves the above objective, mainly updating the table with the inputted data. I'm aware of .Q.hg and .Q.hp from this blog post and the Kx reference, but there is little information and I'm still unsure how to make them work for my particular purpose.

Depending on your front-end (client) technology, you can use either HTTP requests or WebSockets. Using HTTP requests will require extra work to customize the output, as by default the kdb+ HTTP handler returns HTML.
If your client supports WebSockets, as JavaScript does, they are the easier option.
Basically, you need to do 2 things to set up WebSockets:
1) Start your kdb+ server and set up the handler function for WebSocket requests, which is .z.ws. For example, a simple handler would be something like:
q).z.ws:{neg[.z.w] .Q.s @[value;x;{`$"'",x}]}
This evaluates each incoming message as a q expression, trapping errors with @[value;x;...], formats the result with .Q.s and sends it back asynchronously over the handle with neg[.z.w].
2) Set up a message handler function on the client side, open a WebSocket connection from the client, and send a request to the kdb+ server.
Details: https://code.kx.com/v2/wp/websockets/
Example: https://code.kx.com/v2/wp/websockets/#a-simpledemohtml
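For step 2, here is a minimal sketch of a WebSocket client in Go using the gorilla/websocket package (the browser-based JavaScript client in the linked demo works the same way). The port matches the question, and the upsert query is an assumption built around the question's table; any WebSocket-capable client can play this role.

package main

import (
    "fmt"
    "log"

    "github.com/gorilla/websocket"
)

func main() {
    // Connect to the kdb+ process started with \p 9999.
    conn, _, err := websocket.DefaultDialer.Dial("ws://localhost:9999/", nil)
    if err != nil {
        log.Fatal("dial:", err)
    }
    defer conn.Close()

    // Send a q expression as a text frame; the .z.ws handler above
    // evaluates it and replies with the console-formatted result.
    query := "`t upsert (`e001;`John;`Q;`Public)"
    if err := conn.WriteMessage(websocket.TextMessage, []byte(query)); err != nil {
        log.Fatal("write:", err)
    }

    // Read the reply produced by neg[.z.w] .Q.s ...
    _, msg, err := conn.ReadMessage()
    if err != nil {
        log.Fatal("read:", err)
    }
    fmt.Println(string(msg))
}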

Related

Routing incoming request

I am trying to create a simple API using Go that performs certain operations depending on the data provided.
I was planning to provide JSON data to this API and get details from it for further use.
Since I was trying to provide JSON data I created the routing using gorilla/mux as below:
router.HandleFunc("/msa/dom/perform-factory-reset?json={jsonData}", CallGet)
log.Fatal(http.ListenAndServe(":8080", router))
But while trying to hit the endpoint http://localhost:8080/msa/dom/perform-factory-reset?json={"vrf":"ds","ip":"45","mac":"452","method":"gfd"} I am getting 404 page not found error.
Hence I tried to change the implementation such that new routing is done as:
router.HandleFunc("/msa/dom/perform-factory-reset/json={jsonData}", CallGet)
This works absolutely fine and I am able to perform the desired tasks. Could someone tell me why this is happening?
Is the router gorilla/mux? If so, you cannot add query parameters to the path like that. You have to:
router.Path("/msa/dom/perform-factory-reset").
Queries("json","{jsonData}").HandlerFunc(CallGet)
If it is some other router, you still probably have to register the path without the query parameters, and then read the query parameter values from the request inside the handler.
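A hedged sketch of the whole flow with gorilla/mux: the route and sample JSON come from the question, while the Device struct and its fields are illustrative assumptions.

package main

import (
    "encoding/json"
    "log"
    "net/http"

    "github.com/gorilla/mux"
)

// Device mirrors the sample JSON from the question; field names
// here are assumptions for illustration.
type Device struct {
    VRF    string `json:"vrf"`
    IP     string `json:"ip"`
    MAC    string `json:"mac"`
    Method string `json:"method"`
}

func CallGet(w http.ResponseWriter, r *http.Request) {
    // Query parameters are not part of the routing path; read them
    // from the parsed URL instead.
    raw := r.URL.Query().Get("json")
    var d Device
    if err := json.Unmarshal([]byte(raw), &d); err != nil {
        http.Error(w, "invalid json parameter", http.StatusBadRequest)
        return
    }
    // ... perform the factory reset with d ...
    w.WriteHeader(http.StatusOK)
}

func main() {
    router := mux.NewRouter()
    router.Path("/msa/dom/perform-factory-reset").
        Queries("json", "{jsonData}").HandlerFunc(CallGet)
    log.Fatal(http.ListenAndServe(":8080", router))
}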

Should I allow user-provided values to be passed through a query string?

I'm adding a search endpoint to a RESTful API. After reading this SO answer, I'd like the endpoint to be designed like:
GET /users?firstName=Otis&hobby=golf,rugby,hunting
That seems like a good idea so far. But the values that I'll be using to perform the search will be provided by the user via a standard HTML input field. I'll guard against malicious injections on the server-side, so that's not my concern. I'm more concerned about the user providing a value that causes the URL to exceed the max URL length of ~2000 characters.
I can do some max-length validation and add some user prompts, etc, but I'm wondering if there's a more standard way to handle this case.
I thought about providing the values in the request body using POST /users, but that endpoint is reserved for new user creation, so that's out.
Any thoughts? Thanks.
I see these possible solutions:
1) Go with the query parameter and accept the length constraints (not actually a solution).
2) Go with POST, but not designed as you mention. As you point out, if you POST a user to .../users you will create a new user entity, and that is not what you want to do. You want to submit a search ticket to the server that returns a list of results matching your criteria. I'd design something such as POST .../search/users, passing a representation of your search item in the body (see the sketch after this list).
3) Distribute the query between server side and client side. Say you have complex criteria to match: set up a taxonomy of them so that the strictest ones are handled server side. The server is then able to return a manageable list of items that you can subsequently filter on the client side. In this approach you save space in the query string by sending the server only a subset of the criteria you want your search to meet.
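A minimal sketch of option 2 in Go; the SearchCriteria fields mirror the example query above (firstName, a list of hobbies) and are assumptions, not a prescribed schema.

package main

import (
    "encoding/json"
    "fmt"
    "log"
    "net/http"
)

type SearchCriteria struct {
    FirstName string   `json:"firstName"`
    Hobbies   []string `json:"hobbies"`
}

func searchUsers(w http.ResponseWriter, r *http.Request) {
    if r.Method != http.MethodPost {
        http.Error(w, "method not allowed", http.StatusMethodNotAllowed)
        return
    }
    var c SearchCriteria
    // The criteria travel in the request body, so URL length is no
    // longer a constraint.
    if err := json.NewDecoder(r.Body).Decode(&c); err != nil {
        http.Error(w, err.Error(), http.StatusBadRequest)
        return
    }
    // ... run the search and write the matching users ...
    fmt.Fprintf(w, "searching for %s with hobbies %v\n", c.FirstName, c.Hobbies)
}

func main() {
    http.HandleFunc("/search/users", searchUsers)
    log.Fatal(http.ListenAndServe(":8080", nil))
}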

REST POST vs GET if payload is huge

I understand the definition of GET and POST as below.
GET: List the members of the collection, complete with their member URIs for further navigation. For example, list all the cars for sale.
POST: Create a new entry in the collection where the ID is assigned automatically by the collection. The ID created is usually included as part of the data returned by this operation.
My API searches for some detail on the server with a huge JSON request payload. In that case, which verb should I use?
Also, can anyone please let me know the maximum number of characters that can be passed in a query string?
The main difference between a GET and a POST request is that in the former the parameters are encoded in the URL itself, whereas in the latter they are sent in the request body after the headers. In addition, for GET requests different browsers impose different limits on how big the URL can be. Most modern browsers will allow at least 200KB; Internet Explorer, however, seems to limit the URL size to 2KB.
That being said, if you have any suspicion that you will be passing in a large number of parameters which could exceed the limit imposed on GET requests by the receiving web server, you should switch to POST instead.
Here is a site which surveyed the GET behavior of most modern browsers, and it is worth a read.
Late to the party, but for anyone searching for a solution, this might help.
I came up with 2 different strategies to solve this problem. I'll create a proof-of-concept API and test which one suits me better. Here are the solutions I'm currently considering:
1. X-HTTP-Method-Override:
Basically, we tunnel a GET request through a POST/PUT method, with an added X-HTTP-Method-Override request header, so that the server routes the request to the GET handler. Simple to implement, and it works in one round trip (see the sketch after strategy 2).
2. Divide and Rule:
Split the call into two separate requests. First send a POST/PUT request with the whole payload; the server builds the necessary response, stores it in a cache/db along with a key/id to access the data, and replies with either a "Location" header or the key/id through which the stored response can be accessed.
Then send a GET request with the key/location returned by the previous POST request. A bit more complicated to implement, it needs two round trips, and it also requires a separate strategy to clean up the cached responses.
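A hedged sketch of strategy 1 in Go: the endpoint URL and payload are placeholders, and the server must be configured to honor the override header for this to work.

package main

import (
    "fmt"
    "io"
    "log"
    "net/http"
    "strings"
)

func main() {
    payload := strings.NewReader(`{"criteria": "...large search document..."}`)
    req, err := http.NewRequest(http.MethodPost, "http://localhost:8080/search", payload)
    if err != nil {
        log.Fatal(err)
    }
    req.Header.Set("Content-Type", "application/json")
    // Ask the server to treat this POST as a GET.
    req.Header.Set("X-HTTP-Method-Override", "GET")

    resp, err := http.DefaultClient.Do(req)
    if err != nil {
        log.Fatal(err)
    }
    defer resp.Body.Close()
    body, _ := io.ReadAll(resp.Body)
    fmt.Println(string(body))
}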
If this is going to be a typical situation for your API then a RESTful approach could be to POST query data to a buffer endpoint which returns a URI from which you can GET your results.
Who knows, maybe a cache of these will mitigate the need to send "huge" blobs of data around.
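A hedged server-side sketch of that buffer-endpoint pattern (and of strategy 2 above): POST the query document, receive a Location, then GET the results from it. The paths and the in-memory store are illustrative assumptions.

package main

import (
    "fmt"
    "io"
    "log"
    "net/http"
    "sync"
)

var (
    mu      sync.Mutex
    queries = map[int][]byte{} // stored query documents
    nextID  int
)

func postQuery(w http.ResponseWriter, r *http.Request) {
    body, err := io.ReadAll(r.Body)
    if err != nil {
        http.Error(w, err.Error(), http.StatusBadRequest)
        return
    }
    mu.Lock()
    nextID++
    id := nextID
    queries[id] = body
    mu.Unlock()
    // Tell the client where to GET the results.
    w.Header().Set("Location", fmt.Sprintf("/queries/%d/results", id))
    w.WriteHeader(http.StatusCreated)
}

func getResults(w http.ResponseWriter, r *http.Request) {
    // ... look up the stored query, run it, write the results ...
    fmt.Fprintln(w, "results for", r.URL.Path)
}

func main() {
    http.HandleFunc("/queries", postQuery)
    http.HandleFunc("/queries/", getResults)
    log.Fatal(http.ListenAndServe(":8080", nil))
}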
You can use both to get results from the server by passing some data.
In case of one or two parameters, such as an id, prefer GET. This is how I used it in AngularJS:
$http.get('/getEmployeeDataById?id=22');
In case it is a big JSON object, prefer POST:
var dataObj = {
    name: $scope.name,
    age: $scope.age,
    headoffice: $scope.headoffice
};
var res = $http.post('/getEmployeesList', dataObj);
As for the number of characters that can be passed in a query string, that is already answered above.
If you're getting data from the server, use GET. If you want to post something, use POST. Payload size is irrelevant. If you want to work with smaller payloads, you could implement pagination.

Recording GET requests to a table from REST API

I would like to record the various GET requests to my API in a table and use that table as part of the calculation of what to return for future GET requests.
Perhaps the easiest test example would be a GET function that returns the number of GET requests in the last hour.
REST conventions say that GET requests should only return data, not modify state.
Do I need to POST the request and then GET the results of the same request?
You can easily achieve that with nodejs.
You could save the requests to a JSON file or a database, for example, and have another service return this saved data.
Take a look at expressjs.
Best of luck
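The answer suggests Node/Express; for illustration, here is the same idea sketched in Go: middleware records a timestamp for every GET, and a GET endpoint reports how many arrived in the last hour. All names are illustrative, and a real service would persist the log to a table.

package main

import (
    "fmt"
    "log"
    "net/http"
    "sync"
    "time"
)

var (
    mu   sync.Mutex
    gets []time.Time // one entry per recorded GET request
)

// record is middleware that logs every GET request before
// delegating to the wrapped handler.
func record(next http.Handler) http.Handler {
    return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        if r.Method == http.MethodGet {
            mu.Lock()
            gets = append(gets, time.Now())
            mu.Unlock()
        }
        next.ServeHTTP(w, r)
    })
}

func countLastHour(w http.ResponseWriter, r *http.Request) {
    cutoff := time.Now().Add(-time.Hour)
    n := 0
    mu.Lock()
    for _, t := range gets {
        if t.After(cutoff) {
            n++
        }
    }
    mu.Unlock()
    fmt.Fprintf(w, "%d GET requests in the last hour\n", n)
}

func main() {
    mux := http.NewServeMux()
    mux.HandleFunc("/stats/gets", countLastHour)
    log.Fatal(http.ListenAndServe(":8080", record(mux)))
}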

RESTful way to create multiple items in one request

I am working on a small client-server program to collect orders. I want to do this in a "REST(ful)" way.
What I want to do is:
Collect all orderlines (product and quantity) and send the complete order to the server
At the moment I see two options to do this:
Send each orderline to the server: POST qty and product_id
I actually don't want to do this because I want to limit the number of requests to the server, so option 2:
Collect all the orderlines and send them to the server at once.
How should I implement option 2? A couple of ideas I have: wrap all order lines in a JSON object and send this to the server, or use an array to POST the order lines.
Is it a good idea or good practice to implement option 2, and if so, how should I do it?
I believe that another correct way to approach this would be to create another resource that represents your collection of resources.
For example, imagine that we have an endpoint like /api/sheep/{id} and we can POST to /api/sheep to create a sheep resource.
Now, if we want to support bulk creation, we should consider a new flock resource at /api/flock (or /api/<your-resource>-collection if you lack a better meaningful name). Remember that resources don't need to map one-to-one to your database or app models; that is a common misconception.
Resources are a higher-level representation, independent of your data. Operating on a resource can have significant side effects, like firing an alert to a user, updating other related data, initiating a long-lived process, etc. For example, we could map a file system or even the unix ps command as a REST API.
I think it is safe to assume that operating a resource may also mean to create several other entities as a side effect.
Although bulk operations (e.g. batch create) are essential in many systems, they are not formally addressed by the RESTful architecture style.
I found that POSTing a collection as you suggested basically works, but problems arise when you need to report failures in response to such a request. Such problems are worse when multiple failures occur for different causes or when the server doesn't support transactions.
My suggestion to you is that if there is no performance problem, for example when the service provider is on the LAN (not WAN) or the data is relatively small, it's worth it to send 100 POST requests to the server. Keep it simple, start with separate requests and if you have a performance problem try to optimize.
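In the spirit of that keep-it-simple advice, a tiny Go sketch sending one POST per order line; the URL and payloads are illustrative only.

package main

import (
    "log"
    "net/http"
    "strings"
)

func main() {
    lines := []string{
        `{"product_id": 1, "qty": 2}`,
        `{"product_id": 7, "qty": 1}`,
    }
    // One request per order line: simple, but pays the per-request
    // overhead the question is worried about.
    for _, l := range lines {
        resp, err := http.Post("http://localhost:8080/orderlines",
            "application/json", strings.NewReader(l))
        if err != nil {
            log.Fatal(err) // in practice: collect failures and retry
        }
        resp.Body.Close()
    }
}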
Facebook explains how to do this: https://developers.facebook.com/docs/graph-api/making-multiple-requests
Simple batched requests
The batch API takes in an array of logical HTTP requests represented as JSON arrays. Each request has a method (corresponding to the HTTP methods GET/PUT/POST/DELETE etc.), a relative_url (the portion of the URL after graph.facebook.com), an optional headers array (corresponding to HTTP headers) and an optional body (for POST and PUT requests). The Batch API returns an array of logical HTTP responses represented as JSON arrays. Each response has a status code, an optional headers array and an optional body (which is a JSON-encoded string).
Your idea seems valid to me. The implementation is a matter of your preference. You can use JSON or just parameters for this (an "order_lines[]" array) and do
POST /orders
Since you are going to create multiple resources in a single action (the order and its lines), it's vital to validate each and every one of them and save them only if all of them pass validation, i.e. you should do it in a transaction. A sketch of this follows below.
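A hedged sketch of that all-or-nothing approach in Go; the OrderLine shape (product_id, qty) follows the question, and the validation rule is a stand-in for real business checks.

package main

import (
    "encoding/json"
    "fmt"
    "log"
    "net/http"
)

type OrderLine struct {
    ProductID int `json:"product_id"`
    Qty       int `json:"qty"`
}

type Order struct {
    OrderLines []OrderLine `json:"order_lines"`
}

func createOrder(w http.ResponseWriter, r *http.Request) {
    var o Order
    if err := json.NewDecoder(r.Body).Decode(&o); err != nil {
        http.Error(w, "malformed order", http.StatusBadRequest)
        return
    }
    // Validate every line before saving anything.
    for i, l := range o.OrderLines {
        if l.ProductID <= 0 || l.Qty <= 0 {
            http.Error(w, fmt.Sprintf("invalid order line %d", i),
                http.StatusUnprocessableEntity)
            return
        }
    }
    // ... save the order and all its lines inside one DB transaction ...
    w.WriteHeader(http.StatusCreated)
}

func main() {
    http.HandleFunc("/orders", createOrder)
    log.Fatal(http.ListenAndServe(":8080", nil))
}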
I've actually been wrestling with this lately, and here's what I'm working towards.
If a POST that adds multiple resources succeeds, return a 200 OK (I was considering a 201, but the user ultimately doesn't land on a single resource that was created) along with a page that displays all resources that were added, in either read-only or editable fashion. For instance, a user is able to select and POST multiple images to a gallery using a form comprising only a single file input. If the POST request succeeds in its entirety, the user is presented with a set of forms, one for each image resource representation created, that allows them to specify more details about each (name, description, etc.).
In the event that one or more resources fail to be created, the POST handler aborts all processing and appends each individual error message to an array. Then a 409 Conflict is returned and the user is routed to an error page that presents the contents of the error array, as well as a way back to the form that was submitted.
I guess it's better to send separate requests within a single connection. Of course, your web server has to support it.
You won't want to send the HTTP headers for 100 order lines, and you don't want to generate any more requests than necessary.
Send the whole order in one JSON object to the server, to server/order or server/order/new.
Return something that points to server/order/order_id.
Also consider using PUT instead of POST to create the order, if the client can choose the order ID itself.