Why no Stub in REST?

EDIT: My original title was "Use of Stub in RPC"; I edited it to make clear that the question is broader than that.
I have started developing some SOAP-based services and I cannot understand the role of stubs. To quote Wikipedia:
The client and server use different address spaces, so conversion of parameters used in a function call have to be performed, otherwise the values of those parameters could not be used, because of pointers to the computer's memory pointing to different data on each machine. The client and server may also use different data representations even for simple parameters (e.g., big-endian versus little-endian for integers.) Stubs are used to perform the conversion of the parameters, so a Remote Function Call looks like a local function call for the remote computer.
This may be a dumb question, but I don't understand this in practical terms. I have done some socket programming in Java, but I don't remember any step for "conversion of parameters" when my TCP/UDP clients interacted with my server. (I assume raw server-client communication using TCP/UDP sockets does count as RPC.)
I have had some experience with RESTful service development, but I can't identify the stub analogue in REST either. Can someone please help me?

Stubs for calls over the network (be they SOAP, REST, CORBA, DCOM, JSON-RPC, or whatever) are just helper classes that give you a wrapper function that takes care of all the underlying details, such as:
Initializing your TCP/UDP/whatever transport layer
Finding the right address to call and doing DNS lookups if needed
Connecting to the network endpoint where the server should be
Handling errors if the server isn't listening
Checking that the server is what we're expecting it to be (security checks, versioning, etc)
Negotiating the encoding format
Encoding (or "marshalling") your request parameters in a format suitable for transmission on the network (CDR, NDR, JSON, XML, etc.)
Transmitting your encoded request parameters over the network, taking care of chunking or flow control as necessary
Receiving the response(s) from the server
Decoding (or "unmarshalling") the response details
Returning the responses to your original calling code (or throwing an error if something went wrong)
There's no such thing as "raw" TCP communication. If you are using it in a request/response model and infer any kind of meaning from the data sent across the TCP connection then you've encoded some form of "parameters" in there. You just happened to build yourself what stubs would normally have provided.
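To make that concrete, here is a minimal hand-rolled sketch in C (my own illustration, not generated code; the remote_add name and the wire format are made up for the example, and error handling and short reads/writes are omitted). A generated stub would hide exactly this kind of plumbing behind an ordinary-looking function call:
#include <stdint.h>
#include <string.h>
#include <unistd.h>
#include <arpa/inet.h>

/* Hypothetical hand-written "stub" for a remote add(a, b) call over an
   already-connected TCP socket. Wire format (chosen arbitrarily for this
   sketch): two 32-bit integers in network byte order, answered by one
   32-bit integer in network byte order. */
int32_t remote_add(int sock, int32_t a, int32_t b)
{
    unsigned char req[8];
    uint32_t na = htonl((uint32_t)a);      /* marshal: host -> network byte order */
    uint32_t nb = htonl((uint32_t)b);
    memcpy(req, &na, 4);
    memcpy(req + 4, &nb, 4);
    write(sock, req, sizeof req);          /* transmit the encoded parameters */

    unsigned char resp[4];
    read(sock, resp, sizeof resp);         /* receive the encoded result */
    uint32_t nres;
    memcpy(&nres, resp, 4);
    return (int32_t)ntohl(nres);           /* unmarshal: network -> host byte order */
}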
Stubs try to make your remote calls look just like local in-process calls, but honestly that's a really bad thing to do. They're not the same at all, and they should be considered differently by your application.

Related

Any ideas why we're getting Intermittent gRPC Unavailable/Unknown RpcExceptions (C++/C#)

We are using gRPC (version 1.37.1) for our inter-process communication between our C# process and C++ process. Both processes act as a server and client with the other and run on the same machine over localhost using the HTTP/2 transport. All of the calls use blocking synchronous unary calls rather than bi-directional streaming. Some average(ish) stats:
From C++->C#: 0-2 calls per second, 0-40 calls per minute
From C#->C++: 0-5 calls per second, 0-200 calls per minute
Intermittently, we were getting one of three issues:
C# client call to C++ server comes back with an RpcException, usually “HTTP2/Parse Error”, “Endpoint Read Failed”, or “Transport Closed”
C++ client call to C# server comes back with Unavailable or Unknown
C++ client WaitForConnected call to check the channel fails after 500ms
The topmost one is the most frequent and the one we have the most information about. Usually, what we'll see is that the client receives the RPC call and runs into an unknown frame type. Then the subchannel goes into shutdown and everything usually re-connects fine. We also generally see an embedded error like the following (note that we replaced all FILE instances with FUNCTION in our gRPC source):
win_read","file_line":307,"os_error":"The system detected an invalid pointer address in attempting to use a pointer argument in a call.\r\n","syscall":"WSARecv","wsa_error":10014}]},{"created":"#1622120588.494000000","description":"frame of size 262404 overflows local window of 65535","file":"grpc_core::chttp2::TransportFlowControl::ValidateRecvData","file_line":213}]}
What we've seen with the unknown frame type is that it parses the HEADERS, WINDOW_UPDATE, DATA, WINDOW_UPDATE, then gets a TCP: on_read without a corresponding READ, and then tries to parse again. It's during this parse that the parser appears to be at the wrong offset in the buffer, because the unknown frame type, incoming frame size and incoming stream_id all map to the middle of the RPC call that it just parsed.
The above is what we were encountering prior to a change to create a new channel for each RPC call. While we realize this is not great from a performance standpoint, we have seen increased stability since making the change. However, we still occasionally get RPC exceptions; now the most common is "Unknown"/"Stream Removed" rather than the ones listed above.
Any ideas on what might be going wrong are appreciated. We've turned on all gRPC tracing and have even added to it, as well as captured the issue in Wireshark, but so far we aren't getting a great indication of what's causing the transport to close. Are there any good tools to monitor the socket/port for failure?

Making a "parse" function RESTful

I have a RESTful service for managing, let's say, devices. It provides the usual functionality:
GET /devices
GET /devices/:id
POST /devices
PUT /devices/:id
DELETE /devices/:id
The device object might be defined as follows:
{
  id: 123,
  name: "Smoke detector",
  firmware: "21.0.103",
  battery: "ok",
  last_maintenance: "2017-07-07",
  last_alarm: "2014-02-01 12:11:10",
  // ...
}
There is an application that can read the device state via some device-specific reader. The application itself has no idea how to interpret the data it reads, but it can ask the server to do it. In our case, let's assume the data contains the following: battery status, firmware version, and last alarm.
If I were implementing a regular RPC service, I would create a function with "parse" semantics: it accepts the raw data and returns an updated device object (or, alternatively, only the part of the device object containing the parsed state). But I doubt I can find a good REST solution for such a function. Right now I am doing it via PATCH, but I personally do not like that solution, so I will not present it here. I believe there should be a good solution for this class of problems.
So the question: how should I fit my "parse" logic into the REST paradigm?
POST it to a /parsed-device-state URL, which will return a 201 Created, a Location header pointing to the place where you can get the parsed data from, and if you like, return the parsed data in the 201 as well (along with an additional Content-Location header with the same value as the Location header). Or if it takes a long time to parse, use 202 Accepted, and the same Location header. The caller can then poll that provided location until the results are ready.
So the question: how should I fit my "parse" logic into the REST paradigm?
How would you fit your parse logic into a web site?
You'd probably start with a bookmark. GET $BOOKMARK would return a representation of a form. The form might include an input control, like a text area element, that would allow the consumer to input a representation, or it might include an input control that allows the consumer to attach a file. The consumer would submit the form, and the agent would create a request from the information in the form. That would probably be a POST (you aren't likely to include an arbitrary file's representation in the query string) to whatever resource was specified as the action of the form. The server's response would provide a representation of the result.
If parsing were a particularly slow process, then the response instead might be a representation including links to resources that could be used to track the progress of the parsing. The whole protocol in this case looks a lot like putting work on a queue, and then polling for updates.
It's the right answer to a problem that is not a great fit for HTTP:
The REST interface is designed to be efficient for large-grain hypermedia data transfer, optimizing for the common case of the Web, but resulting in an interface that is not optimal for other forms of architectural interaction.
To some degree, what you are trying to do with your function is transfer compute, which may be why it feels like you are trimming corners off of the peg to fit it in the hole.
An alternative approach, which is a better fit for HTTP, is to think about transferring a representation of the behavior. The API client gets a function that understands how to parse apples into oranges, and then runs that code on the information that it keeps locally. Think JavaScript: we get a representation of the behavior from the server (which can embed into that representation information the server has that the client will need), and then execute the result locally. Metadata in the headers describes the lifetime of the representation, in a way that is understood by any standards-compliant cache.

SSL vs BIO object in OpenSSL [duplicate]

I've been reading a lot about OpenSSL, specifically the TLS and DTLS APIs. Most of it makes sense, it's a pretty intuitive API once you understand it. One thing has really got me scratching my head though...
When/why would I use BIOs?
For example, this wiki page demonstrates setting up a barebones TLS server. There isn't even a mention of BIOs anywhere in the example.
Now this page uses BIOs exclusively, never using the read and write functions of the SSL struct. Granted, it's from 2013, but it's not the only one that uses BIOs.
To make it even more confusing this man page suggests that the SSL struct has an "underlying BIO" without ever needing to set it explicitly.
So why would I use BIOs if I can get away with using SSL_read() and SSL_write()? What are the advantages? Why do some examples use BIOs and others don't? What Is the Airspeed Velocity of an Unladen Swallow?
BIOs are always there, but they might be hidden by the simpler interface. Directly using the BIO interface is useful if you want more control - with more effort. If you just want to use TLS on a TCP socket, then the simple interface is usually sufficient. If you instead want to use TLS over your own underlying transport layer, or if you want to have more control over how it interacts with the transport layer, then you need BIOs.
An example of such a use case is this proposal, where TLS is tunneled as JSON inside HTTPS, i.e. the TLS frames are encoded in JSON, which is then transferred using POST requests and responses. This can be achieved by handling the TLS with memory BIOs, which are then encoded to and decoded from JSON.
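To illustrate the difference, here is a minimal sketch in C (my own illustration, not code from the linked proposal; the function names are made up and error handling is omitted). The first function is the simple case, where SSL_set_fd() quietly creates a socket-BIO for an already-connected TCP socket; the second wires the SSL object to two memory BIOs, so the encrypted records can be carried over any transport you like, such as the JSON-over-POST tunnel mentioned above:
#include <openssl/ssl.h>
#include <openssl/bio.h>

/* Simple case: TLS directly over an already-connected TCP socket. */
static SSL *tls_over_socket(SSL_CTX *ctx, int fd)
{
    SSL *ssl = SSL_new(ctx);
    SSL_set_fd(ssl, fd);            /* creates the underlying socket-BIO for you */
    if (SSL_connect(ssl) != 1) {    /* client-side handshake */
        SSL_free(ssl);
        return NULL;
    }
    return ssl;                     /* now just use SSL_read()/SSL_write() */
}

/* Custom-transport case: the SSL object talks to memory BIOs instead of a socket. */
static SSL *tls_over_memory_bios(SSL_CTX *ctx, BIO **rbio_out, BIO **wbio_out)
{
    SSL *ssl = SSL_new(ctx);
    BIO *rbio = BIO_new(BIO_s_mem());   /* you BIO_write() the peer's bytes into this */
    BIO *wbio = BIO_new(BIO_s_mem());   /* OpenSSL puts outgoing TLS records in here */
    SSL_set_bio(ssl, rbio, wbio);       /* the SSL object now owns both BIOs */
    SSL_set_connect_state(ssl);         /* act as a TLS client */
    *rbio_out = rbio;
    *wbio_out = wbio;
    return ssl;
    /* Typical pump: after SSL_write() or handshake progress, BIO_read() the wbio
       and ship those bytes over your own transport (e.g. wrap them in JSON and
       POST them); when encrypted bytes arrive from the peer, BIO_write() them
       into the rbio and call SSL_read() to get the plaintext. */
}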
First, your Q is not very clear. SSL is (a typedef for) a C struct type, and you can't use the dot operator on a struct type in C, only an instance. Even assuming you meant 'an instance of SSL', as people sometimes do, in older versions (through 1.0.2) it did not have members read and write, and in 1.1.0 up it is opaque -- you don't even know what its members are.
Second, there are two different levels of BIO usage applicable to the SSL library. The SSL/TLS connection (represented by the SSL object, plus some related things linked to it like the session) always uses two BIOs to respectively send and receive protocol data -- including both protocol data that contains the application data you send with SSL_write and receive with SSL_read, and the SSL/TLS handshake that is handled within the library. Much as Steffen describes, these normally are both set to a socket-BIO that sends to and receives from the appropriate remote host process, but they can instead be set to BIOs that do something else in-between, or even instead. (This normal case is automatically created by SSL_set_{,r,w}fd which it should be noted on Windows actually takes a socket handle -- but not any other file handle; only on Unix are socket descriptors semi-interchangeable with file descriptors.)
Separately, the SSL/TLS connection itself can be 'wrapped' in an ssl-BIO. This allows an application to handle an SSL/TLS connection using mostly the same API calls as a plain TCP connection (using a socket-BIO) or a local file, as well as the provided 'filter' BIOs like a digest (md) BIO or a base64 encoding/decoding BIO, and any additional BIOs you add. This is the case for the IBM webpage you linked (which is for a client not a server BTW). This is similar to the Unix 'everything is (mostly) a file' philosophy, where for example the utility program grep, by simply calling read on fd 0, can search data from a file, the terminal, a pipe from another program, or (if run under inetd or similar) from a remote system using TCP (but not SSL/TLS, because that isn't in the OS). I haven't encountered many cases where it is particularly beneficial to be able to easily interchange SSL/TLS data with some other type of source/sink, but OpenSSL does provide the ability.
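For that second level, here is a rough sketch (mine, not the IBM page's code; the hostname is a placeholder and error handling is omitted) of wrapping the TLS connection in an ssl-BIO, so the application drives it with the same BIO_puts()/BIO_read() calls it would use on a plain socket-BIO or file-BIO:
#include <openssl/ssl.h>
#include <openssl/bio.h>

static void fetch_over_ssl_bio(SSL_CTX *ctx)
{
    BIO *sbio = BIO_new_ssl_connect(ctx);           /* ssl-BIO chained to a connect-BIO */
    BIO_set_conn_hostname(sbio, "example.com:443");

    SSL *ssl = NULL;
    BIO_get_ssl(sbio, &ssl);                        /* reach the SSL object if needed */
    SSL_set_mode(ssl, SSL_MODE_AUTO_RETRY);

    if (BIO_do_connect(sbio) <= 0) {                /* TCP connect + TLS handshake */
        BIO_free_all(sbio);
        return;
    }

    BIO_puts(sbio, "GET / HTTP/1.0\r\nHost: example.com\r\n\r\n");

    char buf[4096];
    int n;
    while ((n = BIO_read(sbio, buf, sizeof buf)) > 0) {
        /* consume the response; same calls as for any other kind of BIO */
    }
    BIO_free_all(sbio);
}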

Lua sockets - Asynchronous Events

In the current LuaSocket implementation, I see that we have to install a timer that calls back periodically so that we can check, via a non-blocking API, whether we have received anything.
This is all well and good; however, in the UDP case, if the sender has a lot of data to send, do we risk losing it? Say another device sends a 2MB photo via UDP and we check socket receive every 100 msec. At 2 MB/s, the underlying system must buffer about 200 KB before our call queries the underlying TCP stack.
Is there a way to get an event fired when we receive the data on the particular socket instead of the polling we have to do now?
There are various ways of handling this issue; which one you select* depends on how much work you want to do.
But first, you should clarify (to yourself) whether you are dealing with UDP or TCP; there is no "underlying TCP stack" for UDP sockets. Also, UDP is the wrong protocol to use for sending whole data such as a text or a photo; it is an unreliable protocol, so you aren't guaranteed to receive every packet, unless you're using a managed socket library (such as ENet).
Lua51/LuaJIT + LuaSocket
Polling is the only method.
Blocking: call socket.select with no time argument and wait for the socket to be readable.
Non-blocking: call socket.select with a timeout argument of 0, and use sock:settimeout(0) on the socket you're reading from.
Then simply call these repeatedly.
I would suggest using a coroutine scheduler for the non-blocking version, to allow other parts of the program to continue executing without causing too much delay.
Lua51/LuaJIT + LuaSocket + Lua Lanes (Recommended)
Same as the above method, but the socket exists in another lane (a lightweight Lua state in another thread) made using Lua Lanes (latest source). This allows you to instantly read the data from the socket and into a buffer. Then, you use a linda to send the data to the main thread for processing.
This is probably the best solution to your problem.
I've made a simple example of this, available here. It relies on Lua Lanes 3.4.0 (GitHub repo) and a patched LuaSocket 2.0.2 (source, patch, blog post re' patch)
The results are promising, though you should definitely refactor my example code if you derive from it.
LuaJIT + OS-specific sockets
If you're a little masochistic, you can try implementing a socket library from scratch. LuaJIT's FFI library makes this possible from pure Lua. Lua Lanes would be useful for this as well.
For Windows, I suggest taking a look at William Adam's blog. He's had some very interesting adventures with LuaJIT and Windows development. As for Linux and the rest, look at tutorials for C or the source of LuaSocket and translate them to LuaJIT FFI operations.
(LuaJIT supports callbacks if the API requires them; however, there is a significant performance cost compared to polling from Lua to C.)
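For reference, here is a rough C-level sketch (my own illustration for POSIX-ish systems, not taken from any of the linked tutorials) of the kind of loop you would be translating into LuaJIT FFI calls: wait for readability with select(), then drain the UDP socket:
#include <sys/types.h>
#include <sys/select.h>
#include <sys/socket.h>
#include <unistd.h>

static void udp_poll_loop(int fd)
{
    char buf[65536];
    for (;;) {
        fd_set rfds;
        FD_ZERO(&rfds);
        FD_SET(fd, &rfds);
        struct timeval tv = { 0, 100 * 1000 };   /* 100 ms, matching the question's polling rate */
        if (select(fd + 1, &rfds, NULL, NULL, &tv) > 0 && FD_ISSET(fd, &rfds)) {
            ssize_t n;
            /* drain everything queued, not just one datagram (MSG_DONTWAIT is a Linux/BSD flag) */
            while ((n = recvfrom(fd, buf, sizeof buf, MSG_DONTWAIT, NULL, NULL)) > 0) {
                /* hand the datagram over to Lua here */
            }
        }
        /* do other work between polls */
    }
}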
LuaJIT + ENet
ENet is a great library. It provides the perfect mix between TCP and UDP: reliable when desired, unreliable otherwise. It also abstracts operating system specific details, much like LuaSocket does. You can use the Lua API to bind it, or directly access it via LuaJIT's FFI (recommended).
* Pun unintentional.
I use lua-ev https://github.com/brimworks/lua-ev for all IO-multiplexing stuff.
It is very easy to use and fits into Lua (and its functions) like a charm. It is select/poll/epoll or kqueue based, and it performs very well too.
local ev = require'ev'
local loop = ev.Loop.default
local udp_sock -- your udp socket instance
udp_sock:settimeout(0) -- make non blocking
local udp_receive_io = ev.IO.new(function(io,loop)
    local chunk,err = udp_sock:receive(4096)
    if chunk and not err then
      -- process data
    end
  end,udp_sock:getfd(),ev.READ)
udp_receive_io:start(loop)
loop:loop() -- blocks forever
In my opinion Lua+luasocket+lua-ev is just a dream team for building efficient and robust networking applications (for embedded devices/environments). There are more powerful tools out there! But if your resources are limited, Lua is a good choice!
Lua is inherently single-threaded; there is no such thing as an "event". There is no way to interrupt executing Lua code. So while you could rig something up that looked like an event, you'd only ever get one if you called a function that polled which events were available.
Generally, if you're trying to use Lua for this kind of low-level work, you're using the wrong tool. You should be using C or something to access this sort of data, then pass it along to Lua when it's ready.
You are probably using a non-blocking select() to "poll" sockets for any new data available. LuaSocket doesn't provide any other interface to see if there is new data available (as far as I know), but if you are concerned that this takes too much time when you are doing it 10 times per second, consider writing a simplified version that only checks the one socket you need and avoids creating and throwing away Lua tables. If that's not an option, consider passing nil to select() instead of {} for those lists you don't need to read, and passing static tables instead of temporary ones:
local rset = {socket}
... later
...select(rset, nil, 0)
instead of
...select({socket}, {}, 0)

How to specify that a client connect only from a range of local ports to a server in RPC language

I have a legacy source file, with the .x extension, which describes the protocol to be used for RPC and which is fed to rpcgen to generate the necessary stub files for the protocol. However, currently in the generated stub files the RPC client is free to connect from (or listen on) any port, because in the generated file I see the following:
transp = svctcp_create(RPC_ANYSOCK, 0, 0);
I am a newbie to RPC and related things but am trying to modify it anyway. Since I know that the server listens on a particular port, I deduced that the above line is what causes the client to connect from an arbitrary port. Now I roughly know how to fix it: I would have to try opening sockets whose ports lie in the given range until one succeeds, and pass that socket as the first argument to svctcp_create.
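Roughly, something like this sketch is what I mean (my own illustration, not rpcgen output; the port-range bounds are placeholders and error handling is minimal):
#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <string.h>
#include <unistd.h>
#include <rpc/rpc.h>

/* Try to bind a TCP socket to each port in [lo, hi] and hand the first one
   that succeeds to svctcp_create() instead of RPC_ANYSOCK. */
static SVCXPRT *create_transport_in_range(unsigned short lo, unsigned short hi)
{
    for (int port = lo; port <= hi; port++) {
        int sock = socket(AF_INET, SOCK_STREAM, 0);
        if (sock < 0)
            return NULL;

        struct sockaddr_in addr;
        memset(&addr, 0, sizeof addr);
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port = htons((unsigned short)port);

        if (bind(sock, (struct sockaddr *)&addr, sizeof addr) == 0)
            return svctcp_create(sock, 0, 0);   /* same call as in the generated stub */

        close(sock);                            /* port already in use; try the next one */
    }
    return NULL;
}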
However, this would have to be done in the rpcgen-generated files, which does not make me very comfortable. I would like to modify the ".x" file so as to do it once and for all. Can anybody help me with this?
Thanks,
Sunil
Why do you need to restrict the local ports to a range? There is no support for this at any layer of the TCP networking APIs. Client port ranges are sometimes specified as firewall rules by netadmins who are unaware of the implementation infeasibility, and who think they are adding security, about which they are also mistaken. What's the reason in your case?