I'm currently writing a server that needs to talk to a webserver. I'm not sure what I should use to bridge the gap between my server and the webserver, SCGI or the uWSGI protocol. At some point I'd swear I read somewhere in the uwsgi documentation that the uWSGI protocol descends from SCGI, but I can't find the line any more.
How do they differ?
They both serialize a simple list of key-value items. SCGI uses a text format for it, while uwsgi (lowercase for the protocol) uses a binary encoding where each string is prefixed with a 16-bit size: http://uwsgi-docs.readthedocs.org/en/latest/Protocol.html
To quote just one point from the docs:
The uWSGI project aims at developing a full stack for building hosting services.
The uwsgi (lowercase!) protocol is the native protocol used by the uWSGI server.
And an answer from the Frequently Asked Questions (FAQ):
The uwsgi (all lowercase) protocol is derived from SCGI but with binary string length representations and a 4-byte header that includes the size of the var block (16-bit length) and a couple of general-purpose bytes.
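To make the difference concrete, here is a minimal C# sketch of that var-block encoding as the docs describe it (SCGI would instead write the same pairs as plain text). It assumes little-endian byte order, which is what the protocol expects and what BitConverter produces on common platforms; the class name is mine.

using System;
using System.Collections.Generic;
using System.IO;
using System.Text;

static class UwsgiEncoder
{
    // Builds one uwsgi packet: a 4-byte header (modifier1, 16-bit
    // little-endian size of the var block, modifier2) followed by
    // <16-bit len><key><16-bit len><value> for each variable.
    public static byte[] Encode(IDictionary<string, string> vars,
                                byte modifier1 = 0, byte modifier2 = 0)
    {
        using var body = new MemoryStream();
        foreach (var kv in vars)
        {
            WriteString(body, kv.Key);
            WriteString(body, kv.Value);
        }

        using var packet = new MemoryStream();
        packet.WriteByte(modifier1);
        packet.Write(BitConverter.GetBytes((ushort)body.Length), 0, 2); // var block size
        packet.WriteByte(modifier2);
        body.WriteTo(packet);
        return packet.ToArray();
    }

    static void WriteString(Stream s, string value)
    {
        byte[] bytes = Encoding.ASCII.GetBytes(value);
        s.Write(BitConverter.GetBytes((ushort)bytes.Length), 0, 2); // 16-bit length prefix
        s.Write(bytes, 0, bytes.Length);
    }
}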
I’m seeing three different “error strings” for the EAFNOSUPPORT / WSAEAFNOSUPPORT errno:
POSIX:
The implementation does not support the specified address family.
BSD (errno.h, _sys_errlist[]):
Address family not supported by protocol family
Windows®/Winsock2:
An address incompatible with the requested protocol was used
While the semantics of the latter two are pretty much identical, the former differs quite a bit (it does not reference a protocol family; rather, it states that the given address family is not supported in a particular place).
I’m assuming both interpretations are valid, especially given EPFNOSUPPORT (“Protocol family not supported”) is marked as nōn-POSIX in the BSD headers, but where does this difference come from? Incidentally, my back-of-the-head/historical(FSVO) understanding of this errno code matches the POSIX semantics more than the BSD/Winsock semantics…
I can imagine that the POSIX semantics come from older BSD sockets, and that EPFNOSUPPORT was added later, after which EAFNOSUPPORT was redesignated in BSD sockets (and Winsock just took that over); or that POSIX is deliberately written in a different way.
Can anyone shed light on this, perhaps explain the histories (code heritage, etc)?
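(For anyone who wants to watch the platform wording surface in practice, here is a hedged C# sketch; it relies on SocketException.Message wrapping the native error string, and on AddressFamily.Banyan being unsupported on the machine it runs on.)

using System;
using System.Net.Sockets;

class ErrnoDemo
{
    static void Main()
    {
        try
        {
            // Banyan VINES should be unsupported on any modern stack,
            // so this should fail with EAFNOSUPPORT / WSAEAFNOSUPPORT.
            using var s = new Socket(AddressFamily.Banyan, SocketType.Stream, ProtocolType.Tcp);
        }
        catch (SocketException e)
        {
            // The text comes from the platform's own error table, so it
            // differs between POSIX-style systems, BSD, and Winsock.
            Console.WriteLine($"{e.SocketErrorCode}: {e.Message}");
        }
    }
}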
In my API, I need to provide a file/directory resource (call it a thing) in different formats including a tar.gz and as a squashfs file. I have been looking at the "official" mime types and it looks like application/x-compressed-tar is appropriate for a thing.tar.gz file.
But what about if thing is created using mksquashfs? I am not sure if the vendor-specific mime types are the answer, because I don't think there is a vendor to specify.
Also, the output of mksquashfs is usually a compressed file (the default is gzip). So I could use application/x-gzip, but since there are multiple options for compression, I don't want to have to know which one was used: the API is focused on serving up a previously created squashfs thing, not creating a squashfs with a specific compression as requested by the user.
Is it okay to just make your own mime type?
application/x-squashfs?
application/x-sqsh?
application/vnd.???.squashfs?
application/vnd.???.sqsh+gzip?
The vnd. namespace is reserved for registered vendor types, so don't use that (or go through the long and arduous process of registering this type with IANA before you can use it). In theory, registering it could be useful and you don't have to be a "vendor" really (though I suppose Linux or the SquashFS community could be named as the responsible governing entity).
The x- prefix is now also discouraged (and subsumed by the x. prefix) and never really provided good semantics anyway (either you have an "unofficial standard" which nobody specifies but many people know of, or you have an unknown undocumented thing which doesn't help specify things beyond application/octet-stream at all).
If you want to go by the book, and don't want to go through defining a MIME type via IANA (though they define a lightweight process to encourage this), the tried and true application/octet-stream is still good for random byte streams.
If you do want to go for a lightweight registration, something like application/filesystem with suffixes like +ext2, +ext3, +dmg (for Mac images), +ntfs etc would be my proposal, but this is without much thinking about it.
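Whichever name you pick, attaching it to the response is the easy part. A minimal C# sketch using HttpListener; the endpoint, filename, and the application/octet-stream placeholder are all stand-ins for your own choices:

using System;
using System.IO;
using System.Net;

class SquashfsServer
{
    static void Main()
    {
        var listener = new HttpListener();
        listener.Prefixes.Add("http://localhost:8080/"); // hypothetical endpoint
        listener.Start();

        HttpListenerContext ctx = listener.GetContext();
        HttpListenerResponse resp = ctx.Response;

        // Substitute whatever media type you settle on for your squashfs things.
        resp.ContentType = "application/octet-stream";
        resp.AddHeader("Content-Disposition", "attachment; filename=\"thing.sqsh\"");

        byte[] image = File.ReadAllBytes("thing.sqsh"); // previously created squashfs
        resp.ContentLength64 = image.Length;
        resp.OutputStream.Write(image, 0, image.Length);
        resp.Close();
        listener.Stop();
    }
}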
I am making a project, and for it I have to mention a protocol name. I read about the wire protocols used by Microsoft and Oracle; the thing is, I need a name for the wire protocol I can use with Postgres.
I had the same question. I would suggest the best "name" for the PostgreSQL wire protocol would be libpq, which is, of course, really just the name of the library used. I recall seeing other line protocols being called by their library APIs.
Since I have not yet completely understood the correct usage of port and interface symbols in component diagrams, a few questions:
I.
Imagine a piece of software which wants to use a very special remote logger service over the network (TCP). The messages may be some XML, so the logger exposes an interface which specifies things like the handshake, XML structure, XML elements etc., so that the logger will accept a message.
a) Am I right that this interface may be called "ILoggerProtocol", the port may be named after the service it provides ("logging")?
b) So the component in my application implements that interface so that it generates a compliant message for the server?
c) Now an interesting thing: for the communication, there is an additional library "Networking" which provides simple TCP stuff, so it does the TCP connect, sends messages, handles errors etc. Do I need this component when I only want to emphasise the path from the generated messages to the server? Is MY port then the TCP interface?
d) And when I want to draw the complete picture, how can I add the Networking component to the diagram correctly, pointing out that ILoggerProtocol is used AND that it goes over TCP through the Networking component?
II. Ports inside my application: now there are two libraries where one just uses the other; basically, in C/C++, it would #include the other's header file:
e) Is that the correct diagram?
f) Do I need ports here? If yes, what would they actually represent in reality? What names would you give them?
g) Or are the lollipops just sufficient without the port symbols?
III. concerning lollipops:
h) are those two notations basically the same and interchangeable? I have found the name "assembly" for the combined version, so maybe there is a difference...
A short answer first (I'll try to get to the rest later): a port is an embedded element which allows grouping a number of interfaces. The best example I can come up with is a complex socket (the port) which bundles things like power supply, communication lines, you name it (the interfaces).
Now for the details.
a) Yes, that's correct. You would usually use a <<delegate>> stereotyped association to show that the outer interface is used(/realized if it's a lollipop) somewhere inside.
b) No. This is a required interface. It is used inside but implemented outside (where the lollipop resides).
c&d) I'd use a <<use>> from MyApplication towards Networking to show that. Normally you would not go into too much detail (unless it is essential). Obvious things like TCP are clearly pictured with the <<use>>
e) You can(/should) use <<include>> or <<use>> instead.
f&g) see the general answer above
h) Yes. The first is a flexible notation of the second.
P.S. Just looking over this once again and I notice that in the top picture the inner directed association should be pointing the other direction and be stereotyped <<delegate>>.
I'm doing some simple socket programming in C#. I am attempting to authenticate a user by reading the username and password from the client console, sending the credentials to the server, and returning the authentication status from the server. Basic stuff. My question is, how do I ensure that the data is in a format that both the server and client expect?
For example, here's how I read the user credentials on the client:
Console.WriteLine("Enter username: ");
string username = Console.ReadLine();
Console.WriteLine("Enter plassword: ");
string password = Console.ReadLine();
StreamWriter clientSocketWriter = new StreamWriter(new NetworkStream(clientSocket));
clientSocketWriter.WriteLine(username + ":" + password);
clientSocketWriter.Flush();
Here I am delimiting the username and password with a colon (or some other symbol) on the client side. On the server I simply split the string using ":" as the token. This works, but it seems sort of... unsafe. Shouldn't there be some sort of delimiter token that is shared between client and server so I don't have to just hard-code it in like this?
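For example, the best I've come up with so far is to split only on the first colon, so at least a password containing ':' survives (serverSocketReader being the StreamReader on the server's end of the connection):

// Server side: split on the FIRST colon only, so a password
// containing ':' still round-trips intact.
string line = serverSocketReader.ReadLine();     // e.g. "alice:pa:ss"
string[] parts = line.Split(new[] { ':' }, 2);
string username = parts[0];
string password = parts.Length > 1 ? parts[1] : "";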
It's a similar matter for the server response. If the authentication is successful, how do I send a response back in a format that the client expects? Would I simply send a "SUCCESS" or "AuthSuccessful=True/False" string? How would I ensure the client knows what format the server sends data in (other than just hard-coding it into the client)?
I guess what I am asking is how to design and implement an application-level protocol. I realize it is sort of unique to your application, but what is the typical approach that programmers generally use? Furthermore, how do you keep the format consistent? I would really appreciate some links to articles on this matter as well.
Rather than reinvent the wheel, why not code up an XML schema and send and receive XML "files"?
Your messages will certainly be longer, but with gigabit Ethernet and ADSL this hardly matters these days. What you get is a protocol where all the issues of character sets and complex data structures have already been solved, plus an embarrassing choice of tools and libraries to support and ease your development.
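A sketch of that approach (the LoginRequest type and its members are invented for the example); XmlSerializer handles the escaping, so delimiter collisions simply stop being your problem:

using System;
using System.IO;
using System.Xml.Serialization;

public class LoginRequest               // hypothetical message type
{
    public string Username { get; set; }
    public string Password { get; set; }
}

class XmlProtocolExample
{
    static void Main()
    {
        var serializer = new XmlSerializer(typeof(LoginRequest));
        var request = new LoginRequest { Username = "alice", Password = "s:cr:et" };

        // Serialize to XML text; special characters are escaped for us.
        var writer = new StringWriter();
        serializer.Serialize(writer, request);
        string xml = writer.ToString();

        // The receiving side deserializes against the same schema.
        var received = (LoginRequest)serializer.Deserialize(new StringReader(xml));
        Console.WriteLine(received.Username);
    }
}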
I highly recommend using plain ASCII text if at all possible.
It makes bugs much easier to detect and fix.
Some common, machine-readable ASCII text protocols (roughly in order of complexity):
netstring
Tab Delimited Tables
Comma Separated Values (CSV) (strings that include both commas and double-quotes are a little awkward to handle correctly)
INI file format
property list format
JSON
YAML Ain't Markup Language
XML
The world is already complicated enough, so I try to use the least-complex protocol that would work.
Sending two user-generated strings from one machine to another -- netstrings is the simplest protocol on my list that would work for that, so I would pick netstrings.
(netstrings will work fine even if the user types in a few colons or semicolons or double-quotes or tabs -- unlike other formats that choke on certain commonly-typed characters).
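A sketch of netstrings in C# (my own helper class; it assumes ASCII payloads, where byte count and character count coincide). Each string travels as length, colon, payload, comma -- so "alice" becomes "5:alice," and user-typed delimiters are just payload:

using System;
using System.IO;
using System.Text;

static class Netstring
{
    // "hello" => "5:hello,"
    public static string Encode(string payload) =>
        $"{Encoding.ASCII.GetByteCount(payload)}:{payload},";

    // Reads exactly one netstring from the reader.
    public static string Decode(TextReader reader)
    {
        var digits = new StringBuilder();
        int c;
        while ((c = reader.Read()) != ':')
        {
            if (c < '0' || c > '9') throw new FormatException("bad length prefix");
            digits.Append((char)c);
        }
        int length = int.Parse(digits.ToString());

        var buffer = new char[length];
        reader.ReadBlock(buffer, 0, length);
        if (reader.Read() != ',') throw new FormatException("missing trailing comma");
        return new string(buffer);
    }
}

The client would send Netstring.Encode(username) + Netstring.Encode(password), and the server would simply call Decode twice.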
I agree that it would be nice if there existed some way to describe a protocol in a single shared file such that both the server and the client could somehow "#include" or otherwise use that protocol.
Then when I fix a bug in the protocol, I could fix it in one place, recompile both the server and the client, and then things would Just Work -- rather than digging through a bunch of hard-wired constants on both sides.
Kind of like the way well-written C code and C++ code uses function prototypes in header files so that the code that calls the function on one side, and the function itself on the other side, can pass parameters in a way that both sides expect.
Tell me if you discover anything like that, OK?
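In the meantime, the closest C# approximation I know of is a shared class library that both the client and server projects reference, so the protocol constants live in one place (a sketch; the names are invented):

// Protocol.cs -- lives in a class library referenced by BOTH the
// client and the server project, so a fix here rebuilds both sides.
namespace ChatProtocol
{
    public static class Protocol
    {
        public const char FieldDelimiter = ':';
        public const string AuthSuccess = "AUTH_OK";
        public const string AuthFailure = "AUTH_FAIL";
    }
}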
Basically, you're looking for a standard. "The great thing about standards is that there are so many to choose from". Pick one and go with it, it's a lot easier than rolling your own. For this particular situation, look into Apache "basic" authentication, which joins the username and password and base64-encodes it, as one possibility.
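For reference, the "basic" scheme is just base64 over "username:password" (an encoding, not encryption, so you'd still want TLS underneath). In C#:

using System;
using System.Text;

class BasicAuthExample
{
    static void Main()
    {
        string credentials = "alice" + ":" + "secret";
        string token = Convert.ToBase64String(Encoding.UTF8.GetBytes(credentials));
        Console.WriteLine("Authorization: Basic " + token); // Basic YWxpY2U6c2VjcmV0

        // The server reverses it the same way.
        string decoded = Encoding.UTF8.GetString(Convert.FromBase64String(token));
        Console.WriteLine(decoded); // alice:secret
    }
}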
I have worked with two main approaches.
The first is an ASCII-based protocol.
An ASCII-based protocol is usually a set of text commands that terminate on some defined delimiter (like a carriage return or a semicolon), or structured text (like XML or JSON). If yours is a command-based protocol where there is not a lot of data being transferred back and forth, this is the best way to go.
FIND\r
DO_SOMETHING\r
It has the advantage of being easy to read and understand because it is text based.
The disadvantage (which may or may not be a problem) is that there can be an unknown number of bytes being transferred back and forth between the client and the server. So if you need to know exactly how many bytes are being sent and received, this may not be the type of protocol you want.
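A sketch of reading that style of protocol in C# (networkStream and Dispatch are stand-ins for your own plumbing):

// Reads CR-terminated commands off the wire one at a time.
var reader = new StreamReader(networkStream, Encoding.ASCII);
var command = new StringBuilder();
int c;
while ((c = reader.Read()) != -1)
{
    if (c == '\r')                      // delimiter marks the end of one command
    {
        Dispatch(command.ToString());   // e.g. "FIND", "DO_SOMETHING"
        command.Clear();
    }
    else
    {
        command.Append((char)c);
    }
}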
The other type of protocol is binary-based, with message sizes sent in a fixed header. This has the advantage of knowing exactly how much data the client is expected to receive. It can also potentially save you bandwidth, depending on what you're sending across (although ASCII can save you space too; it depends on your application's requirements). The disadvantage of a binary-based protocol is that it is difficult to understand just by looking at it, requiring you to constantly refer to the documentation.
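A sketch of that framing in C# (my own helper methods, using a 4-byte big-endian length header; adjust the header layout to taste):

using System;
using System.IO;

static class Framing
{
    // Sender: 4-byte big-endian length header, then the payload.
    public static void SendMessage(Stream stream, byte[] payload)
    {
        byte[] header = BitConverter.GetBytes(payload.Length);
        if (BitConverter.IsLittleEndian) Array.Reverse(header); // network byte order
        stream.Write(header, 0, 4);
        stream.Write(payload, 0, payload.Length);
    }

    // Receiver: read exactly 4 header bytes, then exactly that many payload bytes.
    public static byte[] ReceiveMessage(Stream stream)
    {
        byte[] header = ReadExactly(stream, 4);
        if (BitConverter.IsLittleEndian) Array.Reverse(header);
        int length = BitConverter.ToInt32(header, 0);
        return ReadExactly(stream, length);
    }

    static byte[] ReadExactly(Stream stream, int count)
    {
        byte[] buffer = new byte[count];
        int offset = 0;
        while (offset < count)
        {
            int read = stream.Read(buffer, offset, count - offset);
            if (read == 0) throw new EndOfStreamException("peer closed mid-message");
            offset += read;
        }
        return buffer;
    }
}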
In practice, I tend to mix both strategies in protocols I have defined based on my application's requirements.