I want to make my API calls as fast as possible using Swift. I know that using Alamofire helps with speed, but is there anything faster than Alamofire? I am creating trading software, so every millisecond makes a huge difference. These are POST and GET requests I am making to execute the trades. Right now there is a lot of variability in the speed of the calls. I know there are platforms like Neumob that speed up applications, so I wanted to know if there are any concepts like that which I can apply to my application. I am developing it in Swift and it will run on OS X.
I am also using a websocket to get order book data. To connect to the websocket, I am using Starscream. If there is a better way to connect to the socket I would love to know that as well.
If milliseconds matter, you shouldn't be using HTTP, or even necessarily TCP. AFAIK, most trading applications use persistent stream connections of some kind, usually transmitting protobufs instead of JSON, so events come in as fast as they're sent over the wire. Barring that, using URLSession directly may be a few instructions faster than Alamofire, which wraps URLSession, but I doubt it would make a noticeable difference. As far as HTTP connections go, URLSession is pretty damn fast; it's what Safari and the rest of the system use.
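For what it's worth, here's a minimal sketch of going through URLSession directly, assuming a hypothetical trading endpoint; the main practical win is reusing one session so its pooled keep-alive connections stay warm instead of paying TCP/TLS setup on every call:

```swift
import Foundation

// One shared session: reusing it keeps connections alive between calls,
// which usually matters far more than Alamofire-vs-URLSession overhead.
let session = URLSession(configuration: .ephemeral)

// The URL and JSON body are placeholders for whatever your exchange expects.
func placeOrder(_ body: Data, completion: @escaping (Data?, Error?) -> Void) {
    var request = URLRequest(url: URL(string: "https://api.example-exchange.com/orders")!)
    request.httpMethod = "POST"
    request.setValue("application/json", forHTTPHeaderField: "Content-Type")
    request.httpBody = body

    session.dataTask(with: request) { data, _, error in
        completion(data, error)
    }.resume()
}
```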
Your program is very likely I/O bound, not CPU bound, so the main bottleneck will be your internet connection and the data that it transmits.
If you cannot change the communication protocol because you do not control the API server, about the only thing you can do is run your app in a data center that is geographically close to the API server. As long as the service requires you to use HTTP and WebSockets, you won't go much faster than NSURLSession, Alamofire, or Starscream without years of optimization.
If you can control the API server, you could switch to plain TCP or even UDP. Then you could come up with a custom communication protocol that uses small binary messages.
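To illustrate, here's a minimal sketch of that approach using Apple's Network framework; the host, port, and the one-byte-type-plus-order-ID message layout are all invented for the example:

```swift
import Foundation
import Network

// Hypothetical binary protocol: 1-byte message type + 8-byte big-endian
// order ID. Small fixed-size frames like this avoid HTTP and JSON
// overhead entirely.
let connection = NWConnection(host: "feed.example-exchange.com", port: 9000, using: .tcp)

func send(orderID: UInt64) {
    var message = Data([0x01])   // 0x01 = "new order" (made up for this sketch)
    withUnsafeBytes(of: orderID.bigEndian) { message.append(contentsOf: $0) }
    connection.send(content: message, completion: .contentProcessed { error in
        if let error = error { print("send failed: \(error)") }
    })
}

connection.stateUpdateHandler = { state in
    if case .ready = state { send(orderID: 42) }
}
connection.start(queue: DispatchQueue(label: "trading.tcp"))
```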
Of course, you have to profile first and actually find out which parts of your code are slow. There may be other code that takes a few milliseconds to run.
Related
I am building an OS X app that needs to get data from a server. The easy way is to make a GET request at some fixed time interval and process the results. That's not what I want. I want it the other way around: the server should send data to my app when something happens on the server side. That way I don't need to make constant requests from the client side. The data doesn't need to be displayed visually, just processed.
Can this be implemented in OS X with Swift?
You have two ways to achieve this:
WebSocket:
WebSocket is a full-duplex communication channel over a single TCP connection. It's established via an HTTP upgrade handshake.
Long polling:
Same as the polling you described, but the server doesn't respond immediately. Your client makes an HTTP request with a very long timeout, and the server only responds once something has happened.
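A rough long-polling loop with URLSession might look like this (the endpoint is hypothetical; the server is expected to hold the request open until it has something to report):

```swift
import Foundation

// Long-polling sketch: issue a request with a long timeout, process the
// response when the server finally answers, then immediately poll again.
let config = URLSessionConfiguration.default
config.timeoutIntervalForRequest = 300   // wait up to 5 minutes per poll
let session = URLSession(configuration: config)

func poll() {
    let url = URL(string: "https://example.com/events/wait")!   // placeholder
    session.dataTask(with: url) { data, _, error in
        if let data = data {
            process(data)   // your handler for whatever the server pushed
        }
        poll()              // re-arm the long poll (add backoff on errors)
    }.resume()
}

func process(_ data: Data) {
    print("got \(data.count) bytes")
}

poll()
RunLoop.main.run()   // keep a command-line demo alive
```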
I would recommend WebSocket, since it was built exactly for this use case. But if you have to ship something quickly, you should probably go with long polling for now, since the barrier to implementing it is much lower, and switch to WebSocket later.
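If you go the WebSocket route, note that on recent systems you may not even need a third-party library; here's a minimal receive loop using URLSessionWebSocketTask (macOS 10.15+) against a made-up endpoint:

```swift
import Foundation

// Minimal server-push client: connect once, then keep receiving messages
// as the server sends them. The endpoint is a placeholder.
let task = URLSession.shared.webSocketTask(with: URL(string: "wss://example.com/updates")!)

func listen() {
    task.receive { result in
        switch result {
        case .success(.string(let text)):
            print("server pushed: \(text)")     // process it, no UI needed
        case .success(.data(let data)):
            print("server pushed \(data.count) bytes")
        case .success:
            break                               // any future message types
        case .failure(let error):
            print("socket error: \(error)")     // reconnect logic goes here
            return
        }
        listen()                                // re-arm for the next message
    }
}

task.resume()
listen()
RunLoop.main.run()   // keep a command-line demo alive
```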
Writing a one-page web application, and knowing that some of the screens will need real-time updates, I am faced with one big general question, whatever API, frontend framework, and language I end up using:
Given that I'll implement data transfer over WebSocket, should I keep HTTP for any data transfer that doesn't need real-time updates, or should I just use WebSocket?
Knowing that WebSockets are not supported by all browsers, but that most if not all recent ones support them, would it be better for the servers to handle both WebSocket and HTTP, or should I just use WebSocket for all data transfer?
You will probably end up using both WebSockets and HTTP requests in the end.
WebSockets, because it sounds like you need them (because of the real-time updates) and can afford to require browser support for them (otherwise, you'd be forced to use the older Ajax/Comet based approaches).
HTTP for two possible reasons:
You will sooner or later need a blocking request-response behavior. For example, authenticating a user may need to block for the result before further processing happens, so you need to send a request for authentication and block until you get the result. This can be a bit annoying to handle over WebSockets.
You may need to load heavy data without interrupting the ongoing real-time updates. If you were to load such data over WebSockets (as a single big chunk), it will be queued together with the real-time updates and may delay them.
Both of these issues can be handled over WebSockets, but they are simply easier to solve with simple HTTP Ajax requests.
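As a sketch of that split (all endpoints and the token handling are invented): do the blocking request/response work over plain HTTP, then attach the WebSocket for the real-time stream:

```swift
import Foundation

// Hybrid sketch: authenticate with a blocking-style HTTP request first,
// then open the WebSocket for real-time updates. Heavy one-off payloads
// would likewise go over separate HTTP requests so they never queue
// behind (or in front of) live updates on the socket.
func logIn(user: String, completion: @escaping (String?) -> Void) {
    var request = URLRequest(url: URL(string: "https://example.com/api/login")!)
    request.httpMethod = "POST"
    request.httpBody = Data("user=\(user)".utf8)
    URLSession.shared.dataTask(with: request) { data, _, _ in
        // Assume the server returns a session token as the response body.
        completion(data.flatMap { String(data: $0, encoding: .utf8) })
    }.resume()
}

logIn(user: "alice") { token in
    guard let token = token else { return }
    var request = URLRequest(url: URL(string: "wss://example.com/live")!)
    request.setValue("Bearer \(token)", forHTTPHeaderField: "Authorization")
    let socket = URLSession.shared.webSocketTask(with: request)
    socket.resume()   // real-time updates flow over this from now on
}
RunLoop.main.run()
```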
Use WebSockets for the following needs:
Server data changes frequently
Multi-user communication
Live feeds, etc.
I need to stream data from a web server to clients. The data is location data that is collected and stored on the server. The clients will click a button on an HTML page to 'opt in' to start receiving the data. This data is never-ending, and at least one of the clients needs to receive it 24/7, with as few breaks as possible. The data being streamed is client-specific; each client won't receive exactly the same data.
I've done several multi-threaded TCP servers over sockets, and WebSockets are the way I would like to attack this, but the requirement is that this has to work in IE9.
The initial requirement was that this be a VB.NET CGI executable, but during testing I haven't been able to 'use' the stream from the VB.NET executable until the app finishes; it seems unable to flush stdout even though I was specifically calling Console.Out.Flush(). So if this isn't a viable option, and I can support that with facts, then I can get this requirement changed.
I've also read quite a bit about using a third-party server to stream the data (Orbit and APE, I think, were a couple of them), but the requirement is one server: the web server. No other hardware can be required.
I'm pretty sure the VB.NET CGI isn't the ideal solution based on what I've found, but is it doable, or do I need to abandon that solution and move on to a newer technology like ISAPI? Any ideas or suggestions, even if they just point me in the right direction, are greatly appreciated.
You might go a few ways.
If you go with C#/.NET, you might look into a Silverlight solution. It requires a plugin to be installed in the browser (like Flash), but the good thing is that you can send data through normal sockets, in pure real time from the server. At the same time, Silverlight uses .NET, so some code can be shared between client and server, which helps the development process. It will also behave the same way across different browsers.
You might have a look at a similar solution using a Java applet with a Java backend (the backend could even be .NET, but again, it's easier to develop when both sides are in the same language).
Another option is a front-end using WebSockets, but as you know, they are not supported in IE9 and below (IE10 promises to be), and Opera does not support them either.
The backend can be done in whatever you prefer. But bear in mind that WebSockets use framing, which is inefficient for a constant stream of small packets: if you send 10 bytes, the frame adds 2-12 bytes of overhead, and the TCP/IP headers average around 40 bytes, so a 10-byte payload can cost 50+ bytes on the wire, i.e., over 80% overhead.
To support older browsers you might have a look at long polling, but it is not as reliable as WebSockets.
It is also important to estimate the amount of data and the approximate number of users your system will have. Based on those calculations, you will have a rough idea of how realistic the plan is and what kind of server will be required to handle it.
This is probably not the best forum for such a specialized question, but at the moment I don't know of a better one (open to suggestions/recommendations).
I work on a video product which for the last 10+ years has been using a proprietary communications protocol (DCOM-based) to send video across the network. A while ago we recognized the need to standardize, and we are currently almost at the point of ripping out all that DCOM baggage and replacing it with a fully compliant RTP/RTSP client/server framework.
One thing we noticed during testing over the last few months is that when we switch the client to use RTP/RTSP, there's a noticeable increase in start-up latency. The problem is that it's not our code; it's RTSP.
BEFORE (DCOM): we would send one DCOM command and before that command even returned back to the client, the server would already be sending video. -- total latency 1 RTT
NOW (RTSP): This is the sequence of commands, each one being a separate network request: DESCRIBE, SETUP, SETUP, PLAY (assuming the session has audio and video) -- total of 4 RTTs.
Works as designed - unfortunately it feels like a step backwards because prior user experience was actually better.
Can this be improved? If you stay within the standard, the short answer is no. However, my team fully controls our entire RTP/RTSP stack, and I've been thinking we could introduce a new RTSP command (without touching any of the existing commands, so we'd still be fully interoperable) as a solution: DESCRIBE_SETUP_PLAY.
We could send this one command, passing in the types of streams we're interested in (typically there's only one video stream and 0..1 audio streams). The response would include the full SDP text as well as all the port information, and just like before, the server would start streaming instantly without waiting for anything else from the client.
Would this work? Is there any downside that I may not be seeing? I'm curious why this wasn't considered for (or was dropped from) the official spec, since the latency is definitely noticeable even on a local intranet.
FYI, it is possible according to the RTSP 1.0 specification:
9.1 Pipelining
A client that supports persistent connections or connectionless mode MAY "pipeline" its requests (i.e., send multiple requests without waiting for each response). A server MUST send its responses to those requests in the same order that the requests were received.
The RTSP 2.0 draft also contains support for pipelining.
However none of the clients/servers I've used implement it AFAIK.
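To make the quoted section concrete, here's a rough sketch of pipelining RTSP requests over a raw TCP connection (Swift's Network framework; the server, track URL, and transport parameters are placeholders). Note that RTSP 1.0 only lets you pipeline so far: PLAY needs the Session ID from the SETUP response, which is one of the things the RTSP 2.0 pipelining mechanism was designed to address.

```swift
import Foundation
import Network

// Pipelining per RTSP 1.0 section 9.1: send requests back-to-back without
// waiting for each response. Responses come back in order, so they can be
// matched up by CSeq.
let connection = NWConnection(host: "camera.example.com", port: 554, using: .tcp)

func request(_ method: String, _ url: String, cseq: Int, headers: String = "") -> Data {
    return Data("\(method) \(url) RTSP/1.0\r\nCSeq: \(cseq)\r\n\(headers)\r\n".utf8)
}

connection.stateUpdateHandler = { state in
    guard case .ready = state else { return }
    let url = "rtsp://camera.example.com/stream1"
    // DESCRIBE and SETUP go out together in one round trip. PLAY is held
    // back because it needs the Session ID from the SETUP response; this
    // sketch also assumes the track URL is known in advance rather than
    // parsed out of the DESCRIBE's SDP.
    connection.send(content: request("DESCRIBE", url, cseq: 1,
                                     headers: "Accept: application/sdp\r\n"),
                    completion: .contentProcessed { _ in })
    connection.send(content: request("SETUP", url + "/track1", cseq: 2,
                                     headers: "Transport: RTP/AVP;unicast;client_port=8000-8001\r\n"),
                    completion: .contentProcessed { _ in })
    readResponses()
}

func readResponses() {
    connection.receive(minimumIncompleteLength: 1, maximumLength: 65536) { data, _, done, error in
        if let data = data, let text = String(data: data, encoding: .utf8) {
            print(text)   // responses arrive in CSeq order: 1, then 2
        }
        if error == nil && !done { readResponses() }
    }
}

connection.start(queue: DispatchQueue(label: "rtsp"))
RunLoop.main.run()
```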
First, here's my original question that spawned all of this.
I'm using Appcelerator Titanium to develop an iPhone app (eventually Android too). I'm connecting to CouchDB's port directly by using Titanium's Titanium.Network.TCPSocket object. I believe it utilizes the Apple SDK's CFSocket/NSStream classes.
Once connected, I simply write:
'GET /mydb/_changes?filter=app/myfilter&feed=continuous&gameid=4&heartbeat=30000 HTTP/1.1\r\n\r\n'
directly to the socket. It keeps it open "forever" and returns JSON data whenever the db is updated and matches the filter and change request. Cool.
I'm wondering, is it ok to connect directly to CouchDB's socket like this, or would I be better off opening the socket to node.js instead, and maybe using this CouchDB node.js module to handle the CouchDB proxy through node.js?
My main concern is performance. I just don't have enough experience with CouchDB to know if hitting its socket and passing faux HTTP requests directly is good practice or not. Looking for experience and opinions on any ramifications or alternate suggestions.
It's me again. :-)
CouchDB inherits super concurrency handling from Erlang, the language it was written in. Erlang uses lightweight processes and message passing between those processes to achieve excellent performance under high concurrent load. It will take advantage of all cpu cores, too.
Nodejs runs a single process and basically only does one thing at a time within that process. Its event-based, non-blocking IO approach does allow it to multitask while it waits for chunks of IO but it still only does one thing at a time.
Both should easily handle tens of thousands of connections, but I would expect CouchDB to handle concurrency better (and with less effort on your part) than Node. And keep in mind that Node adds some latency if you put it in front of CouchDB. That may only be noticeable if you have them on different machines, though.
Writing directly to Couch via TCPSocket is a-ok as long as you write a well-formed HTTP request that follows the spec. (You're not passing a faux request; that's a real HTTP request you're sending, just like any other.)
Note: HTTP 1.1 does require you to include a Host header in the request, so you'll need to correct your code to reflect that, OR just use HTTP 1.0, which doesn't require it, to keep things simple. (I'm curious why you're not using Titanium.Network.HTTPClient. Does it only give you the request body after the request finishes or something?)
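For example, the corrected request from the question would look like this (shown as a Swift string for clarity; the host and port are placeholders for your CouchDB server):

```swift
// Same request as before, now HTTP/1.1-compliant with a Host header.
// Write this to the open socket exactly as you were doing.
let request =
    "GET /mydb/_changes?filter=app/myfilter&feed=continuous&gameid=4&heartbeat=30000 HTTP/1.1\r\n" +
    "Host: couch.example.com:5984\r\n" +
    "\r\n"
```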
Anyway, CouchDB can totally handle direct connections and--unless you put a lot of effort into your Node proxy--it's probably going to give users a better experience when you have 100k of them playing the game at once.
EDIT: If you use Node, write an actual HTTP proxy. That will run a lot faster than using the module you linked and be simpler to implement. (Rather than defining your own API that then makes requests to Couch, you can just pass certain requests on to CouchDB and block others, say, for security reasons.)
Also take a look at how "multinode" works:
http://www.sitepen.com/blog/2010/07/14/multi-node-concurrent-nodejs-http-server/