Is this video being cached or not? - webserver

I want to ensure that the videos I'm serving are being cached by the browser.
I've knocked up this test page : https://itype.online/videoCacheTest
I'm a bit puzzled about why I'm not seeing a 304 status in the Network tab for the video. Instead I see status 206.
Is my file being retrieved from the browser cache or is it coming from the server every time?
Thanks,
Peter

The link you shared appears to be broken (there are some cloud availability issues at the moment, so that may be the problem), but it's possible to test with other videos and you will see similar behaviour.
It's worth looking at the two most common ways of streaming a video:
Simple HTTP streaming of a single video file
Streaming via a dedicated streaming protocol such as HLS or MPEG-DASH
For simple HTTP streaming, the video file is typically downloaded using byte-range requests. In theory, as long as the cache can handle range requests, it should be fine to cache the responses - from the HTTP caching spec (RFC 2616, Fielding et al.):
A response received with a status code of 200, 203, 206, 300, 301 or 410 MAY be stored by a cache and used in reply to a subsequent request, subject to the expiration mechanism, unless a cache-control directive prohibits caching. However, a cache that does not support the Range and Content-Range headers MUST NOT cache 206 (Partial Content) responses.
In practice, many browsers still do not cache 206 responses, as you have seen. The behaviour can also be harder to observe because some browsers, to speed up download, may initially issue a request for the full video, which is then cancelled and replaced by range requests once the video length is known.
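If you want to see the range behaviour for yourself, here is a quick sketch using Python's requests library (the URL is a placeholder - point it at your own video):

    import requests

    # Placeholder video URL - replace with your own.
    url = "https://example.com/videos/test.mp4"

    # Ask for just the first 1024 bytes of the file.
    resp = requests.get(url, headers={"Range": "bytes=0-1023"})

    # A server that honours range requests replies 206 Partial Content
    # and reports which slice of the file it returned.
    print(resp.status_code)                   # 206 (or 200 if ranges are unsupported)
    print(resp.headers.get("Content-Range"))  # e.g. "bytes 0-1023/31491130"
    print(resp.headers.get("Accept-Ranges"))  # e.g. "bytes"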
HLS and DASH introduce a further complexity: with adaptive bitrate (ABR) streaming you create multiple bit-rate versions of the video on the server, each broken into chunks. The player decides which rendition to download the next chunk from based on a number of factors, typically including network conditions and the device's display size and capabilities.
So when HLS or DASH is being used, replaying a video will not necessarily download the same chunks every time, as network conditions and so on may have changed.
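To make the ABR point concrete, here is a toy sketch (not a real player - hls.js, AVPlayer and the like are far more sophisticated) that parses the BANDWIDTH attributes from a minimal HLS master playlist and picks the best variant for a measured throughput:

    import re

    # A minimal, made-up HLS master playlist with three bit-rate variants.
    MASTER_PLAYLIST = """#EXTM3U
    #EXT-X-STREAM-INF:BANDWIDTH=800000,RESOLUTION=640x360
    video_360p.m3u8
    #EXT-X-STREAM-INF:BANDWIDTH=2000000,RESOLUTION=1280x720
    video_720p.m3u8
    #EXT-X-STREAM-INF:BANDWIDTH=5000000,RESOLUTION=1920x1080
    video_1080p.m3u8
    """

    def pick_variant(playlist: str, measured_bps: float) -> str:
        """Return the URI of the highest-bandwidth variant that fits the throughput."""
        # Strip per-line indentation so the sketch works when pasted indented.
        lines = [l.strip() for l in playlist.strip().splitlines()]
        variants = []
        for i, line in enumerate(lines):
            match = re.search(r"BANDWIDTH=(\d+)", line)
            if match and i + 1 < len(lines):
                variants.append((int(match.group(1)), lines[i + 1]))
        # Fall back to the lowest variant if the network is slower than all of them.
        fitting = [v for v in variants if v[0] <= measured_bps] or [min(variants)]
        return max(fitting)[1]

    print(pick_variant(MASTER_PLAYLIST, 2_500_000))  # -> video_720p.m3u8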
You can usually see in the browser inspector whether a particular item was downloaded over the network or loaded from the memory/disk cache - both Safari's Web Inspector and Chrome's DevTools show this in the Network tab.
You could also use Wireshark if you want to be 100% sure - it will show the network traffic to and from your device: https://www.unified-streaming.com/academy


RESTful API with CoAP, MQTT, or another lightweight protocol

I have a working HTTP RESTful API that receives an ID and checks it against data in the database. Based on the status of the record and its related records, it then returns either state errors or, if everything is ready to begin, some information about the records. It has some other functionality as well, but my issue is that the device we use to collect this data has no access to WiFi; we are planning to test a 2G cellular solution, but I know an HTTP request will be far too slow, if it even completes.
With what lightweight protocol can my device send a 36-character UUID to a server and get a JSON response back? I have been exploring MQTT and CoAP, but I don't see much info on asking another device about a specific record ID - it's more like asking for a piece of hardware's status.
Furthermore, if there is a solution I can get to interface with my existing API this would be ideal.
Thanks for any help.
I'm not sure why the 2G cellular solution won't play well with HTTP(S).
According to another SO answer, typical HTTP header sizes are:
"Request headers today vary in size from ~200 bytes to over 2KB. As applications use more cookies and user agents expand features, typical header sizes of 700-800 bytes is common."
And according to Wikipedia, 2G gets you up to 40 kbit/s. I'm not really sure what the issue is with using HTTP(S) in this scenario.
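As a quick back-of-the-envelope check (using the figures above and ignoring TCP/TLS overhead), the headers alone are not the bottleneck:

    header_bytes = 800         # typical request header size, per the quote above
    link_bps = 40_000          # ~40 kbit/s on 2G, per Wikipedia

    seconds = header_bytes * 8 / link_bps
    print(f"{seconds:.2f} s")  # ~0.16 s just to transmit the headers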
If you use something like UDP it can be quicker and smaller; however, it's not as reliable as HTTP (which runs over TCP) because of the possibility of packet loss. Not to mention you can also apply gzip or another form of compression to the HTTP request to make it even smaller.
Minor update
If the data is not needed right away, you can do hourly or half-day batch uploads: store all the data in a local DB and, at set intervals, make one main HTTP request that is a bit bigger but carries all the data. I'm not fully sure what your requirements are, but HTTP should be fine for your case over 2G.
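A rough sketch of that batching idea in Python (the endpoint and schema here are made up for illustration):

    import json
    import sqlite3
    import urllib.request

    DB = sqlite3.connect("readings.db")
    DB.execute("CREATE TABLE IF NOT EXISTS readings (uuid TEXT, taken_at TEXT)")

    def record(uuid: str, taken_at: str) -> None:
        """Store a reading locally; works with no connectivity at all."""
        DB.execute("INSERT INTO readings VALUES (?, ?)", (uuid, taken_at))
        DB.commit()

    def flush_batch() -> None:
        """Run hourly or twice a day: one bigger request instead of many small ones."""
        rows = DB.execute("SELECT uuid, taken_at FROM readings").fetchall()
        if not rows:
            return
        body = json.dumps([{"uuid": u, "taken_at": t} for u, t in rows]).encode()
        req = urllib.request.Request(
            "https://example.com/api/batch",  # hypothetical endpoint
            data=body,
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req, timeout=60) as resp:
            if resp.status == 200:            # only clear what the server accepted
                DB.execute("DELETE FROM readings")
                DB.commit()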

Initial-Burst HTTP header - units

I'm implementing a SHOUTcast radio client.
My reference client sends an "Initial-Burst" HTTP request header set to 960000.
I don't know the initial buffer size of my reference client; it's an iOS app and I don't have the source code. What I do know is that it starts playing almost instantly, as soon as the user selects a channel.
When I raise my initial buffer size above ~100 kbytes, my radio no longer plays instantly: on some streams it waits a few seconds for data from the server.
The server says it's running Icecast 2.3.3-kh3 and Linux v1.9.8. Icecast is open-source software; needless to say, it has no documentation.
What units does that Initial-Burst header use - bytes, bits, ticks, etc.?
Are there some recommended values / best practices?
What I suspect is happening is that you are requesting more data than the server has buffered. If you were to request 1MB but your server only has 512KB in its buffer, you would then be receiving data as it comes from the encoder until your 1MB client-side buffer fills. You can confirm this with a packet sniffer, such as Wireshark.
If you build your own client, you should be able to separate the playback buffer size from the header. Once this is done, you can set your Initial-Burst header as big as you want.
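For instance, a bare-bones client might look like the Python sketch below. The header name comes from your question; whether and how the server honours it is up to the Icecast-kh build, and the host and mount point are placeholders:

    import socket

    HOST, PORT, MOUNT = "radio.example.com", 8000, "/stream"  # placeholders

    sock = socket.create_connection((HOST, PORT), timeout=10)
    request = (
        f"GET {MOUNT} HTTP/1.0\r\n"
        f"Host: {HOST}\r\n"
        "Initial-Burst: 65536\r\n"   # ask the server for a 64 KB head start
        "\r\n"
    )
    sock.sendall(request.encode())

    # The playback buffer is a purely client-side concern, sized independently
    # of whatever Initial-Burst value was sent.
    playback_buffer = bytearray()
    PLAYBACK_BUFFER_SIZE = 32 * 1024
    while len(playback_buffer) < PLAYBACK_BUFFER_SIZE:
        chunk = sock.recv(4096)
        if not chunk:
            break
        playback_buffer.extend(chunk)
    # NB: the first bytes include the HTTP/ICY response headers; a real client
    # would parse and strip those before handing the rest to the decoder.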
The other possibility (unlikely, I suspect) is that the server is making a server-side buffer and filling it to the requested size before sending. That wouldn't make much sense to me, but again, you can confirm the behavior with a packet sniffer.
What I do is have a fixed buffer size server-side, and ignore any headers related to buffer control. This allows me to flush a large buffer as quickly as possible, without relying on client behavior. I do this with custom code though... I don't think this is configurable in Icecast, but I could be wrong.

RTP/RTSP start-up latency: would this method help to reduce it, and if so, why don't we have it?

This is probably not the best forum for such a specialized question, but at the moment I don't know of a better one (open to suggestions/recommendations).
I work on a video product which for the last 10+ years has been using proprietary communications protocol (DCOM-based) to send the video across the network. A while ago we recognized the need to standardize and currently are almost at a point of ripping out all that DCOM baggage and replacing it with a fully compliant RTP/RTSP client/server framework.
One thing we noticed during testing over the last few months is that when we switch the client to use RTP/RTSP, there's a noticeable increase in start-up latency. The problem is not our implementation but RTSP itself.
BEFORE (DCOM): we would send one DCOM command and before that command even returned back to the client, the server would already be sending video. -- total latency 1 RTT
NOW (RTSP): This is the sequence of commands, each one being a separate network request: DESCRIBE, SETUP, SETUP, PLAY (assuming the session has audio and video) -- total of 4 RTTs.
Works as designed - unfortunately it feels like a step backwards because prior user experience was actually better.
Can this be improved? If you stay with the standard, the short answer is no. However, my team fully controls our entire RTP/RTSP stack, and I've been thinking we could introduce a new RTSP command (without touching any of the existing commands, so we remain fully interoperable) as a solution: DESCRIBE_SETUP_PLAY.
We could send this one command, passing in the types of streams we're interested in (typically there's one video stream and 0..1 audio streams). The response would include the full SDP text as well as all the port information, and just like before, the server would start streaming instantly without waiting for anything else from the client.
Would this work? any downside that I may not be seeing? I'm curious why this wasn't considered (or was dropped) from official spec, since latency even in local intranet is definitely noticeable.
FYI, it is possible according to the RTSP 1.0 specification:
9.1 Pipelining
A client that supports persistent connections or connectionless mode MAY "pipeline" its requests (i.e., send multiple requests without waiting for each response). A server MUST send its responses to those requests in the same order that the requests were received.
The RTSP 2.0 draft also contains support for pipelining.
However, none of the clients/servers I've used implements it, AFAIK.
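At the socket level, pipelining is just writing several requests before reading any responses. Here is a toy Python sketch (the server is a placeholder) that pipelines OPTIONS and DESCRIBE, which need no session state - SETUP/PLAY are trickier under RTSP 1.0 because PLAY needs the session ID returned by SETUP, which is part of why RTSP 2.0 added explicit pipelining support:

    import socket

    HOST, PORT = "rtsp.example.com", 554   # placeholder server
    URL = f"rtsp://{HOST}/stream"

    sock = socket.create_connection((HOST, PORT), timeout=10)

    # Send both requests back-to-back without waiting for the first response.
    pipelined = (
        f"OPTIONS {URL} RTSP/1.0\r\nCSeq: 1\r\n\r\n"
        f"DESCRIBE {URL} RTSP/1.0\r\nCSeq: 2\r\nAccept: application/sdp\r\n\r\n"
    )
    sock.sendall(pipelined.encode())

    # Per the spec, the server must answer in order: CSeq 1 first, then CSeq 2.
    try:
        while True:
            chunk = sock.recv(4096)
            if not chunk:
                break
            print(chunk.decode(errors="replace"), end="")
    except socket.timeout:
        pass  # no more data within the timeout window; fine for a demo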

iPhone: Strategies for uploading large files from phone to server

We're running into issues uploading hi-res images from the iPhone to our backend (cloud) service. The call is a simple HTTP file upload, and the issue appears to be the connection breaking before the upload is complete - on the server side we're getting IOError: Client read error (Timeout?).
This happens sporadically: most of the time it works, sometimes it fails. When a good connection is present (i.e. Wi-Fi) it always works.
We've tuned various timeout parameters on the client library to make sure we're not hitting any of them. The issue actually seems to be unreliable mobile connectivity.
I'm thinking about strategies for making the upload reliable even when faced with poor connectivity.
The first thing that came to mind was to break the file into smaller chunks and transfer it in pieces, increasing the likelihood of each piece getting there. But that introduces a fair bit of complexity on both the client and server side.
Do you have a cleverer approach? How would you tackle this?
I would use the ASIHTTPRequest library. It has some great features, like bandwidth throttling, and it can upload files directly from disk instead of loading the whole file into memory first. I would also break the photo into about 10 parts - so for a 5 MB photo, roughly 500 KB each - and create each upload in a queue. Then, when the app goes into the background, it can complete the part it's currently uploading. If you cannot finish uploading all the parts in the allocated time, just post a local notification reminding the user it's not completed. After all the parts have been sent to your server, you would make a final request that combines the parts back into the original photo on the server side.
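The question is iPhone-specific, but the scheme itself is language-agnostic. Here is a rough Python sketch of the same idea (chunk, upload each part with retries, then ask the server to reassemble); every URL here is hypothetical:

    import time
    import urllib.request

    CHUNK_SIZE = 512 * 1024  # ~500 KB parts, as suggested above

    def upload_part(upload_id: str, index: int, data: bytes, retries: int = 3) -> None:
        """POST one chunk; retry so a flaky connection only costs one part."""
        for attempt in range(retries):
            try:
                req = urllib.request.Request(
                    f"https://example.com/upload/{upload_id}/part/{index}",  # hypothetical
                    data=data,
                    headers={"Content-Type": "application/octet-stream"},
                )
                urllib.request.urlopen(req, timeout=30)
                return
            except OSError:
                time.sleep(2 ** attempt)  # back off, then try again
        raise RuntimeError(f"part {index} failed after {retries} attempts")

    def upload_file(upload_id: str, path: str) -> None:
        with open(path, "rb") as f:
            index = 0
            while chunk := f.read(CHUNK_SIZE):
                upload_part(upload_id, index, chunk)
                index += 1
        # Final request tells the server to stitch the parts back together.
        urllib.request.urlopen(
            urllib.request.Request(
                f"https://example.com/upload/{upload_id}/complete", data=b""
            ),
            timeout=30,
        )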
Yeah, timeouts are tricky in general, and get more complex when dealing with mobile connections.
Here are a couple ideas:
Attempt to upload to your cloud service as you are doing. After a few failures (timeouts), mark the file and ask the user to connect their phone to a Wi-Fi network, or wait until they connect to the computer and have them manually upload via the web. This isn't ideal, however, as it pushes more work onto your users. The upside is that, implementation-wise, it's pretty straightforward.
Instead of doing an HTTP upload, do a raw socket send. Over a raw socket you can send binary data in chunks pretty easily, and if any chunk send times out, resend it until the entire image file is sent. This is "more complex" since you have to manage the binary socket transfer yourself, but I think it's easier than trying to chunk files through an HTTP upload.
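A minimal sketch of that raw-socket idea, assuming a hypothetical server that replies with a single ACK byte (0x06) per chunk received; on a timeout you would reconnect and resume from the last acknowledged offset (not shown):

    import socket

    CHUNK = 64 * 1024

    def send_file(host: str, port: int, path: str) -> None:
        with open(path, "rb") as f, socket.create_connection((host, port), timeout=15) as s:
            offset = 0
            while data := f.read(CHUNK):
                # Tiny app-level framing: offset + length, then the bytes themselves.
                s.sendall(offset.to_bytes(8, "big") + len(data).to_bytes(4, "big") + data)
                if s.recv(1) != b"\x06":  # wait for the server's ACK byte
                    raise IOError(f"no ack for chunk at offset {offset}")
                offset += len(data)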
Anyway that's how I would approach it.

Streaming Data

I unsuccessfully searched Google for a good definition and understanding of streaming data and its characteristics. My questions are:
What is streaming data?
How can it be detected?
Correction:
"How can it be detected" is not an appropriate question. Instead my question is:
How is it different from buffered data and other data transfer mechanisms?
It depends what context you mean, but basically streaming data is analogous to asynchronous data. Take the Web as an example. The Web (or HTTP, specifically) is basically a request-response mechanism: a client makes a request and receives a response (typically a Web page of some kind).
HTTP doesn't natively support the ability for servers to push content to clients. There are a number of ways this can be faked, including:
Polling: forcing the client to make repeated requests, typically inconspicuously (as far as the client is concerned);
Long-lived connections: the client makes a normal HTTP request, but instead of returning immediately the server hangs on to the request until there's something to send back. When the request times out or a response is sent, the client sends another request. In this way you can fake server push (see the sketch after this list);
Plug-ins: Java applets, Flash, Silverlight and others can be used to achieve this.
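A minimal long-polling client loop might look like this in Python (the endpoint and handle function are made up; a real server would hold each request open until it has news):

    import requests

    # Hypothetical endpoint that blocks until there is a new event (or times out).
    URL = "https://example.com/events/poll"

    def handle(event: dict) -> None:
        print("server pushed:", event)

    def listen() -> None:
        while True:
            try:
                # The server holds the request open; the long timeout is deliberate.
                resp = requests.get(URL, timeout=60)
                if resp.status_code == 200:
                    handle(resp.json())
                # On anything else there was nothing new - reconnect and wait again.
            except requests.Timeout:
                pass  # the request timed out with no data; loop around and re-ask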
Anything where the server effectively sends data to the client (rather than the client asking for it) - regardless of the mechanism and whether or not the client is polling for that data - can be characterised as streaming data.
With non-HTTP transports (e.g. vanilla TCP), server push is typically easier (but can still run afoul of firewalls and the like). An example of this might be a share-trading application that receives market information from a provider. That's streaming data.
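By way of contrast with the HTTP workarounds above, pushing over plain TCP needs no trickery. A toy quote feed (hypothetical, handling one client at a time) could be as simple as:

    import random
    import socket
    import time

    # Toy "market data" server: accept a client, then push a quote every second.
    server = socket.create_server(("127.0.0.1", 9000))
    conn, _addr = server.accept()
    while True:
        quote = f"ACME {random.uniform(90, 110):.2f}\n"
        conn.sendall(quote.encode())  # server-initiated: the client never asks
        time.sleep(1)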
How do you detect it? Bit of a vague question. I'm not really sure what you're getting at.
When you say streaming data, I think of the following, although I'm not sure if this is what you're getting at. To me it means playing a video/audio file while it's still downloading. That's what happens when you go to YouTube and watch a video: it starts playing even though you haven't downloaded the whole thing yet, and you can see it downloading - I'm sure you're familiar with the seek bar filling up as the file downloads. It doesn't necessarily have to be a video or audio file, but those are the most common.