Is it better to host our own live streaming server on AWS c1.xlarge or use a third-party service?

I have a c1.xlarge EC2 instance which, according to this article, has 100 MB/s upload and download speed.
I would be streaming video at 720p or 1080p from this server. I am running MongoDB and NGINX on my instance.
According to this article, the bandwidth consumption is as follows:
720p
Bits per second (down): 20+ Mbps
Bits per second (up): 320 Kbps
Data used per 5-minute video: 37.5 MB
1080p
Bits per second (down): 20+ Mbps
Bits per second (up): 320 Kbps
Data used per 5-minute video: 62 MB
According to Wikipedia:
Bitrate for 720p: ~18.3 Mbit/s
Bitrate for 1080p: ~25 Mbit/s
According to a Stack Overflow bitrate calculation:
25 Mbit/s * 3,600 s/hr = 3.125 MB/s * 3,600 s/hr = 11,250 MB/hr ≈ 11 GB/hr
So for one minute it would be:
25 Mbit/s * 60 s = 1,500 Mbit = 187.5 megabytes
My assumption is that the above calculation is on a per-viewer basis.
Q1. Is the following calculation correct, i.e. that only 1 viewer can be hosted per minute?
(187.5 MB per minute) / (100 MB/s) = 1.875
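For a quick sanity check of the units (treating the 100 MB/s figure as the sustained uplink, i.e. 800 Mbit/s, and 25 Mbit/s per 1080p viewer from the Wikipedia figure above; both are assumptions taken from the numbers quoted here, not measurements):

```python
# Rough per-viewer vs. server-uplink bandwidth check.
# Assumed figures (from the question, not measured): 25 Mbit/s per 1080p
# viewer and a 100 MB/s uplink on the c1.xlarge, i.e. 100 * 8 = 800 Mbit/s.

VIEWER_MBIT_S = 25        # per-viewer 1080p bitrate (Wikipedia figure above)
UPLINK_MBIT_S = 100 * 8   # 100 MB/s uplink expressed in Mbit/s

mb_per_viewer_per_min = VIEWER_MBIT_S * 60 / 8          # 187.5 MB per viewer per minute
concurrent_viewers    = UPLINK_MBIT_S // VIEWER_MBIT_S  # 32 viewers at full bitrate

print(f"{mb_per_viewer_per_min} MB per viewer per minute")
print(f"~{concurrent_viewers} concurrent viewers at {VIEWER_MBIT_S} Mbit/s each")
```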
Q2. Should I stream from my own server or use a third party? If a third party, what do you recommend?

Related

How far is it safe to regularly write to the ESP32 flash, considering the flash MTBF?

What would be the best practice for writing to the flash on a regular basis?
Considering the hardware I am working on is supposed to have 10 to 20 years of longevity, what would be your recommendation? For example, is it OK if I write some state variables every 15 minutes through Preferences?
That depends on:
the number of erase cycles your Flash supports,
the size of the NVS partition where you store data, and
the size and structure of the data that you store.
Erase cycles mean how many times a single sector of Flash can be erased before it's no longer guaranteed to work. The number is found in the datasheet of the Flash chip that you use. It's usually 10K or 100K.
Preferences library uses the ESP-IDF NVS library. This requires an NVS partition to store data, the size of which determines how many Flash sectors get reserved for this purpose. Every time you store a value, NVS writes the data together with its own overhead (total of 32 bytes for primitive data types like ints and floats, more for strings and blobs) into the current Flash sector. When the current sector is full, it erases the next sector and proceeds to write there; thereby using up sectors in a round robin fashion as write requests come in.
If we assume that your Flash has 100K erase cycles, your NVS partition is 128 KiB, and you store a set of 8 primitive values (any int or float) every 15 minutes:
Each store operation uses 8 * 32 = 256 bytes (32 B per data value).
You can repeat that operation 131072 / 256 = 512 times before you've written to every sector of your 128 KiB NVS partition (i.e. erased every sector once).
You can repeat that cycle 100K times, so you can do 512 * 100000 = 51200000, or roughly 51M store operations, before you've erased every sector its permitted maximum number of times.
Considering the interval of 15 minutes creates 365 * 24 * 4 = 35040 operations per year, you'd have 51200000 / 35040 = 1461 years until Flash is dead.
Obviously, if your Flash chip is rated at 10K erase cycles, it drops to only 146 years.
There's probably some NVS overhead in there somewhere that I didn't account for, and the Flash erase cycle ratings are not 100% reliable so I'd cut it in half for good measure - I would expect 700 or 70 years in real life.
If you store non-primitive values (strings, blobs) then the estimate changes based on the length of that data. I don't know how to calculate the exact Flash space used by those, but I'd guess 32 B plus the length of your data, with maybe 10% of NVS overhead on top. Plug in the numbers and see for yourself.
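To make the estimate above easy to re-run with your own numbers, here is a minimal sketch of the same arithmetic; the 32 B/entry figure and the round-robin wear model come from the explanation above, while the erase-cycle rating and partition size are placeholders you would swap for your own:

```python
# Rough NVS wear estimate for the scenario above: 32 B per primitive entry,
# writes spread round-robin across the whole NVS partition.

ERASE_CYCLES  = 100_000      # from the Flash chip datasheet (often 10K or 100K)
NVS_PARTITION = 128 * 1024   # NVS partition size in bytes
ENTRY_SIZE    = 32           # bytes per primitive value, including NVS overhead
VALUES_PER_OP = 8            # values stored per interval
INTERVAL_MIN  = 15           # minutes between store operations

bytes_per_op = VALUES_PER_OP * ENTRY_SIZE        # 256 B per store operation
ops_per_pass = NVS_PARTITION // bytes_per_op     # 512 ops erase the partition once
total_ops    = ops_per_pass * ERASE_CYCLES       # ~51.2M ops before wear-out
ops_per_year = 365 * 24 * (60 // INTERVAL_MIN)   # 35,040 ops per year

years = total_ops / ops_per_year
print(f"Estimated lifetime: {years:.0f} years "
      f"(~{years / 2:.0f} with the 50% safety margin suggested above)")
```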

What is "Base Capacity Unit-Hour" in IBM Event Streams?

When choosing the Enterprise Plan for IBM Event Streams, there is a huge cost associated with Base Capacity Unit-Hour, which comes to more than $5K per month if I put 720 hours in it (assuming 1 month is 720 hours).
This makes it way too expensive and made me wonder whether I understood correctly what "Base Capacity Unit-Hour" means.
Just got this from an IBM rep:
The price is $6.85 per Base Capacity Unit, and that's by the hour. So broken down we have 6.85 * 24 * 30 = $4,932/month, which makes your estimate correct. A Base Capacity Unit covers 150 MB/s (3 brokers) and 2 TB of storage. If you find you need to scale up, then the rate will increase from there.
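For reference, a quick check of that figure (assuming a 720-hour, 30-day billing month and the $6.85 per unit-hour rate quoted above):

```python
# Monthly cost of one Base Capacity Unit at the quoted hourly rate,
# assuming a 720-hour (30-day) billing month.

RATE_PER_UNIT_HOUR = 6.85     # USD per Base Capacity Unit per hour (as quoted)
HOURS_PER_MONTH    = 24 * 30  # 720 hours

monthly_cost = RATE_PER_UNIT_HOUR * HOURS_PER_MONTH
print(f"${monthly_cost:,.2f} per month")  # $4,932.00
```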

Interrupt time in DMA operation

I'm facing difficulty with the following question:
Consider a disk drive with the following specifications:
16 surfaces, 512 tracks/surface, 512 sectors/track, 1 KB/sector, rotation speed 3000 rpm. The disk is operated in cycle-stealing mode whereby whenever a 4-byte word is ready it is sent to memory; similarly, for writing, the disk interface reads a 4-byte word from memory in each DMA cycle. Memory cycle time is 40 ns. What is the maximum percentage of time that the CPU gets blocked during DMA operation?
The solution to this question provided online is:
Revolutions Per Min = 3000 RPM
or 3000/60 = 50 RPS
In 1 Round it can read = 512 KB
No. of tracks read per second = (2^19/2^2)*50
= 6553600 ............. (1)
Interrupt = 6553600 takes 0.2621 sec
Percentage Gain = (0.2621/1)*100
= 26 %
I have understood up to (1).
Can anybody explain to me where 0.2621 comes from? How is the interrupt time calculated? Please help.
Reversing from the numbers you've given: it's 6553600 * 40 ns that gives 0.2621 sec.
One quite obvious problem is that the comments in the calculations are somewhat wrong. It's not
Revolutions Per Min = 3000 RPM ~ or 3000/60 = 50 RPS
In 1 Round it can read = 512 KB
No. of tracks read per second = (2^19/2^2)*50 <- WRONG
The numbers are 512K / 4 * 50, and the 512K there is a byte count. How could that be called a 'number of tracks'? Reading the full track takes 1 full rotation, so the number of tracks readable in 1 second is 50, as there are 50 RPS.
However, the total number of bytes readable in 1 s is then just 512K * 50, since 512K is the amount of data on one track.
But then it is further divided by 4...
So, I guess, the actual comments should be:
Revolutions Per Min = 3000 RPM ~ or 3000/60 = 50 RPS
In 1 Round it can read = 512 KB
Interrupts per second = (2^19/2^2) * 50 = 6553600 (*)
Interrupt triggers one memory op, so then:
total time wasted: 6553600 * 40 ns = 0.2621 sec.
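To make that arithmetic explicit, here is a small sketch of the calculation as reconstructed above, assuming one stolen 40 ns memory cycle per 4-byte word transferred:

```python
# Cycle-stealing DMA: fraction of time the CPU is blocked, following the
# reconstruction above (one stolen 40 ns memory cycle per 4-byte word).

BYTES_PER_TRACK = 512 * 1024   # 512 sectors/track * 1 KB/sector
REVS_PER_SECOND = 3000 / 60    # 3000 rpm -> 50 rps
WORD_SIZE       = 4            # bytes transferred per DMA cycle
MEM_CYCLE_NS    = 40           # ns stolen from the CPU per DMA cycle

words_per_second = BYTES_PER_TRACK / WORD_SIZE * REVS_PER_SECOND  # 6,553,600
stolen_ns        = words_per_second * MEM_CYCLE_NS                # ns stolen per second
blocked_fraction = stolen_ns / 1e9

print(f"{words_per_second:,.0f} DMA cycles/s -> CPU blocked {blocked_fraction:.1%}")
# 6,553,600 DMA cycles/s -> CPU blocked 26.2%
```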
However, I don't really like how the 'number of interrupts per second' is calculated. I currently don't see/feel/guess how/why it's just bytes / 4.
The only VAGUE explanation of that "divide it by 4" I can think of is:
At each byte written to the controller's memory, an event is triggered. However, the DMA controller can read only PACKETS of 4 bytes. So the hardware DMA controller must WAIT until there are at least 4 bytes ready to be read. Only then does the DMA kick in and halt the bus (or part of it) for the duration of one memory cycle needed to copy the data. As the bus is frozen, the processor MAY have to wait. It doesn't NEED to; it can be doing its own ops and working on cache, but if it tries touching the memory, it will need to wait until the DMA finishes.
However, I don't like a few things in this "explanation". I cannot guarantee you that it is valid. It really depends on what architecture you are analyzing and how the DMA/CPU/BUS are organized.
The only mistake is that it's not
no. of tracks read
It's actually the no. of interrupts that occurred (the no. of times the DMA came up with its data; that many times the CPU will be blocked).
But again, I don't know why it has been multiplied by 50, probably because of 1 second, but I wish to solve this without multiplying by 50.
My solution:
Here, in 1 rotation the interface can read 512 KB of data. 1 rotation time = 0.02 s. So the data preparation time for one byte = 39.1 ns, and for 4 B it takes 156.4 ns. Memory cycle time = 40 ns. So the % of time the CPU gets blocked = 40 / (40 + 156.4) = 0.2036 ≈ 20%. But in the answer booklet the options given are A) 10, B) 25, C) 40, D) 50. Tell me if I'm doing something wrong?
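For comparison, here is a sketch of this alternative interpretation (one memory cycle blocked out of every preparation-plus-transfer period for a 4-byte word); note it uses 512 KB = 512,000 bytes, which is what produces the 39.1 ns figure above:

```python
# Alternative interpretation: the CPU is blocked for one 40 ns memory cycle out
# of every (word preparation time + memory cycle). Uses 512 KB = 512,000 bytes,
# which is what yields the 39.1 ns per-byte figure quoted above.

ROTATION_S    = 60 / 3000    # 0.02 s per rotation
BYTES_PER_ROT = 512_000      # data per track, in decimal KB as quoted
MEM_CYCLE_NS  = 40

byte_prep_ns = ROTATION_S / BYTES_PER_ROT * 1e9  # ~39.1 ns to prepare one byte
word_prep_ns = byte_prep_ns * 4                  # ~156.3 ns per 4-byte word

blocked = MEM_CYCLE_NS / (MEM_CYCLE_NS + word_prep_ns)
print(f"CPU blocked {blocked:.1%}")  # ~20.4%
```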

iPhone 4S - BLE data transfer speed

I've been tinkering around with the BLE (Bluetooth Low Energy) connectivity classes quite a bit lately and haven't been able to make it transfer data any faster than 1 KB per 5 seconds. I believe the documentation says the max speed is 60 bytes per 20 milliseconds. With data transfer and counting the ACK transfer after each set of packets, I believe we should be able to go as fast as 1.5 KB per second. So my code is around 7-8 times slower than it should be.
I'm just wondering if anyone has been able to do data transfer in BLE as fast as the documentation says it should be able to do. What sort of speed are you getting if faster than mine?
Thanks a lot
Look at Apple's guidelines and you will see that a connection update request is required to speed up your connection.
https://developer.apple.com/hardwaredrivers/BluetoothDesignGuidelines.pdf
I have min = 20 ms, max = 40 ms.
I hope I could help.
Roman
If you are able to use a higher MTU size (negotiated by iOS), then you would be able to increase the bandwidth even more, because there is a 4-byte L2CAP header and a 3-byte ATT header that wouldn't be transmitted in more than one packet.
If you are able to transmit 6 packets per connection interval, then you would be able to fit 35 extra bytes per connection interval (the 7-byte header would still be there for the first packet). The MTU could also be split over several connection intervals, increasing the throughput by 7 more bytes per connection interval (it just takes longer to assemble the packet again). The max MTU size allowed by ATT is 515 bytes (the max ATT size is 512 bytes + a 3-byte header for the opcode and handle).
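As a rough illustration of how these quantities combine, here is a sketch of application-level throughput versus connection interval and packets per interval; the 20-byte payload and 7-byte L2CAP+ATT overhead are the figures discussed above, and the actual number of packets per interval depends on the stack and the peer:

```python
# Rough BLE throughput estimate for a 4.0-era link (23-byte default ATT MTU):
# each notification carries up to 20 bytes of application payload once the
# 4-byte L2CAP header and 3-byte ATT header are subtracted from the 27-byte
# link-layer payload. Packets per interval depend on the stack and the peer.

def throughput_bytes_per_s(conn_interval_ms: float,
                           packets_per_interval: int,
                           payload_per_packet: int = 20) -> float:
    """Application-level throughput under the assumptions above."""
    bytes_per_interval = packets_per_interval * payload_per_packet
    return bytes_per_interval * (1000.0 / conn_interval_ms)

# e.g. with a 20 ms connection interval:
for pkts in (1, 3, 6):
    print(f"{pkts} packet(s)/interval: {throughput_bytes_per_s(20, pkts):.0f} B/s")
# 1 packet(s)/interval: 1000 B/s
# 3 packet(s)/interval: 3000 B/s   (the ~60 bytes per 20 ms mentioned in the question)
# 6 packet(s)/interval: 6000 B/s
```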

Leaky bucket problem help?

I'm trying to review for my final and I'm going over example problems given to me by my professor. Can anyone explain to me the concept of how a leaky bucket works? Also, here's a review problem my professor gave me about leaky buckets.
A leaky bucket is at the host network interface. The data rate in the network is 2 Mbyte/s and the data rate from the application to the bucket is 2.5 Mbyte/s.
A.) Suppose the host has 250 Mbytes to send onto the network and it sends the data in one burst. What should the minimum capacity of the bucket be (in bytes) in order that no data is lost?
B.) Suppose the capacity of the bucket is 100 Mbytes. What is the longest burst time from the host in order that no data is lost?
Leaky bucket symbolizes a bucket with a small hole allowing water (data) to come out at the bottom. Since the top of the bucket has a greater aperture than the bottom, you can put water in it faster than it goes out (so the bucket fills up).
Basically, it represents a buffer on a network between 2 links with different rates.
Problem A
We can compute that sending the data will take 250 Mbytes / (2.5 Mbyte/s) = 100 s.
During those 100 s, the bucket will have retransmitted (leaked) 100 s * 2 Mbyte/s = 200 Mbytes.
So the bucket will need a minimum capacity of 250 MB - 200 MB = 50 MB in order not to lose any data.
Problem B
Since the difference between the 2 data rates is 2.5 MB/s - 2.0 MB/s = 0.5 MB/s, the bucket fills up at 0.5 MB/s (when both links transmit at full capacity).
You can then calculate that the 100 MB capacity will be filled after a burst of 100 MB / (0.5 MB/s) = 200 s = 3 min 20 s.
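Putting both parts together, a small sketch of the same calculations (rates in MB/s as given in the problem):

```python
# Leaky bucket sizing, as worked out above (rates in MB/s as given).

RATE_IN  = 2.5   # MB/s, application -> bucket
RATE_OUT = 2.0   # MB/s, bucket -> network

# Problem A: minimum bucket capacity for a 250 MB burst.
burst_mb     = 250
burst_time_s = burst_mb / RATE_IN          # 100 s to pour the burst in
drained_mb   = RATE_OUT * burst_time_s     # 200 MB leaks out in the meantime
min_capacity = burst_mb - drained_mb       # 50 MB must be buffered
print(f"A) minimum capacity: {min_capacity:.0f} MB")

# Problem B: longest burst a 100 MB bucket can absorb without overflowing.
capacity_mb = 100
fill_rate   = RATE_IN - RATE_OUT           # bucket fills at 0.5 MB/s during a burst
max_burst_s = capacity_mb / fill_rate      # 200 s = 3 min 20 s
print(f"B) longest burst: {max_burst_s:.0f} s")
```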
Interesting problem - here's my attempt at solving A (no guarantees it's right though!)
So rate in = 2.5, rate out = 2.0, where the rate is in Mbyte/s.
So in 1 second, the bucket will contain 2.5 - 2.0 = 0.5 Mbyte.
1) If the host sends 250 Mbytes, this will take 100 seconds to transfer into the bucket at 2.5 Mbytes/s.
2) If the bucket drains at 2.0 Mbytes/s then it will have drained 100 * 2 = 200 Mbytes.
So I think you need a bucket with a capacity of 50 Mbytes.