Ratio of bits/Hz of digitized voice channel - bandwidth

We need to transmit 100 digitized voice channels using a passband
channel of 30 kHz. What should be the ratio of bits/Hz if we use no
guard band?
What I understand, and the bandwidth I get, is:
30 kHz / 100 = 300 Hz per channel
And the ratio of bits/Hz with no guard band is 64000 / 300 = 213.333... bits/Hz
(because a digitized voice channel has a data rate of 64 kbps).
Is that the right answer?

It depends on the digitization scheme: high-quality PCM - 144 kbps, PCM (DS0) - 64 kbps, CVSD - 32 kbps, compressed - 16 kbps, LPC - 2.4 kbps.
Assuming DS0, you have 100 channels at 64 kbps, or 6400 kbps, to send over the 30 kHz channel. Thus 6400 kbps / 30 kHz ≈ 213.3 bits/Hz. Rather high packing, but it could be done over a noise-free channel.
Entropic multiplexing (don't send silent time) could reduce this by a factor of 5-10, to roughly 21-43 bits/Hz.
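As a sanity check, the arithmetic above fits in a few lines of Python (a minimal sketch, assuming DS0 channels at 64 kbps and the 30 kHz passband from the question):

channels = 100
rate_kbps = 64                # DS0 digitized voice channel
bandwidth_khz = 30            # passband channel, no guard bands

total_kbps = channels * rate_kbps          # 6400 kbps aggregate
bits_per_hz = total_kbps / bandwidth_khz   # kbps / kHz cancels to bits/Hz
print(bits_per_hz)                         # 213.33... bits/Hz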

Related

Linphone opus codec sampling rate

I would like to use the Opus codec in Linphone, but I have a few problems using it. If someone with Opus codec knowledge could help me out, I would appreciate it.
How can I force the audio sampling rate to 8000 Hz? Currently, it uses 48000 Hz only.
Thanks in advance.
If you look at RFC 7587, Section 4.1, you can read this:
Opus supports 5 different audio bandwidths, which can be adjusted
during a stream. The RTP timestamp is incremented with a 48000 Hz
clock rate for all modes of Opus and all sampling rates. The unit
for the timestamp is samples per single (mono) channel. The RTP
timestamp corresponds to the sample time of the first encoded sample
in the encoded frame. For data encoded with sampling rates other
than 48000 Hz, the sampling rate has to be adjusted to 48000 Hz.
Reading further in RFC 7587, you will find that, in SDP, you will always see the codec signaled as "OPUS/48000/2", no matter the real sampling rate.
No matter the real sampling rate, as explained above, the RTP timestamp will always be incremented with a 48000 Hz clock rate.
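To make that concrete, here is a tiny Python illustration (the helper name is just for this sketch; the 48000 Hz constant is from RFC 7587): the per-packet timestamp increment depends only on the frame duration, never on the encoder's actual sampling rate.

RTP_CLOCK_HZ = 48000  # fixed RTP clock for all Opus modes (RFC 7587)

def timestamp_increment(frame_ms):
    # RTP timestamp units consumed by one Opus frame of frame_ms duration
    return RTP_CLOCK_HZ * frame_ms // 1000

# A 20 ms frame advances the timestamp by 960, whether the encoder
# actually ran at 8000 Hz (NB) or 48000 Hz (FB):
print(timestamp_increment(20))  # 960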
If you wish to control the real sampling rate for the codec (and thus the bandwidth), use the SDP parameters maxplaybackrate and maxaveragebitrate.
Section 3.1.1 lists the relationship between maxaveragebitrate and the sampling rate:
3.1.1. Recommended Bitrate
For a frame size of 20 ms, these are the bitrate "sweet spots" for Opus in various configurations:
o 8-12 kbit/s for NB speech,
o 16-20 kbit/s for WB speech,
o 28-40 kbit/s for FB speech,
o 48-64 kbit/s for FB mono music, and
o 64-128 kbit/s for FB stereo music.
Conclusion: to use only 8000 Hz in Opus, you must negotiate with parameters such as the following, where 12 kbit/s is the maximum recommended bitrate for Opus NB speech:
m=audio 54312 RTP/AVP 101
a=rtpmap:101 opus/48000/2
a=fmtp:101 maxplaybackrate=8000; sprop-maxcapturerate=8000; maxaveragebitrate=12000
I don't know if Linphone honors all of these parameters, but that is the theory!

Compress data from MEMS accelerometer and transfer it with NBIoT

I have raw hex data from a MEMS accelerometer with a sampling rate of 1 sample per millisecond and a measurement duration of 2.5 seconds per trace. The data contains 12 bits for a timestamp, followed by 2 for each acceleration axis (X, Y, and Z). I want to reduce this hex data, around 31 kB in size, to 120 bytes for wireless data transfer. I want to achieve this in Python.
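One way to sketch this in Python: a ~31 kB trace cannot be losslessly compressed down to 120 bytes, so the example below reduces each axis to a few summary features instead. The trace shape, the feature set, and the packing layout here are all illustrative assumptions, not a fixed protocol:

import struct
import numpy as np

FS = 1000  # 1 sample per millisecond

def features(axis):
    # Reduce one axis to mean, RMS, peak, and dominant frequency.
    a = axis.astype(np.float64)
    spectrum = np.abs(np.fft.rfft(a - a.mean()))
    freqs = np.fft.rfftfreq(len(a), d=1.0 / FS)
    rms = float(np.sqrt(np.mean(a ** 2)))
    return a.mean(), rms, float(np.abs(a).max()), float(freqs[spectrum.argmax()])

def compress(xyz):
    # Pack 4 float32 features per axis: 3 * 4 * 4 = 48 bytes, under the 120-byte budget.
    return b"".join(struct.pack("<4f", *features(xyz[:, i])) for i in range(3))

# Synthetic stand-in for the real 2.5 s trace (2500 samples, 3 axes):
xyz = (np.random.randn(2500, 3) * 100).astype(np.int16)
print(len(compress(xyz)), "bytes")  # 48 bytes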

OPUS packet size

I have an application that reads Opus packets from a file. The file contains Opus packets in Ogg format. My application sends one Opus packet every 20 milliseconds (this is configurable).
Per 20 ms, it sends packets ranging in size from 200 bytes to 400 bytes; say the average size is 300 bytes.
Is sending 300 bytes every 20 ms reasonable, or is it too much data? Can somebody help me understand how to calculate how many bytes I can send to the remote party per 20 ms?
300 bytes/packet × 8 bits/byte / 20 ms/packet = 120 kbit/s
That is enough for good-quality stereo music. Depending on the quality you need, or if you are only sending mono or voice, you could potentially reduce the encoder's bitrate. However, if you are reading from an Ogg Opus file, the packets are already encoded, so it is too late to reduce the encoder bitrate unless you decode the packets and re-encode them at a lower bitrate.
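If it helps, the conversion both ways fits in a couple of Python helpers (a small sketch; the function names are only for illustration):

def bitrate_kbps(packet_bytes, interval_ms):
    # Bits per millisecond equals kbit/s, so this is the stream bitrate.
    return packet_bytes * 8 / interval_ms

def bytes_per_packet(target_kbps, interval_ms):
    # Payload size that hits target_kbps at the given packet interval.
    return target_kbps * interval_ms / 8

print(bitrate_kbps(300, 20))     # 120.0 kbit/s, as computed above
print(bytes_per_packet(24, 20))  # 60.0 bytes/packet for a 24 kbit/s mono stream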

Processing audio signals in Matlab vs Sensor

I'm trying to do target recognition using the target's acoustic signal. I tested my code in MATLAB; now I'm trying to implement it in C to test it in TinyOS using a sensor simulator.
In MATLAB, I used WAV recordings (16 bits per sample, 44.1 kHz sample rate). For example, for a recording of a certain object, say a cat sound of 0:01 duration, MATLAB gives me a total of 36864 samples of type int16, 73728 bytes in size.
On the sensor side, I have a Mica2 mote: 10-bit ADC (but I'll use 8 bits), an 8 MHz microprocessor, and 4 kB of RAM. This means that when I detect an object, I'll fill the buffer with 4000 samples of type uint8_t (using an 8 kHz sample rate and 8-bit ADC).
So, my question is:
In MATLAB I used a large number of samples to represent the target audio signal (36864 samples), but on the sensor I'm limited to only 4000 samples. Would that be enough to record the whole target sound?
Thank you very much; I highly appreciate your advice.
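A quick check of the numbers in the question (a sketch using only the figures stated above): what limits the buffer is recorded duration, not the sample count by itself.

matlab_duration = 36864 / 44100  # ~0.84 s captured in the MATLAB test
sensor_duration = 4000 / 8000    # 0.5 s fits in the 4000-sample Mica2 buffer
print(matlab_duration, sensor_duration)
# The buffer holds only ~0.5 s at 8 kHz, so any target sound longer than
# half a second would be truncated regardless of the lower sample rate.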

Is it better to host our own live streaming server on AWS c1.xlarge or use a third-party service?

I have a c1.xlarge EC2 instance, which according to this article has 100 MB/s upload and download speed.
I would be streaming video at 720p or 1080p from this server. I am running MongoDB and NGINX on the instance.
According to this article, the bandwidth consumption is as follows:
720p
Bits Per Second (down): 20+ Mbps
Bits Per Second (up): 320 Kbps
Data used per 5 minute video: 37.5 MB
1080p
Bits Per Second (down): 20+ Mbps
Bits Per Second (up): 320 Kbps
Data used per 5 minute video: 62MB
According to Wikipedia:
Bitrate for 720p: ~18.3 Mbit/s
Bitrate for 1080p: ~25 Mbit/s
According to a Stack Overflow bitrate calculation:
25 Mbit/s * 3,600 s/hr = 3.125 MB/s * 3,600 s/hr = 11,250 MB/hr ≈ 11 GB/hr
So for one minute it would be:
25 Mbit/s × 1 minute = 1,500 Mbit = 187.5 MB
My assumption is that the above calculation is on a per-viewer basis.
Q1. So is the following calculation correct, meaning that only 1 user can be hosted per minute?
(187.5 MB/min) / (100 MB/s) = 1.875
Q2. Should I stream from my own server or use a third-party service? If third-party, what do you recommend?
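For Q1, a quick capacity sketch in Python using the figures quoted above (100 MB/s uplink, ~25 Mbit/s per 1080p viewer). Note the unit mismatch in the original calculation: 187.5 MB is per minute, while 100 MB/s is per second:

uplink_mbit_s = 100 * 8        # 100 MB/s uplink = 800 Mbit/s
per_viewer_mbit_s = 25         # ~1080p bitrate from the Wikipedia figure
print(uplink_mbit_s / per_viewer_mbit_s)  # 32.0 -> roughly 32 concurrent 1080p viewers, not 1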