Bandwidth comparison over different channels

I wanted to compare how much bandwidth is consumed when I make a group (voice) call over WhatsApp, Discord, or Teams. I wanted to do the analysis and choose the platform that consumes the least bandwidth. Is it possible to do such an analysis using a tool or some method?

If you are on Windows, you can use Resource Monitor. It is already installed on every Windows machine and shows the bandwidth used by specific programs. On iOS this is not possible; on Android there are plenty of apps that do the job.
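If you want to script the comparison instead of eyeballing Resource Monitor, a rough sketch like the one below works on Windows, macOS, and Linux. It assumes the third-party psutil package and measures total machine traffic rather than per-app traffic, so treat the numbers as an approximation.

```python
# Rough cross-platform measurement using psutil: sample the OS network counters
# before and after a timed window while the test call is running.
# NOTE: this counts ALL traffic on the machine, not per-process traffic like
# Resource Monitor, so close other network-heavy apps during the test.
import time

import psutil  # third-party: pip install psutil


def measure_call_bandwidth(duration_s: int = 60) -> None:
    start = psutil.net_io_counters()
    time.sleep(duration_s)  # keep the group call running during this window
    end = psutil.net_io_counters()

    sent = end.bytes_sent - start.bytes_sent
    recv = end.bytes_recv - start.bytes_recv
    print(f"sent: {sent / 1024:.0f} KiB ({sent * 8 / duration_s / 1000:.0f} kbit/s avg)")
    print(f"recv: {recv / 1024:.0f} KiB ({recv * 8 / duration_s / 1000:.0f} kbit/s avg)")


if __name__ == "__main__":
    measure_call_bandwidth(60)  # run once per app (WhatsApp, Discord, Teams) and compare
```

Run it once per platform with everything else closed and compare the averages; Resource Monitor remains the better option if you need true per-process numbers.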

Related

Best approach to connect multiple temperature sensors to a mobile web app

I am a web developer and I am starting to learn about the world of IoT.
Because of the arrival of vaccines in my country (Argentina), I was asked to build 80 temperature sensors to monitor them, and I have some questions about it.
What would be the best way to connect all of them to the cloud?
If I use, for example, the AWS IoT platform, do you know how much it would cost monthly just for sending and storing temperature logs for each sensor (remember, there are 80 of them)?
Is there any language/environment/protocol that works better for IoT? Because it's a constant flow of lightweight data...
Is there a better way to connect them to the internet besides using an ESP32 module for each? (I saw a tutorial that said it's possible to connect several sensors to a single ESP32 module.)
If you have any advice I'd love to hear it. I know how to code, but when it comes to backend and especially server stuff I have a lot to learn.
Costs are directly related to the amount of data you send, process and store. You'd have to check the price lists for each cloud service you plan to use. If we assume that you'll be sending 1 temperature reading (with associated data such as timestamp, device id, ...) every 10 minutes using reasonable protocols (MQTT, JSON) then the total costs for all 80 devices would be perhaps a few dollars per month. The total database storage will accumulate over time and you'll be charged more, but honestly the amount of data under these conditions is ridiculously low.
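To put rough numbers on those assumptions (the payload shape, reading interval, and 30-day month below are only illustrations, not measured values):

```python
# Back-of-the-envelope data volume for 80 sensors reporting every 10 minutes.
# The payload below only illustrates "reading + timestamp + device id".
import json

sample_payload = json.dumps({"device_id": "sensor-042", "ts": 1700000000, "temp_c": 4.7})
payload_bytes = len(sample_payload)  # ~60 bytes

devices = 80
messages_per_day = 24 * 6            # one reading every 10 minutes
messages_per_month = devices * messages_per_day * 30

print(f"payload size:       ~{payload_bytes} bytes")
print(f"messages per month: {messages_per_month:,}")  # ~345,600
print(f"data per month:     ~{messages_per_month * payload_bytes / 1e6:.1f} MB"
      " (excluding MQTT/TLS overhead)")
```

At these volumes both the messaging and the accumulated storage are tiny, which is why "a few dollars per month" is a plausible ceiling.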
An ESP32 is cheap, has WiFi, and has enough performance to send data to the cloud. You can connect this micro to AWS IoT or Google Cloud IoT using the relevant library from either vendor: the AWS library or the Google IoT library. These libraries settle the questions of language and protocol on the microcontroller side - it's C and MQTT/HTTPS (but avoid the HTTPS; MQTT is much more practical). You can use JSON for the actual temperature data message. The microcontroller development takes place with either ESP-IDF (a somewhat lower-level C environment) or Arduino (a somewhat higher-level C/C++ environment). Both use FreeRTOS as the OS on the micro (note that the IoT libraries work on almost anything).
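Since the device side is C/C++, the following Python sketch (using the paho-mqtt client against a made-up broker, topic, and device id) is only meant to illustrate the MQTT-plus-JSON pattern described above, not the actual ESP32 code:

```python
# Illustration only: the real device code would be C/C++ on the ESP32 using the
# AWS/Google libraries mentioned above. Broker address, port, topic, and device id
# are placeholders; this just shows the MQTT topic + JSON payload pattern.
import json
import time

import paho.mqtt.client as mqtt  # pip install paho-mqtt

client = mqtt.Client()  # paho-mqtt 1.x style; 2.x also expects a CallbackAPIVersion argument
client.tls_set()        # the cloud brokers require MQTT over TLS
client.connect("broker.example.com", 8883)

reading = {"device_id": "sensor-001", "ts": int(time.time()), "temp_c": 4.2}
client.publish("sensors/sensor-001/temperature", json.dumps(reading), qos=1)
client.disconnect()
```

On the real device the vendor library additionally handles the cloud-specific authentication, but the topic/payload structure stays essentially the same.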
A practical alternative to ESP-IDF and Arduino (especially for a web dev) is Mongoose OS, where you can do much of the development work in JavaScript (not all of it, though). It has high-level libraries for both AWS and Google IoT (which, I assume, still use the same underlying MQTT/HTTPS client).
By far the easiest way to connect the ESP32 modules to the Internet is to have each one connect to a WiFi AP. If a single WiFi AP doesn't cover all devices, add more until they are all covered. The ESP32 does have a mesh networking library, but I would hesitate to recommend it to newbies.

For what programmatic reason do IoT-programmed devices always require cloud/server access?

I live in an area where net access is mobile or nothing. While I can occasionally get access by tethering a mobile to that network, it isn't often connected, and when it isn't connected, no local device will function on its own, no matter which protocol it uses. Why isn't there any kind of server/cloud resiliency built in, where devices can communicate in a peer fashion like Apple's Bonjour (Rendezvous? I can't remember)?
If I have an Echo device, I should be able to switch it on through an Alexa interface. I'm OK without the speech processing, which requires interpreting commands through an AWS or Google or Apple or whatever cloud, but for locally controlling a switch it seems the interface could be smart enough to route locally. I guess I may have just answered my own question: it seems as though routes could be internally stored so as not to strictly require a server.
Can you imagine shipping a colony to Mars and all the IoT devices stop working? If you ask me, they should not require a branch variation or special programming in order to function.
From the experience of having sat down and built a few, there are some key reasons why viable IoT gadget products for the general market typically end up having to have a cloud-mediated mode, no matter what was envisioned when the design effort originally commenced:
General consumers want (or at least think they want) the option to control things when outside the home.
Often, even at home, a mobile phone may be on the mobile network rather than WiFi, meaning that even if the user is physically inside their home, in network terms they are not.
Firmware updates, dynamic content, etc. are easier when they don't have to be relayed through a mobile phone or PC, especially a mobile that might sometimes have to jump networks partway through the process.
Ironically, having once set out to build an IoT product that could work entirely offline, I found that the further the project progressed, the more difficulties that approach presented for general users, and the more the cloud path that had been added as an option started to look like how things should work all the time, so that it could become the exclusive focus of development effort.
My conclusion is that it's very hard to build an offline IoT gadget. Not only the developer but also the users and marketing people need to understand and accept what sorts of difficulties and limitations that can mean.
So where does it happen? In situations where the "users" are the "developers" - e.g. open source. If you look around a bit, you'll find plenty of gadgets either built from scratch or, more commonly, reverse engineered so they can run a custom firmware. Want a local RESTful API? Done! Want cloud relay via MQTT over SSL to your own broker? Done!
When you control the code, you control the mode.
But with products for the general market, most customers want things to work, not a lengthy technical explanation of why the details of their network setup mean they cannot.
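To make the "local RESTful API? Done!" point above concrete, here is a minimal, hypothetical sketch of what such a device-local endpoint can look like. The set_relay function, port, and routes are invented for illustration, and Flask stands in for whatever HTTP server a real custom firmware embeds.

```python
# Hypothetical sketch of a LAN-only control API -- no cloud round-trip involved.
# `set_relay` stands in for whatever GPIO/driver call the real firmware would make.
from flask import Flask, jsonify  # pip install flask

app = Flask(__name__)
relay_state = {"on": False}


def set_relay(on: bool) -> None:
    relay_state["on"] = on  # stand-in for the real hardware call


@app.route("/switch/<state>", methods=["POST"])
def switch(state: str):
    set_relay(state == "on")
    return jsonify(relay_state)


@app.route("/switch", methods=["GET"])
def status():
    return jsonify(relay_state)


if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)  # reachable from the local network
```

Anything on the LAN can then toggle the switch with a plain HTTP POST to /switch/on, with no server or cloud involved.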

RTMP vs RTSP/RTP: Which to choose for an interactive livestream?

If you are trying to develop an interactive livestream application, you rely on ultra-low (real-time) latency, for example for a video conference or a remote laboratory.
The two protocols that should be suitable for these circumstances are:
RTSP, while transmitting the data over RTP
RTMP
*WebRTC: As I'm trying to give a bigger audience the possibility to interact with each other, WebRTC is not suitable, because as far as I know it is not designed for a bigger audience.
My questions:
Which one should I choose for this use-case? RTSP/RTP or RTMP?
Which protocol delivers better results regarding end-to-end latency, session start-up time?
Which one consumes more hardware resources?
RTMP seems to use a persistent TCP connection. But which protocol is used for the transmission? It cannot be TCP, because this could not ensure real-time latency?
What are, in general, the pros and cons of using either protocol?
I did not find any comparison of these two protocols in scientific papers or books. Only that the famous mobile live-streaming app Periscope is using RTMP.
Other apps like Instagram or Facebook, for example, provide text-based interaction with the streamer. If developers want to build the next "killer application" based on interactive live-streams, I think this question is essential to answer.
You make a lot of assumptions in your question.
WebRTC: As I'm trying to give a bigger audience the possibility to interact with each other, WebRTC is not suitable, because as far as I know it is not designed for a bigger audience.
That's simply not true. WebRTC doesn't know or care how you structure your applications server-side. There are plenty of off-the-shelf services for handling large group calls and low latency video distribution via WebRTC.
You should also know that for the media streams, WebRTC is RTP under the hood.
It cannot be TCP, because this could not ensure real-time latency?
Of course it can. There's some overhead with TCP, but nothing that prevents you from using it in a real-time scenario; the overhead is minimal.
UDP is traditionally used for these sorts of scenarios, as reliability isn't required, but that doesn't mean TCP can't be used almost as performantly.
RTMP
RTMP is a dead protocol, left over from Flash. No browsers support it, and other clients only support it for legacy reasons. You shouldn't use it for anything new going forward.
Only that the famous mobile live-streaming app Periscope is using RTMP.
Well, that's not a reason to do much of anything.
Which protocol delivers better results regarding end-to-end latency, session start-up time?
WebRTC
Which one consumes more hardware resources?
That's not the right question to ask. Your overhead in almost any other part of the application is going to be far more than the transport overhead of the protocol used for distribution.
The real list of things you need to think about:
Client compatibility. What sort of clients must you support?
Do you really need low latency everywhere? Do you understand the tradeoffs you're making with that demand? Are you willing to destroy any sense of video quality and reliability for all your users if only a handful of them are going to be interactive?
What's your budget? Off-the-shelf solutions for distribution are much cheaper. If you can push off your stream to YouTube for non-interactive users, you can save yourself a ton of money. If you can't use existing infrastructure, be prepared to spend mountains of cash.
What are your actual latency requirements? Are you prepared to reduce the number of people that can use your application when these latency requirements cannot be met on crappier networks and mobile devices?
What are your quality requirements?
Where will you transcode video to a variety of bitrates?
Do your viewers need adaptive bitrate viewing?
Do you need to push streams to other platforms simultaneously?
Do you need to record the stream for on-demand viewing or for going back in time?
You might also find my post here helpful: https://stackoverflow.com/a/37475943/362536
In short, check your assumptions. Understand the tradeoffs. Make decisions based on real information, not sweeping generalizations.

Ocropus Engine on iPhone and/or Android

What is the best way to get Ocropus running on iOS and/or Android?
I'm interested in using Ocropus to digitize some content on mobile devices. I'm largely interested in using a trained 'language' model to make predictions on the device. Training will occur offline and off-device. I know a few people have got Tesseract running on mobile devices, but I'm unable to find much information on doing the same with Ocropus. I'd greatly appreciate a slice of your collective wisdom in an effort to avoid wasting days taking the wrong path.
Would it be easier to just prototype the algorithm using the scripts and then grab the specific C++ code of interest and include it directly in my application, or would it be best to compile it as a static/dynamic library?
It would be better to set up a simple web service that uses Ocropus, or any OCR library for that matter, and have your smartphone application make requests to that web service. OCR is a CPU-intensive process, so it's appropriate to move it off the phone.
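As a hedged sketch of what that web service could look like (the /ocr endpoint and port are invented, and pytesseract is used only as a stand-in engine, since the point is "any OCR library"):

```python
# Tiny OCR-as-a-service sketch: the phone POSTs a photo, the server returns text.
# pytesseract is used purely as a stand-in engine; Ocropus could be swapped in,
# e.g. by invoking its command-line tools instead.
from flask import Flask, jsonify, request  # pip install flask
from PIL import Image                      # pip install pillow
import pytesseract                         # pip install pytesseract (+ the tesseract binary)

app = Flask(__name__)


@app.route("/ocr", methods=["POST"])
def ocr():
    if "image" not in request.files:
        return jsonify(error="send the photo as multipart field 'image'"), 400
    img = Image.open(request.files["image"].stream)
    return jsonify(text=pytesseract.image_to_string(img))


if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
```

The mobile app then just uploads the photo as a multipart POST and displays the returned text, keeping all the CPU-heavy work on the server.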

iPhone Platform Constraints

I'm analyzing the iPhone platform (for a paper). I've made a list of issues developers/architects have to consider before working with the iPhone SDK.
The question is aimed at people who want to release iPhone software: what constraints restrict them in comparison to other mobile platforms, such as Android, Windows Mobile, Symbian, etc.?
Feel free to add hurdles I may have forgotten to list.
Thanks.
iPhone platform constraints/hurdles:
No physical keyboard
No replaceable battery
One Application at a Time
Sandbox File System
Restricted Deployment Cycle (Dev program...)
App Store Approval Process
No replaceable battery is no concern for software developers whatsoever, as there are no APIs for battery manipulation or replacement. This is no more of a concern for iPhone developers than "access to electricity" is a practical concern for developing for other platforms.
Others I would add:
Requires a Mac. Fairly obvious one, not a terrible barrier to entry compared to other closed systems like game consoles, but still higher than some other phone/mobile platforms like Windows Mobile, J2ME or Brew.
Costs money to debug on real hardware. You can only run and debug in the simulator unless you buy a $99 developer program subscription, which lets you pair iPhone and iPod Touch hardware with your Xcode install and run apps on it.
Objective-C as the programming language. It really shouldn't deter anyone but a lot of developers get really grumpy about learning anything new or different.
Must accommodate interruptions (i.e., the user may get a call at any time and the app must be prepared to save any state necessary and quit within a fixed time limit).
Not specific to iPhone but like any platform, you are constrained by the CPU/GPU/RAM the device has, and in the iPhone's case this is obviously quite a bit less hardware than people with a desktop background are accustomed to.
Restrictive wording in EULA regarding embedded scripting languages. It is apparently forbidden to execute any scripts via an iPhone application, which is quite a bummer as embedded scripting languages are quite common these days and very useful.
Limited CPU speed
Limited RAM
Objective-C is effectively the main dev language
Power management concerns (I'm not sure if lack of a replaceable battery is a concern of mine). High CPU utilization can be a drain on the battery (and cause extra heat); in other words, there are CPU-intensive things I choose not to do, in order not to drain the battery too fast.
Only one IDE
Inability to access other apps' data easily