Does anyone know if USB modems are capable of detecting if there is a dial tone on a phone line without taking the line off the hook? I have read that you need to have the modem "open" the line and then there is a command that can be sent to check for dial tone. Is opening the line the same as taking the phone off the hook? If I do this while a call is coming in, will the modem in essence be answering the call?
If this is not the way to go, are USB modems capable of voltage detection? If so, is there a specific voltage that indicates the presence of a dial tone?
Thanks!
A useful primer on telephony.
To determine whether there is dial tone, you have to take the line off hook and listen. The presence of different voltage levels can tell you there is a valid circuit, or that something else is using the line, but it is not a reliable way to know whether it is safe to dial.
Yes, I believe opening the line is the same as taking the line off hook. Note that there isn't a dial tone until you go off hook; the telco detects the off-hook condition and starts playing the tone as an availability indicator.
If you do this while a call is coming in, you would be answering it. When working with automated systems this is a problem, which is why inbound and outbound lines are usually segregated. I've seen some techniques that assume an inbound call if there isn't dial tone. If the empty line/human never responds, you only lose the availability of the line for the time it takes to determine a lack of response (timeout plus retries).
Modems do perform voltage detection (ringing and other conditions), but voltage does not indicate dial tone. Again, dial tone isn't present until you complete the circuit and the telco switch responds. Side note: there is usually a limit to the number of concurrent channels that can continuously play dial tone, and this can sometimes cause interesting issues when opening large numbers of channels for an extended period of time.
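If it helps, the usual way to script this with a Hayes-compatible USB modem is to enable extended result codes (ATX4) and then dial; the modem itself then reports NO DIALTONE if it can't hear one. The sketch below is illustrative only: the serial device name, baud rate, and timings are assumptions, going off hook is unavoidable (so it will answer a ringing call), and chipsets differ in exactly how they report the result.

```python
# Minimal sketch, assuming a Hayes-compatible USB modem exposed as a serial
# device. Port name, baud rate, and wait times are assumptions.
import time
import serial  # pyserial

def send(port, cmd, wait=2.0):
    """Send an AT command and return whatever the modem replies."""
    port.reset_input_buffer()
    port.write((cmd + "\r").encode("ascii"))
    time.sleep(wait)
    return port.read(port.in_waiting or 1).decode("ascii", errors="replace")

with serial.Serial("/dev/ttyACM0", 115200, timeout=1) as modem:
    send(modem, "ATZ")    # reset to defaults
    send(modem, "ATX4")   # extended result codes: NO DIALTONE, BUSY, ...
    # Dial "nothing": goes off hook, listens for dial tone, and the trailing
    # ';' returns to command mode instead of waiting for a carrier.
    result = send(modem, "ATDT;", wait=6.0)
    send(modem, "ATH0")   # hang up again either way
    if "NO DIALTONE" in result:
        print("no dial tone (line dead, in use, or off hook elsewhere)")
    elif "OK" in result:
        print("dial tone detected")
    else:
        print("unexpected response:", result.strip())
```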
I'm researching and trying to build an RC car that can be controlled over the internet. I've started looking into how communication over the web works, but I seem to be going nowhere. My goal for the project is straightforward:
The RC car has an on-board camera and a 4G Wi-Fi router that enables communication (driving commands, video streaming) over the internet. A Raspberry Pi will serve as the on-board computer.
I will be able to control the car with my PC even across the globe, as long as I'm connected.
I would prefer to do as much as possible myself, without relying too much on other people's code.
So here are my questions:
How does an application communicate over the internet? What is the interface between the application's logic (e.g. pressing "w" to go forward) and transmitting/receiving that command over the internet?
How is the video data stream handled?
I've looked into WebRTC and WebSockets for communication, but they are aimed at providing real-time communication for web browsers and mobile, not something like a Raspberry Pi, and I'm still in the dark as to exactly what technology I should use and, more generally, about the overall architecture of real-time communication.
All I've achieved so far is an app that sends text messages between devices through a server on my network, with very primitive reading/writing using a Java Socket.
In short, what do Messenger/Skype/Zoom do in the background when you send a message or make a video call?
Any guidance would be greatly appreciated.
First things first: you cannot do real-time control over the Internet, period. There is absolutely no way to guarantee delivery latency. Your control commands can arrive with a delay of anywhere from milliseconds to seconds, or never. There is no way around it.
Now, you can still take a number of reasonable steps to absorb that unpredictable latency as much as possible and safeguard your remote robot from the consequences of unreliable communication.
For example, instead of sending the drive commands directly (acceleration, deceleration, turn angle, etc.), you can send a projected trajectory that is calculated from your drive commands locally on a model. Your RC car must be sufficiently smart to do some form of localisation - at the very least, wheel odometry - and with good enough time sync between the sender and the RC car you'll be able to control its behaviour remotely without the nasty consequences of drive commands executed after an unpredictable delay.
You can add a heartbeat to your protocol to monitor the quality of the communication line, and if the heartbeat is delayed or missing, initiate an emergency stop.
Also, don't bother with TCP; use UDP only and maintain your own sequence counter to detect missing packets. The same applies to the telemetry stream, not just the command channel.
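To make the command-channel idea concrete, here is a rough sketch of the sender side. Everything in it is an assumption for illustration (the address, port, packet layout, and message types are made up, not an existing protocol); the car-side receiver would track the highest sequence number seen to spot gaps, and trigger the emergency stop when the last heartbeat is too old.

```python
# Illustrative UDP command channel with sequence numbers and a heartbeat.
# CAR_ADDR and the packet layout are placeholders, not a real protocol.
import socket
import struct
import threading
import time

CAR_ADDR = ("192.168.1.50", 9000)          # assumption: the car's address
PACKET = struct.Struct("!IdBff")           # seq, send-time, type, throttle, steering
CMD, HEARTBEAT = 0, 1

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
seq = 0
lock = threading.Lock()

def send(msg_type, throttle=0.0, steering=0.0):
    """Send one datagram, stamped with a sequence number and send time."""
    global seq
    with lock:
        seq += 1
        payload = PACKET.pack(seq, time.time(), msg_type, throttle, steering)
    sock.sendto(payload, CAR_ADDR)

def heartbeat_loop():
    # The car should e-stop if it hasn't seen a heartbeat for, say, 500 ms.
    while True:
        send(HEARTBEAT)
        time.sleep(0.1)

threading.Thread(target=heartbeat_loop, daemon=True).start()

# Pressing "w" in the UI would end up calling something like:
send(CMD, throttle=0.5, steering=0.0)
```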
Our application uses the Twilio Voice SDKs for iOS, Android, and Web. Our use case relies on precise device synchronization and timestamping. We are playing an audio stream on multiple adjacent devices (in a Twilio conference call) and we need that audio playback to be in sync. Most of the time it works great, but every now and then one of the devices falls a little bit behind and throws off the whole experience. We want to detect when a device is falling behind (receiving packets late) so we can temporarily mute it and it does not spoil the user experience we are going for.
We believe that Twilio Voice uses WebRTC and the Real-time Transport Protocol (RTP) under the hood. We also believe RTP carries timestamp information for when packets are sent out and when packets are received.
We are looking for any suggestions for how we might read this timestamp information (both sent & received) to detect device synchronization issues.
Our iOS and Android clients are built using Flutter and Dart, so any way to look at this packet information using Dart would be great. If not, we can use native channels through Swift and Kotlin. For the web, we would need a way to look at this timestamp data using JavaScript.
If possible, we'd like to access this information through the SDK. I don't see anything about timestamps in Twilio's voice documentation. So, if not possible, we might have to sniff for packets on the devices? This way, we could look at the RTP packets coming from Twilio to see what information is available. As long as this does not break Twilio terms of service, of course :)
Even if you could get this information I don't think it will be useful. The timestamp field in RTP has little to do with real time. In voice it's actually a sample offset into the audio stream. With a typical narrowband codec with a fixed bit rate and no silence suppression it's completely predictable from the RTP sequence number. For example, with 20ms packets of G.711 it will increment by exactly 160 each packet.
RTP receivers expect there to be random variation between the receipt time of a packet and its timestamp - known as jitter. This is introduced by delays at the sender, in the network, and at the receiver, and it is why receivers use jitter buffers to reduce the likelihood of buffer underrun during playback. The definition of jitter for RTCP - the interarrival jitter - is a calculation that measures exactly this: the variation between the (predictable) RTP timestamp and the measured wallclock arrival time at the receiver.
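For reference, the interarrival jitter estimator from RFC 3550 looks roughly like the sketch below. The function name and state dictionary are mine for illustration; the only real parts are the formula (smoothing gain 1/16) and the use of the codec clock rate, e.g. 8000 Hz for G.711.

```python
# Sketch of the RFC 3550 interarrival jitter estimator.
# rtp_timestamp is in codec clock units (8000 Hz for G.711);
# arrival is the local wallclock arrival time in seconds.
def update_jitter(state, rtp_timestamp, arrival, clock_rate=8000):
    arrival_ts = arrival * clock_rate      # arrival time converted to RTP units
    transit = arrival_ts - rtp_timestamp   # one-way transit (includes clock offset)
    if "last_transit" in state:
        d = abs(transit - state["last_transit"])
        # Exponential smoothing with gain 1/16, as specified in RFC 3550.
        state["jitter"] = state.get("jitter", 0.0) + (d - state.get("jitter", 0.0)) / 16.0
    state["last_transit"] = transit
    # Result is in RTP clock units; divide by clock_rate for seconds.
    return state.get("jitter", 0.0)
```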
Maybe you need something more like an NTP protocol between your client and your server.
I need to check whether or not a device can support VoIP calls at a certain level of quality. My approach (and I accept there may be a better one) is to run an Internet connection speed test on the user's iOS device immediately before the call is placed. The speed test should, as accurately as possible, determine whether the impending VoIP call will be of good or poor quality.
The voip call includes live video (similar to Skype).
I'm aware of the following techniques to measure connection speed:
Download or upload a file and measure how long it takes.
as outlined here. Will measuring download or upload speed give an accurate picture of what VoIP call quality will be like for the immediate call? Also, files may be cached, or speeds may even be throttled by the ISP.
Ping a reliable server (e.g. google.com) with ICMP packets.
One potential problem with this approach is that (I've heard) some routers are configured to give ICMP packets lower priority than other traffic, so they cannot be used as an accurate measure of bandwidth/speed/reachability, etc. Is this so?
Is measuring network connection speed an effective way to predict VoIP call quality? If so, what is an effective and quick (i.e. less than 3 seconds) way to measure Internet connection speed for this purpose?
The actual data of a VoIP call is carried over RTP, which really only takes 24-64 kbps (depending on the codec) and requires a UDP path in each direction. Occasional RTCP packets are sent to report status, metrics, etc., but they are not strictly needed.
SIP is used for call setup and teardown.
The RTCP packets carry (minimal) call quality metrics.
Several parameters influence call quality, including choice of codec, available bandwidth, network latency, packet loss (RTP is over UDP so no retransmission), and jitter (inter-packet arrival delay, out of order delivery).
(Cisco) switches implement RED (random early detection), a technique to reduce queue depth by randomly discarding network packets. For TCP connections that is acceptable, because TCP retransmits via a sliding-window protocol, and many UDP-based protocols implement retransmission at the application layer. But RTP does not afford that luxury, so random discard of voice packets impairs connection quality. One solution to RED would be to tunnel VoIP over a TCP connection, but that wasn't the choice made.
Congested networks are a huge source of VoIP call quality problems, and that can be measured during the initial few seconds of a call. Packets dropped due to jitter and delayed packets (high network latency) are the two main causes of call quality degradation. I worked on a VoIP quality-of-service monitoring system, and we observed that the worst calls had high jitter and high latency (above ~70 ms is bad). Avoid high-latency, congested networks. Choice of codec can have a huge impact on quality: higher-compression codecs lose more to packet loss than less 'efficient' codecs, so pick a codec that uses higher bandwidth (good luck).
IP networks need QoS guarantees to provide the best VoIP quality, and until TCP/IP is redefined to include QoS, VoIP will have (potential) problems.
Your approach is close. But you want to measure:
UDP
Packet loss
Congestion
Latency
Packet jitter
You need to timestamp and number your packets, detect high latency and interarrival jitter, and avoid measuring over TCP (packet retransmission will skew your quality numbers, and TCP reorders packets into sequence, which introduces delay). You also want to know the quality in both directions. You might find that codec selection is a huge factor in improving the calls.
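A probe along those lines can be quite small. The sketch below is illustrative: it assumes you run a trivial UDP echo server of your own at PROBE_ADDR (the host and port are placeholders), and the loss/latency/jitter arithmetic is deliberately crude; tune the packet count and spacing to fit the 3-second budget.

```python
# Quick-and-dirty UDP probe: numbered, timestamped packets against an
# assumed echo server, reporting loss, round-trip latency, and jitter.
import socket
import struct
import time

PROBE_ADDR = ("probe.example.com", 5005)   # assumption: your own UDP echo server
COUNT, INTERVAL = 50, 0.02                 # 50 packets, 20 ms apart (~1 s + waits)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.settimeout(0.25)

rtts = []
for seq in range(COUNT):
    sock.sendto(struct.pack("!Id", seq, time.time()), PROBE_ADDR)
    try:
        data, _ = sock.recvfrom(64)
        _echoed_seq, sent_at = struct.unpack("!Id", data)
        rtts.append(time.time() - sent_at)
    except socket.timeout:
        pass                               # lost or very late: counts against quality
    time.sleep(INTERVAL)

loss = 1.0 - len(rtts) / COUNT
avg_rtt = sum(rtts) / len(rtts) if rtts else float("inf")
# Crude jitter: mean absolute difference between consecutive round-trip times.
jitter = (sum(abs(a - b) for a, b in zip(rtts, rtts[1:])) / (len(rtts) - 1)
          if len(rtts) > 1 else 0.0)
print(f"loss={loss:.0%}  avg_rtt={avg_rtt * 1000:.1f} ms  jitter={jitter * 1000:.1f} ms")
```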
The company I worked for when building the monitor (Telchemy) licensed their VQMon software as a product to measure quality, so the tool you want already exists.
There are some iOS SIP applications that are able to communicate with a UDP-only SIP server.
As far as I know, iOS only allows a TCP connection to remain open in the background, but most SIP providers support only UDP.
I have noticed that the iOS application 3CXPhone has a "NAT helper mode" and is able to keep communicating in the background with a 3CX Phone System that is UDP-only. Does anyone know what trick they use? I am developing a SIP app and I have to make it work with UDP-only SIP providers.
I know there are multiple questions regarding UDP sockets in the background on SO, but none of them has a useful answer, or the solution proposed there no longer works (as of iOS 6).
So far I am aware of two possible solutions:
1. Use some GPS events and maintain the socket communication during those events too. After that, try to trick Apple and get your app into the store.
2. Use a SIP proxy in the middle (B2BUA). But in 3CXPhone's "NAT helper mode" I don't see any SIP proxy configuration.
If you really need a UDP socket you will need a few things:
UIRequiresPersistentWiFi: to ensure that iOS connects to Wi-Fi and doesn't turn it off after some time (I'm assuming you want Wi-Fi as well; if not, just ignore this one).
Play an empty audio file in the background in a loop to keep your application active.
Have a timer that fires every ten seconds or so and sends a small message (e.g. a CRLF) to the server.
The last step is needed to keep the UDP "connection" alive in the network. If you don't send something regularly, something in the network (e.g. a NAT router) will drop the binding.
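This is not iOS code, but just to illustrate what that keep-alive traffic amounts to on the wire (the server address is a placeholder, and on iOS the scheduling would come from the audio-session-backed timer described above):

```python
# Illustration of the NAT keep-alive idea only; the SIP server address is an
# assumption, and real iOS scheduling would use the background audio trick.
import socket
import time

SIP_SERVER = ("sip.example.com", 5060)   # placeholder: your provider's UDP SIP endpoint
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

while True:
    sock.sendto(b"\r\n", SIP_SERVER)     # a bare CRLF is enough to refresh the NAT binding
    time.sleep(10)                       # roughly every ten seconds, as above
```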
The empty audio file is the only way to ensure you can do something in the background in short intervals (the ten second timer).
After writing all that: this will consume a lot of battery. Users will not leave your app running for long.
Most modern SIP servers support TCP. You should really spend your time on a TCP solution. The UDP one won't be accepted by your users.
I am writing a project in C# and I want to dial another party using AT commands, but it doesn't work correctly: the connection is made and I hear the sound of the other side, but they don't hear my sound.
My modem is a voice modem,
and I use ATDT0941221225425;
Your question is a little confusing, but it seems like you're saying that your modem does dial and connect, but you don't hear the touch tones?
If this is correct, you may need to add some more commands to your command string.
Try sending ATM1L3 to your modem before you send the DT command. This should set the modem's monitor speaker to stay on until the carrier signal is detected (M1) and to be at maximum loudness (L3).
Unfortunately, I don't have a modem at hand to test this and am primarily working from 15+ year old knowledge.
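If it helps, here is the sequence spelled out end to end. It is shown in Python purely to make the command order concrete (the same AT strings can be sent from C# via System.IO.Ports.SerialPort); the port name, baud rate, and delays are assumptions.

```python
# Sketch: send the speaker settings before dialing. Port name and timings
# are assumptions; the dial string is the one from the question.
import time
import serial  # pyserial

with serial.Serial("COM3", 115200, timeout=1) as modem:
    for cmd in ("ATZ", "ATM1L3", "ATDT0941221225425;"):
        modem.write((cmd + "\r").encode("ascii"))
        time.sleep(1)
        print(modem.read(modem.in_waiting or 1).decode("ascii", errors="replace"))
```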
There's a reference of Hayes modem commands at http://seriss.com/people/erco/unixtools/hayes.html, and the Wikipedia article on the Hayes Command set at http://en.wikipedia.org/wiki/Hayes_command_set also has a list.