Why would I want a USB alternate interface with no endpoints?

The example code that Atmel gives for USB devices has an interface with two alternate settings. The first one has no endpoints, the second has 6 endpoints. Is there any reason for this - why not just have one alternate setting with all the endpoints?
I found a vague post somewhere on the internet suggesting it might be something to do with power saving. Does anyone have any idea?

Ah, so it seems it is because an interface with isochronous endpoints reserves bandwidth on the USB bus. By having a default alternate setting with no isochronous endpoints, you avoid reserving that bandwidth when the device is idle.
Sources:
http://www.makelinux.net/ldd3/chp-13-sect-1
The initial state of an interface is the first setting, numbered 0. Alternate settings can be used to control individual endpoints in different ways, such as to reserve different amounts of USB bandwidth for the device. Each device with an isochronous endpoint uses alternate settings for the same interface.
https://msdn.microsoft.com/en-us/library/windows/hardware/jj124028(v=vs.85).aspx
This test verifies that when any device has an interface that consumes isochronous bandwidth, that device supports multiple alternate settings for that interface, and that alternate setting 0 (zero) does not consume isochronous bandwidth.

For audio, you must always provide a zero-bandwidth alternate setting for when the device is not in use:
Whenever an AudioStreaming interface requires an isochronous data endpoint, it shall at least provide the default Alternate Setting (Alternate Setting 0) with zero bandwidth requirements (no isochronous data endpoint defined) and an additional Alternate Setting that contains the actual isochronous data endpoint.
From UAC 3.0
Same for video:
All devices that transfer isochronous video data must incorporate a zero-bandwidth alternate setting for each VideoStreaming interface that has an isochronous video endpoint, and it must be the default alternate setting (alternate setting zero). A device offers to the Host software the option to temporarily relinquish USB bandwidth by switching to this alternate setting. The zero-bandwidth alternate setting does not contain a VideoStreaming isochronous data endpoint descriptor.
From UVC 1.5
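To make the descriptor layout concrete, here is a minimal sketch, in Go since descriptors are just bytes on the wire, of what the two alternate settings might look like. The class codes, endpoint address, and packet size are illustrative picks for an audio-style device, not values taken from the Atmel example.

```go
package main

import "fmt"

// Standard USB interface descriptor (9 bytes, USB 2.0 spec section 9.6.5).
// Alternate setting 0: the default, with NO endpoints, so selecting it
// reserves no isochronous bandwidth on the bus.
var altSetting0 = []byte{
	9,    // bLength
	4,    // bDescriptorType: INTERFACE
	1,    // bInterfaceNumber
	0,    // bAlternateSetting: 0 (the default)
	0,    // bNumEndpoints: zero -- nothing to reserve
	0x01, // bInterfaceClass: Audio (illustrative)
	0x02, // bInterfaceSubClass: AudioStreaming (illustrative)
	0x00, // bInterfaceProtocol
	0,    // iInterface: no string descriptor
}

// Alternate setting 1: same interface number, but one isochronous endpoint.
var altSetting1 = []byte{
	9, 4, 1, // bLength, INTERFACE, bInterfaceNumber (same interface)
	1,                // bAlternateSetting: 1, selected when streaming starts
	1,                // bNumEndpoints: the isochronous endpoint below
	0x01, 0x02, 0x00, // class/subclass/protocol as above
	0, // iInterface
}

// Endpoint descriptor (7 bytes) that follows altSetting1. Its wMaxPacketSize
// is what actually reserves bus bandwidth once the host selects alt 1.
var isoEndpoint = []byte{
	7,          // bLength
	5,          // bDescriptorType: ENDPOINT
	0x81,       // bEndpointAddress: EP1 IN
	0x01,       // bmAttributes: transfer type = isochronous
	0xC0, 0x00, // wMaxPacketSize: 192 bytes per frame (illustrative)
	1, // bInterval
}

func main() {
	fmt.Printf("alt 0: %d endpoints (no bandwidth reserved)\n", altSetting0[4])
	fmt.Printf("alt 1: %d endpoint, iso EP reserves %d bytes/frame\n",
		altSetting1[4], int(isoEndpoint[4])|int(isoEndpoint[5])<<8)
}
```

When the host issues SET_INTERFACE to select alternate setting 1, the endpoint's wMaxPacketSize is reserved on the bus; switching back to alternate setting 0 releases it.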

Related

Should I use RTP or WebRTC for local network audio communication

I have a set of Raspberry Pi Zeros that I would like to use as a home intercom. I initially set them up to send audio to each other using Go with gRPC and bidirectional streaming, which works for short calls, but the lag builds up over time, so I think I need to switch to a real-time protocol like RTP or WebRTC.
Since I already know the IP address of each device, the hardware/supported codecs are the same for each, and they are all on the same network, is there any advantage to using WebRTC over plain RTP? My understanding is that WebRTC mainly adds security and connection orchestration like ICE and SDP, which I wouldn't necessarily need. I am trying to minimize resource usage, since these devices are not as powerful as a phone or desktop. If I do use WebRTC, I can do the SDP signaling with gRPC or some other direct delivery method.
Since there are more than 2 devices, I'm also curious about multicast, which seems to be pure-RTP specific; WebRTC (which uses RTP) doesn't necessarily support multicasting and would require a full mesh of n(n-1)/2 p2p connections. I'm very unclear/unsure about this point.
Also, does either support mixing audio channels natively, or would that need to be handled in the custom software?
You could use WebRTC, but you'd need to rig up a signalling server and a STUN/TURN server. These can be super simple and low capacity because everything is on a private network, but you still need 'em. The signalling server handles the necessary SDP interchange. Going full WebRTC might be overengineering this. (But of course learning to get WebRTC working can be useful.)
You've built out a golang infrastructure. Seeing as how you're on a private network, you could change up that program to send multicast UDP packets or RTP packets. Then you can rig your listeners to listen to them.
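A minimal sketch of that multicast approach in Go, roughly in the spirit of the existing setup; the group address and port are arbitrary picks, not anything standard:

```go
package main

import (
	"log"
	"net"
	"time"
)

// Arbitrary multicast group/port for illustration; any address in the
// administratively scoped 239.0.0.0/8 range works on a private LAN.
const group = "239.0.0.42:5004"

func send() {
	addr, err := net.ResolveUDPAddr("udp4", group)
	if err != nil {
		log.Fatal(err)
	}
	conn, err := net.DialUDP("udp4", nil, addr)
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	for {
		// In the real intercom this would be a 20-50 ms audio frame.
		if _, err := conn.Write([]byte("audio frame")); err != nil {
			log.Println(err)
		}
		time.Sleep(20 * time.Millisecond)
	}
}

func listen() {
	addr, err := net.ResolveUDPAddr("udp4", group)
	if err != nil {
		log.Fatal(err)
	}
	conn, err := net.ListenMulticastUDP("udp4", nil, addr)
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	buf := make([]byte, 1500) // one Ethernet MTU; plenty for short frames
	for {
		n, src, err := conn.ReadFromUDP(buf)
		if err != nil {
			log.Println(err)
			continue
		}
		log.Printf("got %d bytes from %s", n, src)
	}
}

func main() {
	go send()
	listen()
}
```

Every device runs both halves, so each node hears every other node without any per-peer connection setup.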
No matter what you do, you'll need to deal with the lag. A good way to do it in the packet world: don't build a queue of buffers ready to play. Instead, always put each received packet as the next-to-play packet, even if you have to overwrite a previously received packet. (That is, skip ahead.) You may get a pop once in a while, but with reasonably short packets, under 50ms, it shouldn't affect the user experience significantly. And the lag won't build up.
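The skip-ahead idea is small enough to sketch. Here it is in Go: a one-slot playout buffer where the newest packet always wins (names are illustrative, not from any library):

```go
package main

import "sync"

// latestWins is a one-slot playout buffer: the receive loop always
// overwrites the slot, so the player always gets the freshest frame and
// lag cannot accumulate. An overwritten frame is simply skipped, which
// is the occasional pop mentioned above.
type latestWins struct {
	mu    sync.Mutex
	frame []byte
	fresh bool
}

// Put is called from the network receive loop for every arriving packet.
func (b *latestWins) Put(frame []byte) {
	b.mu.Lock()
	b.frame = frame // overwrite whatever was waiting, even if unplayed
	b.fresh = true
	b.mu.Unlock()
}

// Next is called by the audio playback callback. ok is false when nothing
// new has arrived, in which case the player can output silence or repeat.
func (b *latestWins) Next() (frame []byte, ok bool) {
	b.mu.Lock()
	defer b.mu.Unlock()
	if !b.fresh {
		return nil, false
	}
	b.fresh = false
	return b.frame, true
}
```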
The oldtimey phone system ran on a continent-wide synchronous 8 kHz clock, so lag was not an issue. But lag is always a problem when the audio analog-to-digital and digital-to-analog clocks aren't synchronized, which is the case whenever they are on different devices. The slightest drift builds up over time. (RPis don't have fifty-dollar clock parts with guaranteed low drift in them.)
If all your audio sources run at the same sample rate, you can average them to mix them. That should get you started. (If you're using WebRTC in a browser, it will mix multiple sources for you.)
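A sketch of that averaging mixer in Go, assuming signed 16-bit PCM frames of equal length and a shared sample rate:

```go
package main

// mix averages several equal-length frames of signed 16-bit PCM samples.
// Averaging (rather than summing) cannot clip, at the cost of making
// each individual source quieter as more sources join.
func mix(frames [][]int16) []int16 {
	if len(frames) == 0 {
		return nil
	}
	out := make([]int16, len(frames[0]))
	for i := range out {
		var sum int // int is wide enough that 16-bit samples cannot overflow
		for _, f := range frames {
			sum += int(f[i])
		}
		out[i] = int16(sum / len(frames))
	}
	return out
}
```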
Since you are using Go check out offline-browser-communication. This removes the need for Signaling and STUN/TURN. It uses mDNS and pre-generated certificates. It is also being discussed in the WICG Discourse no idea if/when it will land.
'Lag' is a pretty common problem when doing media over TCP: you are dealing with lots of queues and congestion control. WebRTC (and RTP in general) is great at solving this. You have the following standardized tools (a small parsing sketch follows the list):
RTP packets carry a relative timestamp.
RTP Sender Reports carry a mapping of the relative timestamp to an NTP timestamp. Use this for sync/timing.
RTP Receiver Reports give you packet loss/jitter. Use this to assess your network health.
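For illustration, reading those RTP fields with Pion's standalone rtp package might look roughly like this; the port is an arbitrary choice:

```go
package main

import (
	"log"
	"net"

	"github.com/pion/rtp"
)

func main() {
	// Listen on an arbitrary port; senders would address their RTP here.
	conn, err := net.ListenUDP("udp4", &net.UDPAddr{Port: 5004})
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	buf := make([]byte, 1500)
	for {
		n, _, err := conn.ReadFromUDP(buf)
		if err != nil {
			log.Fatal(err)
		}
		pkt := &rtp.Packet{}
		if err := pkt.Unmarshal(buf[:n]); err != nil {
			log.Println("not RTP:", err)
			continue
		}
		// Timestamp is in units of the codec clock rate (e.g. 48000 Hz
		// for Opus); SequenceNumber reveals loss and reordering.
		log.Printf("seq=%d ts=%d ssrc=%d payload=%d bytes",
			pkt.SequenceNumber, pkt.Timestamp, pkt.SSRC, len(pkt.Payload))
	}
}
```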
Multicast is a fantastic suggestion as well. You reduce the complexity of having to signal all those 1:1 connections, and you reduce the amount of bandwidth required. It does make security a little more delicate, though; you have to roll your own.
With Pion we decoupled all the RTP/RTCP stuff into Pion Interceptor, so you don't have to use the full WebRTC stack to get the media-transport things mentioned above.

SIP and RTP in VoLTE

I am investigating the SIP signaling and RTP media in VoLTE traffic. I can see the RTP header, but I was told that the RTP payload and the SIP packets are all encrypted with IPsec. Is this true? If so, at which interface can I see the decrypted packets?
Thanks.
VoLTE is based on IMS (IP Multimedia Subsystem), a very broad and encompassing set of specifications for an architectural framework that enables multimedia communication between IP-connected endpoints.
Because it is so broad and all-encompassing, there are many different security points and interfaces. For example, there are security specs for communication between an access-network device (such as a mobile phone) and the core, for communication between different nodes within a single core network, and for communication between different operators' or organisations' core networks.
3GPP and LTE build on the IMS specs and also include security specs specific to the mobile world. The 3GPP spec covering access security for IMS (3GPP TS 33.203) includes a diagram (not reproduced here) in which each numbered link is a different security 'association', and the standard references one or more specifications for each one.
The result of all this security complexity and these many security layers is that the answer to your question depends on the point in the network you are looking at. For example, if you intercept the traffic between the phone and the base station, you will not be able to see anything, as it will all be encrypted at a lower layer (notwithstanding the latest GSM/3G security hacks). Similarly, traffic between core network nodes or between different networks may run over IPsec tunnels, and again you will not be able to see it.
If your aim is to intercept and eavesdrop on VoLTE voice calls, you are going to find this very hard, as many of the above mechanisms are designed to prevent exactly that. I won't say it is impossible, as I'm sure someone can reference a hack or a 'government backdoor' example for similar technology.
If your interest is academic, or in profiling the performance of the network, then you may be able to achieve what you want using one of the open-source IMS solutions, e.g. http://www.openimscore.org.
Or, if you are working for or with one of the network equipment vendors, you may be in a position to insert or leverage network-management and/or OSS 'hooks' that let you gather unencrypted data at certain points in the end-to-end flow.

CANopen profile for multiple interfaces card

I want to build a microcontroller-based CAN node card with interfaces such as UART, SPI, and I²C, to which different peripherals can be connected, say an EIA-485 counter or an SPI digital I/O expander. I'd like to define a profile for the card that's flexible enough to adapt to any possible configuration and include any device that can be connected to such a node card. Since CANopen profiles seem to be pretty rigid, I researched CANopen virtual devices, but that doesn't seem to be the answer either.
Is there a standard for such functionality, or am I sailing unknown waters?
You are sailing unknown waters, unless you consider a CANopen bootloader a possible solution. There is no existing device profile that fits your criteria. CANopen is remarkably flexible, but arbitrary extensibility is beyond it.
You could export the registers of your microcontroller 1:1 through the object dictionary and issue interrupts through PDOs. It would certainly be a fun exercise, if a bit impractical.
From the CAN in Automation website:
CANopen generic I/O modules are standardized in the CiA 401 device profile specification. The profile supports a granularity of 1, 8, 16, and 32 bits for digital I/Os and a resolution of 1, 2, and 4 bytes for analog I/Os.
However, it may be easier to implement a custom device based on the general CiA 301 CANopen application layer and communication profile standard. You could implement a set of general-purpose IOCTL-style functions using manufacturer-specific objects (2000h to 5FFFh), and possibly use SDO Block Transfer to 'stream' data to specific OD objects representing connected device endpoints.
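As a rough illustration of that custom-device idea, here is a sketch in Go; no particular CANopen stack is assumed, and the indices, names, and peripheral stubs are all invented for illustration:

```go
package main

import "fmt"

// odEntry fronts one peripheral behind a manufacturer-specific object.
type odEntry struct {
	name  string
	read  func() ([]byte, error)
	write func([]byte) error
}

// Manufacturer-specific objects live at 2000h-5FFFh per CiA 301; the
// particular index assignments here are invented.
var od = map[uint16]odEntry{
	0x2000: {name: "UART TX/RX buffer", read: uartRead, write: uartWrite},
	0x2100: {name: "SPI DIO state", read: spiReadPins}, // read-only
}

// handleSDO dispatches an SDO upload/download to the right peripheral.
// The abort codes are the standard CANopen ones.
func handleSDO(index uint16, data []byte, isWrite bool) ([]byte, error) {
	e, ok := od[index]
	if !ok {
		return nil, fmt.Errorf("abort 0x06020000: object %04Xh does not exist", index)
	}
	if isWrite {
		if e.write == nil {
			return nil, fmt.Errorf("abort 0x06010002: object %04Xh is read-only", index)
		}
		return nil, e.write(data)
	}
	return e.read()
}

// Stubs standing in for the real peripheral drivers.
func uartRead() ([]byte, error)    { return nil, nil }
func uartWrite(b []byte) error     { _ = b; return nil }
func spiReadPins() ([]byte, error) { return []byte{0x0F}, nil }
```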
You will need to consider that, even with a bit rate of 1 Mbit/s at the physical layer, the CANopen protocol is never going to keep up with a USB 2.0 high-speed device when it comes to streaming data! Also bear in mind that if you use PDOs for 'real-time' exchange of OD values, there will be significant lag; the time quantum for PDO exchange is on the order of 25 ms or greater.
The final consideration is which CANopen master you are going to use. If the product is to be commercially available, you will need to specify and document your EDS very carefully (maybe even providing an OPC or similar API).

How to sync an application state over multiple iPhones on the same network?

I am developing an iPhone application that basically lets a person click through a series of actions. These series are predefined and synced with a common configuration server.
The app might be running on multiple devices at the same time. All devices are assumed to have the same series of actions defined on them, and all devices are considered equal; there is no server with multiple clients or anything like that.
Only one of these devices is used by a person at any given time, but the person may switch to a different device at any moment. All "passive" devices need to be synchronized with the active one, so that they display the same action.
The whole thing should happen as automatically as possible: no selection of devices, no configuration; all devices in the same network take part in the same series of actions.
One additional requirement is that a device may join during a presentation (a series of actions) and must jump to the currently active action.
Right now, I see two options to implement the networking/communication part of that:
Bonjour. I have implemented a working prototype that can automatically connect with one (1) other device in the network and communicate with it. I am not sure at this point how much additional work the "multiple devices" requirement is. Would I have to open a connection for every device and manually send the sync events to all of them? Is there a better way, or does Bonjour provide anything to help me with that? What does Bonjour give me, given that I want to communicate with every device in the network anyway?
Multicast with AsyncUdpSocket. Simply define a port and send multicast sync events out to it. I guess the main issue, compared to using Bonjour with TCP, is that delivery is not reliable and packets could be lost. However, this is a private, protected WLAN with low traffic, if that would really be an issue. Are there other disadvantages that I'm not seeing? Because this sounds like a relatively easy option at this point...
Which one would you suggest? Or is there another, better alternative that I'm not thinking of?
You should check out GameKit (built into iOS); it has a lot of the machinery you need in a convenient package. You can easily discover peers on the network and easily send data back and forth between clients (broadcast or peer-to-peer).
In my experience, Bonjour is perfect for what you want. There's an excellent tutorial with associated source code, Chatty, that can easily be modified to suit your purposes.
I cobbled together a distributed message bus for the iPhone (no centralized server) that would work great for this. It should be noted that the UI guy made a mess of the code, so thar' be dragons there: https://code.google.com/p/iphonebusmiddleware/
The basic idea is to use Bonjour to form a network with leader election. The leader becomes the hub through which all the slaves subscribe to topics of interest. Any message sent to a given topic is then delivered to every node subscribed to that topic. A master disconnection simply means restarting the leader election process.
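The hub part of that design is small enough to sketch. Here it is in Go for brevity (the original project is Objective-C; all names are illustrative):

```go
package main

import "sync"

// hub is the elected leader: slaves subscribe to topics, and any message
// published to a topic is fanned out to every subscriber of that topic.
type hub struct {
	mu     sync.Mutex
	topics map[string][]chan []byte
}

func newHub() *hub {
	return &hub{topics: make(map[string][]chan []byte)}
}

// subscribe registers a listener on a topic and returns its channel.
func (h *hub) subscribe(topic string) <-chan []byte {
	ch := make(chan []byte, 8) // small buffer so one slow node can't block the hub
	h.mu.Lock()
	h.topics[topic] = append(h.topics[topic], ch)
	h.mu.Unlock()
	return ch
}

// publish delivers msg to every node subscribed to topic, dropping it
// for subscribers whose buffers are full rather than stalling everyone.
func (h *hub) publish(topic string, msg []byte) {
	h.mu.Lock()
	defer h.mu.Unlock()
	for _, ch := range h.topics[topic] {
		select {
		case ch <- msg:
		default: // subscriber too slow; skip
		}
	}
}
```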

How can I access the original data from the Ethernet?

I want to access the original value from the Ethernet. Through some hardware programming I am sending a value to a system via Ethernet, but I can't see exactly what is being sent.
You could use a packet sniffer like Wireshark, running on either the source or the destination system. This will let you examine packets as they leave or enter, respectively.
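If you'd rather inspect the frames programmatically than in a GUI, a small sniffer built on the gopacket library is one option. A sketch in Go; the interface name is an assumption, and this needs libpcap and usually root:

```go
package main

import (
	"fmt"
	"log"

	"github.com/google/gopacket"
	"github.com/google/gopacket/pcap"
)

func main() {
	// "eth0" is a placeholder; use pcap.FindAllDevs to list real interfaces.
	handle, err := pcap.OpenLive("eth0", 1600, true, pcap.BlockForever)
	if err != nil {
		log.Fatal(err)
	}
	defer handle.Close()

	src := gopacket.NewPacketSource(handle, handle.LinkType())
	for packet := range src.Packets() {
		// Prints each frame layer by layer, including the raw payload,
		// so you can compare it with the value your hardware sent.
		fmt.Println(packet)
	}
}
```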
It sounds like what you need is a protocol analyzer. On Windows, look at Microsoft's Network Monitor.