What is the usage of the DVB triplet in a set-top box?

I am seeing that the way to identify a service is through the DVB triplet. How is the DVB triplet formed, and how does it guarantee uniqueness?

According to TS 102 539, a DVB triplet is composed of:
Original Network Id (ON-ID)
Transport Stream Id (TS-ID)
Service Id (S-ID)
This triplet is unique and allows a service to be fully identified because:
two programs will have different S-IDs within a TS,
two TSs will have different TS-IDs within a network, and
each network has its own unique ON-ID, assigned by DVB.
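As a rough illustration of that uniqueness argument (this is a minimal sketch, not from the original answer; the field names and hex values are illustrative), the triplet can be treated as a composite key:

from typing import NamedTuple

class DvbTriplet(NamedTuple):
    """Composite key for a DVB service; field names are illustrative."""
    original_network_id: int
    transport_stream_id: int
    service_id: int

# Two services may share a service_id as long as they differ in
# transport_stream_id or original_network_id, so the full triplet
# remains unique and works as a lookup key:
services = {
    DvbTriplet(0x233A, 0x1004, 0x1044): "example service A",
    DvbTriplet(0x233A, 0x1005, 0x1044): "example service B",  # same S-ID, different TS-ID
}
print(services[DvbTriplet(0x233A, 0x1004, 0x1044)])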

Some supplementary information in addition to Coconop's answer, which is basically correct.
TS 102 539 is for IPTV applications.
The canonical references for the triplet in broadcast applications are EN 300 468 and TS 101 211. In this scenario, it depends on which market the broadcast signal originates from. If TS 101 211 is not part of the broadcast profile, then the triplet uniquely identifies a service. If TS 101 211 is part of the broadcast profile, then you can leave out the transport stream ID, since in that case the service ID must be unique within an original network ID.
It is thus safe in all cases to use the full triplet to identify a service.
There is also a standard for a dvb URL scheme in TS 102 851, which uses the triplet as part of the URL. AFAIR, the Xine player understands dvb:// URLs.
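From memory, the dvb: scheme puts the three IDs into the URL as dot-separated hexadecimal fields; treat the exact layout below as an assumption and check TS 102 851 before relying on it:

def dvb_uri(onid: int, tsid: int, sid: int) -> str:
    # Assumed layout: dvb://<onid>.<tsid>.<sid> with lowercase hex fields
    # (verify the exact formatting against TS 102 851).
    return "dvb://%x.%x.%x" % (onid, tsid, sid)

print(dvb_uri(0x233A, 0x1004, 0x1044))  # dvb://233a.1004.1044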

Single Channel LoRaWAN systematically accepts just one packet out of 3 sent by node

I just built and tested a single channel LoRaWAN gateway connected to TTN as per the instructions of thing4U/esp-1ch-gateway, together with a single channel node; both are based on the TTGO-ESP32Lora board and both are configured on www.thethingsnetwork.org. Everything works nicely, but I do not understand why, even though the node sends data every 2 minutes, the gateway receives just one packet out of three. So if I transmit packets 0, 3, 6, 9, etc., the data at TTN is updated every 6 minutes instead of every 2.
That is correct. LoRaWAN uses the first three channels as the main channels for communication; more can be configured for use. These three exist in part so that they can always be used for OTAA.
So if you have a single channel gateway listening on 868.100 MHz and your node sends on 868.300 MHz, your gateway won't hear it because it is listening on the wrong frequency.
There are several solutions:
Configure your node to send only on the single frequency your gateway is listening on.
Add two more single channel gateways that listen on the other main frequencies.
Add a multi channel gateway.
The frequencies above are only an example; they apply to the EU band and may differ in your region, but the principle still stands.
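A minimal sketch of why this yields roughly one packet in three (assuming the node cycles over the three default EU868 uplink channels while the gateway listens on only one; the frequencies and the random channel choice are assumptions for illustration):

import random

# Default EU868 uplink channels (MHz); a single channel gateway demodulates only one of them.
CHANNELS = [868.1, 868.3, 868.5]
GATEWAY_FREQ = 868.1

sent, received = 100, 0
for _ in range(sent):
    tx_freq = random.choice(CHANNELS)   # the node rotates over its enabled channels
    if tx_freq == GATEWAY_FREQ:         # only uplinks on the gateway's frequency are heard
        received += 1

print(f"received {received} of {sent} uplinks (roughly 1 in 3)")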

Limit on DDS topic names

I am currently using RTI DDS and am writing some specifications. However, I don't want any of the topic names in my specifications to exceed the limit for a topic name in DDS. I tried searching and could not find whether any such limit exists. Does anyone happen to know if DDS limits your topic name to a certain length, or if it is just not to exceed the limit on the length of a String for whatever language you are programming in?
The OMG standard for DDS (rev 1.2) does not impose a limit on Topic name length.
A Topic is identified by its name, which must be unique in the whole Domain.
According to the RTI documentation (5.1.0 Users Guide pdf, Section 5.1.1, page 170), RTI's implementation of the Standard implements an arbitrary limit:
topic_name | Name for the new Topic, must not exceed 255 characters
This appears to be the maximum length for the name of any Entity (you can name entities in their QoS, so that tools can report human-readable names for the Entities they are reporting on).
While it is true that the DDS API specification does not mention a limit for Topic names, the complementary DDS wire protocol specification, the Real-Time Publish-Subscribe (RTPS) protocol (http://www.omg.org/spec/DDSI-RTPS/2.2), does state that Topic names shall not exceed 256 characters (see Table 9.12).
So the 256 character limit on Topic name lengths imposed by the RTI DDS implementation is not arbitrary; it is precisely what is required to be interoperable with other DDS implementations.
Gerardo
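If you want to guard against the limit in your own tooling, a simple length check before creating the Topic is enough. A minimal sketch (the helper below is hypothetical, not an RTI API; the bound is taken from the figures quoted above):

MAX_TOPIC_NAME_LEN = 255  # conservative bound based on the RTI documentation quoted above

def check_topic_name(name: str) -> str:
    """Raise early if a proposed Topic name would exceed the limit; otherwise return it."""
    if len(name) > MAX_TOPIC_NAME_LEN:
        raise ValueError(f"Topic name is {len(name)} characters; limit is {MAX_TOPIC_NAME_LEN}")
    return name

check_topic_name("SensorData/Temperature")   # fine
# check_topic_name("x" * 300)                # would raise ValueError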

How to manage Squid bandwidth on a per-user basis

I want to manage bandwidth and traffic based on user activity on a Squid proxy server.
I did some research but couldn't find the solution that I want.
For example, users who generate more than 256K of traffic should be restricted by the server.
Can you help me?
Thanks
I'm assuming Squid 3.x.
Delay pools provide a way to limit the bandwidth of certain requests based on any list of criteria.
class:
the class of a delay pool determines how the delay is applied, i.e., whether the different client IPs are treated separately or as a group (or both)
class 1:
a class 1 delay pool contains a single unified bucket which is used for all requests from hosts subject to the pool
class 2:
a class 2 delay pool contains one unified bucket and 255 buckets, one for each host on an 8-bit network (IPv4 class C)
class 3:
contains 255 buckets for the subnets in a 16-bit network, and individual buckets for every host on these networks (IPv4 class B)
class 4:
as class 3, but in addition has per-authenticated-user buckets, one per user.
class 5:
custom class based on tag values returned by external_acl_type helpers in http_access. One bucket per used tag value.
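As a quick illustration of how a class maps onto delay_parameters (this snippet is not from the original answer; the numbers are just an example limiting each individual host to 64 Kbit/s, i.e. 8000 bytes/sec, with no aggregate cap):

delay_pools 1
delay_class 1 2
delay_access 1 allow all
# aggregate restore/max, then per-host restore/max; -1/-1 means "no limit"
delay_parameters 1 -1/-1 8000/8000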
Delay pools allow you to limit traffic for clients or client groups, with various features:
You can specify peer hosts which aren't affected by delay pools, i.e., local peering or other 'free' traffic (with the no-delay peer option).
Delay behavior is selected by ACLs (low and high priority traffic, staff vs. students, students vs. authenticated students, and so on).
Each group of users has a number of buckets; a bucket has an amount coming into it each second and a maximum amount it can grow to. When it reaches zero, object reads are deferred until one of the object's clients has some traffic allowance.
Any number of pools can be configured with a given class, and any set of limits within the pools can be disabled; for example, you might only want to use the aggregate and per-host bucket groups of class 3, not the per-network one.
In your case you can use a class 4 delay pool, whose configuration format is:
delay_class pool 4
delay_parameters pool aggregate network individual user
This delay pool can be configured in your Squid proxy server, for example, so that each user is limited to 128 Kbit/s (16000 bytes/sec) no matter how many workstations they are logged into. The four pairs below are the aggregate, per-network, per-host, and per-user buckets, each given as restore rate/maximum size in bytes:
delay_pools 1
delay_class 1 4
delay_access 1 allow all
delay_parameters 1 32000/32000 8000/8000 600/64000 16000/16000
Please read more:
http://wiki.squid-cache.org/Features/DelayPools
http://www.squid-cache.org/Doc/config/delay_parameters/

I don't understand the paragraph about multicast

This paragraph is from UNP, chapter 21.3, page 555:
A host running an application that has joined some multicast group whose corresponding Ethernet address just happens to be one that the interface receives when it is programmed to receive 01:00:5e:00:01:01 (i.e., the interface card performs imperfect filtering). This frame will be discarded either by the datalink layer or by the IP layer.
I just don't know which special case the author is talking about. Could you explain it clearly?
In IPv4, a multicast address (the old class D range) consists of 4 fixed bits that identify it as multicast (1110) and 28 remaining bits that identify the group.
Since there are only 23 bits available in the MAC address (the high-order 25 bits are fixed), when you map the low-order 23 bits of the multicast address into the low-order 23 bits of the MAC you lose 5 bits of addressing information. So multiple multicast addresses all have the same MAC address.
For example:
237.138.0.1
238.138.0.1
239.138.0.1
all map to MAC address: 01:00:5e:0a:00:01 (There are more, this is just a subset to illustrate)
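A small sketch of that mapping (the function name is mine; the fixed 01:00:5e prefix and the 23-bit rule are as described above):

import ipaddress

def multicast_mac(ip: str) -> str:
    """Map an IPv4 multicast address to its Ethernet MAC: the fixed
    01:00:5e prefix plus the low-order 23 bits of the IP address."""
    low23 = int(ipaddress.IPv4Address(ip)) & 0x7FFFFF   # drop the top 9 bits (5 of them are group bits)
    return "01:00:5e:%02x:%02x:%02x" % (low23 >> 16, (low23 >> 8) & 0xFF, low23 & 0xFF)

for ip in ("237.138.0.1", "238.138.0.1", "239.138.0.1"):
    print(ip, "->", multicast_mac(ip))    # all three print 01:00:5e:0a:00:01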
So if you join group 237.138.0.1, your Ethernet card will start sending frames up the stack for that MAC. Since it is an imperfect match (we discarded those 5 bits), the Ethernet card will also send frames for 238.138.0.1 and 239.138.0.1 up the stack. But since you are not interested in those frames, they will be discarded at layer 2 (data link) or layer 3 (network), where they can be matched exactly.
So the special case is that if you have multiple multicast streams that occupy the same lower 23 bits of address space, all hosts on the network segment will have to process the packets higher up in the stack, and thus do more work to tell whether the packet they got is one they are interested in.
Normally you just need to make sure, when planning your multicast deployments, that you try to avoid overlapping addresses.

SIP channel format. Asterisk

I get a Bridge event from Asterisk with Channel2: SIP/727-000000e3, where 727 is the phone number. What does the rest of it (-000000e3) mean? Thank you.
Update:
I have found this at http://www.voip-info.org/wiki/view/Asterisk+SIP+channels:
When you have an established SIP connection, its channel name will be in this format:
SIP/peer-id
peer is the identified peer and id is a random identifier to be able to uniquely identify multiple calls from a single peer.
So for each call that part will be unique?
That is a unique identifier for that channel technology type over the lifetime of an Asterisk instance. So once you restart Asterisk, that identifier is no longer guaranteed not to repeat.
As an implementation detail, the number is not random but a monotonically increasing hexadecimal number with respect to the channel technology type.
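As a small illustration (the helper name and the split logic are mine, not part of Asterisk), such a channel name can be broken into its peer and per-instance identifier:

def parse_sip_channel(channel: str) -> tuple:
    """Split e.g. 'SIP/727-000000e3' into (technology, peer, numeric identifier)."""
    tech, rest = channel.split("/", 1)
    peer, uid_hex = rest.rsplit("-", 1)       # the trailing hex part is the per-instance identifier
    return tech, peer, int(uid_hex, 16)

print(parse_sip_channel("SIP/727-000000e3"))  # ('SIP', '727', 227)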