Difference between Centralized radio resource management and Distributed radio resource management in LTE

What is the difference between Centralized RRM and Distributed RRM?
I heard that LTE has Distributed RRM, while previous generations had Centralized RRM. Can someone please explain the differences between them?

Finally, I found the answer to this question.
In GSM, GPRS/EDGE, UMTS, and HSPA there is a master controller that controls all the base stations: the Base Station Controller (BSC) in the case of GSM and GPRS/EDGE, and the Radio Network Controller (RNC) for 3G systems (UMTS, HSPA).
2G/2.5G --> BSC
3G --> RNC
This is termed Centralized Radio Resource Management.
In LTE, however, most of the functions of the RNC are integrated into the eNodeBs (the base stations in LTE), so there is no master controller above the base stations. This is called Distributed Radio Resource Management, and the resulting design is also known as a flat architecture.
Reference: https://www.youtube.com/watch?v=1_x9axf0jlk&list=PLE6yE0jB6BTOeXSdhXeOxVQPhnZ0R3ltM

The Centralized RRM is located in a separate node or gateway-like entity of the network and controls many cells (each of which has its own Distributed RRM). In a typical operator deployment there are therefore far fewer Centralized RRM entities than Distributed RRMs. A Distributed RRM is located in the cell and communicates with one Centralized RRM (typically exactly one, though in some deployments it can be more).
The Centralized RRM mostly covers functionality such as radio admission control, radio bearer control, and dynamic resource allocation, based on interaction with the Distributed RRMs. The Distributed RRM covers functionality close to the radio technology, i.e. the MAC and PHY layers: QoS management, MAC configuration, PHY configuration, etc.
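To make the split concrete, here is a toy Python sketch of a centralized RRM performing admission control over per-cell distributed RRMs. All class names, the bearer capacity, and the 0.9 admission threshold are illustrative assumptions, not values from any 3GPP specification.

```python
# A centralized RRM admits bearers based on load reported by per-cell
# distributed RRMs, which own the MAC/PHY-near configuration.
class DistributedRRM:
    """Per-cell RRM: handles QoS/MAC/PHY-near work, reports load."""
    def __init__(self, cell_id):
        self.cell_id = cell_id
        self.active_bearers = 0
        self.capacity = 100  # illustrative bearer capacity

    def load(self):
        return self.active_bearers / self.capacity

    def configure_bearer(self):
        # QoS management and MAC/PHY configuration happen here,
        # close to the radio.
        self.active_bearers += 1

class CentralizedRRM:
    """Controls many cells: admission and bearer control."""
    def __init__(self, cells):
        self.cells = {c.cell_id: c for c in cells}

    def admit(self, cell_id):
        cell = self.cells[cell_id]
        if cell.load() < 0.9:        # admission decision, made centrally
            cell.configure_bearer()  # execution delegated to the cell
            return True
        return False

rrm = CentralizedRRM([DistributedRRM("cell-1"), DistributedRRM("cell-2")])
print(rrm.admit("cell-1"))  # True until cell-1 approaches capacity
```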

Related

Getting Beyond 50 Replica Set Members in MongoDB

I'm looking to build a distributed Access Control system for a microservice platform. I'm considering using MongoDB as my database technology. My system design objectives are as follows:
- Policy Enforcement should be distributed: if any given Policy Enforcement Point (PEP) experiences downtime, only the application that the PEP serves should be affected.
- Policy Decisions should be distributed: we don't want the whole platform to experience downtime because a central Policy Decision Point (PDP) is experiencing downtime. We only want it to affect the application that it serves.
- Policy Administration should be centralized: creating a centralized policy administration interface provides the ability for any system (including a UI) to understand what rights an individual has, and by establishing a common interface it allows us to more easily audit changes to access across the whole platform.
- Policy Information (context) is distributed: we don't get to choose this if we are building a distributed microservice platform. We can centralize the retrieval of additional context by aggregating the data needed to make access control decisions into a single place, but the data sources are still distributed.
I'm considering building a system like the one shown below. The idea is that Access Policies are administered by a central Policy Admin API. This API manages Policies that are persisted to a MongoDB cluster backed by a 3-member replica set. I would like other APIs in the platform to have a dedicated policy-query-api (Policy Decision Point) deployed alongside them to make the Access Control decisions pertinent to each API. The idea is that if any one of the policy-query-apis goes down, only the API that it serves is affected.
I want changes to Policies to be governed by the Policy Admin API, and I would like those changes to be replicated to each mongo instance used by each of the policy-query-apis. I don't want the mongo replicas for each policy-query-api to affect writes to the primaries.
I also don't need immediate data consistency (latency of up to 5 seconds is acceptable), but I would like the data replication to be handled at the database layer if possible; the technology is already built to handle this, and I don't want to reinvent the wheel at the application layer.
I've looked at the documentation on Replica Set Members and I've pretty thoroughly reviewed the documentation on Replica Sets in MongoDB. It seems like having a Hidden Member or Delayed Member would be a good fit for my use case. Do you agree? Also, I'm concerned about the 50-member replica set limit. Since each of these replicas would serve an API in my platform, if there were more than 50 microservices (which is quite likely), how would I manage replication like this?
Just so that I understand, you are asking about:
- one standalone node per application (?? your picture suggests standalones, but you are asking about the 50-node RS limit), with data mirrored to each standalone from the master RS
- the application only querying its local standalone
MongoDB provides the nearest read preference for the use case of reading data from local nodes. Importantly, the nearest read preference still provides availability if your local node is unavailable: the next closest (roughly) node will be used in that case. Your proposed architecture, by contrast, would take the application down every time its local database node needs to be restarted for a version upgrade.
You may also look into tag sets.
Additionally, MongoDB allows specifying priorities on nodes for election purposes. If you put all of your MongoDB nodes into the same RS, you can use priorities so that one of the 3 designated "main" servers is primary whenever any of them is available.
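As a minimal sketch of how a policy-query-api process might combine the nearest read preference with tag sets, here is a hypothetical pymongo client; the hostnames, replica set name, database/collection names, and the dc tag are placeholders, not values from your deployment.

```python
# A minimal sketch, assuming one replica set "rs0" whose members carry
# a "dc" tag. All hostnames and tag values here are hypothetical.
from pymongo import MongoClient
from pymongo.read_preferences import Nearest

client = MongoClient(
    "mongodb://mongo-a:27017,mongo-b:27017,mongo-c:27017/?replicaSet=rs0",
    # Read from the lowest-latency member, preferring members tagged
    # dc:local; the empty {} tag set falls back to any member, so a
    # restart of the local node does not take the application down.
    read_preference=Nearest(tag_sets=[{"dc": "local"}, {}]),
)

policies = client.access_control.policies
# Reads use the nearest matching member; writes still go to the primary.
print(policies.find_one({"resource": "orders-api"}))
```

Note that this keeps all members in one replica set, so the 50-member limit still applies; beyond that scale you would need something like application-level fan-out or change streams rather than one giant replica set.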

SIP and RTP in VoLTE

I am investigating the SIP signaling and RTP media in VoLTE traffic. I can see RTP header but was told that the RTP payload and the SIP packets are all encrypted in IPsec. Is this true? If yes, at what interface I can see the decrypted packets?
Thanks.
VoLTE is based on IMS (IP Multimedia Subsystem), which is a very broad and encompassing set of specifications for an architectural framework that enables multimedia communication between IP-connected end points.
Because it is so broad and all-encompassing, there are actually many different security points and interfaces: for example, there are security specs for communication between an access-network-connected device (such as a mobile phone) and the core, for communication between different nodes within a single core network, for communication between different operators' or organisations' core networks, and so on.
3GPP and LTE build on the IMS specs and also include security specs specific to the mobile world. There is a 3GPP spec which looks at access security for IMS (3GPP TS 33.203), and it includes a diagram of the security associations (not reproduced here).
Each of the numbers in that diagram is a different security 'association', and the standard references one or more specifications for each one.
The result of all this security complexity and these many security layers is that the answer to your question depends on the point in the network you are looking at. For example, if you intercept the traffic between the phone and the base station, you will not be able to see anything, as it will all be encrypted at a lower layer (notwithstanding the latest GSM/3G security hacks etc.). Similarly, traffic between core network nodes or between different networks may run over IPsec tunnels, and again you will not be able to see it.
If your aim is to intercept and eavesdrop on VoLTE voice calls, then you are going to find this very hard, as many of the above mechanisms are designed to prevent exactly that. I won't say it is impossible, as I'm sure someone will reference a hack or a 'government backdoor' example for similar technology.
If your interest is academic, or in profiling the performance of the network, then you may be able to achieve what you want using one of the open source IMS solutions, e.g. http://www.openimscore.org.
Or, if you are working for, or with, one of the network equipment vendors, then you may be in a position to insert or leverage network management and/or OSS 'hooks' that allow you to gather information from unencrypted data at certain points in the end-to-end flow.

Centralized/Distributed/Service oriented Architecture/Application

I am designing a system architecture, and my knowledge from college doesn't help me when it comes to understanding the subtle differences between centralized, distributed, and service-oriented architectures/applications.
If I take a typical client/server architecture, the client sends requests to a server, and the server then sends responses back to the client. That is a centralized architecture.
An application that handles both the server and client sides would be a distributed application (because it runs on different platforms), but that is still a centralized architecture.
Therefore, a distributed architecture must involve a distributed application.
Questions: am I right? And what does all of this become when it comes to service-oriented architectures/applications?
Distributed: the whole process involved in a computation task is divided into pieces and assigned to multiple computational nodes. While doing its part of the processing, a node does not have access to the whole of the system information that would be necessary to achieve a globally optimized result. The aggregate of the results from multiple nodes converges towards a globally optimal result, usually through multiple iterations of computations distributed across the nodes.
A good example is a router system, in which each router has only the information it exchanges with its neighbours. At the start, each router knows only part of the whole network. As a router gets more information from its neighbours, it incorporates the new information into its view of the whole system and then spreads its view to its neighbours. Through multiple iterations of these steps, each computed separately by individual routers, all routers settle on a consistent global view of the whole network.
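As a toy illustration of that router example, here is a minimal distance-vector-style sketch in Python; the three-node topology and link costs are invented for illustration.

```python
# Each router starts knowing only itself, repeatedly incorporates its
# neighbours' advertised distance vectors, and all routers converge to
# the same global view. Topology and costs are made up.
links = {("A", "B"): 1, ("B", "C"): 2, ("A", "C"): 5}
nodes = {"A", "B", "C"}

def neighbours(node):
    for (u, v), cost in links.items():
        if u == node:
            yield v, cost
        elif v == node:
            yield u, cost

vectors = {n: {n: 0} for n in nodes}  # distance to itself is 0

changed = True
while changed:  # iterate until no router learns anything new
    changed = False
    for n in nodes:
        for nb, cost in neighbours(n):
            for dest, dist in vectors[nb].items():
                candidate = cost + dist
                if candidate < vectors[n].get(dest, float("inf")):
                    vectors[n][dest] = candidate  # shorter path learned
                    changed = True

print(vectors)  # every router ends up with the same distances
```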
Another example could be a web ordering system in which the browser initially gets a list of commonly ordered goods. The browser may have logic to track user viewing behaviour and decide to fetch a different category of goods from the server, without sending all of the user behaviour parameters to the server. In this imagined example, the browser knows something the server does not know, and vice versa; thus the whole application is a distributed system. In addition to this, the user authentication could be done on one server, the inventory on another server, and reservation on yet another one. None of the servers involved has the whole information about a specific user's browsing and ordering session, but the aggregate work from all these nodes fulfils the business need to sell more goods and satisfy more customers.
The opposite of distributed is centralized, in which the computation logic is always able to see the whole picture.
Given this view, a client-server application can be viewed as a distributed system if you consider the client side to involve non-trivial decision making, or as a centralized system if you consider the client to be dumb.
The term service-oriented is more about how functional processing power is integrated into the system. In a service-oriented system, new capability may be introduced into the system at runtime through the discovery of new API functionality, or of new logic behind an unchanged API. Think about it: you could build an application that initially has little built-in capability and then expands by discovering and incorporating new capabilities from service providers. In contrast, a traditional system is assembled at build time, typically as the outcome of a human discussion-design-documentation process. A service-oriented design is a good fit for a distributed system.
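To make the runtime-discovery point concrete, here is a toy in-process sketch; the registry dict and the "tax.compute" capability name are purely illustrative stand-ins for a real service registry or discovery protocol.

```python
# A toy capability registry: providers register functionality at
# runtime, and the application discovers it by name instead of
# linking it in at build time. Names here are invented.
registry = {}  # capability name -> callable

def provide(name):
    def register(fn):
        registry[name] = fn
        return fn
    return register

# A provider publishes a new capability while the system is running.
@provide("tax.compute")
def compute_tax(amount: float) -> float:
    return amount * 0.2  # illustrative flat rate

def invoke(capability, *args):
    # Discovery happens at call time; unknown capabilities simply
    # have not been provided (yet).
    fn = registry.get(capability)
    if fn is None:
        raise LookupError(f"no provider for {capability!r}")
    return fn(*args)

print(invoke("tax.compute", 100.0))  # 20.0
```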

Model multiple instances of hardware with UML

Imagine that we have a simple structure of a client machine and a server connected through the Internet. It is well known how to show such a structure with a deployment diagram.
Now I want to show that it is possible to have an unlimited number of such clients and servers, where each client can connect to each server. Moreover, I want to show that each client machine runs the same software client, and each server has the same database schema but one of several different implementations (MySQL, Oracle, ...).
What is the best and most detailed way to model this with UML?
You can draw a deployment diagram as depicted in the picture below.
The communication path between the nodes defines that many clients and many servers communicate with each other (top part of the diagram).
There are instances of the nodes from the top part of the diagram, to show that concrete instances of the database server exist on your server machines. The Internet is not a deployment node from this point of view; it is a form of communication realization between nodes.
If you need to define the code deployed on the nodes, use artifacts.
Note that the database server is a node as well, but of the execution environment type!
Here is my diagram; I hope it helps you.
Use instances of Components and Nodes:
The fact that each client can connect to any server could be shown as a simple note (pragmatic approach) or in some other diagram, collaboration for example (formal approach). If there is a dispatching algorithm in between, I suggest the latter. If not, the note will be enough, as this is somewhat expected anyway.
I suggest using a plain deployment diagram to show the dependencies.

Maintaining state between two machines

We have two industrial controllers that are used to control critical systems. The idea is that on failure of one controller, the other controller automatically takes over. To ensure the swap-over is seamless, the standby controller must mirror the state of the online controller at all times.
We have a solution, but it is poorly coded and documented. The question is: is there a common design pattern that implements such a system, or open source software that achieves something similar, that could be used to create a generic solution, one usable for controllers or PCs and extensible to any number of controllers acting as standbys?
One approach is "cache coherence". Commercial products, Tangosol for example, do this.
Another approach is a lightweight version of an Enterprise Service Bus (ESB) or Service Oriented Architecture (SOA). Almost all the SOA vendors have products for this. I'd start with Tibco, which has a lightweight component set that you can use for this.
Since SOA isn't that hard, you can also roll your own using the HTTP protocol, so that one controller can POST its status to its shadow controllers.
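As a minimal sketch of that roll-your-own idea, the online controller could periodically POST its state to the standby over HTTP; the URL, port, and state fields below are hypothetical placeholders.

```python
# The online controller pushes its state to a standby at a fixed
# interval. If the standby is unreachable we keep controlling and
# retry on the next cycle. URL and state fields are made up.
import json
import time
import urllib.request

STANDBY_URL = "http://standby-controller:8080/state"  # hypothetical

def push_state(state):
    body = json.dumps(state).encode("utf-8")
    req = urllib.request.Request(
        STANDBY_URL,
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req, timeout=2) as resp:
        resp.read()  # standby acknowledged the update

while True:
    state = {"setpoint": 42.0, "mode": "auto", "ts": time.time()}
    try:
        push_state(state)
    except OSError:
        pass  # standby down or slow; do not block the control loop
    time.sleep(0.5)
```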
There is a difference between failover and transparent failover. Do you really have requirements for transparent failover? If so, you're going to end up paying for it (in both cost and complexity).
That being said, take a look at this post on Buddy Replication for an elegant solution to the problem.
There is the standard master-slave pattern used by almost all DBMSs that support clustering, distributed architectures, and replication (http://en.wikipedia.org/wiki/Database_replication).
So, very basically, in your situation you could have the master machine maintaining state, and the slave sitting there doing nothing except updating its own state from that of the master. If the master goes down, the slave sees that the master is no longer there and takes over control of the state; the master is only used again once it has updated its own state from that of the slave (which has maintained state while the master was inactive).
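A toy sketch of that takeover logic, assuming the slave receives timestamped heartbeats carrying the master's state (the timeout value and state fields are invented):

```python
# The standby mirrors state from master heartbeats and promotes
# itself when the master goes silent for too long.
import time

HEARTBEAT_TIMEOUT = 2.0  # seconds of silence before failover (invented)

class StandbyController:
    def __init__(self):
        self.last_heartbeat = time.monotonic()
        self.active = False
        self.state = {}

    def on_heartbeat(self, master_state):
        # Mirror the master's state while it is alive.
        self.last_heartbeat = time.monotonic()
        self.state = dict(master_state)
        self.active = False

    def tick(self):
        # Called periodically: promote ourselves if the master is silent.
        if time.monotonic() - self.last_heartbeat > HEARTBEAT_TIMEOUT:
            self.active = True  # take over using the mirrored state

standby = StandbyController()
standby.on_heartbeat({"setpoint": 42.0, "mode": "auto"})
time.sleep(2.1)  # simulate the master going silent
standby.tick()
print(standby.active, standby.state)  # True {'setpoint': 42.0, 'mode': 'auto'}
```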
The traditional approach to controlling realtime critical systems is to run the two units in lockstep. Tandem have been building some very impressive fault-tolerant machines using this technique for years.
However, lockstep is very much a hardware-level solution; I don't think you could implement classic lockstep purely at the software level, or at least not straightforwardly. Maybe using state machines synchronised by exchanging vector clocks, or something equally propeller-headed?
There is an analogous situation with the Space Shuttle computers: five computers were used, and if one machine was late or differed from the others, it was (in essence) voted off the island.
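A toy sketch of that majority-vote idea (unit names and outputs are invented):

```python
# Compare each unit's computed output against the majority and flag
# any dissenting unit, in the spirit of the Shuttle's voting scheme.
from collections import Counter

def majority_vote(outputs):
    """Return the most common output across the redundant units."""
    winner, _ = Counter(outputs.values()).most_common(1)[0]
    return winner

outputs = {
    "unit1": (1, 2), "unit2": (1, 2), "unit3": (9, 9),
    "unit4": (1, 2), "unit5": (1, 2),
}
agreed = majority_vote(outputs)
voted_off = [u for u, out in outputs.items() if out != agreed]
print(agreed, voted_off)  # (1, 2) ['unit3']
```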
In your situation, how do you determine which controller has gone bad? Is the machine making that determination itself a single point of failure?
What level of communications are available between the two controllers? Shared memory, Ethernet, or something even slower?
How fast does state information change between the two?
Is it possible to feed identical information to both controllers and would both controllers calculate the same state transitions?
Maybe a shared SQLite database or something similar?