Soft handoff in CDMA cellular networks

Hi,
In CDMA cellular networks, when an MS (Mobile Station) needs to change its BS (Base Station), i.e. needs a handoff, I know that this is a soft handoff (the MS makes a connection with the target BS before leaving the current BS). But since the MS remains connected to more than one BS for some time, I want to know: does the MS use the same CDMA code to communicate with all the BSs, or a different code for each BS?
Thanks in advance

For the benefit of everyone, I have touched on a few points before coming to the main point.
Soft handoff is also termed "make-before-break" handoff. This technique falls under the category of MAHO (Mobile Assisted Handover). The key theme behind it is having the MS maintain simultaneous communication links with two or more BSs to ensure an uninterrupted call.
In the DL (downlink) direction, this is achieved by having two or more BTSs transmit the same bit stream using different transmission codes on different physical channels on the same frequency; the CDMA phone receives the signals from these BTSs simultaneously. The active set can contain more than one pilot, as up to three carriers can be involved in a soft handoff. The phone also has a rake receiver that performs maximal-ratio combining of the received signals.
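To make the combining step concrete, here is a minimal numpy sketch of maximal-ratio combining over two soft-handoff legs. The channel gains, noise level, and BPSK symbols are invented for illustration; a real rake receiver operates on despread multipath fingers.

```python
import numpy as np

rng = np.random.default_rng(0)
bits = rng.choice([-1.0, 1.0], size=100)       # BPSK symbols sent by the BTSs

# Two soft-handoff legs with different (made-up) complex gains and noise.
h = np.array([0.9 * np.exp(1j * 0.3), 0.5 * np.exp(-1j * 1.1)])
noise = 0.4 * (rng.standard_normal((2, 100)) + 1j * rng.standard_normal((2, 100)))
received = h[:, None] * bits + noise           # one row per leg

# Maximal-ratio combining: weight each leg by the conjugate of its gain, sum.
combined = np.sum(np.conj(h)[:, None] * received, axis=0)
decisions = np.where(combined.real >= 0, 1.0, -1.0)
print("bit errors after combining:", int(np.sum(decisions != bits)))
```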
In the UL (uplink) direction, the MS operates with a candidate set that can contain more than one pilot of sufficient signal strength, as reported by the MS. Each BTS tags the user's data with a frame reliability indicator that tells the BSC about the transmission quality. So even though the signals (the MS's code channel, i.e. the same uplink code) are received by both base stations, they are routed to the BSC together with the received-quality information; the BSC examines the quality based on the frame reliability indicator and chooses the best-quality stream, i.e. the best candidate.
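As a sketch of that selection step: the BSC receives one copy of each uplink frame per soft-handoff leg, each tagged with its frame reliability indicator, and keeps the best one. The types and field names below are illustrative, not from any real BSC software.

```python
from dataclasses import dataclass

@dataclass
class UplinkFrame:
    bts_id: str
    payload: bytes
    reliability: float  # frame reliability indicator; higher = better quality

def select_best(copies: list[UplinkFrame]) -> UplinkFrame:
    """Selection combining at the BSC: keep the most reliable copy."""
    return max(copies, key=lambda frame: frame.reliability)

copies = [
    UplinkFrame("BTS-A", b"\x01\x02", reliability=0.92),
    UplinkFrame("BTS-B", b"\x01\x02", reliability=0.71),
]
print(select_best(copies).bts_id)  # -> BTS-A
```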


Most power-efficient way to determine when a sensor is worn

I'm working with Movesense 2.0.0 on an HR+ sensor and I have to minimize power consumption when the device is not worn.
I can't turn it off completely, since I need it to keep the correct time. So, to reduce battery usage, when I don't receive an HR notification for a certain amount of time I unsubscribe from all sensors.
What's the most power-efficient way to determine when the device is worn again? I was thinking about subscribing to the accelerometer (as I understand it, it is the sensor with the lowest power consumption) and, when I detect movement, resubscribing to HR and checking for incoming data.
Is that a valid approach?
I also noticed that when the device isn't worn but is still connected to the strap, I sometimes receive incorrect HR notifications, as if the strap were acting as an antenna for electromagnetic noise. Is there a way to detect when the device is in that state other than looking at the HR data to see whether it makes sense?
Your question is a bit vague about what you mean by "wearing a sensor" (I'm assuming you mean an HR strap on the chest). In that case, if you look at the power consumption documentation (see the PowerOff measurements compared to no-wakeup), you'll notice that:
HR wakeup (/System/States/2 (=Connector)) is ~0.2 uA
Movement wakeup (/System/States/0 (=Movement)) is ~4 uA
All other measurements are much higher, starting from 10 uA for Acc @ 13 Hz.
So the easiest and lowest-power approach is to SUBSCRIBE to /System/States/2.
If you base your firmware on version >= 2.1 and you measure HR or ECG, you also get updates during the measurement when the contact is lost (so-called leads-off detection), so this should help filter out the spurious HR detections. On firmware 2.0 and earlier you get Connector state 2 (= Unknown) while measuring.
Note: the leads-on detection (/System/States/2 when no HR measurement is ongoing) is very sensitive and can report a "connected" state when the HR strap is sweaty.
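A minimal sketch of that sleep/wake logic, under stated assumptions: only the resource paths (/System/States/2 and /Meas/HR) come from the real Movesense API; the device wrapper, the callback wiring, and the exact connector-state values are illustrative and should be checked against the States API documentation.

```python
class WearWatcher:
    """Illustrative only: `device` is a hypothetical client wrapper,
    not a real Movesense library object."""

    def __init__(self, device):
        self.device = device
        # The Connector wakeup (~0.2 uA) stays subscribed at all times.
        device.subscribe("/System/States/2", self.on_connector_state)

    def on_connector_state(self, state: int) -> None:
        # Assumed state values: 1 = leads on, 0 = leads off,
        # 2 = Unknown (reported while measuring on firmware <= 2.0).
        if state == 1:
            self.device.subscribe("/Meas/HR", self.on_hr)   # start measuring
        elif state == 0:
            self.device.unsubscribe("/Meas/HR")             # back to low power

    def on_hr(self, sample) -> None:
        pass  # hand the HR notification to the application
```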
Full disclosure: I work for the Movesense team

LoRa point-to-point limitations

I want to set up a point-to-point communication link between two Raspberry Pis using LoRa.
I know that for LoRaWAN there is (at least in Europe, where I live) a duty-cycle limitation, so the nodes can transmit only for an average of 30 seconds of uplink time on air, per day, per device.
Is this also valid for point-to-point LoRa communications? Because my sender keeps on sending.
I am using the code provided here.
Yes, this is also valid for your LoRa application, since it is emitting radio waves. You can look up the limits for Europe for specific frequency bands in the ERC Recommendation 70-03 (page 7). In the ERC Recommendation 70-03, on page 42, you can then look up which of the frequency bands are allowed in each country.
Example
Let's say you live in Germany and you want to use 869.400 MHz to 869.650 MHz (this frequency band is called h1.6):
A quick lookup in the ERC Recommendation 70-03, page 39, shows that this band is allowed to be used in Germany.
Furthermore, this specific band allows you to use 10% time on air (duty cycle) for your transmitter. This basically means that for every 1 second you transmit, you are obligated to pause for 9 seconds afterwards.
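A small sketch of enforcing that rule in software, with radio.send() as a placeholder for whatever LoRa library is in use: after each transmission, stay silent long enough that the time on air never exceeds the allowed fraction.

```python
import time

DUTY_CYCLE = 0.10  # 10% allowed for the 869.400-869.650 MHz band (h1.6)

def send_with_duty_cycle(radio, payload: bytes) -> None:
    """Transmit, then pause so that airtime / total time <= DUTY_CYCLE."""
    start = time.monotonic()
    radio.send(payload)                       # placeholder blocking transmit
    airtime = time.monotonic() - start
    # Every second on air requires (1/d - 1) seconds of silence: for 10%,
    # 1 s of transmission is followed by 9 s of pause.
    time.sleep(airtime * (1.0 / DUTY_CYCLE - 1.0))
```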

Beacon size vs message size in Wireless Ad-Hoc Networks

I'm working on neighbor discovery protocols in wireless ad-hoc networks. There are many protocols that rely only on beacon messages between nodes while the discovery phase is going on. On the other hand, there are other approaches that try to transmit more information (like a node's neighbor table) during the discovery, in order to accelerate it. Depending on the time needed to listen to those messages, the discovery latency and power consumption vary. Suppose that the same hardware is used to transmit them and that there are no collisions.
I have read that beacons can be sent extremely fast (easily under 1 ms), but I haven't found anything about how long it takes to send/receive a bigger message. Let's say a message carrying around 50-500 numbers representing all the info about your neighbors. How much extra power is needed?
Update
Can this bigger message be divided into a bunch of beacon-sized messages? If so, then I suppose the power used to transmit/listen grows linearly.
One possible solution is to divide the transmission into N different beacon-like messages, with a small amount of extra information so they can be put back together. In this way, the power used grows linearly with N.
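As a concrete sketch of that scheme, assuming a beacon payload of 24 bytes and a 3-byte header (message id, fragment index, fragment count), both invented numbers:

```python
import struct

BEACON_PAYLOAD = 24             # assumed usable bytes per beacon-sized message
HEADER = struct.Struct("!BBB")  # msg_id, fragment index, fragment count
CHUNK = BEACON_PAYLOAD - HEADER.size

def fragment(msg_id: int, data: bytes) -> list[bytes]:
    """Split `data` into beacon-sized messages that carry reassembly info."""
    chunks = [data[i:i + CHUNK] for i in range(0, len(data), CHUNK)]
    return [HEADER.pack(msg_id, seq, len(chunks)) + chunk
            for seq, chunk in enumerate(chunks)]

def reassemble(fragments: list[bytes]) -> bytes:
    """Reorder the fragments by their index and strip the headers."""
    ordered = sorted(fragments, key=lambda f: HEADER.unpack(f[:HEADER.size])[1])
    return b"".join(f[HEADER.size:] for f in ordered)

neighbor_table = bytes(range(200))   # e.g. 200 bytes of neighbor entries
fragments = fragment(7, neighbor_table)
assert reassemble(fragments) == neighbor_table
print(f"{len(neighbor_table)} bytes -> {len(fragments)} beacon-sized messages")
```

As N grows, both the number of transmissions and the listening time grow linearly, matching the linear power estimate above.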

Data transmission using RF with Raspberry Pi

I have a project that consists of transmitting data wirelessly from 15 tractors to a station; the maximum distance between a tractor and the station is 13 miles. I used a Raspberry Pi 3 to collect data from the tractors. After some research I found that there is no WiFi or GSM coverage, so the only solution is to use RF communication on VHF. Is that possible with a Raspberry Pi, or must I add a modem? If so, what are the criteria for choosing a modem? And please, if you have any other information, tell me.
Thank you for your time.
I had a similar issue, but possibly a little more complex. I needed to cover a maximum distance of 22 kilometres and I wanted to monitor over 100 resources, ranging from breeding stock to fences, gates, etc. I too had no GSM access, plus no direct line-of-sight access, as the area is hilly and the breeders like the deep valleys. The solution I used was to make my own radio network using cheap radio repeaters. Everything was battery operated and driven by the receivers powering up the transmitters. This means that the units consume only 40 microamps on standby, and when the transmitters transmit they consume, in my case, around 100 to 200 milliamps.
In the house I have a little program that transmits a poll to the receivers every so often and waits for the units to reply. This gives me a big advantage because, via the repeater trail (each repeater the signal goes through adds its code to the returning message), I can actually determine where my stock are, as sketched below.
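For illustration, a hypothetical version of that repeater-trail parsing; the message format, repeater codes, and locations are all invented.

```python
# Hypothetical reply format: "<unit_id>:<status>|<repeater1>,<repeater2>,..."
# Assumption: repeaters append their code in relay order, so the first
# code in the trail is the repeater nearest the polled unit.
REPEATER_LOCATIONS = {
    "R1": "home paddock",
    "R2": "north ridge",
    "R3": "deep valley",
}

def parse_reply(reply: str):
    head, _, trail = reply.partition("|")
    unit_id, _, status = head.partition(":")
    hops = trail.split(",") if trail else []
    area = REPEATER_LOCATIONS.get(hops[0], "unknown") if hops else "direct"
    return unit_id, status, hops, area

print(parse_reply("cow42:ok|R3,R1"))  # -> ('cow42', 'ok', ['R3', 'R1'], 'deep valley')
```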
Now for the big issue: how long do the batteries last? Well, each unit has an 18650 battery. For the fence and gate controls this is charged by a small 5-volt solar panel, and after 2 years of running time I have not changed any of them. For the cattle units, the length of time between charges depends solely on how often you poll the units (note that each unit has its own code). With one exception (a bull who wants to roam and is a real escape artist), I only poll them once or twice a day, and I swap the battery every two weeks.
The frequency I use is 433 MHz, and the radio transmitters and receivers are very cheap (less than 10 cents a pair if you buy them in Australia), with a very small ATtiny (I think) Arduino per unit (around 30 cents each) and a length of wire as an aerial (34.6 cm long for the cattle units and 69.2 cm for the repeaters). Note that these lengths follow from the frequency used, i.e. 433 MHz.
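A quick check of those aerial lengths against the wavelength at 433 MHz shows they are half-wave and full-wave elements:

```python
C = 299_792_458   # speed of light, m/s
f = 433e6         # carrier frequency, Hz

wavelength = C / f
print(f"wavelength: {wavelength * 100:.1f} cm")      # ~69.2 cm (repeaters)
print(f"half wave:  {wavelength / 2 * 100:.1f} cm")  # ~34.6 cm (cattle units)
```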
As I had to install lots of repeaters, I contacted an organisation in China (sorry, they no longer exist) and they created a tiny waterproof and rugged capsule that contained everything, while also improving on the design (range-wise, while reducing power), at a cost of $220 for 100 units, not including batteries. I bought one lot as a test, and now between myself and my neighbours we have bought another 2000 units for only $2750.
In my case this paid for itself in less than three months when, during calving season, I knew exactly where they were calving and was on site to assist. The first time I used it we saved a mother who was having a real issue.
To end this long message: I am not an expert, but I had an idea and hired people who were, and the repeater approach certainly works over long distances and large areas (42 square kilometres).
Following on from the comments above, I'm not sure where you are located, but spectrum around the 400 MHz range is licensed in many countries, so it would be worth checking exactly what you can use.
If this is your target, then this is UHF rather than VHF, so if you search for 'Raspberry Pi UHF shield' or 'Raspberry Pi UHF module' you will find some examples of cheap hardware you can add to your Raspberry Pi to support communication over these frequencies. Most of the results should include some software examples as well.
There are also articles on using the pins on the Pi to transmit directly by modulating the voltage on them; this is almost certainly going to interfere with other communications, so I doubt it would meet your needs.

Does server's location affect players' ping?

I want to reduce latency for my players in a multiplayer game.
Would it be beneficial to run a server on each continent to reduce latency? E.g. the players are in the US but the server is in Europe, so I move it to the US.
How big could the difference be?
Yes, absolutely: the closer your server is to the user, the better the ping, because the travel distance/time is reduced.
Especially between Europe and America, because of the sea ;)
The difference really depends on your setup, but I would think at least 150 ms.
Cable (raw fibre at Layer 1 [PHY]) mileage rules
Recent transatlantic cables (deploying shorter fibre meanders in tighter inner tubing) benefit from shorter runs: the 2012+ generation of cables achieves transatlantic latencies somewhere under 60 milliseconds, according to Hibernia Atlantic.
One also has to account for lambda-signal amplification and retiming units, which add some additional Layer 1 [PMD] latency overhead down the road.
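A back-of-the-envelope check of that sub-60 ms figure: light in optical fibre travels at roughly c divided by the fibre's refractive index (about 1.468), and the route lengths below are assumed for illustration.

```python
C = 299_792_458            # speed of light in vacuum, m/s
REFRACTIVE_INDEX = 1.468   # typical value for optical fibre
v = C / REFRACTIVE_INDEX   # propagation speed in the fibre, m/s

for route_km in (5_500, 6_500):        # assumed transatlantic route lengths
    rtt_s = 2 * route_km * 1_000 / v   # there and back again
    print(f"{route_km} km route -> theoretical floor RTT ~ {rtt_s * 1e3:.0f} ms")
```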
Yes, but...
ping is a trivial tool to test RTT (a packet's round-trip time), derived from timing how long the packet takes to travel through the network infrastructure there and back.
Thus, sending such a ping packet across a few metres of cable typically costs less time than waiting for another such packet to reach a target on the opposite side of the globe and then successfully crawl back again. But, as in the wisdom of Heraclitus of Ephesus, "You cannot step twice into the same river": repeating the ping probes against the same target will yield many fundamentally different time delays (latencies are non-stationary), as the quick measurement sketch below illustrates.
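To see that non-stationarity yourself, a small measurement sketch that times repeated TCP handshakes as crude RTT probes; the target host and sample count are arbitrary choices.

```python
import socket
import statistics
import time

def tcp_rtt(host: str, port: int = 443) -> float:
    """Time one TCP handshake as a crude RTT probe (in seconds)."""
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=2):
        pass
    return time.perf_counter() - start

samples = [tcp_rtt("example.com") for _ in range(10)]
print(f"mean RTT:       {statistics.mean(samples) * 1e3:.1f} ms")
print(f"jitter (stdev): {statistics.stdev(samples) * 1e3:.1f} ms")
```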
But there are many additional cardinal issues, besides the geographical distance from A to B, that influence the end-to-end latency (and the expected smoothness) of application-layer services.
What else bothers?
Transport network congestion (if the actual traffic overloads the underlying network capacity, buffering delays and/or packet drops start to occur)
Packet retransmission(s) (if a packet gets lost, the opposite side will ask for a retransmission; packets received in the meantime are recorded for reception, but the receiving process still has to wait for the missing packet, because without packet #6 one cannot decode the full message. Packets #7, #8, #9, #10, ... simply have to wait until #6 is requested again by the receiver's side, retransmitted from the sender's side, and, hopefully delivered successfully this time, arrives to fill the gap in the puzzle. That costs a lot more time than a smooth, error-free data flow.)
Selective class-of-traffic prioritisation (if your class of traffic is not prioritised, your packets will be policed to wait in queues until the higher priorities allow some more lower-priority traffic to fit in)
Packet deliveries are not guaranteed to take the same path over the same network vectors, and individual packets can, in general, be transported over multiple different trajectories (add various prioritisation policies + various buffering drop-outs + various intermittent congestions and spurious flows, and the resulting latency per se + its timing variance, i.e. the uncertainty of the delivery time of the next packet, only grows and grows).
Last but not least, the server system's processing bottlenecks. Fine-tuning a server to avoid any such adverse effects (performance bottlenecks and, even worse, any blocking-state episodes) belongs to professional controlled-latency infrastructure design and maintenance.
The Devil comes next!
You might have already noticed that, besides the static latency scale (demonstrated by ping), realistic gaming is even more adversely affected by latency jitter, as the in-game context magically UFO-es forwards and back in the time domain. This causes unrealistically jumping planes right in front of your aiming cross, "shivering" characters, deadly enemy-fire bullets that inflict damage without the attacker's body ever becoming visible, and similar disturbing artefacts.
Server-colocation proximity per se will help with the former, but will leave you to fight the latter.