I'm working with Movesense 2.0.0 on an HR+ sensor, and I need to minimize power consumption when the device is not worn.
I can't turn it off completely, since I need it to keep the correct time. So, to reduce battery usage, I unsubscribe from all sensors when I don't receive an HR notification for a certain amount of time.
What's the most power-efficient way to determine when the device is worn again? I was thinking of subscribing to the accelerometer (as I understand it is the sensor with the lowest power consumption) and, when I detect movement, resubscribing to HR and checking for incoming data.
Is it a valid approach?
I also noticed that when the device isn't worn but is still connected to the strap, I sometimes receive incorrect HR notifications, as if the strap were acting as an antenna for electromagnetic noise. Is there a way to detect when the device is in that state, other than looking at the HR data to see whether it makes sense?
Your question is a bit vague about what you mean by "wearing the sensor" (I'm assuming you mean an HR strap on the chest). In that case, if you look at the power consumption documentation (see the PowerOff measurements compared to no-wakeup), you'll notice that:
HR wakeup (/System/States/2 (=Connector)) is ~0.2 uA
Movement wakeup (/System/States/0 (=Movement)) is ~4 uA
All other measurements are much higher, starting from 10 uA for Acc @ 13 Hz.
So the easiest and lowest-power way to detect wearing is to SUBSCRIBE to /System/States/2.
If you base your firmware on version >=2.1 and you measure HR or ECG, you also get updates during the measurement when the connection is lost (so-called Leads-Off detection), which should help filter out the spurious HR detections. For firmware 2.0 and earlier you get Connector state 2 (=Unknown) while measuring.
Note: the leads-on detection (/System/States/2 when no HR measurement is ongoing) is very sensitive and can report the "connected" state when the HR strap is sweaty.
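As a rough sketch of that subscription from the Android side using the Movesense Mobile Library (the package names, URI scheme, contract JSON and listener callbacks below are written from memory, so treat them as assumptions):

```kotlin
import com.movesense.mds.Mds
import com.movesense.mds.MdsException
import com.movesense.mds.MdsNotificationListener

// Assumes an initialized Mds instance (Mds.builder().build(context)) and the
// sensor's serial number obtained while scanning.
fun watchConnectorState(mds: Mds, serial: String) {
    val contract = "{\"Uri\": \"$serial/System/States/2\"}"
    mds.subscribe("suunto://MDS/EventListener", contract, object : MdsNotificationListener {
        override fun onNotification(data: String) {
            // data is JSON with the new connector state; when it reports "connected",
            // resubscribe to /Meas/HR and check that real beats are coming in.
        }

        override fun onError(error: MdsException) {
            // subscription failed; retry or log
        }
    })
}
```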
Full disclosure: I work for the Movesense team
I'm trying to implement action.devices.types.AC_UNIT in Google Home.
My hub is the IR transmitter; it saves previous states and sends commands as a packet.
As you might understand, the device does not know anything about the room temperature.
It seems like thermostatTemperatureAmbient is required for setTemperature execution.
I can report the desired temperature as the ambient one, but then Google reports this value when I ask for the temperature at home.
You should be using thermostatTemperatureSetpoint for your desired temperature setting; ambient temperature should be the actual temperature reading from the device. However, since your device does not actually read the temperature, you may consider using the TemperatureControl trait instead.
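To illustrate the difference, here is a minimal sketch of the state you might report for a QUERY intent with the TemperatureSetting trait (the values and surrounding plumbing are made up; only the state names come from the trait documentation):

```kotlin
// Illustrative only: QUERY state for a device using the TemperatureSetting trait.
val deviceState = mapOf(
    "online" to true,
    "thermostatMode" to "cool",
    "thermostatTemperatureSetpoint" to 22.0,  // the temperature the user asked for
    "thermostatTemperatureAmbient" to 24.5    // the actual measured room temperature
)
```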
There's already a good answer on the technical details and constraints of timing the gyro measurement:
Movesense, timestamp source of imu data, and timing issues in general
However, I would like to ask a more practical question from the perspective of an Android app developer working with two sensors and a requirement for high accuracy in gyro measurement timing.
What would be the most accurate way to synchronize/consolidate the timestamps from two sensors and put the measurements on the same time axis?
Sensor SW version 1.7 introduced the /Time/Detailed API for checking the internal timestamp and the UTC time set on the sensor. This is how I imagined it would play out with two sensors:
1. Before subscribing to anything, set the UTC time (in microseconds) on sensor1 and sensor2 based on the Android device time (PUT /Time).
2. Get the "time since the sensor turned on" (in milliseconds) and the "UTC time set on the sensor" (in microseconds) from sensor1 and sensor2 (GET /Time/Detailed).
3. Calculate the difference of these two timestamps (in milliseconds) for both sensors.
4. Get the gyro values from the sensor with the internal timestamp. Add the offset calculated in step 3 to the internal timestamp to get the correct/global UTC time value (see the sketch after this list).
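To make steps 3 and 4 concrete, this is roughly the arithmetic I have in mind (plain Kotlin; the field names are mine, and the ms/us conversion is the part I'm least sure about):

```kotlin
// Both values come from one GET /Time/Detailed response, so they form an
// (almost) atomic pair: relative sensor clock vs. UTC set on the sensor.
data class TimeDetailed(val relativeTimeMs: Long, val utcTimeUs: Long)

// Offset that maps the sensor's internal clock onto UTC, in microseconds.
fun utcOffsetUs(t: TimeDetailed): Long =
    t.utcTimeUs - t.relativeTimeMs * 1_000  // convert ms -> us before subtracting

// Step 4: apply the offset to a gyro sample's internal timestamp (milliseconds).
fun gyroTimestampUtcUs(internalTimestampMs: Long, offsetUs: Long): Long =
    internalTimestampMs * 1_000 + offsetUs
```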
Is this procedure correct?
Is there a more efficient or accurate way to do this? E.g. the GATT service to set the time was mentioned in the linked post as the fastest way. Anything else?
How about the possible drift in the sensor time for gyro? Are there any tricks to limit the impact of the drift afterwards? Would it make sense to get the /Time/Detailed info during longer measurements and check if the internal clock has drifted/changed compared to the UTC time?
Thanks!
Very good question!
Looking at the accuracy of the crystals (±20 ppm), the typical drift between two sensors should be no more than 40 ppm. That translates to about 0.14 seconds over an hour (40 ppm of 3600 s ≈ 0.144 s). For longer measurements and/or better accuracy, better synchronization is needed.
Luckily, the clock drift should stay relatively constant unless the temperature of the sensor is changing rapidly. Therefore it should be enough to compare the mobile phone clock and each sensor's UTC time at the beginning and at the end of the measurement. Any drift of each sensor should be visible, and the timestamps can easily be compensated.
If even more accurate timestamps are needed, taking regular samples of /Time/Detailed from each sensor and comparing them to the phone clock should provide a way to estimate possible sensor clock drift.
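A minimal sketch of that compensation, assuming you record a (phone UTC, sensor UTC) pair from /Time/Detailed at the start and at the end of the measurement (the names and structure are mine, not a Movesense API):

```kotlin
// One reference pair: phone UTC and sensor UTC read as close together as possible,
// both in microseconds.
data class SyncPoint(val phoneUtcUs: Long, val sensorUtcUs: Long)

// Linearly rescale a sensor timestamp onto the phone's time axis using reference
// pairs taken at the beginning and end of the measurement; this removes a
// constant-rate drift between the two clocks.
fun sensorToPhoneUtcUs(sensorUtcUs: Long, start: SyncPoint, end: SyncPoint): Long {
    val scale = (end.phoneUtcUs - start.phoneUtcUs).toDouble() /
                (end.sensorUtcUs - start.sensorUtcUs).toDouble()
    return start.phoneUtcUs + ((sensorUtcUs - start.sensorUtcUs) * scale).toLong()
}
```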
Full Disclosure: I work for the Movesense team
I have a project that consists of transmitting data wirelessly from 15 tractors to a station; the maximum distance between a tractor and the station is 13 miles. I used a Raspberry Pi 3 to collect data from the tractors. With some research I found that there is no WiFi or GSM coverage, so the only solution is to use RF communication over VHF. Is that possible with the Raspberry Pi alone, or must I add a modem? If so, what are the criteria for choosing a modem? And please, if you have any other information, let me know.
Thank you for your time.
I had a similar issue, but possibly a little more complex. I needed to cover a maximum distance of 22 kilometres and I wanted to monitor over 100 resources, ranging from breeding stock to fences, gates, etc. I too had no GSM access, plus no direct line-of-sight access, as the area is hilly and the breeders like the deep valleys. The solution I used was to build my own radio network using cheap radio repeaters. Everything was battery operated and was driven by the receivers powering up the transmitters. This means the units consume only 40 microamps on standby, and when the transmitters transmit, in my case they consume around 100 to 200 milliamps.
In the house I have a little program that transmits a poll to the receivers every so often and waits for the units to reply. This gives me a big advantage because, via the repeater trail (each repeater the signal goes through adds its code to the returning message), I can actually determine where my stock are.
Now for the big issue: how long do the batteries last? Well, each unit has an 18650 battery. For the fence and gate controls this is charged by a small 5 volt solar panel, and after 2 years of running time I have not changed any of them. For the cattle units the time between charges depends solely on how often you poll the units (note each unit has its own code). With one exception (a bull who wants to roam and is a real escape artist), I only poll them once or twice a day and I swap the battery every two weeks.
The frequency I use is 433 MHz and the radio transmitters and receivers are very cheap (less than 10 cents a pair if you buy them in Australia), with a very small ATtiny (I think) Arduino per unit (around 30 cents each) and a length of wire as an aerial: 34.6 cm long for the cattle and 69.2 cm for the repeaters. Note these lengths are based on the frequency used, i.e. 433 MHz.
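For reference, those aerial lengths follow directly from the wavelength at 433 MHz (a quick back-of-the-envelope check, in LaTeX notation):

\[ \lambda = \frac{c}{f} = \frac{3 \times 10^{8}\ \text{m/s}}{433 \times 10^{6}\ \text{Hz}} \approx 0.692\ \text{m} \]

so 69.2 cm is one full wavelength and 34.6 cm is a half wavelength.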
As I had to install lots of repeaters, I contacted an organisation in China (sorry, they no longer exist) and they created a tiny waterproof and rugged capsule that contained everything, while also improving on the design (range-wise, while reducing power) at a cost of $220 for 100 units, not including batteries. I bought one lot as a test, and now between myself and my neighbours we have bought another 2000 units for only $2750.
In my case this paid for itself in less than three months when, during calving season, I knew exactly where they were calving and was on site to assist. The first time I used it we saved a mother who was having a real issue.
To end this long message: I am not an expert, but I had an idea and hired people who were, and the repeater approach certainly works over long distances and large areas (42 square kilometres).
Following on from the comments above, I'm not sure where you are located, but spectrum around the 400 MHz range is licensed in many countries, so it would be worth checking exactly what you can use.
If this is your target, then this is UHF rather than VHF, so if you search for 'Raspberry Pi UHF shield' or 'Raspberry Pi UHF module' you will find examples of cheap hardware you can add to your Raspberry Pi to support communication over these frequencies. Most of the results should include some software examples as well.
There are also articles on using the pins on the Pi to transmit directly by modulating the voltage on them - this is almost certainly going to interfere with other communications, so I doubt it would meet your needs.
Hi,
In CDMA cellular networks, when the MS (Mobile Station) needs to change BS (Base Station), i.e. when a hand-off is necessary, I know that a soft hand-off is used (a connection is made with the target BS before leaving the current BS). But since the MS stays connected to more than one BS for a while, I want to know: does the MS use the same CDMA code to communicate with all the BSs, or a different code for each BS?
Thanks in advance
For the benefit of everyone, I have touched upon a few points before coming to the main point.
Soft handoff is also termed "make-before-break" handoff. This technique falls under the category of MAHO (Mobile Assisted Handover). The key theme is having the MS maintain simultaneous communication links with two or more BSs to ensure an uninterrupted call.
In the DL direction, this is achieved by two or more BTSs using different transmission codes (transmitting the same bit stream) on different physical channels on the same frequency, with the CDMA phone simultaneously receiving the signals from these BTSs. In the active set there can be more than one pilot, as there can be up to three carriers involved in the soft handoff. There is also a rake receiver that performs maximal combining of the received signals.
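To make "maximal combining" concrete, the rake receiver's combining rule is the standard maximal-ratio combining (a generic textbook formulation, in LaTeX notation, not specific to any particular chipset):

\[ y = \sum_{\ell=1}^{L} \hat{h}_\ell^{*}\, r_\ell \]

where r_ℓ is the despread output of rake finger ℓ (one finger per received multipath/BTS signal in the active set) and ĥ_ℓ is the estimated complex channel gain for that finger; weighting each finger by the conjugate of its channel estimate maximizes the SNR of the combined symbol estimate y.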
In the UL direction, the MS operates on a candidate set in which there can be more than one pilot with sufficient signal strength for use, as reported by the MS. The BTS tags each user's data with a frame reliability indicator that conveys the transmission quality to the BSC. So, even though the signals (the MS's code channel) are received by both base stations, the signals are routed to the BSC along with information about the quality of the received signals; the BSC examines the quality based on the frame reliability indicator and chooses the best-quality stream, i.e. the best candidate.
I know the iPhone Bluetooth capabilities won't be accessible through the SDK until 3.0, but how long should it take to find devices in the area? Is it dependent on the number of devices in the area? If there are around 5 devices in range, should a scan to discover all of them take <5 seconds, or >30 seconds?
I know there are a lot of unknown factors, but I'm trying to determine whether I can do a Bluetooth scan on startup (if the time is minimal), or whether I have to tell the user a scan is about to run and there could be a long delay. I am unable to test this in the real world as the other Bluetooth devices aren't available, but I am trying to get a sense of how it could be designed.
Not sure what the API will let you do, but the Bluetooth Host Controller Interface (HCI) command underlying this is the 'Inquiry Command'.
This will let you inquire about devices for a fixed time and/or up to a fixed number of responses.
I'm a Bluetooth neophyte, not an expert but...
Getting at least one response from a Bluetooth device that is in a low power mode takes 1.28 seconds, so the inquiry time is specified in multiples of that period, up to a maximum of 48 periods; the range is therefore 1 period (1.28 seconds) to 48 periods (61.44 seconds).
There might be several devices that could respond in a single 1.28 second period though.
You can also specify the number of responses you will accept (1..255), or 0 for unlimited, i.e. until the time runs out.
You can also cancel an inquiry, if you found a particular device you were looking for.
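For example, converting a desired scan duration into the inquiry length is just arithmetic on the 1.28 second unit; a tiny hypothetical helper (not part of any Apple or HCI API) might look like this:

```kotlin
import kotlin.math.ceil

// Hypothetical helper: map a desired scan duration in seconds to the HCI
// Inquiry_Length parameter, which counts 1.28-second units (valid range 1..48).
fun inquiryLengthForSeconds(seconds: Double): Int =
    ceil(seconds / 1.28).toInt().coerceIn(1, 48)

fun main() {
    println(inquiryLengthForSeconds(5.0))    // 4  -> 4 * 1.28 s = 5.12 s of inquiry
    println(inquiryLengthForSeconds(61.44))  // 48 -> the maximum, 61.44 s
}
```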
Unscientific test from my desk, using a CSR Bluetooth chip with Bluetooth 2.1+EDR firmware, running inquiry on the chip with debug output via the chip's UART. I ran each inquiry 10 times and took an average of the results:
1 period inquiry time (1.28 seconds) yielded an average of 10 unique Bluetooth addresses.
5 period inquiry time (6.4 seconds) yielded an average of 23 unique Bluetooth addresses.
10 period inquiry time (12.8 seconds) yielded an average of 29 unique Bluetooth addresses.
I say 'unique'; actually the results repeated a lot of the same addresses over and over. This may be implementation dependent, though, and the Apple API may only return unique addresses.
However, this is not representative of the 'real world', as most of the Bluetooth devices around here (my office) are not in a low power mode. I guess I could filter out PCs, laptops and test kit by Class of Device; that would leave mobile phones, headsets that are discoverable, etc.
Inquiry can also be combined with RSSI to get the devices with the strongest signal, but they may not necessarily be the closest.
For your scenario you might want to do an inquiry based on both time and number of devices, e.g. 4 * 1.28 seconds or 10 devices.
To summarise:
The shortest time you can do an inquiry for is 1.28 seconds, and that could find roughly 10 devices in the area IF they are awake and nearby.
If you've got a saturated Bluetooth environment (or a microwave oven going in the same room) it could take longer to find all the devices within range.
I know this is an old question, but I thought I might add something for anyone who finds this question later.
As Simon Peverett mentions, device discovery is performed by an underlying "Inquiry Command" that is carried out by the Bluetooth Host Controller Interface. The Bluetooth spec v4.0 (Volume 2, Part E, Section 6.1.4) says:
When general inquiry is initiated by a Bluetooth device, the INQUIRY state shall last TGAP(100) or longer, unless the inquirer collects enough responses and determines to abort the INQUIRY state earlier.
Elsewhere, TGAP(100) is explained to be 10.24 seconds and is described as the recommended value for the time span over which a Bluetooth device performs device discovery.
In other words, a good baseline for the minimum amount of time to perform an inquiry is 10.24 seconds, or 8 of the 1.28 second periods that the Inquiry Command measures time by.