Google Nearby subscriptionWithMessageFoundHandler fires once? - swift

I am trying to code around the fact that the messageLostHandler doesn't fire until many minutes after a device is out of range, when using Audio (or Earshot on Android).
I was hoping that a message would be received from the other device every few seconds, but it fires only once. Is this expected? Since I can't rely on the messageLost handler, how do I know when a device is truly out of range of the ultrasonic signal?
I coded up a timer after receiving the subscriptionWithMessageFoundHandler callback, hoping that each incoming message would let me invalidate or restart the timer. If the timer fired, I'd know that x seconds had passed and the other device must be out of range. No such luck.
UPDATE: Here is the code in question:
let strategy = GNSStrategy(paramsBlock: { (params: GNSStrategyParams!) -> Void in
    params.discoveryMediums = .Audio
})

publication = messageMgr.publicationWithMessage(pubMessage, paramsBlock: { (pubParams: GNSPublicationParams!) in
    pubParams.strategy = strategy
})

subscription = messageMgr.subscriptionWithMessageFoundHandler({ [unowned self] (message: GNSMessage!) -> Void in
    self.messageViewController.addMessage(String(data: message.content, encoding: NSUTF8StringEncoding))
    // We only seem to get a 1x notification of a message, so this timer is folly.
    print("PING") // Only 1x per discovery.
}, messageLostHandler: { [unowned self] (message: GNSMessage!) -> Void in
    self.messageViewController.removeMessage(String(data: message.content, encoding: NSUTF8StringEncoding))
}, paramsBlock: { (subParams: GNSSubscriptionParams!) -> Void in
    subParams.strategy = strategy
})
Notice that the "PING" only prints once.

When a device goes out of range, Nearby waits 2 minutes before flushing the other device's token from its cache, so if you wait 2 minutes, the messageLost handler should be called. Can you verify this? Also, is it safe to assume you'd like a timeout shorter than 2 minutes? This timeout has been a topic of discussion, and there's been some talk of adding a parameter so apps can choose a value more appropriate for their use case.

Related

How to add a timeout to an awaiting function call

What's the best way to add a timeout to an awaiting function?
Example:
/// Let's pretend this is in a library that I'm using and I can't mess with the guts of this thing.
func fetchSomething() async -> Thing? {
    // fetches something
}
// If fetchSomething() never returns, then doSomethingElse() is never run. Is there any way to add a timeout to this system?
let thing = await fetchSomething()
doSomethingElse()
I wanted to make the system more robust in the case that fetchSomething() never returns. If this were using Combine, I'd use the timeout operator.
One can create a Task, and then cancel it if it has not finished in a certain period of time. E.g., launch two tasks in parallel:
// Cancel the fetch after 2 seconds.
func fetchSomethingWithTimeout() async throws -> Thing {
    let fetchTask = Task {
        try await fetchSomething()
    }
    let timeoutTask = Task {
        try await Task.sleep(nanoseconds: 2 * NSEC_PER_SEC)
        fetchTask.cancel()
    }
    let result = try await fetchTask.value
    timeoutTask.cancel()
    return result
}
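For illustration, the call site from the question might then look like this (the do/catch shape is just a sketch; error handling is up to you):
// Hypothetical call site: doSomethingElse() now runs even if the fetch
// times out, because the timeout surfaces as a thrown error.
do {
    let thing = try await fetchSomethingWithTimeout()
    // use `thing`
} catch {
    // fetch failed, timed out, or was canceled
}
doSomethingElse()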
// Here is a random mockup that will take between 1 and 3 seconds to finish.
func fetchSomething() async throws -> Thing {
    let duration: TimeInterval = .random(in: 1...3)
    try await Task.sleep(nanoseconds: UInt64(TimeInterval(NSEC_PER_SEC) * duration))
    return Thing()
}
If the fetchTask finishes first, it will reach the timeoutTask.cancel() and stop the timeout. If the timeoutTask finishes first, it will cancel the fetchTask.
Obviously, this rests upon the implementation of fetchSomething. It should not only detect the cancellation, but also throw an error (likely a CancellationError) if it was canceled. We cannot comment further without details regarding its implementation.
For example, in the mockup above, rather than returning an optional Thing?, I return a non-optional Thing but have it throw an error if canceled.
I hesitate to mention it, but while the above assumes that fetchSomething was well-behaved (i.e., cancelable), there are permutations on the pattern that work even if it is not (i.e., that run doSomethingElse on some reasonable timetable even if fetchSomething “never returns”).
But this is an inherently unstable situation, as the resources used by fetchSomething cannot be recovered until it finishes. Swift does not offer preemptive cancellation, so while we can easily solve the tactical issue of making sure that doSomethingElse eventually runs, if fetchSomething might never finish in a reasonable timetable, you have a deeper problem.
You really should find a rendition of fetchSomething that is cancelable, if it is not already.
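As an aside, the same race can be expressed with a task group, which makes the cancellation propagation a little more uniform. A minimal sketch under the same assumptions (a cancelable fetchSomething; the 2-second timeout and the function name are illustrative):
func fetchSomethingWithGroupTimeout() async throws -> Thing {
    try await withThrowingTaskGroup(of: Thing.self) { group in
        group.addTask { try await fetchSomething() }
        group.addTask {
            try await Task.sleep(nanoseconds: 2 * NSEC_PER_SEC)
            throw CancellationError() // timeout wins the race by throwing
        }
        let result = try await group.next()! // first child to finish wins
        group.cancelAll()                    // cancel the loser
        return result
    }
}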
You can use do/try/catch: await the fetch inside the do block, and when it throws, the catch block can run a different statement. Something like this:
await getResource()
do {
    try await fetchData()
} catch {
    doSomethingElse()
}
// program continues

AltBeacon - background scan configuration

Sorry for my English :)
I have no idea how to configure the scanner to work properly in the background (using ScanJob). I noticed that if a ScanJob starts more than 15 minutes after the previous scan finished, a passive scan occurs even though there are beacons nearby. The reason is that the max age of the preserved region state is 15 minutes, and the region is not restored when the ScanJob starts. For now, after the scanner returns its results, I check whether the list of monitored regions is empty, and if it is, I do
if (beaconManager.monitoredRegions.isEmpty()) {
    beaconManager.startRangingBeacons(region)
    beaconManager.startMonitoring(region)
}
to set the region again. If I do not do this, a passive scan starts every time a ScanJob stops.
If I invoke
beaconManager.startRangingBeacons(region)
beaconManager.startMonitoring(region)
every time the application starts, then the ScanJob is canceled immediately.
I wonder if there is any pattern for the background scan setup?
Maybe just remove the condition in the MonitoringStatus class?
if (millisSinceLastMonitor > MAX_STATUS_PRESERVATION_FILE_AGE_TO_RESTORE_SECS * 1000) {
    LogManager.d(TAG, "Not restoring monitoring state because it was recorded too many milliseconds ago: " + millisSinceLastMonitor);
}
The scan job strategy is the default way the Android Beacon Library operates. It uses Android scheduled jobs to schedule scans and requires no configuration:
val beaconManager = BeaconManager.getInstanceForApplication(this)
val region = Region("all-beacons-region", null, null, null)
beaconManager.startMonitoring(region)
beaconManager.startRangingBeacons(region)
In the foreground, scanning is constant and ranging callbacks come every ~1 sec.
In the background, scans are scheduled only once every ~15 minutes due to Android 8+ job scheduling limitations, so ranging callbacks come at that frequency. It is not designed for constant background scanning.
For constant background scanning you may configure the library to use a foreground service or the intent scan strategy.
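For reference, a minimal sketch of the foreground service configuration (the notification channel id, icon, and notification id are placeholders; this goes in Application.onCreate() before any ranging or monitoring starts):
val builder = Notification.Builder(this, "beacon-channel-id") // channel must already be created
builder.setSmallIcon(R.drawable.ic_notification)              // placeholder icon
builder.setContentTitle("Scanning for beacons")
beaconManager.enableForegroundServiceScanning(builder.build(), 456)
beaconManager.setEnableScheduledScanJobs(false) // foreground service replaces ScanJob
beaconManager.foregroundScanPeriod = 1100L      // scan constantly with ~1.1s cycles
beaconManager.foregroundBetweenScanPeriod = 0L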
I've written a simple application. The Application class looks like this:
class App : Application(), MonitorNotifier {
    val beaconManager by lazy { BeaconManager.getInstanceForApplication(this) }
    val parser: BeaconParser =
        BeaconParser().setBeaconLayout("m:2-3=0215,i:4-19,i:20-21,i:22-23,p:24-24")

    override fun onCreate() {
        BeaconManager.setDebug(true)
        super.onCreate()
        beaconManager.beaconParsers.add(parser)
        beaconManager.addMonitorNotifier(this)
        beaconManager.addRangeNotifier { mutableCollection: MutableCollection<Beacon>, region: Region ->
        }
    }

    override fun didEnterRegion(region: Region?) {
    }

    override fun didExitRegion(region: Region?) {
    }

    override fun didDetermineStateForRegion(state: Int, region: Region?) {
    }
}
MainActivity looks like this:
class MainActivity : AppCompatActivity() {
    val region = Region("all-beacons-region", null, null, null)
    val beaconManager by lazy { BeaconManager.getInstanceForApplication(this) }

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        setContentView(R.layout.activity_main)
    }

    override fun onResume() {
        super.onResume()
        beaconManager.startRangingBeacons(region)
        beaconManager.startMonitoring(region)
    }
}
In this app configuration, two cases are possible, assuming there are beacons nearby.
Case 1: the ScanJob started up to 15 minutes after the last scan. After it finishes its work, you can see in the logs:
ScanJob: Checking to see if we need to start a passive scan
ScanJob: We are inside a beacon region. We will not scan between cycles.
and that is the correct behavior.
Case 2: the ScanJob started more than 15 minutes after the last scan. After it finishes its work, you will see in the logs:
ScanJob: Checking to see if we need to start a passive scan
meaning the passive scan has started. After a while, StartupBroadcastReceiver is triggered with the results of the passive scan, and ScanJob starts to process them. After it finishes, you will again see:
ScanJob: Checking to see if we need to start a passive scan
meaning the passive scan has been started again, and after a while StartupBroadcastReceiver will be triggered again with its results. And so it goes, over and over, until you start MainActivity again and call
beaconManager.startRangingBeacons(region)
beaconManager.startMonitoring(region)
In my opinion this is caused by the monitoring state not being restored when more than 15 minutes have passed since the last scan, due to this condition in the MonitoringStatus class:
else if (millisSinceLastMonitor > MAX_STATUS_PRESERVATION_FILE_AGE_TO_RESTORE_SECS * 1000) {
    LogManager.d(TAG, "Not restoring monitoring state because it was recorded too many milliseconds ago: " + millisSinceLastMonitor);
}
and after the scan is complete, this method is called:
private void startPassiveScanIfNeeded() {
    if (mScanState != null) {
        LogManager.d(TAG, "Checking to see if we need to start a passive scan");
        boolean insideAnyRegion = mScanState.getMonitoringStatus().insideAnyRegion();
        if (insideAnyRegion) {
            // TODO: Set up a scan filter for not detecting a beacon pattern
            LogManager.i(TAG, "We are inside a beacon region. We will not scan between cycles.");
        }
        else {
            if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.O) {
                if (mScanHelper != null) {
                    mScanHelper.startAndroidOBackgroundScan(mScanState.getBeaconParsers());
                }
            }
            else {
                LogManager.d(TAG, "This is not Android O. No scanning between cycles when using ScanJob");
            }
        }
    }
}
and insideAnyRegion is false because the monitoring state was not restored, so passive scanning starts even though beacons are nearby.
To reproduce this case quickly, I suggest setting MAX_STATUS_PRESERVATION_FILE_AGE_TO_RESTORE_SECS = 1; then, when the ScanJob starts for the first time, you will see what I mean. (The operating system does not always start the ScanJob exactly every 15 minutes.)
Perhaps something like this:
protected void updateMonitoringStatusTime(long time) {
    File file = mContext.getFileStreamPath(STATUS_PRESERVATION_FILE_NAME);
    file.setLastModified(time);
    restoreMonitoringStatusIfNeeded();
}

private void restoreMonitoringStatusIfNeeded() {
    if (mRegionsStatesMap.isEmpty()) {
        restoreMonitoringStatus();
    }
}

How to cancel a tokio TCP connection attempt gracefully?

When we connect to a remote host over TCP, it can be a time-consuming operation, and while waiting for the connection the user may cancel it at any time.
When connecting with async tokio, TcpStream::connect() returns a future of io::Result<TcpStream>; call it tcps_ft.
There are two parts: the normal logic of the program, which should call .await on tcps_ft, and the UI of the program, where the user wants to call drop(tcps_ft) if he clicks the cancel button. But this seems impossible, because both calls consume tcps_ft.
#[tokio::test]
async fn test_cancel_by_drop() {
    let addr = "192.168.1.100:8080";
    let tcps_ft = TcpStream::connect(addr);
    let mut tcps = tcps_ft.await.unwrap();
    // Simulate the user's operation.
    let cancel_jh = tokio::spawn(async move {
        tokio::time::sleep(Duration::from_millis(100)).await;
        drop(tcps_ft); // does not compile: tcps_ft was already moved by the .await above
    });
    // Simulate the user's program.
    tcps.shutdown().await;
    cancel_jh.await;
}
So I considered using a spawned task to do it; after all, JoinHandle::abort() does not consume the atjh: JoinHandle for the task. But I still can't both call atjh.await and keep the handle around for abort(): await consumes the variable, making it impossible to call abort() asynchronously afterwards. (In other words, the call to abort() must be executed synchronously before the await.)
#[tokio::test]
async fn test_cancel_by_abort() {
    let addr = "192.168.1.100:8080";
    let atjh = tokio::spawn(async move { TcpStream::connect(addr).await.unwrap() });
    // Simulate the user's operation.
    let cancel_jh = tokio::spawn(async {
        tokio::time::sleep(Duration::from_millis(100)).await;
        atjh.abort();
    });
    // Simulate the user's program.
    let mut tcps = atjh.await.unwrap(); // does not compile: atjh was moved into the closure above
    tcps.shutdown().await;
    cancel_jh.await;
}
Of course, a less direct way is to use callback functions. In my asynchronous connect task, when connect().await returns, the user's callback function is called to notify the user to call atjh.await.
But that reintroduces callbacks, and I know async/await is designed precisely to solve the callback-hell problem.
Furthermore, for user-supplied asynchronous callbacks the compiler may impose many requirements, such as implementing Send to avoid cross-thread safety issues. This is certainly not something async code would like to encounter.
How can I cancel this asynchronous connection process gracefully and asynchronously? Is there a recommended model for handling it?
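For illustration, the drop-based cancellation described above can be written without sharing the future at all: tokio::select! polls the connect future in place and drops it if a cancel signal arrives first. A minimal sketch (the function name and channel choice are illustrative):
use tokio::net::TcpStream;
use tokio::sync::oneshot;

async fn connect_or_cancel(
    addr: &str,
    cancel: oneshot::Receiver<()>,
) -> Option<std::io::Result<TcpStream>> {
    tokio::select! {
        res = TcpStream::connect(addr) => Some(res),
        // If the UI sends on (or drops) the cancel channel first,
        // the connect future is dropped here, abandoning the attempt.
        _ = cancel => None,
    }
}
The UI side keeps the oneshot::Sender and sends () when the cancel button is clicked; recent tokio versions also offer JoinHandle::abort_handle() if you prefer the spawned-task approach.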

Process a Swift DispatchQueue without affecting resources

I have a Swift DispatchQueue that receives data at 60 fps.
However, depending on the phone or the amount of data received, the computation becomes too expensive to process at 60 fps. In practice, it is okay to process only half of the frames, or as many as the computational resources allow.
let queue = DispatchQueue(label: "com.test.dataprocessing")

func processData(data: SomeData) {
    queue.async {
        // data processing
    }
}
Does DispatchQueue somehow allow me to drop some data when resources are limited? Currently it is affecting the main UI of SceneKit. Or is there something better than DispatchQueue for this type of task?
There are a couple of possible approaches:
The simple solution is to keep track of your own Bool indicating whether a task is already in progress, and when more data arrives, only process it if nothing is currently running:
private var inProgress = false
private let processQueue = DispatchQueue(label: Bundle.main.bundleIdentifier! + ".process") // the queue doing the expensive work (label illustrative)
private let syncQueue = DispatchQueue(label: Bundle.main.bundleIdentifier! + ".sync.progress") // for reasons beyond the scope of this question, reader-writer with concurrent queue is not appropriate here

func processData(data: SomeData) {
    let isAlreadyRunning = syncQueue.sync { () -> Bool in
        if self.inProgress { return true }
        self.inProgress = true
        return false
    }
    if isAlreadyRunning { return }
    processQueue.async {
        defer {
            self.syncQueue.async { self.inProgress = false }
        }
        // process `data`
    }
}
All of that syncQueue stuff is there to ensure thread-safe access to the inProgress property. But don't get lost in those details; use whatever synchronization mechanism you want (e.g., a lock). All we need is thread-safe access to the Bool status flag.
Focus on the basic idea, that we'll keep track of a Bool flag to know whether the processing queue is still tied up processing the prior set of SomeData. If it is busy, return immediately and don't process this new data. Otherwise, go ahead and process it.
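For illustration, here is the same flag guarded by an NSLock instead of a serial queue (a sketch only; any synchronization primitive works here):
private let lock = NSLock()
private var inProgress = false

func tryBeginProcessing() -> Bool {
    lock.lock()
    defer { lock.unlock() }
    if inProgress { return false } // already busy; caller should drop this frame
    inProgress = true
    return true
}

func endProcessing() {
    lock.lock()
    inProgress = false
    lock.unlock()
}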
While the above approach is conceptually simple, it won't offer great performance. For example, if processing always takes 0.02 seconds (i.e., 50 per second) and the input arrives 60 times per second, you'll end up processing only 30 of them per second.
A more sophisticated approach is to use a GCD user data source, something that says "run the following closure when the destination queue is free". And the beauty of these dispatch user data sources is that they coalesce pending events together. Such data sources are useful for decoupling the rate of inputs from the rate of processing.
So, you first create a data source that simply indicates what should be done when data comes in:
private var dataToProcess: SomeData?
private lazy var source = DispatchSource.makeUserDataAddSource(queue: processQueue)

func configure() {
    source.setEventHandler { [unowned self] in
        guard let data = self.syncQueue.sync(execute: { self.dataToProcess }) else { return }
        // process `data`
    }
    source.resume()
}
So, when there's data to process, we update our synchronized dataToProcess property and then tell the data source that there is something to process:
func processData(data: SomeData) {
    syncQueue.async { self.dataToProcess = data }
    source.add(data: 1)
}
Again, just like the previous example, we're using syncQueue to synchronize our access to a property across multiple threads. But this time we're synchronizing dataToProcess rather than the inProgress state variable from the first example. The idea is the same, though: we must be careful to synchronize our interaction with a property across multiple threads.
Anyway, using this pattern in the above scenario (input coming in at 60 fps, whereas processing can only handle 50 per second), the resulting performance is much closer to the theoretical max of 50 fps (I got between 42 and 48 fps, depending upon the queue priority), rather than 30 fps.
The latter approach can conceivably lead to more frames (or whatever you're processing) being handled per second, and it results in less idle time on the processing queue. Comparing the two alternatives: the former approach loses every other frame of data, whereas the latter loses a frame only when two separate sets of input data arrive before the processing queue becomes free, in which case they are coalesced into a single call to the dispatch source.

What is the best way to "rate limit" consumption of an Observable?

I have a bunch of events coming in and I have to execute ALL of them without loss, but I want to make sure that they are buffered and consumed at the appropriate time slots. Does anyone have a solution?
I can't find any Rx operators that do this without losing events (Throttle loses events). I've also considered Buffer, Delay, etc., but can't find a good solution.
I've tried to put a timer in the middle, but somehow it doesn't work at all:
GetInitSequence()
    .IntervalThrottle(TimeSpan.FromSeconds(5))
    .Subscribe(item =>
    {
        Console.WriteLine(DateTime.Now);
        // Process item
    });

public static IObservable<T> IntervalThrottle<T>(this IObservable<T> source, TimeSpan dueTime)
{
    return Observable.Create<T>(o =>
    {
        return source.Subscribe(x =>
        {
            new Timer(state => o.OnNext((T)state), x, dueTime, TimeSpan.FromMilliseconds(-1));
        }, o.OnError, o.OnCompleted);
    });
}
The question is not 100% clear, so I'm making some presumptions.
Observable.Delay is not what you want, because it creates a delay from when each event arrives, rather than creating even time intervals for processing.
Observable.Buffer is not what you want, because it passes you all the events in each given interval, rather than one at a time.
So I believe you're looking for a solution that creates some sort of metronome that ticks away and gives you one event per tick. This can be naively constructed using Observable.Interval for the metronome and Zip for connecting it to your source:
var source = GetInitSequence();
var trigger = Observable.Interval(TimeSpan.FromSeconds(5));
var triggeredSource = source.Zip(trigger, (s, _) => s);
triggeredSource.Subscribe(item => Console.WriteLine(DateTime.Now));
This will trigger every 5 seconds (in the example above), and give you the original items in sequence.
The only problem with this solution is that if you don't have any more source elements for (say) 10 seconds, when the source elements arrive they will be immediately sent out since some of the 'trigger' events are sitting there waiting for them. Marble diagram for that scenario:
source:  -a-b-c----------------------d-e-f-g
trigger: ----o----o----o----o----o----o----o
result:  ----a----b----c-------------d-e-f-g
This is a very reasonable issue. There are two questions here already that tackle it:
Rx IObservable buffering to smooth out bursts of events
A way to push buffered events in even intervals
The solution provided there is a main Drain extension method and a secondary Buffered extension. I've modified these to be far simpler (no need for Drain; just use Concat). Usage is:
var bufferedSource = source.StepInterval(TimeSpan.FromSeconds(5));
The extension method StepInterval:
public static IObservable<T> StepInterval<T>(this IObservable<T> source, TimeSpan minDelay)
{
    return source.Select(x => Observable.Empty<T>()
        .Delay(minDelay)
        .StartWith(x)
    ).Concat();
}
I know this could just be too simple, but would this work?
var intervaled = source.Do(x => { Thread.Sleep(100); });
Basically this just puts a minimum delay between values. Too simplistic?
Along the lines of Enigmativity's answer, if all you want to do is just Delay all of the values by a TimeSpan, I can't see why Delay is not the operator you want:
GetInitSequence()
    .Delay(TimeSpan.FromSeconds(5)) // ideally pass an IScheduler here
    .Subscribe(item =>
    {
        Console.WriteLine(DateTime.Now);
        // Process item
    });
How about Observable.Buffer? This should return all the events in each window as a single event.
var xs = Observable.Interval(TimeSpan.FromMilliseconds(100));
var bufferedStream = xs.Buffer(TimeSpan.FromSeconds(5));
bufferedStream.Subscribe(item => Console.WriteLine("Number of events in window: {0}", item.Count));
Maybe what you're asking isn't that clear. What is your code supposed to do? It looks like you're just delaying each event by creating a timer for it. It also breaks the semantics of the observable, since OnCompleted passes straight through and can reach the subscriber before a delayed OnNext.
Note this is also only as accurate as the timer used. Typically timers are accurate to at most 16 ms.
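To illustrate that point, a small sketch using the IntervalThrottle from the question (the behavior described follows from the per-event timer; it is a sketch, not tested output):
// Observable.Return emits one value and completes immediately. With
// IntervalThrottle, OnCompleted passes straight through while the OnNext
// is parked on a timer, so the subscriber can see completion first and
// the delayed value may never be delivered.
Observable.Return(42)
    .IntervalThrottle(TimeSpan.FromSeconds(5))
    .Subscribe(
        x => Console.WriteLine("OnNext(" + x + ")"),
        () => Console.WriteLine("OnCompleted")); // prints immediately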
Edit:
Your example becomes the following, where item contains all the events in the window:
GetInitSequence()
    .Buffer(TimeSpan.FromSeconds(5))
    .Subscribe(item =>
    {
        Console.WriteLine(DateTime.Now);
        // Process item
    });