In wpa_supplicant, when the host is about to go into SUSPEND mode, a scheduled scan is triggered to the driver if the driver has the capability to support it; otherwise a PNO scan is triggered.
I am trying to understand the difference between the two, and what other use cases there are for scheduled scan.
Thanks.
A scheduled scan can be triggered by the framework depending on features such as G-scan (a Google feature for periodic scanning). A PNO scan is one such scan, generally triggered when a connection is lost and the device has to reconnect while in the suspend state.
When a MATLAB parallel pool is started for the first time, it typically takes a few seconds. In a user-interactive application there is hence an incentive to make sure a parallel pool is running before the first demand for computational tasks arrives, so that the pool start-up time isn't added to the total time to respond to the request.
However, every programmatic action I've seen that starts a parallel pool, such as parpool, blocks execution until the pool is done starting up. This means that even if the user has no need to call upon the parallel pool for some time, they cannot do anything else, like begin setting up their computationally expensive request (filling in a user interface, for instance), until the parallel pool is done starting.
This is very frustrating! If it were any other time-consuming preparatory action, it could be done in the background using parfeval once a parallel pool was in place, and it wouldn't obstruct the user's workflow until some request actually called upon the result of that preparation. But because this task addresses the lack of a running parallel pool, it seems users must wait for something they may not actually need to use until long after the task is complete.
Is there any way around this apparent limitation on usability?
There is currently no way to launch a parallel pool in the background. There are a couple of potential mitigations that might help:
Don't ever explicitly call parpool - just let pool auto-creation kick in when you first hit a parallel language construct such as parfor, parfeval, or spmd.
If you're using a cluster which might not be able to service your request for workers for some time, you could use batch to launch the whole computation in the background. (This is probably only appropriate if you've got a fairly long-running computation).
I cannot seem to find any info on this question, so I thought I'd ask here.
(No reply here: https://lists.zephyrproject.org/pipermail/zephyr-devel/2017-June/007743.html)
When a driver (e.g. SPI or UART) is invoked through FreeRTOS using the vendor HAL, there are two options for waiting for completion:
1) Interrupt
2) Busy-waiting
My question is this:
If the driver is invoked using busy-waiting, does FreeRTOS have any knowledge of the busy-waiting (occurring in the HAL driver)? Does the task still get a time slot allocated (for doing the busy-waiting)? Is this how it works? (Presuming the FreeRTOS scheduler is preemptive.)
Now in Zephyr (and probably Mynewt), I can see that when the driver is called, Zephyr keeps track of the calling thread, which is then suspended (blocked state) until the operation finishes. The driver's interrupt routine then puts the calling thread back into the run queue when it is ready to proceed. This way no cycles are wasted. Is this correctly understood?
Thanks
Anders
I don't understand your question. In FreeRTOS, if a driver is implemented to perform a busy wait (i.e. the driver has no knowledge of the multithreading, so it is not event driven and instead uses a busy wait that consumes all CPU time), then the RTOS scheduler has no idea that is happening, so it will schedule the task just as it would any other task. Therefore, if the task is the highest-priority ready-state task, it will use all the CPU time, and if there are other tasks of equal priority, it will share the CPU time with those tasks.
On the other hand, if the driver is written to make use of an RTOS (be that Zephyr, FreeRTOS, or any other), then it can use the RTOS primitives to create a much more efficient event-driven execution pattern. I can't see how the different schedulers you mention would behave any differently in this respect. For example, how can Zephyr know that a task it didn't know the application writer was going to create was going to call a library function it had no previous knowledge of, and that the library function was going to use a busy wait?
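To illustrate the difference in a language-neutral way, here is a small Python sketch in which a threading.Event stands in for the semaphore a driver ISR would give; the names and timing are of course artificial, not any real HAL or RTOS API:

    import threading, time

    done = threading.Event()

    def fake_isr():
        """Stands in for the transfer-complete interrupt."""
        time.sleep(0.1)   # the hardware takes a while
        done.set()        # "give the semaphore" from the ISR

    threading.Thread(target=fake_isr).start()

    # Busy-waiting driver: the scheduler sees an always-runnable task
    # and happily burns its whole time slice polling.
    # while not done.is_set():
    #     pass

    # Event-driven driver: the task blocks, consumes no CPU, and is made
    # runnable again only when the "ISR" signals completion.
    done.wait()
    print('transfer complete')

The scheduler-level point is the same in any RTOS: a blocked task costs nothing, while a polling task consumes its full time slice.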
I'm working on a system that uses several hundred workers in parallel (physical devices evaluating small tasks). Some workers are faster than others, so I was wondering what the easiest way is to load-balance tasks across them without a priori knowledge of their speed.
I was thinking about keeping track of the number of tasks a worker is currently working on with a simple counter and then sorting the list to get the worker with the lowest active task count. This way slow workers would get some tasks but not slow down the whole system. The reason I'm asking is that the current round-robin method is causing hold-ups with some really slow workers (100 times slower than others) that keep accumulating tasks and blocking new tasks.
It should be a simple matter of sorting the list according to the current number of active tasks, but since I would be sorting the list several times a second (average work time per task is below 25 ms), I fear that this might be a major bottleneck. So is there a simple way of getting the worker with the lowest task count without having to sort over and over again?
EDIT: The tasks are pushed to the workers via an open TCP connection. Since the dependencies between the tasks are rather complex (exclusive resource usage) let's say that all tasks are assigned to start with. As soon as a task returns from the worker all tasks that are no longer blocked are queued, and a new task is pushed to the worker. The work queue will never be empty.
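To make the idea concrete, here is a minimal Python sketch of the selection I have in mind: the workers sit in a min-heap keyed on their active-task count, so picking the least-loaded worker is O(log n) per dispatch instead of a full sort (the class and method names are just illustrative):

    import heapq

    class LeastLoadedBalancer:
        """Pick the worker with the fewest active tasks without
        re-sorting the whole worker list on every dispatch."""

        def __init__(self, worker_ids):
            # worker_ids must be mutually comparable (e.g. ints or strings)
            # so heap entries with equal counts can still be ordered
            self.active = {w: 0 for w in worker_ids}  # current task count
            self.heap = [(0, w) for w in worker_ids]  # (count at push time, worker)
            heapq.heapify(self.heap)

        def acquire(self):
            """Return the least-loaded worker and charge one task to it."""
            while True:
                count, w = heapq.heappop(self.heap)
                if count == self.active[w]:  # entry is up to date
                    self.active[w] += 1
                    heapq.heappush(self.heap, (self.active[w], w))
                    return w
                # otherwise the entry went stale since it was pushed; drop it

        def release(self, w):
            """Call when a task result comes back from worker w."""
            self.active[w] -= 1
            heapq.heappush(self.heap, (self.active[w], w))

The idea would be to call acquire() just before pushing a task down a worker's TCP connection and release() when the result comes back; a 100-times-slower worker then never holds more than its share of outstanding tasks.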
How about this system:
Worker reaches the end of its task queue
Worker requests more tasks from load balancer
Load balancer assigns N tasks (where N is probably more than 1, perhaps 20-50 if these tasks are very small).
In this system, since you are assigning new tasks when the workers are actually done, you don't have to guess at how long the remaining tasks will take.
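A minimal Python sketch of that pull model, assuming (as in the question) the task source never runs dry; handle_request and worker.send are illustrative stand-ins for the real TCP layer:

    import queue

    BATCH = 20                # tasks handed out per request; tune to task size
    pending = queue.Queue()   # runnable tasks, refilled as dependencies clear

    def handle_request(worker):
        """Called when a worker reports that its local queue ran empty."""
        batch = []
        while len(batch) < BATCH:
            try:
                batch.append(pending.get_nowait())
            except queue.Empty:
                break
        worker.send(batch)    # push the whole batch over the worker's connection

Because a slow worker simply asks less often, throughput differences balance out without the balancer ever measuring speed.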
I think that you need to provide more information about the system:
How do you get a task to a worker? Does the worker request it or does it get pushed?
How do you know if a worker is out of work, or even how much work it is doing?
How are the physical devices modeled?
What you want to do is avoid tracking anything and find a more passive way to distribute the work.
I'm working on a project that uses Celery and RabbitMQ. I want to be able to control the interval at which the queue pushes tasks to the worker (celeryd).
It sounds like you're looking for this documentation on Periodic Tasks.
Essentially, you configure and run celerybeat, which fires off task executions at intervals.
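For example, with a recent Celery the interval goes in the beat_schedule setting (older versions spell it CELERYBEAT_SCHEDULE; the module and task names below are illustrative):

    # tasks.py -- start a worker and the beat scheduler with:
    #   celery -A tasks worker
    #   celery -A tasks beat
    from celery import Celery

    app = Celery('tasks', broker='amqp://guest@localhost//')  # RabbitMQ broker

    @app.task
    def process():
        print('processing one batch')

    app.conf.beat_schedule = {
        'run-process-every-30s': {
            'task': 'tasks.process',
            'schedule': 30.0,   # the interval, in seconds
        },
    }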
Word of warning:
If it's undesirable to be running your task multiple times concurrently, I'd suggest you follow a task locking recipe. If your workers are busy or offline, you may end up with a backlog of periodic tasks.
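A minimal sketch of that locking idea, assuming a Redis instance is available (the key name and timeout are illustrative; the recipe in the Celery docs does the same thing with a cache backend):

    from celery import Celery
    import redis

    app = Celery('tasks', broker='amqp://guest@localhost//')
    r = redis.Redis()

    @app.task
    def process():
        # SET NX takes the lock only if no other run holds it; the expiry
        # keeps a crashed worker from holding the lock forever.
        if not r.set('lock:process', '1', nx=True, ex=300):
            return  # a previous run is still going; skip this interval
        try:
            ...     # the actual work
        finally:
            r.delete('lock:process')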
Can WWF handle high throughput scenarios where several dozen records are 'actively' being processed in parallel at any one time?
We want to build a workflow process which handles a few thousand records per hour. Each record takes up to a minute to process, because it makes external web service calls.
We are testing Windows Workflow Foundation to do this. But our demo programs show that the processing of each record appears to run in sequence, not in parallel, when we use parallel activities to process several records at once within one workflow instance.
Should we use multiple workflow instances or parallel activities?
Are there any known patterns for high performance WWF processing?
You should definitely use a new workflow per record. Each workflow only gets one thread to run in, so even with a ParallelActivity they'll still be handled sequentially.
I'm not sure about the performance of Windows Workflow, but from what I heard about .NET 4 at Tech-Ed, its Workflow components will be dramatically faster than the ones from .NET 3.0 and 3.5. So if you really need a lot of performance, maybe you should consider waiting for .NET 4.0.
Another option could be to consider BizTalk. But it's pretty expensive.
I think the common pattern is to use one workflow instance per record. The workflow runtime runs multiple instances in parallel.
One workflow instance runs on one thread at a time. The parallel activity calls the Execute method of each child activity sequentially on this single thread. You may still get a performance improvement from the parallel activity, however, if the activities are asynchronous and spend most of their time waiting for an external process to finish its work. E.g. if an activity calls an external web method and then waits for a reply, it returns from the Execute method and does not occupy the thread while waiting for the reply, so another activity in the Parallel group can start its job (e.g. also a call to a web service) at the same time.