Sleep mode and duty cycling in UnetStack, and adding energy consumed in idle listening and sleep modes to a simple energy model (scheduler)

I have two questions:
We want to use a very low transmission duty cycle in our underwater sensor network, as in practice it is the power consumed in listening and sleep modes that will dominate our network lifetime.
I noticed the Scheduler commands (addsleep, showsleep, etc.) in the new version of the UnetStack simulator, version 3.2.0. I downloaded the latest version of the simulator and tried to use those commands, but they didn't work. I tried them both in the shell and inside Groovy scripts, and tried to import org.arl.unet.scheduler, but none of the Scheduler commands worked and I kept receiving errors.
For example, I tried addsleep 20.s.later, but the simulator does not recognise "later", and I also received errors for import org.arl.unet.scheduler.
I wonder if anyone can help me with this, for example how to use the addsleep command.
Another question:
Besides consuming energy in transmitting and receiving, our modem draws 2.5 mA from a 5 V supply while listening for the start of a packet, and can go to sleep and draw about 0.24 mA from a 5 V supply, with the ability to wake up and return to listening mode after a programmable time period.
So my question is: is there a way to account for the energy consumed in idle listening and sleeping in a simple energy model?
We implemented a very simple energy model, something like the following (found this example on Stack Overflow):
class MyHalfDuplexModem extends HalfDuplexModem {
    float energy = 1000                             // remaining energy budget (arbitrary units)
    @Override
    boolean send(Message m) {
        if (m instanceof TxFrameNtf) energy -= 10   // cost of transmitting a frame
        if (m instanceof RxFrameNtf) energy -= 1    // cost of receiving a frame
        return super.send(m)
    }
}
How do we add the energy consumed in idle listening and sleeping to the above code? Do we need to use something like WakeFromSleepNtf?
Thanks and any help is much appreciated.
Marwa

The scheduler service is usually hardware-dependent, as it requires interaction with the specific single-board computer (SBC) to put it into a sleep state and allow it to be woken up. On modems, this service is usually provided by the modem driver agent.
The HalfDuplexModem simulated modem doesn't provide this service, and so it won't work out of the box. Since HalfDuplexModem doesn't have an energy model built into it, "sleep" doesn't mean much to it. If you wanted to simulate networks where nodes sleep and consume less energy while sleeping, you could extend HalfDuplexModem to implement the SCHEDULER service. The service is quite simple, with just 4 messages (AddScheduledSleepReq, RemoveScheduledSleepReq, GetSleepScheduleReq and WakeFromSleepNtf). Your implementation could keep track of the energy used by each node based on whether it is sleeping, listening or transmitting, since you keep track of the sleep schedule and hence know how much time the node has been awake or asleep.
The addsleep, showsleep, etc. commands are simply convenience shortcuts in the shell extension that use the above 4 messages to do the actual work. They are enabled in the shell by loading SchedulerShellExt, and you can use the messages directly from agents or in simulation scripts.
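For the idle-listening and sleep costs, the key difference from the per-frame costs above is that the bookkeeping is time-based: you charge energy for the time spent in the previous state whenever the node changes state. Below is a minimal, framework-agnostic Python sketch of that accounting, using the currents quoted in the question (2.5 mA and 0.24 mA at 5 V); the UnetStack/Groovy wiring (handling AddScheduledSleepReq, emitting WakeFromSleepNtf, overriding send()) is left out and would simply drive these methods.

# Time-based energy bookkeeping: charge for the time spent in the current
# state whenever the state changes (sleep <-> listen) or a frame is sent/received.
LISTEN_W = 5.0 * 2.5e-3       # 12.5 mW while listening for the start of a packet
SLEEP_W  = 5.0 * 0.24e-3      # 1.2 mW while asleep

class EnergyModel:
    def __init__(self, budget_j=1000.0):
        self.energy = budget_j        # remaining energy in joules (assumed budget)
        self.state = "listen"
        self.t_last = 0.0             # simulation time of the last event, in seconds

    def _charge(self, now):
        # Charge for the time spent in the current state since the last event.
        dt = now - self.t_last
        power = SLEEP_W if self.state == "sleep" else LISTEN_W
        self.energy -= power * dt
        self.t_last = now

    def set_state(self, now, state):
        # state is "listen" or "sleep"; called from the sleep-schedule handling.
        self._charge(now)
        self.state = state

    def tx(self, now, joules=10.0):   # per-frame costs, as in the Groovy snippet above
        self._charge(now)
        self.energy -= joules

    def rx(self, now, joules=1.0):
        self._charge(now)
        self.energy -= joules

In the simulated modem, set_state() would be driven by the sleep schedule (when a scheduled sleep starts and when the wake-up notification goes out), while tx()/rx() correspond to the TxFrameNtf/RxFrameNtf handling already present in the send() override.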

Related

How to generate a delay using an eBPF kernel program

I'm trying to generate a delay in the acknowledgement using an eBPF kernel program for egress packets.
I'm running the Python + C program using bcc.
I tried the mdelay/msleep/udelay etc. functions from the delay.h library in C; it gives me an LLVM error: "Program using external function which can't be resolved".
Then I tried implementing the sleep functionality myself, using 3 variables:
tprev (gets the current time at the start of the program)
tnow (gets the current time when the loop starts and is updated with the current time on each iteration)
timer: the duration for which we want the program to be delayed.
The while loop is: while ((tnow - tprev) <= timer)
But the eBPF verifier treats it as an infinite loop and reports that an infinite loop was detected, even though it is not an infinite loop.
Is there a way to introduce a delay in the ACK, or a delay in general, in an eBPF program, and how?
The short answer is no (not in eBPF itself), at least not as of kernel 5.18. This is because eBPF programs, particularly those running in the network stack, are often called from code that should never sleep.
What is perhaps most useful in your case is that TC (Traffic Control) programs can ask TC to delay a packet for you. The actual delay happens outside of the eBPF program, in the TC subsystem. You can request that TC send the packet at a given time by setting __sk_buff->tstamp. Note: this only works for egress (outgoing) traffic, not ingress (incoming) traffic. This behavior can also be triggered via TC configuration without using eBPF.
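Since you are already using bcc, a sketch of this approach in Python + C might look like the following. It is untested and makes several assumptions: the interface name (eth0), the 5 ms delay, and that your kernel lets TC programs write skb->tstamp; the delay is only honoured if an EDT-aware qdisc such as fq is installed on the device (tc qdisc replace dev eth0 root fq). The attachment boilerplate follows bcc's networking examples.

from bcc import BPF
from pyroute2 import IPRoute

prog = r'''
#include <uapi/linux/bpf.h>
#include <uapi/linux/pkt_cls.h>

int delay_egress(struct __sk_buff *skb) {
    /* Ask the qdisc to release this packet ~5 ms from now (earliest departure time). */
    skb->tstamp = bpf_ktime_get_ns() + 5 * 1000 * 1000ull;
    return TC_ACT_OK;
}
'''

b = BPF(text=prog)
fn = b.load_func("delay_egress", BPF.SCHED_CLS)

ipr = IPRoute()
idx = ipr.link_lookup(ifname="eth0")[0]          # interface name is an assumption
ipr.tc("add", "clsact", idx)                     # add the TC hook points
ipr.tc("add-filter", "bpf", idx, ":1", fd=fn.fd, name=fn.name,
       parent="ffff:fff3", classid=1, direct_action=True)   # ffff:fff3 = clsact egress

The eBPF program itself never waits; it only stamps a future departure time on each packet, and the queueing discipline does the actual delaying.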
I tried the mdelay/msleep/udelay etc. functions from the delay.h library in C; it gives me an LLVM error: "Program using external function which can't be resolved".
Yes, you can't use the standard library in eBPF programs, since those functions use kernel facilities that are unavailable in eBPF.
Side notes:
We do have "sleepable" programs, but only syscall, LSM and tracing programs can be sleepable. "Sleepable" doesn't mean we can call some sort of sleep helper with a duration; it means we can call helper functions which may in turn sleep (for example bpf_copy_from_user_stack). So you don't have control over how long the program will sleep.
Another time-related feature is BPF timers, which do allow you to set a timer and execute a callback after a given time. The limitation here is that you can't pass any arguments to this callback, and it is called without a context. So after setting the timer, the original program will continue and return as usual.

ESP8266 Micropython scheduler

I am looking for an easy way to schedule a daily reboot of my ESP8266, currently running on Micropython.
I did a fair amount of research and haven't found anything that I can use or understand.
I'm wondering whether this needs to be done through Micropython or another systems language.
Worst case, I'll create an infinite loop that checks the time of day, but that seems very extreme and not the best use of the RAM.
The reason for the reboot is that the controller is going to be unattended for long periods of time, and I need it to reset daily in case it crashes, so that I don't go longer than 24 hours without the data it provides.
I have looked at uasyncio but don't understand it.
First, you should decide which timer to use; the drawbacks of each option are:
loop - sleep from the time module stops execution of the current thread
millis - time or ticks_ms from the time module are fine, but you have to handle the millisecond counter wrapping around
# using the MicroPython time module
import utime as time
secs = time.time()
print(secs)      # seconds
millis = time.ticks_ms()
print(millis)    # milliseconds
rtc - an RTC module is needed
web - a PHP timer reached over Wi-Fi is necessary
system - a private Wi-Fi network is necessary, and an external computer has to be always on
gps - a GPS module and signal are needed
Second, choose one timer and hang the reboot on it by specifying a certain time, then arrange the reset:
# either a hard reset, like power off-on
import machine
machine.reset()
# or a soft reset
import sys
sys.exit()
Third, shift the next scheduled reboot time past the current one; otherwise the reboot will keep repeating until the time you specified has passed.
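Putting this together with the millis option, a minimal MicroPython sketch (untested) that reboots roughly every 24 hours could look like this; ticks_diff handles the counter wrap-around as long as the interval stays well below the port's tick period.

# Reboot roughly every 24 hours, based on the millisecond tick counter.
import utime as time
import machine

REBOOT_MS = 24 * 60 * 60 * 1000        # one day in milliseconds
start = time.ticks_ms()

while True:
    # ... do the controller's normal work here ...
    if time.ticks_diff(time.ticks_ms(), start) >= REBOOT_MS:
        machine.reset()                # hard reset, like power off-on
    time.sleep(1)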
According to the docs, you can use the watchdog timer machine.WDT. However this forum discussion suggests that the current ESP8266 Micropython doesn't actually do what the docs say it does:
OK, so it seems that the watchdog is not fully implemented on the esp8266 as it is used internally. It appears that all you can do is trigger it by disabling interrupts, not sure how useful that would be.
Normally you would configure the watchdog with your chosen timeout, then make sure your code calls its feed method at a shorter interval than the timeout setting. If your code has crashed and the timeout expires, the watchdog resets the system. It sounds as if this isn't fully implemented on the ESP8266 port at the moment.
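For reference, the documented usage pattern looks roughly like the sketch below (untested; per the discussion above the ESP8266 port may not honour it, and unlike e.g. the ESP32 the ESP8266 does not let you choose the timeout).

import utime as time
import machine

wdt = machine.WDT()     # on ports that support it, a timeout in ms can be passed here

while True:
    # ... one iteration of the real work; if this loop ever hangs, the WDT resets the board ...
    wdt.feed()          # must be called more often than the watchdog timeout
    time.sleep(1)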
You may find more information and workarounds on the Micropython forum, and if not you'll probably get a better response to any questions there.

SPI bit banging; MCP3208; Raspberry Pi; error

I am using a Raspberry Pi 2 board with Raspbian loaded. I need to do SPI by bit banging and interface an MCP3208.
I have taken code from GitHub. It is written for the MCP3008 (10-bit ADC).
The only change I made in the code is that instead of calling:
adcValue = recvBits(12, clkPin, misoPin)
I called adcValue = recvBits(14, clkPin, misoPin), since I have to receive 14 bits of data.
Problem: it keeps returning random data ranging from 0 to 10700, even though the data should be at most 4095. This means I am not reading the data correctly.
I think the problem is that the MCP3208 has a maximum clock frequency of 2 MHz, but in the code there is no delay between two consecutive data reads or writes. I think I need to add a delay of about 0.5 µs whenever I transition the clock, since I am operating at 1 MHz.
For a small delay I am currently reading "Accurate Delays on the Raspberry Pi".
Excerpt:
...when we need accurate short delays in the order of microseconds, it's not always the best way, so to combat this, after studying the BCM2835 ARM Peripherals manual and chatting to others, I've come up with a hybrid solution for wiringPi. What I do now is for delays of under 100μS I use the hardware timer (which appears to be otherwise unused), and poll it in a busy-loop, but for delays of 100μS or more, then I resort to the standard nanosleep(2) call.
I finally found some Python code that simplifies reading from the MCP3208, thanks to RaresPlescan:
https://github.com/RaresPlescan/daisypi/blob/master/sense/mcp3208/adc_3.py
I had a data logger built on the Pi that was using an MCP3008. The COTS data logger I was trying to replicate had better resolution, so I started looking for a 12-bit ADC and found the MCP3208. I literally swapped the 3008 out for the 3208, and with this code I have achieved better resolution than the COTS data logger.
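For anyone who just needs the idea rather than the full file, here is a stripped-down sketch of that kind of bit-banged read, with the short delay between clock transitions discussed in the question. It is untested; the RPi.GPIO library and the BCM pin numbers are assumptions, and time.sleep cannot actually hit 0.5 µs exactly, it simply guarantees the clock stays slow enough for the ADC.

import time
import RPi.GPIO as GPIO

CLK, MISO, MOSI, CS = 11, 9, 10, 8     # BCM pin numbers: adjust to your wiring

GPIO.setmode(GPIO.BCM)
GPIO.setup([CLK, MOSI, CS], GPIO.OUT)
GPIO.setup(MISO, GPIO.IN)

def clock_pulse(delay):
    GPIO.output(CLK, 1)
    time.sleep(delay)
    GPIO.output(CLK, 0)
    time.sleep(delay)

def read_mcp3208(channel, delay=0.5e-6):
    GPIO.output(CS, 1)
    GPIO.output(CLK, 0)
    GPIO.output(CS, 0)                 # select the ADC
    # start bit, single-ended mode, 3-bit channel select
    for bit in (1, 1, (channel >> 2) & 1, (channel >> 1) & 1, channel & 1):
        GPIO.output(MOSI, bit)
        clock_pulse(delay)
    value = 0
    # sample period + null bit + 12 data bits: clock 14 times, keep the low 12 bits
    for _ in range(14):
        clock_pulse(delay)
        value = (value << 1) | GPIO.input(MISO)
    GPIO.output(CS, 1)
    return value & 0xFFF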

OS system calls in x86

While working on an educational, simplistic RISC processor, I was wondering how system calls work when implementing my software interrupt function. For example, hypothetically let's say our program calls sys_end, which ends the current process. Now I know this would go through a vector table and then to the code that ends the current process.
My question is: does the code that ends the process run in supervisor mode or in user mode? Nowhere I look seems to specify this. I'm assuming that if it runs in normal user mode, that could pose a very significant problem, as a user-mode process could do something evil like:
for (i = 0; i < 10000; i++) {
    sys_fork();   // creates a child process
}
which could be very bad. I thought the OS would have some say in how many times a process could repeat itself, not to mention the other harmful things a process could do by changing the code in the system call itself.
System calls run in supervisor mode for the duration of the system call. Supervisor mode is necessary for accessing hardware (the screen, the keyboard) and for keeping user processes isolated from each other.
There are (or can be configured) limits on the amount of CPU, the number of processes, etc. that a user process may use or request, which can offer some protection against the kind of runaway program you describe.
But the default Linux configuration allows 10k processes to be created in a tight loop; I've done it myself (both intentionally and accidentally).
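One of those limits is easy to inspect from user space. A small Python sketch (Linux-specific) that reads RLIMIT_NPROC, the per-user cap on processes and threads that is what ultimately stops a runaway fork loop:

# Print the soft and hard limits on the number of processes for this user.
import resource

soft, hard = resource.getrlimit(resource.RLIMIT_NPROC)
print("max processes for this user: soft=%s hard=%s" % (soft, hard))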

Embedded Linux LED-flashing daemon: does it exist?

I've seen embedded boards before that have an LED that flashes like a heartbeat to show that the board is still executing code. I'd like to do something similar on an embedded Linux board I'm working on. Given that it's a fairly trivial bit of code, it seems likely to me that someone has already written a daemon for Linux that does this, but I haven't been able to find any evidence.
Note that OS X Server's heartbeatd and the High-Availability Linux heartbeat daemon are not what I'm looking for-- they both coordinate system availability over IP networks, or something like that.
Assuming what I'm looking for doesn't exist, I'm also interested in advice about how to write a daemon that toggles a pin while minimizing resource usage. At what update rate does cron become a stupid idea?
(I'd also rather not hear gushing about the LED on the sleeping MacBook Pro, if that seems relevant for some reason.)
Thanks.
The LED heartbeat is a built-in kernel function. Assuming you have a device driver for your LED, turning on the heartbeat is done thus:
$ echo "heartbeat" > /sys/class/leds/MyLed/trigger
To see the list of available triggers (MMC activity, heartbeat, etc.):
$ cat /sys/class/leds/MyLed/trigger
See drivers/leds/ledtrig-heartbeat.c and http://www.avrfreaks.net/wiki/index.php/Documentation:Linux/LEDs
The interesting thing about the heartbeat is that the pattern is dynamic. The basic pattern is thump-thump-pause, just like a human heartbeat. But the rate of the heartbeat is controlled by the load average! Light loads beat at about 50 beats per minute. Heavier loads cause faster beating until it maxes out at about 180 bpm.
I wouldn't use cron; it's just not the right tool. A very simple solution is to just run a shell script from your inittab.
Example:
#!/bin/sh
while true
do
    logger "blink!"   # replace with the command that toggles your LED
    sleep 1
done
Save this to /bin/blink.sh, add the following line to your inittab, and have init reread the inittab by running init q.
bl:2345:respawn:/bin/blink.sh
Of course you have to adjust the blink.sh script to your environment. How an LED can be toggled from user space (device driver file, sysfs entry, ...) depends heavily on the particular board.
If you need something more efficient, you might rewrite the loop in C, but it might not be worth the effort.
One thing to think about is what you want to signal with a pulsing LED. With the approach outlined above we can only show that the board is still alive (the kernel is running, the process executing blink.sh is scheduled, and blink.sh is doing what it is supposed to do). For some use cases this might be fine, but more often you actually want to signal that the application running on the embedded board is still OK (hasn't hung, hasn't crashed, ...). To implement such functionality you need to integrate the code that toggles the LED into the main loop of your application.
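A minimal Python sketch of that idea, assuming the LED is exposed through sysfs (the path is an assumption, and its trigger should be set to "none" first so nothing else drives it):

# Toggle the LED from the application's own main loop, so the blinking
# stops as soon as the application hangs or crashes.
import time

LED = "/sys/class/leds/MyLed/brightness"   # adjust for your board

state = 0
while True:
    # ... one iteration of the application's real work ...
    state ^= 1
    with open(LED, "w") as f:
        f.write(str(state))
    time.sleep(0.5)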