I am looking for an easy way to schedule a daily reboot of my ESP8266, currently running MicroPython.
I did a fair amount of research and haven't found anything that I can use or understand.
I'm wondering whether this needs to be done through MicroPython or some other, lower-level mechanism.
Worst case, I'll create an infinite loop that checks the time of day, but that seems extreme and not the best use of the RAM.
The reason for the reboot is that the controller is going to be unattended for long periods of time, and I need it to reset daily in case it crashes, so I don't go longer than 24 hours without the data it is currently providing.
I have looked at uasyncio but don't understand it.
First, you should decide which timer to use; here are the options and their drawbacks:
loop - sleep() from the time module stops execution of the current thread
millis - time() or ticks_ms() from the time module are fine, but you have to handle the ticks_ms() wrap-around (see the sketch after this list)
# as a MicroPython library
import utime as time
secs = time.time()
print(secs)    # seconds
millis = time.ticks_ms()
print(millis)  # milliseconds
rtc - an external RTC module is needed
web - a remote (e.g. PHP) timer reachable over Wi-Fi is necessary
system - a private Wi-Fi network is necessary and an external computer has to be always on
gps - a GPS module and signal are needed
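For the millis option, intervals should be computed with ticks_diff() so the counter wrap-around is handled for you. A minimal self-contained sketch (the one-second sleep just stands in for real work):

# measure an elapsed interval safely across the ticks_ms() wrap-around
import utime as time

start = time.ticks_ms()
time.sleep(1)                                      # stand-in for your own work
elapsed = time.ticks_diff(time.ticks_ms(), start)  # correct even if the counter wrapped
print(elapsed)  # ~1000 ms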
Second, choose one timer and hang the reboot on it by specifying a certain time, then arrange the reset:
# either hard reset, like power off-on
import machine
machine.reset()

# or soft reset
import sys
sys.exit()
Third, offset the start of the next reboot so that it falls beyond the current action;
otherwise reboots will keep repeating until the time window you specified has passed.
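Putting the steps together with the millis option, here is a minimal sketch of a daily reboot check. The 24-hour limit and the once-a-minute polling are arbitrary choices, and the real application work would go inside the loop:

# minimal sketch: hard-reset the board after 24 hours of uptime (millis option)
import utime as time
import machine

UPTIME_LIMIT_MS = 24 * 60 * 60 * 1000          # reset once a day
boot_ticks = time.ticks_ms()                   # reference point taken at boot

while True:
    # ... normal application work goes here ...
    # ticks_diff() handles the counter wrap-around, as long as the interval
    # stays under half the port's ticks period
    if time.ticks_diff(time.ticks_ms(), boot_ticks) > UPTIME_LIMIT_MS:
        machine.reset()                        # hard reset, like power off-on
    time.sleep(60)                             # check roughly once a minute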
According to the docs, you can use the watchdog timer machine.WDT. However, this forum discussion suggests that the current ESP8266 MicroPython doesn't actually do what the docs say it does:
OK, so it seems that the watchdog is not fully implemented on the
esp8266 as it is used internally.
It appears that all you can do is trigger it by disabling interrupts,
not sure how useful that would be.
Normally you would configure the watchdog with your chosen timeout then make sure your code calls its feed method at a shorter interval than the timeout setting. If your code has crashed and the timeout expires, the watchdog resets the system. It sounds as if this isn't fully implemented on the ESP8266 version at the moment.
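For reference, on ports where machine.WDT is fully implemented (the ESP32 port, for example), the usual pattern looks roughly like the following; the 10-second timeout and one-second work interval are arbitrary choices:

# typical watchdog pattern on ports with a fully working machine.WDT (e.g. ESP32)
from machine import WDT
import utime as time

wdt = WDT(timeout=10000)   # reset the board if not fed within 10 s

while True:
    # ... normal application work ...
    wdt.feed()             # must be called more often than the timeout
    time.sleep(1)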
You may find more information and workarounds on the MicroPython forum, and if not you'll probably get a better response to any questions there.
I have two questions:
We want to consider a very low transmission duty cycle in our underwater sensor network, since the power consumed in listening and sleep modes will dominate our network lifetime in practice.
I noticed the Scheduler commands in the new version of the UnetStack simulator (version 3.2.0): addsleep, showsleep, etc. I downloaded the latest version of the simulator and tried to use those commands, both in the shell and inside Groovy scripts, and also tried to import org.arl.unet.scheduler, but none of the Scheduler commands worked and I kept receiving errors.
For example, I tried addsleep 20.s.later, but the simulator does not recognise "later"; I also received errors for import org.arl.unet.scheduler.
I wonder if anyone can help me with this, for example how to use the addsleep command.
Another question:
Besides consuming energy in transmitting and receiving, our modem draws 2.5 mA from a 5V supply while listening for the start of a packet, and can go to sleep and draw about 0.24 mA from a 5V supply, with the ability to wake up and return to the listening mode after a programmable time period.
So my question is, is there a way to consider energy consumed in idle listening and sleeping in a simple energy model?
We implemented a very simple energy model, something like the following (we found this example on Stack Overflow):
class MyHalfDuplexModem extends HalfDuplexModem {
    float energy = 1000
    @Override
    boolean send(Message m) {
        if (m instanceof TxFrameNtf) energy -= 10
        if (m instanceof RxFrameNtf) energy -= 1
        return super.send(m)
    }
}
How do we add the energy consumed in idle listening and sleeping to the above code? Do we need to use something like WakeFromSleepNtf?
Thanks and any help is much appreciated.
Marwa
The scheduler service is usually hardware dependent, as it requires interaction with the specific single board computer (SBC) to put it into a sleep state and allow it to be woken up. On modems, this is usually the modem driver agent.
The HalfDuplexModem simulated modem doesn't provide this service, and so it won't work out of the box. Since HalfDuplexModem doesn't have an energy model built into it, "sleep" doesn't mean much to it. If you wanted to simulate networks where nodes slept and consumed less energy during the sleep, it would be possible to extend the HalfDuplexModem to implement the SCHEDULER service. The service is quite simple, with just 4 messages (AddScheduledSleepReq, RemoveScheduledSleepReq, GetSleepScheduleReq and WakeFromSleepNtf). Your implementation could keep track of the energy used by each node, based on whether it is sleeping, listening or transmitting, since you can keep track of the sleep schedule and hence know how much time the node has been awake/sleeping.
The addsleep, showsleep, etc. commands are simply convenience shortcuts in the shell extension that use the above 4 messages to do the actual work. They are enabled in the shell by loading the SchedulerShellExt, and you can use the messages directly from agents or in simulation scripts.
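As a rough illustration of the bookkeeping itself (this is not UnetStack API; the actual agent extending HalfDuplexModem would be written in Groovy or Java, and every name below is a placeholder): power is supply voltage times current draw, so with the figures from the question roughly 5 V x 2.5 mA = 12.5 mW while listening and 5 V x 0.24 mA = 1.2 mW while sleeping, and the energy charged to a node is that power multiplied by the time spent in each state. A minimal sketch in Python of that arithmetic:

# back-of-the-envelope energy bookkeeping per node (placeholder names; units: W, s, J)
V_SUPPLY = 5.0                     # volts
P_LISTEN = V_SUPPLY * 2.5e-3       # ~0.0125 W while listening for packets
P_SLEEP  = V_SUPPLY * 0.24e-3      # ~0.0012 W while sleeping

class EnergyBook:
    def __init__(self, budget_j=1000.0):
        self.energy = budget_j

    def charge_tx(self, e_tx_j=10.0):
        self.energy -= e_tx_j      # per-frame transmit cost, as in the question's model

    def charge_idle(self, t_listen_s, t_sleep_s):
        # charge the time spent listening/sleeping, e.g. derived from the sleep schedule
        self.energy -= P_LISTEN * t_listen_s + P_SLEEP * t_sleep_s

book = EnergyBook()
book.charge_idle(t_listen_s=3600.0, t_sleep_s=82800.0)   # one day: 1 h awake, 23 h asleep
print(book.energy)                                       # remaining budget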
I'm integrating watchman via the socket/bser interface in a JVM program.
I'm seeing odd timing where:
A file is written to by the build system (a small text file)
I get a watchman notification on the bser interface
Thread A listening for bser subscription notifications puts the update onto a queue for a separate thread
Thread B reads the queue, reads the changed file, and then puts the file's data on the wire
However, somehow, Thread B is reading an empty file.
Which, I assume is validly empty at some point, e.g. the IO/syscalls might be:
Clear the file contents
Write chunk 1
Write chunk 2
Close the file
And I assume my Thread B is reading the file between steps 1 and 2. Or maybe 1 and 4, if 4 is when the result is flushed.
My confusion is twofold:
1) I thought watchman's default 20ms settle wait would account for things like this, and that my thread A would only see an update (let alone my thread B do its read) after step 4, when the data is done being written to the file.
2) Even if watchman did tell me "too soon" about the 1st syscall (say step 1), and I read the results while it was an empty file, there should be another syscall/watchman notification that "btw, the file has some content now".
FWIW/oddly enough, I was seeing this very same behavior when using the Java WatchService API, where I would get an inotify event, but read a file "too soon", and so get either empty or partial results, and then no follow up inotify event when the rest of the data was available.
I assumed this was a fluke/nuance of the WatchService, so I solved it at the time by checking the file mod time before reading it, and just waiting to ensure mod time >2 seconds old before assuming the file is "done" being written.
(Note that this also handled ~100mb+ files being written, where the build process might write a chunk of data every 100ms+, but with WatchService I was seeing 100s of inotify notifications for what was essentially a single continuous write.)
When I ported my WatchService code to watchman, I dropped this "ensureSettled" hack, because I assumed watchman's 20ms settle period (which is way lower than the 2s I was using, but hey, it's the default) plus its general robustness compared to the somewhat beta WatchService would mean it wouldn't be a problem.
But within ~a day of using the watchman-ported code, I'm seeing empty file reads, just like I was with the WatchService.
Any ideas about what I'm missing?
I can add back the ensureSettled hack, but at this point I'm curious about what is going on.
The docs aren't very clear on this, sorry!
Dispatching of subscription notifications is subject to the settle timeout, but since file updates are non-atomic it's likely that the default 20ms kicks in before the file contents are visible to you; under the covers, the kernel generates a series of notifications for the various mutations that you're doing, so if the truncate takes 20ms before you write (or perhaps flush) the data out, you'll likely get a notification "in the middle".
This stuff is also operating system dependent. Here's an example of a recently discovered and resolved issue: https://github.com/facebook/watchman/commit/bac383c751b248ae742a2a20df3e8272238c0ae2
It doesn't sound like quite the same thing as you're experiencing; it just adds some color to this discussion.
If you already have code to manage the settling in your client, then it may be easier for you to add that back; we do this in watchman-make for example.
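If you do reinstate your own settling, the check you described is small. Here is a minimal sketch of that idea (in Python for brevity; your JVM client would do the equivalent), where the 2-second quiet period, the poll interval and the helper name are all placeholders: wait until the file's mtime is at least that old before reading it.

# wait until a file has stopped changing for `quiet_s` seconds, then read it
import os
import time

def read_when_settled(path, quiet_s=2.0, poll_s=0.1):
    while time.time() - os.path.getmtime(path) < quiet_s:
        time.sleep(poll_s)            # still being written; poll again
    with open(path, "rb") as f:
        return f.read()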
You may also wish to try setting https://facebook.github.io/watchman/docs/config.html#settle in a .watchmanconfig file in the root of the directory tree that you're watching and leave that to the watchman server. If/when you change this setting, you will need to delete and restart the watch.
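For example, a .watchmanconfig containing just {"settle": 500} at the root of the watched tree asks the server to wait for 500 ms of quiet before dispatching; the value is in milliseconds, and 500 is only an illustrative choice to be tuned to how long your build tool typically takes to finish writing a file.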
Which you choose depends on how you want to trade ease of configuration against volume of code you want to maintain and (perhaps) volume of support questions from your user base if the .watchmanconfig isn't correctly configured for them.
Note that you can use the command invocation from https://facebook.github.io/watchman/docs/cmd/log-level.html to see the debug logging for the kernel notifications as they come in in real time; this may be helpful for you in understanding exactly which notifications are coming in and when.
Just curious, are you using https://github.com/facebook/watchman/tree/master/java to talk to the watchman server?
I am using a Raspberry Pi 2 board with Raspbian loaded. I need to do SPI by bit banging and interface an MCP3208.
I have taken code from GitHub. It is written for the MCP3008 (10-bit ADC).
The only change I made in the code is that instead of calling:
adcValue = recvBits(12, clkPin, misoPin)
I called adcValue = recvBits(14, clkPin, misoPin), since I have to receive 14 bits of data.
Problem: it keeps returning random data ranging from 0 to 10700, even though the data should be at most 4095. This means I am not reading the data correctly.
I think the problem is that the MCP3208 has a maximum clock frequency of 2 MHz, but in the code there is no delay between two consecutive reads or writes. I think I need to add a delay of about 0.5 µs whenever I toggle the clock, since I am operating at 1 MHz.
For a small delay I am currently reading Accurate Delays on the Raspberry Pi
Excerpt:
...when we need accurate short delays in the order of microseconds, it’s
not always the best way, so to combat this, after studying the BCM2835
ARM Peripherals manual and chatting to others, I’ve come up with a
hybrid solution for wiringPi. What I do now is for delays of under
100μS I use the hardware timer (which appears to be otherwise unused),
and poll it in a busy-loop, but for delays of 100μS or more, then I
resort to the standard nanosleep(2) call.
I finally found some Python code that simplifies reading from the MCP3208, thanks to RaresPlescan:
https://github.com/RaresPlescan/daisypi/blob/master/sense/mcp3208/adc_3.py
I had a data logger built on the Pi that was using an MCP3008. The COTS data logger I was trying to replicate had better resolution, so I started looking for a 12-bit ADC and found the MCP3208. I literally swapped the 3008 out for the 3208, and with this code I have achieved better resolution than the COTS data logger.
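For anyone wiring this up from scratch, below is a minimal bit-banged sketch of the MCP3208 transaction; the BCM pin numbers and the clock rate are placeholder choices. Five command bits go out (start, single-ended, three channel-select bits), then 14 clocks come back covering the extra sample-and-hold clock, the null bit and the 12 data bits, so the result is masked to 0xFFF; leaving those two leading bits in is one way to end up with readings above 4095. time.sleep() is nowhere near microsecond-accurate, but here it only needs to keep the clock well below the 2 MHz limit.

# bit-banged single-ended read from an MCP3208 (BCM pin numbers are placeholders)
import time
import RPi.GPIO as GPIO

CS, CLK, MOSI, MISO = 8, 11, 10, 9
HALF_PERIOD = 0.000005                  # 5 us per half clock -> ~100 kHz, well below 2 MHz

def pulse_clock():
    GPIO.output(CLK, 1); time.sleep(HALF_PERIOD)
    GPIO.output(CLK, 0); time.sleep(HALF_PERIOD)

def read_mcp3208(channel):
    GPIO.output(CS, 0)                  # chip select is active low
    # command: start bit, single-ended bit, then the 3-bit channel number (MSB first)
    for bit in (1, 1, (channel >> 2) & 1, (channel >> 1) & 1, channel & 1):
        GPIO.output(MOSI, bit)
        pulse_clock()                   # the MCP3208 latches DIN on the rising edge
    value = 0
    for _ in range(14):                 # sample/hold clock + null bit + 12 data bits
        pulse_clock()                   # DOUT changes on the falling edge
        value = (value << 1) | GPIO.input(MISO)
    GPIO.output(CS, 1)
    return value & 0xFFF                # keep only the 12 data bits

GPIO.setmode(GPIO.BCM)
GPIO.setup((CLK, MOSI), GPIO.OUT, initial=0)
GPIO.setup(CS, GPIO.OUT, initial=1)     # deselected until a read starts
GPIO.setup(MISO, GPIO.IN)
print(read_mcp3208(0))
GPIO.cleanup()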
Is it safe to use the low_latency tty mode with Linux serial ports? The tty_flip_buffer_push function is documented that it "must not be called from IRQ context if port->low_latency is set." Nevertheless, many low-level serial port drivers call it from an ISR whether or not the flag is set. For example, the mpc52xx driver calls flip buffer unconditionally after each read from its FIFO.
A consequence of the low latency flip buffer in the ISR is that the line discipline driver is entered within the IRQ context. My goal is to get latency of one millisecond or less, reading from a high speed mpc52xx serial port. Setting low_latency achieves the latency goal, but it also violates the documented precondition for tty_flip_buffer_push.
This question was asked on linux-serial on Fri, 19 Aug 2011.
No, low latency is not safe in general.
However, in the particular case of 3.10.5 low_latency is safe.
The comments above tty_flip_buffer_push read:
"This function must not be called from IRQ context if port->low_latency is set."
However, the code (3.10.5, drivers/tty/tty_buffer.c) contradicts this:
void tty_flip_buffer_push(struct tty_port *port)
{
        struct tty_bufhead *buf = &port->buf;
        unsigned long flags;

        spin_lock_irqsave(&buf->lock, flags);
        if (buf->tail != NULL)
                buf->tail->commit = buf->tail->used;
        spin_unlock_irqrestore(&buf->lock, flags);

        if (port->low_latency)
                flush_to_ldisc(&buf->work);
        else
                schedule_work(&buf->work);
}
EXPORT_SYMBOL(tty_flip_buffer_push);
The use of spin_lock_irqsave/spin_unlock_irqrestore makes this code safe to call from interrupt context.
There is a test for low_latency and if it is set, flush_to_ldisc is called directly. This flushes the flip buffer to the line discipline immediately, at the cost of making the interrupt processing longer. The flush_to_ldisc routine is also coded to be safe for use in interrupt context. I guess that an earlier version was unsafe.
If low_latency is not set, then schedule_work is called. Calling schedule_work is the classic way to invoke the "bottom half" handler from the "top half" in interrupt context. This causes flush_to_ldisc to be called from the "bottom half" handler at the next clock tick.
Looking a little deeper, both the comment and the test seem to be in Alan Cox's original e0495736 commit of tty_buffer.c. This commit was a re-write of earlier code, so it seems that at one time there wasn't a test. Whoever added the test and fixed flush_to_ldisc to be interrupt-safe did not bother to fix the comment.
So, always believe the code, not the comments.
However, in the same code in 3.12-rc* (as of October 23, 2013) it looks like the problem was opened again when the spin_lock_irqsave's in flush_to_ldisc were removed and mutex_locks were added. That is, setting UPF_LOW_LATENCY in the serial_struct flags and calling the TIOCSSERIAL ioctl will again cause "scheduling while atomic".
The latest update from the maintainer is:
On 10/19/2013 07:16 PM, Jonathan Ben Avraham wrote:
> Hi Peter,
> "tty_flip_buffer_push" is called from IRQ handlers in most drivers/tty/serial UART drivers.
>
> "tty_flip_buffer_push" calls "flush_to_ldisc" if low_latency is set.
> "flush_to_ldisc" calls "mutex_lock" in 3.12-rc5, which cannot be used in interrupt context.
>
> Does this mean that setting "low_latency" cannot be used safely in 3.12-rc5?
Yes, I broke low_latency.
Part of the problem is that the 3.11- use of low_latency was unsafe; too many shared
data areas were simply accessed without appropriate safeguards.
I'm working on fixing it but probably won't make it for 3.12 final.
Regards,
Peter Hurley
So, it looks like you should not depend on low_latency unless you are sure that you are never going to change your kernel from a version that supports it.
Update: February 18, 2014, kernel 3.13.2
Stanislaw Gruszka wrote:
Hi,
setserial has low_latency option which should minimize receive latency
(scheduler delay). AFAICT it is used if someone talk to external device
via RS-485/RS-232 and need to have quick requests and responses . On
kernel this feature was implemented by direct tty processing from
interrupt context:
void tty_flip_buffer_push(struct tty_port *port)
{
        struct tty_bufhead *buf = &port->buf;

        buf->tail->commit = buf->tail->used;
        if (port->low_latency)
                flush_to_ldisc(&buf->work);
        else
                schedule_work(&buf->work);
}
But after 3.12 tty locking changes, calling flush_to_ldisc() from
interrupt context is a bug (we got scheduling while atomic bug report
here: https://bugzilla.redhat.com/show_bug.cgi?id=1065087 )
I'm not sure how this should be solved. After Peter get rid all of those
race condition in tty layer, we probably don't want go back to use
spin_lock's there. Maybe we can create WQ_HIGHPRI workqueue and schedule
flush_to_ldisc() work there. Or perhaps users that need to low latency,
should switch to threaded irq and prioritize the serial irq to meet
requirements. Anyway setserial low_latency is now broken and all who used
this feature in the past can not do this any longer on 3.12+ kernels.
Thoughts ?
Stanislaw
A patch has been posted to LKML to address the problem. It removes the generic code for handling low_latency but keeps the parameter for the low-level drivers to use.
http://www.kernelhub.org/?p=2&msg=419071
I tried forcing low_latency on Linux 3.12 with serial console. The kernel was very unstable. If preemption was enabled, it would hang after a few minutes of use.
So the answer for now is to stay away.
I've seen embedded boards before that have an LED that flashes like a heartbeat to show that the board is still executing code. I'd like to do something similar on an embedded Linux board I'm working on. Given that it's a fairly trivial bit of code, it seems likely to me that someone has already written a daemon for Linux that does this, but I haven't been able to find any evidence.
Note that OS X Server's heartbeatd and the High-Availability Linux heartbeat daemon are not what I'm looking for-- they both coordinate system availability over IP networks, or something like that.
Assuming what I'm looking for doesn't exist, I'm also interested in advice about how to write a daemon that toggles a pin while minimizing resource usage. At what update rate does cron become a stupid idea?
(I'd also rather not hear gushing about the LED on the sleeping MacBook Pro, if that seems relevant for some reason.)
Thanks.
The LED heartbeat is a built-in kernel function. Assuming you have a device driver for your LED, turning on the heartbeat is done thus:
$ echo "heartbeat" > /sys/class/leds/MyLed/trigger
To see the list of triggers (MMC activity, heartbeat, etc.)
$ cat /sys/class/leds/MyLed/trigger
See drivers/leds/ledtrig-heartbeat.c and http://www.avrfreaks.net/wiki/index.php/Documentation:Linux/LEDs
The interesting thing about the heartbeat is that the pattern is dynamic. The basic pattern is thump-thump-pause, just like a human heartbeat. But the rate of the heartbeat is controlled by the load average! Light loads beat at about 50 beats per minute. Heavier loads cause faster beating until it maxes out at about 180 bpm.
I wouldn't use cron; it's just not the right tool. A very simple solution is to just run a shell script from your inittab.
Example:
#!/bin/sh
while true; do
    logger "blink!"   # to be replaced
    sleep 1
done
Save this to /bin/blink.sh, add the following line to your inittab, and have init reread the tab by running init q.
bl:2345:respawn:/bin/blink.sh
Of course you have to adjust the blink.sh script to your environment. How an LED can be toggled from user space (device driver file, sysfs entry, ...) is highly dependent on the particular board.
If you need something more efficient you might redo the loop in C, but it might not be worth the effort.
One thing to think about is what you want to signal with a pulsing LED. With the approach outlined above we can only show that the board is still alive (kernel is running, the process executing blink.sh is scheduled and blink.sh is doing what it is supposed to do). For some use cases this might be fine but more often you actually want to signal that the application running on an embedded board is still OK (doesn't hang, hasn't crashed, ...). To implement such functionality you need to integrate the code that toggles the LED into the main loop of your application.
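If a shell loop feels too crude and rewriting it in C too heavy, a small Python daemon in between is another option. A minimal sketch, assuming the LED is exposed through sysfs as /sys/class/leds/MyLed/brightness (the path is board-specific); to get the "application is alive" semantics described above, you would instead call set_led() from your application's own main loop:

#!/usr/bin/env python
# minimal user-space heartbeat: toggle a sysfs LED once per second
import time

LED = "/sys/class/leds/MyLed/brightness"   # board-specific path; adjust to your LED

def set_led(on):
    with open(LED, "w") as f:
        f.write("1" if on else "0")

state = False
while True:
    state = not state
    set_led(state)
    time.sleep(1)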