I'm working on a project that integrates an STM32L051R8T6 chipset, and I need the RTC for things like slow timers and sleep wakeup. However, when I call Mbed's set_time() to set the RTC, the program hangs or doesn't behave as expected.
Before implementing anything I'm trying to run Mbed's RTC example code: https://os.mbed.com/docs/mbed-os/v5.8/reference/rtc.html , but I'm having no luck. The RTC seems to be set by set_time(), but every call to time(NULL) returns the initially set time. It looks like the RTC is not counting.
I'm compiling the code for the STM32L053R8 using Mbed's online compiler; I'm not sure whether that target differs enough from mine to cause the issue.
This is the code I'm trying to execute:
#include "mbed.h"
int main() {
set_time(1256729737); // Set RTC time to Wed, 28 Oct 2009 11:35:37
while (true) {
time_t seconds = time(NULL);
printf("Time as seconds since January 1, 1970 = %d\n", seconds);
printf("Time as a basic string = %s", ctime(&seconds));
char buffer[32];
strftime(buffer, 32, "%I:%M %p\n", localtime(&seconds));
printf("Time as a custom formatted string = %s", buffer);
wait(1);
}
}
When it doesn't hang, the terminal output shows that the RTC time doesn't change.
Including the full path for the rtc_api.h file and adding rtc_init() at the beginning of the code solved the issue. The rtc_init() function takes care of selecting the available clock source. The working code looks as follows:
#include "mbed.h"
#include "mbed/hal/rtc_api.h"
int main() {
rtc_init();
set_time(1256729737); // Set RTC time to Wed, 28 Oct 2009 11:35:37
while (true) {
time_t seconds = time(NULL);
printf("Time as seconds since January 1, 1970 = %d\n", seconds);
printf("Time as a basic string = %s", ctime(&seconds));
char buffer[32];
strftime(buffer, 32, "%I:%M %p\n", localtime(&seconds));
printf("Time as a custom formatted string = %s", buffer);
wait(1);
}
}
In a very simple C program that uses select() to check for new readable data on a socket, the optional timeout parameter is being overwritten by select(). It looks like select() resets it to the number of seconds and microseconds it actually waited, so when data arrives sooner than the timeout, the struct is left with much smaller values. When select() is called in a loop, this leads to smaller and smaller timeouts unless the timeout is reset.
I could not find any information on this behavior in the select() description. I am using Ubuntu 18.04 for testing. Do I have to reset the timeout value before every call to select() to keep the same timeout?
The code snippet is this:
void *main_udp_loop(void *arg)
{
    struct UDP_CTX *ctx = (UDP_CTX *)arg;
    fd_set readfds;
    struct sockaddr peer_addr = { 0 };
    int peer_addr_len = sizeof(peer_addr);

    while (1)
    {
        struct timeval timeout;
        timeout.tv_sec = 0;
        timeout.tv_usec = 850000; // wait 0.85 second

        FD_ZERO(&readfds);
        FD_SET(ctx->udp_socketfd, &readfds);

        int activity = select(ctx->udp_socketfd + 1, &readfds, NULL, NULL, &timeout);

        if ((activity < 0) && (errno != EINTR))
        {
            printf("Select error: Exiting main thread\n");
            return NULL;
        }

        if (timeout.tv_usec != 850000)
        {
            printf("Timeout changed: %ld %ld\n", (long)timeout.tv_sec, (long)timeout.tv_usec);
        }

        if (activity == 0)
        {
            printf("No activity from select: %ld\n", (long)time(0));
            continue;
        }

        ...
    }
}
This is documented behavior in the Linux select() man page:
On Linux, select() modifies timeout to reflect the amount of time not slept; most other implementations do not do this. (POSIX.1 permits either behavior.) This causes problems both when Linux code which reads timeout is ported to other operating systems, and when code is ported to Linux that reuses a struct timeval for multiple select()s in a loop without reinitializing it. Consider timeout to be undefined after select() returns.
So, yes, you have to reset the timeout value every time you call select().
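Here is a minimal, self-contained sketch that demonstrates the behavior (watching stdin instead of a UDP socket): press Enter immediately and the leftover value is close to the full 850000 microseconds; let it time out and the leftover is zero. The key point is that timeout is reinitialized at the top of every iteration:

#include <stdio.h>
#include <sys/select.h>
#include <unistd.h>

int main(void)
{
    for (int i = 0; i < 3; i++) {
        /* Reinitialize on EVERY iteration: portable code must treat
           the struct as undefined after select() returns. */
        struct timeval timeout;
        timeout.tv_sec = 0;
        timeout.tv_usec = 850000;

        fd_set readfds;
        FD_ZERO(&readfds);
        FD_SET(STDIN_FILENO, &readfds);

        int activity = select(STDIN_FILENO + 1, &readfds, NULL, NULL, &timeout);

        /* On Linux, 'timeout' now holds the time NOT slept. */
        printf("select() = %d, leftover = %ld.%06ld\n",
               activity, (long)timeout.tv_sec, (long)timeout.tv_usec);
    }
    return 0;
}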
I'm trying to use a callback function with the LED controller (LEDC) of the ESP32, but I'm unable to build the code. I'm not sure whether something is missing or the code has errors, as I have a limited understanding of pointers and of coding in general.
I'm using the Arduino framework; when I hover over the ledc_cb_register text, VSCode pops up the details/definition of this function, so I would expect that it does see the declaration.
Relevant ESP32 documentation: docs.espressif.com
I'm trying to copy the following example, but make it a bit simpler (using only one channel):
github
It seems this example can be compiled on my side too, but it uses the ESP-IDF framework.
I'm trying the following code (many lines are not shown here for simplicity):
static bool cb_ledc_fade_end_event(const ledc_cb_param_t *param, void *user_arg)
{
    portBASE_TYPE taskAwoken = pdFALSE;

    if (param->event == LEDC_FADE_END_EVT) {
        isFading = false;
    }

    return (taskAwoken == pdTRUE);
}
[...]
void setup() {
    ledc_timer_config_t ledc_timer = {
        .speed_mode = LEDC_HIGH_SPEED_MODE,    // timer mode
        .duty_resolution = LEDC_TIMER_13_BIT,  // resolution of PWM duty
        .timer_num = LEDC_TIMER_0,             // timer index
        .freq_hz = LED_frequency,              // frequency of PWM signal
        .clk_cfg = LEDC_AUTO_CLK,              // auto-select the source clock
    };
    ESP_ERROR_CHECK(ledc_timer_config(&ledc_timer));

    ledc_channel_config_t ledc_channel = {
        .gpio_num = LED_PIN,
        .speed_mode = LEDC_HIGH_SPEED_MODE,
        .channel = LEDC_CHANNEL_0,
        .timer_sel = LEDC_TIMER_0,
        .duty = 4000,
        .hpoint = 0,
        //.flags.output_invert = 0
    };
    ESP_ERROR_CHECK(ledc_channel_config(&ledc_channel));

    ledc_fade_func_install(0);
    ledc_cbs_t callbacks = {
        .fade_cb = cb_ledc_fade_end_event
    };
    ledc_cb_register(LEDC_HIGH_SPEED_MODE, LEDC_CHANNEL_0, &callbacks, 0);
and getting the following error message:
[..]/.platformio/packages/toolchain-xtensa-esp32@8.4.0+2021r2-patch3/bin/../lib/gcc/xtensa-esp32-elf/8.4.0/../../../../xtensa-esp32-elf/bin/ld.exe: .pio\build\esp32dev\src\main.cpp.o:(.literal._Z5setupv+0x78): undefined reference to 'ledc_cb_register(ledc_mode_t, ledc_channel_t, ledc_cbs_t*, void*)'
[..]/.platformio/packages/toolchain-xtensa-esp32@8.4.0+2021r2-patch3/bin/../lib/gcc/xtensa-esp32-elf/8.4.0/../../../../xtensa-esp32-elf/bin/ld.exe: .pio\build\esp32dev\src\main.cpp.o: in function 'setup()':
[..]\PlatformIO\Projects\asdf/src/main.cpp:272: undefined reference to 'ledc_cb_register(ledc_mode_t, ledc_channel_t, ledc_cbs_t*, void*)'
collect2.exe: error: ld returned 1 exit status
*** [.pio\build\esp32dev\firmware.elf] Error 1
According to the docs, this seems to be a feature that was added in ESP-IDF v4.4.4, but the latest Arduino core (2.0.6) is built on v4.4.3.
If you are not on the latest Arduino core, try updating that first and see if it works. If not, then you just have to wait until the Arduino core is updated to use ESP-IDF v4.4.4.
Of course, you can use ledc_isr_register(...) to register an ISR handler for the interrupt.
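A rough, untested sketch of that approach follows. Note this is an assumption of how you might wire it up, not a drop-in replacement for the fade callback: ledc_isr_register() attaches a raw handler to the shared LEDC interrupt, so the handler must read and clear the LEDC status registers itself, and checking the exact fade-end bit for your channel is left as a comment (see the TRM for the bit layout):

#include <stdbool.h>
#include <stdint.h>
#include "driver/ledc.h"
#include "esp_attr.h"
#include "esp_intr_alloc.h"
#include "soc/ledc_struct.h"   // raw register overlay (the LEDC global)

static volatile bool isFading = false;

// Raw handler: must acknowledge the interrupt source itself.
static void IRAM_ATTR on_ledc_interrupt(void *arg)
{
    uint32_t status = LEDC.int_st.val;  // which LEDC interrupts fired
    LEDC.int_clr.val = status;          // acknowledge them
    // Here you would check that the fade-end bit for high-speed
    // channel 0 is actually set in 'status' before concluding:
    isFading = false;
}

void install_ledc_isr(void)
{
    ledc_isr_handle_t isr_handle;
    ESP_ERROR_CHECK(ledc_isr_register(on_ledc_interrupt, NULL,
                                      ESP_INTR_FLAG_IRAM, &isr_handle));
}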
Best of luck!
Update:
I realized that the problem (at least on my side when testing it) was an error in the ledc.h file, where they forgot to include ledc_cb_register in the extern "C" block.
I manually patched it by moving the
#ifdef __cplusplus
}
#endif
part, which was located after the ledc_set_fade_step_and_start function, below ledc_cb_register instead.
So, the end of my ledc.h file looks like this now:
...
esp_err_t ledc_set_fade_step_and_start(ledc_mode_t speed_mode, ledc_channel_t channel, uint32_t target_duty, uint32_t scale, uint32_t cycle_num, ledc_fade_mode_t fade_mode);
/**
 * @brief LEDC callback registration function
 * ...
 */
esp_err_t ledc_cb_register(ledc_mode_t speed_mode, ledc_channel_t channel, ledc_cbs_t *cbs, void *user_arg);
#ifdef __cplusplus
}
#endif
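For anyone wondering why this shows up as a linker error rather than a compiler error: the undefined reference names the full C++ signature, ledc_cb_register(ledc_mode_t, ledc_channel_t, ledc_cbs_t*, void*), which means main.cpp referenced a C++-mangled symbol while the driver library exports a plain C one. The pattern the patch restores, sketched with placeholder names (mylib.h and my_c_function are hypothetical):

/* mylib.h -- a C header that must also be usable from C++ */
#ifdef __cplusplus
extern "C" {   /* give every declaration below C linkage in C++ */
#endif

int my_c_function(int x);  /* implemented in a C translation unit */

#ifdef __cplusplus
}
#endif

Any declaration left outside that block gets name-mangled when the header is included from a .cpp file, and the linker then can't match it against the C-compiled library, which is exactly what happened with ledc_cb_register here.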
I built a console C++ project in Xcode that worked perfectly before:
#include "SerialPort.hpp"
#include "TypeAbbreviations.hpp"
#include <iostream>
int main(int argc, const char * argv[]) {
//* Open port, and connect to a device
const char devicePathStr[] = "/dev/tty.usbserial-A104RXG4";
const int baudRate = 9600;
int sfd = openAndConfigureSerialPort(devicePathStr, baudRate);
if (sfd < 0) {
if (sfd == -1) {
printf("Unable to connect to serial port.\n");
}
else { //sfd == -2
printf("Error setting serial port attributes.\n");
}
return 0;
}
// * Read using readSerialData(char* bytes, size_t length)
// * Write using writeSerialData(const char* bytes, size_t length)
// * Remember to flush potentially buffered data when necessary
// * Close serial port when done
const char dataToWrite[]="abcd";
char databuffer[1024];
while(1){
readSerialData(databuffer, 4);
sleep(2);
writeSerialData(databuffer, 4);
sleep(2);
}
printf("end.\n");
return 0;
}
After this build, I tried to migrate it to my Xcode Cocoa application with the C++ wrappers below. I am pretty sure my wrapper works fine with test C++ code; that is, I can call C++ functions from my ViewController.swift. But one strange thing happens: I am not able to open the connection with the following code:
sfd = open(portPath, (O_RDWR | O_NOCTTY | O_NDELAY));
if (sfd == -1) {
    printf("Unable to open serial port: %s at baud rate: %d\n", portPath, baudRate);
    printf("%s", std::strerror(errno));
    return sfd;
}
The error message returned is:
Unable to open serial port: /dev/tty.usbserial-A104RXG4 at baud rate: 9600
Operation not permitted
I've tried changing the App Sandbox configuration, setting System Preferences to grant access to my app, and even disabling rootless (csrutil disable from recovery mode, Command+R).
But the problem still persists.
&
I want to ask:
1. Why does my code work fine in an Xcode C++ project but fail in a Swift Cocoa app?
2. How do I solve the "Operation not permitted" issue?
My Xcode version is 11.3.1 and my macOS is 10.14.6 Mojave.
I figured it out myself.
The App Sandbox was the problem.
All you need to do is turn off the sandbox: remove the App Sandbox capability by clicking the X at the spot marked by the mouse pointer in the screenshot below.
If you want to add it back, just click +Capability and put it back on.
https://i.stack.imgur.com/ZOc18.jpg
Reference: https://forums.developer.apple.com/thread/94177#285075
I created a simple GWT example using Eclipse. I only added one method to the auto-generated GreetingService:
Date greetServer2();
It's implemented like below:
public Date greetServer2() {
    String s = "2014/04/08";
    DateFormat inputFormatter = new SimpleDateFormat("yyyy/MM/dd");
    Date date = null;
    try {
        date = inputFormatter.parse(s);
    } catch (ParseException e) {
        // TODO Auto-generated catch block
        e.printStackTrace();
    }
    return date;
}
On the client side I just show the date in a popup:
greetingService.greetServer2(new AsyncCallback<Date>() {
    public void onFailure(Throwable caught) {
        // Show the RPC error message to the user
        ...
    }
    public void onSuccess(Date result) {
        Window.alert(result.toString());
    }
});
I run it via Eclipse; the URL generated by Eclipse is:
http://127.0.0.1:8888/HelloGWT.html?gwt.codesvr=127.0.0.1:9997
The popup window says "Tue Apr 08 00:00:00 CLST 2014"
But if I access it without the gwt.codesvr parameter:
http://127.0.0.1:8888/HelloGWT.html
The popup window says "Mon Apr 07 23:00:00 GMT-400 2014"
My GWT is 2.5.1, my JDK is 1.7.0_25.
Any clues?
Thanks in advance.
One result comes from Java code, and the other is produced by your browser. The difference is in the time zones. If you want consistent results, you should not use date.toString(); instead, display the date using a DateFormat and pass a time zone to it.
Remember that your users may be in different time zones, and they will all see a different "time" (and even a different date, like in your example) based on their browser settings, unless you specify a time zone in your code.
UPDATE:
There are different strategies for dealing with time zones. For example, you can save all dates as Long values (date.getTime()) for consistency. Then, you display it using a DateFormat and a time zone.
If you want to make sure that your date starts exactly at midnight in your selected time zone, make an adjustment before saving or using it. This is how I do it:
public static Long toMidnight(Long date, TimeZone timeZone) {
    return date - date % (24 * 60 * 60 * 1000)
           + timeZone.getOffset(new Date(date)) * 60 * 1000;
}
So I'm working with a device where I need to send and receive raw Ethernet frames. It's a wireless radio, and it uses Ethernet to send status messages to its host. The protocol it uses is actually IPX, but I figured it would be easier to send raw Ethernet frames using libpcap than to dig through decades-old code implementing IPX (which got replaced by TCP/IP, so it's quite old).
My program sends a request packet (this packet is exactly the same every time, it's stateless) and the device returns a response packet with the data I need. I'm using pcap_inject to send the frame and pcap_loop in another thread to do the receiving. I originally had it in one thread, but tried 2 threads to see if it fixed the issue I'm having.
The issue is that libpcap doesn't seem to be receiving the packets in real time. It seems to buffer about 5 of them and then process them all at once. I want to be able to read them as fast as they come. Is there some way to disable this buffering on libpcap, or increase the refresh rate?
Some example output (I just printed the time each packet was received). Notice how there is about a second between each group:
Time: 1365792602.805750
Time: 1365792602.805791
Time: 1365792602.805806
Time: 1365792602.805816
Time: 1365792602.805825
Time: 1365792602.805834
Time: 1365792603.806886
Time: 1365792603.806925
Time: 1365792603.806936
Time: 1365792603.806944
Time: 1365792603.806952
Time: 1365792604.808007
Time: 1365792604.808044
Time: 1365792604.808055
Time: 1365792604.808063
Time: 1365792604.808071
Time: 1365792605.809158
Time: 1365792605.809194
Time: 1365792605.809204
Time: 1365792605.809214
Time: 1365792605.809223
Here's the inject code:
char errbuf[PCAP_ERRBUF_SIZE];
char *dev = "en0";
if (dev == NULL) {
    fprintf(stderr, "Pcap error: %s\n", errbuf);
    return 2;
}
printf("Device: %s\n", dev);

pcap_t *handle;
handle = pcap_open_live(dev, BUFSIZ, 1, 1000, errbuf);
if (handle == NULL) {
    fprintf(stderr, "Device open error: %s\n", errbuf);
    return 2;
}

// Construct the packet that will get sent to the radio
struct ether_header header;
header.ether_type = htons(0x0170);
int i;
for (i = 0; i < 6; i++) {
    header.ether_dhost[i] = radio_ether_address[i];
    header.ether_shost[i] = my_ether_address[i];
}

unsigned char frame[sizeof(struct ether_header) + sizeof(radio_request_packet)];
memcpy(frame, &header, sizeof(struct ether_header));
memcpy(frame + sizeof(struct ether_header), radio_request_packet, sizeof(radio_request_packet));

if (pcap_inject(handle, frame, sizeof(frame)) == -1) {
    pcap_perror(handle, errbuf);
    fprintf(stderr, "Couldn't send frame: %s\n", errbuf);
    return 2;
}

bpf_u_int32 mask;
bpf_u_int32 net;
if (pcap_lookupnet(dev, &net, &mask, errbuf) == -1) {
    pcap_perror(handle, errbuf);
    fprintf(stderr, "Net mask error: %s\n", errbuf);
    return 2;
}

char *filter = "ether src 00:30:30:01:b1:35";
struct bpf_program fp;
if (pcap_compile(handle, &fp, filter, 0, net) == -1) {
    pcap_perror(handle, errbuf);
    fprintf(stderr, "Filter error: %s\n", errbuf);
    return 2;
}
if (pcap_setfilter(handle, &fp) == -1) {
    pcap_perror(handle, errbuf);
    fprintf(stderr, "Install filter error: %s\n", errbuf);
    return 2;
}

printf("Starting capture\n");
pthread_t recvThread;
pthread_create(&recvThread, NULL, (void *(*)(void *))thread_helper, handle);

while (1) {
    if (pcap_inject(handle, frame, sizeof(frame)) == -1) {
        pcap_perror(handle, errbuf);
        fprintf(stderr, "Couldn't inject frame: %s\n", errbuf);
        return 2;
    }
    usleep(200000);
}

pcap_close(handle);
return 0;
And the receiving code:
void got_packet(u_char *args, const struct pcap_pkthdr *header, const u_char *packet) {
    struct timeval tv;
    gettimeofday(&tv, NULL);
    double seconds = (double)tv.tv_sec + ((double)tv.tv_usec) / 1000000.0;
    printf("Time: %.6f\n", seconds);
}

void *thread_helper(pcap_t *handle) {
    pcap_loop(handle, -1, got_packet, NULL);
    return NULL;
}
Is there some way to disable this buffering on libpcap
There's currently no libpcap API to do that.
However, depending on what OS you're running, there may be ways to do it for that particular OS, i.e. you can do it, but in a non-portable fashion.
For systems that use BPF, including *BSD and OS X (which, given the "en0", I suspect you're using), the way to do it is something such as:
Creating a set_immediate_mode.h header file containing:
extern int set_immediate_mode(int fd);
Creating a set_immediate_mode.c source file containing:
#include <sys/types.h>
#include <sys/time.h>
#include <sys/ioctl.h>
#include <net/bpf.h>
#include "set_immediate_mode.h"
int
set_immediate_mode(int fd)
{
int on = 1;
return ioctl(fd, BIOCIMMEDIATE, &on);
}
Adding #include <string.h> and #include <errno.h> to your program (if it's not already including those files), adding #include "set_immediate_mode.h", and then adding the following code after the pcap_open_live() call succeeds:
int fd;

fd = pcap_fileno(handle);
if (fd == -1) {
    fprintf(stderr, "Can't get file descriptor for pcap_t (this should not happen)\n");
    return 2;
}
if (set_immediate_mode(fd) == -1) {
    fprintf(stderr, "BIOCIMMEDIATE failed: %s\n", strerror(errno));
    return 2;
}
That will completely disable the buffering that BPF normally does (that's the buffering you're seeing with libpcap; see the BPF(4) man page), so that packets are delivered as soon as they arrive. This changes the way buffering is done in ways that might cause BPF's internal buffers to fill up faster than they would otherwise, so it might cause packets to be lost when they wouldn't otherwise be lost. Using pcap_set_buffer_size(), as suggested by Kiran Bandla, could help if that happens (which it might not, especially given that you're using a filter to keep "uninteresting" packets out of BPF's buffer in the first place).
On Linux, this is currently not necessary: the buffering that is done doesn't impose a timeout on the delivery of packets. On Solaris 11 it would be done similarly (as libpcap uses BPF), but differently on earlier versions of Solaris (which didn't have BPF, so libpcap uses DLPI). On Windows with WinPcap, pcap_open() has a flag for that.
A future version of libpcap will probably have an API for this; I can't promise when that will happen.
You can set the capture buffer size by using pcap_set_buffer_size. Make sure you do this before you activate your capture handle.
Lowering the buffer size is not always a good idea: watch out for your CPU load and for dropped packets at high capture rates. A minimal sketch of the call order is shown below.
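For illustration, a sketch assuming a libpcap new enough to provide pcap_create() (the device name and sizes are just examples; newer libpcap, 1.5 and later, also offers pcap_set_immediate_mode() as a portable alternative to the BIOCIMMEDIATE ioctl described above):

#include <pcap/pcap.h>
#include <stdio.h>

pcap_t *open_capture(const char *dev, int buffer_bytes)
{
    char errbuf[PCAP_ERRBUF_SIZE];
    pcap_t *h = pcap_create(dev, errbuf);
    if (h == NULL) {
        fprintf(stderr, "pcap_create: %s\n", errbuf);
        return NULL;
    }
    pcap_set_snaplen(h, 65535);
    pcap_set_promisc(h, 1);
    pcap_set_timeout(h, 1000);              /* read timeout, in milliseconds */
    pcap_set_buffer_size(h, buffer_bytes);  /* must be set before activation */
    if (pcap_activate(h) < 0) {             /* options are locked in here */
        pcap_perror(h, "pcap_activate");
        pcap_close(h);
        return NULL;
    }
    return h;
}

Called as, for example, open_capture("en0", 2 * 1024 * 1024) for a 2 MB kernel buffer.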