Why does my ESP32 keep rebooting? - web server

This is my problem:
Brownout detector was triggered
ets Jun 8 2016 00:22:57
rst:0xc (SW_CPU_RESET),boot:0x13 (SPI_FAST_FLASH_BOOT)
configsip: 0, SPIWP:0xee
clk_drv:0x00,q_drv:0x00,d_drv:0x00,cs0_drv:0x00,hd_drv:0x00,wp_drv:0x00
mode:DIO, clock div:2
load:0x3fff0018,len:4
load:0x3fff001c,len:5008
ho 0 tail 12 room 4
load:0x40078000,len:10600
ho 0 tail 12 room 4
load:0x40080400,len:5684
entry 0x400806bc
I want to make a web server on my ESP32.

Related

How to preserve the order of items emitted by two observables after they are merged?

I have run into a behavior of Scala Observables that has surprised me. Consider my example below:
import rx.lang.scala.Observable
import scala.concurrent.duration._

object ObservablesDemo extends App {
  val oFast = Observable.interval(3.seconds).map(n => s"[FAST] ${n * 3}")
  val oSlow = Observable.interval(7.seconds).map(n => s"[SLOW] ${n * 7}")
  val oBoth = (oFast merge oSlow).take(8)

  oBoth.subscribe(println(_))
  oBoth.toBlocking.toIterable.last
}
The code demonstrates emitting elements from two observables. One of them emits its elements in a "slow" way (every 7 seconds), the other in a "fast" way (every 3 seconds). For the sake of the question, assume we want to define those observables with the map function, mapping the numbers from the interval as shown above (as opposed to the other possible approach of emitting items at the same rate from both observables and then filtering as needed).
The output of the code seems counterintuitive to me:
[FAST] 0
[FAST] 3
[SLOW] 0
[FAST] 6
[FAST] 9 <-- HERE
[SLOW] 7 <-- HERE
[FAST] 12
[FAST] 15
The problematic part is when the [FAST] observable emits 9 before the [SLOW] observable emits 7. I would expect 7 to be emitted before 9 as whatever is emitted on the seventh second should come before what is emitted on the ninth second.
How should I modify the code to achieve the intended behavior? I have looked into the RxScala documentation and have started my search with topics such as the different interval functions and the Scheduler classes but I'm not sure if it's the right place to search for the answer.
That is the way it should work. The mapped values are not timestamps: [SLOW] 7 is the slow observable's second emission, which happens at second 14, while [FAST] 9 is the fast observable's fourth emission, at second 12, so 9 is correctly emitted before 7. Here is a listing of the seconds and the events. You can verify it with TestObserver and TestScheduler if they are available in RxScala. RxScala reached end of life in 2019, so keep that in mind too.
Secs Event
-----------------
1
2
3 [Fast] 0
4
5
6 [Fast] 3
7 [Slow] 0
8
9 [Fast] 6
10
11
12 [Fast] 9
13
14 [Slow] 7
15 [Fast] 12
16
17
18 [Fast] 15
19
20
21 [Fast] 18

SocketCAN - device state "STOPPED"

I use a Raspberry Pi with the PiCAN board, which uses an MCP2515 CAN controller.
I use SocketCAN to read and write CAN messages via an application I wrote.
After running for a few weeks without a problem, the controller is now in the state "STOPPED".
What is the difference between the state STOPPED and BUS-OFF?
Does a device enter the BUS-OFF state if too many errors occur on the CAN bus, and the STOPPED state if you set the device down (ip link set canX down)?
Are there any other ways the device may enter the STOPPED state? I wasn't able to find a way my application might have set the device down.
ip -details -statistics link show can0
3: can0: <NOARP,ECHO> mtu 16 qdisc pfifo_fast state DOWN mode DEFAULT group default qlen 10
link/can promiscuity 0
can state STOPPED restart-ms 100
bitrate 250000 sample-point 0.875
tq 250 prop-seg 6 phase-seg1 7 phase-seg2 2 sjw 1
mcp251x: tseg1 3..16 tseg2 2..8 sjw 1..4 brp 1..64 brp-inc 1
clock 8000000
re-started bus-errors arbit-lost error-warn error-pass bus-off
0 0 0 146 139 0 numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535
RX: bytes packets errors dropped overrun mcast
787700920 151606570 24 0 24 0
TX: bytes packets errors dropped carrier collsns
6002905 5895301 0 0 0 0
You need to familiarize yourself with the ERROR ACTIVE, ERROR PASSIVE, and BUS OFF error states of CAN bus devices, and with when it is necessary to manually restart CAN communication.
All relevant info can be found at one of these links:
http://www.can-wiki.info/doku.php?id=can_faq:can_faq_erors
http://www.port.de/cgi-bin/CAN/CanFaqErrors
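If you want to check and handle this from code rather than from the shell, here is a minimal, untested sketch. It assumes the libsocketcan library is installed and the interface is called can0; bringing the interface up or restarting it needs root (CAP_NET_ADMIN).
#include <stdio.h>
#include <libsocketcan.h>      /* can_get_state(), can_do_start(), can_do_restart() */
#include <linux/can/netlink.h> /* enum can_state: CAN_STATE_STOPPED, CAN_STATE_BUS_OFF, ... */

int main(void)
{
    int state;

    /* Read the current controller state of can0 via netlink */
    if (can_get_state("can0", &state) < 0) {
        fprintf(stderr, "can_get_state failed\n");
        return 1;
    }

    if (state == CAN_STATE_STOPPED) {
        /* STOPPED: the interface was taken down administratively; bring it back up */
        if (can_do_start("can0") < 0)
            fprintf(stderr, "can_do_start failed\n");
    } else if (state == CAN_STATE_BUS_OFF) {
        /* BUS-OFF: entered after too many bus errors; it is left via a restart,
           done manually here or automatically via can_set_restart_ms("can0", 100),
           which matches the "restart-ms 100" already configured on this interface */
        if (can_do_restart("can0") < 0)
            fprintf(stderr, "can_do_restart failed\n");
    }
    return 0;
}
Compile with something like gcc canstate.c -lsocketcan (the file name is just an example) and run it as root.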

How can I transfer data to a HID device if it has no OUT endpoint?

I am trying to implement data exchange with a HID device. I managed to implement reading from the device using the libusb_interrupt_transfer function, but I do not know how to send a buffer to the HID device, because it has no OUT endpoint. How can I transfer data to it? The device descriptor looks like this:
Bus 001 Device 074: ID 16d0:8080 MCS
Couldn't open device, some information will be missing
Device Descriptor:
bLength 18
bDescriptorType 1
bcdUSB 1.10
bDeviceClass 0 (Defined at Interface level)
bDeviceSubClass 0
bDeviceProtocol 0
bMaxPacketSize0 8
idVendor 0x16d0 MCS
idProduct 0x8080
bcdDevice 2.03
iManufacturer 1
iProduct 2
iSerial 3
bNumConfigurations 1
Configuration Descriptor:
bLength 9
bDescriptorType 2
wTotalLength 34
bNumInterfaces 1
bConfigurationValue 1
iConfiguration 0
bmAttributes 0xc0
Self Powered
MaxPower 500mA
Interface Descriptor:
bLength 9
bDescriptorType 4
bInterfaceNumber 0
bAlternateSetting 0
bNumEndpoints 1
bInterfaceClass 3 Human Interface Device
bInterfaceSubClass 0 No Subclass
bInterfaceProtocol 0 None
iInterface 0
HID Device Descriptor:
bLength 9
bDescriptorType 33
bcdHID 1.11
bCountryCode 0 Not supported
bNumDescriptors 1
bDescriptorType 34 Report
wDescriptorLength 32
Report Descriptors:
** UNAVAILABLE **
Endpoint Descriptor:
bLength 7
bDescriptorType 5
bEndpointAddress 0x81 EP 1 IN
bmAttributes 3
Transfer Type Interrupt
Synch Type None
Usage Type Data
wMaxPacketSize 0x0008 1x 8 bytes
bInterval 5
If the device doesn't have an OUT endpoint, the only way to send data to it is with control transfers over the default control endpoint (EP0).
There are HID class-specific control requests described in the HID specification. However, the SET_* requests aren't mandatory, so your HID device may not support them.
There can also be vendor-specific control requests, but there is no way to guess them; they need to be documented by the device vendor.
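If the device does implement the HID SET_REPORT request, an output report can be pushed over EP0 with libusb_control_transfer. The sketch below is only an illustration: report ID 0, interface number 0, and the 8-byte report length are assumptions, and the real values come from the device's report descriptor; the transfer will fail (typically with a STALL) if the device does not support SET_REPORT.
#include <stdio.h>
#include <libusb-1.0/libusb.h>

/* HID class-specific request and report type codes from the HID 1.11 spec */
#define HID_SET_REPORT         0x09
#define HID_REPORT_TYPE_OUTPUT 0x02

static int send_output_report(libusb_device_handle *h, unsigned char *report, int len)
{
    return libusb_control_transfer(h,
            0x21,                              /* bmRequestType: host-to-device | class | interface */
            HID_SET_REPORT,                    /* bRequest */
            (HID_REPORT_TYPE_OUTPUT << 8) | 0, /* wValue: report type in high byte, report ID 0 in low byte */
            0,                                 /* wIndex: interface number 0, as in the descriptor above */
            report, len,
            1000 /* timeout in ms */);
}

int main(void)
{
    libusb_context *ctx = NULL;
    libusb_init(&ctx);

    /* 16d0:8080, as in the lsusb output above */
    libusb_device_handle *h = libusb_open_device_with_vid_pid(ctx, 0x16d0, 0x8080);
    if (!h) { fprintf(stderr, "device not found\n"); return 1; }

    libusb_detach_kernel_driver(h, 0); /* detach hid-generic if it has claimed interface 0 */
    libusb_claim_interface(h, 0);

    unsigned char report[8] = {0};     /* assumed 8-byte output report */
    int r = send_output_report(h, report, sizeof report);
    if (r < 0)
        fprintf(stderr, "SET_REPORT failed: %s\n", libusb_error_name(r));

    libusb_release_interface(h, 0);
    libusb_close(h);
    libusb_exit(ctx);
    return 0;
}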

Kafka latency optimization

My kafka version is 0.10.2.1.
My service has a really low QPS (1 msg/sec), and our RTT requirement is really strict (99.9% < 30 ms).
Currently I've encountered a problem: when Kafka has been running for a long time, 15 days or so, performance starts to go down.
On 2017-10-21 it looked like this:
Time                num of msgs    percentage
cost<=2ms 0 0.000%
2ms<cost<=5ms 12391 32.659%
5ms<cost<=8ms 25327 66.754%
8ms<cost<=10ms 186 0.490%
10ms<cost<=15ms 24 0.063%
15ms<cost<=20ms 2 0.005%
20ms<cost<=30ms 0 0.000%
30ms<cost<=50ms 4 0.011%
50ms<cost<=100ms 1 0.003%
100ms<cost<=200ms 0 0.000%
200ms<cost<=300ms 6 0.016%
300ms<cost<=500ms 0 0.000%
500ms<cost<=1s 0 0.000%
cost>1s 0 0.000%
But recently, it became:
cost<=2ms 0 0.000%
2ms<cost<=5ms 7592 29.202%
5ms<cost<=8ms 17470 67.197%
8ms<cost<=10ms 698 2.685%
10ms<cost<=15ms 143 0.550%
15ms<cost<=20ms 23 0.088%
20ms<cost<=30ms 19 0.073%
30ms<cost<=50ms 11 0.042%
50ms<cost<=100ms 5 0.019%
100ms<cost<=200ms 11 0.042%
200ms<cost<=300ms 26 0.100%
300ms<cost<=500ms 0 0.000%
500ms<cost<=1s 0 0.000%
cost>1s 0 0.000%
When I check the logs, I don't see a way to find out why a specific message had a high RTT. If there's any way to optimize (OS tuning, broker config), please enlighten me.
Without the request handling time breakdown it is hard to tell which part may be the culprit of your issue. More specifically, you'll need to hook up JMX and check the following request-level metrics:
TotalTimeMs
RequestQueueTimeMs
LocalTimeMs
RemoteTimeMs
ResponseQueueTimeMs
ResponseSendTimeMs
https://kafka.apache.org/documentation/#monitoring
Check their average / 99th percentile values over time and see which one is contributing to the performance degradation.
Consider upgrading to 0.11 (or 1.0.0), which includes performance improvements.
Optimisation article: https://www.confluent.io/blog/optimizing-apache-kafka-deployment/

How does 'event' work? [closed]

I'm studying SystemVerilog event data types, but I can't understand the simulation results.
How does event work in SystemVerilog?
UPDATE
1 module events();
2 // Declare a new event called ack
3 event ack;
4 // Declare done as alias to ack
5 event done = ack;
6 // Event variable with no synchronization object
7 event empty = null;
9 initial begin
10 #1 -> ack;
11 #1 -> empty;
12 #1 -> done;
13 #1 $finish;
14 end
15
16 always @(ack)
17 begin
18 $display("ack event emitted");
19 end
20
21 always @(done)
22 begin
23 $display("done event emitted");
24 end
25
26 /*
27 always @(empty)
28 begin
29 $display("empty event emitted");
30 end
31 */
32
33 endmodule
Why does it show the following?
ack event emitted
done event emitted
ack event emitted <== I don't understand why this happens
done event emitted
I think it should be like this:
ack event emitted
done event emitted
done event emitted
I think you may be confused about why the events are printed multiple times. Have a look at line 5:
event done = ack;
Now ack and done are synonymous: they refer to the same synchronization object, so triggering one also triggers the other. The initial block triggers twice (once via ack, once via done), each trigger wakes both always blocks, and that gives you the 4 printouts.