Need clarity on Sleep mode in MLX90614 IR sensor - stm32

I am working with the MLX90614 IR sensor. In the datasheet, they give some steps to put the sensor into sleep mode, but I am not able to understand them clearly. A detailed description of RAM and EEPROM access is given there; however, how to put the sensor into sleep mode is not very clear.
In the commands section, they give an opcode for entering sleep mode, but again there is not much information about how to use it.
I can successfully read the object's temperature from the sensor, but I have had no luck putting it into sleep mode.

As per page 22 of the datasheet, you need to send a write with 0xFF to the sensor.
The PEC is a CRC, and they have apparently already done the math for you.
So you need to send:
0xB4 0xFF 0xE8
(Double-check the I2C address and read/write bit; I'm never sure whether the given address is already shifted. Edit: 0xB4 is the shifted address with the read/write bit already set to 0 for write, so there is nothing else to do.)
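If you'd rather compute the PEC yourself (for other commands, say), it is the SMBus CRC-8 with polynomial 0x07, calculated over every byte on the wire including the address byte. A quick Python sketch to sanity-check the value above:

```python
def smbus_pec(data):
    """SMBus PEC: CRC-8 with polynomial 0x07, initial value 0x00."""
    crc = 0
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = ((crc << 1) ^ 0x07) & 0xFF if crc & 0x80 else (crc << 1) & 0xFF
    return crc

# Address byte (write) followed by the sleep-mode opcode, as sent on the bus:
print(hex(smbus_pec([0xB4, 0xFF])))  # -> 0xe8
```

The result matches the 0xE8 above, which is a good sign the address byte really is included in the PEC.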

Can I overlap registers in ModBus?

I want to use Modbus in my project.
I want to use it this way:
when I request (or transmit) data, I use the register number as a code, and this code is generated by a script as a CRC16 of the function name.
It may happen that the ranges RegNum+RequestSize overlap each other, so a request will not have the same meaning as in classic Modbus, where reading a register truly means reading that register.
Here is an illustration of what I mean:
Classic Modbus: CMD1 (blue) reads data from register 0x00 with size 1, and CMD2 (red) reads data from register 0x07 with size 2.
My variant: CMD1 "reads data from" register 0x00 with size 3, and CMD2 "reads data from" register 0x02 with size 3. In the device there is no overlap of memory blocks, but in the Modbus requests there is, and if some program builds something like a memory map, there will be a crossover.
Is this legal in modern SCADA systems in particular and modern Modbus in general?
P.S. By "modern" I mean modern implementations.
There is no Modbus Police Department as far as I know.
If you have control over the devices on both sides of the bus, and you know what you are doing, why wouldn't you be able to?
You seem to have a couple of strange ideas regarding Modbus:
There is no meaning attached to registers; they are just numbers. You can read them (or write them) any way you want, as long as you calculate the CRC of the transactions correctly.
Modbus is a standard, not a music style. There is no classic and/or modern Modbus. What you do have is devices that comply with the standard and others that are just inspired by it.
Obviously, if you are only reading, you will be fine no matter what you do. As soon as you start writing registers you should have a very clear understanding of what you are doing.
Maybe if you post code, somebody would be able to give opinions on whether it is legal, in the sense that you would be able to comply with the certification requirements.
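For reference, the CRC the standard mandates is CRC-16/MODBUS (reflected polynomial 0xA001, initial value 0xFFFF), computed over the whole frame and appended low byte first. A sketch in Python, using the classic "read holding registers" request as the example frame:

```python
def modbus_crc16(frame):
    """CRC-16/MODBUS: init 0xFFFF, reflected polynomial 0xA001."""
    crc = 0xFFFF
    for byte in frame:
        crc ^= byte
        for _ in range(8):
            crc = (crc >> 1) ^ 0xA001 if crc & 1 else crc >> 1
    return crc

# Slave 1, function 3 (read holding registers), start 0x0000, count 1:
request = [0x01, 0x03, 0x00, 0x00, 0x00, 0x01]
crc = modbus_crc16(request)
# The CRC is appended low byte first on the wire:
frame = request + [crc & 0xFF, crc >> 8]
print([hex(b) for b in frame])
```

This yields the well-known frame 01 03 00 00 00 01 84 0A, so however you choose your register numbers, the CRC machinery stays the same.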
From a more philosophical point of view, I can give you an example where I've seen what you describe: imagine a tool with two sensors on each side and four on its front, all of them giving integer values as their readings. On the left, we have values stored in registers 0x00 and 0x01; the right side goes to 0x06 and 0x07 and the four sensors on the front would be stored in registers 0x02 to 0x05. What would I do if I need readings from the front sensors twice as frequently as those coming from the sides? I can send a query to read registers 0x00 to 0x05 followed by another one to read 0x02 to 0x07.
As long as the refresh rate of all sensors and the timings where I need readings are correct for that particular process, my readings are overlapping registers 0x02 to 0x05 but I'm as legal as legal paper can be.
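The overlapping-read scheme from that example can be sketched in Python (the register map and dummy values here are hypothetical, just to show that the two requests overlap on the bus while the device's memory does not):

```python
# Hypothetical register map: left sensors at 0x00-0x01, front at 0x02-0x05,
# right at 0x06-0x07, each holding an integer reading.
registers = {addr: 100 + addr for addr in range(8)}  # dummy readings

def read_holding(start, count):
    """Simulate the payload of a Modbus 'read holding registers' response."""
    return [registers[a] for a in range(start, start + count)]

# Two queries whose address ranges overlap on registers 0x02-0x05:
left_and_front = read_holding(0x00, 6)   # registers 0x00..0x05
front_and_right = read_holding(0x02, 6)  # registers 0x02..0x07
# The front sensors are simply read twice per polling cycle.
```

Nothing in the protocol objects to this; the overlap only matters to software that assumes each request maps to a disjoint memory block.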

STM32F04xx UART transmit unreadable chars when HAL_Delay is set higher than 90 milliseconds

I'm working on transceiving data on an STM32F04xx. When I transmit data from the MCU at a lower rate, it looks as if the baud rate were wrong and I get a bunch of question marks. When I increase the transmission rate, I can read the data I'm sending. I used STM32CubeIDE to generate a simple UART example and only added
HAL_UART_Transmit(&huart2, "test\r\n", sizeof("test\r\n"), 1000);
HAL_Delay(500);
in the while loop.
On my NUCLEO-F042K6 evaluation board, I don't see any issues printing data on the tty port. But I have another device using the same STM32F042xx chip that only works when transmitting UART data at a higher rate, so when I change my delay to something like 80 milliseconds, I can read the data flow.
I've tried flashing the same binary that I flashed on my evaluation board onto the other MCU, but again the data is only readable at the higher transmission rate.
I'm flashing that MCU with the stm32flash tool, so I don't know if that makes a difference; on the eval board I'm using STM32CubeIDE to flash it.
I'm not sure what's going on here. I've tried different baud rates and different clock configurations, and that doesn't seem to help either.
What could possibly cause the data to be unreadable, as if the baud rate were wrong, when transmitting at a low rate?
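One way to frame the puzzle: the bit timing on the wire is set by the baud rate alone, not by how often HAL_UART_Transmit is called, so a longer HAL_Delay between messages should not change how each frame looks. A rough sanity check of the per-byte timing (assuming 8N1 framing, i.e. 10 bits per byte; the baud rate here is just an example):

```python
def byte_time_ms(baud, bits_per_frame=10):
    """Time to ship one byte over UART at the given baud rate (8N1 framing)."""
    return 1000.0 * bits_per_frame / baud

# At 9600 baud each byte takes ~1.04 ms, so "test\r\n" (6 chars) takes ~6.25 ms,
# far shorter than either a 500 ms or an 80 ms delay between messages.
print(round(byte_time_ms(9600), 2))
print(round(6 * byte_time_ms(9600), 2))
```

Since the delay dwarfs the transmission time in both cases, whatever garbles the bytes is happening inside each frame, not between them.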

ESP32 i2c GY-906 0xFF 1037.55 response, temperature sensor

I'm trying to run the code below on an ESP32 TTGO T-Display running MicroPython from loboris (an ESP32 build pre-loaded with display drivers for the TTGO Display). I have attached a GY-906 temperature sensor over I2C for testing. i2c.scan() finds it without issue at 0x5a [90] as it is supposed to, but when I request temperature data, the response is always 0xFF instead of proper temperature readings.
When I run the exact same code on a WeMos D1 (the only difference is the pin numbers), I get temperature data back. I am attaching logic analyzer screenshots of both, hoping someone can tell me what I need to do differently. Both are wired directly to 3.3 V, GND, and the two I2C pins.
Things I have tried: adding pull-up resistors to SDA and SCL (10k, 1k, 100); switching to different I2C pins. The result seems to be the same. What am I missing? Is there supposed to be a resistor somewhere I don't know about? Other hardware? The screenshots make me think the GY-906 is responding, just with the wrong value.
Main Code
import machine
import time
import temp_sensor

Pin = machine.Pin
I2C = machine.I2C
i2c = machine.I2C(0, scl=Pin(22), sda=Pin(21), freq=100000)
temp1 = temp_sensor.Temp.init(i2c)
print(temp1.read_object_temp())
time.sleep(1)
print(temp1.read_object_temp())
time.sleep(1)
print(temp1.read_object_temp())
time.sleep(1)
print(temp1.read_object_temp())
temp_sensor.py
import mlx90614  # From https://github.com/mcauser/micropython-mlx90614

class Temp():
    @staticmethod
    def init(i2c):
        try:
            sensor = mlx90614.MLX90614(i2c)
            print('temp found')
        except Exception:
            print("couldn't connect to an i2c temp sensor")
            sensor = False
        return sensor
(Logic analyzer screenshots: bad ESP32 TTGO T-Display; good ESP8266.)
For anyone receiving 1037.55 responses from your GY-906 or MLX90614 sensor: that translates to 0xFF, 0xFF, or all bits high (ones), from the sensor. This seems to happen when the sensor doesn't understand how to respond. (Thank you, jasonharper, for helping me understand this.)
Here's how the math works:
0xFF, 0xFF in decimal is 65535.
The sensor resolution is 1/50 of a degree, so you multiply 65535 by 0.02 to convert to Kelvin, giving 1310.7 K.
Kelvin to Celsius (subtract 273.15) gets you 1037.55 °C.
Celsius to Fahrenheit gets you 1899.59 °F.
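The conversion above, as a small sketch (0xFFFF is the all-ones pattern the failing sensor returns):

```python
def mlx_raw_to_celsius(raw):
    """MLX90614 RAM temperature register: 0.02 K per LSB."""
    kelvin = raw * 0.02
    return kelvin - 273.15

raw = 0xFFFF  # all-ones response from a confused sensor
c = mlx_raw_to_celsius(raw)
f = c * 9 / 5 + 32
print(round(c, 2), round(f, 2))  # -> 1037.55 1899.59
```

So any time you see 1037.55 °C (or 1899.59 °F), you can be confident the raw reading was 0xFFFF rather than a real temperature.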
Bottom line: your sensor is hiccuping because it doesn't like the stop condition between the write and the read, or you have a problem with your I2C bus; either the protocol is doing the request wrong or you have a cabling issue (length, wire gauge, connections, etc.).
If it's the protocol, as it was for me, see if anyone has updated the I2C system library recently and try a different version if possible.
I chased this issue for days. Luckily I had a number of different MicroPython-capable chips and was able to narrow it down to an old version of the machine.I2C library adding that stupid "stop" shown above.
I bought a $10 protocol analyzer on Amazon to make that capture, and I tried loading the code on each of these: WeMos D1, HitLego ESP32S, and TTGO T-Display. By trying the code on each, I was able to narrow it down to only the T-Display not working, which needed a custom old firmware version to get the ST7789 display working. The next step is to try to update and recompile the display library from loboris to work with the most recent MicroPython firmware. If I make it work, I will reply below.

Re: I2C, what does "clocked out" mean?

I'm not trained in EE.
I'm programming a master-receiver device which controls a MAX11644/MAX11645. The datasheet explains the read cycle, saying:
A read cycle must be initiated to obtain conversion results. Read cycles begin with the bus master issuing a START condition followed by seven address bits and a read bit (R/W = 1). If the address byte is successfully received, the MAX11644/MAX11645 (slave) issues an acknowledge. The master then reads from the slave. The result is transmitted in 2 bytes; first 4 bits of the first byte are high, then MSB through LSB are consecutively clocked out.
All of this I understand, except the very last part: "MSB through LSB are consecutively clocked out". Most significant bit? Isn't that the first bit? We already know the first bit in the first byte is high. And what does "clocked out" mean?
Most significant bit? Isn't this the first bit?
It may or may not be. There's no unambiguous definition of "first". RS232, for example, outputs the least significant bit first. If you mean the one that happens to be output first, then yes, that's what the next part is saying.
We already know the first bit in the first byte is high.
Right. But the device outputs it anyway.
And what does "clocked out" mean?
It means that they are produced as output on consecutive clock cycles. That is, each time the clock advances, the next bit (in the order defined there) is placed on the output pin.
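In other words, the bits of the two result bytes arrive one per clock cycle, MSB first, and the master reassembles the 12-bit conversion result afterwards. A sketch of that reassembly (b1 and b2 are the two bytes read back; the four leading high bits are as described in the datasheet quote, the example values are made up):

```python
def assemble_result(b1, b2):
    """Combine the two result bytes: drop the four leading 1s, keep 12 data bits."""
    return ((b1 & 0x0F) << 8) | b2

# Example: bytes 0xF5, 0xA3 on the wire -> 12-bit code 0x5A3
print(hex(assemble_result(0xF5, 0xA3)))  # -> 0x5a3
```

The masking is why the four leading high bits are harmless: the device outputs them anyway, and the master simply discards them.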

understanding double buffers

I am using the C8051F320 and basing my firmware on the HID example firmware (for example, BlinkyExample).
IN and OUT reports are each 64B long (a single 64B packet).
I enabled the ADC and set it for 10kSps. Every ADC interrupt, a sample is stored in an array. When enough samples are taken to fill a packet, an IN packet is sent.
Software sends a report telling the firmware how many reports to return.
1) The example firmware uses EP1, which has 128B. It splits the EP into IN and OUT, 64B each.
The firmware drops the first sample of each IN report at 10kSps. At 5kSps it runs fine.
2) I modified EP1 to be double buffered, but it is only 32B long now. Regardless, streaming 1000s of IN reports with 10kSps data works great (confirmed by FFT of the sampled sine wave in software).
3) I modified the firmware to use EP2, since that has 256B total, giving 64B if splitting and double buffering.
a) Again, at 10kSps, the first sample of each packet is dropped. Why? It runs fine at 5kSps.
Actually, I cannot seem to visualize how double buffering works. If the sample rate is faster than the HID transfer rate, the FIFOs will overflow regardless, right? How does double buffering help? But it seems that for double-buffering to be effective, the packet size must be cut in half.
b) While switching references from EP1 to EP2, I came across this code in F3xx_USB0_Standard_Requests.c: DATAPTR = (unsigned char*)&ONES_PACKET;. Setting a char* to the address of a char* does not seem correct to me, so I modified it to DATAPTR = (unsigned char*)ONES_PACKET;. Either way, there seems to be no difference. What do the zeros and ones arrays do?
HID example firmware
HID uses interrupt-type endpoints, which transfer data at most once per frame, i.e. once per 1 ms; depending on your HID descriptor, it can be much slower. This yields a net data rate of about 64000 bytes/sec at best.
Once you need to transfer more data than that, use bulk or isochronous endpoints.
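A quick back-of-the-envelope check of whether the ADC stream fits in that budget (assuming 16-bit samples, one 64-byte packet per 1 ms frame):

```python
def fits_in_hid(sample_rate_hz, bytes_per_sample=2,
                packet_bytes=64, frames_per_sec=1000):
    """Compare the ADC byte rate against the HID interrupt-endpoint budget."""
    adc_bytes = sample_rate_hz * bytes_per_sample
    hid_bytes = packet_bytes * frames_per_sec
    return adc_bytes, hid_bytes, adc_bytes <= hid_bytes

# 10 kSps of 16-bit samples is 20000 B/s against a 64000 B/s budget.
print(fits_in_hid(10_000))
```

So at 10 kSps the raw bandwidth is fine; the dropped first sample points at a buffering or timing issue in the firmware rather than the endpoint running out of throughput.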