I'm working on a bare-metal project with the Raspberry Pi 3. I am currently trying to get the UART channel to work.
The only references (https://youtu.be/36hk_Qov5Uo?list=PLVxiWMqQvhg9FCteL7I0aohj1_YiUx1x8&t=682) I can find say I need to set the GPIO pull up/pull down register (GPPUD) to 0, then "enable" the clock for the pins, and then set GPPUD to 0 again, with a 150-cycle wait between those steps.
I'd just like some more explanation on this.
Why do you need to set GPPUD before and after with time in between?
Why set it to 0? The datasheet for the BCM2837 shows that a 0 means pull up/down is disabled, a 1 means "pull down control", and a 2 means "pull up control". What does each of these do, and why set it to 0 both before and after?
How does all this terminology relate to (or differ from) the internal pull-up or pull-down resistors on the GPIO ports (https://grantwinney.com/using-pullup-and-pulldown-resistors-on-the-raspberry-pi/)? I.e., are these registers how I set a port to pull either up or down while it is floating? And if so, how does the clock fit in?
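For concreteness, the sequence those references describe looks roughly like this in code (just a sketch of my understanding; it assumes the BCM2837 GPIO base at 0x3F200000 and UART0's pins on GPIO 14/15, and delay() is only a crude cycle-burning loop):
#include <stdint.h>
#define GPIO_BASE   0x3F200000u            // BCM2837 GPIO base (peripheral base 0x3F000000)
#define GPPUD       ((volatile uint32_t *)(GPIO_BASE + 0x94))   // pull-up/down control
#define GPPUDCLK0   ((volatile uint32_t *)(GPIO_BASE + 0x98))   // pull-up/down clock, GPIO 0-31
static void delay(int32_t count) {         // crude busy-wait of roughly 'count' cycles
    while (count-- > 0) { __asm__ volatile("nop"); }
}
void uart_gpio_setup(void) {
    *GPPUD = 0;                            // 0 = "no pull" control value
    delay(150);                            // datasheet: wait 150 cycles for set-up
    *GPPUDCLK0 = (1u << 14) | (1u << 15);  // clock the value into GPIO 14/15 (TXD0/RXD0)
    delay(150);                            // wait another 150 cycles
    *GPPUD = 0;                            // remove the control signal
    *GPPUDCLK0 = 0;                        // remove the clock
}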
I'm trying to configure my SX1278 Ra-2 LoRa module via STM32 Nucleo board and ran into a problem.
I was initializing the LNA register (0x0C) by writing 0x23 -> 0010 (max gain) 0011 (boost on), which is supposed to give me max gain with the boost on. However, when I read that register back, I receive 0x03.
Is this normal?
While the LoRa SX1278 is in Sleep mode it will return 0x03, without showing the 3 MSBs. However, in Standby mode it reads 0x23 as it is supposed to.
Have you set AgcAutoOn to 0? Otherwise it will automatically set the LNAGain bits.
Source:
page 60:
When AgcAutoOn=0, the LNA gain is manually selected by choosing LnaGain bits in RegLna.
page 95:
Note: Reading this address always returns the current LNA gain (which may be different from what had been previously selected if AGC is enabled).
Page 96: set bit 3 to 0 in 0x0D to disable AgcAutoOn.
Page 95: for the boost-on / max-gain setting, you need to set bits 0-1 and 5-7. From the way you wrote it, I suspect you are only writing to the lower ones.
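A minimal sketch of that sequence, using the register/bit numbers cited above (the spi_read_reg/spi_write_reg helpers are hypothetical placeholders for whatever SPI access routines you already use):
#include <stdint.h>
extern uint8_t spi_read_reg(uint8_t addr);              // placeholder SPI helpers
extern void    spi_write_reg(uint8_t addr, uint8_t value);
void lna_setup(void) {
    // make sure the chip is NOT in Sleep mode before reading RegLna back,
    // otherwise the upper bits read as 0 (as observed above)
    uint8_t reg = spi_read_reg(0x0D);
    spi_write_reg(0x0D, reg & ~(1u << 3));              // clear AgcAutoOn (bit 3), per the reference above
    // RegLna (0x0C): LnaGain in bits 7-5 (001 = max gain), boost in bits 1-0 (11 = on)
    spi_write_reg(0x0C, 0x23);
    uint8_t lna = spi_read_reg(0x0C);                   // in Standby mode this should read back 0x23
    (void)lna;
}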
I want to understand the meaning of the following mode settings. There is an explanation in the library, but I don't understand it because the explanations are very short and not detailed enough. I searched the net and couldn't find any information about them.
CAN_InitStructure.CAN_TTCM = DISABLE;
CAN_InitStructure.CAN_ABOM = DISABLE;
CAN_InitStructure.CAN_AWUM = DISABLE;
CAN_InitStructure.CAN_NART = ENABLE;
CAN_InitStructure.CAN_RFLM = DISABLE;
CAN_InitStructure.CAN_TXFP = ENABLE;
These are the names of the bits located in the CAN master control register (CAN_MCR), so the proper source for their meaning is the reference manual. My answer below will be partly copy & paste from the reference manual, but I will try to explain these bits in more detail.
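As a quick reference, the same six settings expressed directly on CAN_MCR look roughly like this (a sketch using the CMSIS device-header bit names; it assumes bxCAN is already in initialization mode, which CAN_Init normally takes care of):
// equivalent raw-register view of the structure above (CMSIS bit names)
CAN1->MCR &= ~(CAN_MCR_TTCM | CAN_MCR_ABOM | CAN_MCR_AWUM | CAN_MCR_RFLM);  // the DISABLE fields
CAN1->MCR |=  (CAN_MCR_NART | CAN_MCR_TXFP);                                // the ENABLE fields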
TTCM (Time triggered communication mode): This bit activates the Time Triggered Communication (TTCAN) mode, which is an extension to the CAN standard. I don't know much about TTCAN, but as I understand, it assigns time windows to messages to satisfy some real-time requirements. So, normally this bit should remain 0.
ABOM (Automatic bus-off management): If the transmit error counter (TEC) exceeds 255, the CAN hardware switches to the bus-off state. To recover, it must wait for the recovery sequence: 128 occurrences of 11 consecutive recessive bits. Only after that may the CAN hardware return to the normal operating state. This bit controls the return behavior. If it's 1, the return to normal state is automatic. Otherwise, software must make the request, provided that the recovery sequence has been observed (see the sketch after these bit descriptions).
AWUM (Automatic wakeup mode): The CAN module can be in one of 3 modes: Initialization mode, normal mode or sleep (low power) mode. Sleep mode is requested by the software. However, you have 2 options to exit sleep mode. If this bit is 0, then you have to exit sleep mode manually. You may enable CAN wakeup interrupt to inform you about bus activity, then exit the sleep mode in ISR. But if this bit is 1, the hardware returns to normal mode automatically when it detects bus activity.
NART (No automatic retransmission): Normally, the CAN hardware retries transmission of a message if its previous attempts fail (because of lost arbitration, errors, etc.). But if you set this bit to 1, the transmitter does not retry. This is required when you use Time Triggered Communication (TTCAN); otherwise, you should keep this bit 0.
RFLM (Receive FIFO locked mode): Your receive FIFO is 3 mailboxes deep, meaning it can store at most 3 messages before it overruns. This bit controls what happens in case of an overrun. The default behavior is to keep the oldest 2 messages and the newest one: for example, if you receive 5 messages, the FIFO keeps messages 1, 2 & 5. However, if you set this bit to 1, the FIFO keeps messages 1, 2 & 3 and discards the new arrivals.
TXFP (Transmit FIFO priority): You have 3 transmit mailboxes. When you fill more than one, the hardware must decide which one to transmit first. Normally, one can assume that a message with a lower ID is more important and should be transmitted first. But if you want to transmit them in a first-come, first-served fashion for some reason, you need to set this bit to 1. Of course, this is just a local priority; on the physical bus, messages with lower IDs always have priority.
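To illustrate the ABOM case mentioned above: with ABOM disabled, a manual recovery after bus-off looks roughly like this (a hedged sketch using the CMSIS register names; timeouts and error handling omitted):
// somewhere in your error handling, with CAN_ABOM = DISABLE
if (CAN1->ESR & CAN_ESR_BOFF) {            // bus-off detected (TEC exceeded 255)
    CAN1->MCR |= CAN_MCR_INRQ;             // request initialization mode...
    while (!(CAN1->MSR & CAN_MSR_INAK)) {} // ...and wait until it is acknowledged
    CAN1->MCR &= ~CAN_MCR_INRQ;            // then leave initialization mode again;
    // the hardware returns to normal operation only after it has observed
    // the 128 x 11 recessive-bit recovery sequence on the bus
}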
I have a sensor with its interrupt output connected to an input pin on my Raspberry Pi. My goal is to trigger an event from the sensor interrupt. The datasheet for my sensor says that once an interrupt is triggered on the sensor, the interrupt status register will have the appropriate bit set to 1, and it stays that way until it is cleared; while the status register has a status bit of 1, the interrupt pad on the sensor is pulled down.
My problem is that I can see the status register correctly reflect an interrupt when I physically trigger the sensor. But when I read the pin from my Pi, I never see any change reflected. Here's the gist of my code:
import Sensor
import RPi.GPIO as GPIO
GPIO.setmode(GPIO.BOARD)
GPIO.setup(11, GPIO.IN, pull_up_down = GPIO.PUD_UP)
s = Sensor.start()
while True:
    print 'sensor int reg: ', s.readIntReg()  # I do not clear the interrupt
    print 'pin value: ', GPIO.input(11)
The first print will change according to my interaction with the sensor as expected. The second print shows the pin holds 1 or 0 depending on whether it is set to pull up or down, respectively.
It seems like the problem is that whenever the interrupt fires, the sensor is pulling the pin down while the Pi is pulling it up... How should I handle this?
The sensor is the VCNL4010 [https://www.adafruit.com/products/466]
I suppose you have the gpio driver installed and active on the Pi?
Then you'll probably never see the interrupt trigger from the Python level, since the kernel driver will already have serviced it (and reset the flag) in the background.
I added a 10k external pull-up resistor with 3.3V and that did the trick... not sure why the internal pull-up on the Pi didn't do the same, perhaps I configured it wrong.
UPDATE: That turned out not to be the issue at all. I was neglecting to explicitly set the sensor to free-run mode. Part of my code had the unintended side effect of setting that mode, so while tweaking things for testing it sometimes worked. The pull-up on the Pi works fine.
I have two XBee Series 1 modules. I have them set up as end devices working in API mode and talking to each other. The first XBee is attached to a Raspberry Pi, while the other is on my PC, where I watch the terminal tab of the XCTU program. The baud rate I use is 125000.
From the Raspberry Pi I try to send a JPEG image which is 30 KB. I send data frames 100 bytes long (the largest allowed, according to the XBee documentation). Inside a loop I create and send the packets; I also have a cout statement that prints the loop number. Everything is fine and all bytes are sent. When I comment out the cout statement, not all bytes are sent.
From what I understand, the cout statement acts as a delay between packets, but I still cannot understand why this is happening, since I am supposedly using only half of the available speed...
I hope I was clear and look forward to a reply.
UPDATE
Just to summarize: I changed the baud rate to 250000, where I see the same behavior as at 125000. I also implemented hardware flow control by checking the CTS signal. When the XBees are in transparent mode I need a delay of around 150 µs between sending characters; the same goes for API mode. The difference at 125000 baud in API mode was that the delay only needed to be between data packets, but at 250000 the delay is needed between every byte I send. If I do the above, everything goes well.
The next thing I did was to plug both XBees into my PC in transparent mode. I went to the terminal tab of the XCTU software, chose "assemble packet", and sent around 3000 bytes to the other XBee. The result was the same: the second XBee received about 1500 bytes, and then each time I sent one more byte from the first to the second, the "lost bytes" were received in packets of 1000. :/
So, does anyone know what I am doing wrong?
You should connect the /CTS pin from the XBee module into the Raspberry Pi, and have your routine stop sending data when the XBee de-asserts it.
At higher baud rates, it's possible to stream data into the XBee module faster than it can send to the remote module. The local XBee module uses the /CTS pin to notify the host when its buffers are almost full and the host should stop sending. People refer to this as hardware flow control.
It may be necessary to modify the serial driver on the Raspberry Pi to make use of that signal -- it should pause the transmit buffer when de-asserted, and automatically resume sending when re-asserted.
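If the UART's CTS line is actually routed and enabled on the Pi, one way to get this behavior without patching the driver is simply to turn on hardware flow control in termios. A rough sketch (the device path /dev/serial0 and the B115200 constant are placeholders; non-standard rates like 125000 need a custom divisor):
#include <fcntl.h>
#include <termios.h>
#include <unistd.h>
int open_xbee(const char *dev) {           // e.g. "/dev/serial0" on the Pi
    int fd = open(dev, O_RDWR | O_NOCTTY);
    if (fd < 0) return -1;
    struct termios tio;
    tcgetattr(fd, &tio);
    cfmakeraw(&tio);                       // raw 8-bit path for API frames
    cfsetispeed(&tio, B115200);            // placeholder rate; 125000 needs a custom divisor
    cfsetospeed(&tio, B115200);
    tio.c_cflag |= CRTSCTS;                // honor the XBee's /CTS line
    tcsetattr(fd, TCSANOW, &tio);
    return fd;
}
With that in place, write() simply stalls while the XBee de-asserts /CTS instead of silently overrunning its buffer.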
We have a virtual audio device driver similar to Soundflower. This virtual device is listed in the Sound pane of System Preferences. Whenever our device is selected in System Preferences, it prevents idle sleep. If we switch the selection back to the default output device, everything works as expected.
If we execute the 'pmset -g assertions' command in Terminal, it gives the output below:
Assertion status system-wide:
ChargeInhibit 0
PreventUserIdleDisplaySleep 0
PreventUserIdleSystemSleep 1
NoRealPowerSources_debug 0
CPUBoundAssertion 0
EnableIdleSleep 1
PreventSystemSleep 0
DisableInflow 0
DisableLowPowerBatteryWarnings 0
ExternalMedia 0
Listed by owning process:
pid 115: [0x0000012c00000073] PreventUserIdleSystemSleep named: "MY_DRIVER_IDENTIFER.noidlesleep"
Could anyone suggest some pointers to resolve this issue?
I think this is governed by the flag kIOPMPreventIdleSleep, which resides in the capabilityFlags field of the IOPMPowerState struct.
To participate in power management decisions, you'll need to add your device driver to the power management plane, typically in your overridden IOService::start(provider) method:
PMinit();                                               // initialize power management for this driver
provider->joinPMtree(this);                             // attach this driver into the power plane under its provider
registerPowerDriver(this, powerStates, numPowerStates); // publish the power states this driver supports
where powerStates and numPowerStates specify an array of power states you want your device to be able to be in. You probably won't want more than 2 for a virtual device, and maybe you only need one. I suspect a superclass of your class is setting states that are inhibiting sleep. Once you've registered for power management, your driver will be expected to handle power management methods such as IOService::setPowerState().
Depending on how you want your device to behave, you might want to create 2 power states: a "live" one used while playing back or capturing sound (and inhibiting sleep), and an "idle" one used when the device isn't doing anything (allowing sleep).
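As a rough sketch of that two-state idea (field values here are illustrative only, not a drop-in configuration; check the IOKit/pwr_mgt headers for the exact field semantics):
// illustrative only: an "idle" state and a "live" state
static IOPMPowerState powerStates[2] = {
    // version,               capabilityFlags,    outputPowerCharacter, inputPowerRequirement, remaining fields zeroed
    { kIOPMPowerStateVersion1, 0,                 0,                    0,            0, 0, 0, 0, 0, 0, 0, 0 },
    { kIOPMPowerStateVersion1, kIOPMDeviceUsable, kIOPMPowerOn,         kIOPMPowerOn, 0, 0, 0, 0, 0, 0, 0, 0 }
};
static const unsigned long numPowerStates = 2;
// only add kIOPMPreventIdleSleep to the "live" state's capabilityFlags (if at all);
// leaving it out of the idle state is what allows idle sleep to proceed.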
The power management topic is a bit too big to cover entirely in a StackOverflow answer, so I suggest you read the docs on the stuff I've mentioned above and try clearing the relevant flag in your power state(s).
Hope that helps.