I am using a Raspberry Pi 2 board running Raspbian, and I need to do SPI by bit banging to interface an MCP3208.
I have taken code from GitHub that was written for the MCP3008 (10-bit ADC).
The only change I made to the code is that instead of calling:
adcValue = recvBits(12, clkPin, misoPin)
I called adcValue = recvBits(14, clkPin, misoPin), since I have to receive 14 bits of data.
Problem: it keeps returning random data ranging from 0 to 10700, even though the maximum value should be 4095. That means I am not reading the data correctly.
I think the problem is that the MCP3208 has a maximum clock frequency of 2 MHz, but the code has no delay between two consecutive reads or writes. I think I need to add a delay of about 0.5 µs on every clock transition, since I am running the clock at 1 MHz.
For generating such a small delay, I am currently reading Accurate Delays on the Raspberry Pi.
Excerpt:
...when we need accurate short delays in the order of microseconds, it’s
not always the best way, so to combat this, after studying the BCM2835
ARM Peripherals manual and chatting to others, I’ve come up with a
hybrid solution for wiringPi. What I do now is for delays of under
100μS I use the hardware timer (which appears to be otherwise unused),
and poll it in a busy-loop, but for delays of 100μS or more, then I
resort to the standard nanosleep(2) call.
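For clarity, here is my understanding of the complete transaction as a small C sketch using wiringPi (the library whose delay code is excerpted above). The pin numbers are placeholders for my wiring; per the MCP3208 datasheet, the 14 clocks received after the 5 command bits are a sampling interval, a null bit, and then the 12 data bits, so the result is masked to 12 bits:

/* Bit-banged MCP3208 read sketch using wiringPi.
   Pin numbers are placeholders; adjust to your wiring. */
#include <stdio.h>
#include <wiringPi.h>

#define CS_PIN   0
#define CLK_PIN  1
#define MOSI_PIN 2
#define MISO_PIN 3

static int mcp3208_read(int channel)            /* channel 0..7 */
{
    int cmd = 0x18 | channel;                   /* start bit, single-ended, D2..D0 */
    int value = 0;

    digitalWrite(CS_PIN, LOW);                  /* select the ADC */

    for (int i = 4; i >= 0; i--) {              /* shift out 5 command bits, MSB first */
        digitalWrite(MOSI_PIN, (cmd >> i) & 1);
        digitalWrite(CLK_PIN, HIGH);
        delayMicroseconds(1);                   /* ~500 kHz clock, well within the 2 MHz spec */
        digitalWrite(CLK_PIN, LOW);
        delayMicroseconds(1);
    }

    for (int i = 0; i < 14; i++) {              /* sample interval + null bit + 12 data bits */
        digitalWrite(CLK_PIN, HIGH);
        delayMicroseconds(1);
        digitalWrite(CLK_PIN, LOW);
        delayMicroseconds(1);
        value = (value << 1) | digitalRead(MISO_PIN);
    }

    digitalWrite(CS_PIN, HIGH);                 /* release the ADC */
    return value & 0xFFF;                       /* keep the 12 data bits: 0..4095 */
}

int main(void)
{
    wiringPiSetup();
    pinMode(CS_PIN, OUTPUT);
    pinMode(CLK_PIN, OUTPUT);
    pinMode(MOSI_PIN, OUTPUT);
    pinMode(MISO_PIN, INPUT);
    digitalWrite(CS_PIN, HIGH);
    digitalWrite(CLK_PIN, LOW);

    printf("channel 0: %d\n", mcp3208_read(0));
    return 0;
}

If the values still come back shifted, the edge on which MISO is sampled has to be checked against the datasheet timing diagram; bit-banged implementations differ on whether they read before or after the falling clock edge.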
I finally found some Python code that simplifies reading from the 3208, thanks to RaresPlescan:
https://github.com/RaresPlescan/daisypi/blob/master/sense/mcp3208/adc_3.py
I had a data logger built on the Pi that was using a 3008. The COTS data logger I was trying to replicate had better resolution, so I started looking for a 12-bit ADC and found the 3208. I literally swapped the 3008 out for the 3208, and with this guy's code I achieved better resolution than the COTS data logger.
I'm currently starting to use ThreadX on an STM32 Nucleo-H723ZG (STM32H723ZG MCU).
I noticed that when loading the Nx_TCP_Echo_Server / Nx_TCP_Echo_Client projects from CubeMX, the RAM gets filled pretty much to the top, which makes me wonder how I'm supposed to add my own code and data here.
Since I'm pretty new to RAM partitioning, RTOSes and the like, I don't have a good feeling yet for what is wrong or right and how to proceed (or whether this is a problem at all).
Nevertheless, I wonder whether the RAM could be freed up by partitioning it differently or by dropping some unnecessary parts of the code.
Or, a different way of thinking:
Since RAM_D1 got filled, but _D2, _D3 and DTCMRAM are pretty much empty, is there a way to use the free RAM for my own purposes? (I would like to run SPI and ADC processing via DMA, so that data needs a place to go...)
Hope my questions are not too confusing ;)
The system has the following amount of RAM, according to STM:
"SRAM: total 564 Kbytes all with ECC, including 128 Kbytes of data TCM RAM for critical real-time data + 432 Kbytes of system RAM (up to 256 Kbytes can remap on instruction TCM RAM for critical real time instructions) + 4 Kbytes of backup SRAM (available in the lowest-power modes)" (see STMs STM32H723ZG MCU product page)
Down below you'll find screenshots of the current RAM usage; in RAM_D1 especially, .tcp_sec eats up most of the RAM.
--> Can .tcp_sec be optimized or kicked out?
If tcp here refers to the TCP protocol, maybe that is a way to optimize: I'm not sure whether I need a handshake etc., so maybe UDP is sufficient (and faster for streaming the ADC data)... what do you say?
Edit:
The linker file shows that .tcp_sec is marked (NOLOAD)... is NOLOAD maybe a hint at a "placebo" RAM occupation (pre-allocation/reservation, but no actual usage)?
Linker-script extract:
/* User_heap_stack section, used to check that there is enough RAM left */
._user_heap_stack :
{
  . = ALIGN(8);
  PROVIDE ( end = . );
  PROVIDE ( _end = . );
  . = . + _Min_Heap_Size;
  . = . + _Min_Stack_Size;
  . = ALIGN(8);
} >RAM_D1

.tcp_sec (NOLOAD) : {
  . = ABSOLUTE(0x24048000);
  *(.RxDecripSection)

  . = ABSOLUTE(0x24048060);
  *(.TxDecripSection)
} >RAM_D1 AT> FLASH
For context:
I am developing a "system controller"; my plan is to have it run an RTOS which manages the read-in of analog values, writes control messages via SPI to two other STMs of the same kind, and communicates via Ethernet with my desktop application.
The desktop application is then in charge of post-processing the digitized analog values and sending control messages to the system controller. In the best case, the system controller digitizes the analog signal on ADC3 at 5 MSPS (at probably 6-bit resolution = 30 Mbit/s) and sends that data hiccup-free to my desktop application.
-> Is this plan possible on this MCU?
I tried to buy a version of the Nucleo with more RAM, but due to shortages this one is the best I was able to get.
For the RTOS I'd like to stick with ThreadX, since FreeRTOS support in STM32CubeIDE seems to be phased out now that ST has adopted ThreadX as its RTOS.
(I like the easy register configuration using CubeMX/STM32CubeIDE, hence my drive to use that SW universe... if there are good reasons to use a different RTOS, tell me :) )
Thank you for your time!
I generated the same project on my side and took a look. I believe you should be able to implement what you want on this MCU, but you will need to use the available memory carefully.
It seems there is some confusion about the .tcp_sec section. It contains the DMA reception and transmission descriptors for the Ethernet controller/driver. These are constrained by the driver and hardware to sit at specific addresses. The descriptors themselves are rather small, but the associated buffers are bigger; with some work these can be reduced. If you are using Ethernet you will need this section no matter whether you use TCP or not. As I said, the name can be confusing. And yes, (NOLOAD) means what you guessed: the section reserves address space, but no data for it is stored in flash or loaded at startup.
The flash still has plenty of space available; in the debug configuration only about 11% is used. The rest is available for your application code.
You can locate your data in other memory regions. How you tell the compiler/linker where your data goes depends on the toolchain you use. Look toward the top of the main.c file in that example to see how the DMA descriptors are assigned to a specific section for three different toolchains (IAR, ARM MDK, GCC).
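For the GCC case, the pattern boils down to a section attribute on the C side plus a matching output section in the .ld file. The sketch below is illustrative only: the .dma_buffers section name, the buffer name and size, and the choice of RAM_D2 are my assumptions, not part of the ST example.

/* C side (GCC): pin a DMA buffer into a named section. */
#include <stdint.h>

__attribute__((section(".dma_buffers"), aligned(32)))
static uint16_t adc_dma_buf[8192];   /* 16 KiB, 32-byte aligned for cache maintenance */

The matching linker-script entry (added to the .ld file by hand):

.dma_buffers (NOLOAD) :
{
  *(.dma_buffers)
} >RAM_D2

Remember that the Cortex-M7 data cache is not coherent with DMA, so buffers like this need cache clean/invalidate calls around transfers, or an MPU region configured non-cacheable.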
In terms of how to most efficiently use and configure the microcontroller peripherals, please get in touch with STMicro; they will know best.
This should get you started. Let us know if this helps!
The STM32 family has fantastic interrupt service: the hardware stacks a whole slew of registers for you and loads the LR with an artificial return value so it can properly unstack while looking for opportunities for tail chaining, aborted entry, etc.
HOWEVER... it is too damn slow. I am finding (STM32F730Z8, 200 MHz clock, all code including handlers in ITCM, everything in GNU assembly) that it takes about 120-150 ns of overhead to get into an interrupt.
I am still learning about these parts; I'm used to the old ARM7, where you had to do it all yourself, but on those chips a minimal handler didn't need to stack much.
So: can I "subvert" the hardware context switching and have it simply jump to the handler at elevated priority, pausing only to fill the pipeline and leaving me to stack what is needed myself? I don't think so, and I haven't seen a way to do it, but I'm working on extremely tight, time-sensitive real-time code, and interrupt entry is eating my whole time budget. I'm reverting to doing everything in polled code, but I hate the jitter that gives me in response to pin edges. Help?
No, this is done in pure hardware and is the main defining feature of all "Cortex-M" processors, not just STM32.
150 ns at 200 MHz is 30 cycles. You can probably get it quite a bit faster.
One way is to mark the floating-point unit as unused each time you finish with it, and to set a core flag telling the hardware not to save the floating-point registers. See ARM application note 298 for details.
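For reference, with CMSIS that flag change is a couple of lines. This is a sketch: it is only safe if no interrupt handler touches the FPU, or if you save FP state manually.

/* Disable automatic and lazy FP state preservation (see ARM AN298).
   After this, exception entry never stacks S0-S15/FPSCR. */
#include "stm32f7xx.h"   /* CMSIS core header for the FPU-> symbols */

static void disable_fp_context_save(void)
{
    FPU->FPCCR &= ~(FPU_FPCCR_ASPEN_Msk | FPU_FPCCR_LSPEN_Msk);
    __DSB();
    __ISB();
}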
Another method you might try is to move your vector table and interrupt handler code to internal SRAM. The STM32 has a flash memory accelerator which avoids most wait states on internal flash by prefetching sequential instructions, but an asynchronous interrupt will probably not benefit from this.
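Moving the table is likewise a few lines with CMSIS. In this sketch the table size and the 512-byte alignment are assumptions; check the exception count in your device header.

#include <string.h>
#include "stm32f7xx.h"

#define VECTOR_COUNT 128u   /* assumption: core + device exceptions */

/* VTOR requires alignment to the table size rounded up to a power of two. */
static uint32_t ram_vectors[VECTOR_COUNT] __attribute__((aligned(512)));

static void relocate_vector_table(void)
{
    memcpy(ram_vectors, (const void *)(uintptr_t)SCB->VTOR, sizeof ram_vectors);
    __DMB();                              /* table fully written before repointing */
    SCB->VTOR = (uint32_t)ram_vectors;
    __DSB();
    __ISB();
}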
"...it takes about 120-150 ns overhead to get into an interrupt."
That is not accurate. Interrupt entry takes 12 clock cycles, which is 60 ns at 200 MHz.
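One way to settle the discrepancy between the measured 120-150 ns and the architectural 12 cycles is to measure with the DWT cycle counter. A sketch with CMSIS names, using a software-pended IRQ (EXTI0 here is just an example vector):

#include "stm32f7xx.h"

static volatile uint32_t t_trigger, t_enter;

static void enable_cycle_counter(void)
{
    CoreDebug->DEMCR |= CoreDebug_DEMCR_TRCENA_Msk;   /* enable the DWT block */
    DWT->CYCCNT = 0u;
    DWT->CTRL  |= DWT_CTRL_CYCCNTENA_Msk;             /* start the cycle counter */
    NVIC_EnableIRQ(EXTI0_IRQn);
}

void EXTI0_IRQHandler(void)
{
    t_enter = DWT->CYCCNT;                /* timestamp as the first thing in the ISR */
}

static uint32_t measure_entry_cycles(void)
{
    t_trigger = DWT->CYCCNT;
    NVIC_SetPendingIRQ(EXTI0_IRQn);       /* software-trigger the IRQ */
    __DSB();
    __ISB();                              /* let the exception be taken */
    return t_enter - t_trigger;           /* includes the trigger and timestamp overhead */
}

The result includes the cost of the trigger write and the timestamp instruction itself, so treat it as an upper bound rather than the pure entry latency.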
Testing out an STM32 with an FDCAN module (updated from the older bxCAN peripheral), running CAN Classic at 500 kbit/s.
I am running into an issue where the default pair of pins (D0/D1 in my case) gives the expected behavior, but when switching to the secondary option (B8/B9) using GPIO remapping, I get strange output on the bus.
I tried different baud settings and options like protocol exception handling. Nothing explains where this scope output is coming from.
I'm using the HAL to get this working, so I was certain I wasn't missing any registers for the remapping. I've DeInit-ed and ReInit-ed the FDCAN module, started/stopped it, etc. There seems to be no documented "process" for remapping pins; the entire FDCAN section of the reference manual doesn't contain the letters GPIO.
Picture: yellow is the CAN TX 0-3 V signal (low is dominant). Purple is the CAN+ signal, which idles at 2.5 V and pulls past ~3.5 V on a dominant. There is nothing else on this line, so I'm not concerned about the sawtoothing. The large initial CAN "SOF" pulse has the wrong timing, the long recessive stretches are nonsense, and only the short value-1 bits have the correct 2 µs width for 500 kbit/s. Changing the data put into the FDCAN FIFO makes no difference; the output is always the same.
Solved.
After sending a message, the INIT bits in the FDCAN->CCCR register were set and there were values in the error counters, which indicates an internal error. I was using the HAL as a time saver, but it was overwriting my desired GPIO settings.
I would set pins B8/B9 to AF mode for FDCAN and then call FDCAN DeInit/Init, which via the MspInit callback also calls GPIO Init, but for the original D0/D1 pins. That means the B8/B9 pair I set and the D0/D1 pair were enabled at the same time (a sketch of the fix is below).
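For illustration, the corrected pin setup boils down to something like this sketch. The family header and the AF9 mapping are assumptions from the parts I checked; verify the RX/TX roles and the AF number against your device's datasheet.

#include "stm32h7xx_hal.h"   /* adjust to your STM32 family */

/* Route FDCAN1 to PB8/PB9 and release the default PD0/PD1 pair,
   so only one pin pair is connected to the peripheral. */
void fdcan1_pins_to_pb8_pb9(void)
{
    GPIO_InitTypeDef gpio = {0};

    __HAL_RCC_GPIOB_CLK_ENABLE();

    gpio.Pin       = GPIO_PIN_8 | GPIO_PIN_9;   /* PB8 = FDCAN1_RX, PB9 = FDCAN1_TX */
    gpio.Mode      = GPIO_MODE_AF_PP;
    gpio.Pull      = GPIO_NOPULL;
    gpio.Speed     = GPIO_SPEED_FREQ_VERY_HIGH;
    gpio.Alternate = GPIO_AF9_FDCAN1;
    HAL_GPIO_Init(GPIOB, &gpio);

    HAL_GPIO_DeInit(GPIOD, GPIO_PIN_0 | GPIO_PIN_1);
}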
This is an obvious problem. The HAL is fine for prototyping, but be careful, because it will try to "help". Undefined behavior at best, and I normally wouldn't even post such a dumb mistake.
However... maybe someone else will find it interesting that whatever the FDCAN state machine is doing here produces the unique output seen in the scope picture. I initially didn't double-check my pin setup because it looked right: I was getting output on the scope, just the wrong output. So I spent much more time going over peripheral settings and the data I was feeding in.
How can I load an OS image from a floppy disk into memory without BIOS services while booting my PC?
The only way I've used is calling int 13h in real mode.
I've learned that I need to deal with the disk controller directly.
Do I need to write a kind of device driver in 16-bit ([BITS 16]) real mode, and is that possible?
As 0andriy has commented, you will have to communicate with the floppy controller directly, bypassing the BIOS. (Which BTW, why do you want to do such a thing? The BIOS was made specifically so you don't have to do this. Is it solely because you want to, maybe to learn how to program the FDC? I'm okay with that.)
The FDC (Floppy Disk Controller) is of the ISA (Industry Standard Architecture) era, back when I/O ports were hard-coded to specific addresses. The FDC came in many variants, but most followed a standard rule. The original NEC 765 was a common FDC, with later (still really old by today's standards) controllers following the 82077AA variant.
These controllers had twelve (12) registers using eight (8) I/O byte addresses, Base + 00h to Base + 07h. (Please note that a single I/O address can serve two registers if one is read-only and the other write-only.) You read and write these registers to instruct the FDC to do things, such as starting the motor for drive 1. (For fun: did you know that the FDC was originally capable of handling four drives?)
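To make the register layout concrete, here are the commonly used ports of an 82077AA-style FDC at the legacy base 0x3F0 (a sketch, not an exhaustive map). Note how 0x3F4 and 0x3F7 are each two registers, one on the read side and one on the write side, exactly as described above:

/* Commonly used 82077AA-style FDC registers at legacy base 0x3F0. */
enum fdc_ports {
    FDC_DOR  = 0x3F2,   /* Digital Output Register: reset, drive select, motors */
    FDC_MSR  = 0x3F4,   /* Main Status Register (read)            */
    FDC_DSR  = 0x3F4,   /* Data-rate Select Register (write)      */
    FDC_FIFO = 0x3F5,   /* command/data FIFO                      */
    FDC_DIR  = 0x3F7,   /* Digital Input Register (read)          */
    FDC_CCR  = 0x3F7,   /* Configuration Control Register (write) */
};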
This isn't too difficult to do, but now you need some way for the FDC on the ISA bus to move data to and from main memory. In comes DMA (Direct Memory Access). Now you also have to program the DMA controller to make the transfers.
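As a sketch of what programming the DMA involves: a floppy read uses channel 2 of the legacy 8237 controller, set up through hard-coded I/O ports. In a real boot sector this would be 16-bit assembly; C is used here only to show the port sequence. The buffer must sit below 16 MiB and must not cross a 64 KiB boundary.

#include <stdint.h>

static inline void outb(uint16_t port, uint8_t val)
{
    __asm__ volatile ("outb %0, %1" : : "a"(val), "Nd"(port));
}

/* Program legacy DMA channel 2 for a floppy read (device to memory,
   single mode). phys is the physical buffer address, len the byte count. */
static void dma2_setup_floppy_read(uint32_t phys, uint16_t len)
{
    outb(0x0A, 0x06);                               /* mask channel 2              */
    outb(0x0C, 0xFF);                               /* reset the address flip-flop */
    outb(0x04, phys & 0xFF);                        /* address bits 7..0           */
    outb(0x04, (phys >> 8) & 0xFF);                 /* address bits 15..8          */
    outb(0x81, (phys >> 16) & 0xFF);                /* page register: bits 23..16  */
    outb(0x0C, 0xFF);                               /* reset the flip-flop again   */
    outb(0x05, (uint8_t)((len - 1) & 0xFF));        /* count low (count is len-1)  */
    outb(0x05, (uint8_t)(((len - 1) >> 8) & 0xFF)); /* count high                  */
    outb(0x0B, 0x46);                               /* single mode, write to memory, ch 2 */
    outb(0x0A, 0x02);                               /* unmask channel 2            */
}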
Here is the catch: if you don't have all of the FDC and DMA code within the first 512 bytes of the floppy (the 512 bytes the BIOS has already loaded for you), there is no way to load the remaining sectors. For example, you can't put your DMA code in the second sector of your boot code and expect to call it, since you have to use that DMA code to load that sector in the first place. All FDC and DMA code, or at least a minimal read service, must be in the first sector of the disk. This is quite difficult to do reliably.
I am not saying it is impossible, just improbable. For one thing, if you can do it (reliably) in 512 bytes, I would like to see it; it might be a fun experiment. Anyway, do a search for the FDC, DMA, and the other things I wrote of here. There are many examples on the web. If you wish to read a book about it, I wrote one a while back with all the juicy details.
EDIT1
Okay, I couldn't post a long comment (I am new to the website, so please accept my apologies), so I am editing my earlier question. I have tried to implement multiplexing in two attempts:
- 2nd attempt
- 3rd attempt
In the 2nd attempt I tried to send the seven-segment variables of each module to the module one step above it, and once they all reach the final top module I multiplex them. There is also a clock module which generates a clock for the units module (making the units place change twice per second) and a clock for the multiplexing (switching between the displays 500 times per second). Of course, I read that my board has a clock frequency of 50 MHz, so these clock calculations are based on that figure.
In the 3rd attempt I did the same thing in one single module. See the 2nd attempt first and then the 3rd one.
Both give errors right after synthesis, and lots of unfamiliar warnings.
EDIT 2
I have been able to synthesize and implement the program in attempt 4 (which I am not allowed to post since my reputation is low) by using the save flag for the variables variables1, variables2 and variables3 (which were giving warnings about unused pins), but the program doesn't run on the FPGA; it simply shows the number 3777. There are also still "combinatorial loop" warnings for some things related to some of the variables (I am sorry, I am new to all this Verilog stuff), but you can see all of them in attempt 3 as well.
You cannot implement counters with loops. Neither can you implement cascaded counters with nested loops.
Writing HDL is not writing software! Please read a book or tutorial on VHDL or Verilog on how to design basic hardware circuits. There is also the Synthesis and Simulation Guide 14.4 - UG626 from Xilinx. Have a look at page 88.
Edit1:
Now it's possible to access your zip file without any Dropbox credentials, and I have looked into your project. Here are my comments on your code.
I'll number my bullets for better reference:
1. Your project has 4 mostly identical ucf files. The difference is only in assigning different anode control signals to the same pin location. This will cause errors in the post-synthesis steps (multiple nets assigned to one pin). Normally, simple projects have only one ucf file.
2. The Nexys 2 board has a 4-digit 7-segment display with common cathodes and switchable common anodes; in total these are 8+4 wires to control. A time-multiplexing circuit is needed to switch through every digit of your 4-digit output vector at 25 Hz < f < 1 kHz.
3. Choosing a nested hierarchy is not so good. One major drawback is passing many signals from every level up to the topmost level to connect them to the FPGA pins. I would suggest a top-level module with the 4 counters on level one. The top-level module can also provide the time-multiplexing circuit and the binary to 7-segment encoding; a sketch follows below.
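For illustration, such a top-level multiplexer could look roughly like this Verilog sketch. It assumes a 50 MHz clock and the active-low anodes/segments of the Nexys 2; the module and signal names, the BCD digit inputs, and the prescaler width are my own placeholders.

// Time-multiplexes four BCD digits onto one 7-segment bus.
// sel advances every 2^16 clocks (about 763 times/s at 50 MHz),
// so each digit is refreshed at roughly 190 Hz.
module seg7_mux (
    input  wire       clk50,                  // 50 MHz board clock
    input  wire [3:0] digit0, digit1,         // BCD values from the counters
    input  wire [3:0] digit2, digit3,
    output reg  [3:0] anode_n,                // active-low digit enables
    output reg  [6:0] seg_n                   // active-low segments g..a
);
    reg [15:0] prescaler = 16'd0;
    reg [1:0]  sel       = 2'd0;
    reg [3:0]  digit;

    always @(posedge clk50) begin
        prescaler <= prescaler + 1'b1;
        if (prescaler == 16'hFFFF)
            sel <= sel + 1'b1;                // advance to the next digit
    end

    always @* begin
        case (sel)                            // pick one digit and its anode
            2'd0: begin digit = digit0; anode_n = 4'b1110; end
            2'd1: begin digit = digit1; anode_n = 4'b1101; end
            2'd2: begin digit = digit2; anode_n = 4'b1011; end
            2'd3: begin digit = digit3; anode_n = 4'b0111; end
        endcase
        case (digit)                          // BCD to 7-segment, active low
            4'd0: seg_n = 7'b1000000;
            4'd1: seg_n = 7'b1111001;
            4'd2: seg_n = 7'b0100100;
            4'd3: seg_n = 7'b0110000;
            4'd4: seg_n = 7'b0011001;
            4'd5: seg_n = 7'b0010010;
            4'd6: seg_n = 7'b0000010;
            4'd7: seg_n = 7'b1111000;
            4'd8: seg_n = 7'b0000000;
            4'd9: seg_n = 7'b0010000;
            default: seg_n = 7'b1111111;      // blank for non-BCD input
        endcase
    end
endmodule

Note there are no combinatorial loops here: the always @* block is a pure function of sel and the digit inputs, and all state changes happen in the clocked block.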