Implementing SPI library in Arduino (how do classes work?)

I am currently trying to teach myself Arduino/C programming/assembly. I am working on a project that requires a lot of data collection, and through research I discovered a chip called the "23K256" from Microchip (see here: http://www.microchip.com/wwwproducts/Devices.aspx?dDocName=en539039). I have also discovered that an Arduino library for this chip exists (see here: http://playground.arduino.cc/Main/SpiRAM). I downloaded the "spiRAM3a.zip" file, which I believe is the most up-to-date one. Note that I only recently downloaded the Arduino software and thus have the latest version installed (I believe it's 1.0.6). Also note that I'm using an Arduino Uno, although I will eventually need to use an Arduino Mega (I just want this working on ANYTHING at this point). The library comes with example code ("SpiRAM_Example" in the package I downloaded) that reads from and writes to the 23K256, effectively increasing the SRAM available to the Arduino. Here is the actual, exact code:
#include <SPI.h>
#include <SpiRAM.h>

#define SS_PIN 10

byte clock = 0;
SpiRAM SpiRam(0, SS_PIN);

void setup() {
  Serial.begin(9600);
}

void loop()
{
  char data_to_chip[17] = "Testing 90123456";
  char data_from_chip[17] = " ";
  int i = 0;

  // Write some data to RAM
  SpiRam.write_stream(0, data_to_chip, 16);
  delay(100);

  // Read it back to a different buffer
  SpiRam.read_stream(0, data_from_chip, 16);

  // Write it to the serial port
  for (i = 0; i < 16; i++) {
    Serial.print(data_from_chip[i]);
  }
  Serial.print("\n");
  delay(1000); // wait for a second
}
My problem is that when I compile the code, to test my configuration and try to learn its use, I surprisingly get an error. This is what I get:
SpiRAM_Example:7: error: 'SpiRAM' does not name a type
SpiRAM_Example.ino: In function 'void loop()':
SpiRAM_Example:20: 'SpiRAM' was not declared in this scope
So it's basically telling me that there's something wrong with the SpiRAM SpiRam(0, SS_PIN); line of code. My question is, why? Am I misunderstanding something very fundamental about how classes work? I feel like I must be overlooking something, because I highly doubt an incorrect piece of code would be published on Arduino's website. How can I get this code to compile, or at least be able to simply use this library? Should I post the code for the library itself ("SpiRAM.h"), which was included in the package I downloaded?
I would really appreciate any help I can get, and I sincerely apologize if this is a really dumb question. I think this is the first time I've worked with classes.

Did you download the spiRAM3a.zip attachment or the original? I installed this and your code. It compiles on IDE 1.0.5.
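For context on the error itself: "'SpiRAM' does not name a type" means the compiler never saw a declaration of the SpiRAM class when it reached that line, which usually points to the library not being installed in the IDE's libraries folder (or installed under a different folder name) rather than a misunderstanding of classes. A minimal sketch with a stand-in class (not the real library) showing the same global-object pattern the example uses:

// Stand-in class, only to illustrate the pattern; the real declaration
// lives in SpiRAM.h, which is why the IDE must be able to find that header.
class FakeSpiRAM {
  public:
    FakeSpiRAM(byte mode, byte ssPin) {}                // like SpiRAM(0, SS_PIN)
    void write_stream(int addr, char *buf, int len) {}  // no-op for the sketch
};

FakeSpiRAM myRam(0, 10);  // global object: type name first, then the instance

void setup() {}
void loop() {}

If the header declaring the class is missing or not found, every use of the type fails with exactly the "does not name a type" error above.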

Related

Simulink Legacy Code Tool - custom Arduino servo write block problem

I'm trying to create my own servo.write block in Simulink for Arduino DUE deployment (and External Mode). Before you ask why, given that there is one available in the Simulink Arduino Support Package: my final goal is to create a block that uses the Arduino servo.writeMicroseconds function (there is no out-of-the-box block for that one), but first I want to try with something simple to debug, to see if I can get it to work.
I've been using this guide (https://www.mathworks.com/matlabcentral/fileexchange/39354-device-drivers) and one of the working examples in there as a template, and started modifying it (originally it implemented a Digital Output driver). I took the LCT approach.
The original digitalio_arduino.cpp/h files from the guide (the example with digital read/write) were the files I modified, as they worked without any issues out of the box. Step by step I made the following modifications:
Remove DIO read (leave only write) from CPP and H files
Change StartFcnSpec to digitalIOSetup and make changes in H file so port is always in OUTPUT mode
Include Servo.h library within CPP file and create Servo object
Up to this point, all edits went fine: no compile errors, all header files were detected by Simulink, and the diode kept blinking as it should, so the code actually worked (I ran it in External Mode).
But as soon as I made the final modification and replaced pinMode() with myservo.attach() and digitalWrite() with myservo.write() (of course I changed the data type in the writeDigitalPin function from boolean to uint8_T), the code, despite compiling and building without any issue, didn't work at all. The specified servo port was completely dead, as if it hadn't even been initialised. Changing the value on the S-Function input didn't yield any results.
Of course, if I replaced the custom block with the built-in Servo Write block from the Hardware Support Package, everything worked fine, so it's not a hardware issue.
I'm completely out of ideas as to what could be wrong, especially since there are no errors, so there's not even a hint of where to look.
Here is the LCT *.m script used for generating the S-Function:
def = legacy_code('initialize');
def.SFunctionName = 'dout_sfun';
def.OutputFcnSpec = 'void NO_OP(uint8 p1, uint8 u1)';
def.StartFcnSpec = 'void NO_OP(uint8 p1)';
legacy_code('sfcn_cmex_generate', def);
legacy_code('compile', def, '-DNO_OP=//')
def.SourceFiles = {fullfile(pwd,'..','src','digitalio_arduino.cpp')};
def.HeaderFiles = {'digitalio_arduino.h'};
def.IncPaths = {fullfile(pwd,'..','src'), 'C:\ProgramData\MATLAB\SupportPackages\R2021b\aIDE\libraries\Servo\src'};
def.OutputFcnSpec = 'void writeDigitalPin(uint8 p1, uint8 u1)';
def.StartFcnSpec = 'void digitalIOSetup(uint8 p1)';
legacy_code('sfcn_cmex_generate', def);
legacy_code('sfcn_tlc_generate', def);
legacy_code('rtwmakecfg_generate',def);
legacy_code('slblock_generate',def);
Here is the digitalio_arduino.cpp file:
#include <Arduino.h>
#include <Servo.h>
#include "digitalio_arduino.h"

Servo myservo;

// Digital I/O initialization
extern "C" void digitalIOSetup(uint8_T pin)
{
    //pinMode(pin, OUTPUT);
    myservo.attach(pin);
}

// Write a logic value to pin
extern "C" void writeDigitalPin(uint8_T pin, uint8_T val)
{
    //digitalWrite(pin, val);
    myservo.write(val);
}
// [EOF]
And here is the digitalio_arduino.h file:
#ifndef _DIGITALIO_ARDUINO_H_
#define _DIGITALIO_ARDUINO_H_
#include "rtwtypes.h"
#ifdef __cplusplus
extern "C" {
#endif
void digitalIOSetup(uint8_T pin);
void writeDigitalPin(uint8_T pin, uint8_T val);
#ifdef __cplusplus
}
#endif
#endif //_DIGITALIO_ARDUINO_H_
As I mentioned, I've been using a working example as a reference, and I modified it step by step to see if at some point an error would suddenly come up. But everything compiles, yet it does not work :/
I was wondering if maybe there is an issue with the Servo.h library or the Servo object, and I did some tinkering with these. For example, I removed the Servo myservo; line of code to see if anything happens, and just as expected, I started receiving errors that Servo is not defined. If I did not include Servo.h at all, or forgot to add the IncPath for Servo.h, I got compile errors about Servo not being a supported symbol or the Servo.h library not being found. So the code actually seems to be "working" in a way; it seems to have everything it needs :/
I also looked at the MathWorks implementation of the Servo Write block, MWServoReadWrite, to see how the Arduino API is used, and no surprise, it's used in the same way I've been trying to use it. They include Servo.h, and they use servo.attach() and servo.write() to control the servo port. And that's it. Yet for them it works, and for me it does not :/
When I inspect the generated C code that runs on the Arduino (with my custom S-Function block in it), all the functions seem to be placed exactly where they are supposed to be, and they receive the correct arguments. I expected to at least find a hint in there, i.e. missing code or anything else.
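One thing worth ruling out (this is my speculation, not something confirmed above): a file-scope C++ object like Servo myservo; relies on static constructors being run before digitalIOSetup() is ever called, and a generated embedded main() does not always run them. A sketch of the same .cpp rewritten to construct the Servo lazily on first use, which sidesteps global construction entirely:

#include <Arduino.h>
#include <Servo.h>
#include "digitalio_arduino.h"

// Function-local static: constructed on the first call, at runtime,
// rather than relying on startup code to run global constructors.
static Servo &theServo(void)
{
    static Servo s;
    return s;
}

extern "C" void digitalIOSetup(uint8_T pin)
{
    theServo().attach(pin);
}

extern "C" void writeDigitalPin(uint8_T pin, uint8_T val)
{
    theServo().write(val);
}

If this version behaves differently from the global-object version, that would point at initialization order rather than the Servo API itself.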

stm32 NVIC_EnableIRQ() bare metal equivalent?

I'm using the blue pill, and trying to figure out interrupts. I have an interrupt handler:
void __attribute__ ((interrupt ("TIM4_IRQHandler"))) myhandler()
{
    puts("hi");
    TIM4->EGR |= TIM_EGR_UG; // send an update event to reset the timer and apply settings
    TIM4->SR &= ~0x01;       // clear UIF
    TIM4->DIER |= 0x01;      // UIE
}
I set up the timer:
RCC_APB1ENR |= RCC_APB1ENR_TIM4EN;
TIM4->PSC = 7999;
TIM4->ARR = 1000;
TIM4->EGR |= TIM_EGR_UG;                // send an update event to reset the timer and apply settings
TIM4->EGR |= (TIM_EGR_TG | TIM_EGR_UG);
TIM4->DIER |= 0x01;                     // UIE: enable the update interrupt
TIM4->CR1 |= TIM_CR1_CEN;
My timer doesn't seem to activate. I don't think I've actually enabled it though. Have I??
I see in lots of example code commands like:
NVIC_EnableIRQ(USART1_IRQn);
What is actually going on in NVIC_EnableIRQ()?
I've googled around, but I can't find actual bare-metal code that's doing something similar to mine.
I seem to be missing a crucial step.
Update 2020-09-23: Thanks to the respondents to this question. The trick is to set the bit for the interrupt number in an NVIC_ISER register. As I pointed out below, this doesn't seem to be mentioned in the STM32F101xx reference manual, so I probably would never have figured it out on my own; not that I have any real skill in reading datasheets.
Anyway, oh joy, I managed to get interrupts working! You can see the code here: https://github.com/blippy/rpi/tree/master/stm32/bare/04-timer-interrupt
Even if you go bare metal, you might still want to use the CMSIS header files, which provide declarations and inline versions of very basic ARM Cortex elements such as NVIC_EnableIRQ.
You can find NVIC_EnableIRQ at https://github.com/ARM-software/CMSIS_5/blob/develop/CMSIS/Core/Include/core_cm3.h#L1508
It's defined as:
#define NVIC_EnableIRQ __NVIC_EnableIRQ

__STATIC_INLINE void __NVIC_EnableIRQ(IRQn_Type IRQn)
{
    if ((int32_t)(IRQn) >= 0)
    {
        __COMPILER_BARRIER();
        NVIC->ISER[(((uint32_t)IRQn) >> 5UL)] = (uint32_t)(1UL << (((uint32_t)IRQn) & 0x1FUL));
        __COMPILER_BARRIER();
    }
}
If you want to, you can ignore __COMPILER_BARRIER(). Previous versions didn't use it.
This definition is applicable to Cortex-M3 chips. It's different for other Cortex versions.
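Boiled down to the single store that matters for this question, the bare-metal equivalent is below (a sketch, assuming the standard Cortex-M3 NVIC address and TIM4 being interrupt 30 on the STM32F103, both of which come up again further down):

#include <stdint.h>

/* NVIC_ISER0 lives at 0xE000E100 and covers interrupts 0..31 */
#define NVIC_ISER0 (*(volatile uint32_t *)0xE000E100UL)

static void enable_tim4_irq(void)
{
    /* Writing a 1 bit sets the enable; 0 bits are ignored,
       so no read-modify-write is needed on ISER registers. */
    NVIC_ISER0 = (1UL << 30);  /* TIM4 global interrupt is position 30 */
}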
Using the libraries is still considered bare metal; there's no operating system either way. Anyway, it's good that you have a desire to learn at this level. Someone has to write the libraries for others.
I was going to do a full example here (it really takes very little code to do this), but instead I'll take from my code for this board, which uses timer1.
You obviously need the ARM documentation (the technical reference manual for the Cortex-M3 and the architecture reference manual for ARMv7-M) and the datasheet and reference manual for this ST part (no need for the programmer's manual from either company).
You have provided next to no information related to making the part work. You should never dive right into an interrupt; interrupts are an advanced topic, and you should poll your way as far as possible before finally enabling the interrupt into the core.
I prefer to get a UART working, then use that to watch the timer registers as they roll over, count, etc. Then see/confirm that the status register fired, and learn/confirm how to clear it (sometimes it is just cleared on read).
Then enable it into the NVIC and, by polling, see that the NVIC sees it and that you can clear it.
You didn't show your vector table; this is key to getting your interrupt handler working, much less the core booting.
08000000 <_start>:
8000000: 20005000
8000004: 080000b9
8000008: 080000bf
800000c: 080000bf
...
80000a0: 080000bf
80000a4: 080000d1
80000a8: 080000bf
...
080000b8 <reset>:
80000b8: f000 f818 bl 80000ec <notmain>
80000bc: e7ff b.n 80000be <hang>
...
080000be <hang>:
80000be: e7fe b.n 80000be <hang>
...
080000d0 <tim1_handler>:
The first word loads the stack pointer; the rest are vectors, the address of the handler ORed with one (I'll let you look that up).
In this case the ST reference manual shows that interrupt 25 is TIM1_UP, at address 0x000000A4, which mirrors to 0x080000A4, and that is where the handler is in my binary. If yours is not there, you can use VTOR to point at an aligned space (sometimes SRAM, or some other flash space that you build for this). Either way, your vector table must contain the proper handler pointer, or your interrupt handler won't run.
volatile unsigned int counter;

void tim1_handler ( void )
{
    counter++;
    PUT32(TIM1_SR, 0);
}
volatile isn't necessarily the right way to share a variable between an interrupt handler and a foreground task; it happens to work for me with this compiler/code. You can do the research and, even better, examine the compiler output (disassemble the binary) to confirm this isn't a problem.
ra = GET32(RCC_APB2ENR);
ra |= 1 << 11; // TIM1
PUT32(RCC_APB2ENR, ra);
...
counter = 0;
PUT32(TIM1_CR1,  0x00001);
PUT32(TIM1_DIER, 0x00001);
PUT32(NVIC_ISER0, 0x02000000);
for(rc = 0; rc < 10;)
{
    if(counter >= 1221)
    {
        counter = 0;
        toggle_led();
        rc++;
    }
}
PUT32(TIM1_CR1,  0x00000);
PUT32(TIM1_DIER, 0x00000);
A minimal init and runtime for tim1.
Notice that in NVIC_ISER0 it is bit 25 that is set, enabling interrupt 25 through.
Well before trying this code, I polled the timer status register to see how it works, compared with the docs, and cleared the interrupt per the docs. Then, with that knowledge, I confirmed with the NVIC_ICPR0,1,2 registers that it was interrupt 25, and that there were no other gates between the peripheral and the NVIC, as some chips from some vendors have.
Then I released it through to the core with NVIC_ISER0.
If you don't take these baby steps (perhaps you already have), it only makes the task much worse and longer (yes, sometimes you get lucky).
TIM4 looks to be interrupt 30, offset/address 0x000000B8, in the vector table. NVIC_ISER0 (0xE000E100) covers the first 32 interrupts, so 30 would be in that register. If you disassemble the code you are generating with the library, then we can see what is going on, and/or look it up in the library source code (as someone already did for you).
And then, of course, your timer 4 code needs to properly init the timer and cause the interrupt to fire, which I didn't check.
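In this answer's PUT32 idiom, the TIM4 equivalent of the TIM1 enable above would presumably be:

PUT32(NVIC_ISER0, 0x40000000); // 1<<30: TIM4 is interrupt 30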
There are examples, you need to just keep looking.
The minimum is
vector in the table
set the bit in the interrupt set enable register
enable the interrupt to leave the peripheral
fire the interrupt
Not necessarily in that order.

ARM Eclipse debugging code in the RAM. Is it possible to see the source code?

I have a problem when I try to debug code which is copied to the SRAM and executed from there.
The code overwrites the data, but that is done only during the system update. The sections where the code is placed are correctly defined in the linker script file, and the debugger correctly sees the addresses. But when I step into the function (and the code in RAM is the correct one), it does not connect the source files with the code executed in memory.
Do you know how this can be done? Debugging C code at the assembler level is not something which makes me happy :)
Any help appreciated.
The problem is a bit silly. When you call a RAM function from flash (the first call has to be done this way), it has to go through a veneer, and that was messing up the debugger. But with my own calling macro (because of the distance, the call has to be made via a pointer), everything works fine.
An example calling macro:
#define RAMFCALL(func, ...) {unsigned (* volatile fptr)() = (unsigned (* volatile)())func; fptr(__VA_ARGS__);}
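A hypothetical usage example (the function name and section attribute below are mine, for illustration only):

/* A function the linker script places in SRAM, e.g. in a .ramfunc section */
__attribute__((section(".ramfunc"))) unsigned flash_update(unsigned key);

void do_update(void)
{
    /* Forces an indirect call through a volatile pointer, so the
       compiler cannot emit a direct flash-to-RAM veneer call. */
    RAMFCALL(flash_update, 0x1234);
}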

STM32H7 - IAR Placing Local Variables into 'Reserved Memory' (0x1FF20000 - 0x1FFFFFFF)

I started a new project using an STM32H7, currently using IAR EWARM v8. I used STM32CubeMX to generate the configuration code and get an initial project going.
I worked through a couple of the CubeMX eval projects to get some hardware verified and working, and I am able to step through code fine.
But there is something odd going on, in particular with variables: if you declare them as local vars within a function, somehow IAR is placing them into the 'System Reserved' memory range...
i.e. within 0x1FF20000 - 0x1FFFFFFF
For example, the example project 'FMC_NOR' that STM provides is test code for testing out a NOR flash, etc.
They created these two small arrays as global vars right at the top of the main.c file (BUFFER_SIZE is 0x1000):
uint16_t aTxBuffer[BUFFER_SIZE] = {0};
uint16_t aRxBuffer[BUFFER_SIZE] = {0};
When in the global space, they are allocated in the DTCM region (0x20000000).
When moved to local vars, they are then allocated into the 'reserved space'...
What happens is, when the code touches arrays like this, the processor faults with an 'imprecise data access' hardware fault.
The same error occurs with the code that initializes the JPEG module, as it attempts to load the arrays of Huffman tables, etc...
When using TrueStudio this problem does not occur... CubeMX auto-generates the linker files for whichever compiler you are using.
I didn't specifically see anything in the linker files pointing to the reserved memory address.
So I'm not sure what could be going on. I'm new to this processor, so I'm just starting to understand its memory mapping.
Thanks for any help or suggestions. I'd like to get IAR figured out, as so far I like it a bit better than TrueStudio.
I solved my own question, so I no longer need help on this. The local arrays live on the stack, and the default stack was too small, so the stack pointer was descending below the DTCM region (which starts at 0x20000000) into the reserved addresses.
This is in the 'stm32h743xx_flash.icf' generated by STM32CubeMX for the STM32H7...
/*-Sizes-*/
define symbol __ICFEDIT_size_cstack__ = 0x400;
define symbol __ICFEDIT_size_heap__ = 0x200;
/**** End of ICF editor section. ###ICF###*/
Bumped the 'size_cstack' up to 2k (0x800) and everything is fine...
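For reference, the described change is just this one line in the .icf (my reconstruction of the edit, not the full file):

define symbol __ICFEDIT_size_cstack__ = 0x800;   /* was 0x400 */

Keep in mind a single 0x1000-element uint16_t buffer is already 8 KB, so locals of that size need a correspondingly larger stack, or should stay global/static.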

Running executables of different format on any OS

This shouldn't be as hard as one may think, if I've got it right. Specifically, I'll begin with iOS and the ELF executable format. Let's clarify that I have a jailbroken iPhone and I don't want to do this in any App Store apps, so please avoid "good advice" like "you can't do it as it's prohibited by Apple".
So, what I have seen is that there's a Flash player implementation called Frash (by Comex, btw, developer of recent jailbreaks). This utility requires, after installation, that Android's libflashplayer.so is present on (copied to) the iPhone file system. I dug into the source code and found out that the tweak actually opens the Android (ELF) shared object file, "parses" it and executes code from it. I already asked a friend of mine whether it is actually possible, and he told me that it is, because ELF on ARM and Mach-O on ARM are binary compatible (because they're both ARM). But he failed to explain it to me in detail, so I'd like to ask: how can it be done? I can't exactly understand the source code fragment that handles this, but one thing is sure:
int fd = open("libflashplayer.so", O_RDONLY);
_assert(fd > 0);
fds_init();
sandbox_me();

int symtab_size;
Elf32_Sym *symtab;
void **init_array;
Elf32_Word init_array_size;
char *strtab;
TIME(base_load_elf(fd, &symtab, &symtab_size, &init_array, &init_array_size, &strtab));

// Call the init funcs
_assert(init_array);
while(init_array_size >= 4) {
    void (*x)() = *init_array++;
    notice("Calling %p", x);
    x();
    init_array_size -= 4;
}
(from the original code, as of 02/12/2011 on GitHub)
It seems to me that he uses libelf to perform this, right? And that in an ELF file there are symbols that can be executed on a compatible processor just fine?
I'd also like to know whether this holds for other processor architectures as well. So maybe one could execute symbols from Linux binaries on OS X?
The important thing for compatibility is the underlying processor architecture, not Linux vs. OS X vs. Android. If the ELF executable or .so is compiled for the same processor instruction set, then this can work. If not, they are not compatible. For example, if both were built for Linux but for different processors, they would not be compatible.
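To illustrate that point, here is a minimal sketch of the check that matters (my own example, assuming a Linux-style elf.h; none of this is from Frash). It reads an ELF header and prints e_machine, the field that says which instruction set the contained code targets:

#include <elf.h>
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    if (argc < 2) return 1;

    Elf32_Ehdr eh;
    int fd = open(argv[1], O_RDONLY);
    if (fd < 0 || read(fd, &eh, sizeof eh) != (ssize_t)sizeof eh)
        return 1;

    /* e_machine is what must match the CPU; the container format
       (ELF vs. Mach-O) is a separate, solvable problem. */
    printf("e_machine = %d%s\n", eh.e_machine,
           eh.e_machine == EM_ARM ? " (ARM)" : "");
    close(fd);
    return 0;
}

Run against libflashplayer.so, this would report the ARM machine type, which is exactly why the ARM Mach-O process in Frash can jump into code extracted from an ARM ELF file.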