USART of PIC18F4550

I'm working on a PIC18F4550 and want it to communicate over the USART. I'm able to transmit a character but not able to receive any data. I have checked all the SFRs and they look right to me. I'm using the MPLAB C18 v3.46 compiler and MPLAB v8.40.
#include <p18f4550.h>
#include<usart.h>
#pragma config VREGEN = OFF // USB voltage regulator disabled
#pragma config WDT = OFF // Watchdog timer disabled
#pragma config PLLDIV = 1 // No PLL prescaler
#pragma config MCLRE = ON
#pragma config WDTPS = 32768
#pragma config CCP2MX = ON
#pragma config PBADEN = OFF
#pragma config CPUDIV = OSC1_PLL2
#pragma config USBDIV = 2
#pragma config FOSC = INTOSCIO_EC
#pragma config FCMEN = OFF
#pragma config IESO = OFF
#pragma config PWRT = OFF
#pragma config BOR = OFF
#pragma config BORV = 3
#pragma config LPT1OSC = OFF
#pragma config STVREN = ON
#pragma config LVP = OFF
#pragma config ICPRT = OFF
#pragma config XINST = OFF
#pragma config DEBUG = OFF
#pragma config CP0 = OFF, CP1 = OFF, CP2 = OFF, CP3 = OFF
#pragma config CPB = OFF
#pragma config CPD = OFF
#pragma config WRT0 = OFF, WRT1 = OFF, WRT2 = OFF, WRT3 = OFF
#pragma config WRTC = OFF
#pragma config WRTB = OFF
#pragma config WRTD = OFF
#pragma config EBTR0 = OFF, EBTR1 = OFF, EBTR2 = OFF, EBTR3 = OFF
#pragma config EBTRB = OFF
#define a PORTD
int i,j;
unsigned char serial_data;
extern void delay(int);
extern void tx_data(unsigned char);
extern unsigned char rx_data(void);
void tx_data(unsigned char data1)
{
    TXREG = data1;
    while (PIR1bits.TXIF == 0);
}
unsigned char rx_data(void)
{
    while (PIR1bits.RCIF == 0); // Wait until RCIF is set (a byte has arrived)
    return RCREG;
}
void main(void)
{
    OSCCON = 0x74;
    TRISD = 0x00;
    TRISC = 0x80;
    OpenUSART(USART_TX_INT_OFF & USART_RX_INT_ON & USART_ASYNCH_MODE & USART_EIGHT_BIT & USART_CONT_RX & USART_BRGH_LOW, 12);
    RCON = 0x90;
    INTCON = 0xC0;
    IPR1 = 0x00;
    BAUDCON = 0x00;
    RCSTA = 0x90;
    tx_data('o');            // Transmit 'o' to the PC
    serial_data = rx_data(); // Receive data from the PC
    tx_data('k');
}
I found this code on the net and modified it accordingly. It transmits 'o' and then never responds again; the 'k' is never sent.

Off the top of my head... I don't use the C18 compiler, but if it behaves like any regular compiler, it's probably something like this:
You activate the USART receive interrupt with the USART_RX_INT_ON flag, and then you enable the GIEH/GIEL bits in INTCON.
But you don't provide an interrupt service routine. So my guess is that the interrupt vector locations (program addresses 0x08 and 0x18 on the PIC18) contain only NOP instructions. When a receive event hits, the core jumps to the low-priority interrupt vector at address 0x18 (because IPEN is set in RCON and IPR1 is cleared), and from there execution simply falls through to whatever valid instruction comes next, because there is no GOTO at that vector to branch to a proper ISR and later return to the point where the interrupt occurred...
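A minimal sketch of the missing piece, assuming the C18 interrupt pragmas (the handler and section names here are made up for illustration; serial_data is the variable from your code):
#pragma interruptlow usart_rx_isr
void usart_rx_isr(void)
{
    if (PIR1bits.RCIF)          // receive flag set?
    {
        serial_data = RCREG;    // reading RCREG clears RCIF
    }
}
#pragma code low_isr_vector = 0x18
void low_isr_branch(void)
{
    _asm goto usart_rx_isr _endasm
}
#pragma code
Alternatively, if you want to keep polling RCIF in rx_data() as you do now, open the USART with USART_RX_INT_OFF so that no unhandled receive interrupt is generated in the first place.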

STM32WL55JC1 - HAL ADC won't change channels

What I want to accomplish
I want to accomplish the following: I have 3 FreeRTOS threads, each of which should read one of 3 (of 5) channels of my ADC by polling. Each thread then puts the value it reads into a FreeRTOS queue.
My code so far
I have the following functions:
ADC initialisation
void MX_ADC_Init(void)
{
    hadc.Instance = ADC;
    hadc.Init.Resolution = ADC_RESOLUTION_12B;
    hadc.Init.ClockPrescaler = ADC_CLOCK_ASYNC_DIV4;
    hadc.Init.ScanConvMode = DISABLE;
    hadc.Init.ContinuousConvMode = DISABLE;
    hadc.Init.DiscontinuousConvMode = DISABLE;
    hadc.Init.ExternalTrigConv = ADC_SOFTWARE_START;
    hadc.Init.ExternalTrigConvEdge = ADC_EXTERNALTRIGCONVEDGE_NONE;
    hadc.Init.DataAlign = ADC_DATAALIGN_RIGHT;
    hadc.Init.NbrOfConversion = 1;
    hadc.Init.DMAContinuousRequests = DISABLE;
    hadc.Init.Overrun = ADC_OVR_DATA_PRESERVED;
    hadc.Init.EOCSelection = ADC_EOC_SEQ_CONV;
    hadc.Init.LowPowerAutoPowerOff = DISABLE;
    hadc.Init.LowPowerAutoWait = DISABLE;
    if (HAL_ADC_Init(&hadc) != HAL_OK)
    {
        Error_Handler();
    }
    for (int ch = 0; ch < GPIO_AI_COUNT; ch++)
    {
        ADC_Select_Ch(ch);
    }
}
GPIO initialisation
GPIO_InitTypeDef GpioInitStruct = {0};
GpioInitStruct.Pin = GPIO_AI1_PIN | GPIO_AI2_PIN | GPIO_AI3_PIN | GPIO_AI4_PIN | GPIO_AI5_PIN;
GpioInitStruct.Pull = GPIO_NOPULL;
GpioInitStruct.Speed = GPIO_SPEED_FREQ_LOW;
GpioInitStruct.Mode = GPIO_MODE_ANALOG;
HAL_GPIO_Init(GPIOB, &GpioInitStruct);
Where the GPIO_AI2_PIN definition is defined as:
/* Analog Inputs ----------------------------------------------------------- */
#define GPIO_AI_COUNT 5
#define GPIO_AI1_PIN GPIO_PIN_3
#define GPIO_AI1_PORT GPIOB
#define GPIO_AI1_CH ADC_CHANNEL_2 /* ADC_IN2, Datasheet P. 51 */
#define GPIO_AI2_PIN GPIO_PIN_4
#define GPIO_AI2_PORT GPIOB
#define GPIO_AI2_CH ADC_CHANNEL_3 /* ADC_IN3, Datasheet P. 51 */
#define GPIO_AI3_PIN GPIO_PIN_14
#define GPIO_AI3_PORT GPIOB
#define GPIO_AI3_CH ADC_CHANNEL_1 /* ADC_IN1, Datasheet P. 55 */
#define GPIO_AI4_PIN GPIO_PIN_13
#define GPIO_AI4_PORT GPIOB
#define GPIO_AI4_CH ADC_CHANNEL_0 /* ADC_IN0, Datasheet P. 55 */
#define GPIO_AI5_PIN GPIO_PIN_2
#define GPIO_AI5_PORT GPIOB
#define GPIO_AI5_CH ADC_CHANNEL_4 /* ADC_IN4, Datasheet P. 54 */
Changing channel
void ADC_Select_Ch(uint8_t channelNb)
{
    adcConf.Rank = ADC_RANKS[channelNb];
    adcConf.Channel = GPIO_AI_CH[channelNb];
    adcConf.SamplingTime = ADC_SAMPLETIME_12CYCLES_5;
    if (HAL_ADC_ConfigChannel(&hadc, &adcConf) != HAL_OK)
    {
        Error_Handler();
    }
}
Where ADC_RANKS and GPIO_AI_CH are static arrays of the channels and ranks I want to use. The ranks increase with every channel.
Reading a channel
uint32_t ADC_Read_Ch(uint8_t channelNb)
{
    uint32_t adc_value = 0;
    ADC_Select_Ch(channelNb);
    HAL_ADC_Start(&hadc);
    if (HAL_OK == HAL_ADC_PollForConversion(&hadc, ADC_CONVERSION_TIMEOUT))
    {
        adc_value = HAL_ADC_GetValue(&hadc);
    }
    HAL_ADC_Stop(&hadc);
    printf("Ch%d / %x) %d\r\n", channelNb, adcConf.Channel, adc_value);
    return adc_value;
}
The problem
No matter what I try, the ADC only ever reads the channel before the last one I defined. Every time a conversion happens, HAL_ADC_GetValue(...) returns the value of a single channel, one which I haven't even "selected" with my method.
What I've tried so far
I tried several different things:
Change NumberOfConversions
Change ScanMode, ContinuousConvMode, Overrun, EOCSelection, etc.
Use only Rank "1" when choosing a channel
Not using HAL_ADC_Stop(...); that, however, resulted in a failure (the error handler was called)
Using the read functions etc. in the main(), not in a FreeRTOS thread - this also resulted in only one channel being read.
Change GPIO setup
Make the adcConfig global and public, so that maybe the config is shared among the channel selections.
Different clock settings
"Disabling" all other channels but the one I want to use (*)
Several other things which I've already forgotten
There seems to be one big thing I am completely missing. Most of the examples are for one of the STM32Fxx microcontrollers, so maybe the ADC hardware is not the same and I can't do it this way. However, since I am using the HAL, I would expect the same approach to work; it would be odd if it weren't broadly the same across different MCU families.
I really want to use polling, and ask one channel of the ADC by using some kind of channel selection, so that I can read them in different FreeRTOS tasks.
Disabling channels
I tried "disabling" channels but the one I've used with this function:
void ADC_Select_Ch(uint8_t channelNb)
{
    for (int ch = 0; ch < GPIO_AI_COUNT; ch++)
    {
        adcConf.SamplingTime = ADC_SAMPLETIME_12CYCLES_5;
        adcConf.Channel = GPIO_AI_CH[ch];
        adcConf.Rank = ADC_RANK_NONE;
        if (HAL_ADC_ConfigChannel(&hadc, &adcConf) != HAL_OK)
        {
            Error_Handler();
        }
    }
    adcConf.SamplingTime = ADC_SAMPLETIME_12CYCLES_5;
    adcConf.Channel = GPIO_AI_CH[channelNb];
    if (HAL_ADC_ConfigChannel(&hadc, &adcConf) != HAL_OK)
    {
        Error_Handler();
    }
}
Can anyone help me? I'm really stuck, and the Reference Manual does not provide a good "guide" on how to use it. Only technical information, lol.
Thank you!
I think your general approach seems reasonable. I've done something similar on a project (for an STM32F0), where I had to switch the ADC between two channels. I think you do need to disable the unused channels. Here is some code verbatim from my project:
static void configure_channel_as( uint32_t channel, uint32_t rank )
{
    ADC_ChannelConfTypeDef sConfig = { 0 };
    sConfig.Channel = channel;
    sConfig.Rank = rank;
    sConfig.SamplingTime = ADC_SAMPLETIME_1CYCLE_5;
    if ( HAL_ADC_ConfigChannel ( &hadc, &sConfig ) != HAL_OK )
    {
        dprintf ( "Failed to configure channel\r\n" );
    }
}

void adc_configure_for_head( void )
{
    configure_channel_as ( ADC_CHANNEL_0, ADC_RANK_CHANNEL_NUMBER );
    configure_channel_as ( ADC_CHANNEL_6, ADC_RANK_NONE );
}

void adc_configure_for_voltage( void )
{
    configure_channel_as ( ADC_CHANNEL_6, ADC_RANK_CHANNEL_NUMBER );
    configure_channel_as ( ADC_CHANNEL_0, ADC_RANK_NONE );
}

uint16_t adc_read_single_sample( void )
{
    uint16_t result;
    if ( HAL_ADC_Start ( &hadc ) != HAL_OK )
        dprintf ( "Failed to start ADC for single sample\r\n" );
    if ( HAL_ADC_PollForConversion ( &hadc, 100u ) != HAL_OK )
        dprintf ( "ADC conversion didn't complete\r\n" );
    result = HAL_ADC_GetValue ( &hadc );
    if ( HAL_ADC_Stop ( &hadc ) != HAL_OK )
        dprintf ( "Failed to stop DMA\r\n" );
    return result;
}
My project was bare-metal (no-OS) and had a single thread. I don't know enough about how your tasks are scheduled, but I'd be concerned that they might be "fighting over" the ADC if there is a chance they could be run concurrently (or pre-empt each other). Make sure the entire configure / read sequence is protected by some kind of mutex or semaphore.
EDIT: I notice a bug in your "Disabling channels" code, where you don't seem to set the rank of your enabled channel. I don't know if that is a transcription error, or an actual bug.
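To illustrate the mutex suggestion, here is a minimal sketch using a FreeRTOS mutex (the mutex handle and the wrapper name are assumptions; ADC_Read_Ch() is the function from the question):
#include <stdint.h>
#include "FreeRTOS.h"
#include "semphr.h"
extern uint32_t ADC_Read_Ch(uint8_t channelNb);   /* from the question */
static SemaphoreHandle_t adc_mutex;               /* create once at startup: adc_mutex = xSemaphoreCreateMutex(); */
uint32_t ADC_Read_Ch_Locked(uint8_t channelNb)
{
    uint32_t value = 0;
    if (xSemaphoreTake(adc_mutex, portMAX_DELAY) == pdTRUE)
    {
        value = ADC_Read_Ch(channelNb);           /* select + start + poll + stop as one atomic unit */
        xSemaphoreGive(adc_mutex);
    }
    return value;
}
Each task would call ADC_Read_Ch_Locked() instead of ADC_Read_Ch(), so the configure/read sequence can never be interleaved between tasks.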
Ranks are used to order the ADC channels for continuous measurements or channel scans. HAL_ADC_PollForConversion only works on a single channel and needs to know which channel to pick, so it uses the one with the lowest rank. To configure a specific channel to be measured once, set its rank to ADC_REGULAR_RANK_1.
No need to disable any other channels, but remember to properly configure the ranks of all channels if you want to switch to channel scanning or continuous measurements.
HAL_ADC_ConfigChannel(&hadc1, &channel_config) only updates the configuration of the channel itself; it does not update the configuration of the ADC peripheral. So it has to be understood as "configure this channel" and not as "configure the ADC to use this channel".
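As a minimal sketch of that single-shot selection, under the assumption that the STM32WL HAL provides ADC_REGULAR_RANK_1 (hadc, GPIO_AI_CH and Error_Handler are taken from the question; the function name is made up):
/* assumes "stm32wlxx_hal.h" and the externally defined hadc handle */
void ADC_Select_Single_Ch(uint8_t channelNb)
{
    ADC_ChannelConfTypeDef conf = {0};
    conf.Channel      = GPIO_AI_CH[channelNb];    /* channel table from the question */
    conf.Rank         = ADC_REGULAR_RANK_1;       /* lowest rank: picked by the next conversion */
    conf.SamplingTime = ADC_SAMPLETIME_12CYCLES_5;
    if (HAL_ADC_ConfigChannel(&hadc, &conf) != HAL_OK)
    {
        Error_Handler();
    }
}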

print floats from audio input callback function

I'm working on a university project that involves a lot of programming in C, especially with PortAudio and ALSA. At the moment I'm trying to make a callback function to pass audio through, a standard input/output job. I was wondering if anybody could tell me how to print the floats from my inputBuffer to the terminal in real time? Here is the internal structure of my callback function so far.
Thanks very much for your help in advance!
#define SAMPLE_RATE (44100)
#define PA_SAMPLE_TYPE paFloat32
#define FRAMES_PER_BUFFER (64)
typedef float SAMPLE;
static int audio_callback( const void *inputBuffer, void *outputBuffer,
                           unsigned long framesPerBuffer,
                           const PaStreamCallbackTimeInfo* timeInfo,
                           PaStreamCallbackFlags statusFlags,
                           void *userData )
{
    SAMPLE *out = (SAMPLE*)outputBuffer;
    const SAMPLE *in = (const SAMPLE*)inputBuffer;
    unsigned int i;
    (void) timeInfo; /* Prevent unused variable warnings. */
    (void) statusFlags;
    (void) userData;
    if( inputBuffer != NULL ) /* only copy when there is input to read */
    {
        for( i=0; i<framesPerBuffer; i++ )
        {
            *out++ = *in++; /* left - clean */
            *out++ = *in++; /* right - clean */
        }
    }
    return paContinue;
}
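One hedged way to display the samples (the helper name is made up; SAMPLE is the typedef above): printing from a real-time callback is only really acceptable for quick debugging, so limit it to a few values per buffer, or better, copy the samples into a ring buffer and print them from the main thread.
#include <stdio.h>
static void print_first_samples(const SAMPLE *in, unsigned long framesPerBuffer)
{
    unsigned long i;
    unsigned long n = (framesPerBuffer < 4) ? framesPerBuffer : 4;  /* avoid flooding the terminal */
    for (i = 0; i < n; i++)
        printf("in[%lu] = %f\r\n", i, (double)in[i]);
}
A call such as print_first_samples(in, framesPerBuffer) would go inside the if( inputBuffer != NULL ) branch of the callback.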

GPS output being incorrectly written to file on SD card- Arduino

I have a sketch to take information (Lat, Long) from an EM-406a GPS receiver and write the information to an SD card on an Arduino shield.
The program is as follows:
#include <TinyGPS++.h>
#include <SoftwareSerial.h>
#include <SD.h>
TinyGPSPlus gps;
SoftwareSerial ss(4, 3); //pins for the GPS
Sd2Card card;
SdVolume volume;
SdFile root;
SdFile file;
void setup()
{
  Serial.begin(115200); //for the serial output
  ss.begin(4800); //start ss at 4800 baud
  Serial.println("gpsLogger by Aaron McRuer");
  Serial.println("based on code by Mikal Hart");
  Serial.println();
  //initialize the SD card
  if (!card.init(SPI_FULL_SPEED, 9))
  {
    Serial.println("card.init failed");
  }
  //initialize a FAT volume
  if (!volume.init(&card)) {
    Serial.println("volume.init failed");
  }
  //open the root directory
  if (!root.openRoot(&volume)) {
    Serial.println("openRoot failed");
  }
  //create new file
  char name[] = "WRITE00.TXT";
  for (uint8_t i = 0; i < 100; i++) {
    name[5] = i/10 + '0';
    name[6] = i%10 + '0';
    if (file.open(&root, name, O_CREAT | O_EXCL | O_WRITE)) {
      break;
    }
  }
  if (!file.isOpen())
  {
    Serial.println("file.create");
  }
  file.print("Ready...\n");
}
void loop()
{
  bool newData = false;
  //For one second we parse GPS data and report some key values
  for (unsigned long start = millis(); millis() - start < 1000;)
  {
    while (ss.available())
    {
      char c = ss.read();
      //Serial.write(c); //uncomment this line if you want to see the GPS data flowing
      if (gps.encode(c)) //did a new valid sentence come in?
        newData = true;
    }
  }
  if (newData)
  {
    file.write(gps.location.lat());
    file.write("\n");
    file.write(gps.location.lng());
    file.write("\n");
  }
  file.close();
}
When I open the file on the SD card after the program has finished executing, I get a message that it has an encoding error.
I'm currently inside (and unable to get a GPS signal, hence the 0), but the encoding problem needs to be tackled, and there should be as many lines as there are seconds the device has been on. There's only that one. What do I need to do to make things work correctly here?
Closing the file in the loop, and never reopening it, is the reason there's only one set of data in your file.
Are you sure gps.location.lat() and gps.location.lng() return strings, not an integer or float? That would explain the binary data and the "encoding error" you see.
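For instance, a hedged sketch of the adjusted loop() body along both of those lines (TinyGPS++'s lat()/lng() return doubles; sync() is SdFat's way of flushing without closing):
if (newData)
{
  file.print(gps.location.lat(), 6);   // 6 decimal places, written as text
  file.print("\n");
  file.print(gps.location.lng(), 6);
  file.print("\n");
  file.sync();                         // flush to the card instead of file.close()
}
The file.close() call would then be removed from loop() and done once when logging is finished.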

Xcode duplicate symbol error

I am getting "Apple Mach-O Linker (Id) Error":
ld: duplicate symbol _matrixIdentity in /BlahBlah/Corridor.o and /Blahblah/Drawable.o for architecture i386
The class "Corridor" is extending the class "Drawable" and "_matrixIdentity" is defined and implemented in a file "Utils.h". Here are top lines from my header files:
Drawable.h
#import <Foundation/Foundation.h>
#import "Utils.h"
@interface Drawable : NSObject
...
Corridor.h
#import <Foundation/Foundation.h>
#import "Drawable.h"
@interface Corridor : Drawable
...
I have already checked whether there are any ".m" imports instead of ".h"; everything is correct. Any idea what could cause this problem?
EDIT: posting code from "Utils.h"
#import <Foundation/Foundation.h>
...
#pragma mark -
#pragma mark Definitions
typedef float mat4[16];
#pragma mark -
#pragma mark Functions
void matrixIdentity(mat4 m)
{
    m[0] = m[5] = m[10] = m[15] = 1.0;
    m[1] = m[2] = m[3] = m[4] = 0.0;
    m[6] = m[7] = m[8] = m[9] = 0.0;
    m[11] = m[12] = m[13] = m[14] = 0.0;
}
...
I am only referencing the "mat4" definition in both of my classes' methods. Also, "matrixIdentity" is just the first function in this file, so maybe the problem is not in its implementation.
C/C++/Objective-C differ from Java, C#, Ruby, Python, ...
Divide your code into files: a header and an .mm implementation.
Do not use #include (a file may get included many times).
Use #import... (it is included only once).
Utils.h
#ifndef __utils_h__ // <<< avoid multiple #include
#define __utils_h__ // <<< avoid multiple #include
#import <Foundation/Foundation.h>
...
#pragma mark -
#pragma mark Definitions
typedef float mat4[16];
#pragma mark -
#pragma mark Functions
extern void matrixIdentity(mat4 m);
#endif // __utils_h__ <<< avoid multiple #include
Utils.mm
#import "Utils.h"
void matrixIdentity(mat4 m)
{
    m[0] = m[5] = m[10] = m[15] = 1.0;
    m[1] = m[2] = m[3] = m[4] = 0.0;
    m[6] = m[7] = m[8] = m[9] = 0.0;
    m[11] = m[12] = m[13] = m[14] = 0.0;
}
...
Two solutions to your problem:
Declare only void matrixIdentity(mat4 m); in the header file and then implement the actual code in a corresponding .c/.m file.
Make your function in the header file inline (that's the technique Apple uses)
inline void matrixIdentity(mat4 m) { ...
From your description, Utils.h declares and implements a function, whose implementation therefore gets compiled into both Corridor.o and Drawable.o by virtue of Utils.h being included in both (indirectly through Drawable.h in the case of Corridor.h).
Thus both compilation units contain an implementation of _matrixIdentity, and the linker complains.
Move the implementation of matrixIdentity into a new module, Utils.m, to ensure there is only one definition of the symbol.
Use -force_load for one library in Other Linker Flags; that solved the problem for me once.
In my case, I was implementing a function in the header file itself. Adding a static inline keyword before the function fixed the error for me.
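For completeness, a minimal sketch of that static inline variant, keeping the body from the question's Utils.h (each translation unit then gets its own private copy, so the linker never sees a duplicate _matrixIdentity):
typedef float mat4[16];
static inline void matrixIdentity(mat4 m)
{
    m[0] = m[5] = m[10] = m[15] = 1.0;
    m[1] = m[2] = m[3] = m[4] = 0.0;
    m[6] = m[7] = m[8] = m[9] = 0.0;
    m[11] = m[12] = m[13] = m[14] = 0.0;
}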

Creating Threaded callbacks in XS

EDIT: I have created a ticket for this which has data on an alternative to this way of doing things.
I have updated the code in an attempt to use MY_CXT's callback, since gcxt was not being preserved across threads. However, this segfaults at ENTER.
#include "EXTERN.h"
#include "perl.h"
#include "XSUB.h"
#ifndef aTHX_
#define aTHX_
#endif
#ifdef USE_THREADS
#define HAVE_TLS_CONTEXT
#endif
/* For windows */
#ifndef SDL_PERL_DEFINES_H
#define SDL_PERL_DEFINES_H
#ifdef HAVE_TLS_CONTEXT
PerlInterpreter *parent_perl = NULL;
extern PerlInterpreter *parent_perl;
#define GET_TLS_CONTEXT parent_perl = PERL_GET_CONTEXT;
#define ENTER_TLS_CONTEXT \
PerlInterpreter *current_perl = PERL_GET_CONTEXT; \
PERL_SET_CONTEXT(parent_perl); { \
PerlInterpreter *my_perl = parent_perl;
#define LEAVE_TLS_CONTEXT \
} PERL_SET_CONTEXT(current_perl);
#else
#define GET_TLS_CONTEXT /* TLS context not enabled */
#define ENTER_TLS_CONTEXT /* TLS context not enabled */
#define LEAVE_TLS_CONTEXT /* TLS context not enabled */
#endif
#endif
#include <SDL.h>
#define MY_CXT_KEY "SDL::Time::_guts" XS_VERSION
typedef struct {
void* data;
SV* callback;
Uint32 retval;
} my_cxt_t;
static my_cxt_t gcxt;
START_MY_CXT
static Uint32 add_timer_cb ( Uint32 interval, void* param )
{
    ENTER_TLS_CONTEXT
    dMY_CXT;
    dSP;
    int back;
    ENTER; //SEGFAULTS RIGHT HERE!
    SAVETMPS;
    PUSHMARK(SP);
    XPUSHs(sv_2mortal(newSViv(interval)));
    PUTBACK;
    if (0 != (back = call_sv(MY_CXT.callback, G_SCALAR))) {
        SPAGAIN;
        if (back != 1) Perl_croak(aTHX_ "Timer Callback failed!");
        MY_CXT.retval = POPi;
    } else {
        Perl_croak(aTHX_ "Timer Callback failed!");
    }
    FREETMPS;
    LEAVE;
    LEAVE_TLS_CONTEXT
    dMY_CXT;
    return MY_CXT.retval;
}
MODULE = SDL::Time PACKAGE = SDL::Time PREFIX = time_
BOOT:
{
MY_CXT_INIT;
}
SDL_TimerID
time_add_timer ( interval, cmd )
Uint32 interval
void *cmd
PREINIT:
dMY_CXT;
CODE:
MY_CXT.callback=cmd;
gcxt = MY_CXT;
RETVAL = SDL_AddTimer(interval,add_timer_cb,(void *)cmd);
OUTPUT:
RETVAL
void
CLONE(...)
CODE:
MY_CXT_CLONE;
This segfaults as soon as I go into ENTER for the callback.
use SDL;
use SDL::Time;
SDL::init(SDL_INIT_TIMER);
my $time = 0;
SDL::Timer::add_timer(100, sub { $time++; return $_[0]} );
sleep(10);
print "Never Prints";
The output is
$
It should be
$ Never Prints
Quick comments:
Do not use Perl structs (SV, AV, HV, ...) outside of the context of a Perl interpreter object, i.e. do not use them as C-level static data. It will blow up in a threading context. Trust me, I've been there.
Check out the "Safely Storing Static Data in XS" section in the perlxs manpage.
Some of that stuff you're doing looks rather non-public from the point of view of the perlapi. I'm not quite certain, though.
$time needs to be a shared variable - otherwise perl works with separate copies of the variable.
My preferred way of handling this is storing the data in the PL_modglobal hash. It's automatically tied to the current interpreter.
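A hedged sketch of that PL_modglobal approach (the key string and helper names are made up for illustration; the callback SV is stored per-interpreter instead of in C-level static data):
static void store_timer_callback(pTHX_ SV *cb)
{
    /* PL_modglobal is a per-interpreter HV, so each thread sees its own entry */
    (void)hv_store(PL_modglobal, "SDL::Time::callback", 19, newSVsv(cb), 0);
}
static SV *fetch_timer_callback(pTHX)
{
    SV **svp = hv_fetch(PL_modglobal, "SDL::Time::callback", 19, 0);
    return svp ? *svp : NULL;
}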
We have found a solution to this using Perl interpreter threads and threads::shared. Please see:
Time.xs
Also here is an example of a script using this code.
TestTimer.pl