I am working with the CAN peripheral of an STM32F303K8. I can transmit data but I can't receive data; the function HAL_CAN_Receive() always returns HAL_TIMEOUT. I am running at a 1 Mbit/s bit rate with a 32 MHz clock.
Can anyone tell me how to use HAL_CAN_Receive() to receive data from the CAN bus?
In order to receive CAN messages on the STM32 you have to enable at least one filter.
Each incoming message is compared against all active filters, and it must match at least one of them to be placed in a receive FIFO.
You can configure a filter that matches all messages, for example:
CAN_FilterConfTypeDef sFilterConfig;   // legacy HAL CAN driver (the one that provides HAL_CAN_Receive)

sFilterConfig.FilterNumber = 0;
sFilterConfig.FilterMode = CAN_FILTERMODE_IDMASK;
sFilterConfig.FilterScale = CAN_FILTERSCALE_32BIT;
sFilterConfig.FilterIdHigh = 0x0000;
sFilterConfig.FilterIdLow = 0x0000;
sFilterConfig.FilterMaskIdHigh = 0x0000;   // a mask of 0 accepts every ID
sFilterConfig.FilterMaskIdLow = 0x0000;
sFilterConfig.FilterFIFOAssignment = 0;    // store matching messages in FIFO 0
sFilterConfig.FilterActivation = ENABLE;
sFilterConfig.BankNumber = 14;

if (HAL_CAN_ConfigFilter(&hcan, &sFilterConfig) != HAL_OK)   // apply the filter (use your own handle name)
{
    Error_Handler();
}
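With a filter enabled, a blocking receive then looks roughly like this (a sketch for the legacy HAL CAN driver that provides HAL_CAN_Receive(); the handle and message variable names are placeholders and field names may differ slightly between HAL versions):

CanRxMsgTypeDef rxMessage;
hcan.pRxMsg = &rxMessage;   // the legacy driver stores the received frame here

if (HAL_CAN_Receive(&hcan, CAN_FIFO0, 1000) == HAL_OK)   // wait up to 1000 ms
{
    // rxMessage.StdId, rxMessage.DLC and rxMessage.Data[] now hold the frame
}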
I have made a custom board that features an STM32F469NI MCU coupled with an AS4C4M32S SDRAM.
The SDRAM timing and register configuration are as follows:
hsdram1.Instance = FMC_SDRAM_DEVICE;
/* hsdram1.Init */
hsdram1.Init.SDBank = FMC_SDRAM_BANK1;
hsdram1.Init.ColumnBitsNumber = FMC_SDRAM_COLUMN_BITS_NUM_8;
hsdram1.Init.RowBitsNumber = FMC_SDRAM_ROW_BITS_NUM_12;
hsdram1.Init.MemoryDataWidth = FMC_SDRAM_MEM_BUS_WIDTH_32;
hsdram1.Init.InternalBankNumber = FMC_SDRAM_INTERN_BANKS_NUM_4;
hsdram1.Init.CASLatency = FMC_SDRAM_CAS_LATENCY_3;
hsdram1.Init.WriteProtection = FMC_SDRAM_WRITE_PROTECTION_DISABLE;
hsdram1.Init.SDClockPeriod = FMC_SDRAM_CLOCK_PERIOD_2;
hsdram1.Init.ReadBurst = FMC_SDRAM_RBURST_DISABLE;
hsdram1.Init.ReadPipeDelay = FMC_SDRAM_RPIPE_DELAY_0;
/* SdramTiming */
SdramTiming.LoadToActiveDelay = 2;
SdramTiming.ExitSelfRefreshDelay = 7;
SdramTiming.SelfRefreshTime = 4;
SdramTiming.RowCycleDelay = 6;
SdramTiming.WriteRecoveryTime = 2;
SdramTiming.RPDelay = 2;
SdramTiming.RCDDelay = 2;
if (HAL_SDRAM_Init(&hsdram1, &SdramTiming) != HAL_OK)
{
Error_Handler( );
}
/* USER CODE BEGIN FMC_Init 2 */
FMC_SDRAM_CommandTypeDef cmd = { 0 };
cmd.CommandMode = FMC_SDRAM_CMD_CLK_ENABLE;
cmd.CommandTarget = FMC_SDRAM_CMD_TARGET_BANK1;
cmd.AutoRefreshNumber = 1;
cmd.ModeRegisterDefinition = 0;
HAL_SDRAM_SendCommand(&hsdram1, &cmd, 0xFFFF);
HAL_Delay(100);
cmd.CommandMode = FMC_SDRAM_CMD_PALL;
cmd.CommandTarget = FMC_SDRAM_CMD_TARGET_BANK1;
cmd.AutoRefreshNumber = 1;
cmd.ModeRegisterDefinition = 0;
HAL_SDRAM_SendCommand(&hsdram1, &cmd, 0xFFFF);
cmd.CommandMode = FMC_SDRAM_CMD_AUTOREFRESH_MODE;
cmd.CommandTarget = FMC_SDRAM_CMD_TARGET_BANK1;
cmd.AutoRefreshNumber = 8;
cmd.ModeRegisterDefinition = 0;
HAL_SDRAM_SendCommand(&hsdram1, &cmd, 0xFFFF);
cmd.CommandMode = FMC_SDRAM_CMD_LOAD_MODE;
cmd.CommandTarget = FMC_SDRAM_CMD_TARGET_BANK1;
cmd.AutoRefreshNumber = 1;
cmd.ModeRegisterDefinition = (uint32_t) 0x230; //also tried 0x231
HAL_SDRAM_SendCommand(&hsdram1, &cmd, 0xFFFF);
//Set up the SDRAM refresh timer register with the delay between refresh cycles; this value is SDRAM-specific.
HAL_SDRAM_ProgramRefreshRate(&hsdram1, 1404); //Tried 1386
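For reference, the mode-register value 0x230 decomposes as follows under the standard JEDEC SDRAM mode-register layout (a sketch only, with macro names made up for illustration; confirm the bit positions against the AS4C4M32S datasheet):

/* Standard JEDEC mode-register fields; verify against the AS4C4M32S datasheet. */
#define SDRAM_MODEREG_BURST_LENGTH_1          ((uint32_t)0x0000)  /* bits [2:0] = 000 */
#define SDRAM_MODEREG_BURST_TYPE_SEQUENTIAL   ((uint32_t)0x0000)  /* bit 3      = 0   */
#define SDRAM_MODEREG_CAS_LATENCY_3           ((uint32_t)0x0030)  /* bits [6:4] = 011 */
#define SDRAM_MODEREG_OPERATING_MODE_STANDARD ((uint32_t)0x0000)  /* bits [8:7] = 00  */
#define SDRAM_MODEREG_WRITEBURST_MODE_SINGLE  ((uint32_t)0x0200)  /* bit 9      = 1   */

uint32_t modeRegister = SDRAM_MODEREG_BURST_LENGTH_1 |
                        SDRAM_MODEREG_BURST_TYPE_SEQUENTIAL |
                        SDRAM_MODEREG_CAS_LATENCY_3 |
                        SDRAM_MODEREG_OPERATING_MODE_STANDARD |
                        SDRAM_MODEREG_WRITEBURST_MODE_SINGLE;    /* = 0x230 */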
This same code works when running on an STM32F469-I DISCOVERY board but fails when it is run on my custom carrier board.
On my board, whenever I write to any SDRAM address, only the first byte is actually read/written. For example, when I write 0x00 to address 0xC0000000, the data at that address (monitored in the debugger) reads 0x48129100, i.e. the first 3 bytes are random garbage and only the last byte is written correctly (0x00); if I then write 0xAA to the same address, the value read back is 0x481291AA.
This is true for every address that I read/write to/from the external SDRAM.
Only the last byte is written/read.
The custom board schematic and PCB layout are identical to the STM32F469-I DISCO board.
The only difference is the RAM chip, which is Alliance Memory's AS4C4M32S instead of the Micron part.
What could be causing such a problem?
Reading the SDRAM always uses the full bus width, but writing requires the four byte-lane signals BL0, BL1, BL2 and BL3 (the FMC's NBL0-NBL3 pins, which drive the SDRAM's DQM inputs) to select which bytes are written.
It sounds like only BL0 is currently working. Perhaps BL1-BL3 are not physically connected (hardware), or perhaps you just forgot to configure them (software).
(If it turns out to be a hardware problem, then someone with the appropriate permissions will want to move this question and answer to the Electronics Stack Exchange.)
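To rule out the software side, check that all four byte-lane pins are configured as FMC alternate-function outputs. A minimal sketch, assuming the same pin mapping as the F469 Discovery design (PE0/PE1 for NBL0/NBL1, PI4/PI5 for NBL2/NBL3 - verify against your own schematic and CubeMX pinout):

GPIO_InitTypeDef gpio = {0};

__HAL_RCC_GPIOE_CLK_ENABLE();
__HAL_RCC_GPIOI_CLK_ENABLE();

gpio.Mode      = GPIO_MODE_AF_PP;
gpio.Pull      = GPIO_NOPULL;
gpio.Speed     = GPIO_SPEED_FREQ_VERY_HIGH;
gpio.Alternate = GPIO_AF12_FMC;

gpio.Pin = GPIO_PIN_0 | GPIO_PIN_1;   /* PE0 = FMC_NBL0, PE1 = FMC_NBL1 (assumed mapping) */
HAL_GPIO_Init(GPIOE, &gpio);

gpio.Pin = GPIO_PIN_4 | GPIO_PIN_5;   /* PI4 = FMC_NBL2, PI5 = FMC_NBL3 (assumed mapping) */
HAL_GPIO_Init(GPIOI, &gpio);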
I need to set a CAN filter for ID "$7E8", but I have to provide an ID high, an ID low and a mask:
FilterConfig.FilterIdHigh = 0x0000;
FilterConfig.FilterIdLow = 0x0000;
FilterConfig.FilterMaskIdHigh = 0x0000;
FilterConfig.FilterMaskIdLow = 0x0000;
I found this article: https://schulz-m.github.io/2017/03/23/stm32-can-id-filter/
and tried the code from it:
FilterConfig.FilterIdHigh = 0x00000008;
FilterConfig.FilterIdLow = 0x00000008;
FilterConfig.FilterMaskIdHigh = 0x1FFFFFFF;
FilterConfig.FilterMaskIdLow = 0x1FFFFFFF;
But it doesn't recognise a message sent with ID $7E8.
The following works for me. id0 and id1 are two message IDs you want to allow (in ID list mode, you always set two per filter bank). If you want to set just one, I assume you can repeat the first one.
CAN_FilterTypeDef filter;
filter.FilterActivation = ENABLE;
filter.FilterFIFOAssignment = CAN_FILTER_FIFO0;
// Allow two IDs per entry
filter.FilterScale = CAN_FILTERSCALE_16BIT;
filter.FilterMode = CAN_FILTERMODE_IDLIST;
// Mask seems to have no effect when using CAN_FILTERMODE_IDLIST
filter.FilterMaskIdHigh = 0x0000;
filter.FilterMaskIdLow = 0x0000;
filter.FilterIdLow = id0;
filter.FilterIdHigh = id1;
filter.FilterBank = 0;
if (HAL_CAN_ConfigFilter(hcan, &filter)) {
Error_Handler();
}
Two corrections:
When using CAN_FILTERSCALE_16BIT in combination with CAN_FILTERMODE_IDLIST, the mask-high and mask-low fields also hold 11-bit IDs, so one filter bank can match four 11-bit IDs in ID list mode.
The ID needs to be shifted to the correct position in the corresponding register. The shift differs between 16-bit and 32-bit scale; for 32-bit scale refer to the microcontroller's reference manual. For 16-bit scale, here is an example:
CAN_FilterTypeDef tsCanFilter = { 0 };
tsCanFilter.FilterActivation = ENABLE;
tsCanFilter.FilterFIFOAssignment = CAN_FILTER_FIFO0;
tsCanFilter.FilterScale = CAN_FILTERSCALE_16BIT;
tsCanFilter.FilterMode = CAN_FILTERMODE_IDLIST;
tsCanFilter.FilterIdLow = u32Id1 << 5u;
tsCanFilter.FilterMaskIdLow = u32Id2 << 5u;
tsCanFilter.FilterIdHigh = u32Id3 << 5u;
tsCanFilter.FilterMaskIdHigh = u32Id4 << 5u;
tsCanFilter.FilterBank = 0u;
if ( HAL_CAN_ConfigFilter ( &tsCanHandle, &tsCanFilter ) != HAL_OK )
{
// error handling
}
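For the original $7E8 request this means placing the shifted standard ID into one of the slots, for example (per the earlier answer, the remaining slots can presumably just repeat the same ID if you only need one):

tsCanFilter.FilterIdLow = 0x7E8 << 5u;   /* standard ID 0x7E8 -> 0xFD00 in the filter register */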
I am currently working on a GUI-based application that records serial data, transmitted at 50 Hz, from an embedded system. The problem is that MATLAB appears incapable of recording this data reliably and drops samples. The format of the transmitted data is as follows:
4918 22279 29342 3161 0 24953 29814 5319 1 0
4919 22279 29348 2997 1 24953 29838 5037 0 0
4920 22279 29357 2682 0 24953 29853 4544 0 0
Each row is terminated by a CR/LF. I have written a test function to try and figure out what is going on - the code for this is below.
function stack_stream(time)
column_count = 10;
serial_object = create_serial_object;
fopen(serial_object);
date_vector = clock;
file_name = datestr(date_vector,30);
file_name = [file_name '.txt'];
file_identifier = fopen(file_name,'w');
tic;
while (toc < time)
if (serial_object.UserData.is_new_data == true)
raw_chunk = serial_object.UserData.data;
serial_object.UserData.is_new_data = false;
data_chunk = sscanf(raw_chunk,'%d');
data_chunk_length = length(data_chunk);
if (mod(data_chunk_length,column_count) == 0)
data_column_count = data_chunk_length/column_count;
data = reshape(data_chunk,column_count,data_column_count);
fprintf(file_identifier,...
'%6d %6d %6d %6d %6d %6d %6d %6d %6d %6d\r\n',data);
end
end
end
fclose(file_identifier);
fclose(serial_object);
delete(serial_object);
end
function serial_object_callback(object,event)
new_data = fscanf(object,'%c',object.BytesAvailable);
if (object.UserData.is_new_data == false)
object.UserData.data = new_data;
object.UserData.is_new_data = true;
else
object.UserData.data = [object.UserData.data new_data];
end
end
function serial_object = create_serial_object()
serial_object = serial('COM2');
serial_object.BaudRate = 57600;
serial_object.DataBits = 8;
serial_object.FlowControl = 'none';
serial_object.StopBits = 1;
serial_object.Terminator = 'CR/LF';
serial_object.InputBufferSize = 2^18;
serial_object.BytesAvailableFcnMode = 'terminator';
serial_object.BytesAvailableFcn = {@serial_object_callback};
serial_object.UserData.data = [];
serial_object.UserData.is_new_data = false;
serial_object.UserData.response = [];
serial_object.UserData.is_new_response = false;
end
In essence, this function writes time seconds' worth of data to file, using a callback to transfer data between the serial object buffer and a user buffer. Using stack_stream I find that I miss around 10 rows of data for every 3000 received, i.e. about 10 a minute.
Anecdotal evidence would lead me to believe that MATLAB is capable of handling serial data in the kHz range, so I am at somewhat of a loss to determine why I cannot get it to work properly at a paltry 50Hz. Is there anything I can do to improve performance without having to resort to the real-time target for windows or the like?
I have built a tricopter from scratch based on a .NET Micro Framework board from TinyCLR.com. I used the FEZ Mini which runs at 72 MHz. Read more about my project at: http://bit.ly/TriRot.
So after a pre-flight check where I initialise and test each component, like calibrating the IMU and spinning each motor, checking that I get receiver data, etc., it enters a permanent loop which then calls the flight controller method on each loop.
I'm trying to tune my PID controller now using the Ziegler-Nichols method, but I am always getting a progressively larger overshoot. I was eventually able to get a [mostly] stable oscillation using proportional control only (setting Ki and Kd = 0); timing the oscillation period with a stopwatch averaged out to 3.198 seconds.
I came across the answer (by Rex Logan) on a similar question by chris12892.
I was initially using the "Duration" variable in milliseconds which made my copter highly aggressive, obviously because I was multiplying the running integrator error by thousands on each loop. I then divided it by another thousand to bring it to seconds, but I'm still battling...
What I don't understand from Rex's answer is:
Why does he ignore the time variable in the integral and differential parts of the equations? Is that right or is it a typo?
What he means by the remark
In a normal sampled system the delta term would be one...
One what? Should this be one second under normal circumstances? What
if this value fluctuates?
My flight controller method is below:
private static Single[] FlightController(Single[] imuData, Single[] ReceiverData)
{
Int64 TicksPerMillisecond = TimeSpan.TicksPerMillisecond;
Int64 CurrentTicks = DateTime.Now.Ticks;
Int64 TickCount = CurrentTicks - PreviousTicks;
PreviousTicks = CurrentTicks;
Single Duration = (TickCount / TicksPerMillisecond) / 1000F;
const Single Kp = 0.117F; //Proportional Gain (Instantaneous offset)
const Single Ki = 0.073170732F; //Integral Gain (Permanent offset)
const Single Kd = 0.001070122F; //Differential Gain (Change in offset)
Single RollE = 0;
Single RollPout = 0;
Single RollIout = 0;
Single RollDout = 0;
Single RollOut = 0;
Single PitchE = 0;
Single PitchPout = 0;
Single PitchIout = 0;
Single PitchDout = 0;
Single PitchOut = 0;
Single rxThrottle = ReceiverData[(int)Channel.Throttle];
Single rxRoll = ReceiverData[(int)Channel.Roll];
Single rxPitch = ReceiverData[(int)Channel.Pitch];
Single rxYaw = ReceiverData[(int)Channel.Yaw];
Single[] TargetMotorSpeed = new Single[] { rxThrottle, rxThrottle, rxThrottle };
Single ServoAngle = 0;
if (!FirstRun)
{
Single imuRoll = imuData[1] + 7;
Single imuPitch = imuData[0];
//Roll ----- Start
RollE = rxRoll - imuRoll;
//Proportional
RollPout = Kp * RollE;
//Integral
Single InstanceRollIntegrator = RollE * Duration;
RollIntegrator += InstanceRollIntegrator;
RollIout = RollIntegrator * Ki;
//Differential
RollDout = ((RollE - PreviousRollE) / Duration) * Kd;
//Sum
RollOut = RollPout + RollIout + RollDout;
//Roll ----- End
//Pitch ---- Start
PitchE = rxPitch - imuPitch;
//Proportional
PitchPout = Kp * PitchE;
//Integral
Single InstancePitchIntegrator = PitchE * Duration;
PitchIntegrator += InstancePitchIntegrator;
PitchIout = PitchIntegrator * Ki;
//Differential
PitchDout = ((PitchE - PreviousPitchE) / Duration) * Kd;
//Sum
PitchOut = PitchPout + PitchIout + PitchDout;
//Pitch ---- End
TargetMotorSpeed[(int)Motors.Motor.Left] += RollOut;
TargetMotorSpeed[(int)Motors.Motor.Right] -= RollOut;
TargetMotorSpeed[(int)Motors.Motor.Left] += PitchOut;// / 2;
TargetMotorSpeed[(int)Motors.Motor.Right] += PitchOut;// / 2;
TargetMotorSpeed[(int)Motors.Motor.Rear] -= PitchOut;
ServoAngle = rxYaw + 15;
PreviousRollE = RollE;    //store the previous *error* values, not the raw IMU readings
PreviousPitchE = PitchE;  //(this was one of the two bugs mentioned in the edit below)
}
FirstRun = false;
return new Single[] {
(Single)TargetMotorSpeed[(int)TriRot.LeftMotor],
(Single)TargetMotorSpeed[(int)TriRot.RightMotor],
(Single)TargetMotorSpeed[(int)TriRot.RearMotor],
(Single)ServoAngle
};
}
Edit: I found that I had two bugs in my code above (fixed now). I was integrating and differentiating with the last IMU values as opposed to the last error values. That got rid of the runaway situation completely. The only problem now is that it seems to be a bit slow. When I perturb the system, it responds very quickly and stops the disturbance from continuing, but it takes a long time to get back to the setpoint (0), about 10 seconds or more. Is this now just down to tuning the PID? I'll give the suggestions below a go, and let you know if any of them make a difference.
One question I have is:
since this is a .NET board, I don't want to bank on any kind of accurate timing, so instead of trying to work out at what frequency I am executing that method, surely it is better to measure the actual elapsed time and factor that into the equations, or am I misunderstanding something?
I have a problem with the function AudioConverterConvertBuffer(). Basically I want to convert from this format
_streamFormat.mFormatFlags = kLinearPCMFormatFlagIsSignedInteger | kLinearPCMFormatFlagIsPacked;
_streamFormat.mBitsPerChannel = 16;
_streamFormat.mChannelsPerFrame = 2;
_streamFormat.mBytesPerPacket = 4;
_streamFormat.mBytesPerFrame = 4;
_streamFormat.mFramesPerPacket = 1;
_streamFormat.mSampleRate = 44100;
_streamFormat.mReserved = 0;
to this format
_streamFormatOutput.mFormatFlags = kLinearPCMFormatFlagIsSignedInteger | kLinearPCMFormatFlagIsPacked; //| kAudioFormatFlagIsNonInterleaved;
_streamFormatOutput.mBitsPerChannel = 16;
_streamFormatOutput.mChannelsPerFrame = 1;
_streamFormatOutput.mBytesPerPacket = 2;
_streamFormatOutput.mBytesPerFrame = 2;
_streamFormatOutput.mFramesPerPacket = 1;
_streamFormatOutput.mSampleRate = 44100;
_streamFormatOutput.mReserved = 0;
What I want to do is extract one audio channel (left or right) from an LPCM buffer described by the input format and turn it into mono in the output format. Some of the conversion logic is as follows.
This sets the channel map for the PCM output file:
SInt32 channelMap[1] = {0};
status = AudioConverterSetProperty(converter, kAudioConverterChannelMap, sizeof(channelMap), channelMap);
and this converts the buffer inside a while loop:
AudioBufferList audioBufferList;
CMBlockBufferRef blockBuffer;
CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer(sampBuffer, NULL, &audioBufferList, sizeof(audioBufferList), NULL, NULL, 0, &blockBuffer);
for (int y=0; y<audioBufferList.mNumberBuffers; y++) {
AudioBuffer audioBuffer = audioBufferList.mBuffers[y];
//frames = audioBuffer.mData;
NSLog(#"the number of channel for buffer number %d is %d",y,audioBuffer.mNumberChannels);
NSLog(#"The buffer size is %d",audioBuffer.mDataByteSize);
numBytesIO = audioBuffer.mDataByteSize;
convertedBuf = malloc(sizeof(char)*numBytesIO);
status = AudioConverterConvertBuffer(converter, audioBuffer.mDataByteSize, audioBuffer.mData, &numBytesIO, convertedBuf);
char errchar[10];
NSLog(#"status audio converter convert %d",status);
if (status != 0) {
NSLog(#"Fail conversion");
assert(0);
}
NSLog(#"Bytes converted %d",numBytesIO);
status = AudioFileWriteBytes(mRecordFile, YES, countByteBuf, &numBytesIO, convertedBuf);
NSLog(#"status for writebyte %d, bytes written %d",status,numBytesIO);
free(convertedBuf);
if (numBytesIO != audioBuffer.mDataByteSize) {
NSLog(#"Something wrong in writing");
assert(0);
}
countByteBuf = countByteBuf + numBytesIO;
But the 'insz' problem (kAudioConverterErr_InvalidInputSize) is still there, so it can't convert. I would appreciate any input.
Thanks in advance
First, you cannot use AudioConverterConvertBuffer() to convert anything where the input and output byte sizes differ. You need to use AudioConverterFillComplexBuffer() instead. This includes performing any kind of sample-rate conversion, or adding/removing channels.
See Apple's documentation on AudioConverterConvertBuffer(). This was also discussed on Apple's CoreAudio mailing lists, but I'm afraid I cannot find a reference right now.
Second, even if this could be done (which it can't), you are passing the same number of bytes allocated for output as you had for input, despite actually requiring half that number of bytes (since the channel count drops from 2 to 1).
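A rough sketch of what the AudioConverterFillComplexBuffer() call could look like for this stereo-to-mono case (error handling and partial-buffer bookkeeping omitted; the callback name, context struct and the assumption that the converter consumes the whole chunk in one call are illustrative only):

typedef struct {
    AudioBuffer buffer;    // the interleaved 16-bit stereo input chunk
    Boolean     consumed;  // set once the converter has taken the data
} InputContext;

static OSStatus ConverterInputProc(AudioConverterRef inConverter,
                                   UInt32 *ioNumberDataPackets,
                                   AudioBufferList *ioData,
                                   AudioStreamPacketDescription **outPacketDesc,
                                   void *inUserData)
{
    InputContext *ctx = (InputContext *)inUserData;
    if (ctx->consumed) {                 // nothing left to supply
        *ioNumberDataPackets = 0;
        return noErr;
    }
    ioData->mNumberBuffers = 1;
    ioData->mBuffers[0] = ctx->buffer;                     // hand the raw bytes to the converter
    *ioNumberDataPackets = ctx->buffer.mDataByteSize / 4;  // 4 bytes per interleaved stereo frame
    ctx->consumed = true;
    return noErr;
}

// ... inside the loop, instead of AudioConverterConvertBuffer():
InputContext ctx = { audioBuffer, false };
UInt32 outPackets = audioBuffer.mDataByteSize / 4;   // same frame count, now mono
AudioBufferList outList;
outList.mNumberBuffers = 1;
outList.mBuffers[0].mNumberChannels = 1;
outList.mBuffers[0].mDataByteSize = outPackets * 2;  // 2 bytes per mono frame
outList.mBuffers[0].mData = convertedBuf;            // must be at least outPackets * 2 bytes

status = AudioConverterFillComplexBuffer(converter, ConverterInputProc, &ctx,
                                         &outPackets, &outList, NULL);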
I'm actually working on using AudioConverterConvertBuffer() right now, and the test files are mono while I need to play stereo. I'm currently stuck with the converter performing conversion of only the first chunk of the data. If I manage to get this to work, I'll try to remember to post the code. If I don't post it, please poke me in the comments.