Getting an error the second time I write to the SD card - sd-card

I am trying to communicate with a micro SD card from an AVR. The card initializes properly in SPI mode.
I can write 512 bytes of data to the SD card the first time, and that write operation completes correctly. But the second time I get an error and the data is not written to the card.
How do I fix this error?
Here is the code that I wrote:
unsigned int SD_write(long int Address, unsigned int data[], unsigned int k)
{
    unsigned int j, er = 0;
    CS_ENABLE();
    spi(0x58);                 // CMD24 (WRITE_BLOCK)
    spi(Address >> 24);
    spi(Address >> 16);
    spi(Address >> 8);
    spi(Address);
    spi(0xff);                 // dummy CRC
    for (j = 0; j < 2000 && spi(0xFF) != 0x00; j++) {}  // wait for OK = 0x00 reply
    if (j >= 2000) er = 1;     // get error 1
    spi(0xFF);
    spi(0xFF);
    spi(0xFE);                 // data start token
    for (j = 0; j < k; j++)
    {
        spi(data[j]);
    }
    for (j = k; j < 512; j++)  // pad the rest of the 512-byte block
    {
        spi(0x00);
    }
    spi(0x00);
    spi(0x00);
    spi(0x00);
    for (j = 0; j < 2000 && spi(0xFF) != 0x00; j++) {}  // wait for 0x00 OK reply
    if (j >= 2000) er = 2;     // get error 2
    CS_DISABLE();
    return er;
}
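One common cause of exactly this symptom is deselecting the card while it is still programming the block: after the data block and its response the card holds DO low (busy), so the next CMD24 arrives while the card is still busy and fails. A minimal busy-wait sketch to run just before CS_DISABLE(), assuming the same spi() helper and variables as above (an illustration, not a verified fix for this particular card):

/* Sketch: wait for the card to leave the busy state before dropping CS. */
for (j = 0; j < 65000 && spi(0xFF) == 0x00; j++) {}  // card holds DO low (0x00) while programming
if (j >= 65000) er = 3;                              // hypothetical error code: card still busy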

Related

STM32 Keil - Can not access target while debugging (AT Command UART)

I am trying to communicate with a GSM module via UART. I can get a message from the module as expected. However, when execution reaches the (empty) while loop, the debug session ends with a "can not access target" error. Step by step, I am going to share my code:
Function 1 is AT_Send. (Note: some of the variables are declared globally.)
int AT_Send(UART_HandleTypeDef *huart, ATHandleTypedef *hat, unsigned char *sendBuffer, uint8_t ssize, unsigned char *responseBuffer, uint8_t rsize) {
    if (HAL_UART_Transmit_IT(huart, sendBuffer, ssize) != HAL_OK) {
        return -1;
    }
    while ((HAL_UART_GetState(huart) & HAL_UART_STATE_BUSY_TX) == HAL_UART_STATE_BUSY_TX) {
        continue;
    }
    //;HAL_Delay(1000);
    if (strstr((char*)receiveBuffer, (char*)responseBuffer) != NULL) {
        rxIndex = 0;
        memset(command, 0, sizeof(command));
        return 0;
    }
    rxIndex = 0;
    memset(command, 0, sizeof(command));
    return 1;
}
The second function is AT_Init. It sends AT to get an OK response. From this point on, if I am not wrong, I am enabling the receive interrupt and trying to get 1 byte.
int AT_Init(UART_HandleTypeDef *huart, ATHandleTypedef *hat)
{
    HAL_UART_Receive_IT(huart, &rData, 1);
    tx = AT_Send(huart, hat, "AT\r", sizeof("AT\r\n"), "OK\r\n", sizeof("OK\r\n"));
    return tx;
}
After these two functions, I re-arm the receive-IT function in the callback while there is data on the bus.
void HAL_UART_RxCpltCallback(UART_HandleTypeDef *huart)
{
    if (huart->Instance == USART1) {
        command[rxIndex] = rData;
        rxIndex++;
        if ((rxIndex == 2) && (strstr((char*)command, "\r\n") != NULL)) {
            rxIndex = 0;
        } else if (strstr((char*)command, "\r\n") != NULL) {
            memcpy(receiveBuffer, command, sizeof(command));
            rxIndex = 0;
            memset(command, 0, sizeof(command));
        }
        HAL_UART_Receive_IT(&huart1, &rData, 1);
    }
}
Moreover, I am going to send a few HTTP commands simultaneously if I can get rid of this problem.
Can anyone share his/her knowledge?
Edit: The main function is shown below:
tx = AT_Init(&huart1, &hat);
while (1)
{
    HAL_GPIO_TogglePin(GPIOB, GPIO_PIN_3);
    HAL_Delay(500);
}
Edit 2: I replaced the UART channel with USART2, and the debugger worked. I suppose it is related to the hardware. Still, I am curious about the possible reasons that cause this problem.
The question doesn't mention which µC the program is running on; I only see the "stm32" tag. Similarly, we don't know which debug protocol is used (JTAG or SWD?).
Still, I dare to guess that the toggle command for GPIO port PB3 in the main loop is causing the observations: on many (most? all?) STM32 controllers, PB3 is used as the JTDO pin, which is needed for JTAG debug connections.
Please make sure to configure the debug connection as SWD (without SWO; SWV is not correct either). It may also help to check the wiring of the debug cable: the fast toggling on the PB3/JTDO line may influence the signal levels on neighbouring SWD lines if the wiring is of low quality or a fast SWD clock has been chosen.
My hypothesis can be falsified by removing all accesses to PB3. If the problem remains, I'm wrong.
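For that falsification test, a minimal sketch of the main loop with the heartbeat moved off PB3/JTDO (the choice of PA5 is only an assumption; any free pin away from the debug lines will do, and it must be configured as an output elsewhere):

/* Sketch: same heartbeat, but on a pin that is not shared with the JTAG/SWD interface. */
while (1)
{
    HAL_GPIO_TogglePin(GPIOA, GPIO_PIN_5);
    HAL_Delay(500);
}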

Detecting CAN bus errors under socketCAN linux driver

Our products use a well-known CANopen stack, which uses socketCAN, on an embedded BeagleBone Black based system running Ubuntu 14.04 LTS. For some reason, even though the stack we're using detects when the CAN bus goes into a PASSIVE state or even a BUS OFF state, it never indicates when the CAN bus recovers from errors, leaves the PASSIVE or warning state, and returns to a non-error state.
If I were to query the socketCAN driver directly (via ioctl calls), would I be able to detect when the CAN bus goes in and out of a warning state (fewer than 127 errors), in and out of a PASSIVE state (more than 127 errors), or goes BUS OFF (more than 255 errors)?
I'd like to know whether I'd be wasting my time doing this, or whether there is a better way to detect, accurately and in real time, all conditions of a CAN bus.
I have only a partial solution to that problem.
As you are using socketCAN, the interface is seen as a standard network interface, on which we can query the status.
Based on How to check Ethernet in Linux? (replace "eth0" by "can0"), you can check the link status.
This is not real-time, but can be executed in a periodic thread to check the bus state.
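If libsocketcan is available on the image, the same periodic check can also be done from C. A sketch (the interface name "can0" and the use of libsocketcan are assumptions, not part of the original setup):

#include <stdio.h>
#include <libsocketcan.h>        /* can_get_state(); link with -lsocketcan */
#include <linux/can/netlink.h>   /* CAN_STATE_* constants */

/* Poll the controller state of "can0"; call this from the periodic thread. */
void pollCanState(void) {
    int state;
    if (can_get_state("can0", &state) == 0) {
        switch (state) {
        case CAN_STATE_ERROR_ACTIVE:  printf("can0: error-active (normal)\n"); break;
        case CAN_STATE_ERROR_WARNING: printf("can0: warning\n");               break;
        case CAN_STATE_ERROR_PASSIVE: printf("can0: error-passive\n");         break;
        case CAN_STATE_BUS_OFF:       printf("can0: bus-off\n");               break;
        default:                      printf("can0: state %d\n", state);       break;
        }
    }
}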
So while this is an old question, I just happened to stumble upon it (while searching for something only mildly related).
SocketCAN provides all the means for detecting error frames out of band (OOB).
Assuming your code looks similar to this:
#include <linux/can.h>     /* struct can_frame, CAN_ERR_FLAG */
#include <stdint.h>
#include <string.h>
#include <unistd.h>

int readFromCan(int socketFd, unsigned char* data, unsigned int* rxId) {
    int32_t bytesRead = -1;
    struct can_frame canFrame = {0};
    bytesRead = (int32_t)read(socketFd, &canFrame, sizeof(struct can_frame));
    if (bytesRead >= 0) {
        bytesRead = canFrame.can_dlc;
        if (data) {
            memcpy(data, canFrame.data, bytesRead);
        }
        if (rxId) {
            *rxId = canFrame.can_id; // This will come in handy
        }
    }
    return bytesRead;
}

void doStuffWithMessage() {
    int32_t mySocketFd = fooGetSocketFd();
    unsigned int receiveId = 0;
    unsigned char myData[8] = {0};
    int32_t dataLength = 0;
    if ((dataLength = readFromCan(mySocketFd, myData, &receiveId)) == -1) {
        // Handle error
        return;
    }
    if ((receiveId & CAN_ERR_FLAG) != 0) {
        // Handle error frame (error frames carry the CAN_ERR_FLAG bit in the ID)
        return;
    }
    // Do stuff with your data
}
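Note that by default the kernel does not deliver error frames to a RAW socket at all; they have to be requested with the CAN_RAW_ERR_FILTER socket option. A short sketch, assuming the socket behind fooGetSocketFd() was created with PF_CAN/SOCK_RAW/CAN_RAW:

#include <linux/can.h>
#include <linux/can/error.h>
#include <linux/can/raw.h>
#include <sys/socket.h>

/* Ask the kernel to pass error frames (controller state changes, bus-off, ...)
 * to this RAW socket; without this, readFromCan() never sees them. */
int enableErrorFrames(int socketFd) {
    can_err_mask_t errMask = CAN_ERR_MASK;   /* or a subset, e.g. CAN_ERR_BUSOFF | CAN_ERR_CRTL */
    return setsockopt(socketFd, SOL_CAN_RAW, CAN_RAW_ERR_FILTER,
                      &errMask, sizeof(errMask));
}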

stm32f4xx HAL lib & PCF8574AT - no response to write

I have an STM32F4-Discovery kit and I want to try an I/O expander for an HD44780 LCD. I have a PCF8574AT (link to an I/O example like mine), an 8-bit expander whose I2C address is 0x3F (checked with an I2C scanner), on the hi2c3 peripheral. For C/C++ I use the HAL libraries in the Eclipse environment. OK, take a look at the code.
First I initialize I2C3 with 100 kHz on SCL, as in the datasheet:
static void MX_I2C3_Init(void)
{
    hi2c3.Instance = I2C3;
    hi2c3.Init.ClockSpeed = 100000;
    hi2c3.Init.DutyCycle = I2C_DUTYCYCLE_2;
    hi2c3.Init.OwnAddress1 = 0;
    hi2c3.Init.AddressingMode = I2C_ADDRESSINGMODE_7BIT;
    hi2c3.Init.DualAddressMode = I2C_DUALADDRESS_DISABLE;
    hi2c3.Init.OwnAddress2 = 0;
    hi2c3.Init.GeneralCallMode = I2C_GENERALCALL_DISABLE;
    hi2c3.Init.NoStretchMode = I2C_NOSTRETCH_DISABLE;
    if (HAL_I2C_Init(&hi2c3) != HAL_OK)
    {
        _Error_Handler(__FILE__, __LINE__);
    }
}
Then I try to send data to the I/O expander. But before that I check that the expander is ready to use:
result = HAL_I2C_IsDeviceReady(&hi2c3, 0x3f, 2, 2);
if (result == HAL_BUSY)
{
    HD44780_Puts(6, 1, "busy");
}
else
{
    HD44780_Puts(6, 1, "ready");
    uint8_t data_io = 0xff;
    HAL_I2C_Master_Transmit(&hi2c3, 0x3f, data_io, 1, 100);
}
On the expander itself nothing changes. Any ideas what is wrong, or is the I/O expander maybe broken?
I'm not sure about the HAL driver, I have really never used HAL. But I have worked with the PCF8574 I/O expander. As you said, you have checked it with a scanner, and if you get an address, the line and the device are OK. As I am not an expert on I2C and the HAL libs, I'll show my I2C driver; it relies on the STM32 standard peripheral drivers and it has worked for the PCF8574 and various other I2C devices. Here is an example snippet (blocking mode, not IRQ based):
Check that the bus is not busy:
while (I2C_GetFlagStatus(&I2Cx, I2C_FLAG_BUSY) == SET) {
    if ((timeout--) == 0) return -ETIMEDOUT;
}
Generate the start condition and set up a write transfer (with the address in write mode):
I2C_TransferHandling(&I2Cx, dev_addr, 1, I2C_SoftEnd_Mode, I2C_Generate_Start_Write);
while (I2C_GetFlagStatus(&I2Cx, I2C_ISR_TXIS) == RESET) {
    if ((timeout--) == 0) return -ENODEV;
}
Now you can send the data byte (your I/O states). This function writes directly to the I2C transmit data register:
I2C_SendData(&I2Cx, reg_addr);
while (I2C_GetFlagStatus(&I2Cx, I2C_ISR_TC) == RESET) {
    if ((timeout--) == 0) return -EIO;
}
Generate a read transfer and then read back from the PCF8574; the data should be the same as what was just written (if nothing is toggling the expander pins). Basically you can read one byte or more (it depends on the device); in your case the PCF8574 (8-bit) gives only 1 byte.
I2C_TransferHandling(dev->channel, dev_addr, len, I2C_AutoEnd_Mode, I2C_Generate_Start_Read);
size_t i;
for (i = 0; i < len; i++) {
    timeout = I2C_TIMEOUT;
    while (I2C_GetFlagStatus(dev->channel, I2C_ISR_RXNE) == RESET) {
        if ((timeout--) == 0) return -EIO;
    }
    data[i] = I2C_ReceiveData(dev->channel);
}
You can continue with R/W operations, or simply wait until the device automatically stops the transmission on the line:
while (I2C_GetFlagStatus(&I2Cx, I2C_FLAG_STOPF) == RESET) {
    if ((timeout--) == 0) return -EIO;
}
I2C_ClearFlag(&I2Cx, I2C_ICR_STOPCF);
These steps will write and read the data. Anyway, this chip has some tricky logic; it is simpler than it looks. It actually works just as a simple output port: an external input merely pulls a PCF8574 pin and nothing more, there is no special configuration for an input mode. For input monitoring use the PCF8574 INT pin; the PCF8574 will assert INT on a pin change.
For example:
If you want input pins, write a logic one to them (the quasi-bidirectional port has to be high so an external signal can pull it low), and monitor the INT pin; if a change happens on an input, the INT pin is asserted and you should read the data via I2C.
For output low just write zeros.
And for output high write logic ones.
You are using HAL, so you should read what happens inside the HAL_I2C_Master_Transmit function. Do not forget that the address is 7-bit and that the first byte, together with the address, also carries the R/W condition: bit 0 of the first byte is the R/W bit. So you have to handle it,
for example with defines:
#define PCF8574_WRITE_ADRESS (0x40) /*for writing to chip*/
#define PCF8574_READ_ADRESS ((0x40)|0x01) /*for reading chip*/
Here are some links:
i2c explanations
this may help
Really nice guide!
Hope this will help to understand your problem and solve it.:)
thanks , Bulkin
I found the obvious mistake. The HAL libs do not shift the I2C address (i2c_address << 1); you must put that shift in your code yourself, otherwise you do not get the same result!
HAL_I2C_Master_Transmit(&hi2c3, (0x3f << 1), &data_io, 1, 100);
or
#define i2c_address_write (0x3f << 1)
HAL_I2C_Master_Transmit(&hi2c3, i2c_address_write, &data_io, 1, 100);
to read:
#define i2c_address_read ((0x3f << 1) | 0x01)
HAL_I2C_Master_Receive(&hi2c3, i2c_address_read, &data_io, 1, 100);
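Putting the pieces together, a minimal sketch of a write followed by a read-back with HAL, assuming the hi2c3 handle and the 0x3F scanner address from the question (and that the expander pins are free to toggle):

/* Sketch: drive all outputs high, then read the pin states back.
 * 0x3F is the 7-bit address from the I2C scanner; HAL expects it shifted left by one. */
uint8_t io_state = 0xFF;

if (HAL_I2C_Master_Transmit(&hi2c3, (0x3F << 1), &io_state, 1, 100) == HAL_OK)
{
    /* Pins written high can also be read back as inputs (quasi-bidirectional port). */
    if (HAL_I2C_Master_Receive(&hi2c3, (0x3F << 1), &io_state, 1, 100) == HAL_OK)
    {
        /* io_state now holds the actual levels on P0..P7 */
    }
}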

SIM900 only echoes back the commands - no response

I'm using an ATmega32 and a SIM900 for a project. I keep sending the "AT" command and waiting for the "OK" response, but all I am getting is AT\r\n. I've checked and rechecked the wiring and my baud rate, but I'm still getting nowhere. Whatever I send to the SIM900, I only get an echo of the same transmitted string.
Can anyone help me please? I'd really appreciate it.
I'm posting my code here:
int sim900_init(void)
{
    while (1)
    {
        sim_command("AT");
        _delay_ms(2000);
    }
    return 0;
}
void usart_init(void)
{
    /************ENABLE USART***************/
    UBRRH = (uint8_t)(MYUBRR >> 8);
    UBRRL = (uint8_t)(MYUBRR);                           // set baud rate
    UCSRB = (1 << TXEN) | (1 << RXEN);                   // enable receiver and transmitter
    UCSRC = (1 << UCSZ0) | (1 << UCSZ1) | (1 << URSEL);  // 8-bit data format
    UCSRB |= (1 << RXCIE);                               // enable the USART Receive Complete interrupt (USART_RXC)
    /***************FLUSH ALL PREVIOUS ACTIVITY********************/
    flush_usart();
    /*********ASSIGN POINTERS TO ARRAYS********************/
    command = commandArray;                              // assigning the pointer to the array
    response = responseArray;                            // assigning the pointer to the array
    /*****************ENABLE INTERRUPTS***************************/
    sei();                                               // enabling interrupts for receiving characters
}
void flush_usart(void)
{
    response_full = FALSE;  // we have not yet received the full response
}
void transmit_char(unsigned char value)
{
    while (!(UCSRA & (1 << UDRE)));  // wait until the data register is empty
    UDR = value;
}
void sim_command(char *cmd)
{
    int j = 0;
    strcpy(command, cmd);
    while (*(cmd + j) != '\0')
    {
        transmit_char(*(cmd + j));
        j++;
    }
    transmit_char(0x0D); // \r -- after every AT command we should send \r\n, so we send it here after the string
    transmit_char(0x0A); // \n
}
unsigned char recieve_char(void)
{
    char temp;
    while (!(UCSRA & (1 << RXC)));  // wait until a character has been received
    temp = UDR;
    LCDdata(lcdchar, temp);
    return temp;
}
void recive_sim900_response(void)
{
    static int i = 0;
    char temp;
    temp = recieve_char();
    if (temp != '\n' && temp != '\r')  // we don't want the \r \n sent by the SIM, so we don't store them
        *(response + i) = temp;
    if (i == 8)  // once 9 characters have arrived the string is finished, so we have the full response
    {            // we use this later in the WaitForResponse function: we wait until the full response is received
        i = 0;
        response_full = TRUE;
    }
    else
        i++;
}
You were the only one who had exactly the same problem as I did.
Somehow the library from gsmlib.org worked, but entering AT commands directly in the Arduino serial monitor, using the Arduino as a bridge or just an FTDI adapter, didn't.
The reason is that apparently the SIM900 expects commands to end with a '\r' character. I found this out by trying GTKTerm, which worked.
If you type "AT" and press Enter in GTKTerm, what is actually sent is "AT" followed by two '\r' (0x0D) characters and one 0x0A.
By default the GSM module is in echo-back ON mode. And you need to change your command.
sim_command("AT");
You need Enter (CR/LF) after the command, so modify your code like this and give it a try:
sim_command("AT\r");
And in case you want to turn off the echo of the commands you send, issue this command once you have received the OK response to the AT command:
sim_command("ATE0\r"); //Echo back OFF
sim_command("ATE1\r"); //Echo back ON

Access to internal Xilinx FPGA block RAM

I'm writing a device driver for the Xilinx Virtex-6 X8 PCI Express Gen 2 Evaluation/Development Kit SX315T FPGA. My OS is openSUSE 11.3, 64-bit.
The documentation for this device (Virtex-6 FPGA Integrated Block for PCI Express User Guide UG517 (v5.0), April 19, 2010, page 219) says:
The PIO design is a simple target-only application that interfaces with the Endpoint for
PCIe core’s Transaction (TRN) interface and is provided as a starting point for customers to build their own designs. The following features are included:
• Four transaction-specific 2 KB target regions using the internal Xilinx FPGA block
RAMs, providing a total target space of 8192 bytes
• Supports single DWORD payload Read and Write PCI Express transactions to
32-/64-bit address memory spaces and I/O space with support for completion TLPs
• Utilizes the core’s trn_rbar_hit_n[6:0] signals to differentiate between TLP destination
Base Address Registers
• Provides separate implementations optimized for 32-bit, 64-bit, and 128-bit TRN
interfaces
The device exposes BAR0 and BAR2, each 128 bytes long.
I'm trying to access the internal Xilinx FPGA block RAM, and for that I am mapping BAR0 into kernel virtual address space.
struct pcie_dev {
    struct pci_dev *dev;
    struct cdev chr_dev;
    atomic_t dev_available;
    u32 IOBaseAddress;
    u32 IOLastAddress;
    void __iomem *bar;
    void *virt_addr;
    u32 length;
    unsigned long sirqNum;
    void *private_data;
};

struct pcie_dev cur_pcie_dev;

cur_pcie_dev.IOBaseAddress = pci_resource_start(dev, 0);
cur_pcie_dev.IOLastAddress = pci_resource_end(dev, 0);
cur_pcie_dev.length = pci_resource_len(dev, 0);
cur_pcie_dev.bar = pci_iomap(dev, 0, cur_pcie_dev.length);
IOBaseAddress is 0xfbbfe000
IOLastAddress is 0xfbbfe07f
length=128;
Using an IOCTL I try to write/read data.
case IOCTL_INFO_DEVICE:
{
    u32 *rcslave_mem = (u32 *)pCur_dev->bar;
    u32 result = 0;
    u32 value = 0;
    int i;
    /* write loop */
    for (i = 0; i < 2048; i++) {
        printk(KERN_DEBUG "Writing 0x%08x to 0x%p.\n",
               (u32)value, (void *)rcslave_mem + i);
        iowrite32(value, rcslave_mem + i);
        value++;
    }
    /* read-back loop */
    value = 0;
    for (i = 0; i < 2048; i++) {
        result = ioread32(rcslave_mem + i);
        printk(KERN_DEBUG "Wrote 0x%08x to 0x%p, but read back 0x%08x.\n",
               (u32)value, (void *)rcslave_mem + i, (u32)result);
        value++;
    }
    break;
}
But it turns out that only 32 values can be written and read. As I understand it, the writes go into BAR0 (4 bytes * 32 values = 128 bytes), but not into the internal Xilinx memory. I tried to go another way:
cur_pcie_dev.IOBaseAddress = pci_resource_start(dev, 0);
cur_pcie_dev.IOLastAddress = pci_resource_end(dev, 0);
cur_pcie_dev.length = pci_resource_len(dev, 0);
flags = pci_resource_flags(dev, 0);
if (flags & IORESOURCE_MEM) {
    if (request_mem_region(cur_pcie_dev.IOBaseAddress, cur_pcie_dev.length, DEVICE_NAME) == NULL) {
        return -EBUSY;
    }
    cur_pcie_dev.virt_addr = ioremap_nocache(cur_pcie_dev.IOBaseAddress, cur_pcie_dev.length);
    if (cur_pcie_dev.virt_addr == NULL) {
        printk(KERN_ERR "ERROR: BAR%u remapping FAILED\n", 0);
        return -ENOMEM;
    }
    printk(KERN_INFO " Allocated I/O memory range %#lx-%#lx\n", cur_pcie_dev.IOBaseAddress,
           (cur_pcie_dev.IOBaseAddress + cur_pcie_dev.length - 1));
} else {
    printk(KERN_ERR "ERROR: Invalid PCI region flags\n");
    return -EIO;
}
Then
address = ((unsigned int)pCur_dev->virt_addr+pd.Address);
iowrite32(pd.Value,(unsigned int*) address);
address = ((unsigned int)pCur_dev->virt_addr+pd.Address);
pd.Value = ioread32((unsigned int *)address);
I add the user-specified offset to the virtual base address. But the result of the read/write operations is also not correct. Tell me what I'm doing wrong.
P.S. Sorry for my bad English.
What is the reason you are trying to access the internal block RAM of your board? I think normal device-driver behavior (your device here being a PCI Express endpoint) would suffice if you are using Programmed I/O (PIO) on your FPGA. When you write to your device driver, the data is transferred to the block RAM by the IP core loaded on the FPGA side (and in reverse for reads).
Take a look at Linux Driver in xapp1022 (Memory Endpoint Test) package from Xilinx.
P.S.: I know it's an old question and you may found your answer way sooner :)
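For illustration, a minimal sketch of what such a PIO-style read path could look like in the driver, bounded by the BAR length from the probe code above (the fops wiring and locking are omitted; this is not the xapp1022 driver):

/* Sketch: read one 32-bit word per call through the mapped BAR,
 * never going past the pci_resource_len() size (128 bytes here). */
static ssize_t pio_read(struct file *filp, char __user *buf, size_t count, loff_t *ppos)
{
    u32 val;

    if (*ppos + sizeof(val) > cur_pcie_dev.length)
        return 0;                                 /* stay inside the BAR window */

    val = ioread32(cur_pcie_dev.bar + *ppos);     /* PIO access through the pci_iomap() mapping */
    if (copy_to_user(buf, &val, sizeof(val)))
        return -EFAULT;

    *ppos += sizeof(val);
    return sizeof(val);
}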