USART format data type - type-conversion

I would like to ask how I can send data via USART as an integer, i.e. a variable that stores a number. I am able to send a char variable, but the terminal shows me the ASCII representation of that number, and I need to see the number itself.
I edited the code as shown below, but it gives me the error: "conflicting types for 'USART_Transmit'"
#include <avr/io.h>
#include <util/delay.h>

#define FOSC 8000000 // Clock Speed
#define BAUD 9600
#define MYUBRR FOSC/16/BAUD-1

void USART_Init( unsigned int ubrr );
void USART_Transmit( unsigned char data );
unsigned char USART_Receive( void );

int main( void )
{
    unsigned char str[5] = "serus";
    unsigned char strLenght = 5;
    unsigned int i = 47;

    USART_Init ( MYUBRR );
    //USART_Transmit('S' );

    while(1)
    {
        /*USART_Transmit( str[i++] );
        if(i >= strLenght)
            i = 0;*/
        USART_Transmit(i);
        _delay_ms(250);
    }
    return(0);
}

void USART_Init( unsigned int ubrr )
{
    /* Set baud rate */
    UBRR0H = (unsigned char)(ubrr>>8);
    UBRR0L = (unsigned char)ubrr;
    /* Enable receiver and transmitter */
    UCSR0B = (1<<RXEN)|(1<<TXEN);
    /* Set frame format: 8data, 2stop bit */
    UCSR0C = (1<<USBS)|(3<<UCSZ0);
}

void USART_Transmit( unsigned int data )
{
    /* Wait for empty transmit buffer */
    while ( !( UCSR0A & (1<<UDRE)) )
        ;
    /* Put data into buffer, sends the data */
    UDR0 = data;
}

unsigned char USART_Receive( void )
{
    /* Wait for data to be received */
    while ( !(UCSR0A & (1<<RXC)) )
        ;
    /* Get and return received data from buffer */
    return UDR0;
}
Do you have any idea what is wrong?
PS: I hope you understand what I'm trying to explain.

I like to use sprintf to format numbers for serial.
At the top of your file, put:
#include <stdio.h>
Then write some code in a function like this:
char buffer[16];
sprintf(buffer, "%d\n", number);
char * p = buffer;
while (*p) { USART_Transmit(*p++); }
The first two lines construct a null-terminated string in the buffer. The last two lines are a simple loop to send all the characters in the buffer. I put a newline in the format string to make it easier to see where one number ends and the next begins.
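Putting that together for the question's code, a minimal sketch (it assumes USART_Transmit() sends one byte, as in the original program; the helper name USART_SendNumber is just for illustration):

#include <stdio.h>   /* for sprintf */

/* Format a number as decimal text and send it character by character. */
void USART_SendNumber(int number)
{
    char buffer[16];                  /* plenty for a 16-bit int plus "\n" */
    char *p = buffer;
    sprintf(buffer, "%d\n", number);  /* build a null-terminated string */
    while (*p)
        USART_Transmit(*p++);
}

In main(), the loop would then call USART_SendNumber(i) instead of USART_Transmit(i), and the terminal shows 47 rather than the '/' character (ASCII code 47).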

Technically, a UART serial connection is just a stream of bits divided into symbols of a certain length. It's perfectly possible to send the data in raw form, but this comes with a number of issues that must be addressed:
How to identify the start and end of a transmission unambiguously?
How to deal with endianness on either side of the connection?
How to serialize and deserialize the data in a robust way?
How to deal with transmission errors?
At the end of the day it turns out that you can never resolve all the ambiguities, and binary data must somehow be escaped or otherwise encoded to prevent misinterpretation.
As far as delimiting transmissions is concerned, that has been addressed by the creators of the ASCII standard through its set of nonprintable control characters. Of particular interest to you are these two:
STX / 0x02 / Start of Text
ETX / 0x03 / End of Text
There are also other control characters which form a pretty complete set for making up data structures; you don't need JSON or XML for this. ASCII itself, however, does not support the transmission of arbitrary binary data; the standard staple for that task has long been, and still is, base64 encoding. Use that for transmitting arbitrary binary data.
Numbers you probably should not transmit in binary form at all; just push digits around. If you use octal or hexadecimal digits, parsing back into integers is super simple (it boils down to a bit of masking and shifting).
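For example, a sketch of that idea on the transmit side (reusing the question's USART_Transmit(); framing with STX/ETX and the choice of hex digits are just one possible convention):

#define STX 0x02   /* Start of Text */
#define ETX 0x03   /* End of Text   */

/* Send a 16-bit value as four hexadecimal ASCII digits, framed by STX/ETX. */
void send_u16_hex(unsigned int value)
{
    static const char hexdigit[] = "0123456789ABCDEF";
    int shift;

    USART_Transmit(STX);
    for (shift = 12; shift >= 0; shift -= 4)
        USART_Transmit(hexdigit[(value >> shift) & 0x0F]);
    USART_Transmit(ETX);
}

On the receiving end, the parse is exactly the masking and shifting mentioned above: value = (value << 4) | hex_value_of(digit) for each digit arriving between STX and ETX.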

Related

Create Unicode from a hex number in C++

My objective is to take a character which represents the UK pound symbol and convert it to its Unicode equivalent in a string.
Here's my code and output so far from my test program:
#include <iostream>
#include <stdio.h>

int main()
{
    char x = 163;
    unsigned char ux = x;
    const char *str = "\u00A3";
    printf("x: %d\n", x);
    printf("ux: %d %x\n", ux, ux);
    printf("str: %s\n", str);
    return 0;
}
Output
$ ./pound
x: -93
ux: 163 a3
str: £
My goal is to take the unsigned char 0xA3 and put it into a string representing the unicode UK pound representation: "\u00A3"
What exactly is your question? Anyway, you say you're writing C++, but you're using char*, printf and stdio.h, so you're really writing C, and plain C has no built-in notion of Unicode. Remember that a char in C is not a "character", it's just a byte, and a char* is not an array of characters, it's an array of bytes. When you printf the "\u00A3" string in your sample program, you are not printing a Unicode character as such: the compiler has translated the \u00A3 escape into the bytes of that character in the execution character set (0xC2 0xA3 under UTF-8), and your terminal is helping you out by decoding those bytes back into £. It prints the £ correctly only because the compiler's choice of encoding happens to match what your terminal expects. You can see this for yourself: print str[0] and str[1] as numbers in your sample program and you'll see two raw bytes rather than one character.
If you want to handle Unicode correctly in C you'll need to use a library. There are many to choose from and I haven't used any of them enough to recommend one. Or you'll need to use C++11 or newer and use std::wstring and friends. But what you are doing is not real Unicode handling and will not work as you expect in the long run.
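If the goal is just to turn a byte in the Latin-1 range (like 0xA3) into the byte sequence a UTF-8 terminal will display as £, one workable sketch, assuming the output encoding really is UTF-8, is to do the encoding by hand (latin1_to_utf8 is an illustrative name):

#include <stdio.h>

/* Encode a single Latin-1 byte (0x00-0xFF) as UTF-8 into out[].
   Returns the number of bytes written (1 or 2); out must hold 3 bytes. */
static int latin1_to_utf8(unsigned char c, char *out)
{
    if (c < 0x80) {                       /* ASCII maps to itself */
        out[0] = (char)c;
        out[1] = '\0';
        return 1;
    }
    out[0] = (char)(0xC0 | (c >> 6));     /* leading byte */
    out[1] = (char)(0x80 | (c & 0x3F));   /* continuation byte */
    out[2] = '\0';
    return 2;
}

int main(void)
{
    char pound[3];
    latin1_to_utf8(0xA3, pound);
    printf("%s\n", pound);   /* prints the pound sign on a UTF-8 terminal */
    return 0;
}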

How does WideCharToMultiByte deal with codepages?

When I execute the below code, why am I getting '?' for the first case? AFAIK, codepage 932 supports line-drawing characters.
How does this API deal with codepages? AFAIK, it searches and maps the character in the codepage, then returns the position of the character from the codepage.
typedef struct dbcs {
    unsigned char HighByte;
    unsigned char LowByte;
} DBCS;

static DBCS set[5] = {0x25, 0x5D};
unsigned char array[2];

#include <windows.h>
#include <stdio.h>

int main()
{
    // printf("hello world");
    int str_size;
    LPCWSTR charpntr;
    LPSTR getcd;
    LPBOOL flg;
    int i;

    array[0] = set[0].LowByte;
    array[1] = set[0].HighByte;
    charpntr = &array;
    str_size = WideCharToMultiByte(932, 0, charpntr, 1, getcd, 2, NULL, NULL);
    printf(" value of %u", getcd);
    printf("number of bytes %d character is %s", str_size, getcd);
    printf("\n");

    array[0] = set[0].LowByte;
    array[1] = set[0].HighByte;
    charpntr = &array;
    str_size = WideCharToMultiByte(437, 0, charpntr, 1, getcd, 2, NULL, NULL);
    printf(" value of %u", getcd);
    printf("number of bytes %d character is %s", str_size, getcd);
    printf("\n");
}
Result of execution in CodeBlocks:
Windows codepage 932 is not a simple thing, as it uses multibyte characters.
I have no Windows here, so I have been experimenting with the encoding of the character you are using in Python 3, in a UTF-8 terminal: it works fine with cp437 and UTF-8, but Python refuses to encode the character to what it calls "cp932", or to any of its aliases listed in the Wikipedia article:
https://en.wikipedia.org/wiki/Code_page_932_(Microsoft_Windows)
It may be a fault in Python's internal Unicode tables (fetched directly from the Unicode consortium), or possibly this codepage doesn't map this character at all.
Anyway, there are problems in your code. One is that you never initialize getcd: reading the docs for WideCharToMultiByte(), one sees that it must not be NULL, so you have to have a proper output buffer allocated there.
So, try putting the getcd declaration as:
char getcd[6]={};
That should give you enough space for even the widest characters you experiment with, and leaves room for a terminating \x00.
Another thing: if these line-drawing characters are present in CP932 at all, they are definitely multibyte there, so the output buffer getcd, whose size you pass in the cbMultiByte argument (the "2" after it), really must have at least 2 bytes available; the declaration above provides that. If no other error kicks in, and the character exists in cp932, this might fix your issue.
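For illustration, a minimal sketch of a corrected call, with the source built as a real wide string and an actual output buffer (whether CP932 maps U+255D at all is a separate question):

#include <windows.h>
#include <stdio.h>

int main(void)
{
    /* U+255D, a box-drawing character, as a proper wide string */
    wchar_t wide[2] = { 0x255D, L'\0' };
    char out[8] = { 0 };   /* an allocated output buffer */
    int n, i;

    n = WideCharToMultiByte(932,                  /* code page */
                            0,                    /* flags */
                            wide,                 /* source wide string */
                            1,                    /* one wide character */
                            out,                  /* destination buffer */
                            (int)sizeof(out) - 1, /* destination size in bytes */
                            NULL, NULL);

    printf("bytes written: %d\n", n);
    for (i = 0; i < n; i++)
        printf("0x%02X ", (unsigned char)out[i]);
    printf("\n");
    return 0;
}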

Weird looking symbols in dat file?

Learning to program in C. I used a textbook to learn about writing data randomly to a random-access file. It seems like the textbook code works OK, but the output in the Notepad file is: Jones Errol Ÿru ”#e©A Jones Raphael Ÿru €€“Ü´A. That can't be correct, can it? Do you know why the numbers don't show?
I have no idea how to format code properly; someone always tells me it is bad. I use Ctrl+K, and in my compiler I follow the book exactly. I'm sorry if it isn't correct. Maybe you can tell me how? Thanks.
Here is the code:
#include <stdio.h>

//clientData structure definition
struct clientData {
    int acctNum;
    char lastName[15];
    char firstName[10];
    double balance;
};

int main(void)
{
    FILE *cfPtr; //credit.dat file pointer
    //create clientData with default information
    struct clientData client = {0, "", "", 0.0};

    if ((cfPtr = fopen("c:\\Users\\raphaeljones\\Desktop\\clients4.dat", "rb+")) == NULL) {
        printf("The file could not be opened\n");
    }
    else {
        //require user to specify account number
        printf("Enter account number"
               "(1 to 100, 0 to end input)\n");
        scanf("%d", &client.acctNum);
        //user enters information which is copied into file
        while (client.acctNum != 0) {
            //user enters lastname, firstname and balance
            printf("Enter lastname, firstname, balance\n");
            //set record lastName, firstName and balance value
            fscanf(stdin, "%s%s%lf", client.lastName,
                   client.firstName, &client.balance);
            //seek position in file to user specified record
            fseek(cfPtr,
                  (client.acctNum - 1) * sizeof(struct clientData),
                  SEEK_SET);
            //write user specified information in file
            fwrite(&client, sizeof(struct clientData), 1, cfPtr);
            //enable user to input another account number
            printf("Enter account number\n");
            scanf("%d", &client.acctNum);
        }
        fclose(cfPtr);
        return 0;
    }
}
You have created a structure clientData which contains an integer, two strings and a double. You open the file in binary mode and you use fwrite() to write the structure to it.
This means you are writing the integer and the double in binary, and not as character strings, so what you see is logically correct, and you could read the file back into a structure with fread() and then print it out.
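For instance, a minimal read-back sketch along those lines (it assumes the clientData struct from the question; the path is shortened here):

struct clientData rec;
FILE *in = fopen("clients4.dat", "rb");   // same record layout as when written
if (in != NULL) {
    while (fread(&rec, sizeof rec, 1, in) == 1) {
        if (rec.acctNum != 0)   // skip slots that were never written
            printf("%d %s %s %.2f\n",
                   rec.acctNum, rec.lastName, rec.firstName, rec.balance);
    }
    fclose(in);
}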
If you want to create a text file, you should use fprintf(). You can specify the field widths for integer and double values, so you can create a fixed-length record (which is essential for random access).
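A hedged sketch of what such a record could look like (the field widths are picked arbitrarily; every record then occupies the same number of characters, which is what makes seeking by record number possible):

//write one record as fixed-width text instead of raw binary
//4 + 1 + 15 + 1 + 10 + 1 + 12 + 1 (newline) = 45 characters per record
fprintf(cfPtr, "%4d %-15s %-10s %12.2f\n",
        client.acctNum, client.lastName, client.firstName, client.balance);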

How does Perl store integers in-memory?

say pack "A*", "asdf"; # Prints "asdf"
say pack "s", 0x41 * 256 + 0x42; # Prints "BA" (0x41 = 'A', 0x42 = 'B')
The first line makes sense: you're taking an ASCII-encoded string and packing it into a string as an ASCII string. In the second line, the packed form is "\x42\x41" because of the little-endianness of short integers on my machine.
However, I can't shake the feeling that somehow I should be able to treat the packed string from the second line as a number, since that's how (I assume) Perl stores numbers: as a little-endian sequence of bytes. Is there a way to do so without unpacking it? I'm trying to get the correct mental model for the thing that pack() returns.
For instance, in C, I can do this:
#include <stdio.h>

int main(void) {
    char c[2];
    short *x = (short *)c;   // reinterpret the two bytes as a short
    c[0] = 0x42;
    c[1] = 0x41;
    printf("%d\n", *x);      // Prints 16706 == 0x41 * 256 + 0x42
    return 0;
}
If you're really interested in how Perl stores data internally, I'd recommend PerlGuts Illustrated. But usually, you don't have to care about stuff like that because Perl doesn't give you access to such low-level details. These internals are only important if you're writing XS extensions in C.
If you want to "cast" a two-byte string to a C short, you can use the unpack function like this:
$ perl -le 'print unpack("s", "BA")'
16706
However, I can't shake the feeling that somehow, I should be able to treat the packed string from the second line as a number,
You need to unpack it first.
To be able to use it as a number in C, you need
char* packed = "\x42\x41";
int16_t int16;
memcpy(&int16, packed, sizeof(int16_t));
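Note that copying the bytes straight into an int16_t gives 16706 only because the host itself stores shorts little-endian; to decode an explicitly little-endian pair of bytes regardless of host byte order, a small sketch (le16_to_host is an illustrative name):

#include <stdint.h>

/* Reassemble a little-endian 16-bit value from two raw bytes,
   independent of the host's own byte order. */
int16_t le16_to_host(const unsigned char *p)
{
    return (int16_t)(p[0] | (p[1] << 8));
}

/* le16_to_host((const unsigned char *)"\x42\x41") == 0x4142 == 16706 */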
To be able to use it as a number in Perl, you need
my $packed = "\x42\x41";
my $num = unpack('s', $packed);
which is basically
use Inline C => <<'__EOI__';
SV* unpack_s(SV* sv) {
    STRLEN len;
    char* buf;
    int16_t int16;

    SvGETMAGIC(sv);
    buf = SvPVbyte(sv, len);
    if (len != sizeof(int16_t))
        croak("usage");

    Copy(buf, &int16, 1, int16_t);
    return newSViv(int16);
}
__EOI__
my $packed = "\x42\x41";
my $num = unpack_s($packed);
since that's how (I assume) Perl stores numbers, as a little-endian sequence of bytes.
Perl stores numbers in one of following three fields of a scalar:
IV, a signed integer of size perl -V:ivsize (in bytes).
UV, an unsigned integer of size perl -V:uvsize (in bytes). (ivsize=uvsize)
NV, a floating point number of size perl -V:nvsize (in bytes).
In all cases, native endianness is used.
I'm trying to get the correct mental model for the thing that pack() returns.
pack is used to construct "binary data" for interfacing with external APIs.
I see pack as a serialization function. It takes Perl values as input and produces a serialized form as output. The fact that the serialized form happens to be a Perl bytestring is more of an implementation detail than a core piece of functionality.
As such, all you're really expected to do with the resulting string is feed it to unpack, though the serialized form is convenient for moving between processes, hosts, or even planets.
If you're interested in interpreting the string as a number instead, consider using vec:
say vec "BA", 0, 16; # prints 16961
For a closer look at the string's internal representation, try Devel::Peek, though you're not going to see anything surprising with a pure-ASCII string.
use Devel::Peek;
Dump "BA";
SV = PV(0xb42f80) at 0xb56300
REFCNT = 1
FLAGS = (POK,READONLY,pPOK)
PV = 0xb60cc0 "BA"\0
CUR = 2
LEN = 16

Reversing endianness with memcpy

I'm writing a method that creates an in-memory WAV file. The first 4 bytes of the file should contain the characters 'RIFF', so I'm writing the bytes like this:
Byte *bytes = (Byte *)malloc(len); // overall length of file
char *RIFF = (char *)'RIFF';
memcpy(&bytes[0], &RIFF, 4);
The problem is that this writes the first 4 bytes as 'FFIR', thanks to little-endianness. To correct this problem, I'm just doing this:
Byte *bytes = (Byte *)malloc(len);
char *RIFF = (char *)'FFIR';
memcpy(&bytes[0], &RIFF, 4);
This works, but is there a better-looking way of getting memcpy to reverse the order of the bytes it's writing?
You're doing some bad things with pointers (and some weird but not wrong things). 'RIFF' is a multi-character constant, an int, not a string, and memcpy(&bytes[0], &RIFF, 4) copies the bytes of the pointer variable itself, which on a little-endian machine come out as "FFIR". This isn't really an endianness problem; what you want is a string literal. Try this:
Byte *bytes = malloc(len); // overall length of file
char *RIFF = "RIFF";
memcpy(bytes, RIFF, 4);
It'll work fine.
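Once the chunk ID is in place, the remaining WAV header fields are little-endian integers, and there plain memcpy won't help on a big-endian host. A possible sketch of byte-order-independent helpers (put_le16/put_le32 are illustrative names; unsigned char stands in for the Byte typedef used in the question):

#include <string.h>

/* Store 16/32-bit values explicitly little-endian, regardless of host order. */
static void put_le16(unsigned char *dst, unsigned v)
{
    dst[0] = (unsigned char)(v & 0xFF);
    dst[1] = (unsigned char)((v >> 8) & 0xFF);
}

static void put_le32(unsigned char *dst, unsigned long v)
{
    dst[0] = (unsigned char)(v & 0xFF);
    dst[1] = (unsigned char)((v >> 8) & 0xFF);
    dst[2] = (unsigned char)((v >> 16) & 0xFF);
    dst[3] = (unsigned char)((v >> 24) & 0xFF);
}

/* Usage sketch:
   memcpy(bytes, "RIFF", 4);        // chunk ID is plain ASCII, no swapping
   put_le32(bytes + 4, len - 8);    // RIFF chunk size field, little-endian
*/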