conversion of string to int and int to string using static_cast - type-conversion

I am just not able to convert between datatypes in C++. I know that C++ is a strongly typed language, so I used static_cast here, but I am facing a problem. The error messages are:
invalid static_cast from type 'std::string {aka std::basic_string}' to type 'int'
invalid conversion from 'int' to 'const char*' [-fpermissive]
#include <vector>
#include <iostream>
using namespace std;

int main()
{
    string time;
    string t2;
    cin >> time;
    int hrs;
    for (int i = 0; i != ':'; i++)
    {
        t2[i] = time[i];
    }
    hrs = static_cast<int>(t2);
    hrs = hrs + 12;
    t2 = static_cast<string>(hrs);
    for (int i = 0; i != ':'; i++)
    {
        time[i] = t2[i];
    }
    cout << time;
    return 0;
}

Making a string from an int (and the converse) is not a cast.
A cast takes an object of one type and uses it, unmodified, as if it were another type.
A string is an object wrapping a complex structure that includes, at minimum, an array of characters.
An int is a CPU-level value that directly represents a number.
An int can be expressed as a string for display purposes, but producing that representation requires real computation. On a given platform, all ints use exactly the same amount of memory (64 bits, for example), yet the string representations can vary significantly in length, and for any given int value there are several common string representations.
Zero, as an int on a 64-bit platform, consists of 64 bits at low voltage. As a string, it can be represented by the single byte "0" (high voltage on bits 4 and 5, low voltage on all the others), the text "zero", the text "0x0000000000000000", or any of several other conventions that exist for various reasons. Then you get into the question of which character encoding scheme is being used - EBCDIC, ASCII, UTF-8, UCS-2, GB2312, etc.
Determining the int encoded in a string requires a parser, and producing a string from an int requires a formatter.
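In the C++ standard library, the parser is std::stoi and the formatter is std::to_string. A minimal sketch of the asker's hour-shifting program using them (assuming input shaped like "07:30"):

#include <iostream>
#include <string>

int main()
{
    std::string time;
    std::cin >> time;                                 // e.g. "07:30"
    std::size_t colon = time.find(':');
    int hrs = std::stoi(time.substr(0, colon));       // parse: string -> int
    hrs = hrs + 12;
    time = std::to_string(hrs) + time.substr(colon);  // format: int -> string
    std::cout << time << '\n';                        // prints "19:30"
    return 0;
}

Note that std::stoi throws std::invalid_argument if the text before the colon is not a number; parsing, unlike a cast, has a failure path.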

Related

In DPI-C, how to map data types to reg or wire

I am writing a CRC16 function in C to use from SystemVerilog.
Requirements as below:
The output of CRC16 is 16 bits.
The input of CRC16 is wider than 72 bits.
The difficulty is that I don't know whether DPI-C can map the reg/wire data types in SystemVerilog to C or not, and what the maximum length of a reg/wire supported through DPI-C is.
Can anybody help me?
Stay with compatible types across the language boundary. For the output, use shortint. For the input, use an array of byte in SystemVerilog, which maps to an array of char in C.
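A minimal sketch of that mapping for the CRC16 question (the function name, the 9-byte input width, and the CRC-16/CCITT polynomial are illustrative assumptions, not from the original post):
SystemVerilog:
import "DPI-C" function shortint crc16(input byte data[9]);
C:
/* 72 bits arrive as 9 bytes: SystemVerilog byte maps to C char,
   shortint maps to short int */
short int crc16(const char data[9])
{
    unsigned crc = 0xFFFF;                  /* illustrative CCITT seed */
    for (int i = 0; i < 9; i++) {
        crc ^= (unsigned char)data[i] << 8;
        for (int b = 0; b < 8; b++)         /* one input bit at a time */
            crc = (crc & 0x8000) ? (crc << 1) ^ 0x1021 : (crc << 1);
        crc &= 0xFFFF;                      /* keep the width at 16 bits */
    }
    return (short int)crc;
}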
DPI support has provision for any bit width, converting packed arrays into C arrays. The question is: what are you going to do with 72-bit data on the C side?
Alternatively, svBitVecVal for two-state bits and svLogicVecVal for four-state logic can be used on the C side to retrieve values. Look at H.7.6/7 of the LRM for more info.
Here is an example from LRM H.10.2 for 4-state data (logic):
SystemVerilog:
typedef struct {int x; int y;} pair;
import "DPI-C" function void f1(input int i1, pair i2, output logic [63:0] o3);
C:
void f1(const int i1, const pair *i2, svLogicVecVal *o3)
{
    int tab[8];
    printf("%d\n", i1);
    o3[0].aval = i2->x;
    o3[0].bval = 0;
    o3[1].aval = i2->y;
    o3[1].bval = 0;
    ...
}

Comparing unsigned integers with signed type

Some languages, such as Dart or Java, have no support for unsigned integers.
I have two integer numbers int a, b that are really unsigned (basically hashes or bitfields) but have to be stored in signed data types.
A comparison function is needed. The usual a < b will not work here, as it would wrongly treat negative values as smaller, while in the desired unsigned interpretation they are actually larger. Each of the two ranges is handled correctly if considered alone.
A working solution I came up with (in Dart, but the language shouldn't really matter) is
int compareAsUnsigned(int a, int b) {
  final signA = a.sign;
  final signB = b.sign;
  if (signA == signB) return a.compareTo(b);
  if (signA == -1 || signB == -1) return b.compareTo(a);
  return a.compareTo(b);
}
Are there any efficient and/or elegant ways to get an unsigned compare for values stored in signed data types (a longer type is not available and all bits are used)?
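One common trick, given that the languages involved use two's complement: XORing each value with the minimum signed value flips the sign bit, which maps the unsigned ordering onto the signed one, so a plain signed compare works afterwards. A sketch in C++ (the same XOR carries over to Dart's 64-bit ints):

#include <cstdint>

// Compare a and b as if they were unsigned 64-bit values.
// Flipping the sign bit makes values with the high bit set
// (unsigned-large) compare as signed-large, and vice versa.
int compareAsUnsigned(int64_t a, int64_t b)
{
    const int64_t fa = a ^ INT64_MIN;
    const int64_t fb = b ^ INT64_MIN;
    return (fa < fb) ? -1 : (fa > fb) ? 1 : 0;
}

This costs two XORs and one comparison, with no branching on the signs.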

Why can Chisel UInt(32.W) not take an unsigned number whose MSB happens to be 1?

It is defined that UInt is the type for unsigned integers, but in this case it seems the MSB is still treated as a sign bit. E.g., the most relevant Q&A is Chisel UInt negative value error, which works out a workaround but not the why. Could you enlighten me about the 'why'?
UInt seems to be defined in chisel3/chiselFrontend/src/main/scala/chisel3/core/Bits.scala, but I cannot understand the details. Is UInt derived from Bits, and is Bits derived from Scala's Int?
The simple answer is that this is due to how Scala evaluates things.
Consider an example like
val x = 0xFFFFFFFF.U
This statement causes an error.
UInt literals are represented internally by BigInts, but 0xFFFFFFFF specifies an Int value, and 0xFFFFFFFF is equivalent to the Int value -1.
The -1 Int value is converted to BigInt -1, and -1.U is illegal because the .U literal creation method will not accept negative values.
Adding the L fixes this because 0xFFFFFFFFL is a positive Long value.
The issue is that Scala only has signed integers; it does not have an unsigned integer type. From the REPL:
scala> 0x9456789a
res1: Int = -1806272358
Thus, Chisel only sees the negative number. UInts obviously cannot be negative so Chisel reports an error.
You can always cast from an SInt to a UInt if you want the raw 2's complement representation of a negative number interpreted as a UInt, e.g.
val a = -1.S(32.W).asUInt
assert(a === "xffffffff".U)

D language unsigned hash of string

I am a complete beginner with the D language.
How do I get, as a uint (an unsigned 32-bit integer) in the D language, some hash of a string?
I need a quick and dirty hash code (I don't care much about "randomness" or "lack of collisions"; I care slightly more about performance).
import std.digest.crc;
uint string_hash(string s) {
    return crc32Of(s);
}
is not good...
(using gdc-5 on Linux/x86-64 with phobos-2)
While Adam's answer does exactly what you're looking for, you can also use a union to do the casting.
This is a pretty useful trick, so it may as well go here:
/**
 * Returns a crc32Of hash of a string.
 * Uses a union to store the ubyte[4],
 * then simply reads that memory as a uint.
 */
uint string_hash(string s) {
    import std.digest.crc;
    union hashUnion {
        ubyte[4] hashArray;
        uint hashNumber;
    }
    hashUnion x;
    x.hashArray = crc32Of(s); // stores the result of crc32Of into the array
    return x.hashNumber;      // reads the exact same memory as hashArray,
                              // but reads it as a uint
}
A really quick thing could just be this:
uint string_hash(string s) {
    import std.digest.crc;
    auto r = crc32Of(s);
    return *(cast(uint*) r.ptr);
}
Since crc32Of returns a ubyte[4] instead of the uint you want, a conversion is necessary, but since ubyte[4] and uint look the same to the machine, we can just do a reinterpret cast with the pointer trick seen there to convert between the types for free at runtime.

C programming on IAR- timestamp Conversion to readable format

I am using Z-stack-CC2530-2.5 for developing a Zigbee-based application. I've come across a timestamp conversion problem.
I am using osal_ConvertUTCTime method to convert a uint32 timestamp value to timestampStruct as follows:
osal_ConvertUTCTime(& timestampStruct, timestamp);
The struct is defined as follows:
typedef struct {
    uint8 seconds;
    uint8 min;
    uint8 hour;
    uint8 day;
    uint8 month;
    uint16 year;
} UTCTimeStruct;
My question:
How do I convert the struct's contents to be written to the UART port in a human-readable format?
Example:
HalUARTWrite (Port0, timestampStruct, len) // Output: 22/1/2013 12:05:45
Thank you.
I do not have the prototype of the function HalUARTWrite at the moment, but I googled it and someone used it like this:
HalUARTWrite(DEBUG_UART_PORT, "12345", 6);
so I guess the second argument must be a pointer to char. You can't just pass a UTCTimeStruct variable as the second argument. If you just need to output the raw data to the serial port, you have to cast the struct to char * to make the compiler happy. Generally, though, this is bad practice: if you cast a struct to char * and print it out, struct padding usually inserts a lot of nonsense characters between the fields. It might not bite you here, since on your 8-bit processor all the struct fields are either a char or a short.
OK, a bit off topic. Back to your question: you need to convert the struct into a friendly string yourself. Because you know your output string has the fixed-length format "22/1/2013 12:05:45", you can simply declare a char[] of that length and manually fill in the numbers from the UTCTimeStruct fields that osal_ConvertUTCTime already produced. After that, pass the char[] as the second argument and the exact length as the third argument.
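A sketch of that approach, with the HalUARTWrite prototype assumed from the snippet above (zero-padding every field keeps the string a fixed 19 characters):

#include <stdio.h>

void sendTimestamp(const UTCTimeStruct *ts)
{
    char buf[24];  /* "22/01/2013 12:05:45" is 19 chars plus the terminator */
    int len = sprintf(buf, "%02d/%02d/%04d %02d:%02d:%02d",
                      ts->day, ts->month, ts->year,
                      ts->hour, ts->min, ts->seconds);
    HalUARTWrite(Port0, (uint8 *)buf, (uint16)len);  /* assumed prototype */
}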