why can Chisel UInt(32.W) not take an unsigned number whose MSB (bit 31) happens to be 1? - scala

UInt is defined as an unsigned integer type, but in this case it seems like the MSB is still treated as a sign bit. The most closely related Q&A is Chisel UInt negative value error, which works out a workaround but not the why. Could you enlighten me about the 'why'?
UInt seems to be defined in chisel3/chiselFrontend/src/main/scala/chisel3/core/Bits.scala, but I cannot understand the details. Is UInt derived from Bits, and is Bits derived from Scala's Int?

The simple answer is that this is due to how Scala evaluates things.
Consider an example like
val x = 0xFFFFFFFF.U
This statement causes an error.
UInt literals are represented internally by BigInts, but 0xFFFFFFFF specifies an Int value, and 0xFFFFFFFF is equivalent to the Int value -1.
That Int value -1 is converted to the BigInt -1, and -1.U is illegal because the .U literal-creation method will not accept negative values.
Adding the L suffix fixes this because 0xFFFFFFFFL is a positive Long value.
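For illustration, here are a few equivalent ways to write the literal that never go through a negative Scala Int (a sketch assuming a Chisel 3 context):
import chisel3._

// All of these create the same 32-bit UInt literal:
val a = 0xFFFFFFFFL.U             // positive Long literal
val b = "hFFFFFFFF".U             // Chisel hex string literal
val c = BigInt("FFFFFFFF", 16).U  // explicit BigInt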

The issue is that Scala only has signed integers; it does not have an unsigned integer type. From the REPL:
scala> 0x9456789a
res1: Int = -1806272358
Thus, Chisel only sees the negative number. UInts obviously cannot be negative, so Chisel reports an error.
You can always cast from an SInt to a UInt if you want the raw 2's complement representation of a negative number interpreted as a UInt, e.g.
val a = -1.S(32.W).asUInt
assert(a === "xffffffff".U)

Related

Comparing unsigned integers with signed type

Some languages, such as Dart or Java, do not have support for unsigned integers.
I have two integers int a, b that are really unsigned (basically hashes or bit fields) but have to be stored in signed data types.
A comparison function is needed. The usual a < b will not work here, as it would wrongly treat negative values as smaller, while in the desired unsigned interpretation they are actually larger. Each of the two ranges is handled correctly if considered alone.
A working solution I came up with (in Dart, but the language shouldn't really matter) is
int compareAsUnsigned(int a, int b) {
  final signA = a.sign;
  final signB = b.sign;
  if (signA == signB) return a.compareTo(b);
  if (signA == -1 || signB == -1) return b.compareTo(a);
  return a.compareTo(b);
}
Are there any efficient and/or elegant ways to get an unsigned compare for values stored in signed data types (a longer type is not available and all bits are used)?
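One standard trick, regardless of language, is to flip the sign bit of both operands and then do an ordinary signed comparison; that maps the unsigned ordering onto the signed ordering. A minimal sketch in Scala (on the JVM, java.lang.Integer.compareUnsigned does the same thing):
// Flipping bit 31 (XOR with Int.MinValue) shifts the unsigned range
// [0, 2^32) onto the signed range [-2^31, 2^31) while preserving order.
def compareAsUnsigned(a: Int, b: Int): Int =
  (a ^ Int.MinValue) compare (b ^ Int.MinValue)

// Equivalent one-liner using the JDK:
// java.lang.Integer.compareUnsigned(a, b)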

BigInt in Scala

I'm new to Scala. I'm trying to create a test case for a simple factorial function.
I couldn't put the expected result value in the assert statement; I'm getting an
'Integer number is out of range even for type Long' error in IntelliJ.
test("Factorial.factorial6") {
assert(Factorial.factorial(25) == 15511210043330985984000000L)
}
I also tried to assign the value to a val using the 'L' literal suffix, and again it shows the same message.
val b: BigInt = 15511210043330985984000000L
I'm clearly missing some basic stuff about Scala; I would appreciate your help in solving this.
The value you are giving is indeed larger than can be held in a Long, and that is the maximum size for a numeric literal value in Scala. However, you can initialise a BigInt using a String containing the value:
val b = BigInt("15511210043330985984000000")
and therefore
assert(Factorial.factorial(25) == BigInt("15511210043330985984000000"))
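For completeness, a minimal sketch of a factorial that accumulates in a BigInt (assuming Factorial.factorial is meant to return an exact, arbitrarily large result):
object Factorial {
  // Accumulating in BigInt means the result never overflows a Long.
  def factorial(n: Int): BigInt =
    (1 to n).foldLeft(BigInt(1))(_ * _)
}

// Factorial.factorial(25) == BigInt("15511210043330985984000000")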

Using zero constant as long in a less verbose way [duplicate]

object LPrimeFactor {
  def main(arg: Array[String]): Unit = {
    start(13195)
    start(600851475143)
  }

  def start(until: Long) {
    var all_prime_fac: Array[Int] = Array()
    var i = 2
(compile:compileIncremental) Compilation failed
integer number too large
Even though I changed the arg type to Long, it's still not fixed.
Pass the argument as a Long (notice the L at the end of the number):
start(600851475143L)
// ^
To create a Long literal you must add L to the end of it.
start(600851475143L)
Remember that for literal values, if you do not give an explicit type suffix, the compiler treats a numeric literal such as 600851475143 as type Int, which is 32 bits long, in two's complement representation:
MIN_VALUE = -2147483648 (-2^31)
MAX_VALUE = 2147483647 (2^31 - 1)
So add the right suffix to the literal value, as in 600851475143L.
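A minimal illustration of that boundary:
val maxInt: Int  = 2147483647    // Int.MaxValue; the largest plain Int literal
val bigger: Long = 600851475143L // above Int.MaxValue, so the L suffix is required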

conversion of string to int and int to string using static_cast

I am just not able to convert between different data types in C++. I know that C++ is a strongly typed language, so I
used static_cast here, but I am facing a problem. The error messages are:
invalid static_cast from type 'std::string {aka std::basic_string}' to type 'int'
invalid conversion from 'int' to 'const char*' [-fpermissive]
#include <vector>
#include <iostream>
using namespace std;

int main()
{
    string time;
    string t2;
    cin >> time;
    int hrs;
    for (int i = 0; i != ':'; i++)
    {
        t2[i] = time[i];
    }
    hrs = static_cast<int>(t2);
    hrs = hrs + 12;
    t2 = static_cast<string>(hrs);
    for (int i = 0; i != ':'; i++)
    {
        time[i] = t2[i];
    }
    cout << time;
    return 0;
}
Making a string from an int (and the converse) is not a cast.
A cast is taking an object of one type and using it, unmodified, as if it were another type.
A string is an object wrapping a more complex structure that includes, at the very least, an array of characters.
An int is a CPU-level value that directly represents a number.
An int can be expressed as a string for display purposes, but the representation requires significant computation. On a given platform, all ints use exactly the same amount of memory (64 bits for example). However, the string representations can vary significantly, and for any given int value there are several common string representations.
Zero, as an int on a 64 bit platform, consists of 64 bits at low voltage. As a string, it can be represented with a single byte "0" (high voltage on bits 4 and 5, low voltage on all other bits), the text "zero", the text "0x0000000000000000", or any of several other conventions that exist for various reasons. Then you get into the question of which character encoding scheme is being used - EBCDIC, ASCII, UTF-8, Simplified Chinese, UCS-2, etc.
Determining the int from a string requires a parser, and producing a string from an int requires a formatter. In C++ those are ordinary functions, such as std::stoi (string to int) and std::to_string (int to string), not casts.

Swift float multiplication error

This code fails:
let element: Float = self.getElement(row: 1, column: j)
let multiplier = powf(-1, j+2)*element
with this error:
Playground execution failed: :140:51: error: cannot invoke '*' with an argument list of type '(Float, Float)'
let multiplier = powf(-1, j+2)*element
Bear in mind that this occurs in this block:
for j in 0...self.columnCount {
where columnCount is a Float. Also, the first line does execute and so the getElement method indeed returns a Float.
I am completely puzzled by this as I see no reason why it shouldn't work.
There is no implicit numeric conversion in Swift, so you have to convert explicitly when dealing with different types and/or when the expected type is different from the type of the expression.
In your case, j is an Int whereas powf expects a Float, so it must be converted as follows:
let multiplier = powf(-1, Float(j)+2)*element
Note that the literal 2, although usually considered an integer, is automatically inferred to be a Float by the compiler, so in that case an explicit conversion is not required.
I ended up solving this by using Float(j) instead of j when calling powf(). Evidently, j cannot be implicitly converted to a Float.