OpenCL convert uint4 to unsigned int - type-conversion

How do I convert a uint4 to an unsigned int? I am reading an image in the kernel using read_imageui(), which returns a uint4.
In the host code I tried using a buffer of type cl_uint4 when reading the results back into the destination buffer, but that didn't work out.
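A common approach is to keep only the channel(s) you need inside the kernel and store plain uints, so the host reads back cl_uint values rather than cl_uint4. A minimal OpenCL C sketch, with hypothetical kernel and argument names:

__constant sampler_t smp = CLK_NORMALIZED_COORDS_FALSE |
                           CLK_ADDRESS_CLAMP_TO_EDGE |
                           CLK_FILTER_NEAREST;

__kernel void extract_channel(__read_only image2d_t src, __global uint *dst)
{
    int2 pos = (int2)(get_global_id(0), get_global_id(1));
    uint4 pixel = read_imageui(src, smp, pos);           // (r, g, b, a)
    dst[pos.y * get_image_width(src) + pos.x] = pixel.x; // keep one channel
}

The destination buffer then holds one cl_uint per pixel and can be read back on the host with clEnqueueReadBuffer.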

Related

Casting an int to Uint8 in Dart

I am using the ffi and typed_data Dart libraries and keep running into an error with this line of code:
Uint8 START_OF_HEADER = 1 as Uint8;
My error:
type 'int' is not a subtype of type 'Uint8' in type cast
What am I doing wrong here? Another strange thing is that I can write this line of code using these libraries and my IDE will compile it without complaint; no error is thrown until the line is actually executed. I'm using IntelliJ version 2019.2.4.
You are trying to create an instance of a Uint8 in Dart, but that's not possible - that type is just a marker.
/// [Uint8] is not constructible in the Dart code and serves purely as marker in
/// type signatures.
You'll just use those markers in typedefs, for example to describe a C function taking two signed 32-bit ints and returning a signed 32-bit int:
typedef native_sum_func = Int32 Function(Int32 a, Int32 b);
This will be paired with an equivalent Dart-side typedef:
typedef NativeSum = int Function(int a, int b);
Dart ffi is responsible for converting a and b from Dart ints into 32-bit C ints and for converting the return value back to a Dart int.
Note that you can create a pointer to these C types, for example Pointer<Uint8> using the allocate method from package:ffi.
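For reference, here is a sketch (not from the original answer) of the C side that such a pair of typedefs describes; the function name sum is hypothetical:

#include <stdint.h>

/* The function the typedefs above describe: two signed 32-bit ints in,
   one signed 32-bit int out. */
int32_t sum(int32_t a, int32_t b)
{
    return a + b;
}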

conversion of string to int and int to string using static_cast

I am just not able to convert between different data types in C++. I know that C++ is a strongly typed language, so I used static_cast, but I am facing a problem. The error messages are:
invalid static_cast from type 'std::string {aka std::basic_string}' to type 'int'
invalid conversion from 'int' to 'const char*' [-fpermissive]
#include <vector>
#include <iostream>
using namespace std;
int main()
{
    string time;
    string t2;
    cin >> time;
    int hrs;
    for(int i=0;i!=':';i++)
    {
        t2[i]=time[i];
    }
    hrs=static_cast<int>(t2);
    hrs=hrs+12;
    t2=static_cast<string>(hrs);
    for(int i=0;i!=':';i++)
    {
        time[i]=t2[i];
    }
    cout<<time;
    return 0;
}
Making a string from an int (and the converse) is not a cast.
A cast is taking an object of one type and using it, unmodified, as if it were another type.
A string is an object wrapping a complex structure that includes at least an array of characters.
An int is a CPU level structure that directly represents a numeric value.
An int can be expressed as a string for display purposes, but the representation requires significant computation. On a given platform, all ints use exactly the same amount of memory (64 bits for example). However, the string representations can vary significantly, and for any given int value there are several common string representations.
Zero, as an int on a 64 bit platform, consists of 64 bits at low voltage. As a string, it can be represented with a single byte "0" (high voltage on bits 4 and 5, low voltage on all other bits), the text "zero", the text "0x0000000000000000", or any of several other conventions that exist for various reasons. Then you get into the question of which character encoding scheme is being used - EBCDIC, ASCII, UTF-8, GB2312 (Simplified Chinese), UCS-2, etc.
Determining the int from a string requires a parser, and producing a string from an int requires a formatter.
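As a concrete sketch in C (the computation is the same either way; in C++ you would reach for std::stoi and std::to_string rather than static_cast), here is the parse-then-format round trip the question is attempting; the "11:30" input is hypothetical:

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    const char *in = "11:30";           /* hypothetical input */
    char out[16];
    char *rest;

    /* Parser: strtol consumes digits up to the ':' and yields the value. */
    long hrs = strtol(in, &rest, 10);   /* hrs == 11, rest points at ":30" */

    hrs += 12;

    /* Formatter: snprintf turns the number back into characters. */
    snprintf(out, sizeof out, "%ld%s", hrs, rest);
    puts(out);                          /* prints 23:30 */
    return 0;
}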

Chisel poke() print format

Is it possible to configure the print format of the poke() function in a Chisel test class?
I want to poke() an unsigned 64-bit long, but Chisel prints it like a signed long when I run this code:
poke(c.io.masterwrite.wdata, 0xbebecacacafedecaL)
The result :
POKE AvlMasterWrite.io_masterwrite_wdata <- -0x4141353535012136
I can't add the suffix 'U' as in C to force an unsigned literal:
0xbebecacacafedecaUL
That doesn't compile.
The following should work:
import java.math._
poke(c.io.masterwrite.wdata, new BigInteger("bebecacacafedeca", 16))
The input port c.io.masterwrite.wdata should be of type UInt and 64 bits wide.

C programming on IAR- timestamp Conversion to readable format

I am using Z-Stack-CC2530-2.5 to develop a Zigbee-based application, and I've come across a timestamp conversion problem.
I am using osal_ConvertUTCTime method to convert a uint32 timestamp value to timestampStruct as follows:
osal_ConvertUTCTime(&timestampStruct, timestamp);
The Struct is defined as follows:
typedef struct {
    uint8  seconds;
    uint8  min;
    uint8  hour;
    uint8  day;
    uint8  month;
    uint16 year;
} UTCTimeStruct;
My Question:
How do I convert the struct's contents into a human-readable format to write to the UART port?
Example:
HalUARTWrite (Port0, timestampStruct, len) // Output: 22/1/2013 12:05:45
Thank you.
I do not have the prototype of the function HalUARTWrite at the moment, but I googled it and someone used it like this:
HalUARTWrite(DEBUG_UART_PORT, "12345", 6);
so I guess the second argument must be a pointer to char; you can't just pass a UTCTimeStruct variable as the second argument. If you just need to output the raw data to the serial port, you can cast the struct to char * to make the compiler happy, but generally that is bad practice. It might not bite you here, since you are on an 8-bit processor where all the struct fields are either a char or a short, but in general, if you cast a struct to char * and print it out, struct padding will put a lot of nonsense characters between your struct fields.
OK, that was a bit off topic. Back to your question: you need to convert the struct into a friendly string yourself. Because you know your output string has the format "22/1/2013 12:05:45", whose length is bounded, you can simply declare a char[] of that length and fill in the numbers from the struct fields. After that, pass the char[] as the second argument and the exact length as the third, as in the sketch below.
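A sketch in C of that approach (UTCTimeStruct and its typedefs are copied from the question; HalUARTWrite and Port0 are assumed to follow the (port, buffer, length) usage shown above):

#include <stdio.h>

typedef unsigned char  uint8;
typedef unsigned short uint16;

typedef struct {
    uint8  seconds;
    uint8  min;
    uint8  hour;
    uint8  day;
    uint8  month;
    uint16 year;
} UTCTimeStruct;

void writeTimestamp(UTCTimeStruct *t)
{
    char buf[24];                       /* "22/1/2013 12:05:45" fits easily */
    int len = sprintf(buf, "%u/%u/%u %02u:%02u:%02u",
                      (unsigned)t->day, (unsigned)t->month, (unsigned)t->year,
                      (unsigned)t->hour, (unsigned)t->min, (unsigned)t->seconds);
    HalUARTWrite(Port0, (uint8 *)buf, (uint8)len);  /* Z-Stack HAL call */
}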

Are int32s signed or unsigned in OSC (or is it unspecified?)

The OSC Specification, version 1.0 specifies the "int32" data type as "32-bit big-endian two's complement integer". This implies that it's signed (otherwise, why would you write "two's complement"...), but it doesn't come right out and say it.
This comes up most clearly in the encoding of blobs: should it be legal to have a blob of length #x90000000? This number can be encoded as an unsigned 32-bit integer, but not as a signed 32-bit integer. I grant you, that's an extremely big blob (more than 2 gigabytes).
The specification gives no more details. I checked the code of the C++ OSC implementation I use, and it's defined as:
typedef signed long int32;
the blob is defined as:
struct Blob {
    Blob() {}
    explicit Blob( const void* data_, unsigned long size_ )
        : data( data_ ), size( size_ ) {}
    const void* data;
    unsigned long size;
};
So yes, it's a signed integer for the "atomic" int32 type.
The blob, on the other hand, has its size stored as an unsigned long, so it can probably be larger. You may have to try it first, because I only have the oscpack implementation here.
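As an illustration (not from the oscpack sources), here is what happens in C when a blob length above 2 GiB is forced into a signed 32-bit integer:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint32_t blob_len = 0x90000000u;        /* representable as unsigned */
    int32_t  as_int32 = (int32_t)blob_len;  /* out of range for a signed int32 */
    printf("%ld\n", (long)as_int32);        /* -1879048192 on two's-complement machines */
    return 0;
}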