How to define a variable-length type in PostgreSQL

I'm trying to declare a variable-length type that contains numeric arrays. The type looks like this:
typedef struct MyType {
    double count;
    double[] lower;
    double[] upper;
} MyType;
I found the following on the PostgreSQL website:
"To do this, the internal representation must follow the standard layout for variable-length data: the first four bytes must be a char[4] field which is never accessed directly (customarily named vl_len_). You must use SET_VARSIZE() to store the size of the datum in this field and VARSIZE() to retrieve it. The C functions operating on the data type must always be careful to unpack any toasted values they are handed, by using PG_DETOAST_DATUM."
These words confuse me. For example, how do I convert values to toasted values?
Could you give me some examples or suggestions on how to implement this?
Thanks very much
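A minimal sketch of the layout the documentation describes might look as follows. Note that a C struct cannot contain two double[] fields; the usual approach is to flatten both arrays into one flexible array member. Here count is stored as an element count, and the constructor name is hypothetical:

#include "postgres.h"
#include "fmgr.h"

/* Standard variable-length layout: the varlena header comes first,
 * then the fixed fields, then the inline payload. lower[] and upper[]
 * are flattened into one flexible array member, since a C struct
 * cannot hold two variable-length arrays. */
typedef struct MyType
{
    char    vl_len_[4];   /* varlena header; never accessed directly */
    int32   count;        /* number of entries in each of lower/upper */
    double  data[FLEXIBLE_ARRAY_MEMBER]; /* lower[0..count-1], then upper[0..count-1] */
} MyType;

/* Hypothetical constructor: allocate and size a MyType for 'count' pairs. */
static MyType *
mytype_make(int32 count)
{
    Size    size = offsetof(MyType, data) + 2 * count * sizeof(double);
    MyType *result = (MyType *) palloc0(size);

    SET_VARSIZE(result, size);   /* store the total datum size in the header */
    result->count = count;
    return result;
}

/* Any C function receiving the type must detoast its argument first:
 *     MyType *mt = (MyType *) PG_DETOAST_DATUM(PG_GETARG_DATUM(0));
 */

As for converting values to toasted values: you never do that yourself. PostgreSQL toasts (compresses or stores out of line) large values automatically when they are written; your C functions only need to detoast what they receive, as the quoted passage says.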


Transmitting floating-point numbers over a TLM port from SystemVerilog to SystemC

I implemented a specific filter in C/C++, "encapsulated" in a SystemC module. I want to use this filter in my actual verification environment (VE), which is based on SystemVerilog. To transfer data from and to the filter, I want to implement a TLM connection. For TLM, there is something called a "generic payload", which basically defines what can be transmitted via TLM: a byte array.
Because of this, I need to convert the data samples in the VE from the datatype real to a byte array. I tried to create a union type, so that I can store a real value and read back a byte array.
typedef union packed {
    real value;
    byte unsigned array[8];
} real_u;
However, I get the following error messages:
real value;
|
ncvlog: *E,SVBPSE (Initiator.sv,7|11): The data type of a packed struct/union member must be a SystemVerilog integral type.
byte unsigned array[8];
|
ncvlog: *E,SVBPSE (Initiator.sv,8|20): The data type of a packed struct/union member must be a SystemVerilog integral type.
How could I resolve that issue? Are there other convenient ways to convert floating-point numbers to byte-arrays in SV/C++?
Packed unions and structs may only contain packed members, and in your case both real and byte unsigned array[8] are unpacked. You could potentially use an unpacked union instead, but not every vendor implements those.
Moreover, the byte size of real is not defined in the standard, so your union most likely will not work at all. However, SystemVerilog provides a set of functions to convert real to fixed-size variables. In your case $realtobits, which returns 64 bits, will probably work.
So I suggest you just pass the real value after converting it to bits:
bit[63:0] realBits = $realtobits(value);
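On the receiving C/C++ side of the TLM connection, the 64 bits (8 payload bytes) can be copied back into a double. A minimal sketch, assuming IEEE-754 doubles and an agreed byte order on both sides (the function name is illustrative):

#include <stdint.h>
#include <string.h>

/* Reassemble a double from the 8 payload bytes of the generic payload. */
double bytes_to_double(const uint8_t bytes[8])
{
    double value;
    memcpy(&value, bytes, sizeof value); /* well-defined, unlike pointer casts */
    return value;
}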

How to declare an any-length string element inside a user-defined type in QBasic?

I'm learning QBasic and found a user-defined type example in some documentation. In this example there's a string element inside a user-defined type, and that string doesn't have a defined length.
However, my compiler throws the error "Expected STRING * on..." for this example. Here is a test case that defines the string length:
TYPE Person
    name AS STRING * 4
END TYPE

DIM Matheus AS Person
Matheus.name = "Matheus"
PRINT Matheus.name
It logs "Math", expected "Matheus". Is there a way to allow any range for this string?
Note: I'm using QB64 compiler
No, there is no way to use a variable-length string in a TYPE, even with QB64. You might look into FreeBASIC if you want this feature, since it offers it.
TYPE creates a record type with the specified fields, and records have a fixed length. Look at the OPEN ... FOR RANDOM specification:
OPEN Filename$ FOR RANDOM AS #1 [LEN = recordlength%]
recordlength% is determined by getting the LEN of a TYPE variable or a FIELD statement.
If no record length is used in the OPEN statement, the default record size is 128 bytes except for the last record.
A record length cannot exceed 32767 or an error will occur!
TYPE was never intended to contain strings that are dynamically sized. This allows a developer to keep record sizes small. If you had an address book, for example, you wouldn't want people's names to be too large, else the address book wouldn't fit in memory.
QB64 didn't remove that restriction, since its original goal was to stay compatible with older QBASIC code.
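The fixed-record behaviour is the same one a C struct with a fixed-size character field exhibits, which may make the truncation easier to picture (an analogy only, not how QB64 is implemented):

#include <stdio.h>
#include <string.h>

/* Analogue of TYPE Person: name has a fixed size of 4 bytes,
 * so assigning a longer string silently truncates it. */
struct Person {
    char name[4];
};

int main(void)
{
    struct Person matheus;
    strncpy(matheus.name, "Matheus", sizeof matheus.name); /* keeps "Math" */
    printf("%.4s\n", matheus.name); /* prints "Math", like the QB64 example */
    return 0;
}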

Matlab Coder throws mixed field type error in structure array while loading MAT file with coder.load

I'm working on a MATLAB Coder project where I want to load some constant values. After trying many possibilities, all unsuccessful, I came across the coder.load directive, which loads variables from MAT files and treats them as compile-time constants in the generated C code.
This is the error that I get:
Found unsupported class for variable using function 'coder.load'.
Mixed field types in structure arrays are not supported. Type at
'ind_x.params(1).name' differed from type at 'ind_x.params(2).name'.
But the problem is that the "name" field of the "params" structure array has the same type for each array element. Indeed, trying it out in the command window gives me the same type:
>> class(ind_x.params(1).name)
ans =
char
>> class(ind_x.params(2).name)
ans =
char
There are other fields of the structure array that are of type "double", and one of type "bool", but the type doesn't change across different array elements of the same field, which is why I don't understand the error.
OK, I think I found the answer to my question. The problem indeed was the character string length. If one of the fields of the structure array is of type "char", then it has to be of the same length for every array element.
That is, if you define
ind_x.params(1).name = 'john';
ind_x.params(2).name = 'harry';
It will throw an error if you try to load that structure with coder.load(), because length(ind_x.params(1).name) differs from length(ind_x.params(2).name). A workaround could be to define a maximum length and pad the strings with trailing spaces.
This limitation may come from how constants are defined in C, but what I found messy is the misleading error message. Thanks anyway for the help!
EDIT: I realized that the above restriction for constant structure arrays applies not only to type "char" but to every type! You can't have a field whose length varies across array elements.
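One way to picture where the restriction could come from: in C, a field of a constant structure array has a single size shared by all elements, so a padded maximum length is the natural encoding. A sketch (NAME_LEN and the identifiers are hypothetical, not actual Coder output):

/* One fixed size per field, shared by every element of the array. */
#define NAME_LEN 5

struct param {
    char name[NAME_LEN + 1];
};

static const struct param params[2] = {
    { "john " },  /* padded with a trailing space to NAME_LEN */
    { "harry" },
};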

What about the memory layout means that []T cannot be converted to []interface{} in Go?

So I've been reading these two articles and this answer
Cannot convert []string to []interface {} says that the memory layout needs to be changed.
http://jordanorelli.com/post/32665860244/how-to-use-interfaces-in-go says that understanding the underlying memory makes answering this question easy, and
http://research.swtch.com/interfaces, explains what is going on under the hood.
But for the life of me I can't think of a reason, in terms of the implementation of interfaces, why []T cannot be converted to []interface{}.
So why?
The article "InterfaceSlice" try to detail:
A variable with type []interface{} is not an interface! It is a slice whose element type happens to be interface{}. But even given this, one might say that the meaning is clear.
Well, is it? A variable with type []interface{} has a specific memory layout, known at compile time.
Each interface{} takes up two words (one word for the type of what is contained, the other word for either the contained data or a pointer to it). As a consequence, a slice with length N and with type []interface{} is backed by a chunk of data that is N*2 words long.
See also "what is the meaning of interface{} in golang?"
This is different than the chunk of data backing a slice with type []MyType and the same length. Its chunk of data will be N*sizeof(MyType) words long.
The result is that you cannot quickly assign something of type []MyType to something of type []interface{}; the data behind them just look different.
"why []string can not be converted to []interface{} in Go" adds a good illustration:
// imagine this is possible
var sliceOfInterface = []interface{}(sliceOfStrings)
// since it's array of interface{} now - we can do anything
// let's put integer into the first position
sliceOfInterface[0] = 1
// sliceOfStrings still points to the same array, and now "one" is replaced by 1
fmt.Println(strings.ToUpper(sliceOfStrings[0])) // BANG!
Read the blog article The Laws of Reflection, section The representation of an interface.
A variable of interface type stores a pair: the concrete value assigned to the variable, and that value's type descriptor. To be more precise, the value is the underlying concrete data item that implements the interface and the type describes the full type of that item.
So if you have a value of type []T (a slice of T) where T is not an interface, the elements of such a slice store only the values of type T; they do not store the type information, which belongs to the slice type itself.
If you have a value of type []interface{}, the elements of such a slice hold both the concrete values and the type descriptors of those values.
So elements in a []interface{} require more information (more memory) than elements in a non-interface []T. And since the memory occupied by the two slices is not the same, one cannot simply be "looked at" as the other type. Producing one from the other requires additional work.
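A rough C model makes the size difference concrete (an illustration of the idea only, not Go's actual runtime structures):

#include <stdio.h>

/* Model of one interface{} element: two words. */
typedef struct {
    const void *type; /* word 1: type descriptor */
    const void *data; /* word 2: the value, or a pointer to it */
} iface;

int main(void)
{
    /* On a 64-bit machine this prints 16 vs 8: the backing arrays of
     * []interface{} and []float64 have different element sizes, so one
     * cannot be reinterpreted as the other without copying. */
    printf("[]interface{} element: %zu bytes\n", sizeof(iface));
    printf("[]float64 element:     %zu bytes\n", sizeof(double));
    return 0;
}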

Problems with using Delphi data types with a C DLL

I am trying to use a .dll which has been written in C (although it wraps around a MATLAB .dll).
The function I am trying to use is defined in C as:
__declspec(dllexport) int ss_scaling_subtraction(double* time, double** signals, double* amplitudes, int nSamples, int nChannels, double* intensities);
The .dll requires, amongst others, a 2-dimensional array. When I tried to use array of array of double in the declaration, the compiler gave an error, so I defined my own data type:
T2DArray = Array of array of double;
I declare the .dll function in a unit like so:
function ss_scaling_subtraction(const time: array of double; const signals: T2DArray; const amplitudes: array of double; const nSamples: integer; const nChannels: integer; var intensities: array of double): integer; cdecl; external 'StirScanDLL.dll';
However, when calling this function, I get an access violation from the .dll.
Creating a new data type
T1DArray = array of double;
and changing array of double to T1DArray in the declaration seems to make things run, but the result is still not correct.
I have read on here that it can be dangerous to pass Delphi data types to DLLs coded in a different language, so I thought this might be causing the issue.
But how do I NOT use a Delphi data type when I HAVE to use it to properly declare the function in the first place?!
Extra info: I have already loaded the MATLAB runtime compiler libraries and opened the entry point to StirScanDLL.dll.
The basic problem here is one of binary interop mismatch. Simply put, a pointer to an array is not the same thing at a binary level as a Delphi open array parameter. Whilst they both semantically represent an array, the binary representation differs.
The C function is declared as follows:
__declspec(dllexport) int ss_scaling_subtraction(
double* time,
double** signals,
double* amplitudes,
int nSamples,
int nChannels,
double* intensities
);
Declare your function like so in Delphi:
function ss_scaling_subtraction(
time: PDouble;
signals: PPDouble;
amplitudes: PDouble;
nSamples: Integer;
nChannels: Integer;
intensities: PDouble
): Integer; cdecl; external 'StirScanDLL.dll';
If you find that PPDouble is not declared, define it thus:
type
PPDouble = ^PDouble;
That is, pointer to pointer to double.
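For orientation, this is roughly the shape the C side expects, inferred from the signature alone (the DLL's real body is unknown, and the index order is a guess matching the SetLength call shown further down):

/* signals is a pointer to nSamples row pointers, each pointing to
 * nChannels doubles; this is why a flat Delphi open array cannot be
 * passed where double** is expected. */
int ss_scaling_subtraction(double *time, double **signals,
                           double *amplitudes, int nSamples,
                           int nChannels, double *intensities)
{
    for (int s = 0; s < nSamples; s++)
        for (int ch = 0; ch < nChannels; ch++)
        {
            double sample = signals[s][ch]; /* two dereferences */
            (void)sample;                   /* real filter work elided */
        }
    (void)time; (void)amplitudes; (void)intensities;
    return 0;
}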
Now what remains is to call the functions. Declare your arrays in Delphi as dynamic arrays. Like this:
var
time, amplitudes, intensities: TArray<Double>;
signals: TArray<TArray<Double>>;
If you have an older pre-generics Delphi then declare some types:
type
TDoubleArray = array of Double;
T2DDoubleArray = array of TDoubleArray;
Then declare the variables with the appropriate types.
Next you need to allocate the arrays, and populate any that have data passing from caller to callee.
SetLength(time, nSamples); // I'm guessing here as to the length
SetLength(signals, nSamples, nChannels); // again, guessing
Finally it is time to call the function. Now it turns out that the good designers of Delphi arranged for dynamic arrays to be stored as pointers to the first element. That means that they are a simple cast away from being used as parameters.
retval := ss_scaling_subtraction(
PDouble(time),
PPDouble(signals),
PDouble(amplitudes),
nSamples,
nChannels,
PDouble(intensities)
);
Note that the casting of the dynamic arrays seen here does rely on an implementation detail. So, some people might argue that it would be better to use, for instance, @time[0] and so on for the one-dimensional arrays, and to create an array of PDouble for signals, copying over the addresses of the first elements of the inner arrays. Personally I am comfortable with relying on this implementation detail. It certainly makes the coding a lot simpler.
One final piece of advice. Interop can be tricky. It's easy to get wrong. When you get it wrong, the code compiles, but then dies horribly at runtime. With cryptic error messages. Leading to much head scratching.
So, start with the simplest possible interface. A function that receives scalar parameters. Say, receives an integer, and returns an integer. Prove that you can do that. Then move on to floating point scalars. Then one dimensional arrays. Finally two dimensional arrays. Each step along the way, build up the complexity. When you hit a problem you'll know that it is down to the most recently added parameter.
You've not taken that approach. You've gone straight for the kill and implemented everything in your first attempt. And when it fails, you've no idea where to look. Break a problem into small pieces, and build the more complex problem out of those smaller pieces.