I am learning how to create C aggregate extensions and using libpqxx with C++ on the client side to process the data.
My toy aggregate extension has one argument of type bytea, and the state is also of type bytea. The following is the simplest example of my problem:
Server side:
PG_FUNCTION_INFO_V1( simple_func );
Datum simple_func( PG_FUNCTION_ARGS ){
    bytea *new_state = (bytea *) palloc( 128 + VARHDRSZ );
    memset( new_state, 0, 128 + VARHDRSZ );
    SET_VARSIZE( new_state, 128 + VARHDRSZ );
    PG_RETURN_BYTEA_P( new_state );
}
Client side:
std::basic_string< std::byte > buffer;
pqxx::connection c{"postgresql://user:simplepassword@localhost/contrib_regression"};
pqxx::work w(c);
c.prepare( "simple_func", "SELECT simple_func( $1 ) FROM table" );
pqxx::result r = w.exec_prepared( "simple_func", buffer );
for (auto row: r){
    cout << " Result Size: " << row[ "simple_func" ].size() << endl;
    cout << "Raw Result Data: ";
    for( int jj = 0; jj < row[ "simple_func" ].size(); jj++ )
        printf( "%02" PRIx8, (uint8_t) row[ "simple_func" ].c_str()[jj] );
    cout << endl;
}
On the client side this prints:
Result Size: 258
Raw Result Data: 5c783030303030303030303030303030...
The 30 pattern repeats until the end of the string, and the printed hex string is 512 bytes long.
I expected to receive an array of 128 bytes where every byte is set to zero. What am I doing wrong?
The libpqxx version is 7.2 and PostgreSQL 12 on Ubuntu 20.04.
Addendum
SQL statements used to install the extension:
CREATE OR REPLACE FUNCTION agg_simple_func( state bytea, arg1 bytea)
RETURNS bytea
AS '$libdir/agg_simple_func'
LANGUAGE C IMMUTABLE STRICT;
CREATE OR REPLACE AGGREGATE simple_func( arg1 bytea)
(
sfunc = agg_simple_func,
stype = bytea,
initcond = "\xFFFF"
);
The answer appears to be that, as of libpqxx 7.0 (not tested in earlier versions), bytea data must be retrieved on the client side as follows:
row[ "simple_func" ].as<std::basic_string<std::byte>>()
This retrieves the raw bytea data without any conversions, string idiosyncrasies, or the unexpected behavior I was seeing.
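For context, here is a minimal, untested sketch of how the whole retrieval might look with that accessor; it reuses the connection string, prepared statement, and placeholder table name from the snippets above and is not part of the original post:

#include <pqxx/pqxx>
#include <cstddef>
#include <cstdio>

int main() {
    pqxx::connection c{"postgresql://user:simplepassword@localhost/contrib_regression"};
    pqxx::work w(c);
    c.prepare("simple_func", "SELECT simple_func( $1 ) FROM table");

    std::basic_string<std::byte> buffer;   // binary argument, sent as bytea
    pqxx::result r = w.exec_prepared("simple_func", buffer);

    for (auto const &row : r) {
        // as<std::basic_string<std::byte>>() yields the raw bytes of the bytea
        // value rather than its textual "\x..." escape form.
        auto bytes = row["simple_func"].as<std::basic_string<std::byte>>();
        std::printf("Result size: %zu\nRaw result data: ", bytes.size());
        for (std::byte b : bytes)
            std::printf("%02x", static_cast<unsigned>(b));
        std::printf("\n");
    }
    return 0;
}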
I recommend that you tackle these things one by one: first get the function to work, testing it with psql in interactive queries, then write the client code (or vice versa).
I can't speak about libpqxx, but I have to complain about your function: what you presented won't even compile, because you wrote DATUM in upper case and omitted the headers and other required boilerplate.
This function will compile and run as you expect:
#include "postgres.h"
#include "fmgr.h"
PG_MODULE_MAGIC;
PG_FUNCTION_INFO_V1(simplest_func);
Datum simplest_func(PG_FUNCTION_ARGS) {
bytea *new_state = (bytea *) palloc(128 + VARHDRSZ);
memset(new_state, 0, 128 + VARHDRSZ);
SET_VARSIZE(new_state, 128 + VARHDRSZ);
PG_RETURN_BYTEA_P(new_state);
}
The memset will work that way, but the more idiomatic and robust way to zero the data portion of a varlena is
memset(VARDATA(new_state), 0, 128);
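Applied to the function above, the allocation and initialization could then read as follows (a sketch with the same end result: SET_VARSIZE writes the varlena header, and only the 128 data bytes are zeroed):

    bytea *new_state = (bytea *) palloc(128 + VARHDRSZ);

    SET_VARSIZE(new_state, 128 + VARHDRSZ);   /* write the varlena header      */
    memset(VARDATA(new_state), 0, 128);       /* zero only the 128 data bytes  */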
I have no idea how you got your result; since the code you presented doesn't compile, I can't tell what your function really looks like.
I am generating a binary file from a SystemVerilog simulation environment. Currently, I'm doing the following:
module main;
  byte arr[] = {0, 32, 65, 66, 67};
  initial begin
    int fh = $fopen("/home/hagain/tmp/h.bin", "w");
    for (int idx = 0; idx < arr.size; idx++) begin //{
      $fwrite(fh, "%0s", arr[idx]);
    end //}
    $fclose(fh);
    $system("xxd /home/hagain/tmp/h.bin | tee /home/hagain/tmp/h.txt");
  end
endmodule : main
The problem is that when the byte has the value 0, nothing is written to the file. The xxd output is:
0000000: 2041 4243 ABC
Same result when casting to string as follows:
$fwrite(fh, string'(arr[idx]));
I tried to change the write command to:
$fwrite(fh, $sformatf("%0c", arr[idx]));
And then the same character was written for the first two bytes ('d0 and 'd32):
0000000: 2020 4142 43 ABC
Any idea on how to generate this binary file?
You cannot have a null (0) character in the middle of a string; it is used to terminate the string.
You should use the %u format specifier for unformatted data.
module main;
  byte arr[] = {0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9};
  int fh, c, tmp;
  initial begin
    fh = $fopen("h.bin", "wb");
    for (int idx = 0; idx < arr.size; idx += 4) begin
      tmp = {<<8{arr[idx+:4]}};
      $fwrite(fh, "%u", tmp);
    end
    $fclose(fh);
    fh = $fopen("h.bin", "r");
    while ((c = $fgetc(fh)) != -1)
      $write("%d ", c[7:0]);
    $display;
  end
endmodule : main
Note that %u writes a 32-bit value in least-significant-byte-first order, so I reversed the bytes being written with the streaming operator {<<8{arr[idx+:4]}}. If the number of bytes is not divisible by 4, it will just pad the file with null bytes.
If you need the exact number of bytes, then you will have to use some DPI C code:
#include <stdio.h>
#include "svdpi.h"

void DPI_fwrite(const char *filename,
                const svOpenArrayHandle buffer) {
    int size = svSize(buffer, 1);
    char *buf = (char *)svGetArrayPtr(buffer);
    FILE *fp = fopen(filename, "wb");
    fwrite(buf, 1, size, fp);
    fclose(fp);   /* close so buffered data is flushed to the file */
}
And then import it with
import "DPI-C" function void DPI_fwrite(input string filename, byte buffer[]);
...
DPI_fwrite("filename", arr);
I'm working on an input system that would allow the user to translate input mappings between different input devices and operating systems and potentially define their own.
I'm trying to create a MaskField for an editor window where the user can select from a list of RuntimePlatforms, but selecting individual values results in multiple values being selected.
Mainly for debugging I set it up to generate an equivalent enum RuntimePlatformFlags that it uses instead of RuntimePlatform:
[System.Flags]
public enum RuntimePlatformFlags: long
{
OSXEditor=(0<<0),
OSXPlayer=(0<<1),
WindowsPlayer=(0<<2),
OSXWebPlayer=(0<<3),
OSXDashboardPlayer=(0<<4),
WindowsWebPlayer=(0<<5),
WindowsEditor=(0<<6),
IPhonePlayer=(0<<7),
PS3=(0<<8),
XBOX360=(0<<9),
Android=(0<<10),
NaCl=(0<<11),
LinuxPlayer=(0<<12),
FlashPlayer=(0<<13),
LinuxEditor=(0<<14),
WebGLPlayer=(0<<15),
WSAPlayerX86=(0<<16),
MetroPlayerX86=(0<<17),
MetroPlayerX64=(0<<18),
WSAPlayerX64=(0<<19),
MetroPlayerARM=(0<<20),
WSAPlayerARM=(0<<21),
WP8Player=(0<<22),
BB10Player=(0<<23),
BlackBerryPlayer=(0<<24),
TizenPlayer=(0<<25),
PSP2=(0<<26),
PS4=(0<<27),
PSM=(0<<28),
XboxOne=(0<<29),
SamsungTVPlayer=(0<<30),
WiiU=(0<<31),
tvOS=(0<<32),
Switch=(0<<33),
Lumin=(0<<34),
BJM=(0<<35),
}
In this linked screenshot, only the first 4 options were selected. The integer next to "Platforms: " is the mask itself.
I'm not a bitwise wizard by a large margin, but my assumption is that this occurs because EditorGUILayout.MaskField returns a 32-bit int value, and there are over 32 enum options. Are there any workarounds for this, or is something else causing the issue?
The first thing I noticed is that all the values inside that enum are the same, because you are shifting the value 0 to the left, which always yields 0. You can observe this by logging your values with this script:
// Shifts the value 0 to the left, printing "0" 36 times.
for (int i = 0; i < 36; i++) {
    Debug.Log(System.Convert.ToString((0 << i), 2));
}

// Shifts the value 1 to the left; note that once i reaches 32 the result
// wraps around, because the literal 1 is a 32-bit int.
for (int i = 0; i < 36; i++) {
    Debug.Log(System.Convert.ToString((1 << i), 2));
}
The reason basing the enum on long does not help by itself is bit shifting. Check out this example I found about the issue:
UInt32 x = ....;
UInt32 y = ....;
UInt64 result = (x << 32) + y;
The programmer intended to form a 64-bit value from two 32-bit ones by shifting 'x' by 32 bits and adding the most significant and the least significant parts. However, as 'x' is a 32-bit value at the moment when the shift operation is performed, shifting by 32 bits will be equivalent to shifting by 0 bits, which will lead to an incorrect result.
So you also need to make the shifted value a long, with either an L suffix or a cast, like this:
public enum RuntimePlatformFlags : long {
OSXEditor = (1 << 0),
OSXPlayer = (1 << 1),
WindowsPlayer = (1 << 2),
OSXWebPlayer = (1 << 3),
// With literals.
tvOS = (1L << 32),
Switch = (1L << 33),
// Or with casts.
Lumin = ((long)1 << 34),
BJM = ((long)1 << 35),
}
I'm new to PostgreSQL C functions and I started by following the examples.
I want to write a simple function that runs an SQL query based on the received parameter and returns 2 fields, the sums of 2 columns, kept separate (to be simpler for now).
The function below fails this check:
(get_call_result_type(fcinfo, &resultTypeId, &resultTupleDesc) != TYPEFUNC_COMPOSITE)
If I remove this line, I get 1 integer as the result from this query:
select * from pdc_imuanno(2012);
and an error from
select (a).* from pdc_imuanno(2012) a;
because it is not a composite type.
My question is how I should prepare the tuple template, if this is not correct:
resultTupleDesc = CreateTemplateTupleDesc(2, false);
TupleDescInitEntry(resultTupleDesc, (AttrNumber) 1, "abp1", FLOAT4OID, -1, 0);
TupleDescInitEntry(resultTupleDesc, (AttrNumber) 2, "abp2", FLOAT4OID, -1, 0);
Also, in
get_call_result_type(fcinfo, &resultTypeId, &resultTupleDesc)
what is fcinfo and where does it come from?
source table:
CREATE TABLE imu.calcolo (
    codfis character varying(16) NOT NULL,
    anno integer NOT NULL,
    abp1 numeric,
    abp2 numeric,
    CONSTRAINT imucalcolo_pkey PRIMARY KEY (codfis, anno)
)
WITH ( OIDS=FALSE );
-------------------------------------------------------
#include "postgres.h"
#include "fmgr.h"
#include "catalog/pg_type.h"
#include "funcapi.h"
#include "executor/spi.h"
#include "lib/stringinfo.h"
#include "miscadmin.h"
#include <math.h>
#include "utils/builtins.h"
#include "utils/guc.h"
#include "utils/lsyscache.h"
#include "utils/memutils.h"
#include "utils/numeric.h"
#include "access/htup_details.h"
#ifdef PG_MODULE_MAGIC
PG_MODULE_MAGIC;
#endif
PG_FUNCTION_INFO_V1(test_query);
Datum test_query(PG_FUNCTION_ARGS);
Datum
test_query(PG_FUNCTION_ARGS)
{
    TupleDesc   resultTupleDesc, tupledesc;
    bool        bisnull, cisnull;
    Oid         resultTypeId;
    Datum       retvals[2];
    bool        retnulls[2];
    HeapTuple   rettuple;
    char        query[1024];
    int         ret;
    int         proc;
    int         j;
    float       abp1 = 0;
    float       abp2 = 0;

    sprintf(query, "SELECT anno, abp1::real, abp2::real "
                   "FROM imu.calcolo WHERE anno = %d;", PG_GETARG_INT32(0));

    SPI_connect();
    ret = SPI_exec(query, 0);
    proc = SPI_processed;

    if (ret > 0 && SPI_tuptable != NULL)
    {
        HeapTuple       tuple;
        SPITupleTable  *tuptable = SPI_tuptable;

        tupledesc = SPI_tuptable->tupdesc;
        for (j = 0; j < proc; j++)
        {
            tuple = tuptable->vals[j];
            abp1 += DatumGetFloat4(SPI_getbinval(tuple, tupledesc, 2, &bisnull));
            abp2 += DatumGetFloat4(SPI_getbinval(tuple, tupledesc, 3, &cisnull));
        }
    }

    resultTupleDesc = CreateTemplateTupleDesc(2, false);
    TupleDescInitEntry(resultTupleDesc, (AttrNumber) 1, "abp1", FLOAT4OID, -1, 0);
    TupleDescInitEntry(resultTupleDesc, (AttrNumber) 2, "abp2", FLOAT4OID, -1, 0);

    if (get_call_result_type(fcinfo, &resultTypeId, &resultTupleDesc) != TYPEFUNC_COMPOSITE) {
        ereport(ERROR, (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
                        errmsg("function returning record called in context that cannot accept type record")));
    }
    resultTupleDesc = BlessTupleDesc(resultTupleDesc);

    SPI_finish();

    retvals[0] = Float4GetDatum(abp1);
    retvals[1] = Float4GetDatum(abp2);
    retnulls[1] = bisnull;
    retnulls[2] = cisnull;
    rettuple = heap_form_tuple( resultTupleDesc, retvals, retnulls );

    PG_RETURN_DATUM( HeapTupleGetDatum( rettuple ) );
}
creation function:
CREATE FUNCTION pdc_imuanno(integer)
RETURNS float
AS 'pdc','test_query'
LANGUAGE C STABLE STRICT;
ALTER FUNCTION pdc_imuanno(integer) OWNER TO www;
query:
select * from pdc_imuanno(2012);
OK, I found the simple and silly error:
I had created the function as returning a single field, yet I expected it to return 2 fields.
With this creation SQL it works:
CREATE FUNCTION pdc_imuanno(integer)
RETURNS TABLE(abp1 real, abp2 real)
AS 'pdc','test_query'
LANGUAGE C STABLE STRICT;
Anyway, it works with a limited number of rows to sum, but if I increase the number of rows it crashes at this point:
rettuple = heap_form_tuple( resultTupleDesc, retvals, retnulls);
I guess there is some error in the types of the values: I query the table fields cast from numeric to real, fetch them as float4, and output them as Datum.
Where is my error?
Thanks a lot for any help.
This is my first post on Stack Overflow.
Luca
in:
get_call_result_type(fcinfo, &resultTypeId, &resultTupleDesc)
fcinfo - what is it and where does it come from?
fcinfo is part of PG_FUNCTION_ARGS in the V1 calling convention. It is the context of the function call and contains all sorts of details, like the parameters. Much of this is handled behind the scenes by the PG_GETARG macros and friends, but you need to pass it around to helper functions yourself.
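To make that concrete, here is a minimal, hypothetical sketch (demo_func and the helper are illustrations, not code from this thread): PG_FUNCTION_ARGS is just a macro for the FunctionCallInfo fcinfo parameter, so a helper that needs call-site information simply takes fcinfo as an explicit argument.

#include "postgres.h"
#include "fmgr.h"
#include "funcapi.h"

PG_MODULE_MAGIC;

/* Resolve the expected result row type, erroring out if the call site
 * cannot accept a composite result.  It needs the call context, so the
 * caller hands its own fcinfo down explicitly. */
static TupleDesc
result_tupdesc_or_error(FunctionCallInfo fcinfo)
{
    TupleDesc tupdesc;

    if (get_call_result_type(fcinfo, NULL, &tupdesc) != TYPEFUNC_COMPOSITE)
        ereport(ERROR,
                (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
                 errmsg("function returning record called in context "
                        "that cannot accept type record")));
    return tupdesc;
}

PG_FUNCTION_INFO_V1(demo_func);

/* PG_FUNCTION_ARGS expands to "FunctionCallInfo fcinfo". */
Datum
demo_func(PG_FUNCTION_ARGS)
{
    TupleDesc tupdesc = result_tupdesc_or_error(fcinfo);

    /* ... build and return a tuple with heap_form_tuple(tupdesc, ...) ... */
    (void) tupdesc;
    PG_RETURN_NULL();
}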
The rest of the question is hard to understand because of the typos etc. I'm guessing that you want to write a function in C to add two values together and return the result? If so, that doesn't make much sense to return as two fields in a composite type. Please show the relevant CREATE TYPE statements, the CREATE OR REPLACE FUNCTION declaration you used to define the function, and explain what you are trying to do and why.
(If you edit your question, post a comment in reply to this answer, otherwise I won't know you edited.)
Does Kyoto Cabinet support searching for a range of keys?
If so, what types of keys do support range search?
Can I do range search on a long (64bit) key?
Thanks
RG
It supports key prefix queries; however, the efficiency of a prefix query depends on the internal storage structure. If you are using the hash database, it may not be a good idea, as keys and values are scattered around the underlying file.
Yes, for integers.
"B+ tree database supports sequential access in order of the keys, which realizes forward matching search for strings and range search for integers" - from the docs.
Yes you can, you just need a forward jump.
An example using C: it stores 5 records with 64-bit keys (1 to 5) and then applies a range filter (2 to 4):
#include <kclangc.h>
#include <inttypes.h>
int main(void)
{
    KCDB *db;
    KCCUR *cur;
    char *kbuf;
    size_t ksiz, vsiz;
    const char *cvbuf;
    int64_t i, val, min, max;
    int64_t keys[] = {1, 2, 3, 4, 5};
    const char *values[] = {"one", "two", "three", "four", "five"};
    char i64[8]; /* A buffer to store byte sequences */

    /* create the database object */
    db = kcdbnew();

    /* open the database */
    if (!kcdbopen(db, "db64.kct", KCOWRITER | KCOCREATE)) {
        fprintf(stderr, "open error: %s\n", kcecodename(kcdbecode(db)));
    }

    /* store records */
    for (i = 0; i < 5; i++) {
        memcpy(i64, &keys[i], 8);
        if (!kcdbset(db, i64, 8, values[i], strlen(values[i]))) {
            fprintf(stderr, "set error: %s\n", kcecodename(kcdbecode(db)));
            exit(EXIT_FAILURE);
        }
    }

    /* traverse records */
    min = 2;
    max = 4;
    printf("Range from %" PRId64 " to %" PRId64 "\n", min, max);
    memcpy(i64, &min, 8);
    cur = kcdbcursor(db);
    kccurjumpkey(cur, i64, 8);
    while ((kbuf = kccurget(cur, &ksiz, &cvbuf, &vsiz, 1)) != NULL) {
        memcpy(&val, kbuf, 8);
        if (val > max) {
            break;
        }
        printf("Found %s\n", cvbuf);
        kcfree(kbuf);
    }
    kccurdel(cur);

    /* close the database */
    if (!kcdbclose(db)) {
        fprintf(stderr, "close error: %s\n", kcecodename(kcdbecode(db)));
    }

    /* delete the database object */
    kcdbdel(db);
    return 0;
}
LevelDB supports binary keys and range queries.
Edit: I forgot to mention that for the range query to work, the binary value needs to be packed in a comparable way. For your long example, you need to make sure it is big-endian encoded.
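For illustration, a 64-bit key might be packed big-endian like this (a sketch only, assuming LevelDB's default bytewise comparator; pack_key_be is just a helper name, not part of any API):

#include <cstdint>
#include <string>

// Pack a 64-bit integer most-significant byte first so that bytewise
// comparison of the packed keys matches numeric order.
std::string pack_key_be(std::uint64_t k) {
    std::string out(8, '\0');
    for (int i = 7; i >= 0; --i) {
        out[i] = static_cast<char>(k & 0xFF);
        k >>= 8;
    }
    return out;
}

// Sketch of a range scan with LevelDB's iterator API (Seek/Valid/Next),
// assuming `it` came from db->NewIterator(leveldb::ReadOptions()):
//
//   std::string lo = pack_key_be(1000), hi = pack_key_be(2000);
//   for (it->Seek(lo); it->Valid() && it->key().ToString() <= hi; it->Next()) {
//       // keys visited here fall inside [1000, 2000]
//   }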
I'm trying to make it easy to move binary data between Perl and my C++ library.
I created a C++ struct to hold the binary data:
struct binary_data {
    unsigned long  length;
    unsigned char *data;
};
In my SWIG interface file I have the following:
%typemap(in) binary_data * (binary_data temp) {
    STRLEN len;
    unsigned char *outPtr;
    if(!SvPOK($input))
        croak("argument must be a scalar string");
    outPtr = (unsigned char*) SvPV($input, len);
    printf("set binary_data '%s' [%d] (0x%X)\n", outPtr, len, $input);
    temp.data = outPtr;
    temp.length = len;
    $1 = &temp;
}

%typemap(out) binary_data * {
    SV *obj = sv_newmortal();
    if ($1 != 0 && $1->data != 0 && $1->length > 0) {
        sv_setpvn(obj, (const char*) $1->data, $1->length);
        printf("get binary_data '%s' [%d] (0x%X)\n", $1->data, $1->length, obj);
    } else {
        sv_setsv(obj, &PL_sv_undef);
        printf("get binary_data [set to undef]\n");
    }
    if( !SvPOK(obj) )
        croak("The result is not a scalar string");
    $result = obj;
}
I build my Perl module via ExtUtils::MakeMaker and it's all good.
I then run the following Perl test script to ensure the binary data is being set and retrieved from a Perl string correctly:
my $fr = ObjectThatContainsBinaryData->new();
my $data = "1234567890";
print ">>>PERL:swig_data_set\n";
$fr->swig_data_set($data);
print "<<<PERL:swig_data_set\n";
print ">>>PERL:swig_data_get\n";
my $rdata = $fr->swig_data_get();
print "<<<PERL: swig_data_get\n";
print "sent :" . \$data . " len=" . length($data). " '$data'\n"
."recieved:". \$rdata. " len=" . length($rdata). " '$rdata'\n";
Now the combined C++ and Perl output on stdout is:
>>>PERL:swig_data_set
set binary_data '1234567890' [10] (0x12B204D0)
<<<PERL:swig_data_set
>>>PERL:swig_data_get
get binary_data '1234567890' [10] (0x1298E4E0)
<<<PERL: swig_data_get
sent :SCALAR(0x12b204d0) len=10 '1234567890'
recieved:SCALAR(0x12bc71c0) len=0 ''
So why does it look like the Perl call to sv_setpvn is failing or not working?
I don't know why the returned binary data prints as an empty scalar in Perl, while it looks fine inside the SWIG C++ typemap.
I'm using:
Perl v5.8.8 built for x86_64-linux-thread-multi
SWIG 2.0.1
gcc version 4.1.1 20070105 (Red Hat 4.1.1-52)
If you replace the following line in your %typemap(out):
$result = obj;
With
$result = obj; argvi++; //This is a hack to get the hidden stack pointer to increment before the return
The SWIG-generated code will now look like:
...
ST(argvi) = obj; argvi++;
}
XSRETURN(argvi);
}
And your test script will return the Perl string as expected:
SV = PV(0x1eae7d40) at 0x1eac64d0
REFCNT = 1
FLAGS = (PADBUSY,PADMY,POK,pPOK)
PV = 0x1eb25870 "1234567890"\0
CUR = 10
LEN = 16
<<<PERL: swig_data_get
sent :SCALAR(0x1ea64530) len=10 '1234567890'
recieved:SCALAR(0x1eac64d0) len=10 '1234567890'
You should have read the SWIG 2.0 documentation on typemaps in Perl more closely:
"
30.8.2 Return values
Return values are placed on the argument stack of each wrapper function. The current value of the argument stack pointer is contained in a variable argvi. Whenever a new output value is added, it is critical that this value be incremented. For multiple output values, the final value of argvi should be the total number of output values.
"
What if you don't make it mortal? I was doing testing with Inline::C (since I've never used SWIG), and setting the SV to mortal caused problems since Inline::C was doing it for me. Perhaps SWIG uses a similar design?
Both
SV* obj = newSV(0);
sv_setpvn(obj, "abc", 3);
and
SV* obj = newSVpvn("abc", 3);
worked with Inline::C.
SWIG provides a module named cdata.i.
You should include this in the interface definition file.
Once you include it, it gives you two functions, cdata() and memmove(). Given a void * and the length of the binary data, cdata() converts it into a string type of the target language.
memmove() is the reverse: given a string, it copies the contents of the string (including embedded null bytes) into the C void * memory.
Handling binary data becomes much simpler with this module.
I hope this is what you need.
On the Perl side, could you add
use Devel::Peek;
Dump($fr->swig_data_get());
and provide the output? Thanks.