Is it possible to insert binary data in a Cassandra (CQL3) INSERT statement? - nosql

My goal is to populate Cassandra with some data using a script.
I'm looking for something like:
CREATE TABLE simplex.songs (id uuid PRIMARY KEY, img blob);
INSERT INTO simplex.songs (id, img) VALUES(2cc9ccb7-6221-4ccb-8387-f22b6a1b354d, hexAsBlob({hex representation of my image}));
or
INSERT INTO simplex.songs (id, img) VALUES(2cc9ccb7-6221-4ccb-8387-f22b6a1b354d, readFromFile({ image file name}));
Is it possible? I know hexAsBlob and readFromFile do not exist, but maybe there is something similar?
And because it's a script, I cannot use a BoundStatement.

You can use a hexadecimal literal in CQL.
For example:
INSERT INTO simplex.songs (id, img)
VALUES (2cc9ccb7-6221-4ccb-8387-f22b6a1b354d, 0xaa001112);
From the CQL3 documentation: a blob constant is a hexadecimal number defined by 0[xX](hex)+, where hex is a hexadecimal character, e.g. [0-9a-fA-F].
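If you are generating the CQL script from Python, here is a sketch of one way to do it (the image file name is a placeholder): hex-encode the file once and emit an INSERT with a 0x... blob literal that cqlsh will accept.
# Sketch: build a cqlsh-ready INSERT with a hex blob literal (assumed file name)
import binascii
import uuid

with open("cover.jpg", "rb") as f:
    blob_literal = "0x" + binascii.hexlify(f.read()).decode("ascii")

print("INSERT INTO simplex.songs (id, img) VALUES (%s, %s);" % (uuid.uuid4(), blob_literal))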

Related

How can I select just the first X characters from a text column

On my PostgreSQL DB I have a table that looks like below. The description column can store a string of any size.
What I'm looking for is a way to select just the first X chars from the content of the description column, or the whole string if X > description.length.
CREATE TABLE descriptions (
id uuid NOT NULL,
description text NULL
);
E.g.: If X = 100 chars and if in the description column I store a string containing 150+ chars, when I run select <some method on description> from descriptions, I just want to get back the first 100 chars from the description column.
Bonus if the approach proposed is extremely fast!
Use a type cast or use substring():
SELECT CAST(description AS varchar(100)), substring(description for 100)
FROM descriptions;
Or if you want to do it the "PostgreSQL way":
SELECT description::varchar(100), substr(description, 1, 100)
FROM descriptions;
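If you want to run this from a script, here is a small psycopg2 sketch (the connection string is an assumption); the length is passed as a bind parameter, so the same statement works for any X.
# Sketch: fetch the first X characters with a parameterized length
import psycopg2

conn = psycopg2.connect("dbname=test")  # assumed connection string
cur = conn.cursor()
cur.execute("SELECT substr(description, 1, %s) FROM descriptions", (100,))
for (short_description,) in cur.fetchall():
    print(short_description)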

Adding Empty Document to Couchbase bucket

As context, I am creating a bucket of keys with empty documents as values, so that I can quickly check IDs just by checking key existence instead of comparing values. In the cluster, I have two buckets, source-bucket and new-bucket. The documents in source-bucket are in the form:
ID: {
ID: ...,
type: ...
}
You can move the contents of source to the new bucket using the query
INSERT INTO `new-bucket` (KEY k, VALUE v) SELECT meta(v).id AS k, v FROM `source-bucket` AS v
Is there a way to copy over just the key? Something along the lines of this (although this example doesn't work):
INSERT INTO `new-bucket` (KEY k, VALUE v) values (SELECT meta().id FROM `source-bucket`, NULL)
I guess I'm not familiar enough with the N1QL syntax to understand how to construct a query like this. Let me know if you have an answer to this. If this is a duplicate, feel free to point to the answer.
If you need an empty object, use {}.
CREATE PRIMARY INDEX ON `source-bucket`;
INSERT INTO `new-bucket` (KEY k, VALUE {})
SELECT meta(b).id AS k FROM `source-bucket` as b
NOTE: the document value can be an empty object or any data type. The following are all valid:
INSERT INTO default VALUES ("k01", {"a":1});
INSERT INTO default VALUES ("k02", {});
INSERT INTO default VALUES ("k03", 1);
INSERT INTO default VALUES ("k04", "aa");
INSERT INTO default VALUES ("k05", true);
INSERT INTO default VALUES ("k06", ["aa"]);
INSERT INTO default VALUES ("k07", NULL);

insert a CSV file with polygons into PostgreSQL

I've got a CSV file including 3 columns named boundaries, category, and city.
The data in every cell below the "boundaries" column looks something like this:
{"coordinates"=>[[[-79.86938774585724, 43.206149439482836], [-79.87618446350098, 43.19090988330086], [-79.88626956939697, 43.19328385965552], [-79.88325476646423, 43.200029828720744], [-79.8932647705078, 43.20258723593195], [-79.88930583000183, 43.211150250203886], [-79.86938774585724, 43.206149439482836]]], "type"=>"Polygon"}
How can I create a table with a proper data type for the "boundaries" column?
The data you have specified is in JSON format, so one option is to store the boundaries data in a jsonb table column.
e.g: CREATE TABLE cities ( city varchar, category varchar, boundaries jsonb)
The alternative is to parse the JSON and store the coordinates in a PostgreSQL ARRAY column, something like:
CREATE TABLE cities (
city varchar,
category varchar,
boundary_coords point ARRAY,
boundary_type varchar
)
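If you go with the jsonb option, here is a rough loading sketch (mine, not from the answer; the file name and connection string are placeholders): rewrite the Ruby-hash-style "=>" as ":" so each cell parses as JSON, then insert it with psycopg2.
# Sketch: load the CSV into the jsonb variant of the cities table
import csv
import json
import psycopg2
from psycopg2.extras import Json

conn = psycopg2.connect("dbname=gis")  # assumed connection string
cur = conn.cursor()

with open("cities.csv") as f:          # assumed file name
    for row in csv.DictReader(f):
        # The simple replace works because "=>" never appears inside the coordinate data
        boundaries = json.loads(row["boundaries"].replace("=>", ":"))
        cur.execute(
            "INSERT INTO cities (city, category, boundaries) VALUES (%s, %s, %s)",
            (row["city"], row["category"], Json(boundaries)),
        )

conn.commit()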

Save a file (.pdf) in the database with Python 2.7

Craig Ringer: I cannot work with the large object functions.
My database looks like this; this is my table:
-- Table: files
--
DROP TABLE files;
CREATE TABLE files
(
id serial NOT NULL,
orig_filename text NOT NULL,
file_data bytea NOT NULL,
CONSTRAINT files_pkey PRIMARY KEY (id)
)
WITH (
OIDS=FALSE
);
ALTER TABLE files
I want to save a .pdf in my database. I saw your last answer (read the file and convert it to a buffer object, or use the large object functions), but I am using Python 2.7.
The code I wrote looks like this:
path="D:/me/A/Res.pdf"
listaderuta = path.split("/")
longitud=len(listaderuta)
f = open(path,'rb')
f.read().__str__()
cursor = con.cursor()
cursor.execute("INSERT INTO files(id, orig_filename, file_data) VALUES (DEFAULT,%s,%s) RETURNING id", (listaderuta[longitud-1], f.read()))
But when I download it, I save it like this:
fula = open("D:/INSTALL/pepe.pdf",'wb')
cursor.execute("SELECT file_data, orig_filename FROM files WHERE id = %s", (int(17),))
(file_data, orig_filename) = cursor.fetchone()
fula.write(file_data)
fula.close()
But when I download it, the file cannot be opened; it is damaged. I repeat, I cannot work with the large object functions.
I tried this and this is what I got; can you help?
I am thinking that the psycopg2 Binary function does not use lob functions, thus I used:
path="salman.pdf"
f = open(path,'rb')
dat = f.read()
binary = psycopg2.Binary(dat)
cursor.execute("INSERT INTO files(id, file_data) VALUES ('1',%s)", (binary,))
conn.commit()
Just a correction to the INSERT statement: it will fail with 'null value in column "orig_filename" violates not-null constraint', as orig_filename is defined as NOT NULL. Use this instead:
("INSERT INTO files(id, orig_filename,file_data) VALUES ('1','filename.pdf',%s)", (binary,))

How to create and store an array of objects in PostgreSQL

In PostgreSQL the allowed array types are integer and text, but I need to create an array of objects. How can I do that?
myarray text[]; //for text ['a','b','c']
myarray integer[]; //for integer[1,2,3]
I need to create an array like the one below:
[{'dsad':1},{'sdsad':34.6},{'sdsad':23}]
I don't want to use the JSON type. Using an array type, I need to store the array of objects.
If you're running Postgres 9.2+, you can use the JSON type.
For example, we could do
create table jsontest (id serial primary key, data json);
insert into jsontest (data) values ('[{"dsad":1},{"sdsad":34.6},{"sdsad":23}]');
And query the data with
select data->1 from jsontest;
{"sdsad":34.6}
You say:
I don't want to use the JSON type
but you cannot use an ordinary array, as PostgreSQL arrays must be of homogeneous types. You can't have a 2-dimensional array of text and integer.
What you could do if you don't want to use json is to create a composite type:
CREATE TYPE my_pair AS (blah text, blah2 integer);
SELECT ARRAY[ ROW('dasd',2), ROW('sdsad', 34.6), ROW('sdsad', 23) ]::my_pair[]
which will emit:
array
----------------------------------------
{"(dasd,2)","(sdsad,35)","(sdsad,23)"}
(1 row)
If you don't want that, then json is probably your best bet. Or hstore:
SELECT hstore(ARRAY['a','b','c'], ARRAY[1,2,3]::text[])
hstore
------------------------------
"a"=>"1", "b"=>"2", "c"=>"3"
(1 row)
JSON is probably your preferred answer, but here is more info as to why.
You can do something like:
SELECT array_agg(v)
FROM mytable v;
However you get something that looks like this:
{"(dsad,1)","(sdsad,34.6)","(""sdsad,var"",23)"}
It is then up to you to know how to decode this (i.e. column order). This is possible to do programmatically but is much easier with JSON.
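For what it's worth, here is a small sketch (mine, with an assumed connection string) of why JSON is easier to decode on the client: psycopg2 converts a json/jsonb column into Python lists and dicts automatically, so there is no column-order bookkeeping. It reuses the jsontest table from the earlier answer.
# Sketch: fetch the json column and get native Python objects back
import psycopg2

conn = psycopg2.connect("dbname=test")  # assumed connection string
cur = conn.cursor()
cur.execute("select data from jsontest where id = %s", (1,))
(data,) = cur.fetchone()
print(data[1]["sdsad"])  # -> 34.6, already a Python float inside a dict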
It's hacky, but what about using an array for each property in the object (and its corresponding scalar type)? If you have a data-model layer in your get/read, you could put the arrays "back together" into an array of objects, and in your save method you would break your objects apart into synchronized arrays. This might be complicated by your example of each object not having the same properties; I don't know how you'd store undefined for a property unless you're willing for null to mean the same thing semantically.
It's not entirely clear if you mean json:
# select '[{"dsad":1},{"sdsad":34.6},{"sdsad":23}]'::json;
json
------------------------------------------
[{"dsad":1},{"sdsad":34.6},{"sdsad":23}]
(1 row)
Or an array of json:
# select array['{"dsad":1}', '{"sdsad":34.6}', '{"sdsad":23}']::json[];
array
------------------------------------------------------
{"{\"dsad\":1}","{\"sdsad\":34.6}","{\"sdsad\":23}"}
(1 row)
Or perhaps hstore? If the latter, it's only for key-value pairs, but you can likewise use an array of hstore values.
You can do something like:
SELECT JSON_AGG(v) FROM mytable v;
However you get something that looks like this:
["00000000-0000-0000-0000-000000000001","00000000-0000-0000-0000-000000000002", "00000000-0000-0000-0000-000000000003"]
Example:
SELECT title, (select JSON_AGG(v.video_id) FROM videos v WHERE v.channel_id = c.channel_id) AS videos FROM channel AS c
Use text[] myarray instead of myarray text[].