Cassandra CQL ALTER Table with timeuuid column - cql3

I tried ALTER TABLE using a timeuuid value as the column name -
cqlsh:dbase> ALTER TABLE jdata ADD 4f8eca60-1498-11e4-b6e6-ed7706c00c12 timeuuid;
Bad Request: line 1:24 no viable alternative at input '4f8eca60-1498-11e4-b6e6-ed7706c00c12'
So, I next tried with quotes and it worked -
cqlsh:dbase> ALTER TABLE jdata ADD "4f8eca60-1498-11e4-b6e6-ed7706c00c12" timeuuid;
cqlsh:dbase>
But the table description now looks ugly, with the column name in quotes -
cqlsh:dbase> describe columnfamily jdata;
CREATE TABLE jdata (
abc text,
"4f8eca60-1498-11e4-b6e6-ed7706c00c12" timeuuid,
xyz text,
PRIMARY KEY ((abc), xyz)
) WITH
bloom_filter_fp_chance=0.010000 AND
blah blah;
So I need help with an ALTER command that creates a timeuuid-named column in CQL without the quotes.

The NAME of a column is a string by definition, so you can't use anything other than a string as a column name. More to the point, an unquoted CQL identifier must start with a letter, which is why a name that begins with a digit fails to parse:
create table invalidnames (2 text primary key, 5 int);
Bad Request: line 1:47 mismatched input '5' expecting ')'
while with ordinary string names it works:
create table validnames (two text primary key, five int);
The name of a column and the type of a column are not related in any way.
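If you want to avoid the quotes entirely, one option (just a sketch; the prefixed name below is an illustration, not a requirement) is to turn the timeuuid into a legal unquoted identifier by prefixing a letter and replacing the hyphens with underscores:
ALTER TABLE jdata ADD t_4f8eca60_1498_11e4_b6e6_ed7706c00c12 timeuuid;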
HTH,
Carlo

Related

How to change a default separator for postgresql arrays?

I want to import a CSV containing Postgres arrays into a Postgres table.
This is my table:
create table dbo.countries (
id char(2) primary key,
name text not null,
elements text[],
CONSTRAINT const_dbo_countries_unique1 unique (id),
CONSTRAINT const_dbo_countries_unique2 unique (name)
);
and I want to insert into that a csv which looks like this:
AC,ac,{xx yy}
When I run copy dbo.countries FROM '/home/file.csv' delimiter ',' csv; the array is read as one string: {"xx yy"}.
How do I change the default separator for arrays from , to a space?
You cannot change the array's separator symbol. But you can load the data in as-is and then run an update on the table:
UPDATE dbo.countries
SET elements = string_to_array(elements[1], ' ')
WHERE strpos(elements[1], ' ') > 0;
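For completeness, the whole flow would look something like this (a sketch, assuming the dbo.countries table and file path from the question):
copy dbo.countries from '/home/file.csv' delimiter ',' csv;
-- elements is now {"xx yy"}; run the UPDATE above, after which:
select elements from dbo.countries where id = 'AC';  -- {xx,yy}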

access postgres field given field name as text string

I have a table in postgres:
create table fubar (
name1 text,
name2 text, ...,
key integer);
I want to write a function which returns field values from fubar given the column names:
function getFubarValues(col_name text, key integer) returns text ...
where getFubarValues returns the value of the specified column in the row identified by key. Seems like this should be easy.
I'm at a loss. Can someone help? Thanks.
Klin's answer is a good (i.e. safe) approach to the question as posed, but it can be simplified:
PostgreSQL's -> operator allows expressions. For example:
CREATE TABLE test (
id SERIAL,
js JSON NOT NULL,
k TEXT NOT NULL
);
INSERT INTO test (js,k) VALUES ('{"abc":"def","ghi":"jkl"}','abc');
SELECT js->k AS value FROM test;
Produces
value
-------
"def"
So we can combine that with row_to_json:
CREATE TABLE test (
id SERIAL,
a TEXT,
b TEXT,
k TEXT NOT NULL
);
INSERT INTO test (a,b,k) VALUES
('foo','bar','a'),
('zip','zag','b');
SELECT row_to_json(test)->k AS value FROM test;
Produces:
value
-------
"foo"
"zag"
Here I'm getting the key from the table itself, but of course you could get it from any source or expression; it's just a value. Also note that the result returned is a JSON value type (it doesn't know whether it's text, numeric, or boolean). If you want it to be text, just cast it: (row_to_json(test)->k)::TEXT
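Be aware that the cast keeps the JSON quoting as part of the string (you get "foo", quotes included). If you want the bare text, the ->> operator returns the right-hand value as unquoted text; against the same test table:
SELECT row_to_json(test)->>k AS value FROM test;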
Now that the question itself is answered, here's why you shouldn't do this, and what you should do instead!
Never trust any data. Even if it already lives inside your database, you shouldn't trust it. The method I've posted here is safe against SQL injection attacks, but an attacker could still set k to 'id' and see a column which was not intended to be visible to them.
A much better approach is to structure your data with this type of query in mind. Postgres has some excellent datatypes for this; HSTORE and JSON/JSONB. Merge your dynamic columns into a single column with one of those types (I'd suggest HSTORE for its simplicity and generally being more complete).
This has several advantages: your schema is well-defined and does not need to change if you add more dynamic columns, you do not need to perform expensive re-casting (i.e. row_to_json), and you are able to take advantage of indexes on your columns (thanks to PostgreSQL's functional indexes).
The equivalent to the code I wrote above would be:
CREATE EXTENSION HSTORE; -- necessary if you're not already using HSTORE
CREATE TABLE test (
id SERIAL,
cols HSTORE NOT NULL,
k TEXT NOT NULL
);
INSERT INTO test (cols,k) VALUES
('a=>"foo",b=>"bar"','a'),
('a=>"zip",b=>"zag"','b');
SELECT cols->k AS value FROM test;
Or, for automatic escaping of your values when inserting, you can use one of:
INSERT INTO test (cols,k) VALUES
(hstore( 'a', 'foo' ) || hstore( 'b', 'bar' ), 'a'),
(hstore( ARRAY['a','b'], ARRAY['zip','zag'] ), 'b');
See http://www.postgresql.org/docs/9.1/static/hstore.html for more details.
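To illustrate the indexing advantage mentioned above, a sketch of the two obvious options against the same test table (the index names are arbitrary):
-- expression index on a single key:
CREATE INDEX test_cols_a_idx ON test ((cols->'a'));
-- or a GIN index supporting containment queries over the whole column:
CREATE INDEX test_cols_gin_idx ON test USING GIN (cols);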
You can use dynamic SQL to select a column by name:
create or replace function get_fubar_values (col_name text, row_key integer)
returns setof text language plpgsql as $$begin
return query execute 'select ' || quote_ident(col_name) ||
' from fubar where key = $1' using row_key;
end$$;
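Usage is then straightforward (a hypothetical call; it assumes a fubar row with key = 1 exists):
select * from get_fubar_values('name1', 1);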

postgresql use column name value when quoted with single quotes

I'm trying to update an hstore key's value with a column from another table. The syntax is as simple as
SET misc = misc || ('domain' => temp.domain)
but I get an error because everything in the parentheses should be quoted:
SET misc = misc || ('domain=>temp.domain')::hstore
But this inserts temp.domain as a literal string, not its value. How can I pass the value of temp.domain instead?
You can concatenate text with a subquery, and cast the result to type hstore.
create temp table temp (
temp_id integer primary key,
domain text
);
insert into temp values (1, 'wibble');
select ('domain => ' || (select domain from temp where temp_id = 1) )::hstore as key_value
from temp
 key_value (hstore)
--------------------
 "domain"=>"wibble"
Updates would work in a similar way.
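For example, a sketch of such an update (the target table mytable, its misc column, and the temp_id join column are hypothetical; note that the hstore(key, value) constructor also sidesteps the quoting problem from the question):
update mytable m
set misc = misc || hstore('domain', t.domain)
from temp t
where m.temp_id = t.temp_id;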

Import CSV text array into PostgreSQL 9.2

I have data something like this:
Akhoond,1,Akhoond,"{""Akhund"", ""Akhwan""}",0
pgAdmin's import is rejecting this. What format does the text[] need to be in the CSV?
I also tried this:
Akhoond,1,Akhoond,"{Akhund, Akhwan}",0
Here's the table create:
CREATE TABLE private."Titles"
(
"Abbrev" text NOT NULL,
"LangID" smallint NOT NULL REFERENCES private."Languages" ("LangID"),
"Full" text NOT NULL,
"Alt" text[],
"Affix" bit
)
WITH (
OIDS=FALSE
);
ALTER TABLE private."Titles" ADD PRIMARY KEY ("Abbrev", "LangID");
CREATE INDEX ix_titles_alt ON private."Titles" USING GIN ("Alt");
ALTER TABLE private."Titles"
OWNER TO postgres;
The best way to find out is to create a table with the desired values and COPY ... TO STDOUT to see:
craig=> CREATE TABLE copyarray(a text, b integer, c text[], d integer);
CREATE TABLE
craig=> insert into copyarray(a,b,c,d) values ('Akhoond',1,ARRAY['Akhund','Akhwan'],0);
INSERT 0 1
craig=> insert into copyarray(a,b,c,d) values ('Akhoond',1,ARRAY['blah with spaces','blah,with,commas''and"quotes'],0);
INSERT 0 1
craig=> \copy copyarray TO stdout WITH (FORMAT CSV)
Akhoond,1,"{Akhund,Akhwan}",0
Akhoond,1,"{""blah with spaces"",""blah,with,commas'and\""quotes""}",0
So it looks like "{Akhund,Akhwan}" is fine. Note the second example I added, showing how to handle commas, quotes and spaces in the array text.
This works with the psql \copy command; if it doesn't work with PgAdmin-III then I'd suggest using psql and \copy.
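Once the file matches that format, the import itself would be something like this (the path is hypothetical):
\copy private."Titles" FROM '/path/titles.csv' WITH (FORMAT CSV)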

How to fetch TEXT column value from postgresql

I have the following simple table in PostgreSQL:
CREATE TABLE data ( id bigint NOT NULL, text_column text );
The values of text_column, as I see them in the phpPgAdmin web interface, are numbers (long).
From what I've read, PostgreSQL keeps a pointer to the actual data.
How can I fetch the actual string value of the text_column?
Doing:
select text_column from data
returns numbers...
Thanks
The following helped us:
select convert_from(loread(lo_open(value::int, x'40000'::int), x'40000'::int), 'UTF8') from t_field;
where value is the field containing the TEXT (really a large object reference) and t_field is the name of the table. (x'40000' is the INV_READ open mode; as the second argument to loread it simply serves as a generous maximum number of bytes to read.)
From psql run \lo_export ID FILE where ID is the number stored in the text column in your table and FILE is the path and filename for the results. The number is a reference to the large object table. You can view its contents by running \lo_list.
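For example (the OID 16401 here is made up; substitute the number from your own row):
\lo_export 16401 '/tmp/text_column_value.txt'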
Provided text_column is text holding a large object reference (an oid), this should work too:
select convert_from(lo_get(text_column::oid), 'UTF8') from data;
It works fine for me; maybe your field values really are just numbers:
> \d+ type
 Column  |  Type
---------+---------
 name    | text
 test_id | integer
select name from type;
 name
------
 AAA