Suppose I have a number of columns in Python, all of equal length (in the example below each column has 4 elements, but in my actual work each column has around 500+ elements):
col0 = [1, 12, 23, 41] # also used as a primary key
col1 = ['asdas','asd', '1323', 'adge']
col2 = [True, False, True, True]
col3 = [312.12, 423.1, 243.56, 634.5]
and I have a PostgreSQL table already defined, with columns: Col0 (integer, also the primary key), Col1 (character varying), Col2 (boolean), Col3 (numeric)
I wrote the following code to connect to the PostgreSQL database (which seems to work fine):
import psycopg2
...
conn = psycopg2.connect("dbname='mydb' user='myuser' host='localhost' password='mypwd'")
cur = conn.cursor()
Now suppose I want to push the columns to the PostgreSQL table myt, where I want the rows to be populated as:
Col0  Col1     Col2   Col3
1     'asdas'  true   312.12
12    'asd'    false  423.1
...
I saw examples on SO, such as this one, where the example reads from a CSV file:
for row in reader:
cur.execute("INSERT INTO test (num, data) VALUES (%s, %s)", (variable1, variable2))
(a) Can I adopt something similar for my case? Would this work:
for i in range(0, len(col0)):
cur.execute("INSERT INTO myt (Col1, Col2, Col3, Col4) VALUES (%??, %s, %??, %f)", (col0[i], col1[i], col2[i], col3[i]))
(b) If yes, what are the type specifiers for the Python integer, boolean, and float types, when the corresponding PostgreSQL types are integer, boolean, and numeric?
(c) Also, what if I have 40 columns instead of 4? Do I have to write a long line like this:
"INSERT INTO myt (Col1, Col2, ..., Col40) VALUES (%d, %s, ..., %f)", (col0[i], ...))
(a), (b):
Yes, that will work. psycopg2 uses %s as the placeholder for all types; that's just the way its parameter binding works (so don't use %??, %d, %f, etc.).
(c):
"insert into {t} ({c}) values ({v})".format(
    t=tablename,
    c=','.join(columns_list),
    v=','.join(['%s'] * len(columns_list)))
gets you closer to a universal insert expression, but you still need to loop...
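For example, with tablename = 'myt' and columns_list = ['Col0', 'Col1', 'Col2', 'Col3'] (the names from your question), the expression above builds exactly the statement below; you can execute it once per row in the loop, or pass it together with a list of row tuples to psycopg2's cursor.executemany so the driver does the looping:

insert into myt (Col0,Col1,Col2,Col3) values (%s,%s,%s,%s)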
I am looking at the official PostgreSQL documentation page on Table Partitioning for my version of Postgres. I would like to create table partitions over three columns, and I wish to use declarative partitioning with the BY LIST method. However, I cannot seem to find a good example of how to deal with multiple columns, and with BY LIST specifically.
In the aforementioned docs I only read:
You may decide to use multiple columns in the partition key for range partitioning, if desired. (...) For example, consider a table range partitioned using columns lastname and firstname (in that order) as the partition key.
It seems that declarative partitioning on multiple columns is only available for BY RANGE, or is that not right? If it is not, I found an answer on SO that shows how to deal with BY LIST and one column; but in my case I have three columns.
My idea would be to do something like the following (I am pretty sure it's wrong):
CREATE TABLE my_partitioned_table (
    col1 type CONSTRAINT col1_constraint CHECK (col1 = 1 or col1 = 0),
    col2 type CONSTRAINT col2_constraint CHECK (col2 = 'A' or col2 = 'B'),
    col3 type,
    col4 type
) PARTITION BY LIST (col1, col2);
CREATE TABLE part_1a PARTITION OF my_partitioned_table
FOR VALUES IN (1, 'A');
CREATE TABLE part_1b PARTITION OF my_partitioned_table
FOR VALUES IN (1, 'B');
...
I would need a correct implementation, as the number of possible partition combinations in my case is quite large.
That is true: you cannot use list partitioning with more than one partitioning key. You also cannot bend range partitioning to do what you want.
But you could use a composite type to get what you want:
CREATE TYPE part_type AS (a integer, b text);
CREATE TABLE partme (p part_type, val text) PARTITION BY LIST (p);
CREATE TABLE partme_1_B PARTITION OF partme FOR VALUES IN (ROW(1, 'B'));
INSERT INTO partme VALUES (ROW(1, 'B'), 'x');
INSERT INTO partme VALUES (ROW(1, 'C'), 'x');
ERROR: no partition of relation "partme" found for row
DETAIL: Partition key of the failing row contains (p) = ((1,C)).
SELECT (p).a, (p).b, val FROM partme;
a | b | val
---+---+-----
1 | B | x
(1 row)
But perhaps the best way to go is to use subpartitioning: partition the original table by the first column and the partitions by the second column.
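A minimal sketch of that approach, with hypothetical names (each first-level partition is itself a partitioned table; a third column simply adds one more level of nesting):

CREATE TABLE partme2 (a integer, b text, val text) PARTITION BY LIST (a);
-- the partition for a = 1 is itself partitioned by the second column
CREATE TABLE partme2_1 PARTITION OF partme2
    FOR VALUES IN (1) PARTITION BY LIST (b);
CREATE TABLE partme2_1_b PARTITION OF partme2_1 FOR VALUES IN ('B');

INSERT INTO partme2 VALUES (1, 'B', 'x');  -- routed to partme2_1_b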
I am working in Sybase with this table, which has the columns 'IDS' and 'File_Name_Attached':
Table1
IDS File_Name_Attached
123 ROSE1234_abcdefghi_03012014_04292014_190038.zip
456 ROSE1234_abcdefghi_08012014_04292014_190038.zip
All I need is to pick up the first date given in the file name.
Required:
IDS Dates
123 03012014
456 08012014
You can use SUBSTRING and PATINDEX to find the starting index of the date:
CREATE TABLE #table1(IDS int, File_Name_attached NVARCHAR(100));
INSERT INTO #table1
VALUES (123, 'ROSE1234_abcdefghi_03012014_04292014_190038.zip'),
(456, 'ROSE1234_abcdefghi_08012014_04292014_190038.zip');
SELECT
IDS,
[DATES] = SUBSTRING(File_Name_attached,
PATINDEX('%_[0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9]_%', File_Name_attached) + 1,
8)
FROM #table1;
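Given the sample rows above, this should return exactly the required result:

IDS  DATES
123  03012014
456  08012014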
Warning
I have no Sybase DB for testing, so if this doesn't work let me know.
I have a table with many (1000+) columns and rows (~1M). Each column contains either the value 1 or NULL.
For a specific row (user), I want to select the names of the columns that have a value of 1.
Since there are many columns in the table, specifying them all would yield an extremely long query.
You're doing something SQL is quite bad at - dynamic access to columns, or treating a row as a set. It'd be nice if this were easier, but it doesn't work well with SQL's typed nature and the concept of a relation. Working with your data set in its current form is going to be frustrating; consider storing an array, json, or hstore of values instead.
Actually, for this particular data model, you could probably use a bitfield. See bit(n) and bit varying(n).
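For instance, a minimal sketch of that model, with a hypothetical table (one bit per flag instead of one column per flag):

CREATE TABLE user_flags (id serial PRIMARY KEY, flags bit varying(1000));
INSERT INTO user_flags (flags) VALUES (B'101');
-- is flag 3 (1-based, counting from the left) set?
SELECT id FROM user_flags WHERE substring(flags FROM 3 FOR 1) = B'1';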
It's still possible to make a working query with your current model using PostgreSQL extensions, though.
Given sample:
CREATE TABLE blah (id serial primary key, a integer, b integer, c integer);
INSERT INTO blah(a,b,c) VALUES (NULL, NULL, 1), (1, NULL, 1), (NULL, NULL, NULL), (1, 1, 1);
I would unpivot each row into a key/value set using hstore (or, in newer PostgreSQL versions, the json functions). SQL itself provides no way to dynamically access columns, so we have to use an extension. So:
SELECT id, hs FROM blah, LATERAL hstore(blah) hs;
then extract the hstores to sets:
SELECT id, k, v FROM blah, LATERAL each(hstore(blah)) kv(k,v);
... at which point you can filter for values matching the criteria. Note that all values have been converted to text, so you may want to cast them back:
SELECT id, k FROM blah, LATERAL each(hstore(blah)) kv(k,v) WHERE v::integer = 1;
You also need to exclude id from matching, so:
regress=> SELECT id, k FROM blah, LATERAL each(hstore(blah)) kv(k,v) WHERE v::integer = 1 AND k <> 'id';
id | k
----+---
1 | c
2 | a
2 | c
4 | a
4 | b
4 | c
(6 rows)
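For completeness, a sketch of the json variant mentioned above (assumes a PostgreSQL version with the json functions; json_each_text yields the same key/value set, and NULL columns come through as SQL NULLs, so the comparison filters them out):

SELECT id, key
FROM blah, LATERAL json_each_text(row_to_json(blah)) kv(key, value)
WHERE value = '1' AND key <> 'id';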
I have a column in a table that contains numeric data separated by hyphens. I need to split this data into three columns so that each part of the value goes into a separate column. The specific data is for learning purposes, but I've also seen databases where name fields are stored in one column instead of three (e.g. "FirstMiddleLast" instead of "First", "Middle", "Last").
Here is a sample of the numeric value:
1234-56-78
I would like to split that so I have three columns
1234 | 56 | 78
How can I achieve this?
Try this:
declare @s varchar(50)='1234-56-78'
select left(@s,charindex('-',@s,1)-1) Col1,
       substring(@s,charindex('-',@s,1)+1, len(@s)-charindex('-',reverse(@s),1)-charindex('-',@s,1)) Col2,
       right(@s,charindex('-',reverse(@s),1)-1) Col3
--results
Col1 Col2 Col3
1234 56 78
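If your database is SQL Server and the value always has at most four hyphen-separated parts, PARSENAME is a shorter alternative (a sketch, not part of the answer above; PARSENAME splits on dots and counts the parts from the right):

declare @s varchar(50)='1234-56-78'
select parsename(replace(@s,'-','.'),3) Col1,  -- '1234'
       parsename(replace(@s,'-','.'),2) Col2,  -- '56'
       parsename(replace(@s,'-','.'),1) Col3   -- '78'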
I'm using an Oracle database and facing a problem where using id_product.nextval twice is raising: ORA-00001: unique constraint (SYSTEM.SYS_C004166) violated
The column is a primary key, and using INSERT ALL is a requirement. Can I use .nextval twice in one statement?
insert all
into sale_product values (id_product.nextval, id.currval, 'hello', 123, 1)
into sale_product values (id_product.nextval, id.currval, 'hi', 123, 1)
select * from dual;
insert into sale_product
select id_product.nextval, id.currval, a, b, c
from
(
select 'hello' a, 123 b, 1 c from dual union all
select 'hi' a, 123 b, 1 c from dual
);
This doesn't use the insert all syntax, but it works the same way if you are only inserting into the same table.
The value of id_product.NEXTVAL in the first INSERT is the same as in the second INSERT, hence you'll get the unique constraint violation. If you remove the constraint and perform the insert, you'll notice the duplicate values!
The only way is to perform two bulk INSERTs in sequence, or to have two separate sequences with different ranges; the latter would require an awful lot of coding and checking.
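A minimal sketch of the first alternative (two ordinary single-row INSERTs run one after the other; each statement evaluates NEXTVAL once, so the two rows get distinct ids):

INSERT INTO sale_product VALUES (id_product.nextval, id.currval, 'hello', 123, 1);
INSERT INTO sale_product VALUES (id_product.nextval, id.currval, 'hi', 123, 1);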
create table temp(id number ,id2 number);
insert all
into temp values (supplier_seq.nextval, supplier_seq.currval)
into temp values (supplier_seq.nextval, supplier_seq.currval)
select * from dual;
ID ID2
---------- ----------
2 2
2 2
Reference:
The subquery of the multitable insert statement cannot use a sequence
http://docs.oracle.com/cd/B19306_01/server.102/b14200/statements_9014.htm#i2080134