I managed to create tables in Postgres but encountered issues when trying to insert values.
commands = """
CREATE TYPE student AS (
    name TEXT,
    id INTEGER
);
CREATE TABLE studentclass (
    date DATE NOT NULL,
    time TIMESTAMPTZ NOT NULL,
    PRIMARY KEY (date, time),
    class student
);
"""
And in psycopg2:
command = """
INSERT INTO studentclass (date, time, student)
VALUES (%s, %s, ROW(%s, %s)::student)
"""
student_rec = ("John", 1)
record_to_insert = ("2020-05-21", "2020-05-21 08:10:00", student_rec)
cursor.execute(command, record_to_insert)
When executed, I get an error about incorrect arguments; and if I hard-code the student value inside the INSERT statement, it complains about an unrecognized column named student.
Please advise.
One issue is that the column name is class, not student. The second is that psycopg2 adapts Python tuples to composite types on its own.
So you can do:
insert_sql = "INSERT INTO studentclass (date, time, class) VALUES (%s,%s,%s)"
student_rec = ("John", 1)
record_to_insert = ("2020-05-21", "2020-05-21 08:10:00", student_rec)
cur.execute(insert_sql, record_to_insert)
con.commit()
select * from studentclass ;
date | time | class
------------+-------------------------+----------
05/21/2020 | 05/21/2020 08:10:00 PDT | (John,1)
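As a follow-up, reading the composite back out is also handled by psycopg2: register_composite maps the Postgres type to a Python namedtuple on fetch. A minimal sketch, assuming the row inserted above exists (the DSN is hypothetical):
import psycopg2
import psycopg2.extras

con = psycopg2.connect("dbname=test")  # hypothetical DSN
cur = con.cursor()
# Map the 'student' composite type to a namedtuple factory for this cursor.
psycopg2.extras.register_composite("student", cur)

cur.execute("SELECT class FROM studentclass")
rec = cur.fetchone()[0]
print(rec.name, rec.id)  # -> John 1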
Related
Given the following table creation statement and query, it's necessary to get the old values before an update:
CREATE TABLE IF NOT EXISTS products(
    id INT GENERATED BY DEFAULT AS IDENTITY NOT NULL PRIMARY KEY,
    product_id INT UNIQUE,
    image_link CHARACTER VARYING NOT NULL,
    additional_image_links CHARACTER VARYING[] NOT NULL
);
WITH temp AS (
    INSERT INTO products(product_id, image_link, additional_image_links)
    VALUES(1, 'http://www.e1xazm1ple1k113.com', ARRAY['http://www.examkple1113.com','http://www.example2.com'])
    ON CONFLICT (product_id) DO UPDATE SET image_link = EXCLUDED.image_link, additional_image_links = EXCLUDED.additional_image_links
    WHERE products.image_link != EXCLUDED.image_link OR products.additional_image_links != EXCLUDED.additional_image_links
    RETURNING id, image_link, additional_image_links
)
SELECT image_link, additional_image_links FROM products WHERE id IN (SELECT id FROM temp);
If a conflict happens and the new values meet the criteria, a result is produced; however, I need to use SQLAlchemy machinery for this. An approximate but non-working example:
def upsert(table, rows, constraint, update_cols):
    query = insert(table).values(rows)
    return query.on_conflict_do_update(
        constraint=constraint,
        set_={c: getattr(query.excluded, c) for c in update_cols},
        where=getattr(table.c, "additional_image_links") != getattr(query.excluded, "additional_image_links"),
    ).cte("upsert")
Calling it produces the exception:
sesh = session(autocommit=False, autoflush=False, engine=DEFAULT)
sesh.execute(upsert(*args))
sqlalchemy.exc.ArgumentError: Executable SQL or text() construct expected, got <sqlalchemy.sql.selectable.CTE at 0x1042c3f10; upsert>.
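A hedged sketch of one way around this: a CTE is a selectable, not an executable statement, so give the INSERT a RETURNING clause (otherwise the CTE exposes no columns) and select from the CTE, mirroring the raw SQL above. This assumes SQLAlchemy 1.4-style select(); the products table object, the rows, and the constraint name are placeholders for your own:
from sqlalchemy import select
from sqlalchemy.dialects.postgresql import insert

def upsert(table, rows, constraint, update_cols):
    query = insert(table).values(rows)
    return query.on_conflict_do_update(
        constraint=constraint,
        set_={c: getattr(query.excluded, c) for c in update_cols},
        where=table.c.additional_image_links != query.excluded.additional_image_links,
    ).returning(table.c.id).cte("upsert")  # RETURNING gives the CTE its columns

temp = upsert(products, rows, "products_product_id_key", ["image_link", "additional_image_links"])
# Renders roughly as: WITH upsert AS (INSERT ... RETURNING id)
#   SELECT image_link, additional_image_links FROM products
#   WHERE id IN (SELECT id FROM upsert)
stmt = select(products.c.image_link, products.c.additional_image_links).where(
    products.c.id.in_(select(temp.c.id)))
old_rows = sesh.execute(stmt).fetchall()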
class datalog(display_clock):
    def con_mysql(self):
        cat = mysql.connector.connect(
            host="localhost", user="subramanya", passwd="Sureshbabu#4155", database="CFM")
        if cat:
            datacursor = cat.cursor()
            todaydate = d
            check_table = (
                "SELECT count(*) FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_NAME=%s")
            datacursor.execute(check_table, (todaydate,))
            result = datacursor.fetchone()
            if result:
                self.success_login()
            else:
                datacursor.execute(
                    "CREATE TABLE {today}(Sl_no INT NOT NULL AUTO_INCREMENT PRIMARY KEY, date DATE, Start_time TIME, End_time TIME, Item CHAR(255), Weight FLOAT, Amount INTEGER(10))".format(today=todaydate))
                self.success_login()
        else:
            datacursor.Terminate
            self.error_display.insert(0.0, "Connecting Database failed!!!")
I tried to check whether a table exists for today's date and, if not, to create it. No error occurred, but the table was not created for the current date.
Welcome to Stack Overflow!
I believe there is a small misconception here. You don't need to check whether the table exists beforehand and create it afterward. Most current database technologies accept an IF NOT EXISTS condition in the CREATE TABLE clause.
CREATE TABLE IF NOT EXISTS sales (
    sale_id INT NOT NULL
);
This means the table sales will only be created if it does not already exist.
Also, I strongly recommend refactoring your code a wee bit. Take this as a suggestion (please adapt it to your project's needs):
from datetime import datetime

class Settings:
    # please, avoid hard-coded credentials.
    DB_HOST = "localhost"
    DB_USER = "subramanya"
    DB_PASSWD = "Sureshbabu#4155"
    DB_SCHEMA = "CFM"

class datalog(display_clock):
    def db_connect(self):
        conn = mysql.connector.connect(
            host=Settings.DB_HOST,
            user=Settings.DB_USER,
            passwd=Settings.DB_PASSWD,
            database=Settings.DB_SCHEMA
        )
        if not conn:
            raise Exception("Connecting Database failed!!!")
        return conn

    def ensure_table(self):
        conn = self.db_connect()
        cursor = conn.cursor()
        cursor.execute("""
            CREATE TABLE IF NOT EXISTS `{0}`(
                Sl_no INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
                date DATE,
                Start_time TIME,
                End_time TIME,
                Item CHAR(255),
                Weight FLOAT,
                Amount INTEGER(10)
            );
        """.format(datetime.today().strftime('%Y%m%d')))  # format 20200915

    def run(self):
        self.ensure_table()
        self.success_login()
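For reference, the likely reason the original check never created the table: fetchone() on a SELECT count(*) always returns a one-element row such as (0,), and a non-empty tuple is truthy, so if (result): took the success branch even when the count was zero. If you still want an explicit existence check, here is a minimal sketch (table_exists is a hypothetical helper; both values are bound as query parameters):
def table_exists(conn, name):
    # Compare against the count itself, not the row tuple.
    cur = conn.cursor()
    cur.execute(
        "SELECT COUNT(*) FROM INFORMATION_SCHEMA.TABLES "
        "WHERE TABLE_SCHEMA = %s AND TABLE_NAME = %s",
        (Settings.DB_SCHEMA, name),
    )
    return cur.fetchone()[0] > 0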
There are plenty of ways to write this code, but keep in mind that readability matters a lot.
I'm having a problem using the values of an array column in a WHERE clause. Complete example to reproduce:
create type public.genre_type as enum ('scifi', 'fantasy', 'crime', 'horror', 'classics');
create table public.reader_profile(
    id integer,
    fave_genres genre_type ARRAY
);
create table public.books(
    id serial not null,
    title text,
    genre_type public.genre_type
);
insert into public.reader_profile(id, fave_genres) values (1, array['crime', 'horror']::public.genre_type[]);
insert into public.reader_profile(id, fave_genres) values (2, array['fantasy', 'scifi']::public.genre_type[]);
insert into public.reader_profile(id, fave_genres) values (3, array['scifi', 'classics']::public.genre_type[]);
insert into public.books(title, genre_type) values ('gone with the wind', 'classics');
insert into public.books(title, genre_type) values ('Foundation', 'scifi');
insert into public.books(title, genre_type) values ('Dune', 'scifi');
-- THE FOLLOWING FAILS!!!
select * from public.books
where genre_type in (
select fave_genres from public.reader_profile where id = 2
);
I've tried ...where genre_type = ANY() per other Stack Overflow answers, as well as ...where genre_type <# (), and I can't get anything to work! It seems the inner query (which works on its own) is being returned as an array type rather than a list of values. Any help appreciated!
I agree with @Hogan that this seems doable with a JOIN, but the syntax you are looking for is the following:
SELECT *
FROM books
WHERE genre_type = ANY(ARRAY(SELECT fave_genres FROM reader_profile WHERE id = 2))
;
Demo
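For what it's worth, the asker's diagnosis is right: the subquery returns a single array value, not a set of rows, which is why IN fails. Unnesting the array makes the plain IN form work as well. A sketch, run through psycopg2 purely for illustration (the DSN is hypothetical):
import psycopg2

con = psycopg2.connect("dbname=test")  # hypothetical DSN
cur = con.cursor()
cur.execute("""
    SELECT * FROM public.books
    WHERE genre_type IN (
        SELECT unnest(fave_genres) FROM public.reader_profile WHERE id = %s
    )
""", (2,))
print(cur.fetchall())  # Foundation and Dune for reader 2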
Can I suggest using a join instead?
select *
from public.books b
join public.reader_profile rp on b.genre_type = ANY(rp.fave_genres) and rp.id = 2
I'm trying to insert records while implementing an SCD2 on Redshift, but I get an error.
The target table's DDL is:
CREATE TABLE ditemp.ts_scd2_test (
    id INT
    ,md5 CHAR(32)
    ,record_id BIGINT IDENTITY
    ,from_timestamp TIMESTAMP
    ,to_timestamp TIMESTAMP
    ,file_id BIGINT
    ,party_id BIGINT
)
This is the insert statement:
INSERT
INTO ditemp.TS_SCD2_TEST(id, md5, from_timestamp, to_timestamp)
SELECT TS_SCD2_TEST_STAGING.id
    ,TS_SCD2_TEST_STAGING.md5
    ,from_timestamp
    ,to_timestamp
FROM (
    SELECT '20150901 16:34:02' AS from_timestamp
        ,CASE
            WHEN last_record IS NULL
                THEN '20150901 16:34:02'
            ELSE '39991231 11:11:11.000'
        END AS to_timestamp
        ,CASE
            WHEN rownum != 1
                AND atom.id IS NOT NULL
                THEN 1
            WHEN atom.id IS NULL
                THEN 1
            ELSE 0
        END AS transfer
        ,stage.*
    FROM (
        SELECT id
        FROM ditemp.TS_SCD2_TEST_STAGING
        WHERE file_id = 2
        GROUP BY id
        HAVING count(*) > 1
    ) AS scd2_count_ge_1
    INNER JOIN (
        SELECT row_number() OVER (
                PARTITION BY id ORDER BY record_id
            ) AS rownum
            ,stage.*
        FROM ditemp.TS_SCD2_TEST_STAGING AS stage
        WHERE file_id IN (2)
    ) AS stage
        ON (scd2_count_ge_1.id = stage.id)
    LEFT JOIN (
        SELECT max(rownum) AS last_record
            ,id
        FROM (
            SELECT row_number() OVER (
                    PARTITION BY id ORDER BY record_id
                ) AS rownum
                ,stage.*
            FROM ditemp.TS_SCD2_TEST_STAGING AS stage
        ) AS ranked
        GROUP BY id
    ) AS last_record
        ON (
            stage.id = last_record.id
            AND stage.rownum = last_record.last_record
        )
    LEFT JOIN ditemp.TS_SCD2_TEST AS atom
        ON (
            stage.id = atom.id
            AND stage.md5 = atom.md5
            AND atom.to_timestamp > '20150901 16:34:02'
        )
) AS TS_SCD2_TEST_STAGING
WHERE transfer = 1
To cut a long story short, I am trying to insert 20150901 16:34:02 into from_timestamp and 39991231 11:11:11.000 into to_timestamp,
and get:
ERROR: 42804: column "from_timestamp" is of type timestamp without time zone but expression is of type character varying
Can anyone please suggest how to solve this issue?
Postgres isn't recognizing 20150901 16:34:02 (your input) as a valid date/time format, so it assumes it's a string.
Use a standard date format instead, preferably ISO 8601: 2015-09-01T16:34:02
SQLFiddle example
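As a quick check, a sketch of those literals in ISO 8601 form with explicit casts, run here through psycopg2 (which also speaks the Postgres protocol Redshift uses; the DSN is hypothetical):
import psycopg2

con = psycopg2.connect("dbname=test")  # hypothetical DSN
cur = con.cursor()
cur.execute("""
    SELECT '2015-09-01T16:34:02'::timestamp AS from_timestamp,
           '3999-12-31T11:11:11'::timestamp AS to_timestamp
""")
print(cur.fetchone())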
Just in case someone ends up here trying to insert a timestamp or a timestamptz into PostgreSQL from a variable in Groovy or Java through a prepared statement and getting the same error (as I did): I managed to do it by setting the property stringtype to "unspecified". According to the documentation:
Specify the type to use when binding PreparedStatement parameters set
via setString(). If stringtype is set to VARCHAR (the default), such
parameters will be sent to the server as varchar parameters. If
stringtype is set to unspecified, parameters will be sent to the
server as untyped values, and the server will attempt to infer an
appropriate type. This is useful if you have an existing application
that uses setString() to set parameters that are actually some other
type, such as integers, and you are unable to change the application
to use an appropriate method such as setInt().
Properties props = [user: "user", password: "password",
                    driver: "org.postgresql.Driver", stringtype: "unspecified"]
def sql = Sql.newInstance("url", props)
With this property set, you can insert a timestamp as a string variable without the error raised in the question title. For instance:
String myTimestamp = Instant.now().toString()
sql.execute("""INSERT INTO MyTable (MyTimestamp) VALUES (?)""",
        [myTimestamp])
This way, the type of the timestamp (from a String) is inferred correctly by PostgreSQL. I hope this helps.
Inside apache-tomcat-9.0.7/conf/server.xml, add "?stringtype=unspecified" to the end of the URL.
For example:
<GlobalNamingResources>
<Resource name="jdbc/??" auth="Container" type="javax.sql.DataSource"
...
url="jdbc:postgresql://127.0.0.1:5432/Local_DB?stringtype=unspecified"/>
</GlobalNamingResources>
I have the following heap of text:
"BundleSize,155648,DynamicSize,204800,Identifier,com.URLConnectionSample,Name,
URLConnectionSample,ShortVersion,1.0,Version,1.0,BundleSize,155648,DynamicSize,
16384,Identifier,com.IdentifierForVendor3,Name,IdentifierForVendor3,ShortVersion,
1.0,Version,1.0,".
What I'd like to do is extract data from this in the following manner:
BundleSize:155648
DynamicSize:204800
Identifier:com.URLConnectionSample
Name:URLConnectionSample
ShortVersion:1.0
Version:1.0
BundleSize:155648
DynamicSize:16384
Identifier:com.IdentifierForVendor3
Name:IdentifierForVendor3
ShortVersion:1.0
Version:1.0
All tips and suggestions are welcome.
It isn't quite clear what you need to do with this data. If you really need to process it entirely in the database (it looks like a task for your favorite scripting language instead), one option is to use hstore.
Converting records one by one is easy:
Assuming
%s =
BundleSize,155648,DynamicSize,204800,Identifier,com.URLConnectionSample,Name,URLConnectionSample,ShortVersion,1.0,Version,1.0
SELECT * FROM each(hstore(string_to_array(%s, ',')));
Output:
key | value
--------------+-------------------------
Name | URLConnectionSample
Version | 1.0
BundleSize | 155648
Identifier | com.URLConnectionSample
DynamicSize | 204800
ShortVersion | 1.0
If you have a table with columns exactly matching the field names (note the quotes; populate_record is case-sensitive about key names):
CREATE TABLE data (
"BundleSize" integer, "DynamicSize" integer, "Identifier" text,
"Name" text, "ShortVersion" text, "Version" text);
You can insert hstore records into it like this:
INSERT INTO data SELECT * FROM
populate_record(NULL::data, hstore(string_to_array(%s, ',')));
Things get more complicated if you have comma-separated values for more than one record.
%s = BundleSize,155648,DynamicSize,204800,Identifier,com.URLConnectionSample,Name,URLConnectionSample,ShortVersion,1.0,Version,1.0,BundleSize,155648,DynamicSize,16384,Identifier,com.IdentifierForVendor3,Name,IdentifierForVendor3,ShortVersion,1.0,Version,1.0,
You need to break up an array into chunks of number_of_fields * 2 = 12 elements first.
SELECT hstore(row) FROM (
    SELECT array_agg(str) AS row FROM (
        SELECT str, row_number() OVER () AS i
        FROM unnest(string_to_array(%s, ',')) AS str
    ) AS str_sub
    GROUP BY (i - 1) / 12
) AS row_sub
WHERE array_length(row, 1) = 12;
Output:
"Name"=>"URLConnectionSample", "Version"=>"1.0", "BundleSize"=>"155648", "Identifier"=>"com.URLConnectionSample", "DynamicSize"=>"204800", "ShortVersion"=>"1.0"
"Name"=>"IdentifierForVendor3", "Version"=>"1.0", "BundleSize"=>"155648", "Identifier"=>"com.IdentifierForVendor3", "DynamicSize"=>"16384", "ShortVersion"=>"1.0"
And inserting this into the aforementioned table:
INSERT INTO data SELECT (populate_record(NULL::data, hstore(row))).* FROM ...
The rest of the query is the same as above.
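For completeness, a sketch of the combined statement as it might be run from psycopg2, with the raw text bound to the %s placeholder (the DSN is hypothetical, and raw_csv is assumed to hold the full input string):
import psycopg2

raw_csv = "BundleSize,155648,DynamicSize,204800,"  # ... the full input string

con = psycopg2.connect("dbname=test")  # hypothetical DSN
cur = con.cursor()
cur.execute("""
    INSERT INTO data
    SELECT (populate_record(NULL::data, hstore(row))).* FROM (
        SELECT array_agg(str) AS row FROM (
            SELECT str, row_number() OVER () AS i
            FROM unnest(string_to_array(%s, ',')) AS str
        ) AS str_sub
        GROUP BY (i - 1) / 12
    ) AS row_sub
    WHERE array_length(row, 1) = 12
""", (raw_csv,))
con.commit()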