How to record time in PostgreSQL via SQLAlchemy

I can't figure out how to write a plain time value (a time of day, without a date) to the database.
I create the table via SQLAlchemy using the Time type. Am I doing everything right?
windows = Table(
    "windows", meta,
    Column("courier_id", Integer, ForeignKey("couriers.courier_id"), nullable=False),
    Column("start_time", Time),
    Column("end_time", Time)
)
And how do I upload the data now? I did it like this
query = datab.windows.insert().values([1, 09:00, 18:00])
await conn.execute(query)
And one more question: how do I specify which columns to fill in insert()?

Use datetime.time() objects to specify the time values, and pass .values() a dict with the column names as the keys; the dict keys are also how you specify which columns to fill in insert():
import datetime

from sqlalchemy import (
    create_engine,
    Table,
    Column,
    Integer,
    Time,
    ForeignKey,
    MetaData,
)

engine = create_engine("sqlite:///:memory:", echo=True)

windows = Table(
    "windows",
    MetaData(),
    Column(
        "courier_id",
        Integer,
        # ForeignKey("couriers.courier_id"),  # omit for this example
        nullable=False,
    ),
    Column("start_time", Time),
    Column("end_time", Time),
)
windows.create(engine)

stmt = windows.insert().values(
    {
        "courier_id": 1,
        "start_time": datetime.time(9),
        "end_time": datetime.time(18),
    }
)

with engine.begin() as conn:
    conn.execute(stmt)
"""SQL generated:
2021-03-24 16:57:50,811 INFO sqlalchemy.engine.Engine BEGIN (implicit)
2021-03-24 16:57:50,813 INFO sqlalchemy.engine.Engine INSERT INTO windows (courier_id, start_time, end_time) VALUES (?, ?, ?)
2021-03-24 16:57:50,813 INFO sqlalchemy.engine.Engine [generated in 0.00036s] (1, '09:00:00.000000', '18:00:00.000000')
2021-03-24 16:57:50,813 INFO sqlalchemy.engine.Engine COMMIT
"""

Related

Getting started testing DB functions with pytest: Process finished with exit code 5

I need to test lots of functions that use an SQLite DB. To get started I want to use pytest fixtures.
conftest.py:
import pytest
import sqlite3

#pytest.fixture
def session():
    connection = sqlite3.connect(":memory:")
    cursor = connection.cursor()
    cursor.execute("CREATE TABLE Investment (ID INTEGER PRIMARY KEY, name VARCHAR(120), ticker VARCHAR(10), exchange VARCHAR(10), type INTEGER, relativeAddress VARCHAR(50), sharesiesFundID VARCHAR(36))")
    cursor.execute("CREATE TABLE Orders (investmentID INTEGER NOT NULL, logTimestamp TIMESTAMP NOT NULL, amount INTEGER, PRIMARY KEY(investmentID, logTimestamp), FOREIGN KEY(investmentID) REFERENCES Investment(ID))")
    cursor.execute("CREATE TABLE InvestmentType (typeID INTEGER NOT NULL PRIMARY KEY, entryName VARCHAR(20))")
    cursor.execute("INSERT INTO InvestmentType (typeID, entryName) VALUES (0, 'Company'), (1, 'ETF'), (2, 'Managed Fund')")
    cursor.execute("INSERT INTO Investment (name, ticker, exchange, type, relativeAddress, sharesiesFundID) VALUES ('3M Co.', 'MMM', 'NYSE', 0, 'nyse-mmm', '94de52ef-324f-4d24-8a80-a5d2f00656bf'), ('a2 Milk Company', 'ATM', 'NZX', 0, 'atm', 'deff31bd-625b-4a82-bbc2-064c7b70b97c'), ('Abbott Laboratories', 'ABT', 'NYSE', 0, 'nyse-abt', 'a367613c-a9bd-4562-a8fd-459e7bd4f5ae')")
    connection.commit()
    yield cursor
    connection.close()
test_db.py:
def get_entry(session):
    result = session.execute("SELECT name FROM Investment WHERE ID = 3").fetchone()
    assert result[0][1] == 'Abbott Laboratories'
This keeps resulting in "Process finished with exit code 5".
I've tried putting everything in the same file, and some other configurations for pytest.
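For reference (not part of the original post): exit code 5 means pytest collected no tests. By default pytest only collects functions whose names start with test_, and a fixture needs the @pytest.fixture decorator. A trimmed-down sketch that would be collected and pass against the data above looks like this:

# conftest.py -- minimal sketch (not from the original post)
import sqlite3

import pytest


@pytest.fixture
def session():
    connection = sqlite3.connect(":memory:")
    cursor = connection.cursor()
    cursor.execute("CREATE TABLE Investment (ID INTEGER PRIMARY KEY, name VARCHAR(120))")
    cursor.execute("INSERT INTO Investment (name) VALUES ('3M Co.'), ('a2 Milk Company'), ('Abbott Laboratories')")
    connection.commit()
    yield cursor
    connection.close()


# test_db.py -- the name must start with test_ so pytest collects it
def test_get_entry(session):
    result = session.execute("SELECT name FROM Investment WHERE ID = 3").fetchone()
    assert result[0] == 'Abbott Laboratories'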

Unable to insert nested record in postgres

I managed to create tables in Postgres but ran into issues when trying to insert values.
comands = (
    CREATE TYPE student AS (
        name TEXT,
        id INTEGER
    )
    CREATE TABLE studentclass(
        date DATE NOT NULL,
        time TIMESTAMPTZ NOT NULL,
        PRIMARY KEY (date, time),
        class student
    )
)
And in psycopg2:
command = (
    INSERT INTO studentclass (date, time, student) VALUES (%s,%s, ROW(%s,%s)::student)
)
student_rec = ("John", 1)
record_to_insert = ("2020-05-21", "2020-05-21 08:10:00", student_rec)
cursor.execute(commands, record_to_insert)
When executed, I get an error about incorrect arguments, and if I hard-code the student value inside the INSERT statement, it complains that the column student is unrecognized.
Please advise.
One issue is that the column name is class, not student. The second is that psycopg2 adapts a Python tuple to a composite type automatically.
So you can do:
insert_sql = "INSERT INTO studentclass (date, time, class) VALUES (%s,%s,%s)"
student_rec = ("John", 1)
record_to_insert = ("2020-05-21", "2020-05-21 08:10:00", student_rec)
cur.execute(insert_sql, record_to_insert)
con.commit()
select * from studentclass ;
date | time | class
------------+-------------------------+----------
05/21/2020 | 05/21/2020 08:10:00 PDT | (John,1)
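To read the composite column back as a Python object, psycopg2.extras.register_composite can map the SQL type to a named tuple. This is a sketch assuming the studentclass table and student type from the question already exist; the connection string is hypothetical:

import psycopg2
from psycopg2.extras import register_composite

con = psycopg2.connect("dbname=test")  # hypothetical DSN
cur = con.cursor()

# Map the SQL composite type 'student' to a Python namedtuple.
register_composite("student", cur)

cur.execute("SELECT date, time, class FROM studentclass")
for date, time, student in cur.fetchall():
    print(date, time, student.name, student.id)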

Table not created, even after validating the table existence

class datalog(display_clock):
    def con_mysql(self):
        cat = mysql.connector.connect(
            host="localhost", user="subramanya", passwd="Sureshbabu#4155", database="CFM")
        if (cat):
            datacursor = cat.cursor()
            todaydate = d
            check_table = (
                "SELECT count(*) FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_NAME=%s")
            datacursor.execute(check_table, (todaydate,))
            result = datacursor.fetchone()
            if (result):
                self.success_login()
            else:
                datacursor.execute(
                    "CREATE TABLE {today}(Sl_no INT NOT NULL AUTO_INCREMENT PRIMARY KEY,date DATE,Start_time TIME,End_time TIME,Item CHAR(255),Weight FLOAT, Amount INTEGER(10))".format(today=todaydate))
                self.success_login()
        else:
            datacursor.Terminate
            self.error_display.insert(0.0, "Connecting Database failed!!!")
I tried to check whether a table already exists for today's date and, if not, create it.
No error occurred, but the table was not created for the current date (sysdate).
Welcome to Stack Overflow!
I believe there is a small misconception here. You don't need to check whether the table exists beforehand and then create it. Most current database engines accept an IF NOT EXISTS condition in the CREATE TABLE clause.
CREATE TABLE IF NOT EXISTS sales (
    sale_id INT NOT NULL
);
This means the table sales will only be created if it does not already exist.
Also, I strongly recommend refactoring your code a wee bit. Take this as a suggestion (please adapt it to your project's needs):
from datetime import datetime

import mysql.connector


class Settings:
    # please, avoid hard-coded credentials.
    DB_HOST = "localhost"
    DB_USER = "subramanya"
    DB_PASSWD = "Sureshbabu#4155"
    DB_SCHEMA = "CFM"


class datalog(display_clock):
    def db_connect(self):
        conn = mysql.connector.connect(
            host=Settings.DB_HOST,
            user=Settings.DB_USER,
            passwd=Settings.DB_PASSWD,
            database=Settings.DB_SCHEMA,
        )
        if not conn:
            raise Exception("Connecting Database failed!!!")
        return conn

    def ensure_table(self):
        conn = self.db_connect()
        cursor = conn.cursor()
        cursor.execute("""
            CREATE TABLE IF NOT EXISTS `{0}`(
                Sl_no INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
                date DATE,
                Start_time TIME,
                End_time TIME,
                Item CHAR(255),
                Weight FLOAT,
                Amount INTEGER(10)
            );
        """.format(datetime.today().strftime('%Y%m%d')))  # table name like 20200915

    def run(self):
        self.ensure_table()
        self.success_login()
There are plenty of ways to write this code, but keep in mind that readability matters a lot.

Possible to use pandas/sqlalchemy to insert arrays into sql database? (postgres)

With the following:
engine = sqlalchemy.create_engine(url)

df = pd.DataFrame({
    "eid": [1, 2],
    "f_i": [123, 1231],
    "f_i_arr": [[123], [0]],
    "f_53": ["2013/12/1", "2013/12/1"],
    "f_53a": [["2013/12/1"], ["2013/12/1"]],
})
with engine.connect() as con:
    con.execute("""
        DROP TABLE IF EXISTS public.test;
        CREATE TABLE public.test
        (
            eid integer NOT NULL,
            f_i INTEGER NULL,
            f_i_arr INTEGER NULL,
            f_53 DATE NULL,
            f_53a DATE[] NULL,
            PRIMARY KEY(eid)
        );
    """)
    df.to_sql("test", con, if_exists='append')
If I try to insert only column "f_53" (an date) it succeeds.
If I try to add column "f_53a" (a date[]) it fails with:
sqlalchemy.exc.ProgrammingError: (psycopg2.ProgrammingError) column "f_53a" is of type date[] but expression is of type text[]
LINE 1: ..._53, f_53a, f_i, f_i_arr) VALUES (1, '2013/12/1', ARRAY['201...
                                                             ^
HINT: You will need to rewrite or cast the expression.
[SQL: 'INSERT INTO test (eid, f_53, f_53a, f_i, f_i_arr) VALUES (%(eid)s, %(f_53)s, %(f_53a)s, %(f_i)s, %(f_i_arr)s)'] [parameters: ({'f_53': '2013/12/1', 'f_53a': ['2013/12/1', '2013/12/1'], 'f_i_arr': [123], 'eid': 1, 'f_i': 123}, {'f_53': '2013/12/1', 'f_53a': ['2013/12/1', '2013/12/1'], 'f_i_arr': [0], 'eid': 2, 'f_i': 1231})]
I specified the dtypes explicitly and it worked for me with Postgres.
# sample code
import sqlalchemy
from sqlalchemy import create_engine
from sqlalchemy.dialects import postgresql

df.to_sql('mytable', pgConn, if_exists='append', index=False,
          dtype={'datetime': sqlalchemy.TIMESTAMP(),
                 'cur_c': postgresql.ARRAY(sqlalchemy.types.REAL),
                 'volt_c': postgresql.ARRAY(sqlalchemy.types.REAL)})
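Applied to the question's dataframe, the same idea would look roughly like this. It is only a sketch: it assumes the engine from the question, assumes the table column f_i_arr is actually INTEGER[], and converts the date strings with pd.to_datetime first.

import pandas as pd
import sqlalchemy
from sqlalchemy.dialects import postgresql

d = pd.to_datetime("2013/12/1")
df = pd.DataFrame({
    "eid": [1, 2],
    "f_i": [123, 1231],
    "f_i_arr": [[123], [0]],
    "f_53": [d, d],
    "f_53a": [[d], [d]],
})

df.to_sql(
    "test", engine, if_exists="append", index=False,
    dtype={
        "f_i_arr": postgresql.ARRAY(sqlalchemy.types.Integer),
        "f_53": sqlalchemy.Date(),
        "f_53a": postgresql.ARRAY(sqlalchemy.Date),
    },
)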
Yes -- it is possible to insert [] and [][] types from a dataframe into Postgres.
Unlike flat DATE values, which may be parsed correctly from strings, DATE[] and DATE[][] values need to be converted to datetime objects first. Like so:
with engine.connect() as con:
    con.execute("""
        DROP TABLE IF EXISTS public.test;
        CREATE TABLE public.test
        (
            eid integer NOT NULL,
            f_i INTEGER NULL,
            f_ia INTEGER[] NULL,
            f_iaa INTEGER[][] NULL,
            f_d DATE NULL,
            f_da DATE[] NULL,
            f_daa DATE[][] NULL,
            PRIMARY KEY(eid)
        );
    """)

    d = pd.to_datetime("2013/12/1")
    i = 99
    df = pd.DataFrame({
        "eid": [1, 2],
        "f_i": [i, i],
        "f_ia": [None, [i, i]],
        "f_iaa": [[[i, i], [i, i]], None],
        "f_d": [d, d],
        "f_da": [[d, d], None],
        "f_daa": [[[d, d], [d, d]], None],
    })
    df.to_sql("test", con, if_exists='append', index=None)

nextval(seq_name) not fetching correct value from DB

I have a Flask app with SQLAlchemy tied to a Postgres DB. All components are working, with reads fully functional. I have a simple model:
class School(db.Model):
    __tablename__ = 'schools'
    id = db.Column(db.Integer, db.Sequence('schools_id_seq'), primary_key=True)
    name = db.Column(db.String(80))
    active = db.Column(db.Boolean)
    created = db.Column(db.DateTime)
    updated = db.Column(db.DateTime)

    def __init__(self, name, active, created, updated):
        self.name = name
        self.active = active
        self.created = created
        self.updated = updated
which is working on a postgres table:
CREATE SEQUENCE schools_id_seq;

CREATE TABLE schools(
    id int PRIMARY KEY NOT NULL DEFAULT nextval('schools_id_seq'),
    name varchar(80) NOT NULL,
    active boolean DEFAULT TRUE,
    created timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
    updated timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP
);

ALTER SEQUENCE schools_id_seq OWNED BY schools.id;
When I run an insert on this table from psql, all is well:
cake=# select nextval('schools_id_seq');
nextval
---------
65
(1 row)
cake=# INSERT INTO schools (id, name, active, created, updated) VALUES (nextval('schools_id_seq'),'Test', True, current_timestamp, current_timestamp);
INSERT 0 1
resulting in:
66 | Test | 0 | t | 2016-08-25 14:12:24.928456 | 2016-08-25 14:12:24.928456
But when I try the same insert from Flask, the stack trace complains about a duplicate id, even though it is using nextval to get that value:
sqlalchemy.exc.IntegrityError: (psycopg2.IntegrityError) duplicate key value violates unique constraint "schools_pkey"
DETAIL: Key (id)=(7) already exists.
[SQL: "INSERT INTO schools (id, name, active, created, updated) VALUES (nextval('schools_id_seq'), %(name)s, %(active)s, %(created)s, %(updated)s) RETURNING schools.id"] [parameters: {'active': True, 'name': 'Testomg', 'updated': datetime.datetime(2016, 8, 25, 14, 10, 5, 703471), 'created': datetime.datetime(2016, 8, 25, 14, 10, 5, 703458)}]
Why would the sqlalchemy call to nextval not return the same next val that the same call within the postgres db yields?
UPDATE: @RazerM told me about the echo=True param, which I didn't know about. With
app.config['SQLALCHEMY_ECHO']=True
I got the following from a new insert (note that on this try it fetched 10, when it should be 67):
2016-08-25 14:47:40,127 INFO sqlalchemy.engine.base.Engine select version()
2016-08-25 14:47:40,128 INFO sqlalchemy.engine.base.Engine {}
2016-08-25 14:47:40,314 INFO sqlalchemy.engine.base.Engine select current_schema()
2016-08-25 14:47:40,315 INFO sqlalchemy.engine.base.Engine {}
2016-08-25 14:47:40,499 INFO sqlalchemy.engine.base.Engine SELECT CAST('test plain returns' AS VARCHAR(60)) AS anon_1
2016-08-25 14:47:40,499 INFO sqlalchemy.engine.base.Engine {}
2016-08-25 14:47:40,594 INFO sqlalchemy.engine.base.Engine SELECT CAST('test unicode returns' AS VARCHAR(60)) AS anon_1
2016-08-25 14:47:40,594 INFO sqlalchemy.engine.base.Engine {}
2016-08-25 14:47:40,780 INFO sqlalchemy.engine.base.Engine show standard_conforming_strings
2016-08-25 14:47:40,780 INFO sqlalchemy.engine.base.Engine {}
2016-08-25 14:47:40,969 INFO sqlalchemy.engine.base.Engine BEGIN (implicit)
2016-08-25 14:47:40,971 INFO sqlalchemy.engine.base.Engine INSERT INTO schools (id, name, active, created, updated) VALUES (nextval('schools_id_seq'), %(name)s, %(active)s, %(created)s, %(updated)s) RETURNING schools.id
2016-08-25 14:47:40,971 INFO sqlalchemy.engine.base.Engine {'name': 'Testing', 'created': datetime.datetime(2016, 8, 25, 14, 47, 38, 785031), 'active': True, 'updated': datetime.datetime(2016, 8, 25, 14, 47, 38, 785050)}
2016-08-25 14:47:41,064 INFO sqlalchemy.engine.base.Engine ROLLBACK
sqlalchemy.exc.IntegrityError: (psycopg2.IntegrityError) duplicate key value violates unique constraint "schools_pkey"
DETAIL: Key (id)=(10) already exists.
[SQL: "INSERT INTO schools (id, name, active, created, updated) VALUES (nextval('schools_id_seq'), %(name)s, %(active)s, %(created)s, %(updated)s) RETURNING schools.id"] [parameters: {'updated': datetime.datetime(2016, 8, 25, 14, 54, 18, 262873), 'created': datetime.datetime(2016, 8, 25, 14, 54, 18, 262864), 'active': True, 'name': 'Testing'}]
Well, the solution in that case is simple, although it doesn't explain why this happened; for that we would have to look at the entire environment, which you can't really show us here. You could keep inserting records until the sequence reaches 67, after which inserts should succeed without error, because the sequence will have caught up to the proper value. Alternatively, you can add a server_default option to the id property first:
server_default=db.Sequence('schools_id_seq').next_value()
So
seq = db.Sequence('schools_id_seq')
And in a class:
id = db.Column(db.Integer, seq, server_default=seq.next_value(), primary_key=True)
The SQLAlchemy docs mention this as follows:
Sequence was originally intended to be a Python-side directive first and foremost so it’s probably a good idea to specify it in this way as well.
Sequences only ever move forward, so both your SELECT nextval(...) statement and SQLAlchemy's INSERT consumed (and incremented) values.
As stated in Sequence Manipulation Functions:
Advance the sequence object to its next value and return that value. This is done atomically: even if multiple sessions execute nextval concurrently, each will safely receive a distinct sequence value.
If a sequence object has been created with default parameters, successive nextval calls will return successive values beginning with 1. Other behaviors can be obtained by using special parameters in the CREATE SEQUENCE command; see its command reference page for more information.
Important: To avoid blocking concurrent transactions that obtain numbers from the same sequence, a nextval operation is never rolled back; that is, once a value has been fetched it is considered used and will not be returned again. This is true even if the surrounding transaction later aborts, or if the calling query ends up not using the value. For example an INSERT with an ON CONFLICT clause will compute the to-be-inserted tuple, including doing any required nextval calls, before detecting any conflict that would cause it to follow the ON CONFLICT rule instead. Such cases will leave unused "holes" in the sequence of assigned values. Thus, PostgreSQL sequence objects cannot be used to obtain "gapless" sequences.
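Not from the answers above, but a common way to realign a sequence that has fallen behind the table (for example after rows were inserted with explicit ids) is PostgreSQL's setval. A sketch using the db object from the question's Flask-SQLAlchemy setup:

from sqlalchemy import text

with db.engine.begin() as conn:
    # Set the sequence so the next nextval('schools_id_seq') returns MAX(id) + 1.
    conn.execute(text(
        "SELECT setval('schools_id_seq', (SELECT MAX(id) FROM schools))"
    ))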