Expressing PostgreSQL VALUES command in SQLAlchemy ORM?

How to express the query
VALUES ('alice'), ('bob') EXCEPT ALL SELECT name FROM users;
(i.e. "list all names in VALUES that are not in table 'users'") in SQLAlchemy ORM? In other words, what should the statement 'X' below be like?
def check_for_existence_of_all_users_in_list(list):
    logger.debug(f"checking that each user in {list} is in the database")
    query = X(list)
(There is sqlalchemy.values, which could be used like this:
query = sa.values(sa.column('name', sa.String)).data([('alice',), ('bob',)])  # .???
but it appears that it can only be used as an argument to INSERT or UPDATE.)
I am using SQLAlchemy 1.4.4.

This should work for you:
from sqlalchemy import String, column, select, values

user_names = ['alice', 'bob']
q = values(column('name', String), name="temp_names").data([(name,) for name in user_names])
query = select(q).except_all(select(users.c.name))  # 'users' is a Table instance
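For completeness, here is a minimal self-contained sketch of the whole check, assuming a simple users table and a plain Core connection (the table definition and helper name are illustrative, not from the question):

import sqlalchemy as sa
from sqlalchemy import String, column, select, values

metadata = sa.MetaData()
# Hypothetical users table, for illustration only
users = sa.Table('users', metadata, sa.Column('name', sa.String, primary_key=True))

def missing_users(connection, user_names):
    # VALUES ('alice'), ('bob') as a named values construct
    v = values(column('name', String), name="temp_names").data(
        [(name,) for name in user_names]
    )
    # VALUES ... EXCEPT ALL SELECT name FROM users
    query = select(v).except_all(select(users.c.name))
    return [row.name for row in connection.execute(query)]

An empty return value then means every name in the list already exists in users.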

Related

SQLAlchemy asyncio: translate a Postgres query with a GROUP BY clause

I want to translate the Postgres query below into SQLAlchemy asyncio format, but so far I can only retrieve the first column, or the whole row at once, while I need exactly two columns per record:
SELECT
    table.xml_uri,
    max(table.created_at) AS max_1
FROM
    table
GROUP BY
    table.xml_uri
ORDER BY
    max_1 DESC;
I arrived at the translation below, but it only returns the first column, xml_uri, while I need both columns. I have left the order_by clause commented out for now, as it also generates the error below when enabled:
SQLAlchemy query:
from sqlalchemy import func, select
from sqlalchemy.ext.asyncio import AsyncSession

query = "%{}%".format(query)
records = await session.execute(
    select(BaseModel.xml_uri, func.max(BaseModel.created_at))
    .order_by(BaseModel.created_at.desc())
    .group_by(BaseModel.xml_uri)
    .filter(BaseModel.xml_uri.like(query))
)
# Get all the records
result = records.scalars().all()
Error generated when the order_by clause is enabled:
column "table.created_at" must appear in the GROUP BY clause or be used in an aggregate function
The query returns a result set consisting of two-element tuples, and Result.scalars() takes only the first element of each tuple. Iterating the result of session.execute() directly, without calling .scalars(), provides the desired behaviour.
It's not permissible to order by the date field directly, as it isn't part of the projection, but you can give the max column a label and use that to order.
Here's an example script:
import sqlalchemy as sa
from sqlalchemy import orm

Base = orm.declarative_base()

class MyModel(Base):
    __tablename__ = 't73018397'
    id = sa.Column(sa.Integer, primary_key=True)
    code = sa.Column(sa.String)
    value = sa.Column(sa.Integer)

engine = sa.create_engine('postgresql:///test', echo=True, future=True)
Base.metadata.drop_all(engine)
Base.metadata.create_all(engine)
Session = orm.sessionmaker(engine, future=True)

with Session.begin() as s:
    for i in range(10):
        # Alternate codes: 'B' for even i, 'A' for odd i
        code = 'AB'[i % 2 == 0]
        s.add(MyModel(code=code, value=i))

with Session() as s:
    q = (
        sa.select(MyModel.code, sa.func.max(MyModel.value).label('mv'))
        .group_by(MyModel.code)
        .order_by(sa.text('mv desc'))
    )
    res = s.execute(q)
    for row in res:
        print(row)
which generates this query:
SELECT
    t73018397.code,
    max(t73018397.value) AS mv
FROM t73018397
GROUP BY t73018397.code
ORDER BY mv desc
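Applied back to the asynchronous query from the question, the same labelling approach might look like this (a sketch: BaseModel and session are from the question, while the label name and search_term are illustrative):

from sqlalchemy import desc, func, select

pattern = "%{}%".format(search_term)
records = await session.execute(
    select(BaseModel.xml_uri, func.max(BaseModel.created_at).label('max_created'))
    .filter(BaseModel.xml_uri.like(pattern))
    .group_by(BaseModel.xml_uri)
    .order_by(desc('max_created'))  # order by the labelled aggregate, not the raw column
)
# Iterate the two-element rows directly; do not call .scalars()
for xml_uri, max_created in records.all():
    print(xml_uri, max_created)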

What is the SQL query equivalent to a SQLAlchemy query?

I am having issues iterating over results from a SQL query using the encode databases package (https://pypi.org/project/databases/),
but this SQLAlchemy query works fine for the Celery tasks:
query = session.query(orders).filter(orders.shipped == True)
I have tried the following (from "celery task unable to iterate over multiple rows from postgresql database with python"), but it does not work:
def check_all_orders():
    query = "SELECT * FROM orders WHERE shipped=True"
    return database.fetch_all(query)

...

@app.task
async def check_orders():
    query = await check_all_orders()
    today = datetime.utcnow()
    for q in query:
        if q.last_notification is not None:
            if (today - q.last_notification).total_seconds() < q.cooldown:
                continue
Does anyone know what SQL statement will generate a result I can iterate over, the way this SQLAlchemy query does?
query = session.query(orders).filter(orders.shipped == True)
SQLAlchemy's resulting query depends on the particular backend engine you use. For example, filter(orders.shipped == True) may be rendered as something like WHERE shipped = 't' for PostgreSQL. You can always log the query SQLAlchemy sends to the database backend. For your particular case, SELECT * FROM orders WHERE shipped should be enough.
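If you want to see the exact SQL, two common options are shown below (a sketch; the engine URL is illustrative):

import sqlalchemy as sa

# Option 1: make the engine log every statement it executes
engine = sa.create_engine("postgresql:///test", echo=True)

# Option 2: render an ORM query's SQL without executing it
query = session.query(orders).filter(orders.shipped == True)
print(str(query))  # prints the SELECT with bound-parameter placeholders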

Slick: Insert into a Table from a Raw SQL Select

val rawSql: DBIO[Vector[(String, String)]] =
  sql"SELECT id, name FROM SomeTable".as[(String, String)]

val myTable: TableQuery[MyClass] // with columns id (String), name (String) and some other columns
Is there a way to use forceInsert functions to insert data from select into the tables?
If not, Is there a way to generate a sql string by using forceInsertStatements?
Something like:
db.run {
  myTable.map { t => (t.id, t.name) }.forceInsert????(rawSql)
}
P.S. I don't want to make two I/O calls, because my raw SQL might be returning thousands of records.
Thanks for the help.
If you can represent your rawSql query as a Slick query instead...
val query = someTable.map(row => (row.id, row.name))
...for example, then forceInsertQuery will do what you need. An example might be:
val action =
  myTable.map(row => (row.someId, row.someName))
    .forceInsertQuery(query)
However, I presume you're using raw SQL for a good reason. In that case, I don't believe you can use forceInsert (without a round-trip to the database) because the raw SQL is already an action (not a query).
But, as you're using raw SQL, why not do the whole thing in raw SQL? Something like:
val rawEverything =
  sqlu"insert into mytable (someId, someName) select id, name from sometable"
...or similar.

DB2 Update statement not working using JDBC

I have a few rows stored in a source table (defined as $schema.$sourceTable in the UPDATE query below). This table has three columns: TABLE_NAME, PERMISSION_TAG_COL, PT_DEPLOYED.
I have an update statement stored in a string like:
var update_PT_Deploy = s"UPDATE $schema.$sourceTable SET PT_DEPLOYED = 'Y' WHERE TABLE_NAME = '$tableName';"
My source table does have rows with TABLE_NAME equal to $tableName (a parameter), as I inserted them using another function of my program. The default value of PT_DEPLOYED when I inserted the rows was NULL.
I'm trying to execute update using JDBC in the following manner:
println(update_PT_Deploy)
val preparedStatement: PreparedStatement = connection.prepareStatement(update_PT_Deploy)
val row = preparedStatement.execute()
println(row)
println("row updated in table successfully")
preparedStatement.close()
The above piece of code does not throw any exception, but when I query the table in a tool like DBeaver, the NULL value of PT_DEPLOYED is not updated to Y.
If I execute the same query as in update_PT_Deploy directly in DBeaver, it works and the table updates. I am sure I am following the correct steps.

How to use SQLAlchemy ilike on a PostgreSQL array field?

I'm trying to match some names against the names I already have in my PostgreSQL database, using the following code:
last_like = '{}%'.format(last_name)
matches = Session.query(MyTable).filter(or_(
    MyTable.name.ilike(last_like),
    MyTable.other_names.any(last_like, operator=ilike_op),
)).all()
Essentially it tries to match the name column or any of the other names stored as an array in the other_names column.
But I get:
KeyError: <function ilike_op at 0x7fb5269e2500>
What am I doing wrong?
To search a PostgreSQL array field you would normally use the unnest() function, but you can't use the result of unnest() in a WHERE clause. Instead, you can use the array_to_string() function: searching the string form of other_names gives the same effect.
from sqlalchemy import or_, func as F

last_like = "%qq%"
matches = session.query(MyTable).filter(or_(
    MyTable.name.ilike(last_like),
    F.array_to_string(MyTable.other_names, ',').ilike(last_like),
)).all()
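For reference, the same filter in 2.0-style select() syntax, which also works on SQLAlchemy 1.4 (a sketch, reusing MyTable and the session from above):

from sqlalchemy import func, or_, select

last_like = "%qq%"
stmt = select(MyTable).where(or_(
    MyTable.name.ilike(last_like),
    func.array_to_string(MyTable.other_names, ',').ilike(last_like),
))
matches = session.execute(stmt).scalars().all()

One caveat of the array_to_string approach: because the array is joined into a single string, a pattern containing the separator can, in principle, match across element boundaries.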