How to append to array field with bulk_update - postgresql

I have an ArrayField in a Peewee model backed by a PostgreSQL database. How can I append to that field on conflict while upserting in bulk?
Model:
class User(Model):
    id = BigAutoField(primary_key=True, unique=True)
    tags = ArrayField(CharField, null=True)
SQL query I want to execute:
update user as u set tags = array_append(u.tags, val.tag)
from (values (2, 'test'::varchar)) as val(id, tag)
where u.id = val.id;
I've been trying to figure this out even for a single insert, but I'm facing issues with typecasting:
User.insert(user).on_conflict(
    conflict_target=[User.id],
    update={User.tags: fn.array_append(user['tag'])}
).execute()
Error I'm getting:
function array_append(unknown) does not exist
HINT: No function matches the given name and argument types. You might need to add explicit type casts.
How do I type cast the text to varchar in peewee?

You can try the following. array_append needs both the current array and the new element, and the plain Python string has to be wrapped in Value so that peewee can apply the cast:
from peewee import Value, fn

User.insert(user).on_conflict(
    conflict_target=[User.id],
    update={User.tags: fn.array_append(User.tags, Value(user['tag']).cast('varchar'))}
).execute()
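For the bulk upsert, here is a minimal sketch along the same lines (my assumption: each incoming row carries its new tag as a one-element list, so array_cat can be used in place of array_append; EXCLUDED refers to the row proposed for insertion):
from peewee import EXCLUDED, fn

rows = [{'id': 2, 'tags': ['test']}, {'id': 3, 'tags': ['other']}]
(User
 .insert_many(rows)
 .on_conflict(
     conflict_target=[User.id],
     # array_cat concatenates the existing array with the incoming one
     update={User.tags: fn.array_cat(User.tags, EXCLUDED.tags)})
 .execute())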

Related

Use UUID in Doobie SQL update

I have the following simple (cut down for brevity) Postgres table:
create table users(
    id uuid NOT NULL,
    year_of_birth smallint NOT NULL
);
Within a test I have seeded data.
When I run the following SQL update to correct a year_of_birth, the error implies that I'm not providing the UUID correctly.
The Doobie SQL I run is:
val id: String = "6ee7a37c-6f58-4c14-a66c-c17083adff81"
val correctYear: Int = 1980
sql"update users set year_of_birth = $correctYear where id = $id".update.run
I have tried both with and without quotes around the given $id, e.g. the other version is:
sql"update users set year_of_birth = $correctYear where id = '$id'".update.run
The error upon running the above is:
org.postgresql.util.PSQLException: ERROR: operator does not exist: uuid = character varying
Hint: No operator matches the given name and argument type(s). You might need to add explicit type casts.
Both comments provided viable solutions.
a_horse_with_no_name suggested the use of a cast, which works, though the SQL becomes not as nice as the other solution.
AminMal suggested the use of available Doobie implicits which can handle a UUID within SQL and thus avoid a cast.
So I changed my code to the following:
import java.util.UUID
import doobie.postgres.implicits._

val id: UUID = UUID.fromString("6ee7a37c-6f58-4c14-a66c-c17083adff81")
sql"update users set year_of_birth = $correctYear where id = $id".update.run
So I'd like to mark this question as resolved because of the comment provided by AminMal.

Slick Postgres: How to search in a list of strings using the like operator

There is a simple database entity:
case class Foo(id: Option[UUID], keywords: Seq[String])
I want to implement a search function which returns all entities of type Foo which have at least one keyword that contains the search string.
I'm using Slick and tried this:
def searchKeywords(txt: String): Future[Seq[Foo]] = {
  val action = Foos.filter(p => p.keywords.any like s"%$txt%").result
  db.run(action)
}
This piece of code compiles, but when executing, I get this SQL error:
PSQLException: ERROR: syntax error at or near "any"
The generated sql statement looks like:
select "id", "title", "tagline", "logo", "short_desc", "keywords", "initial_condition", "work_process", "end_result", "ts", "lm", "v" from "projects" where any("keywords") like '%foo%'
And it does not work with PostgreSQL (I'm using v12).
Schema for the table looks like this:
CREATE TABLE foos
(
    id UUID NOT NULL PRIMARY KEY,
    keywords varchar[] NOT NULL
);
How can I search in a list of strings using the like operator?
From a pure SQL point of view, you need a derived table to achieve that. I hope some expert corrects me if I'm wrong, but you can't use the SQL like operator on an array.
Supposing your table definition is:
CREATE TABLE foos
(
    id UUID NOT NULL PRIMARY KEY,
    keywords varchar[] NOT NULL
);
Then an SQL way of retrieving the results would be:
select * from (
    select id, unnest(keywords) as keyw from foos
) myTable where keyw like '%foo%'
Otherwise, the syntax you're using for the like operator seems correct:
myProperty like s"%$myVariable%"
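If you need to stay inside Slick, one option is to drop down to the plain SQL interpolator for this one query. A sketch, assuming the foos table above (distinct guards against duplicate ids when several keywords match; ::text just makes the uuid easy to read back):
import slick.jdbc.PostgresProfile.api._

def searchKeywordIds(txt: String): DBIO[Seq[String]] =
  sql"""select distinct id::text
        from (select id, unnest(keywords) as keyw from foos) t
        where t.keyw like ${"%" + txt + "%"}""".as[String]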

execute "insert" raw sql query with on conflict "ignore" replacing bulk create in django rest framework

This is the listserializer I am using to create multiple objects.
class listSerializer(serializers.ListSerializer):
    def create(self, validated_data):
        objs = [klass(**item) for item in validated_data]
        return klass.objects.bulk_create(objs)
But bulk_create throws a unique key violation error in Postgres, so I need to execute this raw SQL in the same create function. I need help translating validated_data into this SQL query:
insert into table (count,value,type,mark,created_at) values
(3,32,2,162,CURRENT_TIMESTAMP),
(4,33,1,162,CURRENT_TIMESTAMP),
(3,33,1,162,CURRENT_TIMESTAMP)
on CONFLICT do nothing
from django.db import connection

class listSerializer(serializers.ListSerializer):
    def create(self, validated_data):
        # Build one "(%s, %s, %s, %s, CURRENT_TIMESTAMP)" group per row and
        # pass the values as query parameters so psycopg2 handles quoting.
        row_placeholder = '(%s, %s, %s, %s, CURRENT_TIMESTAMP)'
        placeholders = ', '.join([row_placeholder] * len(validated_data))
        params = [value for data in validated_data
                  for value in (data['count'], data['value'], data['type'], data['mark'])]
        with connection.cursor() as cursor:
            cursor.execute(
                'insert into table (count,value,type,mark,created_at) values '
                + placeholders + ' on conflict do nothing',
                params,
            )
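If you are on Django 2.2 or newer, note that bulk_create can emit the same ON CONFLICT DO NOTHING clause itself, which avoids the raw SQL entirely:
return klass.objects.bulk_create(objs, ignore_conflicts=True)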

mybatis - Passing multiple parameters on @One annotation

I am trying to access a table in my Secondary DB whose name I am obtaining from my Primary DB. My difficulty is passing the "DB-Name" as a parameter into my secondary query (BTW, I am using MyBatis annotation-based mappers).
This is my Mapper
@SelectProvider(type = DealerQueryBuilder.class, method = "retrieveDealerListQuery")
@Results({
    @Result(property="dealerID", column="frm_dealer_master_id"),
    @Result(property="dealerTypeID", column="frm_dealer_type_id", one=@One(select="retrieveDealerTypeDAO")),
    @Result(property="dealerName", column="frm_dealer_name")
})
public List<Dealer> retrieveDealerListDAO(@Param("firmDBName") String firmDBName);
#Select("SELECT * from ${firmDBName}.frm_dealer_type where frm_dealer_type_id=#{frm_dealer_type_id}")
#Results({
#Result(property="dealerTypeID", column="frm_dealer_type_id"),
#Result(property="dealerType", column="frm_dealer_type")
})
public DealerType retrieveDealerTypeDAO(#Param("firmDBName") String firmDBName, #Param("frm_dealer_type_id") int frm_dealer_type_id);
The firmDBName I have is obtained from my "Primary DB".
If I omit ${firmDBName} in my second query, the query tries to access my Primary DB and throws table "PrimaryDB.frm_dealer_type" not found. So it is basically searching for a table named "frm_dealer_type" in my Primary DB.
If I try to re-write the @Result like
@Result(property="dealerTypeID", column="firmDBName=firmDBName, frm_dealer_type_id=frm_dealer_type_id", one=@One(select="retrieveDealerTypeDAO")),
it throws an error that column "firmDBName" does not exist.
Changing ${firmDBName} to #{firmDBName} also did not help.
I did refer to this blog - here
I want a solution to pass my parameter firmDBName from my primary query into secondary query.
The limitation here is that your column must be returned by the first @Select.
If you look at the test case here, you will see that the parent_xxx values are returned by the first select.
Your DealerQueryBuilder must select firmDBName as a return value, and your column must map the name of the returned column to that.
Your column definition is also wrong; it should be:
{frm_dealer_type_id=frm_dealer_type_id, firmDBName=firmDBName}
or whatever it was returned as from your first select.
Again you can refer to the test case I have above as well as the documentation here http://www.mybatis.org/mybatis-3/sqlmap-xml.html#Nested_Select_for_Association
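Putting that together, a sketch of what the corrected mapper might look like (assuming retrieveDealerListQuery is amended so that the first select also returns the database name as a firmDBName column):
@SelectProvider(type = DealerQueryBuilder.class, method = "retrieveDealerListQuery")
@Results({
    @Result(property="dealerID", column="frm_dealer_master_id"),
    // Composite column syntax: both values must exist as columns in the
    // first result set and are passed through to the nested select.
    @Result(property="dealerTypeID",
            column="{firmDBName=firmDBName, frm_dealer_type_id=frm_dealer_type_id}",
            one=@One(select="retrieveDealerTypeDAO")),
    @Result(property="dealerName", column="frm_dealer_name")
})
public List<Dealer> retrieveDealerListDAO(@Param("firmDBName") String firmDBName);
In the query builder, that can be as simple as appending something like "'" + firmDBName + "' AS firmDBName" to the select list, so the column actually exists in the first result set.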

Default value doesn't work in SQLAlchemy + PostgreSQL + aiopg + psycopg2

I've found an unexpected behavior in SQLAlchemy. I'm using the following versions:
SQLAlchemy (0.9.8)
PostgreSQL (9.3.5)
psycopg2 (2.5.4)
aiopg (0.5.1)
This is the table definition for the example:
import asyncio
from aiopg.sa import create_engine
from sqlalchemy import (
    MetaData,
    Column,
    Integer,
    Table,
    String,
)

metadata = MetaData()

users = Table('users', metadata,
    Column('id_user', Integer, primary_key=True, nullable=False),
    Column('name', String(20), unique=True),
    Column('age', Integer, nullable=False, default=0),
)
Now if I try to execute a simple insert into the table, populating just id_user and name, the column age should be auto-generated, right? Let's see...
@asyncio.coroutine
def go():
    engine = yield from create_engine('postgresql://USER@localhost/DB')
    data = {'id_user': 1, 'name': 'Jimmy'}
    stmt = users.insert(values=data, inline=False)
    with (yield from engine) as conn:
        result = yield from conn.execute(stmt)

loop = asyncio.get_event_loop()
loop.run_until_complete(go())
This is the resulting statement with the corresponding error:
INSERT INTO users (id_user, name, age) VALUES (1, 'Jimmy', null);
psycopg2.IntegrityError: null value in column "age" violates not-null constraint
I didn't provide the age column, so where is that age = null value coming from? I was expecting something like this:
INSERT INTO users (id_user, name) VALUES (1, 'Jimmy');
Or, if the default flag actually works, it should be:
INSERT INTO users (id_user, name, age) VALUES (1, 'Jimmy', 0);
Could you shed some light on this?
This issue has been confirmed as an aiopg bug. It seems that, at the moment, it ignores the default argument on data manipulation.
I've fixed the issue using server_default instead:
users = Table('users', metadata,
    Column('id_user', Integer, primary_key=True, nullable=False),
    Column('name', String(20), unique=True),
    Column('age', Integer, nullable=False, server_default='0'))
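With server_default, the default lives in the table definition itself, so the database fills in age even when the INSERT omits it. The generated DDL should look something like:
CREATE TABLE users (
    id_user INTEGER NOT NULL,
    name VARCHAR(20),
    age INTEGER DEFAULT '0' NOT NULL,
    PRIMARY KEY (id_user),
    UNIQUE (name)
)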
I think you need to use inline=True in your insert. This turns off 'pre-execution'.
The docs are a bit cryptic about what exactly this 'pre-execution' entails, but they mention default parameters:
:param inline:
if True, SQL defaults present on :class:`.Column` objects via
the ``default`` keyword will be compiled 'inline' into the statement
and not pre-executed. This means that their values will not
be available in the dictionary returned from
:meth:`.ResultProxy.last_updated_params`.
This piece of docstring is from the Update class, but it shares this behavior with Insert.
Besides, that's the only way they test it:
https://github.com/zzzeek/sqlalchemy/blob/rel_0_9/test/sql/test_insert.py#L385
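Applied to the question's coroutine, the suggested change is a one-liner (a sketch: only the inline flag differs, so the column default is rendered into the INSERT instead of being pre-executed client-side):
stmt = users.insert(values=data, inline=True)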