How do you index a text column in PostgreSQL?

In MySQL, a key length can be added as a type modifier in parentheses after the column name, colname(length), and supplied to CREATE INDEX like this:
CREATE INDEX foo_bar_idx ON foo ( bar(500) );
Scenario:
MySQL custom DB API function:
def add_index(self, doctype, fields, index_name=None):
    """Creates an index with given fields if not already created.
    Index name will be `fieldname1_fieldname2_index`"""
    index_name = index_name or self.get_index_name(fields)
    table_name = 'tab' + doctype
    if not self.has_index(table_name, index_name):
        self.commit()
        self.sql("""ALTER TABLE `%s`
            ADD INDEX `%s`(%s)""" % (table_name, index_name, ", ".join(fields)))
This is used for MySQL as:
hotelier.db.add_index("Item", ["route(500)"])
PostgreSQL custom DB API function:
def add_index(self, doctype, fields, index_name=None):
    """Creates an index with given fields if not already created.
    Index name will be `fieldname1_fieldname2_index`"""
    index_name = index_name or self.get_index_name(fields)
    table_name = 'tab' + doctype
    self.commit()
    self.sql("""CREATE INDEX IF NOT EXISTS "{}" ON "{}"("{}")""".format(index_name, table_name, '", "'.join(fields)))
How do I call the same thing in PostgreSQL?
hotelier.db.add_index("Item", ["route(500)"])
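PostgreSQL does not accept a MySQL-style prefix length like route(500), but it does support indexes on expressions, so the usual substitute is left(route, 500). Below is a minimal sketch of the PostgreSQL add_index, with a hypothetical to_term helper that rewrites the MySQL-style modifier:

import re

def add_index(self, doctype, fields, index_name=None):
    """Creates an index with given fields if not already created.
    MySQL-style length modifiers such as `route(500)` are rewritten
    as expression-index terms: left("route", 500)."""
    def to_term(field):
        # Hypothetical helper: "route(500)" -> left("route", 500)
        m = re.match(r'^(\w+)\((\d+)\)$', field)
        if m:
            return 'left("{}", {})'.format(m.group(1), m.group(2))
        return '"{}"'.format(field)

    index_name = index_name or self.get_index_name(fields)
    table_name = 'tab' + doctype
    self.commit()
    self.sql('CREATE INDEX IF NOT EXISTS "{}" ON "{}" ({})'.format(
        index_name, table_name, ', '.join(to_term(f) for f in fields)))

The call hotelier.db.add_index("Item", ["route(500)"]) then works unchanged on both backends. Note that, unlike a MySQL prefix index, PostgreSQL will only use this index for predicates written against the same left(route, 500) expression.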

Related

The column index is out of range: 2, number of columns: 1 error while updating jsonb column

I am trying to update a jsonb column in Java with MyBatis.
Following is my mapper method:
#Update("update service_user_assn set external_group = external_group || '{\"service_name\": \"#{service_name}\" }' where user=#{user} " +
" and service_name= (select service_name from services where service_name='Google') " )
public int update(#Param("service_name")String service_name,#Param("user") Integer user);
I am getting the following error while updating the jsonb (external_group) column.
### Error updating database. Cause: org.postgresql.util.PSQLException: The column index is out of range: 2, number of columns: 1.
### The error may involve com.apds.mybatis.mapper.ServiceUserMapper.update-Inline
I am able to update non-jsonb columns the same way.
Also, if I put in a hardcoded value, it works for jsonb columns.
How do I solve this error while updating a jsonb column?
You should not enclose #{} in single quotes, because it then becomes part of a literal rather than a placeholder, i.e.
external_group = external_group || '{"service_name": "?"}' where ...
So, there will be only one placeholder in the PreparedStatement and you get the error.
The correct way is to concatenate the #{} in SQL.
You may also need to cast the literal to jsonb type explicitly.
@Update({
    "update service_user_assn set",
    "external_group = external_group",
    "|| ('{\"service_name\": \"' || #{service_name} || '\" }')::jsonb",
    "where user=#{user} and",
    "service_name= (select service_name from services where service_name='Google')"})
The SQL being executed would look as follows.
external_group = external_group || ('{"service_name": "' || ? || '"}')::jsonb where ...
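The same pitfall applies with any driver that uses prepared-statement binding: a marker inside a string literal is just text. For comparison, here is a psycopg2 sketch of the same fix (connection details and values are made up; "user" is quoted because it is a reserved word in Postgres):

import psycopg2

conn = psycopg2.connect(dbname='mydb')  # hypothetical connection
cur = conn.cursor()
# Both %s markers sit outside the quoted literal, so psycopg2 binds them;
# the fragment is assembled with || and cast to jsonb, as in the answer.
cur.execute(
    """update service_user_assn
          set external_group = external_group
              || ('{"service_name": "' || %s || '"}')::jsonb
        where "user" = %s""",
    ('Google', 42),
)
conn.commit()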

What is the correct syntax for a MERGE statement in PostgreSQL 9.6.2?

Code: it is a MERGE query running on Postgres 9.6.2, and it gives a syntax error.
MERGE INTO timesheets.timesheet_report AS tgt
USING timesheets.tmp_timesheet_report AS src
   ON src.FMNo = tgt.FMNo
  AND src.ts_start_dt = tgt.ts_start_dt
  AND src.charge_code = tgt.charge_code
WHEN NOT MATCHED
  INSERT (tgt.FIRST_NAME, tgt.LAST_NAME)
  VALUES (src.FIRST_NAME, src.LAST_NAME)
WHEN MATCHED
  UPDATE SET tgt.FIRST_NAME = src.FIRST_NAME,
             tgt.LAST_NAME = src.LAST_NAME;
PostgreSQL 9.6 does not support MERGE; the equivalent construct is INSERT ... ON CONFLICT:
INSERT INTO table_name [your usual insert syntax here]
ON CONFLICT [some conflict definition]
DO UPDATE SET column1 = EXCLUDED.value1
So I guess your query would look like this:
INSERT INTO timesheets.timesheet_report (FMNo, ts_start_dt, charge_code, FIRST_NAME, LAST_NAME)
SELECT src.FMNo, src.ts_start_dt, src.charge_code, src.FIRST_NAME, src.LAST_NAME FROM timesheets.tmp_timesheet_report AS src
ON CONFLICT (FMNo, ts_start_dt, charge_code)
DO UPDATE
SET FIRST_NAME = EXCLUDED.FIRST_NAME,
LAST_NAME = EXCLUDED.LAST_NAME;
If you don't have a primary key or unique index on those columns, you need to create one first:
CREATE UNIQUE INDEX ON timesheets.timesheet_report USING btree (FMNo, ts_start_dt, charge_code);

AS400 index configuration table

How can I view the indexes of a particular table on AS400? In which system table is the index description of a table stored?
If your "index" is really a logical file, you can see a list of these using:
select * from qsys2.systables
where table_schema = 'YOURLIBNAME' and table_type = 'L'
To complete the previous answers: if your AS400/IBM i files are IBM's old-style physical and logical files, qsys2.syskeys and qsys2.sysindexes are empty.
In that case you retrieve the index info from the QADBXREF table (index descriptions) and QADBKFLD (key field lists):
select * from QSYS.QADBXREF where DBXFIL = 'YOUR_LOGICAL_FILE_NAME' and DBXLIB = 'YOUR_LIBRARY'
select * from QSYS.QADBKFLD where DBKFIL = 'YOUR_LOGICAL_FILE_NAME' and DBKLB2 = 'YOUR_LIBRARY'
WARNING: YOUR_LOGICAL_FILE_NAME is not your table name, but the name of the file! You have to join another table, QSYS.QADBFDEP, to match LOGICAL_FILE_NAME to TABLE_NAME.
To find the indexes from your table's name:
Select r.*
from QSYS.QADBXREF r, QSYS.QADBFDEP d
where d.DBFFDP = r.DBXFIL and d.DBFLIB=r.DBXLIB
and d.DBFFIL = 'YOUR_TABLE_NAME' and d.DBFLIB = 'YOUR_LIBRARY'
To find all index fields for your table:
Select DBXFIL , f.DBKFLD, DBKPOS , t.DBXUNQ
from QSYS.QADBXREF t
INNER JOIN QSYS.QADBKFLD f on DBXFIL = DBKFIL and DBXLIB = DBKLIB
INNER JOIN QSYS.QADBFDEP d on d.DBFFDP = t.DBXFIL and d.DBFLIB=t.DBXLIB
where d.DBFFIL = 'YOUR_TABLE_NAME' and d.DBFLIB = 'YOUR_LIBRARY'
order by DBXFIL, DBKPOS
If your indexes were created with SQL, you can see the list of indexes in the sysindexes system view:
SELECT * FROM qsys2.sysindexes WHERE TABLE_SCHEMA='YOURLIBNAME' and
TABLE_NAME = 'YOURTABLENAME'
If you want the detail columns for an index, you can join the syskeys table:
SELECT KEYS.INDEX_NAME, KEYS.COLUMN_NAME
FROM qsys2.syskeys KEYS
JOIN qsys2.sysindexes IX ON KEYS.ixname = IX.name
WHERE TABLE_SCHEMA='YOURLIBNAME' and TABLE_NAME = 'YOURTABLENAME'
order by INDEX_NAME
You could also use commands to get the information. Command DSPDBR FILE(LIBNAME/FILENAME) will show a list of the objects dependent on a physical file. The objects that show a data dependency can then be further explored by running DSPFD FILE(LIBNAME/FILENAME). This will show the access paths of the logical file.

Psycopg2 insert python dictionary in postgres database

In Python 3+, I want to insert values from a dictionary (or pandas DataFrame) into a database. I have opted for psycopg2 with a Postgres database.
The problem is that I cannot figure out the proper way to do this. I can easily concatenate an SQL string to execute, but the psycopg2 documentation explicitly warns against this. Ideally I wanted to do something like this:
cur.execute("INSERT INTO table VALUES (%s);", dict_data)
and hoped that execute could figure out that the keys of the dict match the columns in the table. This did not work. From the examples in the psycopg2 documentation I got to this approach:
cur.execute("INSERT INTO table (" + ", ".join(dict_data.keys()) + ") VALUES (" + ", ".join(["%s" for pair in dict_data]) + ");", dict_data)
from which I get a
TypeError: 'dict' object does not support indexing
What is the most pythonic way of inserting a dictionary into a table with matching column names?
Two solutions:
from psycopg2.extensions import AsIs

d = {'k1': 'v1', 'k2': 'v2'}
insert = 'insert into table (%s) values %s'
l = [(c, v) for c, v in d.items()]
columns = ','.join([t[0] for t in l])
values = tuple([t[1] for t in l])
cursor = conn.cursor()
print(cursor.mogrify(insert, ([AsIs(columns)] + [values])))
keys = d.keys()
columns = ','.join(keys)
values = ','.join(['%({})s'.format(k) for k in keys])
insert = 'insert into table ({0}) values ({1})'.format(columns, values)
print(cursor.mogrify(insert, d))
Output:
insert into table (k2,k1) values ('v2', 'v1')
insert into table (k2,k1) values ('v2','v1')
I sometimes run into this issue, especially with respect to JSON data, which I naturally want to deal with as a dict. Very similar, but maybe a little more readable?
def do_insert(rec: dict):
    cols = rec.keys()
    cols_str = ','.join(cols)
    vals = [rec[k] for k in cols]
    vals_str = ','.join(['%s' for i in range(len(vals))])
    sql_str = """INSERT INTO some_table ({}) VALUES ({})""".format(cols_str, vals_str)
    cur.execute(sql_str, vals)
I typically call this type of thing from inside an iterator, usually wrapped in a try/except. Either the cursor (cur) is already defined in an outer scope, or one can amend the function signature and pass a cursor instance in; I rarely insert just a single row. Like the other solutions, this allows for missing cols/values, provided the underlying schema allows for it too. As long as the dict underlying the keys view is not modified while the insert is taking place, there's no need to specify keys by name, as the values will be ordered as they are in the keys view.
[Suggested answer/workaround - better answers are appreciated!]
After some trial/error I got the following to work:
sql = "INSERT INTO table (" + ", ".join(dict_data.keys()) + ") VALUES (" + ", ".join(["%("+k+")s" for k in dict_data]) + ");"
This gives the sql string
"INSERT INTO table (k1, k2, ... , kn) VALUES (%(k1)s, %(k2)s, ... , %(kn)s);"
which may be executed by
with psycopg2.connect(database='deepenergy') as con:
    with con.cursor() as cur:
        cur.execute(sql, dict_data)
Pros/cons?
Using %(name)s placeholders may solve the problem:
dict_data = {'key1': val1, 'key2': val2}
cur.execute("""INSERT INTO table (field1, field2)
               VALUES (%(key1)s, %(key2)s);""",
            dict_data)
You can find the usage in the psycopg2 documentation, under Passing parameters to SQL queries.
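One caveat with all of the approaches above: the column names are spliced into the SQL with plain string formatting, which is only safe when the dict keys are trusted. Since psycopg2 2.7, the psycopg2.sql module can compose identifiers safely while the values still go through normal parameter binding; a minimal sketch (function and argument names are my own):

from psycopg2 import sql

def insert_dict(cur, table, rec):
    # Identifiers (table and column names) are quoted by psycopg2.sql;
    # the values are still bound as %(name)s parameters from the dict.
    query = sql.SQL("INSERT INTO {} ({}) VALUES ({})").format(
        sql.Identifier(table),
        sql.SQL(', ').join(sql.Identifier(k) for k in rec),
        sql.SQL(', ').join(sql.Placeholder(k) for k in rec),
    )
    cur.execute(query, rec)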
Here is another solution, inserting a dictionary directly.
Product Model (has the following database columns)
name
description
price
image
digital - (defaults to False)
quantity
created_at - (defaults to current date)
Solution:
data = {
    "name": "product_name",
    "description": "product_description",
    "price": 1,
    "image": "https",
    "quantity": 2,
}
cur = conn.cursor()
cur.execute(
    "INSERT INTO products (name, description, price, image, quantity) "
    "VALUES (%(name)s, %(description)s, %(price)s, %(image)s, %(quantity)s)", data
)
conn.commit()
conn.close()
Note: the columns to be inserted are specified in the execute statement (... INTO products (column names to be filled) VALUES ...), and data is the dictionary. Because the placeholders are named, the dict only needs keys with matching names; the key order does not matter.

Firebird dynamic Var New and Old

I need to validate dynamic fields of a table. For example:
CREATE TRIGGER BU_TPROYECTOS FOR TPROYECTOS
BEFORE UPDATE AS
DECLARE VARIABLE vCAMPO VARCHAR(64);
BEGIN
  /* The table "TCAMPOS" holds the fields to validate */
  for select CAMPO from TCAMPOS where TABLA = 'TPROYECTOS' and ACTUALIZA = 'V' into :vCAMPO do
  begin
    if (New.:vCAMPO <> Old.:vCAMPO) then
    /* How do I get the dynamic New.Field1, New.Field2 from the query's result? */
  end;
END;
The question is: how can I use the name of the field that the query returns in the code above?
I.e., if the query returns FIELD1 and FIELD5, the trigger should contain:
if (New.Field1 <> Old.Field1) or (New.Field5 <> Old.Field5) then
There is no such feature in Firebird. You will need to create (and preferably generate) triggers that reference all fields hard-coded. If the underlying table or the validation requirements change, you will need to recreate the trigger to take the added or removed fields into account.
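If you go the generation route, a small script can read TCAMPOS and emit the hard-coded comparison list for you. A rough sketch in Python, assuming a qmark-style Firebird driver such as fdb (the helper name and connection details are hypothetical):

def build_validation_condition(cur, table_name):
    # Read the fields flagged for validation from TCAMPOS and build the
    # hard-coded (New.X <> Old.X) list that the trigger needs.
    cur.execute(
        "select CAMPO from TCAMPOS where TABLA = ? and ACTUALIZA = 'V'",
        (table_name,),
    )
    fields = [row[0] for row in cur.fetchall()]
    return ' or '.join('(New.{0} <> Old.{0})'.format(f) for f in fields)

The resulting string can then be spliced into a CREATE OR ALTER TRIGGER statement and re-run whenever TCAMPOS changes.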