I would like to create a computed column, ProfileApplication, from the code below.
SUBSTRING(SUBSTRING(P.propertyvaluesstring,
              CHARINDEX('ProfileApplication', P.propertyvaluesstring),
              CHARINDEX('</ProfileApplication', P.propertyvaluesstring)
                - CHARINDEX('ProfileApplication', P.propertyvaluesstring)),
          CHARINDEX('>', SUBSTRING(P.propertyvaluesstring,
              CHARINDEX('ProfileApplication', P.propertyvaluesstring),
              CHARINDEX('</ProfileApplication', P.propertyvaluesstring)
                - CHARINDEX('ProfileApplication', P.propertyvaluesstring))) + 1,
          LEN(SUBSTRING(P.propertyvaluesstring,
              CHARINDEX('ProfileApplication', P.propertyvaluesstring),
              CHARINDEX('</ProfileApplication', P.propertyvaluesstring)
                - CHARINDEX('ProfileApplication', P.propertyvaluesstring)))) AS ProfileApplication,
Again, I would like to use ProfileApplication in another query as a computed column. I am not sure, but would it be possible?
SUBSTRING
(SUBSTRING
(P.ProfileApplication,
CHARINDEX('RequisitionStartDate', P.ProfileApplication),
CHARINDEX('</RequisitionStartDate',P.ProfileApplication) -
CHARINDEX('RequisitionStartDate',P.ProfileApplication)
),
CHARINDEX('>', SUBSTRING(P.ProfileApplication,
CHARINDEX('RequisitionStartDate', P.ProfileApplication),
CHARINDEX('</RequisitionStartDate',P.ProfileApplication) -
CHARINDEX('RequisitionStartDate', P.ProfileApplication))) + 1,
LEN(SUBSTRING(P.ProfileApplication,
CHARINDEX('RequisitionStartDate',P.ProfileApplication),
CHARINDEX('</RequisitionStartDate',P.ProfileApplication) -
CHARINDEX('RequisitionStartDate',P.ProfileApplication))))
Assuming the propertyvaluesstring column exists on the same table as the calculated column, this is what you need to do:
ALTER TABLE <your table name here>
ADD ProfileApplication AS
    SUBSTRING(SUBSTRING(propertyvaluesstring,
                  CHARINDEX('ProfileApplication', propertyvaluesstring),
                  CHARINDEX('</ProfileApplication', propertyvaluesstring)
                    - CHARINDEX('ProfileApplication', propertyvaluesstring)),
              CHARINDEX('>', SUBSTRING(propertyvaluesstring,
                  CHARINDEX('ProfileApplication', propertyvaluesstring),
                  CHARINDEX('</ProfileApplication', propertyvaluesstring)
                    - CHARINDEX('ProfileApplication', propertyvaluesstring))) + 1,
              LEN(SUBSTRING(propertyvaluesstring,
                  CHARINDEX('ProfileApplication', propertyvaluesstring),
                  CHARINDEX('</ProfileApplication', propertyvaluesstring)
                    - CHARINDEX('ProfileApplication', propertyvaluesstring))))
Note that calculated columns can sometimes adversely affect performance, so I would be cautious when adding them. (Since this expression is deterministic, you could mark the column PERSISTED so the value is stored rather than recomputed on every read.)
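As for the follow-up question: the computed column can be used like any ordinary column in later queries, so the second expression above works as written in a SELECT. What SQL Server will not allow is a second computed column that references ProfileApplication, because a computed column cannot reference another computed column in the same table. A sketch of a workaround via a view (dbo.YourTable and dbo.YourTable_Parsed are placeholder names, and the expression assumes the tag carries no attributes):

-- Layer the second extraction over the computed column in a view
CREATE VIEW dbo.YourTable_Parsed AS
SELECT P.*,
       SUBSTRING(P.ProfileApplication,
                 CHARINDEX('>', P.ProfileApplication,
                           CHARINDEX('RequisitionStartDate', P.ProfileApplication)) + 1,
                 CHARINDEX('</RequisitionStartDate', P.ProfileApplication)
                   - CHARINDEX('>', P.ProfileApplication,
                               CHARINDEX('RequisitionStartDate', P.ProfileApplication)) - 1
       ) AS RequisitionStartDate
FROM dbo.YourTable AS P;

Other queries can then read RequisitionStartDate from the view instead of repeating the nested SUBSTRING/CHARINDEX expression.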
I have an exercise on triggers in PostgreSQL.
I have 2 tables: "commande" and "ligne_commande".
In the table "ligne_commande" there are 4 columns:
codecommande - codeligne - quantité - codepdt
In the table "commande" there are 6 columns:
codecommande - datecommande - montantht - codeclient - codevendeur - coderemise
I need to create a trigger that, for each modification/insertion/deletion in ligne_commande, calculates the amount of an order (montantht in the table commande).
I already have the following function that calculates the amount of an order (made in a previous exercise):
CREATE OR REPLACE FUNCTION verifmontantht(VARCHAR(10)) RETURNS boolean AS '
DECLARE
verifmontantht commande.montantht%TYPE;
BEGIN
SELECT INTO verifmontantht (SUM(produit.prixpdt * ligne_commande.quantite) * (100 - COALESCE(commande.coderemise,0))/100)::numeric(7,2) AS "Montant vérifié"
FROM commande JOIN ligne_commande ON commande.codecommande = ligne_commande.codecommande
JOIN produit ON ligne_commande.codepdt = produit.codepdt
WHERE commande.codecommande = $1
GROUP BY commande.codecommande, commande.datecommande, commande.montantht, commande.coderemise
ORDER BY commande.codecommande;
END;
'
LANGUAGE 'plpgsql';
I can't find a way to make a trigger that can use this function. Or should I create another function, since this one doesn't have RETURNS trigger AS $..$?
Do you have any advice that could help me? I'm really stuck on this one..
Thank you very much for your help :)
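One way to approach this, as a sketch only (the names maj_montantht and trg_maj_montantht are made up, and produit(codepdt, prixpdt) is assumed from the function above): a trigger function must be declared RETURNS trigger and takes no SQL-level arguments, so instead of reusing verifmontantht directly, write a small wrapper that recomputes the total for the affected order and attach it to ligne_commande:

CREATE OR REPLACE FUNCTION maj_montantht() RETURNS trigger AS $$
DECLARE
    v_code commande.codecommande%TYPE;
BEGIN
    -- OLD is not defined on INSERT, and NEW is not defined on DELETE
    IF TG_OP = 'DELETE' THEN
        v_code := OLD.codecommande;
    ELSE
        v_code := NEW.codecommande;
    END IF;

    UPDATE commande c
    SET montantht = (
        (SELECT COALESCE(SUM(p.prixpdt * lc.quantite), 0)
         FROM ligne_commande lc
         JOIN produit p ON p.codepdt = lc.codepdt
         WHERE lc.codecommande = v_code)
        * (100 - COALESCE(c.coderemise, 0)) / 100
    )::numeric(7,2)
    WHERE c.codecommande = v_code;

    RETURN NULL;  -- the return value of an AFTER row-level trigger is ignored
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER trg_maj_montantht
AFTER INSERT OR UPDATE OR DELETE ON ligne_commande
FOR EACH ROW EXECUTE PROCEDURE maj_montantht();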
In Python 3+, I want to insert values from a dictionary (or pandas DataFrame) into a database. I have opted for psycopg2 with a Postgres database.
The problem is that I cannot figure out the proper way to do this. I can easily concatenate a SQL string to execute, but the psycopg2 documentation explicitly warns against this. Ideally I wanted to do something like this:
cur.execute("INSERT INTO table VALUES (%s);", dict_data)
and hoped that the execute could figure out that the keys of the dict match the columns in the table. This did not work. From the examples in the psycopg2 documentation I arrived at this approach:
cur.execute("INSERT INTO table (" + ", ".join(dict_data.keys()) + ") VALUES (" + ", ".join(["%s" for pair in dict_data]) + ");", dict_data)
from which I get a
TypeError: 'dict' object does not support indexing
What is the most Pythonic way of inserting a dictionary into a table with matching column names?
Two solutions (note that AsIs performs no escaping, so the dictionary keys must come from a trusted source):

from psycopg2.extensions import AsIs

d = {'k1': 'v1', 'k2': 'v2'}

# Solution 1: build the column list with AsIs, pass the values as a tuple
insert = 'insert into table (%s) values %s'
l = [(c, v) for c, v in d.items()]
columns = ','.join([t[0] for t in l])
values = tuple([t[1] for t in l])
cursor = conn.cursor()
print(cursor.mogrify(insert, ([AsIs(columns)] + [values])))

# Solution 2: named %(key)s placeholders, values looked up by key
keys = d.keys()
columns = ','.join(keys)
values = ','.join(['%({})s'.format(k) for k in keys])
insert = 'insert into table ({0}) values ({1})'.format(columns, values)
print(cursor.mogrify(insert, d))
Output:
insert into table (k2,k1) values ('v2', 'v1')
insert into table (k2,k1) values ('v2','v1')
I sometimes run into this issue, especially with respect to JSON data, which I naturally want to deal with as a dict. Very similar... but maybe a little more readable?
def do_insert(rec: dict):
    cols = rec.keys()
    cols_str = ','.join(cols)
    vals = [rec[k] for k in cols]
    vals_str = ','.join(['%s' for i in range(len(vals))])
    sql_str = """INSERT INTO some_table ({}) VALUES ({})""".format(cols_str, vals_str)
    cur.execute(sql_str, vals)
I typically call this type of thing from inside an iterator, usually wrapped in a try/except. Either the cursor (cur) is already defined in an outer scope, or one can amend the function signature and pass a cursor instance in; I rarely insert just a single row. Like the other solutions, this allows for missing cols/values provided the underlying schema allows for it too. As long as the dict underlying the keys view is not modified while the insert is taking place, there is no need to specify keys by name, since the values will be ordered as they are in the keys view.
[Suggested answer/workaround - better answers are appreciated!]
After some trial and error I got the following to work:
sql = "INSERT INTO table (" + ", ".join(dict_data.keys()) + ") VALUES (" + ", ".join(["%("+k+")s" for k in dict_data]) + ");"
This gives the sql string
"INSERT INTO table (k1, k2, ... , kn) VALUES (%(k1)s, %(k2)s, ... , %(kn)s);"
which may be executed by
with psycopg2.connect(database='deepenergy') as con:
    with con.cursor() as cur:
        cur.execute(sql, dict_data)
Pros/cons?
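One more option, not covered in the answers above (and requiring psycopg2 >= 2.7): the psycopg2.sql module composes identifiers and placeholders safely, so column names are quoted properly instead of being concatenated as raw strings. A sketch, with insert_dict as a made-up helper name:

from psycopg2 import sql

def insert_dict(cur, table_name, rec):
    # sql.Identifier quotes the table and column names;
    # sql.Placeholder(k) renders as the named placeholder %(k)s
    query = sql.SQL("INSERT INTO {} ({}) VALUES ({})").format(
        sql.Identifier(table_name),
        sql.SQL(", ").join(sql.Identifier(k) for k in rec),
        sql.SQL(", ").join(sql.Placeholder(k) for k in rec),
    )
    cur.execute(query, rec)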
Using %(name)s placeholders may solve the problem:
dict_data = {'key1':val1, 'key2':val2}
cur.execute("""INSERT INTO table (field1, field2)
VALUES (%(key1)s, %(key2)s);""",
dict_data)
You can find the usage in the psycopg2 docs, under "Passing parameters to SQL queries".
Here is another solution, inserting a dictionary directly.
Product Model (has the following database columns)
name
description
price
image
digital - (defaults to False)
quantity
created_at - (defaults to current date)
Solution:
data = {
    "name": "product_name",
    "description": "product_description",
    "price": 1,
    "image": "https",
    "quantity": 2,
}

cur = conn.cursor()
cur.execute(
    "INSERT INTO products (name,description,price,image,quantity) "
    "VALUES(%(name)s, %(description)s, %(price)s, %(image)s, %(quantity)s)", data
)
conn.commit()
conn.close()
Note: the columns to be inserted are specified in the execute statement: .. INTO products (column names to be filled) VALUES ..., data <- the dictionary. Because the VALUES clause uses named %(key)s placeholders, the dictionary keys do not need to be in any particular order.
Is there an alias for the "old value" in ON CONFLICT DO UPDATE?
My real life problem is
INSERT INTO art.validterm (namespace,term,X,info)
SELECT namespace,term,array_agg(Xi), 'etc'
FROM term_raw_Xs
GROUP BY namespace,term
ON CONFLICT (term) DO
UPDATE SET aliases=OLD.X||EXCLUDED.X
WHERE term=EXCLUDED.term
PS: whether any "OLD" alias exists is the question. The parser only says that X is ambiguous.
Simply replacing OLD with the name of the table (in your case validterm) worked for me: in ON CONFLICT DO UPDATE, the target table's name refers to the existing row, and EXCLUDED refers to the row proposed for insertion.
My test:
DROP TABLE IF EXISTS work.term_raw;
CREATE TABLE work.term_raw
(
unique_field INT UNIQUE,
x_field TEXT
);
INSERT INTO work.term_raw VALUES (1, 'A');
INSERT INTO work.term_raw VALUES (1, 'B')
ON CONFLICT (unique_field) DO UPDATE SET x_field = term_raw.x_field || EXCLUDED.x_field;
SELECT * FROM work.term_raw;
My result:

 unique_field | x_field
--------------+---------
            1 | AB
I want to insert a new row which copies the two fields from the original row, and changes the last field to a new value. This is all done on one table.
Please excuse the table names/fields, they are very long.
Table 1 - alert_template_allocations
alert_template_allocation_id - pkey (ignored)
alert_template_allocation_io_id - (copy)
alert_template_allocation_alert_template_id - (copy)
alert_template_allocation_user_group_id - (change to a static value)
Table 2 - io
io_id - copy io_ids that belong to station 222
io_station_id - want to only copy rows where the station id = 222
My Attempt
insert into alert_template_allocations
(alert_template_allocation_io_id,
alert_template_allocation_alert_template_id,
alert_template_allocation_user_group_id)
values
(
(Select at.alert_template_allocation_io_id,
at.alert_template_allocation_alert_template_id
from alert_template_allocations at join io i on
i.io_id = at.alert_template_allocation_io_id
and i.io_station_id = 222)
, 4);
Use INSERT INTO ... SELECT syntax: a VALUES clause cannot expand a two-column subquery across its columns, but a SELECT can return the copied columns and the constant 4 together:
INSERT INTO alert_template_allocations (alert_template_allocation_io_id,
alert_template_allocation_alert_template_id,
alert_template_allocation_user_group_id)
SELECT at.alert_template_allocation_io_id,
at.alert_template_allocation_alert_template_id,
4
FROM alert_template_allocations at
JOIN io i
ON i.io_id = at.alert_template_allocation_io_id
AND i.io_station_id = 222;
I have the following heap of text:
"BundleSize,155648,DynamicSize,204800,Identifier,com.URLConnectionSample,Name,
URLConnectionSample,ShortVersion,1.0,Version,1.0,BundleSize,155648,DynamicSize,
16384,Identifier,com.IdentifierForVendor3,Name,IdentifierForVendor3,ShortVersion,
1.0,Version,1.0,".
What I'd like to do is extract data from this in the following manner:
BundleSize:155648
DynamicSize:204800
Identifier:com.URLConnectionSample
Name:URLConnectionSample
ShortVersion:1.0
Version:1.0
BundleSize:155648
DynamicSize:16384
Identifier:com.IdentifierForVendor3
Name:IdentifierForVendor3
ShortVersion:1.0
Version:1.0
All tips and suggestions are welcome.
It isn't quite clear what you need to do with this data. If you really need to process it entirely in the database (this looks like a task for your favorite scripting language instead), one option is to use hstore.
Converting records one by one is easy:
Assuming
%s =
BundleSize,155648,DynamicSize,204800,Identifier,com.URLConnectionSample,Name,URLConnectionSample,ShortVersion,1.0,Version,1.0
SELECT * FROM each(hstore(string_to_array(%s, ',')));
Output:
     key      |          value
--------------+-------------------------
 Name         | URLConnectionSample
 Version      | 1.0
 BundleSize   | 155648
 Identifier   | com.URLConnectionSample
 DynamicSize  | 204800
 ShortVersion | 1.0
If you have a table with columns exactly matching the field names (note the quotes; populate_record is case-sensitive to key names):
CREATE TABLE data (
"BundleSize" integer, "DynamicSize" integer, "Identifier" text,
"Name" text, "ShortVersion" text, "Version" text);
You can insert hstore records into it like this:
INSERT INTO data SELECT * FROM
populate_record(NULL::data, hstore(string_to_array(%s, ',')));
Things get more complicated if you have comma-separated values for more than one record.
%s = BundleSize,155648,DynamicSize,204800,Identifier,com.URLConnectionSample,Name,URLConnectionSample,ShortVersion,1.0,Version,1.0,BundleSize,155648,DynamicSize,16384,Identifier,com.IdentifierForVendor3,Name,IdentifierForVendor3,ShortVersion,1.0,Version,1.0,
You need to break the array up into chunks of number_of_fields * 2 = 12 elements first.
SELECT hstore(row) FROM (
    SELECT array_agg(str ORDER BY i) AS row FROM (
        SELECT str, row_number() OVER () AS i
        FROM unnest(string_to_array(%s, ',')) AS str
    ) AS str_sub
    GROUP BY (i - 1) / 12
) AS row_sub
WHERE array_length(row, 1) = 12;
Output:
"Name"=>"URLConnectionSample", "Version"=>"1.0", "BundleSize"=>"155648", "Identifier"=>"com.URLConnectionSample", "DynamicSize"=>"204800", "ShortVersion"=>"1.0"
"Name"=>"IdentifierForVendor3", "Version"=>"1.0", "BundleSize"=>"155648", "Identifier"=>"com.IdentifierForVendor3", "DynamicSize"=>"16384", "ShortVersion"=>"1.0"
And inserting this into the aforementioned table:
INSERT INTO data SELECT (populate_record(NULL::data, hstore(row))).* FROM ...
(the rest of the query is the same as above).
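Put together from the pieces above (nothing new, just the assembled statement):

INSERT INTO data
SELECT (populate_record(NULL::data, hstore(row))).* FROM (
    SELECT array_agg(str ORDER BY i) AS row FROM (
        SELECT str, row_number() OVER () AS i
        FROM unnest(string_to_array(%s, ',')) AS str
    ) AS str_sub
    GROUP BY (i - 1) / 12
) AS row_sub
WHERE array_length(row, 1) = 12;

As a side note (my addition, not from the original answer): on PostgreSQL 9.4 or later, the row numbering can come from WITH ORDINALITY instead of a window function, which also makes the ordering explicit:

SELECT hstore(array_agg(str ORDER BY i))
FROM unnest(string_to_array(%s, ',')) WITH ORDINALITY AS t(str, i)
GROUP BY (i - 1) / 12
HAVING count(*) = 12;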