I'm using a Postgres database and need to update my jsonb data.
When the key inside the jsonb data doesn't have spaces, it works well. But when I try to update a key that has spaces, the update doesn't happen.
update forms
set data = data || '{"RAZÃO SOCIAL":"aaaA","SIGLA":"ASCOM SOUREEEE"}'
where id = 10;
In that case, my query updates "SIGLA" but doesn't update "RAZÃO SOCIAL".
Any idea how to handle this issue? I already tried replacing the whitespace with %20, with no success.
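For reference, a rough sketch of two things one might try, assuming the same forms table: addressing the spaced key explicitly with jsonb_set (available in the same Postgres versions that support || on jsonb), and listing the keys actually stored, in case the stored key differs from "RAZÃO SOCIAL" in accents or trailing spaces:
-- Untested sketch: set the spaced key directly via a text[] path element.
UPDATE forms
SET data = jsonb_set(data, '{"RAZÃO SOCIAL"}', '"aaaA"')
WHERE id = 10;
-- List the top-level keys stored for that row, to verify the exact key name.
SELECT jsonb_object_keys(data) FROM forms WHERE id = 10;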
Related
How to delete an attribute's value in a table in Oracle 10g using a SQL command?
An example of the problem: delete the email_id of employee James. I tried this code
DELETE FROM EMPLOYEE01 WHERE MAIL_ID = 'james@gmail.com'
but after running it, the output showed that the entire row was deleted, whereas I only want the email id removed from that particular row.
If you only want to update a row, use UPDATE:
UPDATE EMPLOYEE01
SET MAIL_ID = NULL
WHERE MAIL_ID = 'james@gmail.com'
Here, SET MAIL_ID = NULL will remove the value from the MAIL_ID field for the record identified by the WHERE clause.
I have an insert that records data from a web form into my table. I'd like to run an update immediately after the insert that reads the newly inserted record, finds all of its null fields, and updates those fields with the string --.
The data type of all my fields is varchar.
I have 20+ forms, each with 100+ fields, so I'm looking for an approach smart enough to find and update the fields that have null values without explicitly writing out each field in the update statement; that would just take way too long.
Does anyone know of a way to simply read which fields have null values and update any null fields to a string, in my case --?
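What I have in mind is something along these lines: a rough, untested sketch, assuming SQL Server and a hypothetical dbo.FormData table with an identity key FormID, that builds the column list from INFORMATION_SCHEMA instead of writing it out by hand:
-- Build "col = ISNULL(col, '--')" for every varchar column, then run it for the new row.
DECLARE @NewFormID int = SCOPE_IDENTITY();  -- id of the row just inserted in this scope
DECLARE @sql nvarchar(max) = N'';
SELECT @sql = @sql + QUOTENAME(c.COLUMN_NAME) + N' = ISNULL(' + QUOTENAME(c.COLUMN_NAME) + N', ''--''), '
FROM INFORMATION_SCHEMA.COLUMNS AS c
WHERE c.TABLE_SCHEMA = 'dbo'
  AND c.TABLE_NAME = 'FormData'
  AND c.DATA_TYPE = 'varchar';
SET @sql = N'UPDATE dbo.FormData SET ' + LEFT(@sql, LEN(@sql) - 1) + N' WHERE FormID = @id;';
EXEC sp_executesql @sql, N'@id int', @id = @NewFormID;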
If you can't alter your existing code, I would go with an insert trigger, so after every insert you can check for null values and update them like below:
create trigger triggername
on tablename
after insert
as
begin
    update t
    set t.col1 = isnull(i.col1, '--'),
        t.col2 = isnull(i.col2, '--')
        -- ...rest of the columns
    from tablename t
    join inserted i
        on i.matchingcol = t.matchingcol
end
The issue with the above approach is that you will have to touch all inserted rows. I would still go with this approach, since filtering many columns with many OR clauses is not good for performance.
If it is just for display purposes, I would go with a view.
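Something along these lines, with hypothetical column names standing in for the real ones:
-- A view that substitutes '--' for NULLs at read time, leaving the stored data untouched.
create view dbo.FormDataDisplay
as
select FormID,
       isnull(col1, '--') as col1,
       isnull(col2, '--') as col2
       -- ...rest of the columns
from dbo.FormData;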
Instead of an update after the insert, you may try changing the table structure.
Set the default value of the columns to --. If no value is provided for a column during the insert, -- will be inserted automatically.
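For example, assuming SQL Server and the hypothetical dbo.FormData table again (note that a default only applies when the column is omitted from the INSERT, not when an explicit NULL is supplied):
-- Add a '--' default to one column; repeat per column.
alter table dbo.FormData
    add constraint DF_FormData_col1 default '--' for col1;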
How can I ignore duplicate rows while storing a dataframe in a postgres DB with Blaze's Odo?
For example, I store the first 3 rows like this:
>>> odo(df[:3], 'postgresql:///my_db::my_table')
my_table has a column ID as its primary key. If I add a few more rows, this time including the previous last row, I want to skip that row and add the others instead of getting that IntegrityError.
>>> odo(df[2:5], 'postgresql:///my_db::my_table')
IntegrityError: (psycopg2.IntegrityError) duplicate key value violates unique constraint ...
How can I do that? Loading ID values from DB and checking for duplicates seems expensive to me if the DB has millions of rows. Is there any better alternative?
Something like this:
INSERT...ON DUPLICATE KEY IGNORE
Blaze: 0.8.3, Postgres: 9.4.4, Psycopg2: 2.6.1
The data model for odo only supports appending, not merging. You'll need to either remove duplicates before passing it through odo, or use the database to remove the duplicates. Try adding an auto-increment field, and set it as the primary key. This will fix your IntegrityError problems on insertion, and you can remove the duplicates after that.
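A sketch of that clean-up step, assuming the table has gained a serial surrogate key named row_id (hypothetical) while ID still identifies the logical duplicates:
-- Keep the earliest copy of each ID and delete the later ones.
DELETE FROM my_table a
USING my_table b
WHERE a.id = b.id
  AND a.row_id > b.row_id;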
I'm currently working with Firebird and attempting to utilize the UPDATE OR INSERT functionality in order to solve a particular new case within our software. Basically, we need to pull data from a source, put it into an existing table, and then update that data at regular intervals, adding any new references. The source is not a database, so it isn't a matter of using MERGE to link two tables (unless we make a separate table and then merge it, but that seems unnecessary).
The problem is that we cannot use the primary key of the existing table for matching, because we need to match on the ID we get from the source. We can use the MATCHING clause without trouble, but the issue is that the primary key of the existing table gets updated to the next key every time, because it has to be in the query in case an insert happens. Here is the query (along with the C# parameter additions) to demonstrate the problem.
UPDATE OR INSERT INTO existingtable (PrimaryKey, UniqueSourceID, Data) VALUES (?,?,?) MATCHING (UniqueSourceID);
this.AddInParameter("PrimaryKey", FbDbType.Integer, itemID);
this.AddInParameter("UniqueSourceID", FbDbType.Integer, source.id);
this.AddInParameter("Data", FbDbType.SmallInt, source.data);
The problem is that every time the UPDATE fires, the primary key also changes to the next incremented key. I need a way to leave the primary key alone when updating, but still supply it when inserting.
Do not generate the primary key manually; let a trigger generate it when necessary:
CREATE SEQUENCE seq_existingtable;
SET TERM ^ ;
CREATE TRIGGER Gen_PK FOR existingtable
ACTIVE BEFORE INSERT
AS
BEGIN
IF(NEW.PrimaryKey IS NULL)THEN NEW.PrimaryKey = NEXT VALUE FOR seq_existingtable;
END^
SET TERM ; ^
Now you can omit the PK field from your statement:
UPDATE OR INSERT INTO existingtable (UniqueSourceID, Data) VALUES (?,?) MATCHING (UniqueSourceID);
and when the statement performs an insert, the trigger will take care of creating the PK. If you need to know the generated PK, use the RETURNING clause of the UPDATE OR INSERT statement.
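For example, a sketch along the lines of the statement above, assuming UniqueSourceID matches at most one row:
UPDATE OR INSERT INTO existingtable (UniqueSourceID, Data)
VALUES (?, ?)
MATCHING (UniqueSourceID)
RETURNING PrimaryKey;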
I am programming for iPhone and I am using a SQLite DB for my app. I have a situation where I want to insert records into a table only if they don't already exist; otherwise the records should not get inserted.
How can I do this? Can anybody please suggest a suitable query for this?
Thank you one and all,
Looking at SQLite's INSERT page (http://www.sqlite.org/lang_insert.html), you can do it using the following syntax:
INSERT OR IGNORE INTO tablename ....
Example
INSERT OR IGNORE INTO tablename(id, value, data) VALUES(2, 4562, 'Sample Data');
Note: you need to have a KEY on the table columns that uniquely identifies a row. Only when a duplicate KEY is about to be inserted will INSERT OR IGNORE skip inserting a new row.
In the above example, if you have a KEY on id, then another row with id = 2 will not be inserted.
If instead you have a KEY on the combination of id and value, then another row with both id = 2 and value = 4562 will not be inserted.
In short, there must be a key that uniquely identifies a row; only then will the database know there is a duplicate which should not be allowed.
Otherwise, if you do not have a KEY, you would need to go the route of doing a SELECT and then checking whether a row is already there. But even then, whatever column condition you are using, you can add those columns as a KEY to the table and simply use INSERT OR IGNORE.
In SQLite it is not possible to ALTER the table and add a constraint like UNIQUE or PRIMARY KEY. For that you need to recreate the table. Look at this FAQ on sqlite.org:
http://sqlite.org/faq.html#q11
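A minimal sketch of the whole idea, with a hypothetical table where id is the uniquely identifying KEY:
-- id is the PRIMARY KEY, so a second insert with the same id is silently skipped.
CREATE TABLE tablename (
    id INTEGER PRIMARY KEY,
    value INTEGER,
    data TEXT
);
INSERT OR IGNORE INTO tablename (id, value, data) VALUES (2, 4562, 'Sample Data');
INSERT OR IGNORE INTO tablename (id, value, data) VALUES (2, 9999, 'Ignored');  -- duplicate id, no row added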
Hello Sankar, what you can do is perform a SELECT query for the record you wish to insert and then check the response; via SQLite's SQLITE_NOTFOUND flag you can check whether that record already exists or not. If it doesn't exist you can insert it, otherwise you skip the insert.
I hope this is helpful.
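The same check-then-insert idea can also be expressed in a single statement; a sketch using the hypothetical table above:
-- Insert the row only when no row with that id exists yet.
INSERT INTO tablename (id, value, data)
SELECT 2, 4562, 'Sample Data'
WHERE NOT EXISTS (SELECT 1 FROM tablename WHERE id = 2);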