I created an entity based on a SQLite view, and I want to insert data into it using EF6. I altered the .edmx file to make EF treat the view as a table and defined a primary key on the view.
The problem is that when my program tries to insert data into the view (which of course has an INSTEAD OF INSERT trigger that inserts the data into the view's two underlying tables), I get the exception:
Store update, insert, or delete statement affected an unexpected number of rows (0). Entities may have been modified or deleted since entities were loaded. See http://go.microsoft.com/fwlink/?LinkId=472540 for information on understanding and handling optimistic concurrency exceptions.
The underlying query issued by EF is this:
Opened connection at 12/07/2019 16:58:18 +02:00
Started transaction at 12/07/2019 16:58:18 +02:00
INSERT INTO [testcasesView]([name], [deviceversion_id], [user_id], [timestamp], [comment], [static_result], [description], [precondition], [action], [expected_result], [reviewed], [automated])
VALUES (@p0, @p1, @p2, @p3, NULL, @p4, @p5, @p6, @p7, @p8, @p9, @p10);
SELECT [id]
FROM [testcasesView]
WHERE last_rows_affected() > 0 AND [id] = last_insert_rowid()
;
-- @p0: 'asd' (Type = String)
-- @p1: '67' (Type = Int64)
-- @p2: '20' (Type = Int64)
-- @p3: '1562943498' (Type = Int64)
-- @p4: 'as' (Type = String)
-- @p5: 'asd' (Type = String)
-- @p6: 'asd' (Type = String)
-- @p7: 'asd' (Type = String)
-- @p8: 'asd' (Type = String)
-- @p9: '0' (Type = Int64)
-- @p10: '0' (Type = Int64)
-- Executing at 12/07/2019 16:58:18 +02:00
-- Completed in 1 ms with result: SQLiteDataReader
Closed connection at 12/07/2019 16:58:18 +02:00
Disposed transaction at 12/07/2019 16:58:18 +02:00
I understand why, but I don't know whether I can solve it. Since the real insert happens inside the trigger (I'm not posting its code because it is just two simple INSERT statements), the query
SELECT [id]
FROM [testcasesView]
WHERE last_rows_affected() > 0 AND [id] = last_insert_rowid()
returns no rows, so EF thinks that no rows were affected.
Is there a statement or code that I can add to my trigger that makes EF see that the insert happened?
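For reference, the SQLite documentation notes that changes made through an INSTEAD OF trigger are not counted in the statement's change count, which is why last_rows_affected() reports 0 here. The trigger in question would look roughly like this (a hypothetical sketch; the underlying table and column names below are made up, since the real schema isn't posted):
CREATE TRIGGER testcasesView_insert
INSTEAD OF INSERT ON testcasesView
FOR EACH ROW
BEGIN
    -- hypothetical first underlying table
    INSERT INTO testcases (name, deviceversion_id, user_id, timestamp)
    VALUES (NEW.name, NEW.deviceversion_id, NEW.user_id, NEW.timestamp);
    -- hypothetical second underlying table, keyed off the first insert
    INSERT INTO testcase_details (testcase_id, description, expected_result)
    VALUES (last_insert_rowid(), NEW.description, NEW.expected_result);
END;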
I managed to create tables in Postgres but ran into issues when trying to insert values.
commands = (
    """CREATE TYPE student AS (
        name TEXT,
        id INTEGER
    )""",
    """CREATE TABLE studentclass(
        date DATE NOT NULL,
        time TIMESTAMPTZ NOT NULL,
        PRIMARY KEY (date, time),
        class student
    )""",
)
And in psycopg2:
command = (
    "INSERT INTO studentclass (date, time, student) VALUES (%s, %s, ROW(%s, %s)::student)"
)
student_rec = ("John", 1)
record_to_insert = ("2020-05-21", "2020-05-21 08:10:00", student_rec)
cursor.execute(command, record_to_insert)
When executed, I get an error about incorrect arguments, and if I hard-code the student value inside the INSERT statement, it complains about an unrecognized column student.
Please advise.
There are two issues: first, the column name is class, not student; second, psycopg2 adapts a Python tuple to a composite type automatically, so you can pass the tuple directly.
So you can do:
insert_sql = "INSERT INTO studentclass (date, time, class) VALUES (%s,%s,%s)"
student_rec = ("John", 1)
record_to_insert = ("2020-05-21", "2020-05-21 08:10:00", student_rec)
cur.execute(insert_sql, record_to_insert)
con.commit()
select * from studentclass ;
date | time | class
------------+-------------------------+----------
05/21/2020 | 05/21/2020 08:10:00 PDT | (John,1)
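As a follow-up, individual fields of the composite column can be read back with PostgreSQL's (column).field syntax, for example:
select (class).name, (class).id from studentclass;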
I am new to Entity Framework.
When I review the T-SQL query generated by Entity Framework, it shows a commented-out line for each parameter with its type and value information, e.g.
-- p__linq__0: 'ABC' (Type = String, Size = 100)
I am trying to re-execute these generated queries. Is there a way/tool to do so? Currently, I have to manually rewrite the query in order to execute it in SSMS.
[original generated text from EF]
SELECT [Books].[BookId] AS [BookId]
FROM [dbo].[Books]
WHERE [dbo].[Books].[Name] = @p__linq__0
-- p__linq__0: 'ABC' (Type = String, Size = 100)
[after manual re-formatting]
DECLARE @p__linq__0 nvarchar(100)
SET @p__linq__0 = 'ABC'
SELECT [Books].[BookId] AS [BookId]
FROM [dbo].[Books]
WHERE [dbo].[Books].[Name] = @p__linq__0
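One alternative to the manual DECLARE/SET rewrite is to wrap the generated query in sp_executesql, which accepts the parameter declarations and values directly (a sketch based on the query above):
EXEC sp_executesql
    N'SELECT [Books].[BookId] AS [BookId]
      FROM [dbo].[Books]
      WHERE [dbo].[Books].[Name] = @p__linq__0',
    N'@p__linq__0 nvarchar(100)',
    @p__linq__0 = N'ABC';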
I have a table that I created in PostgreSQL:
CREATE TABLE issuer(
    cik char(10) NOT NULL,
    issuer_name char(150) NOT NULL,
    trading_symbol char(10) NOT NULL,
    SIC char(6) NOT NULL,
    date_added timestamp NULL DEFAULT CURRENT_TIMESTAMP,
    CONSTRAINT issuer_pk PRIMARY KEY (cik)
);
I am trying to either update a row if it exists or insert it if it doesn't.
I have searched the documentation on how to make this work, but I am baffled by the errors I get.
I have a function that I call like this:
io = postgres_update_issuer(con, cur, cik, coname, ticker, '')
When I call this function, Python goes into threading code and then quits.
Here is the function I call:
def postgres_update_issuer(conn, cur, issuer_cik, name, ticker, sic):
    sql = """
        INSERT INTO issuer (cik, issuer_name, trading_symbol, SIC)
        VALUES (%s, %s, %s, %s)
        ON CONFLICT (cik)
        DO UPDATE SET
            (issuer_name, trading_symbol, SIC)
            = (EXCLUDED.issuer_name, EXCLUDED.trading_symbol, EXCLUDED.SIC)
        ;"""
    try:
        # data = (issuer_cik, name, ticker, sic)
        cur.execute(sql, (issuer_cik, name, ticker, sic))
        return True
    except (Exception, psycopg2.DatabaseError) as error:
        print(error)
When I change the function to this, I get the "couldn't move all fields" error message:
def postgres_update_issuer(conn, cur, issuer_cik, name, ticker, sic):
    sql = """
        INSERT INTO issuer (cik, issuer_name, trading_symbol, SIC)
        VALUES (%s)
        ON CONFLICT (cik)
        DO UPDATE SET
            (issuer_name, trading_symbol, SIC)
            = (EXCLUDED.issuer_name, EXCLUDED.trading_symbol, EXCLUDED.SIC)
        ;"""
    try:
        data = (issuer_cik, name, ticker, sic)
        cur.execute(sql, (data))
        return True
    except (Exception, psycopg2.DatabaseError) as error:
        print(error)
What is the correct way to do this? I am using Python 3.6, psycopg2, and PostgreSQL 10.
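For what it's worth, the upsert SQL in the first version is valid PostgreSQL and can be sanity-checked in psql with literal values before wiring it back up to psycopg2 (the values below are made up for illustration):
INSERT INTO issuer (cik, issuer_name, trading_symbol, SIC)
VALUES ('0000320193', 'Apple Inc.', 'AAPL', '3571')
ON CONFLICT (cik)
DO UPDATE SET (issuer_name, trading_symbol, SIC)
    = (EXCLUDED.issuer_name, EXCLUDED.trading_symbol, EXCLUDED.SIC);
If that works in psql, the remaining problem is on the Python side (how the parameters are passed and whether the transaction is committed).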
I'm trying to insert records as part of implementing an SCD2 on Redshift, but I get an error.
The target table's DDL is
CREATE TABLE ditemp.ts_scd2_test (
id INT
,md5 CHAR(32)
,record_id BIGINT IDENTITY
,from_timestamp TIMESTAMP
,to_timestamp TIMESTAMP
,file_id BIGINT
,party_id BIGINT
)
This is the insert statement:
INSERT
INTO ditemp.TS_SCD2_TEST(id, md5, from_timestamp, to_timestamp)
SELECT TS_SCD2_TEST_STAGING.id
,TS_SCD2_TEST_STAGING.md5
,from_timestamp
,to_timestamp
FROM (
SELECT '20150901 16:34:02' AS from_timestamp
,CASE
WHEN last_record IS NULL
THEN '20150901 16:34:02'
ELSE '39991231 11:11:11.000'
END AS to_timestamp
,CASE
WHEN rownum != 1
AND atom.id IS NOT NULL
THEN 1
WHEN atom.id IS NULL
THEN 1
ELSE 0
END AS transfer
,stage.*
FROM (
SELECT id
FROM ditemp.TS_SCD2_TEST_STAGING
WHERE file_id = 2
GROUP BY id
HAVING count(*) > 1
) AS scd2_count_ge_1
INNER JOIN (
SELECT row_number() OVER (
PARTITION BY id ORDER BY record_id
) AS rownum
,stage.*
FROM ditemp.TS_SCD2_TEST_STAGING AS stage
WHERE file_id IN (2)
) AS stage
ON (scd2_count_ge_1.id = stage.id)
LEFT JOIN (
SELECT max(rownum) AS last_record
,id
FROM (
SELECT row_number() OVER (
PARTITION BY id ORDER BY record_id
) AS rownum
,stage.*
FROM ditemp.TS_SCD2_TEST_STAGING AS stage
)
GROUP BY id
) AS last_record
ON (
stage.id = last_record.id
AND stage.rownum = last_record.last_record
)
LEFT JOIN ditemp.TS_SCD2_TEST AS atom
ON (
stage.id = atom.id
AND stage.md5 = atom.md5
AND atom.to_timestamp > '20150901 16:34:02'
)
) AS TS_SCD2_TEST_STAGING
WHERE transfer = 1
To keep things short: I am trying to insert 20150901 16:34:02 into from_timestamp and 39991231 11:11:11.000 into to_timestamp, and I get:
ERROR: 42804: column "from_timestamp" is of type timestamp without time zone but expression is of type character varying
Can anyone please suggest how to solve this issue?
Postgres isn't recognizing 20150901 16:34:02 (your input) as a valid time/date format, so it assumes it's a string.
Use a standard date format instead, preferably ISO 8601: 2015-09-01T16:34:02
SQLFiddle example
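Applied to the query above, that means writing the literals in ISO format (optionally with an explicit cast), for example:
SELECT '2015-09-01 16:34:02'::timestamp AS from_timestamp,
       '3999-12-31 11:11:11'::timestamp AS to_timestamp;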
In case someone ends up here trying to insert a timestamp or a timestamptz into PostgreSQL from a variable in Groovy or Java through a prepared statement and getting the same error (as I did): I managed to do it by setting the connection property stringtype to "unspecified". According to the documentation:
Specify the type to use when binding PreparedStatement parameters set
via setString(). If stringtype is set to VARCHAR (the default), such
parameters will be sent to the server as varchar parameters. If
stringtype is set to unspecified, parameters will be sent to the
server as untyped values, and the server will attempt to infer an
appropriate type. This is useful if you have an existing application
that uses setString() to set parameters that are actually some other
type, such as integers, and you are unable to change the application
to use an appropriate method such as setInt().
Properties props = [user : "user", password: "password",
driver:"org.postgresql.Driver", stringtype:"unspecified"]
def sql = Sql.newInstance("url", props)
With this property set, you can insert a timestamp as a string variable without the error raised in the question title. For instance:
String myTimestamp = Instant.now().toString()
sql.execute("""INSERT INTO MyTable (MyTimestamp) VALUES (?)""",
    [myTimestamp.toString()])
This way, the type of the timestamp (from a String) is inferred correctly by PostgreSQL. I hope this helps.
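An alternative that avoids changing the connection properties is to cast the string parameter explicitly in the SQL, so only that one statement is affected, e.g.:
INSERT INTO MyTable (MyTimestamp) VALUES (?::timestamp);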
Inside apache-tomcat-9.0.7/conf/server.xml
Add "?stringtype=unspecified" to the end of the JDBC URL.
For example:
<GlobalNamingResources>
<Resource name="jdbc/??" auth="Container" type="javax.sql.DataSource"
...
url="jdbc:postgresql://127.0.0.1:5432/Local_DB?stringtype=unspecified"/>
</GlobalNamingResources>
I want one trigger to handle both updates and inserts. Most of the SQL actions in the trigger apply to both. The only exception is the fields I use to record the date and username of an insert versus an update. This is what I have, but the updates of those tracking fields are not firing correctly. If I insert a new record, CreatedBy, CreatedOn, LastEditedBy, and LastEditedOn are all populated, with LastEditedOn one second after CreatedOn (which I don't want to happen). When I update the record, only LastEditedBy and LastEditedOn change (which is correct). I'm including my full trigger for reference:
SET ANSI_NULLS ON;
GO
SET QUOTED_IDENTIFIER ON;
GO
-- =================================================================================
-- Author: Paul J. Scipione
-- Create date: 2/15/2012
-- Update date: 6/5/2012
-- Description: To concatenate several fields into a set formatted UnitDescription,
-- to total Span & Loop footages, to set appropriate AcctCode, & track
-- user inserts
-- =================================================================================
IF OBJECT_ID('ProcessCable', 'TR') IS NOT NULL
DROP TRIGGER ProcessCable
GO
CREATE TRIGGER ProcessCable
ON Cable
AFTER INSERT, UPDATE
AS
BEGIN
SET NOCOUNT ON;
-- IF TRIGGER_NESTLEVEL() > 1 RETURN
IF ((SELECT TRIGGER_NESTLEVEL()) > 1 )
RETURN
ELSE
BEGIN
-- record user and date of insert or update
IF EXISTS (SELECT * FROM DELETED)
UPDATE Cable SET LastEditedOn = getdate(), LastEditedBy = REPLACE(user_name(), 'GRTINET\', '')
ELSE IF NOT EXISTS (SELECT * FROM DELETED)
UPDATE Cable SET CreatedOn = getdate(), CreatedBy = REPLACE(user_name(), 'GRTINET\', '')
-- reset Suffix if applicable
UPDATE Cable SET Suffix = NULL WHERE Suffix = 'n/a'
-- create UnitDescription value
UPDATE Cable SET UnitDescription =
isnull (Type, '') +
isnull (CONVERT (NVARCHAR (10), Size), '') +
'-' +
isnull (CONVERT (NVARCHAR (10), Gauge), '') +
CASE
WHEN ExtraTrench IS NOT NULL AND ExtraTrench > 0 THEN
CASE
WHEN Suffix IS NULL THEN 'TE' + '(' + CONVERT (NVARCHAR (10), ExtraTrench) + ')'
ELSE 'TE' + '(' + CONVERT (NVARCHAR (10), ExtraTrench) + ')' + Suffix
END
ELSE isnull (Suffix, '')
END
-- convert any accidental negative numbers entered
UPDATE Cable SET Length = ABS(Length)
-- sum Length with LoopFootage into TotalFootage
UPDATE Cable SET TotalFootage = isnull(Length, 0) + isnull(LoopFootage, 0)
-- set proper AcctCode based on Type
UPDATE Cable SET AcctCode =
CASE
WHEN Type IN ('SEA', 'CW', 'CJ') THEN '32.2421.2'
WHEN Type IN ('BFC', 'BJ', 'SEB') THEN '32.2423.2'
WHEN Type IN ('TIP','UF') THEN '32.2422.2'
WHEN Type = 'unknown' OR Type IS NULL THEN 'unknown'
END
WHERE AcctCode IS NULL OR AcctCode = ' '
END
END
GO
A few things jump out at me when I look at your trigger:
You are doing several additional updates rather than a single update (performance-wise, a single update would be better).
Your update statements are unconstrained (there is no JOIN to the inserted/deleted tables to limit the number of records that you perform these additional updates on).
Most of this logic feels like it should be in the application layer rather than in the database; or perhaps, in some cases, implemented differently.
Some quick examples:
Suffix of "n/a" should be removed before the row is inserted.
The absolute value of the cable length should be taken before the row is inserted (with a CHECK constraint to verify that bad data cannot be inserted).
TotalFootage should be a computed column so it is always correct (see the sketch after this list).
The Type/AcctCode relationship seems like it should be a column value in a foreign key reference.
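A sketch of the CHECK constraint and computed-column suggestions (assuming the Cable schema allows it; the actual column definitions are not shown in the question):
ALTER TABLE Cable ADD CONSTRAINT CK_Cable_Length CHECK (Length >= 0);
-- if TotalFootage already exists as a regular column, it has to be dropped first
ALTER TABLE Cable DROP COLUMN TotalFootage;
ALTER TABLE Cable ADD TotalFootage AS (ISNULL(Length, 0) + ISNULL(LoopFootage, 0));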
But ultimately, I think the reason you are seeing the unexpected dates is the unconstrained updates. Without addressing any of the other concerns I brought up above, the statements that set the audit fields should be more like this:
UPDATE Cable SET LastEditedOn = getdate(), LastEditedBy = REPLACE(user_name(), 'GRTINET\', '')
FROM Cable
JOIN deleted on Cable.PrimaryKeyColumn = deleted.PrimaryKeyColumn
UPDATE Cable SET CreatedOn = getdate(), CreatedBy = REPLACE(user_name(), 'GRTINET\', '')
FROM Cable
JOIN inserted on Cable.PrimaryKeyColumn = inserted.PrimaryKeyColumn
LEFT JOIN deleted on Cable.PrimaryKeyColumn = deleted.PrimaryKeyColumn
WHERE deleted.PrimaryKeyColumn IS NULL