I have a piece of code that inserts into a table. Simply put, I have a table with this structure:
Table1:
Id int PK
Type tinyint
Status tinyint
After digging around in our PROD environment, I observed that EF might be intermittently defaulting the value to 255 when the value should be in the range 1-3. I understand 255 is the max value for tinyint in SQL Server, but I have no defaults in either the table definition or the EDMX.
Usage:
//OUTAGE_TYPE
public enum OUTAGE_TYPE
{
Unknown = -1,
SwitchingPlan = 1,
PlannedIncident = 2,
UnplannedIncident = 3
}
//CREATE INSTANCE
var outage = new Outage
{
OutageTypeId = (byte)OUTAGE_TYPE.SwitchingPlan,
OutageStatusId = (byte)OUTAGE_STATUS.Approved,
IncidentRID = incident.RID,
WorkRID = incident.WorkRID,
ETR = incident.ETR,
DateInserted = Time.Now(),
DateUpdated = Time.Now()
};
Captured SQL:
INSERT [dbo].[Outage]([OutageTypeId], [OutageStatusId], [IncidentRID], [WorkRID], [ETR], [DateInserted], [DateUpdated])
VALUES (@0, @1, @2, NULL, @3, @4, @5)
SELECT [Id]
FROM [dbo].[Outage]
WHERE @@ROWCOUNT > 0 AND [Id] = scope_identity()
-- @0: '255' (Type = Byte, Size = 1)
-- @1: '1' (Type = Byte, Size = 1)
-- @2: 'INC 11000700' (Type = String, Size = 255)
-- @3: '05-08-2016 10:05:52' (Type = DateTime2)
-- @4: '05-08-2016 22:05:22' (Type = DateTime2)
-- @5: '05-08-2016 22:05:22' (Type = DateTime2)
-- Executing at 06-08-2016 00:06:00 +02:00
Error:
The INSERT statement conflicted with the FOREIGN KEY constraint "FK_Outage_OutageType". The conflict occurred in database "ONSP", table "dbo.OutageType", column 'Id'.
I was not able to reproduce this on my local machine; maybe you have experienced this before. Many thanks.
After my colleague and I dug around, we found the root cause of this problem.
First: -1 is not a valid value for a byte, and a direct (unchecked) cast will not throw an exception but will yield byte.MaxValue (255), since only the low 8 bits of the two's-complement value are kept:
void Main()
{
var type = OUTAGE_TYPE.Unknown;
var result = (byte)type;
//result = 255!!!
Console.WriteLine(result);
}
public enum OUTAGE_TYPE
{
Unknown = -1,
SwitchingPlan = 1,
PlannedIncident = 2,
UnplannedIncident = 3
}
I'm using a set of user properties on Databricks Delta tables for metadata management. The problem is that when I need to change one of those properties, I get the 'FAILED Error: The specified properties do not match the existing properties at /mnt/silver/...' error message.
Databricks documentation only states that an Exception will be raised and I didn't find any argument to force it to accept the new values.
Is it possible to just update table Properties?
Any Suggestions?
Sample Code:
query = f'''
CREATE TABLE if not exists {tableMetadataDBName}.{tableMetadataTableName}
(
... my columns ...
-- COMMON COLUMNS
,Hash string
,sourceFilename STRING
,HashWithFileDate string
,Surrogate_Key STRING
,SessionId STRING
,SessionRunDate TIMESTAMP
,Year INT GENERATED ALWAYS AS ( YEAR(fileDate))
,Month INT GENERATED ALWAYS AS ( MONTH(fileDate))
,fileDate DATE
)
USING DELTA
COMMENT '{tableDocumentationURL}'
LOCATION "{savePath}/parquet"
OPTIONS( "compression"="snappy")
PARTITIONED BY (Year, Month, fileDate )
TBLPROPERTIES ("DataStage"="{txtDataStage.upper()}"
,"Environment"="{txtEnvironment}"
,"source"="{tableMetadataSource}"
,"CreationDate" = "{tableMetadataCreationDate}"
,"CreatedBy" = "{tableMetadataCreatedBy}"
,"Project" = "{tableMetadataProject}"
,"AssociatedReports" = "{tableMetadataAssociatedReports}"
,"UpstreamDependencies" = "{tableMetadataUpstreamDependencies}"
,"DownstreamDependencies" = "{tableMetadataDownstreamDependencies}"
,"Source" = "{tableMetadataSource}"
,"PopulationFrequency" = "{tableMetadataPopulationFrequency}"
,"BusinessSubject" = "{tableMetadataBusinessSubject}"
,"JiraProject" = "{tableMetadataJiraProject}"
,"DevOpsProject" = "{tableMetadataDevOpsProject}"
,"DevOpsRepository" = "{tableMetadataDevOpsRepository}"
,"URL" = "{tableMetadataURL}") '''
spark.sql(query)
Yes, it's possible to change just properties - you need to use "ALTER TABLE [table_name] SET TBLPROPERTIES ..." for that:
query = f"""ALTER TABLE {table_name} SET TBLPROPERTIES (
'CreatedBy' = '{tableMetadataCreatedBy}'
,'Project' = '{tableMetadataProject}'
....
)"""
spark.sql(query)
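To verify the change was applied, the properties can be read back with SHOW TBLPROPERTIES; a small sketch, reusing the spark.sql pattern from above:
# Read the properties back to confirm the ALTER TABLE took effect.
spark.sql(f"SHOW TBLPROPERTIES {table_name}").show(truncate=False)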
We are benchmarking some queries to see if they will still work reliably for "a lot of" data. (1 million isn't that much to be honest, but Postgres already fails here, so it evidently is.)
Our Java code to call these queries looks something like this:
@PersistenceContext
private EntityManager em;
@Resource
private UserTransaction utx;
for (int i = 0; i < 20; i++) {
    this.utx.begin();
    for (int inserts = 0; inserts < 50_000; inserts++) {
        em.createNativeQuery(SQL_INSERT).executeUpdate();
    }
    this.utx.commit();
    for (int parameter = 0; parameter < 25; parameter++) {
        long time = System.currentTimeMillis();
        Assert.assertNotNull(this.em.createNativeQuery(SQL_SELECT).getResultList());
        System.out.println(i + " iterations \t" + parameter + "\t" + (System.currentTimeMillis() - time) + "ms");
    }
}
Or with plain JDBC:
Connection connection = //...
for (int i = 0; i < 20; i++) {
    for (int inserts = 0; inserts < 50_000; inserts++) {
        try (Statement statement = connection.createStatement()) {
            statement.execute(SQL_INSERT);
        }
    }
    for (int parameter = 0; parameter < 25; parameter++) {
        long time = System.currentTimeMillis();
        try (Statement statement = connection.createStatement()) {
            statement.execute(SQL_SELECT);
        }
        System.out.println(i + " iterations \t" + parameter + "\t" + (System.currentTimeMillis() - time) + "ms");
    }
}
The queries we tried were a simple INSERT into a table with JSON and an INSERT over two tables with about 25 rows. The SELECT has one or two JOINs and is pretty simple. One set of queries is (I had to anonymize the SQL, otherwise I wouldn't have been allowed to post it):
CREATE TABLE ts1.p (
id integer NOT NULL,
CONSTRAINT p_pkey PRIMARY KEY ("id")
);
CREATE TABLE ts1.m(
pId integer NOT NULL,
mId character varying(100) NOT NULL,
a1 character varying(50),
a2 character varying(50),
CONSTRAINT m_pkey PRIMARY KEY (pId, mId)
);
CREATE SEQUENCE ts1.seq_p;
/*
* SQL_INSERT
*/
WITH p AS (
INSERT INTO ts1.p (id)
VALUES (nextval('ts1.seq_p'))
RETURNING id AS pId
)
INSERT INTO ts1.m(pId, mId, a1, a2)
VALUES ((SELECT pId from p), 'M1', '11', '12'),
((SELECT pId from p), 'M2', '13', '14'),
/* ... about 20 to 25 rows of values */
/*
* SQL_SELECT
*/
WITH userInput (mId, a1, a2) AS (
VALUES
('M1', '11', '11'),
('M2', '12', '15'),
/* ... about "parameter" rows of values */
)
SELECT m.pId, COUNT(m.a1) AS matches
FROM userInput u
LEFT JOIN ts1.m m ON (m.mId) = (u.mId)
WHERE (m.a1 IS NOT DISTINCT FROM u.a1) AND
(m.a2 IS NOT DISTINCT FROM u.a2) OR
(m.a1 IS NULL AND m.a2 IS NULL)
GROUP BY m.pId
/* plus HAVING, additional WHERE clauses etc. according to the use case, but that just speeds up the query */
When executing, we get the following output (the values are supposed to rise steadily and linearly):
271ms
414ms
602ms
820ms
995ms
1192ms
1396ms
1594ms
1808ms
1959ms
110ms
33ms
14ms
10ms
11ms
10ms
21ms
8ms
13ms
10ms
As you can see, after some value (usually at around 300,000 to 500,000 inserts) the time needed for the query drops significantly. Sadly we can't really debug what the result is at that point (other than that it's not null), but we assume it's an empty list, because the database tables are empty.
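A minimal sketch of how the result size could at least be logged, instead of only asserting that it is not null (based on the JPA code above):
// Debug aid: print how many rows the SELECT actually returns,
// instead of only asserting that the list is not null.
List<?> rows = this.em.createNativeQuery(SQL_SELECT).getResultList();
System.out.println(i + " iterations \t" + parameter + "\t rows=" + rows.size());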
Let me repeat that: After half a million INSERTS, Postgres clears tables.
Of course that's not acceptable at all.
We tried different queries, all of easy to medium difficulty, and all produced this behavior, so we assume it's not the queries.
We thought that maybe the sequence returned a value too high for a column of type integer, so we dropped and recreated the sequence.
Once there was this exception:
org.postgresql.util.PSQLException : FEHLER: Verklemmung (Deadlock) entdeckt
Detail: Prozess 1620 wartet auf AccessExclusiveLock-Sperre auf Relation 2001098 der Datenbank 1937678; blockiert von Prozess 2480.
Which translates to:
org.postgresql.util.PSQLException : ERROR: deadlock detected
Detail: Process 1620 waits for AccessExclusiveLock on relation 2001098 of database 1937678; blocked by process 2480.
But I don't think this error has anything to do with the clearing of the table. We just tested against the wrong database, so multiple queries were run on the same table. Normally we have one database per benchmark test.
Of course it's important that we find out what the error is, so that we can decide whether there is any risk of our customers losing their data (because again, on error the database empties some table of its choice).
Postgres version: PostgreSQL 10.6, compiled by Visual C++ build 1800, 64-bit
We tried PostgreSQL 9.6.11, compiled by Visual C++ build 1800, 64-bit, too. And we never had the same problem there (even though that could just be luck, since it's not 100% reproducible).
Do you have any idea what the error is? Or how we could debug it? The entire benchmark test runs for an hour, so there is no immediate feedback.
I have an update statement that keeps failing.
Update dbo.Marker
set MarkId= case
when Ninja = 55 and cast(Tometo as varchar) = 0031A then 22
else MarkId
end
The data type of MarkId is int, Ninja is int, and Tometo is varchar(9).
This is the error I am getting:
Msg 245, Level 16, State 1, Line 48
Conversion failed when converting the varchar value '031A' to data type int.
You are missing quotes around 0031A:
Update dbo.Marker
set MarkId= case
when Ninja = 55 and cast(Tometo as varchar) = '0031A' then 22
else MarkId
end
To find the rows that cause the issue, run the query below:
select case
when Ninja = 55 and cast(Tometo as varchar) = '0031A' then 22
else MarkId
end
from dbo.Marker
then find out which rows return a non-integer value. I can't help more without having your data.
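If your SQL Server version supports it (2012 or later), TRY_CAST can find those rows directly; a small sketch:
-- Rows whose Tometo value cannot be converted to int are the ones
-- that trigger the Msg 245 conversion error.
SELECT *
FROM dbo.Marker
WHERE TRY_CAST(Tometo AS int) IS NULL
  AND Tometo IS NOT NULL;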
I'm searching for help. I have to map my Postgres 9.4 database (DB) with Hibernate 5.2; of course, it's a study task. The biggest problem is that I'm no expert in Hibernate, Java, or coding itself XD
It's a social-network DB. Mapping the DB with Hibernate is going fine.
Now I have to map a stored procedure. This procedure should find the shortest friendship path between two persons. In Postgres the procedure works fine.
These are the relevant DB tables:
For Person:
CREATE TABLE Person (
PID bigint NOT NULL,
firstName varchar(50) DEFAULT NULL,
lastName varchar(50) DEFAULT NULL,
(some more...)
PRIMARY KEY (PID)
);
And for the relationship between two persons:
CREATE TABLE Person_knows_Person (
ApID bigint NOT NULL,
BpID bigint REFERENCES Person (PID), (..)
knowsCreationDate timestamp,
PRIMARY KEY (ApID,BpID));
And this is the stored procedure, in short:
CREATE OR REPLACE FUNCTION ShortFriendshipPath(pid bigint, pid2 bigint)
RETURNS TABLE (a_pid bigint, b_pid bigint, depth integer, path2 bigint[], cycle2 boolean)
AS $$
BEGIN
RETURN QUERY
SELECT * FROM (
WITH RECURSIVE FriendshipPath(apid, bpid, depth, path, cycle) AS(
SELECT pkp.apid, pkp.bpid,1,
ARRAY[pkp.apid], false
FROM person_knows_person pkp
WHERE apid=$1 --OR bpid=$1
UNION ALL
SELECT pkp.apid, pkp.bpid, fp.depth+1, path || pkp.apid,
pkp.apid = ANY(path)
FROM person_knows_person pkp, FriendshipPath fp
WHERE pkp.apid = fp.bpid AND NOT cycle)
SELECT *
FROM FriendshipPath WHERE bpid=$2) AS OKOK
UNION
SELECT * FROM (
WITH RECURSIVE FriendshipPath(apid, bpid, depth, path, cycle) AS(
SELECT pkp.apid, pkp.bpid,1,
ARRAY[pkp.apid], false
FROM person_knows_person pkp
WHERE apid=$2 --OR bpid=$1
UNION ALL
SELECT pkp.apid, pkp.bpid, fp.depth+1, path || pkp.apid,
pkp.apid = ANY(path)
FROM person_knows_person pkp, FriendshipPath fp
WHERE pkp.apid = fp.bpid AND NOT cycle)
SELECT *
FROM FriendshipPath WHERE bpid=$1) AS YOLO
ORDER BY depth ASC LIMIT 1;
END;
$$ LANGUAGE 'plpgsql' ;
(Sorry for posting so much code, but it covers both directions, and I didn't want to introduce copy+reduce mistakes before posting ^^)
The call in Postgres, for example:
SELECT * FROM ShortFriendshipPath(10995116277764, 94);
gives me this Output:
(screenshot of the result rows)
I searched the internet for help and found 3 solutions for calling it:
direct SQL call
call with NamedQuery, and
mapping via XML
(favorite found here)
I failed with all of them XD
I favor the 1st solution, with this call in a session:
Session session = HibernateUtility.getSessionfactory().openSession();
Transaction tx = null;
try {
tx = session.beginTransaction();
System.out.println("Please insert a second PID:");
Scanner scanner = new Scanner(System.in);
long pid2 = Long.parseLong(scanner.nextLine());
// Insert of second ID
Query query2 = session.createQuery("FROM " + Person.class.getName() + " WHERE pid = :pid ");
query2.setParameter("pid", pid2);
List<Person> listB = ((org.hibernate.Query) query2).list();
int cnt1 = 0;
while (cnt1 < listB.size()) {
Person pers1 = listB.get(cnt1++);
pid2 = pers1.getPid();
}
// Query call directly:
Query querySP = session.createSQLQuery("SELECT a_pid,path2 FROM ShortFriendshipPath(" + pid + "," + pid2 + ")");
List <Object[]> list = ((org.hibernate.Query) querySP).list();
for (int i = 0; i < list.size(); i++) {
    Personknowsperson friendship = (Personknowsperson) list.get(i);
}
} catch (Exception e) { (bla..)}
} finally { (bla....) }
Then I get the following error:
javax.persistence.PersistenceException:
org.hibernate.MappingException: No Dialect mapping for JDBC type: 2003
(..blabla...)
I understand why: my output is not of type Personknowsperson. I found an answer saying that I have to tell Hibernate what the correct format is, and that I should use a 'UserType'. So I tried to find an explanation of how to create my UserType, but I found nothing that I understand. Second problem: I'm not sure what I should use for the bigint[] (path2). You see, I'm an expert -.-
Then I got the idea to try the 3rd solution. But the first problem was where to put the XML stuff, because my output is not a table. So I tried it in the .cfg.xml, but then Hibernate says:
Caused by: java.lang.IllegalArgumentException: org.hibernate.internal.util.config.ConfigurationException: Unable to perform unmarshalling at line number -1 and column -1 in RESOURCE hibernate.cfg.xml. Message: cvc-complex-type.2.4.a: Ungültiger Content wurde beginnend mit Element 'sql-query' gefunden. '{some links}' wird erwartet.
translation:
Invalid content was found starting with element 'sql-query'. '{some links}' is expected.
Now I'm a nervous wreck, and I'm asking you: could someone explain what I have to do and what I did wrong (for dummies, please)? If more code is needed (Java classes or something else), please tell me. Criticism of my coding is also welcome, because I want to improve =)
OK, I'm not an expert in PostgreSQL, nor Hibernate, nor Java (I'm working with C#, SQL Server, and NHibernate), but I'll still try to give you some hints.
You can probably set the types of the columns using the addScalar / addEntity methods:
Query querySP = session
.createSQLQuery("SELECT * FROM ShortFriendshipPath(...)")
.addScalar("a_pid", LongType.INSTANCE)
...
// add user type?
You need to create a user type for the array. I don't know how, or whether, you can add it to the query. See this answer here.
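For illustration, a rough, untested sketch of what such a user type could look like for the bigint[] column (assuming Hibernate 5.2's org.hibernate.usertype.UserType interface and the Postgres JDBC driver; the class name LongArrayType is made up):
import java.io.Serializable;
import java.sql.Array;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Types;
import java.util.Arrays;
import org.hibernate.engine.spi.SharedSessionContractImplementor;
import org.hibernate.usertype.UserType;

// Sketch: maps a Postgres bigint[] column to a Java Long[].
public class LongArrayType implements UserType {

    @Override
    public int[] sqlTypes() { return new int[] { Types.ARRAY }; }

    @Override
    public Class<?> returnedClass() { return Long[].class; }

    @Override
    public boolean equals(Object x, Object y) { return Arrays.equals((Long[]) x, (Long[]) y); }

    @Override
    public int hashCode(Object x) { return Arrays.hashCode((Long[]) x); }

    @Override
    public Object nullSafeGet(ResultSet rs, String[] names,
            SharedSessionContractImplementor session, Object owner) throws SQLException {
        // The Postgres driver materializes bigint[] as Long[].
        Array sqlArray = rs.getArray(names[0]);
        return sqlArray == null ? null : (Long[]) sqlArray.getArray();
    }

    @Override
    public void nullSafeSet(PreparedStatement st, Object value, int index,
            SharedSessionContractImplementor session) throws SQLException {
        if (value == null) {
            st.setNull(index, Types.ARRAY);
        } else {
            // Build a SQL bigint[] from the Long[] via the JDBC connection.
            Array sqlArray = session.connection().createArrayOf("bigint", (Long[]) value);
            st.setArray(index, sqlArray);
        }
    }

    @Override
    public Object deepCopy(Object value) { return value == null ? null : ((Long[]) value).clone(); }

    @Override
    public boolean isMutable() { return true; }

    @Override
    public Serializable disassemble(Object value) { return (Serializable) deepCopy(value); }

    @Override
    public Object assemble(Serializable cached, Object owner) { return deepCopy(cached); }

    @Override
    public Object replace(Object original, Object target, Object owner) { return deepCopy(original); }
}
It could then be referenced from the mapping file; to use it with addScalar it would presumably need to be wrapped in an org.hibernate.type.CustomType.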
You can also add the whole entity:
Query querySP = session
.createSQLQuery("SELECT * FROM ShortFriendshipPath(...)")
.addEntity(Personknowsperson.class)
...;
I hope it takes the mapping definition of the corresponding mapping file, where you can specify the user type.
Usually it's much easier to get a flat list of values, meaning a separate row for each value in the array. Like this:
Instead of
1 | 2 | (3, 4, 5) | false
You would get:
1 | 2 | 3 | false
1 | 2 | 4 | false
1 | 2 | 5 | false
This seems denormalized, but it is actually the way you build relational data.
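In Postgres, that flattening could be done with unnest(); a sketch against the ShortFriendshipPath function from the question:
-- Each element of the bigint[] path becomes its own row.
SELECT a_pid, b_pid, depth, unnest(path2) AS path_member, cycle2
FROM ShortFriendshipPath(10995116277764, 94);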
In general: use parameters when passing stuff like ids to queries.
Query querySP = session
.createSQLQuery("SELECT * FROM ShortFriendshipPath(:pid1, :pid2)")
.setParameter("pid1", pid1)
.setParameter("pid2", pid2)
...
I know how to use the prefix "N" in a raw SQL query (for unicode strings).
But how do I use the prefix N in a LINQ query? Any suggestions?
------------update------------
Below is the log generated by LINQ:
UPDATE [dbo].[Table1]
SET [Content] = @p1
WHERE ([ID] = @p0)
-- @p0: Input BigInt (Size = -1; Prec = 0; Scale = 0) [3]
-- @p1: Input Text (Size = -1; Prec = 0; Scale = 0) [こんにちは]
There's no N in the above generated SQL statement.
(Note I've set the data type of Content to nvarchar.)
You can use the EntityFunctions.AsUnicode method:
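A minimal sketch of how that could look (assuming Entity Framework 4/5 and a hypothetical context variable; in EF6 the equivalent is System.Data.Entity.DbFunctions.AsUnicode):
// EntityFunctions lives in System.Data.Objects (EF 4/5).
// AsUnicode forces the string to be treated as unicode, so the
// provider emits an nvarchar parameter / N'...' literal.
var rows = context.Table1
    .Where(t => t.Content == EntityFunctions.AsUnicode("こんにちは"))
    .ToList();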