Inserting many rows causes locking conflicts with Hibernate and Postgres, leaving the table empty - postgresql

We are benchmarking some queries to see if they will still work reliably for "a lot of" data. (1 million isn't that much to be honest, but Postgres already fails here, so it evidently is.)
Our Java code to call these queries looks something like this:
@PersistenceContext
private EntityManager em;

@Resource
private UserTransaction utx;

for (int i = 0; i < 20; i++) {
    this.utx.begin();
    for (int inserts = 0; inserts < 50_000; inserts++) {
        em.createNativeQuery(SQL_INSERT).executeUpdate();
    }
    this.utx.commit();
    for (int parameter = 0; parameter < 25; parameter++) {
        long time = System.currentTimeMillis();
        Assert.assertNotNull(this.em.createNativeQuery(SQL_SELECT).getResultList());
        System.out.println(i + " iterations \t" + parameter + "\t"
                + (System.currentTimeMillis() - time) + "ms");
    }
}
Or with plain JDBC:
Connection connection = //...
for (int i = 0; i < 20; i++) {
    for (int inserts = 0; inserts < 50_000; inserts++) {
        try (Statement statement = connection.createStatement()) {
            statement.execute(SQL_INSERT);
        }
    }
    for (int parameter = 0; parameter < 25; parameter++) {
        long time = System.currentTimeMillis();
        try (Statement statement = connection.createStatement()) {
            statement.execute(SQL_SELECT);
        }
        System.out.println(i + " iterations \t" + parameter + "\t"
                + (System.currentTimeMillis() - time) + "ms");
    }
}
The queries we tried were a simple INSERT into a table with JSON and an INSERT over two tables with about 25 rows. The SELECT has one or two JOINs and is fairly simple. One set of queries is (I had to anonymize the SQL, otherwise I wouldn't have been allowed to post it):
CREATE TABLE ts1.p (
    id integer NOT NULL,
    CONSTRAINT p_pkey PRIMARY KEY ("id")
);

CREATE TABLE ts1.m (
    pId integer NOT NULL,
    mId character varying(100) NOT NULL,
    a1 character varying(50),
    a2 character varying(50),
    CONSTRAINT m_pkey PRIMARY KEY (pId, mId)
);
CREATE SEQUENCE ts1.seq_p;
/*
* SQL_INSERT
*/
WITH p AS (
    INSERT INTO ts1.p (id)
    VALUES (nextval('ts1.seq_p'))
    RETURNING id AS pId
)
INSERT INTO ts1.m (pId, mId, a1, a2)
VALUES ((SELECT pId FROM p), 'M1', '11', '12'),
       ((SELECT pId FROM p), 'M2', '13', '14'),
       /* ... about 20 to 25 rows of values */
/*
* SQL_SELECT
*/
WITH userInput (mId, a1, a2) AS (
    VALUES
        ('M1', '11', '11'),
        ('M2', '12', '15'),
        /* ... about "parameter" rows of values */
)
SELECT m.pId, COUNT(m.a1) AS matches
FROM userInput u
LEFT JOIN ts1.m m ON (m.mId) = (u.mId)
WHERE (m.a1 IS NOT DISTINCT FROM u.a1) AND
      (m.a2 IS NOT DISTINCT FROM u.a2) OR
      (m.a1 IS NULL AND m.a2 IS NULL)
GROUP BY m.pId
/* plus HAVING, additional WHERE clauses etc. according to the use case, but that just speeds up the query */
When executing, we get the following output (the values are supposed to rise steadily and linearly):
271ms
414ms
602ms
820ms
995ms
1192ms
1396ms
1594ms
1808ms
1959ms
110ms
33ms
14ms
10ms
11ms
10ms
21ms
8ms
13ms
10ms
As you can see, after some value (usually at around 300,000 to 500,000 inserts) the time needed for the query drops significantly. Sadly we can't really debug what the result is at that point (other than that it's not null), but we assume it's an empty list, because the database tables are empty.
Let me repeat that: After half a million INSERTS, Postgres clears tables.
Of course that's not acceptable at all.
We tried different queries, all of easy to medium difficulty, and all produced this behavior, so we assume it's not the queries.
We thought that maybe the sequence returned a value too high for an integer column, so we dropped and recreated the sequence.
Once there was this exception:
org.postgresql.util.PSQLException : FEHLER: Verklemmung (Deadlock) entdeckt
Detail: Prozess 1620 wartet auf AccessExclusiveLock-Sperre auf Relation 2001098 der Datenbank 1937678; blockiert von Prozess 2480.
Which translates roughly to:
org.postgresql.util.PSQLException : ERROR: deadlock detected
Detail: Process 1620 is waiting for an AccessExclusiveLock on relation 2001098 of database 1937678; blocked by process 2480.
But I don't think this error has anything to do with the clearing of the table. We just tested against the wrong database, so multiple queries were run on the same table. Normally we have one database per benchmark test.
Of course it's important that we find out what the error is, so that we can decide if there is any risk to our customers losing their data (because again, on error the database empties some table of its choice).
Postgres version: PostgreSQL 10.6, compiled by Visual C++ build 1800, 64-bit
We tried PostgreSQL 9.6.11, compiled by Visual C++ build 1800, 64-bit, too. And we never had the same problem there (even though that could just be luck, since it's not 100% reproducible).
Do you have any idea what the error is? Or how we could debug it? The entire benchmark test runs for an hour, so there is no immediate feedback.
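One cheap way to get feedback before the end of the run would be to log the table row counts right after each commit, so a sudden drop to zero becomes visible immediately. A minimal sketch (the COUNT query and the helper are ours, not part of the original benchmark):
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

final class RowCountLogger {
    // Log the table sizes after each committed batch so a sudden drop to zero
    // shows up right away instead of an hour later when the benchmark ends.
    static void logRowCounts(Connection connection, int iteration) throws SQLException {
        try (Statement statement = connection.createStatement();
             ResultSet rs = statement.executeQuery(
                     "SELECT (SELECT COUNT(*) FROM ts1.p) AS p_rows, "
                   + "       (SELECT COUNT(*) FROM ts1.m) AS m_rows")) {
            rs.next();
            System.out.println(iteration + " iterations\tts1.p=" + rs.getLong("p_rows")
                    + "\tts1.m=" + rs.getLong("m_rows"));
        }
    }
}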

Related

PreparedStatement slower than Statement with JDBC

I am currently working on weather monitoring.
For example a record of temperature has a date and a location (coordinates).
All of the coordinates are already in the database; what I need to add is the time and the value of the temperature. Values and metadata are in a CSV file.
Basically what I'm doing is:
Get the time from the file's name
Insert the time into the DB and keep the primary key
Read the file to get the value and the coordinates
Run a SELECT query to get the id of the coordinates
Insert the weather value with the foreign keys (time and coordinates)
The issue is that the
"SELECT id FROM location WHERE latitude = ... AND longitude = ..."
is too slow. I have 230k files and currently one file takes more than 2 minutes to be processed... Edit: by changing the index, it now takes 25 seconds, which is still too slow. Moreover, the PreparedStatement is also still slower than the plain Statement and I cannot figure out why.
private static void putFileIntoDB(String variableName, ArrayList<String[]> matrix, File file,
        PreparedStatement prepWeather, PreparedStatement prepLoc, PreparedStatement prepTime,
        Connection conn) {
    try {
        int col = matrix.size();
        int row = matrix.get(0).length;
        String ts = getTimestamp(file);
        Time time = getTime(ts);
        // INSERT INTO takes 14 ms
        prepTime.setInt(1, time.year);
        prepTime.setInt(2, time.month);
        prepTime.setInt(3, time.day);
        prepTime.setInt(4, time.hour);
        ResultSet rs = prepTime.executeQuery();
        rs.next();
        int id_time = rs.getInt(1);
        // for each column (longitude)
        for (int i = 1; i < col; ++i) {
            // for each row (latitude)
            for (int j = 1; j < row; ++j) {
                try {
                    String lon = matrix.get(i)[0];
                    String lat = matrix.get(0)[j];
                    String var = matrix.get(i)[j];
                    lat = lat.substring(1, lat.length() - 1);
                    lon = lon.substring(1, lon.length() - 1);
                    double latitude = Double.parseDouble(lat);
                    double longitude = Double.parseDouble(lon);
                    double value = Double.parseDouble(var);
                    // Variant 1: with this prepared statement, the lookup needs 16 ms
                    prepLoc.setDouble(1, latitude);
                    prepLoc.setDouble(2, longitude);
                    ResultSet rsLoc = prepLoc.executeQuery();
                    rsLoc.next();
                    int id_loc = rsLoc.getInt(1);
                    // Variant 2: whereas this block, with a plain Statement, takes 1 ms
                    Statement stm = conn.createStatement();
                    ResultSet rsLoc2 = stm.executeQuery("SELECT id from location WHERE latitude = "
                            + latitude + " AND longitude = " + longitude + ";");
                    rsLoc2.next();
                    int id_loc2 = rsLoc2.getInt(1);
                    // INSERT INTO takes 1 ms
                    prepWeather.setObject(1, id_time);
                    prepWeather.setObject(2, id_loc);
                    prepWeather.setObject(3, value);
                    prepWeather.execute();
                } catch (SQLException ex) {
                    Logger.getLogger(ECMWFHelper.class.getName()).log(Level.SEVERE, null, ex);
                }
            }
        }
    } catch (SQLException ex) {
        Logger.getLogger(ECMWFHelper.class.getName()).log(Level.SEVERE, null, ex);
    }
}
What I already did:
Set two B-tree indexes on the location table, on columns latitude and longitude
Dropped the foreign key constraints
The PreparedStatements passed in as parameters are:
// Prepare selection for weather_radar foreign key
PreparedStatement prepLoc = conn.prepareStatement("SELECT id from location WHERE latitude = ? AND longitude = ?;");
PreparedStatement prepTime = conn.prepareStatement("INSERT INTO time(dataSetID, year, month, day, hour) " +
"VALUES(" + dataSetID +", ?, ? , ?, ?)" +
" RETURNING id;");
// PrepareStatement for weather_radar table
PreparedStatement prepWeather = conn.prepareStatement("INSERT INTO weather_radar(dataSetID, id_1, id_2, " + variableName + ")"
+ "VALUES(" + dataSetID + ", ?, ?, ?)");
Any idea how to make things go quicker?
Ubuntu 16.04 LTS 64-bits
15.5 GiB RAM
Intel® Core™ i7-6500U CPU @ 2.50GHz × 4
PostgreSQL 9.5.11 on x86_64-pc-linux-gnu, compiled by gcc (Ubuntu 5.4.0-6ubuntu1~16.04.4) 5.4.0 20160609, 64-bit
Netbeans IDE 8.2
JDK 1.8
postgresql-42.2.0.jar
The key issue here is that you are missing ResultSet.close() and Statement.close() calls.
Once you resolve that (add the relevant close calls), you may find that a SINGLE conn.prepareStatement call (made once, before both for loops) improves performance even further. Of course, you then no longer need to close the statement inside the loop, but you still need to close the result sets in the loop.
Then you might apply SQL batching.
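A rough sketch of what that could look like, assuming the location and weather_radar tables from the question (the column list is simplified and the variable column is called temperature here purely for illustration): prepare both statements once outside the loops, close every ResultSet with try-with-resources, and flush the inserts in batches.
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

final class BatchedWeatherInsert {
    static void insertBatch(Connection conn, int idTime, double[][] rows) throws SQLException {
        // Prepare both statements once, before any loop.
        try (PreparedStatement lookupLoc = conn.prepareStatement(
                     "SELECT id FROM location WHERE latitude = ? AND longitude = ?");
             PreparedStatement insertWeather = conn.prepareStatement(
                     "INSERT INTO weather_radar(id_1, id_2, temperature) VALUES (?, ?, ?)")) {
            int pending = 0;
            for (double[] r : rows) {          // r = {latitude, longitude, value}
                lookupLoc.setDouble(1, r[0]);
                lookupLoc.setDouble(2, r[1]);
                try (ResultSet rs = lookupLoc.executeQuery()) {   // close every ResultSet
                    if (!rs.next()) continue;
                    insertWeather.setInt(1, idTime);
                    insertWeather.setInt(2, rs.getInt(1));
                    insertWeather.setDouble(3, r[2]);
                    insertWeather.addBatch();                     // collect instead of executing row by row
                }
                if (++pending % 5000 == 0) insertWeather.executeBatch();
            }
            insertWeather.executeBatch();                          // flush the remainder
        }
    }
}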
Using EXPLAIN, you can figure out at which point the query becomes slow.
One situation where I have encountered something similar:
Compound queries, e.g. parameterized over similar date ranges from different tables and then joined on some indexed value. Even though the dates served as indexes, the query produced by the PreparedStatement could not hit the indexes and ended up doing a scan over the joined data.
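For example, from JDBC you can prefix the suspect query with EXPLAIN ANALYZE and print the plan rows that PostgreSQL returns; the plan shows whether the latitude/longitude indexes are actually used or whether the table is scanned. A sketch (the literal coordinates are placeholders):
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

final class ExplainHelper {
    static void explainLocationLookup(Connection conn) throws SQLException {
        String sql = "EXPLAIN ANALYZE SELECT id FROM location "
                   + "WHERE latitude = 48.85 AND longitude = 2.35";
        try (Statement st = conn.createStatement();
             ResultSet rs = st.executeQuery(sql)) {
            while (rs.next()) {
                System.out.println(rs.getString(1));   // one plan line per result row
            }
        }
    }
}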

Hibernate: StoredProcedure with recursive depthsearch: Mapping/Output Problems

I'm searching for help. I have to map my Postgres 9.4 database (DB) with Hibernate 5.2; it's a study task, of course. The biggest problem is that I'm not exactly an expert in Hibernate, Java, or coding in general XD
It's a social-network DB. Mapping the DB with Hibernate works fine.
Now I need to map a stored procedure. This procedure should find the shortest friendship path between two persons. In Postgres the procedure works fine.
These are the relevant DB tables:
For Person:
CREATE TABLE Person (
PID bigint NOT NULL,
firstName varchar(50) DEFAULT NULL,
lastName varchar(50) DEFAULT NULL,
(some more...)
PRIMARY KEY (PID)
);
And for the relationship between two persons:
CREATE TABLE Person_knows_Person (
ApID bigint NOT NULL,
BpID bigint REFERENCES Person (PID) (..)
knowsCreationDate timestamp,
PRIMARY KEY (ApID,BpID));
And this is the stored procedure, in short:
CREATE OR REPLACE FUNCTION ShortFriendshipPath(pid bigint, pid2 bigint)
RETURNS TABLE (a_pid bigint, b_pid bigint, depth integer, path2 bigint[], cycle2 boolean)
AS $$
BEGIN
RETURN QUERY
SELECT * FROM (
WITH RECURSIVE FriendshipPath(apid, bpid, depth, path, cycle) AS(
SELECT pkp.apid, pkp.bpid,1,
ARRAY[pkp.apid], false
FROM person_knows_person pkp
WHERE apid=$1 --OR bpid=$1
UNION ALL
SELECT pkp.apid, pkp.bpid, fp.depth+1, path || pkp.apid,
pkp.apid = ANY(path)
FROM person_knows_person pkp, FriendshipPath fp
WHERE pkp.apid = fp.bpid AND NOT cycle)
SELECT *
FROM FriendshipPath WHERE bpid=$2) AS OKOK
UNION
SELECT * FROM (
WITH RECURSIVE FriendshipPath(apid, bpid, depth, path, cycle) AS(
SELECT pkp.apid, pkp.bpid,1,
ARRAY[pkp.apid], false
FROM person_knows_person pkp
WHERE apid=$2 --OR bpid=$1
UNION ALL
SELECT pkp.apid, pkp.bpid, fp.depth+1, path || pkp.apid,
pkp.apid = ANY(path)
FROM person_knows_person pkp, FriendshipPath fp
WHERE pkp.apid = fp.bpid AND NOT cycle)
SELECT *
FROM FriendshipPath WHERE bpid=$1) AS YOLO
ORDER BY depth ASC LIMIT 1;
END;
$$ LANGUAGE 'plpgsql' ;
(Sorry for so much code, but it covers both directions, and I'd rather post it all than introduce copy-and-reduce mistakes^^)
The call in Postgres, for example:
SELECT * FROM ShortFriendshipPath(10995116277764, 94);
gives me this Output:
(screenshot of the result rows omitted)
I searched the internet for help and found 3 ways to call it:
direct SQL call
call with NamedQuery and
map via XML
(fav found here)
I failed with all of them XD
I prefer the first solution, with this call in a session:
Session session = HibernateUtility.getSessionfactory().openSession();
Transaction tx = null;
try {
    tx = session.beginTransaction();
    System.out.println("Please insert a second PID:");
    Scanner scanner = new Scanner(System.in);
    long pid2 = Long.parseLong(scanner.nextLine());
    // insert of second ID
    Query query2 = session.createQuery("FROM " + Person.class.getName() + " WHERE pid = :pid ");
    query2.setParameter("pid", pid2);
    List<Person> listB = ((org.hibernate.Query) query2).list();
    int cnt1 = 0;
    while (cnt1 < listB.size()) {
        Person pers1 = listB.get(cnt1++);
        pid2 = pers1.getPid();
    }
    // query call directly:
    Query querySP = session.createSQLQuery("SELECT a_pid,path2 FROM ShortFriendshipPath(" + pid + "," + pid2 + ")");
    List<Object[]> list = ((org.hibernate.Query) querySP).list();
    for (int i = 0; i < list.size(); i++) {
        Personknowsperson friendship = (Personknowsperson) list.get(i);
    }
} catch (Exception e) { /* bla... */ }
finally { /* bla... */ }
Then I get the following error:
javax.persistence.PersistenceException:
org.hibernate.MappingException: No Dialect mapping for JDBC type: 2003
(..blabla...)
I understand why: my output is not of type Personknowsperson. I found an answer saying that I have to tell Hibernate what the correct format is, and that I should use a 'UserType'. So I tried to find an explanation of how to create my UserType, but I found nothing that I understand. Second problem: I'm not sure what I should use for the bigint[] (path2). You see, I'm a real expert -.-
Then I got the idea to try the third solution. But the first problem was where to put the XML stuff, because my output is not a table. So I tried it in the .cfg.xml, but then Hibernate says:
Caused by: java.lang.IllegalArgumentException: org.hibernate.internal.util.config.ConfigurationException: Unable to perform unmarshalling at line number -1 and column -1 in RESOURCE hibernate.cfg.xml. Message: cvc-complex-type.2.4.a: Ungültiger Content wurde beginnend mit Element 'sql-query' gefunden. '{some links}' wird erwartet.
translation:
Invalid content was found starting with element 'sql-query'. '{some links}' is expected.
Now I'm a nervous wreck, so I'm asking you.
Could someone explain what I have to do and what I did wrong (for dummies, please)? If more code is needed (Java classes or something else), please tell me. Criticism of my coding is also welcome, because I want to improve =)
OK, I'm not an expert in PostgreSQL, nor Hibernate, nor Java (I'm working with C#, SQL Server and NHibernate, so...), but I'll still try to give you some hints.
You probably can set the types of the columns using addXyz methods:
Query querySP = session
.createSQLQuery("SELECT * FROM ShortFriendshipPath(...)")
.addScalar("a_pid", LongType.INSTANCE)
...
// add user type?
You need to create a user type for the array. I don't know how and if you can add it to the query. See this answer here.
You can also add the whole entity:
Query querySP = session
.createSQLQuery("SELECT * FROM ShortFriendshipPath(...)")
.addEntity(Personknowsperson.class)
...;
I hope it takes the mapping definition of the corresponding mapping file, where you can specify the user type.
Usually it's much easier to get a flat list of values, I mean a separate row for each different value in the array. Like this:
Instead of
1 | 2 | (3, 4, 5) | false
You would get:
1 | 2 | 3 | false
1 | 2 | 4 | false
1 | 2 | 5 | false
This seems denormalized, but it is actually the way you model relational data.
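On the Java side, regrouping such flattened rows back into one path per (a_pid, b_pid) pair is a simple loop. A sketch, assuming each row arrives as an Object[] in the column order (a_pid, b_pid, path_element, cycle):
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

final class PathRegrouper {
    // Rebuild one path list per (a_pid, b_pid) pair from flattened rows.
    static Map<String, List<Long>> regroup(List<Object[]> rows) {
        Map<String, List<Long>> paths = new LinkedHashMap<>();
        for (Object[] row : rows) {
            String key = row[0] + "->" + row[1];               // a_pid -> b_pid
            long pathElement = ((Number) row[2]).longValue();  // one element of the former array
            paths.computeIfAbsent(key, k -> new ArrayList<>()).add(pathElement);
        }
        return paths;
    }
}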
In general: use parameters when passing stuff like ids to queries.
Query querySP = session
.createSQLQuery("SELECT * FROM ShortFriendshipPath(:pid1, :pid2)")
.setParameter("pid1", pid1)
.setParameter("pid2", pid2)
...
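Putting these hints together, a minimal sketch that avoids the array column entirely (and therefore needs no user type) might look like this; the column names follow the function's RETURNS TABLE definition, and the exact Hibernate types used are an assumption:
import java.util.List;

import org.hibernate.Session;
import org.hibernate.type.IntegerType;
import org.hibernate.type.LongType;

final class ShortFriendshipPathDao {
    // Select only the scalar columns, declare their types explicitly, and bind
    // the two PIDs as parameters instead of concatenating them into the SQL.
    @SuppressWarnings("unchecked")
    static List<Object[]> shortestPath(Session session, long pid1, long pid2) {
        return session.createSQLQuery(
                    "SELECT a_pid, b_pid, depth FROM ShortFriendshipPath(:pid1, :pid2)")
                .addScalar("a_pid", LongType.INSTANCE)
                .addScalar("b_pid", LongType.INSTANCE)
                .addScalar("depth", IntegerType.INSTANCE)
                .setParameter("pid1", pid1)
                .setParameter("pid2", pid2)
                .list();
    }
}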

Count previous occurrences of a value split by date ranges

Here's a simple query we do for ad hoc requests from our Marketing department on the leads we received in the last 90 days.
SELECT ID
,FIRST_NAME
,LAST_NAME
,ADDRESS_1
,ADDRESS_2
,CITY
,STATE
,ZIP
,HOME_PHONE
,MOBILE_PHONE
,EMAIL_ADDRESS
,ROW_ADDED_DTM
FROM WEB_LEADS
WHERE ROW_ADDED_DTM BETWEEN @START AND @END
They are asking for more derived columns to be added that show the number of previous occurrences of ADDRESS_1 where the EMAIL_ADDRESS matches. But they want it for different date ranges.
So the derived columns would look like this:
,COUNT_ADDRESS_1_LAST_1_DAYS
,COUNT_ADDRESS_1_LAST_7_DAYS
,COUNT_ADDRESS_1_LAST_14_DAYS
etc.
I've manually filled these derived columns using update statements when there were just a few. The above query is really just a sample of a much larger query with many more columns. The actual request has blossomed into 6 date ranges for 13 columns. I'm asking if there's a better way than using 78 additional update statements.
I think you will have a hard time writing a query that includes all of these 78 metrics per e-mail address without actually creating a query that hard-codes the different choices. However you can generate such a pivot query with dynamic SQL, which will save you some keystrokes and will adjust dynamically as you add more columns to the table.
The result you want to end up with will look something like this (but of course you won't want to type it):
;WITH y AS
(
SELECT
EMAIL_ADDRESS,
/* aggregation portion */
[ADDRESS_1] = COUNT(DISTINCT [ADDRESS_1]),
[ADDRESS_2] = COUNT(DISTINCT [ADDRESS_2]),
... other columns
/* end agg portion */
FROM dbo.WEB_LEADS AS wl
WHERE ROW_ADDED_DTM >= /* one of 6 past dates */
GROUP BY wl.EMAIL_ADDRESS
)
SELECT EMAIL_ADDRESS,
/* pivot portion */
COUNT_ADDRESS_1_LAST_1_DAYS = *count address 1 from 1 day ago*,
COUNT_ADDRESS_1_LAST_7_DAYS = *count address 1 from 7 days ago*,
... other date ranges ...
COUNT_ADDRESS_2_LAST_1_DAYS = *count address 2 from 1 day ago*,
COUNT_ADDRESS_2_LAST_7_DAYS = *count address 2 from 7 days ago*,
... other date ranges ...
... repeat for 11 more columns ...
/* end pivot portion */
FROM y
GROUP BY EMAIL_ADDRESS
ORDER BY EMAIL_ADDRESS;
This is a little involved, and it should all be run as one script, but I'm going to break it up into chunks to intersperse comments on how the above portions are populated without typing them. (And before long @Bluefeet will probably come along with a much better PIVOT alternative.) I'll enclose my interspersed comments in /* */ so that you can still copy the bulk of this answer into Management Studio and run it with the comments intact.
Code/comments to copy follows:
/*
First, let's build a table of dates that can be used both to derive labels for pivoting and to assist with aggregation. I've added the three ranges you've mentioned and guessed at a fourth, but hopefully it is clear how to add more:
*/
DECLARE @d DATE = SYSDATETIME();
CREATE TABLE #L(label NVARCHAR(15), d DATE);
INSERT #L(label, d) VALUES
(N'LAST_1_DAYS', DATEADD(DAY, -1, @d)),
(N'LAST_7_DAYS', DATEADD(DAY, -8, @d)),
(N'LAST_14_DAYS', DATEADD(DAY, -15, @d)),
(N'LAST_MONTH', DATEADD(MONTH, -1, @d));
/*
Next, let's build the portions of the query that are repeated per column name. First, the aggregation portion is just in the format col = COUNT(DISTINCT col). We're going to go to the catalog views to dynamically derive the list of column names (except ID, EMAIL_ADDRESS and ROW_ADDED_DTM) and stuff them into a #temp table for re-use.
*/
SELECT name INTO #N FROM sys.columns
WHERE [object_id] = OBJECT_ID(N'dbo.WEB_LEADS')
AND name NOT IN (N'ID', N'EMAIL_ADDRESS', N'ROW_ADDED_DTM');
DECLARE @agg NVARCHAR(MAX) = N'', @piv NVARCHAR(MAX) = N'';
SELECT @agg += ',
' + QUOTENAME(name) + ' = COUNT(DISTINCT '
+ QUOTENAME(name) + ')' FROM #N;
PRINT @agg;
/*
Next we'll build the "pivot" portion (even though I am angling for the poor man's pivot - a bunch of CASE expressions). For each column name we need a conditional against each range, so we can accomplish this by cross joining the list of column names against our labels table. (And we'll use this exact technique again in the query later to make the /* one of 6 past dates */ portion work.)
*/
SELECT @piv += ',
COUNT_' + n.name + '_' + l.label
+ ' = MAX(CASE WHEN label = N''' + l.label
+ ''' THEN ' + QUOTENAME(n.name) + ' END)'
FROM #N as n CROSS JOIN #L AS l;
PRINT @piv;
/*
Now, with those two portions populated as we'd like them, we can build a dynamic SQL statement that fills out the rest:
*/
DECLARE @sql NVARCHAR(MAX) = N';WITH y AS
(
SELECT
EMAIL_ADDRESS, l.label' + @agg + '
FROM dbo.WEB_LEADS AS wl
CROSS JOIN #L AS l
WHERE wl.ROW_ADDED_DTM >= l.d
GROUP BY wl.EMAIL_ADDRESS, l.label
)
SELECT EMAIL_ADDRESS' + @piv + '
FROM y
GROUP BY EMAIL_ADDRESS
ORDER BY EMAIL_ADDRESS;';
PRINT @sql;
EXEC sp_executesql @sql;
GO
DROP TABLE #N, #L;
/*
Now again, this is a pretty complex piece of code, and perhaps it can be made easier with PIVOT. But I think even @Bluefeet will write a version of PIVOT that uses dynamic SQL because there is just way too much to hard-code here IMHO.
*/

Fast Array Inserts with Postgres

In Oracle OCI and OCCI there are API facilities to perform array inserts: you build up an array of values in the client and send this array, along with a prepared statement, to the server to insert thousands of entries into a table in a single shot, resulting in huge performance improvements in some scenarios. Is there anything similar in PostgreSQL?
I am using the stock PostgreSQL C API.
Some pseudo code to illustrate what I have in mind:
stmt = con->prepare("INSERT INTO mytable VALUES ($1, $2, $3)");
pg_c_api_array arr(stmt);
for triplet(a, b, c) in mylongarray:
pg_c_api_variant var = arr.add();
var.bind(1, a);
var.bind(2, b);
var.bind(3, c);
stmt->bindarray(arr);
stmt->exec()
PostgreSQL has similar functionality - the COPY statement and the COPY API - and it is very fast. See the libpq documentation:
char *data = "10\t20\t40\n20\t30\t40";
pres = PQexec(pconn, "COPY mytable FROM stdin");
/* can be called repeatedly; strlen() gives the payload length, not the pointer size */
copy_result = PQputCopyData(pconn, data, strlen(data));
if (copy_result != 1)
{
fprintf(stderr, "Copy to target table failed: %s\n",
PQerrorMessage(pconn));
EXIT;
}
if (PQputCopyEnd(pconn, NULL) == -1)
{
fprintf(stderr, "Copy to target table failed: %s\n",
PQerrorMessage(pconn));
EXIT;
}
pres = PQgetResult(pconn);
if (PQresultStatus(pres) != PGRES_COMMAND_OK)
{
fprintf(stderr, "Copy to target table failed:%s\n",
PQerrorMessage(pconn));
EXIT;
}
PQclear(pres);
As Pavel Stehule points out, there is the COPY command and, when using libpq in C, associated functions for transmitting the copy data. I haven't used these. I mostly program against PostgreSQL in Python, and have used similar functionality from psycopg2. It's extremely simple:
conn = psycopg2.connect(CONN_STR)
cursor = conn.cursor()
f = open('data.tsv')
cursor.copy_from(f, 'incoming')
f.close()
In fact I've often replaced open with a file-like wrapper object that performs some basic data cleaning first. It's pretty seamless.
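For what it's worth, the PostgreSQL JDBC driver exposes the same COPY protocol through org.postgresql.copy.CopyManager. A minimal sketch (connection details, table name and data invented for the example):
import java.io.IOException;
import java.io.StringReader;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

import org.postgresql.PGConnection;
import org.postgresql.copy.CopyManager;

final class CopyDemo {
    public static void main(String[] args) throws SQLException, IOException {
        try (Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://localhost/test", "user", "secret")) {
            CopyManager copyManager = conn.unwrap(PGConnection.class).getCopyAPI();
            // Tab-separated rows, exactly like a COPY ... FROM STDIN data stream.
            String data = "10\t20\t40\n20\t30\t40\n";
            long rows = copyManager.copyIn("COPY mytable FROM STDIN", new StringReader(data));
            System.out.println(rows + " rows copied");
        }
    }
}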
I like this way of creating thousands of rows in a single command:
INSERT INTO mytable VALUES (UNNEST($1), UNNEST($2), UNNEST($3));
Bind an array of the values of column 1 to $1, an array of the values of column 2 to $2 etc.! Providing the values in columns may seem a bit strange at first when you are used to thinking in rows.
You need PostgreSQL >= 8.4 for UNNEST or your own function to convert arrays into sets.
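The UNNEST approach also works nicely from JDBC, where the arrays can be bound with Connection.createArrayOf. A sketch, assuming a three-column table of two integers and a text value (table, connection details and data invented):
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.SQLException;

final class UnnestInsertDemo {
    public static void main(String[] args) throws SQLException {
        try (Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://localhost/test", "user", "secret")) {
            Integer[] col1 = {1, 2, 3};
            Integer[] col2 = {10, 20, 30};
            String[]  col3 = {"a", "b", "c"};
            // One statement, one round trip, three rows (one per array element).
            try (PreparedStatement ps = conn.prepareStatement(
                    "INSERT INTO mytable SELECT UNNEST(?), UNNEST(?), UNNEST(?)")) {
                ps.setArray(1, conn.createArrayOf("integer", col1));
                ps.setArray(2, conn.createArrayOf("integer", col2));
                ps.setArray(3, conn.createArrayOf("text", col3));
                ps.executeUpdate();
            }
        }
    }
}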

Oracle error ORA-01722 while updating DECIMAL value

I'm using ODP to update an Oracle 10g DB with no success updating decimal values.
Ex:
UPDATE usertable.fiche SET DT_MAJ = '20110627',var = 60.4 WHERE NB = '2143'
Result: 604 in the var column ('.' disappears)
UPDATE usertable.fiche SET DT_MAJ = '20110627',var = 60,4 WHERE NB = '2143'
Result: INVALID NUMBER
UPDATE usertable.fiche SET DT_MAJ = '20110627',var = '60,4' WHERE NB = '2143'
Result: INVALID NUMBER
I also tried to use TO_NUMBER function without any success.
Any idea on the correct format I should use?
Thanks.
You didn't give us much to go on (only the update statements, not the casting of types or whatnot), but here is a test case that shows how to do it.
create table numTest(numA number(3) ,
numB number(10,8) ,
numC number(10,2) )
/
--test insert
insert into numTest(numA, numB, numC) values (123, 12.1241, 12.12)
/
select * from numTest
/
/*
NUMA NUMB NUMC
---------------------- ---------------------- ----------------------
123 12.1241 12.12
*/
--delete to start clean
rollback
/
/*by marking these table.col%type we can change the table type and not have to worry about changing these in the future!*/
create or replace procedure odpTestNumberInsert(
numA_in IN numTest.numA%type ,
numB_in IN numTest.numB%type ,
numC_in IN numTest.numC%type)
AS
BEGIN
insert into numTest(numA, numB, numC) values (numA_in, numB_in, numC_in) ;
END odpTestNumberInsert ;
/
begin
odpTestNumberInsert(numA_in => 10
,numB_in => 12.55678
,numC_in => 13.13);
odpTestNumberInsert(numA_in => 20
,numB_in => 30.667788
,numC_in => 40.55);
end ;
/
select *
from numTest
/
/*
NUMA NUMB NUMC
---------------------- ---------------------- ----------------------
10 12.55678 13.13
20 30.667788 40.55
*/
rollback
/
Okay, so we have created a table, put data in it (and removed it), and created a procedure to verify it works (then rolled back the changes), and all looks good. So let's go to the .NET side (I'll assume C#):
OracleCommand cmd = new OracleCommand("odpTestNumberInsert", con);
cmd.CommandType = CommandType.StoredProcedure;
cmd.BindByName = true;
OracleParameter oparam0 = cmd.Parameters.Add("numA_in", OracleDbType.Int64);
oparam0.Value = 5 ;
oparam0.Direction = ParameterDirection.Input;
decimal deciVal = (decimal)55.556677;
OracleParameter oparam1 = cmd.Parameters.Add("numB_in", OracleDbType.Decimal);
oparam1.Value = deciVal ;
oparam1.Direction = ParameterDirection.Input;
OracleParameter oparam2 = cmd.Parameters.Add("numC_in", OracleDbType.Decimal);
oparam2.Value = 55.66 ;
oparam2.Direction = ParameterDirection.Input;
cmd.ExecuteNonQuery ();
con.Close();
con.Dispose();
And then to finish things off:
select *
from numTest
/
NUMA NUMB NUMC
---------------------- ---------------------- ----------------------
5 55.556677 55.66
All of our data was inserted.
Without more code on your part, I would recommend that you verify that the correct parameter is being passed in and associated with the statement; the above proves the approach works.
You should not need to re-cast your variables via TO_NUMBER when you can set the correct types while creating the parameters.
I found the problem just after posting my question!!! I was not looking in the right place... The Oracle update was not involved at all. The problem was in the Decimal.Parse method I was using to convert my input string (containing a comma as decimal separator) into the decimal number (with a dot as decimal separator) that I wanted to write to the DB. The thing is that the system culture is not the same on my own development computer as on the client computer, even though they both run in the same country. So the parse worked perfectly on my computer but removed the decimal character in the client's production environment. I finally just put in place a replace of the comma by a dot and everything works well now. Thanks again for your time.
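For what it's worth, the same pitfall exists outside .NET. A purely illustrative Java sketch of the same locale dependence (not part of the original C# code):
import java.text.NumberFormat;
import java.text.ParseException;
import java.util.Locale;

public class DecimalSeparatorDemo {
    public static void main(String[] args) throws ParseException {
        String input = "60,4"; // comma as decimal separator, as in the question

        // With a French locale the comma is the decimal separator: 60.4
        Number fr = NumberFormat.getInstance(Locale.FRANCE).parse(input);

        // With a US locale the comma is a grouping separator: 604
        Number us = NumberFormat.getInstance(Locale.US).parse(input);

        System.out.println(fr + " vs " + us); // prints "60.4 vs 604"
    }
}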