I am new to PostgreSQL. I am trying to insert data to a table using JDBC.
One of the column values to be inserted is of the following type:
2014-04-04T19:56:42.784Z (Please note the T and Z in the string).
I first used timestamp as the datatype for the corresponding column. However, I got the following ERROR:
org.postgresql.util.PSQLException: ERROR: syntax error at or near "T19"
I then changed it to character(40), thinking my format might be wrong. However, I still got the same error.
Later, when I removed the T and Z from the string, the error stopped. Please note I also thought the colons could be the problem; however, my testing showed that is not the case.
Java code snippet
String strLine = "1,2014-04-04T19:56:42.784Z,456,0";
String[] tempStr = strLine.split(",");
String sql = "INSERT INTO Table (A , TimeOfSess , B , C )"
+ "VALUES("+tempStr[0]+","+tempStr[1]+","+tempStr[2]+","+tempStr[3]+")";
stmt.executeUpdate(sql);
Please note I am using TimeOfSess as a character(40). For my analysis it is not important to store the time as a time type; character will also work, so I am taking the easy way out here.
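For what it's worth, the root cause here is that the concatenation above splices the timestamp into the SQL unquoted, so PostgreSQL trips over the bare T19. A parameterized version avoids the quoting problem entirely. This is only a sketch: the connection details are placeholders, "Table", A, B, and C are the placeholder names from the snippet, and the numeric columns are assumed to be integers.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

public class SessionInsert {
    public static void main(String[] args) throws Exception {
        String strLine = "1,2014-04-04T19:56:42.784Z,456,0";
        String[] tempStr = strLine.split(",");

        // JDBC URL and credentials are placeholders.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://localhost:5432/mydb", "user", "pass");
             // "Table" is the placeholder name from the question; substitute the real
             // table name (TABLE itself is a reserved word in PostgreSQL).
             PreparedStatement ps = conn.prepareStatement(
                     "INSERT INTO Table (A, TimeOfSess, B, C) VALUES (?, ?, ?, ?)")) {
            ps.setInt(1, Integer.parseInt(tempStr[0]));
            // The driver sends this as a properly quoted literal,
            // so the T and Z are no longer a syntax problem.
            ps.setString(2, tempStr[1]);
            ps.setInt(3, Integer.parseInt(tempStr[2]));
            ps.setInt(4, Integer.parseInt(tempStr[3]));
            ps.executeUpdate();
        }
    }
}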
I use DataGrip as a client to connect to Redshift, and I ran into a strange issue that exhausted my whole day.
When I run my SQL query, DataGrip complains:
[XX000] ERROR: invalid string enlargement request size 1073741823
There doesn't seem to be a place where I can check a more detailed error log. Googling the error turns up very few similar questions; those suggest it may occur when a field exceeds the maximum length Redshift accepts. But that is not the story for me: I don't have a long field. So I commented out all of my SQL statements and re-added them incrementally to locate the statement that triggers the issue.
Finally, I found the statement that triggers the error message:
(
case when trunc(request_date_skip_weekend_tmp) = to_date('2022-03-21', 'YYYY-MM-DD')
then dateadd(day, 1, trunc(request_date_skip_weekend_tmp))
else request_date_skip_weekend_tmp end
)
request_date_skip_weekend,
After I changed it to:
dateadd(day, 1, trunc(request_date_skip_weekend_tmp)) request_date_skip_weekend,
the error disappeared. It is very hard for me to accept the connection between the error message and this SQL change; I don't know why the former statement triggers the error.
I would appreciate it if you could spot why the former expression errors out, or share some knowledge about where I can fetch a more detailed error message to learn what happened.
Your code snippet is dates and timestamps but the error is about strings, so it is likely you have identified a "trigger" and not the root cause. Also, since you report that the SQL is very long, you could be dealing with compiler optimization changes moving the failure around. Removing a CASE can cause the compiler/optimizer to choose different structures for the query.
One experiment to try is to change the to_date() to a cast to timestamp so there are no implicit casts ('2022-03-21'::timestamp). This is unlikely to be the cause, but it may help.
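For example, the CASE from the question with the literal cast written explicitly (a sketch, using the same column name as above):

(
case when trunc(request_date_skip_weekend_tmp) = '2022-03-21'::timestamp
then dateadd(day, 1, trunc(request_date_skip_weekend_tmp))
else request_date_skip_weekend_tmp end
)
request_date_skip_weekend,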
I expect you will need to post the query to get more help. How large is it? This error could be related to building a large string in the query, OR to the text of the query itself, OR to creating the output. This isn't a standard "string too long" message, so something more implicit is going on. You could post it to a Google doc or some other file-sharing service and just link it in the question.
This is my first question ever on Stack Overflow, and as suggested, I have looked at other similar questions and attempted to use their responses for my problem. So far, no luck.
The situation is as follows:
I have a custom query in JPA.
#Query(value="SELECT u.str_id,u.str_exercise_name, u.str_target_body_part,u.char_effect FROM training_schema.exercise_entity u WHERE u.str_exercise_name = ?1 and u.str_target_body_part= ?2", nativeQuery=true)
ExerciseEntity findExerciseEntityByNameAndTargetBodyPart(String str_exercise_name,String str_target_body_part);
If I remove the name of the columns (u.str_id, u.str_exercise_name, u.str_target_body_part, u.char_effect) and replace the query with:
#Query(value="SELECT u FROM training_schema.exercise_entity u WHERE u.str_exercise_name = ?1 and u.str_target_body_part= ?2", nativeQuery=true)
ExerciseEntity findExerciseEntityByNameAndTargetBodyPart(String str_exercise_name,String str_target_body_part);
I get the following error:
"The column name str_id was not found in this ResultSet"
The fact that the error doesn't occur when I list all the columns but does when I use the alias 'u' doesn't make sense to me, because it would mean that if I ever had to work with a larger table with, say, 10 columns, I would have to write them all out.
One more piece of information that hopefully helps: with the version of the query where I am using 'u' instead of the column names, the error is ONLY generated when a matching record is found. For a null return from the database, there is no problem.
Using Java Spring and PostgreSQL.
I was able to figure out the problem.
In the query where I am using the alias 'u' ALONE, I had to make a slight change. Instead of just saying 'u', I changed it to:
#Query(value="SELECT u.* FROM training_schema.exercise_entity u WHERE u.str_exercise_name = ?1 and u.str_target_body_part= ?2", nativeQuery=true)
ExerciseEntity findExerciseEntityByNameAndTargetBodyPart(String str_exercise_name,String str_target_body_part);
Using only 'u' was returning the whole row as a single composite column, WITHOUT the individual column names. Adding the '.*' caused the query to return a result set with proper column names, which made the error go away.
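As an aside, a JPQL (non-native) version of the query avoids listing columns altogether, since JPQL selects entities rather than table columns. A sketch, assuming the entity class is named ExerciseEntity and that the hypothetical camelCase fields below map onto the str_exercise_name and str_target_body_part columns:

// Entity and field names are assumed; adjust to the actual mapping.
@Query("SELECT u FROM ExerciseEntity u WHERE u.strExerciseName = ?1 AND u.strTargetBodyPart = ?2")
ExerciseEntity findExerciseEntityByNameAndTargetBodyPart(String strExerciseName, String strTargetBodyPart);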
I have the following table in redshift:
Column | Type
-------+-------------
id     | integer
value  | varchar(255)
I'm trying to copy in (using Data Pipeline's RedshiftCopyActivity), and the data has the line 1,maybe as the entry to be added, but I get back the error 1214: Delimiter not found, with a raw_field_data value of maybe. Is there something I'm missing in the copy parameters?
The entire CSV is three lines:
1,maybe
2,no
3,yes
You may want to take a look at the similar question Redshift COPY command delimiter not found.
Make sure your RedshiftCopyActivity configuration includes FORMAT AS CSV from https://docs.aws.amazon.com/redshift/latest/dg/copy-parameters-data-format.html#copy-csv; a sample COPY is sketched after these suggestions.
Be sure your input data has your configured delimiter between every field, even in the case of nulls.
Be sure you do not have any trailing blank lines.
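For reference, a minimal COPY of this table with CSV format might look like the sketch below; the table name, S3 path, and IAM role are placeholders for whatever RedshiftCopyActivity generates:

-- table name, bucket path, and role ARN are placeholders
COPY my_table (id, value)
FROM 's3://my-bucket/path/data.csv'
IAM_ROLE 'arn:aws:iam::123456789012:role/MyRedshiftRole'
FORMAT AS CSV;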
You can run the following SQL (from the linked question) to see more specific details of what row is causing the problem.
SELECT le.starttime,
d.query,
d.line_number,
d.colname,
d.value,
le.raw_line,
le.err_reason
FROM stl_loaderror_detail d
JOIN stl_load_errors le
ON d.query = le.query
ORDER BY le.starttime DESC;
I have the following situation:
A PostgreSQL database with a table that contains a date type column called date.
A string from a delimited .txt file outputting: 20170101.
I want to insert the string into the date type column.
So far I have tried the following, with mixed results/errors:
row1.YYYYMMDD
Detail Message: Type mismatch: cannot convert from String to Date
Explanation: This one is fairly obvious.
TalendDate.parseDate("yyyyMMdd",row1.YYYYMMDD)
Batch entry 0 INSERT INTO "data" ("location_id","date","avg_winddirection","avg_windspeed","avg_temperature","min_temperature","max_temperature","total_hours_sun","avg_precipitation") VALUES (209,2017-01-01 00:00:00.000000 +01:00:00,207,7.7,NULL,NULL,NULL,NULL,NULL) was aborted. Call getNextException to see the cause.
In the error message I can see the string parsed into "2017-01-01 00:00:00.000000 +01:00:00".
When I try to execute the query directly I get a "SQL Error: 42601: ERROR: Syntax error at "00" position 194"
Other observations/attempts:
The funny thing is that if I use '20170101' as a string in the query, it works; see below.
INSERT INTO "data" ("location_id","date","avg_winddirection","avg_windspeed","avg_temperature","min_temperature","max_temperature","total_hours_sun","avg_precipitation") VALUES (209,'20170101',207,7.7,NULL,NULL,NULL,NULL,NULL)
I've also tried to change the schema of the database date column to string. It produces the following:
Batch entry 0 INSERT INTO "data" ("location_id","date","avg_winddirection","avg_windspeed","avg_temperature","min_temperature","max_temperature","total_hours_sun","avg_precipitation") VALUES (209,20170101,207,7.7,NULL,NULL,NULL,NULL,NULL) was aborted. Call getNextException to see the cause.
This query also doesn't work directly because the date isn't between single quotes.
What am I missing or not doing?
(I've started learning to use Talend 2-3 days ago)
EDIT//
Screenshots of my Job and tMap
http://imgur.com/a/kSFd0
EDIT// It doesn't appear to be a date formatting problem but a Talend-to-PostgreSQL connection problem.
EDIT//
FIXED: It was a stupidly easy problem/solution of course. The database name and schema name fields were empty... so it basically didn't know where to connect.
You don't have to do anything to insert a string like 20170101 into a date column. PostgreSQL will handle it for you; it's just the ISO 8601 date format.
CREATE TABLE foo ( x date );
INSERT INTO foo (x) VALUES ( '20170101' );
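Selecting the value back shows it parsed as a date (output approximately as psql prints it, assuming the default ISO datestyle):

SELECT x FROM foo;

     x
------------
 2017-01-01
(1 row)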
This is just a Talend problem, if anything.
[..] (209,2017-01-01 00:00:00.000000 +01:00:00,207,7.7,NULL,NULL,NULL,NULL,NULL)[..]
If Talend doesn't know by itself that a timestamp passed into the query needs to be single-quoted, then, if possible, you need to do it yourself.
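That is, the statement from the error log should end up looking like this sketch, with only the timestamp quoting changed:

INSERT INTO "data" ("location_id","date","avg_winddirection","avg_windspeed","avg_temperature","min_temperature","max_temperature","total_hours_sun","avg_precipitation") VALUES (209,'2017-01-01 00:00:00.000000 +01:00:00',207,7.7,NULL,NULL,NULL,NULL,NULL)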
FIXED: It was a stupidly easy problem/solution of course. The database name and schema name fields were empty, so it basically didn't know where to connect. That's why I got the Batch entry 0 error, and when I went deeper while debugging I found it couldn't find the table, stating the relation didn't exist.
Try it like this:
The data in the input file is: 20170101 (in String format).
Then set up the tMap accordingly; the original answer showed the tMap configuration and the resulting output as screenshots.
I am getting this error running an insert query for a single record:
DB2 SQL Error: SQLCODE=-302, SQLSTATE=22001, SQLERRMC=null,
DRIVER=3.62.56
Exception: org.springframework.dao.DataIntegrityViolationException
I looked this up on IBM's help site, but since there is no parameter index, I am stuck. The SQL state also seems to suggest it is something other than a value being too big.
The format of the query is INSERT INTO [[TABLE_NAME]] VALUES (?,?,?,...) using Spring's JdbcTemplate.update(String sql, Object... params).
This being for work, I cannot post the schema or the query. I am looking for general advice on debugging this issue. I already know that Arrays.toString(Object[]) does not print the parameters in SQL format.
To find the explanation for SQLCODE -302 in the manual you need to search for SQL0302N (the general rule for DB2 SQLCODE values: "SQL" plus the code as four digits, padded if necessary with zeros on the left, plus "N" for "negative", because -302 is a negative number).
If you have the DB2 command line processor installed, you can also use it to look up error codes:
db2 ? sql302
which would produce something like this:
SQL0302N  The value of a host variable in the EXECUTE or OPEN statement
is out of range for its corresponding use.
Explanation:
The value of an input host variable was found to be out of range for
its use in the SELECT, VALUES, or prepared statement.
In other words, one of the bind variables in your INSERT is too large for the target column. You'll need to compare the table column definitions with the actual values you're trying to insert.
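Since the schema and query can't be posted, a generic way to narrow it down is to log every bind value, and for strings their length, just before the update, then compare against the column definitions. A sketch; jdbcTemplate, sql, and params stand for the arguments already passed to JdbcTemplate.update in the question:

import org.springframework.jdbc.core.JdbcTemplate;

public class InsertDebugger {
    // Hypothetical helper: prints each bind value and, for strings, its length,
    // so the value that overflows its column can be spotted.
    static int debugUpdate(JdbcTemplate jdbcTemplate, String sql, Object... params) {
        for (int i = 0; i < params.length; i++) {
            Object p = params[i];
            String len = (p instanceof String) ? String.valueOf(((String) p).length()) : "n/a";
            System.out.printf("param %d: value=%s, length=%s%n", i + 1, p, len);
        }
        return jdbcTemplate.update(sql, params);
    }
}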
In addition to mustaccio's answer, you can also get the info from SQL with SYSPROC.SQLERRM. Example:
values SYSPROC.SQLERRM ('SQL302', '', '', 'en_US', 0)
SQL0302N The value of a host variable in the EXECUTE or OPEN statement
is out of range for its corresponding use.
Explanation:
...