How to use UTC in a Postgres timestamp with a JDBC PreparedStatement parameter? - postgresql

I'm using the timestamp data type on PostgreSQL 9.4, but I hit a very strange problem with to_json.
I am in Shanghai, in the UTC+08:00 time zone.
See below:
conn.createStatement().execute("set time zone 'UTC'");
String sql = "select to_json(?::timestamp) as a, to_json(current_timestamp::timestamp) as b";
PreparedStatement ps = conn.prepareStatement(sql);
Timestamp timestamp = new Timestamp(new Date().getTime());
ps.setTimestamp(1, timestamp);
ResultSet rs = ps.executeQuery();
while (rs.next()) {
    System.out.println("a " + rs.getString("a") + ", b " + rs.getString("b"));
}
output:
a "2015-09-24T16:52:42.529", b "2015-09-24T08:53:25.468191"
This means that when I pass a TIMESTAMP parameter to Postgres via JDBC, the value is still interpreted in the Shanghai time zone, not UTC.
The problem is not caused by the to_json function; I created a table with a single timestamp column and the problem still exists. The code above is the shortest reproducible sample.
How can I make all timestamps work in the UTC time zone?

You need to create Calendar tzCal = Calendar.getInstance(TimeZone.getTimeZone("UTC")); before you create your prepared statement, and pass it as the third argument to setTimestamp so the driver interprets the value in UTC.
UPDATED CODE SNIPPET
conn.createStatement().execute("set time zone 'UTC'");
String sql = "select to_json(?::timestamp) as a, to_json(current_timestamp::timestamp) as b";
Calendar tzCal = Calendar.getInstance(TimeZone.getTimeZone("UTC"));
PreparedStatement ps = conn.prepareStatement(sql);
Timestamp timestamp = new Timestamp(new Date().getTime());
ps.setTimestamp(1, timestamp, tzCal); // the Calendar tells the driver which zone to use
ResultSet rs = ps.executeQuery();
while (rs.next()) {
    System.out.println("a " + rs.getString("a") + ", b " + rs.getString("b"));
}
This way the JDBC call binds the timestamp in UTC.
If you want to run the whole application/JVM in UTC, set -Duser.timezone=UTC flag while starting JVM.
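Aside from the Calendar overload: if you are on a JDBC 4.2 driver (pgJDBC 42.x and later), you can bind a java.time value directly with setObject and skip Calendar altogether. A minimal sketch of building the UTC value to bind; the toUtc helper and the class name are mine, not pgJDBC API:

```java
import java.time.Instant;
import java.time.OffsetDateTime;
import java.time.ZoneOffset;

public class UtcBindSketch {
    // Convert epoch millis to an OffsetDateTime pinned to UTC;
    // with a JDBC 4.2 driver you would then call ps.setObject(1, value).
    static OffsetDateTime toUtc(long epochMillis) {
        return Instant.ofEpochMilli(epochMillis).atOffset(ZoneOffset.UTC);
    }

    public static void main(String[] args) {
        System.out.println(toUtc(0L)); // 1970-01-01T00:00Z
    }
}
```

ps.setObject(1, toUtc(System.currentTimeMillis())); would then bind an absolute instant regardless of the JVM's default zone.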
HTH.

Related

Is there a way to use java LocalDateTime.now() in JPA @Query? [duplicate]

I want to compare a date in the database with the current dateTime in a JPA query:
captureLimitDate < currentDateTime
My requirement is as follows:
database.captureLimitDate : 04/07/2012 19:03:00
currentDateTime : 04/07/2012 20:03:00
My JPA query is this:
"SELECT o FROM Operation o"
    + " WHERE ( o.merchantId = :merchantId ) AND "
    + "(o.captureLimitDate < currentDateTime ) ";
And the Operation class has captureLimitDate as java.util.Date:
@Generated(value = "XA", comments = "0,_8BedAMXZEeGHf_Dj4YaPyg")
private Date captureLimitDate;
I want to compare both the current date and time. Will the above query work?
CURRENT_TIMESTAMP must be used to refer to the current date and time in a JPQL query:
select o from Operation o
where o.merchantId = :merchantId
and o.captureLimitDate < CURRENT_TIMESTAMP
If the "current date and time" is in fact a date coming from user input (and is thus not actually the current date and time), then you do it like you do for the merchantId:
select o from Operation o
where o.merchantId = :merchantId
and o.captureLimitDate < :maxDateTime
And you set the parameter using
query.setParameter("maxDateTime", maxDateTime, TemporalType.TIMESTAMP);
JPA defines special JPQL expressions that are evaluated to the date and time on the database server when the query is executed:
CURRENT_DATE - is evaluated to the current date (a java.sql.Date instance).
CURRENT_TIME - is evaluated to the current time (a java.sql.Time instance).
CURRENT_TIMESTAMP - is evaluated to the current timestamp, i.e. date and time
(a java.sql.Timestamp instance).
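As a toy illustration of the mapping above (the fixed epoch value and variable names are mine, not part of the JPA spec):

```java
import java.sql.Date;
import java.sql.Time;
import java.sql.Timestamp;

public class TemporalTypesDemo {
    public static void main(String[] args) {
        long now = 1_341_428_580_000L; // a fixed instant, for reproducibility
        Date currentDate = new Date(now);         // what CURRENT_DATE yields
        Time currentTime = new Time(now);         // what CURRENT_TIME yields
        Timestamp currentTs = new Timestamp(now); // what CURRENT_TIMESTAMP yields
        // All three wrap the same epoch-millis value:
        System.out.println(currentTs.getTime() == now); // true
    }
}
```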

Apache Calcite: cast integer to datetime

I am using Beam SQL and trying to cast an integer to a datetime field.
Schema resultSchema =
    Schema.builder()
        .addInt64Field("detectedCount")
        .addStringField("sensor")
        .addInt64Field("timestamp")
        .build();
PCollection<Row> sensorRawUnboundedTimestampedSubset =
    sensorRowUnbounded.apply(
        SqlTransform.query(
            "select PCOLLECTION.payload.`value`.`count` detectedCount, \n"
                + "PCOLLECTION.payload.`value`.`id` sensor, \n"
                + "PCOLLECTION.`timestamp` `timestamp` \n"
                + "from PCOLLECTION "))
        .setRowSchema(resultSchema);
For some computation and windowing, I want to convert/cast the timestamp to a DateTime field. Please provide some pointers on converting the timestamp in resultSchema to the DateTime datatype.
There is no out-of-the-box way to do that in Beam (or in Calcite). Short version: Calcite and Beam have no way of knowing how you actually store dates or timestamps in integers. However, assuming you have epoch millis, this should work:
@Test
public void testBlah() throws Exception {
    // input schema, has timestamps as epoch millis
    Schema schema = Schema.builder().addInt64Field("ts").addStringField("st").build();
    DateTime ts1 = new DateTime(2019, 8, 9, 10, 11, 12);
    DateTime ts2 = new DateTime(2019, 8, 9, 10, 11, 12);
    PCollection<Row> input =
        pipeline
            .apply(
                "createRows",
                Create.of(
                    Row.withSchema(schema).addValues(ts1.getMillis(), "two").build(),
                    Row.withSchema(schema).addValues(ts2.getMillis(), "twelve").build()))
            .setRowSchema(schema);
    PCollection<Row> result =
        input.apply(
            SqlTransform.query(
                "SELECT \n"
                    + "(TIMESTAMP '1970-01-01 00:00:00' + ts * INTERVAL '0.001' SECOND) as ts, \n"
                    + "st \n"
                    + "FROM \n"
                    + "PCOLLECTION"));
    // output schema, has timestamps as DateTime
    Schema outSchema = Schema.builder().addDateTimeField("ts").addStringField("st").build();
    PAssert.that(result)
        .containsInAnyOrder(
            Row.withSchema(outSchema).addValues(ts1, "two").build(),
            Row.withSchema(outSchema).addValues(ts2, "twelve").build());
    pipeline.run();
}
Alternatively, you can always do it in Java rather than SQL: apply a custom ParDo to the output of the SqlTransform. In that ParDo, extract the integer timestamp from the Row object, convert it to a DateTime, and emit it, e.g. as part of another row with a different schema.
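The per-element conversion such a ParDo would perform can be sketched in plain Java with java.time (the helper name is mine; in an actual Beam ParDo you would build a Joda DateTime for the DATETIME field type rather than a string):

```java
import java.time.Instant;
import java.time.format.DateTimeFormatter;

public class MillisToDateTime {
    // The core of the ParDo's @ProcessElement:
    // epoch millis (long) -> an ISO-8601 timestamp in UTC.
    static String toIsoUtc(long epochMillis) {
        return DateTimeFormatter.ISO_INSTANT.format(Instant.ofEpochMilli(epochMillis));
    }

    public static void main(String[] args) {
        System.out.println(toIsoUtc(1565345472000L)); // 2019-08-09T10:11:12Z
    }
}
```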

Kafka source connector is not pulling records as expected when records are inserted into the source topic from multiple sources

In one of my use cases I am trying to create a pipeline.
Whenever I send a message from my custom partitioner, I send the timestamp in milliseconds with the LONG data type, because in the schema the timestamp column is defined as long.
Code that I had earlier in the custom partitioner:
Date date = new Date();
long timeMilli = date.getTime();
System.out.println("date = " + date.toString() + " , time in millis = " + timeMilli);
Display result before I sent the record:
date = Tue Mar 26 22:02:04 EDT 2019 , time in millis = 1553652124063
value inserted in timestamp column in table2:
3/27/2019 2:02:04.063000 AM
Since it is taking the UK time zone (I believe), I put in a temporary fix for the time being: subtract 4 hours from the current timestamp so that it matches the USA EST timestamp.
Date date = new Date();
Date adj_date = DateUtils.addHours(date,-4);
long timeMilli = adj_date.getTime();
System.out.println("date = " + date.toString() + " , time in millis = " + timeMilli);
Display result:
date = Tue Mar 26 22:04:43 EDT 2019 , time in millis = 1553637883826
value inserted in timestamp column in table2:
3/26/2019 10:04:43.826000 PM
Please let me know if I am missing anything, as I am not sure why this is happening when I send messages from the custom partitioner.
Under the hood, the JDBC Source Connector uses the following query:
SELECT * FROM someTable
WHERE
    someTimestampColumn < $endTimestampValue
    AND (
        (someTimestampColumn = $beginTimestampValue AND someIncrementalColumn > $lastIncrementedValue)
        OR someTimestampColumn > $beginTimestampValue)
ORDER BY someTimestampColumn, someIncrementalColumn ASC
Summarizing: the query retrieves rows whose timestamp column value is earlier than the current timestamp and later than the last one checked.
The parameters above are:
beginTimestampValue - value of the timestamp column of the last imported record
endTimestampValue - current timestamp according to the database
lastIncrementedValue - value of the incremental column of the last imported record
I think in your case the producer puts records into the table with a higher timestamp than the ones you later insert manually (using the query).
When the JDBC Connector checks for new records to import into Kafka, it skips them (because they don't fulfill the someTimestampColumn < $endTimestampValue condition).
You can also change the log level to DEBUG and see what is going on in the logs.
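For context, this polling behavior corresponds to the connector's timestamp+incrementing mode. A sketch of the relevant source-connector configuration; the table, column, and connection values are placeholders, not taken from the question:

```properties
name=jdbc-source-example
connector.class=io.confluent.connect.jdbc.JdbcSourceConnector
connection.url=jdbc:postgresql://localhost:5432/mydb
table.whitelist=someTable
mode=timestamp+incrementing
timestamp.column.name=someTimestampColumn
incrementing.column.name=someIncrementalColumn
topic.prefix=jdbc-
```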

How to set client specific timezone and date

Pardon me if this seems to be a duplicate question.
I have seen many posts on this topic already. However, after trying many examples, I could not find the solution to my problem.
I tried this code:
SimpleDateFormat sdf = new SimpleDateFormat("EEE MMM dd yyyy", Locale.ENGLISH )
Date newDate = sdf.parse(sdf.format( new Date( dateTimeString ) ) )
However, the second line of code always converts the date to the server-specific date and time zone, which I don't want. I also tried the following:
SimpleDateFormat sdf = new SimpleDateFormat("EEE MMM dd yyyy HH:mm:ss zzz", Locale.ENGLISH )
log.info "+++++++++++++++++hidden date ++++++++ " + params.hiddenGameDateTime.substring(35, 38)
log.info "x = " + sdf.format( new Date ( params.hiddenGameDateTime ))
String tzone = params.hiddenGameDateTime.substring(35, 38)
sdf.setTimeZone( TimeZone.getTimeZone( tzone ) )
log.info "Timezone = " + sdf.getTimeZone().getDisplayName()
Please note that
sdf.format( new Date( dateTimeString ) )
gives me the desired result; however, it gives me the string value of the date, and the actual value to be stored in the database is of the Date type, which can't hold a string value.
The date and time value in my case gets converted to a PST date and time. How can I avoid this? The user-input date with time zone should be stored in the database as-is, with no change in date or time zone.
An observation: the constructor new Date( dateTimeString ) is deprecated. A better replacement would be something like this:
SimpleDateFormat sdfOriginal = new SimpleDateFormat("EEE MMM dd HH:mm:ss zzz yyyy");
Date d = sdfOriginal.parse(dateTimeString);
Furthermore: An expression like sdf.parse(sdf.format(...)) using the same format object does not make much sense.
But most important: your statement "the second line of code always converts the date to the server-specific date and timezone" seems to be based on test output like:
System.out.println(newDate);
This implicitly uses toString(), which is based on the JVM's default time zone, in your case the server time zone. But keep in mind that the internal state of j.u.Date does not reference any time zone. A Date is just a container for a long, namely the milliseconds since 1970-01-01T00:00:00Z (UTC), i.e. a global instant in time.
Additional remark:
If you need the client time zone (in a scenario with multiple users in different time zones) to create user-specific formatted date strings, then you indeed need to store the time zone preference of every user in the database, so you can use this information for output in an expression like:
SimpleDateFormat sdf = new SimpleDateFormat("{pattern}");
sdf.setTimeZone(TimeZone.getTimeZone("{user-preference-time-zone}"));
String userOutput = sdf.format(date);
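If (as the additional remark suggests) you store a global instant and keep each user's zone preference, per-user formatting can be sketched with java.time; the class name and zone IDs below are illustrative:

```java
import java.time.Instant;
import java.time.ZoneId;
import java.time.format.DateTimeFormatter;
import java.util.Locale;

public class UserZoneFormat {
    // Store instants (UTC-based) in the database; format per user zone only for display.
    static String forUser(Instant stored, String userZone) {
        return DateTimeFormatter.ofPattern("EEE MMM dd yyyy HH:mm:ss zzz", Locale.ENGLISH)
                .withZone(ZoneId.of(userZone))
                .format(stored);
    }

    public static void main(String[] args) {
        Instant stored = Instant.parse("2012-07-04T19:03:00Z");
        System.out.println(forUser(stored, "America/Los_Angeles")); // Wed Jul 04 2012 12:03:00 PDT
        System.out.println(forUser(stored, "Asia/Shanghai"));
    }
}
```

The same stored instant renders differently per user, while the database value never changes.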
Note that a Date itself carries no time zone; it is only rendered in the JVM's default zone when formatted. Normalize to a standard time (UTC) when storing in the DB to cope with servers in different time zones.

Date variable in VBA using a SQL query

I have a query in VB6:
"select plate_no from INFO where date_time between #07-10-2012 01:13:17# and #10/10/2012 11:30:25#"
My Access database table has the column with data type Date/Time in General format.
How do I modify the query above to use variables like:
Public t1 As Date
Public t2 As Date
I'm doing it from memory, may have to tweak it to compile:
Public t1 as Date
Public t2 as Date
t1 = #10/07/2012 01:13:17# '7th of October 2012
t2 = #10/10/2012 11:30:25# '10th of October 2012
sql = "select plate_no from INFO where date_time between #" & Format(t1, "yyyy-mm-dd hh:nn:ss") & "# and #" & Format(t2, "yyyy-mm-dd hh:nn:ss") & "#"
(& is the idiomatic string-concatenation operator in VB6, and nn avoids the month/minute ambiguity of mm in time patterns.)