I am trying to cast a column of a timestamp into date format
eventTimestamp
2016.11.02D04:25:01.599000000
Into:
eventTimestamp
2016.11.02
Using update "D"$column from table does not work. I guess I need to parse it out of the string first!
Upper-case letters are used for casting when the input is a string, as you say. The cast you're looking for is as follows:
q)show tab:([]100?0p;100?0t)
q)tab
x x1
------------------------------------------
2001.03.18D08:40:47.804237904 21:10:45.900
2001.10.11D22:11:37.961901872 20:23:25.800
2001.10.06D22:58:22.399235216 19:03:52.074
2002.11.27D20:28:07.114942080 00:29:38.945
2003.12.31D10:15:38.085363056 04:30:47.898
// Cast the timestamp column to date
q)update `date$x from tab
x x1
-----------------------
2001.03.18 21:10:45.900
2001.10.11 20:23:25.800
2001.10.06 19:03:52.074
2002.11.27 00:29:38.945
2003.12.31 04:30:47.898
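Outside of q, the same truncation can be sketched in plain Python (an analogy only; the datetime value below is a stand-in for the kdb+ timestamp from the question):

```python
from datetime import datetime

# Stand-in for the kdb+ timestamp 2016.11.02D04:25:01.599000000
ts = datetime(2016, 11, 2, 4, 25, 1, 599000)

# Dropping the time component is the analogue of `date$ in q
d = ts.date()
print(d)  # 2016-11-02
```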
I want to search through a column of dates in the format YYYY-MM-DD (column G, in a random order) and sum up all corresponding cost values for all dates in the same month.
So, for example, the total cost for December 2019 would be 200.
My current formula is:
=SUMPRODUCT((MONTH(G2:G6)=12)*(YEAR(G2:G6)=2019)*(H2:H6))
This gives me the total cost for that month correctly, but I cannot work out how to do this without hardcoding the year and month!
How would I do this with a formula (given the two date columns are a different format)?
You can do this easily by combining SUMIFS with EDATE:
SUMIFS function
EDATE function
The formula I've used in cell B2 is:
=SUMIFS($F$2:$F$6;$E$2:$E$6;">="&A2;$E$2:$E$6;"<="&(EDATE(A2;1)-1))
For this formula to work, column A must contain the first day of each month. The value in cell A2 is 01/11/2019, but a format of mmmm yyyy is applied so it displays that way (and a chart will do the same).
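The logic of that SUMIFS can be sketched in plain Python, using hypothetical rows standing in for columns E (dates) and F (costs); note that December 2019 sums to 200, as in the question:

```python
from datetime import date

# Hypothetical sample data mirroring the date and cost columns
rows = [
    (date(2019, 11, 5), 50),
    (date(2019, 12, 1), 120),
    (date(2019, 12, 28), 80),
    (date(2020, 1, 3), 30),
]

def month_total(rows, first_of_month):
    """Sum costs whose date falls in the month starting at first_of_month.
    Checking year and month directly is equivalent to the
    >= first-of-month / <= EDATE(first,1)-1 range in the SUMIFS."""
    return sum(cost for d, cost in rows
               if (d.year, d.month) == (first_of_month.year, first_of_month.month))

print(month_total(rows, date(2019, 12, 1)))  # 200
```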
Paste in cell D2:
=ARRAYFORMULA(QUERY({EOMONTH(G2:G, -1)+1, H2:H},
"select Col1,sum(Col2)
where Col1 is not null
and not Col1 = date '1900-01-01'
group by Col1
label sum(Col2)''
format Col1 'mmm yyyy'", 0))
When trying to add days to a date in another column the value returns as a number, not a date.
I have tried setting the column format to date for both columns B and C. I also tried using the DATEVALUE() function, but I don't think I used it properly.
=ARRAYFORMULA(IF(ROW(B:B)=1,"Second Notification",IF(LEN(B:B), B:B+1,)))
I want the value in column C to return as a date.
Use this with the TEXT formula:
={"Second Notification";
ARRAYFORMULA(IF(LEN(B2:B), TEXT(B2:B+1, "MM/dd/yyyy hh:mm:ss"), ))}
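The same add-a-day-and-format step can be sketched in plain Python (the starting value is hypothetical, standing in for a cell in column B; the strftime pattern approximates the Sheets TEXT mask "MM/dd/yyyy hh:mm:ss"):

```python
from datetime import datetime, timedelta

first = datetime(2023, 5, 1, 9, 30, 0)   # hypothetical value from column B
second = first + timedelta(days=1)       # B + 1 day, as in the formula

# Format the result as text, analogous to TEXT(B2+1, "MM/dd/yyyy hh:mm:ss")
print(second.strftime("%m/%d/%Y %H:%M:%S"))  # 05/02/2023 09:30:00
```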
I'm trying to select records from a DB2 Iseries system where the date field is greater than the first of this year.
However, the date fields I'm selecting from are actually PACKED fields, not true dates.
I'm trying to convert them to YYYY-MM-DD format and get everything greater than '2018-01-01' but no matter what I try it says it's invalid.
Currently trying this:
SELECT *
FROM table1
WHERE val = 145
AND to_date(char(dateShp), 'YYYY-MM-DD') >= '2018-01-01';
it says expression not valid using format string specified.
Any ideas?
char(dateShp) is going to return a string like '20180319', so your format string should not include the dashes: 'YYYYMMDD'.
example:
select to_date(char(20180101), 'YYYYMMDD')
from sysibm.sysdummy1;
So your code should be
SELECT *
FROM table1
WHERE val = 145
AND to_date(char(dateShp), 'YYYYMMDD') >= '2018-01-01';
Charles gave you a solution that converts the Packed date to a date field, and if you are comparing to another date field, this is a good solution. But if you are comparing to a constant value or another numeric field, you could just use something like this:
select *
from table1
where val = 145
and dateShp >= 20180101;
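As a sketch of why both approaches agree (plain Python, with a hypothetical packed value standing in for dateShp):

```python
from datetime import datetime

packed = 20180319  # hypothetical packed shipment date, YYYYMMDD as a number

# Equivalent of TO_DATE(CHAR(dateShp), 'YYYYMMDD'): parse the digits as a date
d = datetime.strptime(str(packed), "%Y%m%d").date()
print(d)  # 2018-03-19

# Numeric comparison also works, because YYYYMMDD numbers sort
# in the same order as the dates they encode
print(packed >= 20180101)  # True
```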
I have a dataframe with two columns (C, D) defined as string type, but the data in the columns are actually dates. For example, column C has the date as "01-APR-2015" and column D as "20150401". I want to change these to date column type, but I haven't found a good way of doing that. I looked at the Stack Overflow question about converting a string column to Date type in Spark SQL's DataFrame, but the date format there can't be "01-APR-2015", and I looked at this post but it didn't have info related to dates.
Spark >= 2.2
You can use to_date:
import org.apache.spark.sql.functions.{to_date, to_timestamp}
df.select(to_date($"ts", "dd-MMM-yyyy").alias("date"))
or to_timestamp:
df.select(to_timestamp($"ts", "dd-MMM-yyyy").alias("timestamp"))
without an intermediate unix_timestamp call.
Spark < 2.2
Since Spark 1.5 you can use the unix_timestamp function to parse the string to a long, cast it to timestamp, and truncate with to_date:
import org.apache.spark.sql.functions.{unix_timestamp, to_date}
val df = Seq((1L, "01-APR-2015")).toDF("id", "ts")
df.select(to_date(unix_timestamp(
$"ts", "dd-MMM-yyyy"
).cast("timestamp")).alias("timestamp"))
Note:
Depending on the Spark version, this may require some adjustments due to SPARK-11724:
Casting from integer types to timestamp treats the source int as being in millis. Casting from timestamp to integer types creates the result in seconds.
If you use an unpatched version, the unix_timestamp output requires multiplication by 1000.
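The format strings in these examples follow Java's SimpleDateFormat. As a sanity check outside Spark, the same two formats from the question can be parsed with plain Python's strptime (a sketch, not Spark code; "dd-MMM-yyyy" maps to "%d-%b-%Y" and "yyyyMMdd" to "%Y%m%d"):

```python
from datetime import datetime

# Column C style: "dd-MMM-yyyy" (strptime month names are case-insensitive)
d = datetime.strptime("01-APR-2015", "%d-%b-%Y").date()
print(d)   # 2015-04-01

# Column D style: compact "yyyyMMdd"
d2 = datetime.strptime("20150401", "%Y%m%d").date()
print(d2)  # 2015-04-01
```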
Say I have a column that contains unix timestamps - an int representing the number of seconds since the epoch. They look like this: 1347085827. How do I format this as a human-readable date string in my SELECT query?
PostgreSQL has a handy built-in function for this: to_timestamp(). Just wrap that function around the column you want:
SELECT a, b, to_timestamp(date_int) FROM t_tablename;
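The same conversion can be checked client-side in Python (a sketch; to_timestamp() in PostgreSQL returns a timestamp with time zone, which is rendered here in UTC):

```python
from datetime import datetime, timezone

# Convert the epoch seconds from the question into a readable UTC timestamp,
# mirroring what to_timestamp(1347085827) produces server-side
ts = datetime.fromtimestamp(1347085827, tz=timezone.utc)
print(ts.isoformat())  # 2012-09-08T06:30:27+00:00
```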