SAP HANA: Decimal to timestamp or seconddate via SLT

I am using SLT to load tables into our HANA DB. SLT uses the ABAP dictionary and sends timestamps as DECIMAL (15,0) to the HANA DB. Once the data is in HANA, I am trying to convert the decimals to timestamps or seconddates via a calculated column in a calculation view. The table looks like this:
I run a small SLT transformation to populate columns 27-30. The ABAP layer in SLT populates the columns based on the database transactions.
The problem comes when I try to convert columns 28-30 to timestamps or seconddates, using syntax like this:
SELECT to_timestamp(DELETE_TIME)
FROM SLT_REP.AUSP

SELECT to_seconddate(DELETE_TIME)
FROM SLT_REP.AUSP
I get the following errors:
The confusing part is that it sometimes works as well:
The syntax in the calculated column looks like this:
The error from the calculation view is:
Has anyone found a good way to convert ABAP timestamps (Decimal (15,0)) to Timestamp or Seconddate in HANA?

There are conversion functions available that you can use here (unfortunately they are not very well documented).
select tstmp_to_seconddate(TO_DECIMAL(20110518082403, 15, 0)) from dummy;
TSTMP_TO_SECONDDATE(TO_DECIMAL(20110518082403,15,0))
2011-05-18 08:24:03.0
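Applied to the table from the question (assuming DELETE_TIME really arrives as the usual 14-digit ABAP timestamp in a DECIMAL (15,0) column), the call would look roughly like this:
SELECT tstmp_to_seconddate(DELETE_TIME)
FROM SLT_REP.AUSP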

The problem was with the ABAP data type. I was declaring the target variable as DEC(15,0). The ABAP extracting the data was, in some instances, rounding the timestamp up to the 60th second. Once in the target HANA system, to_timestamp(target_field) would come back invalid when a time looked like "20150101121060", with the last two digits being the 60th second. This is invalid and would fail. The base HANA layer did not care, as it was merely putting a length-14 value into a field. I changed the source variable to DEC(21,0). This eliminated the ABAP rounding and fixed my problem.
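If you want to spot rows hit by that rounding before converting, a quick check along these lines can help (just a sketch, assuming DELETE_TIME holds the 14-digit YYYYMMDDHHMMSS value described above):
SELECT *
FROM SLT_REP.AUSP
WHERE MOD(DELETE_TIME, 100) >= 60  -- seconds portion was rounded up to 60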

Related

Inaccurate COUNT DISTINCT Aggregation with Date dimension in Google Data Studio

When I aggregate values in Google Data Studio with a date dimension on a PostgreSQL Connector, I see buggy behaviour. The symptom is that performing COUNT(DISTINCT) returns the same value as COUNT():
My theory is that it has something to do with the aggregation on the data occurring after the count has already happened. If I attempt the exact same aggregation on the same data in an exported CSV instead of directly from a PostgreSQL Connector Data Source, the issue does not reproduce:
My PostgreSQL Connector is connecting to Amazon Redshift (jdbc:postgresql://*******.eu-west-1.redshift.amazonaws.com) with the following custom query:
SELECT
userid,
submissionid,
date
FROM mytable
Workaround
If I stop using the default date field for the Date Dimension and aggregate my own dates directly within the SQL query (date_byweek), the COUNT(DISTINCT) aggregation works as expected:
SELECT
userid,
submissionid,
to_char(date,'YYYY-IW') as date_byweek
FROM mytable
While this workaround solves my immediate problem, it sucks because I miss out on all the date functionality provided by Data Studio (Hierarchy Drill Down, Date Range filtering, etc.), not to mention that it reduces my confidence about what else may be "buggy" within the product 😞
How to Reproduce
If you'd like to re-create the issue, using the following data as a PostgreSQL Data Source should suffice:
> SELECT * FROM mytable
userid   submissionid
------   ------------
1        1
2        2
1        3
1        4
3        5
> COUNT(DISTINCT userid) -- ERROR: Returns 5 when data source is PostgreSQL
> COUNT(DISTINCT userid) -- EXPECTED: Returns 3 when data source is CSV (exported from same PostgreSQL query above)
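If it helps, here is a minimal sketch of creating that sample table in PostgreSQL (the table and column names come from the question; the date values are invented just to make it runnable):
CREATE TABLE mytable (userid int, submissionid int, date date);
INSERT INTO mytable (userid, submissionid, date) VALUES
  (1, 1, '2020-01-06'),
  (2, 2, '2020-01-07'),
  (1, 3, '2020-01-08'),
  (1, 4, '2020-01-13'),
  (3, 5, '2020-01-14');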
I'm happy to report that as of Sep 17 2020, there's a workaround.
Data Studio added the DATETIME_TRUNC function (see https://support.google.com/datastudio/answer/9729685?), which allows you to add a custom field that truncates the original date to whatever granularity you want, without triggering the distinct bug.
Attempting to set the display granularity in the report still causes the bug (i.e., you'll still see Oct 1 2020 12:00:00 instead of Oct 2020).
This can be solved by creating a SECOND custom field, which just returns the first; you can then add IT to the report, change the display granularity, and everything will work OK.
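A minimal sketch of the two calculated fields, assuming the source field is called date and monthly granularity is wanted (the field names here are made up):
date_month (first custom field): DATETIME_TRUNC(date, MONTH)
date_month_display (second custom field): date_month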
I have the same issue with the MySQL Connector. But my problem was solved when I changed the date field format in the DB from DATETIME (YYYY-MM-DD HH:MM:SS) to INT (Unix timestamp). After connecting this table to Google Data Studio I set the type for this field to Date (YYYYMMDD) and everything works as expected. Hope this may help you :)
In this Google forum there is a curious solution by Damien Choizit that involves combining your data source with itself. It works well for me.
https://support.google.com/datastudio/thread/13600719?hl=en&msgid=39060607
It says:
I figured out a solution in my case: I used a Data Blend joining the same data source with itself twice on the corresponding join key(s), then I specified a date range dimension only on the left side and selected the columns I wanted to COUNT DISTINCT aggregate as "dimensions" (and not metrics!) on the right side.

UPDATE SQL Command not saving the results

I looked through the forum but I couldn't find an issue like mine.
Essentially I have a table called [p005_MMAT].[dbo].[Storage_Max]. It has three columns: Date, HistValue and Tag_ID. I want all the values in the 'HistValue' column to have 2 decimal places. For example, if a number is 1.1, I want it to be 1.10, and if it is 1, I want it to look like 1.00.
Here is the SQL update statement I am using:
update [p005_MMAT].[dbo].[Storage_Max]
set [HistValue] = cast([HistValue] as decimal (10,2))
where [Tag_ID] = 94
After executing the query it says 3339 rows affected, but when I perform a simple select statement it appears the column was not affected at all. I have used that cast function in a select statement and it does add the two decimal places.
Please advise.
The problem is the data type of the column in SQL Server. float or real will not keep the trailing zeros. You either have to change the data type of the column, or just deal with it and handle the formatting in your queries or application.
You could run something like the following:
select
cast([HistValue] as decimal (10,2))
from [p005_MMAT].[dbo].[Storage_Max]
where [Tag_ID] = 94
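If changing the column's data type is an option, a minimal sketch would be the following (assuming nothing else depends on the current type and the existing values fit into decimal(10,2)):
ALTER TABLE [p005_MMAT].[dbo].[Storage_Max]
ALTER COLUMN [HistValue] decimal(10,2)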

dbWriteTable date format

Hello,
I'm importing tables from MS Access into R via
channel <- odbcDriverConnect(...)
foo_import <- sqlFetch(channel, "foo")
Now I'm using RSQLite for SQL queries and other stuff:
dbWriteTable(conn, "foo_table", foo_import)
R_Table <- dbReadTable(conn, "foo_table")
The problem is that at the sqlFetch step the columns, including the dates, are still correct, tested with class(foo_import$Datetime), which gives POSIXct POSIXt.
After using dbWriteTable and testing the format with class(R_Table$Datetime),
it gives numeric.
The dates are just numbers.
With as.POSIXct(R_Table$Datetime, origin = "1970-01-01")
I can transform the column manually. Are there other options to change the format automatically to a date format?

Power BI doesn't honour an SSAS OLAP cube date field

We have an SSAS OLAP cube, in production for a long time and thoroughly tested by dozens of users, with a normal Time dimension (two hierarchies, Months and Weeks, but this is irrelevant).
The Time dimension key is a date field. In the data view it is defined as DataType: System.DateTime. On the dimension it is Calendar -> Date, Usage: Key.
Using this date field on an Excel table accessing the OLAP cube is fully operational: it is a date, and "Date filter" options are available as expected.
But trying to use this field in Power BI defeated all our efforts! There is no way to have Power BI interpret the field as a date, so no date filters are available. Power BI thinks it's a text field and nothing we can think of makes it change its behaviour.
We tested Power BI with an external Excel file, adding the SSAS OLAP dimension so we could modify the field format, and oh surprise! the field is interpreted as text. Changing the format to date makes it work.
But there is no way to change the format for an SSAS OLAP cube when it is accessed directly as the primary (and only) source of data for the Power BI report.
Any idea how to define an SSAS OLAP date field so PowerBI understands it is a date?
Found it! One minute before going mad forever I spotted the difference.
I was able to create two dimensions on the same cube, with the same field. One works, the other doesn't.
Define the date field as Order by Key and Power BI treats it as a date.
Define the date field as Order by Name and Power BI treats it as text.
I came across the same situation. I don't know the OP's calendar dimension properties, but his solution did not help me.
In the dimension design I used the ValueColumn property.
And in Power BI I saw a new field that worked pretty nicely with the Slicer visual:
And in Excel everything remained the same:

SSIS Data Flow overflowing on dates

I have a FoxPro data source, and the destination is SQL Server 2008.
On the FoxPro side, I have a column with the Date data type. That's a width of 8, min value is 0001-01-01 and max is 9999-12-31. On the SQL Server side, I have a datetime. Also a width of 8, min value is 1753-01-01 and max is 9999-12-31.
In my SSIS Data Flow task, I have an OLE DB Source component that reads in the FoxPro table. The columns are mapped as DT_DBDATE in both the External and Output Columns. The OLE DB Destination to the SQL Server table takes that column and flows it into a DT_DBTIMESTAMP. I'm sure that DT_DBTIMESTAMP can handle these date ranges adequately, and reasonably sure about DT_DBDATE (but am having trouble finding it in MS documentation).
The Problem
When I execute the task, I've been having trouble with it failing and complaining of "Invalid date format" and then "Conversion failed because the data value overflowed the specified type" when it encounters values from around the early 1900s to values over 2050. I'm not exactly sure where I've gone wrong.
Bonus Question
How can I handle overflows in my data flow task?
You might want to do a simple VFP query looking for the records that are beyond the range, such as:
select * from yourtable
where YourDate < date( 1753, 1, 1 )
or YourDate > date( 9999, 12, 31 )
Look at those records for problems...
Additionally, look at other columns that may be causing your boundary issues.
I think you need to decide what you want to do with your data.
If you want to load it as it is, you need to change the destination field's data type so it will be able to hold the data.
Or, if you wish to correct the wrong data, you would need to validate and transform it.
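For the first option, a minimal sketch would be widening the SQL Server destination column (the table and column names here are placeholders) so it can hold the full FoxPro date range:
ALTER TABLE dbo.YourDestinationTable
ALTER COLUMN YourDate datetime2  -- datetime2 covers 0001-01-01 through 9999-12-31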
If you are willing to spend some money, please consider using Advanced ETL Processor.
It works with Text, XML, Excel, Access, DBF, Foxpro, ODBC, OLE DB, MS Sql Server, Oracle, MySql, PostgreSQL, Firebird, Interbase, SQLite, POP3, SMTP, File System, FTP, SSL and Unicode.