Oracle APEX: sorting data on a chart with a custom sort in the underlying query - oracle-apex-20.1

Oracle APEX charts sort data only in numeric order, even when we use an ORDER BY in the underlying query.
For example, I want to sort the data on the x-axis by the time portion, like 7 am, 8 am, ..., 00 am, etc.
How can I apply a custom sort so the chart skips its default sorting behaviour?

It's not working because Oracle is treating the data as strings, sorted A-Z or Z-A.
What if you try something like this, for example:
order by to_char(to_date(your_field, 'Mon-RR'), 'YYMM')
Use the format mask that you need, obviously.
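For the time-of-day labels in the question, a minimal sketch of the underlying chart query (assuming a hypothetical table CHART_SOURCE with a VARCHAR2 column TIME_LABEL holding values such as '7 am') might be:

select time_label as label,
       count(*)   as value
  from chart_source
 group by time_label
 -- sort by the real time of day instead of the string value
 order by to_date(time_label, 'HH AM')

Adjust the format mask ('HH AM' here) to match how your labels are actually stored.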

Related

Sorting Data by Latest Date, selecting top 10 and charting (v. 10.0.1)

This is the data table that I have created
I need to sort the data by 'April2017' in descending order and then select the top 10 projects.
When we select the top 10 based on April 2017, the output should be
Instead, what I get is
Here is what I've tried so far:
Created a calculated field
Calculation1 = iif([Year_Month]=MAKEDATE(2017,4,1),[Claim Count],0)
Sorted Projects based on 'Calculation1'
Dragged Project to Filter and selected Top 10 based on sum([Calculation1])
I am unable to understand how the top 10 here is being derived.
Where am I going wrong?
The chart that I am trying to get should be similar to
Please help me with this problem.
You can filter to a selected portion of data in a calculation and use as desired. So create a calculated field called, say April_2017_Foobars, defined as:
if datetrunc('month', [Year_Month]) = #04/01/2017# then [Foobars] end
This field returns [Foobars] for the April 2017 rows and null for other rows. Nulls are ignored by aggregate functions, so if you aggregate with SUM() or AVG(), etc., the effect is to filter to April 2017 for that field alone.
Then you can use April_2017_Foobars for sorting and defining top filters for your Project field. This is a very general technique that is useful in all kinds of situations.
You can generalize it a bit to use a parameter for the special month rather than hard code it - or use an LOD calc to find the last month in your dataset if you always intend to use the latest month.
P.S. You can use the makedate() function instead of a date literal if you prefer and your data source supports that function. That might avoid confusion about date literal formats differing between countries.
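For example, the same calculation written with makedate() (a small sketch, assuming your data source supports that function) would be:
if datetrunc('month', [Year_Month]) = makedate(2017, 4, 1) then [Foobars] end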
Create a Calculation1 field: iif([Year_Month]="April 2017",[Number],0)
Sort Project in descending order on Calculation1 Sum
Drag Project to Filters, and do Top > By Field > Top: 10 by Calculation1 Sum

SAP HANA Decimal to timestamp or seconddate SLT

I am using SLT to load tables into our HANA DB. SLT uses the ABAP dictionary and sends timestamps as DECIMAL (15,0) to the HANA DB. Once the data is in the HANA DB, I am trying to convert the decimals to timestamps or seconddates via a calculated column in a calculation view. The table looks like this:
I run a small SLT transformation to populate columns 27-30. The ABAP layer in SLT populates the columns based on the database transactions.
The problem comes when I try to convert columns 28-30 to timestamps or seconddates, using syntax like this:
Select to_timestamp(DELETE_TIME)
FROM SLT_REP.AUSP
Select to_seconddate(DELETE_TIME)
FROM SLT_REP.AUSP
I get the following errors:
The problem is, it sometimes works as well:
The syntax in the calculated column looks like this:
The error from the calculation view is:
Has anyone found a good way to convert ABAP timestamps (Decimal (15,0)) to Timestamp or Seconddate in HANA?
There are conversion functions available that you can use here (unfortunately they are not very well documented).
select tstmp_to_seconddate(TO_DECIMAL(20110518082403, 15, 0)) from dummy;
TSTMP_TO_SECONDDATE(TO_DECIMAL(20110518082403,15,0))
2011-05-18 08:24:03.0
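Applied to the table from the question, the conversion would presumably look something like this (a sketch, assuming DELETE_TIME is one of the DECIMAL (15,0) columns):
select tstmp_to_seconddate(DELETE_TIME) from SLT_REP.AUSP;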
The problem was with the ABAP data type. I was declaring the target variable as DEC(15,0). The ABAP extracting the data was rounding the timestamp up, in some instances to the 60th second. Once in the target HANA system, to_timestamp(target_field) would come back invalid when a time looked like "20150101121060", with the last two digits being the 60th second; this is invalid and would fail. The base HANA layer did not care, as it was merely putting a length-14 value into a field. I changed the source variable to DEC(21,0). This eliminated the ABAP rounding and fixed my problem.

How to display 40+ columns in Tableau?

I am trying to build a list report with about 40 columns (dimensions + measures) but am not able to get it right;
the requirement pushes past Tableau's limitation of 16 column headers.
How can I get this done?
I read this
Here is my Tableau workbook with 16+ columns but no column header
Go to Analysis --> Table Layout --> Advanced and change the numbers in Rows and Columns as per your need.
You can't set this higher than 16 in the UI, but increase it to 16 (so you can identify the setting later).
Then save the Tableau file with the .TWB extension and open it in Notepad.
Then search for the text: attr='row-levels'.
You will find something like:
<format attr='row-levels' value='16' />
<format attr='row-horiz-levels' value='16' />
Change the value from 16 to the desired number of columns. Save the file in Notepad, then open it in Tableau.
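For the 40 columns asked about in the question, the edited lines would presumably look like this (an untested sketch of the same unsupported XML tweak that is also described in an answer below):
<format attr='row-levels' value='40' />
<format attr='row-horiz-levels' value='40' />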
The Measure Names and Measure Values special fields can help here and cover most use cases. (Using the Measure Names and Measure Values fields is likely a better choice than creating 40+ Marks cards as you did in your posted example.)
Put Measure Names on the Columns and Filters shelves and Measure Values on the Text shelf. Then add the measure fields you want to the Measure Values shelf. Then put the dimensions you wish on the Rows shelf.
A single field+aggregation can only be on the Measure Values shelf once, but a field can repeat with different aggregations -- so you can show the min, avg and max of a measure in 3 different columns.
As you mentioned, you can increase the maximum column and row headers up to 16 each via the Analysis -> Table Layout -> Advanced menu and panel. Beyond that point, adjacent columns still display; they are just coalesced for display.
You can still have an apparently arbitrary number of fields on the Measure Values shelf, so you can display as many columns of measures (data) as you wish, even though adjacent header columns for dimensions (~categories) get coalesced for display once you hit the header limit.
Tableau is optimized for summarizing data for efficient interpretation by humans, so displaying extremely wide tables of data is not the best fit for the tool (or for a human reader, frankly). Importing and exporting large tables is certainly possible, though.
At the 2015 conference I went to a session called "Use Tableau Like a Sith", and they showed us how to change the XML to work around the 16 limit. The caveat is that this is not supported.
Find the entries in the attached image and change their value to 40. In the screenshot, the Sith presenters were changing them to 36.
Here is a workaround for some data sets:
convert your fields from Dimension to Measure, and then
display using Measure Names / Measure Values, as @Alex Blakemore suggested.
For example, Boolean fields can be converted to numeric using INT().
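A minimal sketch of that conversion, assuming a hypothetical Boolean field [Is_Active], is a calculated field defined as:
INT([Is_Active])   // 1 for True, 0 for False, so it can be used on the Measure Values shelf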
PROS:
It is easier to change which fields to plot using Measure Names / Measure Values.
Faster performance, at least for some data sets.
CONS:
Often data sets have some fields that cannot or should not be converted to measures.
Not as easy or straightforward as changing the Analysis > Table Layout > Advanced settings, or the XML-editing workaround suggested by @Cyndi1976.
There are two ways:
Edit the saved .twb file: open the workbook with Notepad and edit the XML code below.
<format attr='row-levels' value='16' />
<format attr='row-horiz-levels' value='16' />
Create 3 different worksheets, each containing a subset of the columns (so no single worksheet exceeds 16), and place them in a single dashboard. That way you get one view with 40 columns.
A good way to do this is to create groups and filters. I'm sure that, out of 40+ columns, a good number can be converted to one of the above, giving a neater look to your dashboard and making it easier to comprehend your data.
Let us assume you're creating a dashboard to show the overall split of mobile recharges for a company X.
One option is to have a separate column for each of:
the mobile OS
OS version
service provider
recharge rank
Sub-category (Prepaid / Postpaid)
...
An easier and more elegant way to reduce the number of columns is to populate a dropdown list with these values. Not only will this make the dashboard easier to comprehend, it will reduce the number of columns one has to consult to interpret the data, and it will also ease the technical limits on the number of columns.
To create a group in Tableau:
Include the fields in the result set, i.e. use the column[s] in the select statement:
select os, os_version, service_provider, rank, subcategory ... from schema.recharge_table [where...];
In the sheet view of Tableau, right-click on the field to create a group. Let's create a split on subcategory.
Group the sub-categories and give them proper aliases so they are recognised easily.
Drag the group to Filters and you've successfully and elegantly reduced one column.
16 is the maximum limit for row/column labels in a Tableau table.
Put 20 columns on one sheet and 20 on another sheet. Drag and drop both sheets onto your dashboard, and you should have 40 columns.

Best way to query 4 B+ records in Tableau

I am looking for the best way to analyse 4B records (1TB of data) stored in Vertica using Tableau. I tried using an extract of 1M records, which works perfectly, but I don't know how to manage 4B records, because querying 4B records takes too long.
I have following dataset :
timestamp id url domain keyword nor_word cat_1 cat_2 cat_3
So here I need to create a descending list of the top 10 values of id, url, domain, keyword, nor_word, cat_1, cat_2 and cat_3, based on the count of each field value, each in a separate worksheet, and then combine all the worksheets in one dashboard.
There is no primary key. This dataset covers 1 month, so I want to create global start-date and end-date filters to reduce the query size. But I don't know how to create a global date filter and display it on the dashboard.
You have two questions, one about Vertica and one about Tableau. You should split these up.
Regarding Vertica, you need to know that Vertica stores data in ascending sort order in physical storage. This means that an additional step will always be required any time you want a descending sort order.
I would suggest creating a partition on the date and then running Database Designer (DBD) in incremental mode, using your queries as samples. By partitioning the data, Vertica can eliminate partitions during optimization.
Running the DBD will generate some better-optimized projections. You should consider the trade-off between how often you will need this data and whether it's worth creating these additional projections, as they will impact your load performance.
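A rough sketch of the partitioning step (assuming a hypothetical table my_schema.events whose date column is the timestamp field from the dataset; exact syntax varies by Vertica version) might be:

-- partition by month so date-range filters can prune whole partitions
ALTER TABLE my_schema.events
  PARTITION BY EXTRACT(YEAR FROM "timestamp") * 100 + EXTRACT(MONTH FROM "timestamp")
  REORGANIZE;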

Working with a delimited list of items in a Tableau field

I am preparing a data visualization in Tableau.
I have some data that can be simplified like this:
Name, Score, Tag
Joe, 5, A;B
Phil, 7, D
Quinn, 9, A;C
Bill, 3, A;B;C
I would like to generate a word cloud on the Tag field that counts
occurrences of each item A, B, C. So I need to generate this:
A,3
B,2
C,2
D,1
In other words, I need help working with a field that contains a list of delimited values.
In the example data ; is the delimiter, but it could be anything.
I would like the word cloud to update as the user
applies filters, e.g. dragging a slider to set score > 5.
So the tag count has to be done on the fly.
I'm pretty sure I'll need to use calculated fields and table calculations...?
Possibly I'll need to have a separate table tracking the tags...?
I have no problem building the word cloud and other viz elements.
What I'm looking for help with is parsing the delimited list field and
calculating the tag counts.
I do have full control over the source data, so if there is an easier way to
do this by reorganizing the schema, I'd be glad to do that. I thought of breaking
the field up into separate tag1, tag2, tagX fields and trying to count over the
separate fields... but I'm not sure if this is any simpler.
Thanks for any tips.
Another (probably better in your case) approach is to reshape the data before feeding it to Tableau. Tableau works best with normalized data.
Preprocess it to look like:
Name, Score, Tag
Joe, 5, A
Joe, 5, B
Phil, 7, D
Quinn, 9, A
Quinn, 9, C
Bill, 3, A
Bill, 3, B
Bill, 3, C
At that point, the standard Tableau word cloud charts should work well, and it will scale easily as you add more tags and data.
Reshaping data to normalize it prior to analysis with Tableau is a pretty standard step. Sometimes you can do it automatically, say with custom SQL, but often you'll have to use some sort of script first. If your data comes from Excel, Tableau has a plug-in that can help with reshaping data. Look for it on the Tableau knowledge base.
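If the data lives in a database, a rough custom-SQL sketch of that reshape (assuming a hypothetical source table raw_scores, a one-column lookup table tag_list holding the distinct tags as the question suggests, and a dialect where || concatenates strings) could be:

-- produce one row per (Name, Tag) pair from the delimited Tag column
select s.Name, s.Score, t.Tag
  from raw_scores s
  join tag_list  t
    on ';' || s.Tag || ';' like '%;' || t.Tag || ';%';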
Here's an approach that would be tolerable if you had a fixed set of 3 or 4 tags. Since you have closer to 50K possible tags, it's not a feasible approach for your problem as is, but maybe it will give you an idea. Similar approaches can be used to solve different kinds of problems in Tableau, so it's a useful trick to know.
For each tag, create a boolean calculated field that returns 1 if the current row contains that particular tag and null otherwise (or whatever condition you want to test for).
For example, define a calculated field called Tag_A defined as:
if contains(Tag, "A") then
1
end
Similarly, define calculated fields Tag_B, Tag_C, etc.
So far it's easy.
Then you can use those fields in other calculations to count the number of records that contain tag A, filter to only those that contain A, or use the calculated field on the Condition tab when defining sets that are computed dynamically by a formula... Of course, the low-level calculated field can be more complex, say checking for the presence of at least 2 tags out of a list, for example.
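As a sketch (assuming the Tag_A, Tag_B and Tag_C calculated fields above, which return 1 or null), the count and the "at least 2 tags" condition could look like:

SUM([Tag_A])   // number of records containing tag A
IFNULL([Tag_A], 0) + IFNULL([Tag_B], 0) + IFNULL([Tag_C], 0) >= 2   // row has at least 2 of the tags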
If nothing else, this approach sometimes lets you break complex problems into bite sized pieces.
Unfortunately, hard-coding calculated field names won't scale to 50K tags. For that, you probably want to reshape your data.