Display group with no data in Crystal Reports 12 - crystal-reports

I am trying to group my data based on age. I use the following database select:
select * from (
select 0 range_start, 11 range_end, '0-10 days' date_description from dual union
select 11, 21, '11-20 days' from dual union
select 21, 31, '21-30 days' from dual union
select 31, 99999, '31+ days' from dual) date_helper
left outer join table
on table.date <= date_helper.range_start*-1 + sysdate
and table.date > date_helper.range_end*-1 + sysdate
I then make a group based on the date_description column. I am trying to make it display all groups, even when no records fall within a group.
If a group has no records, I want it to show a value of 0 and still print the group.

(+1 for completeness of your question. Welcome to SO!)
If there are no records for a group, then obviously Crystal can't report it. I recommend creating a "helper" table in your datasource. Here is what I would do using some form of SQL:
Make a 'helper' table. It will have 1 column and will contain all the groups you want displayed. If the names of the groups are dynamic, you may want to use a select query or make-table query.
Right join from your helper table to your data-table. Send the combined data to Crystal.
In Crystal, use the helper table's column in your groupings and agebucket calculations.
Also, in your calculation, you should add a line: Else "No age";

Expanding on a comment on PowerUser's answer, if you're using a version of Crystal that allows you to enter your own SQL (instead of having to use Crystal's Database Expert), you can set up a subquery that acts as a helper table - something like:
select * from (
select 0 range_start, 11 range_end, '0-10 days' date_description from dual union
select 11, 21, '11-20 days' from dual union
select 21, 31, '21-30 days' from dual union
select 31, 99999, '31+ days' from dual) date_helper
left outer join
(select sysdate-5 mydate from dual union all
select sysdate - 25 from dual) mytable
on mytable.mydate <= date_helper.range_start*-1 + sysdate
and mytable.mydate > date_helper.range_end*-1 + sysdate
(Oracle syntax - the precise syntax of the query will vary depending on which dialect of SQL you are using.)
EDIT: Changed from SQLServer to Oracle syntax.
FURTHER EDIT: Added some simple sample data.
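If you want to see the pattern in action outside Oracle, here is a minimal sketch in SQLite run from Python (the table name and sample rows are invented for illustration; SQLite has no dual table and uses date('now', ...) in place of sysdate). It performs the same helper-table left join and counts records per bucket:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE mytable (mydate TEXT)")
# Two sample records: one 5 days old, one 25 days old.
conn.executemany("INSERT INTO mytable VALUES (date('now', ?))",
                 [("-5 days",), ("-25 days",)])

rows = conn.execute("""
    SELECT h.date_description, COUNT(t.mydate) AS cnt
    FROM (SELECT 0 range_start, 11 range_end, '0-10 days' date_description
          UNION SELECT 11, 21, '11-20 days'
          UNION SELECT 21, 31, '21-30 days'
          UNION SELECT 31, 99999, '31+ days') h
    LEFT JOIN mytable t
           ON t.mydate <= date('now', '-' || h.range_start || ' days')
          AND t.mydate >  date('now', '-' || h.range_end  || ' days')
    GROUP BY h.date_description
    ORDER BY MIN(h.range_start)
""").fetchall()
print(rows)  # → [('0-10 days', 1), ('11-20 days', 0), ('21-30 days', 1), ('31+ days', 0)]
```

Note that the empty buckets come back with a count of 0 instead of disappearing, which is exactly what the report needs.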

Related

Selecting certain columns from a table with dates as columns

I have a table where the column names are months like "2020-05", "2020-06", "2020-07", and so on, with many such columns. I need to select only the columns for the current month, the next month, and the month after that from this table. (DB: PostgreSQL version 11)
Since the column names are TEXT in the format YYYY-MM, how can I select only the current month and the next two months without hard-coding the column names?
Below is the table structure. Name: static_data
The table contains 14 months of data, with dates as columns as in the screenshot above. From this I want the columns for the current month and the next two months, along with their data, something like below.
SELECT "2020-05","2020-06","2020-07" from static
-- SELECT Current month and next 2 months
Required output:
Plain SQL can't reference a column whose name is only known at runtime, but you can convert each row to jsonb and look the column up by a computed key:
select d.item_sku,
       d.status,
       to_jsonb(d) ->> to_char(current_date, 'yyyy-mm') as current_month,
       to_jsonb(d) ->> to_char(current_date + interval '1 month', 'yyyy-mm') as "month + 1",
       to_jsonb(d) ->> to_char(current_date + interval '2 month', 'yyyy-mm') as "month + 2"
from bad_design d;
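The same trick can be emulated from a client language if your engine lacks to_jsonb: fetch the row as a name-addressable mapping and look the month columns up by computed keys. A minimal sketch in SQLite from Python (sample values invented, and a fixed "today" stands in for current_date so the run is reproducible):

```python
import sqlite3
from datetime import date

conn = sqlite3.connect(":memory:")
conn.row_factory = sqlite3.Row  # rows become addressable by column name
conn.execute('CREATE TABLE static_data '
             '(item_sku TEXT, "2020-05" TEXT, "2020-06" TEXT, "2020-07" TEXT)')
conn.execute("INSERT INTO static_data VALUES ('sku-1', 'a', 'b', 'c')")

today = date(2020, 5, 15)  # stand-in for current_date
keys, (y, m) = [], (today.year, today.month)
for _ in range(3):  # current month plus the next two
    keys.append(f"{y:04d}-{m:02d}")
    y, m = (y, m + 1) if m < 12 else (y + 1, 1)

row = conn.execute("SELECT * FROM static_data").fetchone()
print(keys, [row[k] for k in keys])  # → ['2020-05', '2020-06', '2020-07'] ['a', 'b', 'c']
```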
Technically, you can use the information schema to achieve this. But, like GMB said, please re-design your schema and do not approach the issue like this in the first place.
The special schema information_schema contains meta-data about your DB. Among these are details about existing columns. In other words, you can query it and convert the column names into dates to compare them to what you need.
Here are a few hints.
Query existing column names.
SELECT column_name
FROM information_schema.columns
WHERE table_schema = 'your_schema'
AND table_name = 'your_table'
Compare two dates.
SELECT now() + INTERVAL '3 months' < now() AS compare;
compare
---------
f
(1 row)
You're already pretty close with the conversion yourself.
Have fun and re-design your schema!
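Putting those hints together in a client language: once you have the column names from information_schema, the YYYY-MM comparison is simple month arithmetic. A hypothetical Python sketch (the column list is hard-coded here in place of the information_schema query, and the date is fixed so the run is reproducible):

```python
from datetime import date

def in_next_three_months(col, today):
    """True if a 'YYYY-MM' column name falls in [today's month, today's month + 3)."""
    y, m = map(int, col.split("-"))
    offset = (y - today.year) * 12 + (m - today.month)
    return 0 <= offset < 3

# In Postgres these names would come from information_schema.columns.
cols = ["2020-03", "2020-04", "2020-05", "2020-06", "2020-07", "2020-08"]
picked = [c for c in cols if in_next_three_months(c, date(2020, 5, 15))]
print(picked)  # → ['2020-05', '2020-06', '2020-07']
```

The month-offset arithmetic also handles year rollover (e.g. December 2020 picks up "2021-01" and "2021-02").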
Disclaimer: this does not answer your question - but it's too long for a comment.
You need to fix the design of this table. Instead of storing dates in columns, you should have each date on a separate row.
There are numerous drawbacks to your current design:
very simple queries become utterly complicated: filtering on dates, aggregation, and so on all require dynamic SQL, which adds a great deal of complexity
adding or removing new dates requires modifying the structure of the table
storage is wasted for rows where not all columns are filled
Instead, consider this simple design, with one table that stores the master data of each item_sku, and a child table that stores one row per item and date:
create table myskus (
item_sku int primary key,
name text,
cat_level_3_name text
);
create table myvalues (
item_sku int references myskus(item_sku),
date_sku date,
value_sku text,
primary key (item_sku, date_sku)
);
Now your original question is easy to solve:
select v.*, s.name, s.cat_level_3_name
from myskus s
inner join myvalues v on v.item_sku = s.item_sku
where
v.date_sku >= date_trunc('month', now())
and v.date_sku < date_trunc('month', now()) + interval '3 month'
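To see why the redesign makes the question easy, here is the same rolling three-month query in SQLite run from Python (sample rows invented; SQLite's date('now', 'start of month') stands in for date_trunc('month', now())):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE myskus (item_sku INTEGER PRIMARY KEY, name TEXT, cat_level_3_name TEXT);
    CREATE TABLE myvalues (item_sku INTEGER REFERENCES myskus(item_sku),
                           date_sku TEXT, value_sku TEXT,
                           PRIMARY KEY (item_sku, date_sku));
    INSERT INTO myskus VALUES (1, 'widget', 'category');
    INSERT INTO myvalues VALUES
        (1, date('now', 'start of month'),              'this month'),
        (1, date('now', 'start of month', '+1 month'),  'next month'),
        (1, date('now', 'start of month', '+4 months'), 'too far out');
""")
rows = conn.execute("""
    SELECT v.value_sku
    FROM myskus s
    JOIN myvalues v ON v.item_sku = s.item_sku
    WHERE v.date_sku >= date('now', 'start of month')
      AND v.date_sku <  date('now', 'start of month', '+3 months')
    ORDER BY v.date_sku
""").fetchall()
print([r[0] for r in rows])  # → ['this month', 'next month']
```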

Set sequence for Union queries in Postgres

I am new to Postgresql and trying to solve this:
I have 3 union queries. First query gives a single date, second queries gives id of a dealer and third prints its transaction.
I want the date query to execute first and then the dealer id query.
How can I achieve this in Postgres?
I tried doing this in SQL using a setOrder function,
e.g. as given below:
select /* date query */
union
select /* id query */
union
select /* transaction query */
When I execute this query all gets mixed up.
The usual way to preserve the order of the individual queries is to add some additional "sort index" to each.
select *
from (
  select ...., 1 as sort_index
  from ..
  union all
  select ...., 2
  from ..
  union all
  select ...., 3
  from ..
) q -- note: PostgreSQL requires an alias on a derived table
order by sort_index, ...;
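This pattern is portable across engines; a quick SQLite check from Python (the literal values are invented stand-ins for the three queries):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
rows = conn.execute("""
    SELECT val FROM (
        SELECT '2024-01-01' AS val, 1 AS sort_index   -- date query
        UNION ALL
        SELECT 'dealer-42', 2                         -- dealer id query
        UNION ALL
        SELECT 'txn-99', 3                            -- transaction query
    ) q
    ORDER BY sort_index
""").fetchall()
print([r[0] for r in rows])  # → ['2024-01-01', 'dealer-42', 'txn-99']
```

UNION ALL is used deliberately: plain UNION may re-sort rows to remove duplicates, which is exactly the "all mixed up" behaviour the question describes.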

How to reference a date range from another CTE in a where clause without joining to it?

I'm trying to write a query for Hive that uses the system date to determine both yesterday's date as well as the date 30 days ago. This will provide me with a rolling 30 days without the need to manually feed the date range to the query every time I run it.
I have that code working fine in a CTE. The problem I'm having is in referencing those dates in another CTE without joining the CTEs together, which I can't do since there's not a common field to join on.
I've tried various approaches but I get a "ParseException" every time.
WITH
date_range AS (
SELECT
CAST(from_unixtime(unix_timestamp()-30*60*60*24,'yyyyMMdd') AS INT) AS start_date,
CAST(from_unixtime(unix_timestamp()-1*60*60*24,'yyyyMMdd') AS INT) AS end_date
)
SELECT * FROM myTable
WHERE date_id BETWEEN (SELECT start_date FROM date_range) AND (SELECT end_date FROM date_range)
The intended result is the set of records from myTable that have a date_id between the start_date and end_date as found in the CTE date_range. Perhaps I'm going about this all wrong?
You can do a cross join; it does not require an ON condition. Your date_range dataset is only one row, so you can CROSS JOIN it with your table and it will be transformed into a map-join (the small dataset will be broadcast to all the mappers and loaded into each mapper's memory, which works very fast). Check the EXPLAIN output to make sure it is a map-join:
set hive.auto.convert.join=true;
set hive.mapjoin.smalltable.filesize=250000000;
WITH
date_range AS (
SELECT
CAST(from_unixtime(unix_timestamp()-30*60*60*24,'yyyyMMdd') AS INT) AS start_date,
CAST(from_unixtime(unix_timestamp()-1*60*60*24,'yyyyMMdd') AS INT) AS end_date
)
SELECT t.*
FROM myTable t
CROSS JOIN date_range d
WHERE t.date_id BETWEEN d.start_date AND d.end_date
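The one-row-CTE cross join is not Hive-specific; a minimal SQLite sketch from Python (sample date_id values invented) shows the mechanics:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE myTable (date_id INTEGER)")
conn.executemany("INSERT INTO myTable VALUES (?)",
                 [(20240101,), (20240215,), (20230101,)])
rows = conn.execute("""
    WITH date_range AS (SELECT 20240101 AS start_date, 20241231 AS end_date)
    SELECT t.date_id
    FROM myTable t
    CROSS JOIN date_range d
    WHERE t.date_id BETWEEN d.start_date AND d.end_date
    ORDER BY t.date_id
""").fetchall()
print([r[0] for r in rows])  # → [20240101, 20240215]
```

Because date_range is exactly one row, the cross join duplicates nothing; it simply makes the two computed dates visible to every row of myTable.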
Also, instead of this, you can calculate the dates directly in the where clause (no join needed at all):
SELECT t.*
FROM myTable t
WHERE t.date_id
BETWEEN CAST(from_unixtime(unix_timestamp()-30*60*60*24,'yyyyMMdd') AS INT)
AND CAST(from_unixtime(unix_timestamp()-1*60*60*24,'yyyyMMdd') AS INT)

Add new row to SELECT results

Say I have a table with the column 'CreatedDate'. I am interested in creating an SQL statement that can get me a list of all years except add one more year at the end of the result.
TSQL:
SELECT DISTINCT YEAR(CreatedDate) as FY from MyTable ORDER BY FY ASC
Current results returned by above query:
2016
2017
I would like the final results to be:
2016
2017
2018
I do not want to accomplish this by using a TemporaryTable or a View. I do not want to create new tables and truncate them. Is there anything I can do to the select statement to get me what I need?
SELECT *
FROM (
  SELECT DISTINCT YEAR(CreatedDate) AS FY
  FROM MyTable
  UNION
  SELECT MAX(YEAR(CreatedDate)) + 1 AS FY
  FROM MyTable
) x
ORDER BY FY ASC
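A runnable check of the idea in SQLite from Python (sample dates invented; SQLite has no YEAR(), so strftime('%Y', ...) stands in for it):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE MyTable (CreatedDate TEXT)")
conn.executemany("INSERT INTO MyTable VALUES (?)",
                 [("2016-03-01",), ("2017-07-15",), ("2017-09-09",)])
rows = conn.execute("""
    SELECT FY FROM (
        SELECT DISTINCT CAST(strftime('%Y', CreatedDate) AS INTEGER) AS FY
        FROM MyTable
        UNION
        SELECT MAX(CAST(strftime('%Y', CreatedDate) AS INTEGER)) + 1
        FROM MyTable
    ) x
    ORDER BY FY
""").fetchall()
print([r[0] for r in rows])  # → [2016, 2017, 2018]
```

UNION (rather than UNION ALL) also quietly deduplicates in the edge case where the max year somehow already appears in the data.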

Crystal reports zero values

Hey guys,
So I have this report that I am grouping into different age buckets. I want the count for an age bucket to be zero if there are no rows associated with that bucket. So I did an outer join in my database select, and that works fine. However, I need to add a group based on another column in my database.
When I add this group, the age buckets that had no rows associated with them disappear. I thought it might be because the column I was trying to group by was null for those rows, so I added a row number to my select and then grouped by that (I basically just need to group by each row, and I can't just put it in the details... I can explain more about this if necessary). But after adding the row number, the age buckets that have no data are still missing. When I remove this added group, I get all the age buckets back.
Any ideas? Thanks!!
It's because the outer join to the age group helper is not also an outer join to whatever your other group is: you are only guaranteed to have one of each age group per data set, not one of each age group per [other group].
So if, for example, your other group is Region, you need a Cartesian / Cross join from your age range table to a Region table (so that you get every possible combination of age range and region), before outer joining to the rest of your dataset.
EDIT - based on the comments, a query like the following should work:
select date_helper.date_description, c.case_number, e.event_number
from
(select 0 range_start, 11 range_end, '0-10 days' date_description from dual union
select 11, 21, '11-20 days' from dual union
select 21, 31, '21-30 days' from dual union
select 31, 99999, '31+ days' from dual) date_helper
cross join case_table c
left outer join event_table e
on e.event_date <= date_helper.range_start*-1 + sysdate
and e.event_date > date_helper.range_end*-1 + sysdate
and c.case_number = e.case_number
(assuming that it's the event_date that needs to be grouped into buckets.)
I had trouble understanding your question.
I do know that Crystal Reports' NULL support is lacking in some pretty fundamental ways. So I usually try not to depend on it.
One way to approach this problem is to hard-code age ranges in the database query, e.g.:
SELECT p.person_type
     , SUM(CASE WHEN p.age <= 2 THEN 1 ELSE 0 END) AS "0-2"
     , SUM(CASE WHEN p.age BETWEEN 3 AND 17 THEN 1 ELSE 0 END) AS "3-17"
     , SUM(CASE WHEN p.age >= 18 THEN 1 ELSE 0 END) AS "18_and_over"
FROM person p
GROUP BY p.person_type
This way you are sure to get zeros where you want zeros.
I realize that this is not a direct answer to your question. Best of luck.
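A quick SQLite check of the hard-coded bucket approach from Python (sample people invented; the boundaries are chosen so each age lands in exactly one bucket, matching the column labels):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE person (person_type TEXT, age INTEGER)")
conn.executemany("INSERT INTO person VALUES (?, ?)",
                 [("member", 1), ("member", 17), ("member", 40), ("guest", 25)])
rows = conn.execute("""
    SELECT p.person_type,
           SUM(CASE WHEN p.age <= 2 THEN 1 ELSE 0 END)             AS "0-2",
           SUM(CASE WHEN p.age BETWEEN 3 AND 17 THEN 1 ELSE 0 END) AS "3-17",
           SUM(CASE WHEN p.age >= 18 THEN 1 ELSE 0 END)            AS "18_and_over"
    FROM person p
    GROUP BY p.person_type
    ORDER BY p.person_type
""").fetchall()
print(rows)  # → [('guest', 0, 0, 1), ('member', 1, 1, 1)]
```

Buckets with no matching people come back as 0 rather than disappearing, because the CASE expression always evaluates for every row in the group.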