How to add repeated columns in JasperReports (using Jaspersoft Studio)

I have a requirement like the following but can't solve it and that's why I am here. I hope someone can help.
I need a JasperReports report with 3 columns. The column values are so narrow that all 3 columns together occupy only about a third of the page width, so I want the report laid out as follows:
Col1 Col2 Col3 | Col1 Col2 Col3 | Col1 Col2 Col3
------------------ ------------------ ------------------
val1 val1 val1 | val3 val3 val3 | val5 val5 val5
val2 val2 val2 | val4 val4 val4 | val6 val6 val6
That means Col1, Col2, Col3 will be repeated.
Can you give me any suggestions regarding these issues in the jasper report?

Set the columnCount and printOrder attributes of the report as follows:
columnCount="3" divides the page into 3 columns.
printOrder="Horizontal" makes records fill across the columns; by default the print order is Vertical, so it must be set to Horizontal here.
report.jrxml
<!-- Created with Jaspersoft Studio version 6.13.0.final using JasperReports Library version 6.5.1 -->
<jasperReport ... name="report" columnCount="3" printOrder="Horizontal" ...>
<property name="com.jaspersoft.studio.data.defaultdataadapter" value="One Empty Record"/>
...
</jasperReport>
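For reference, a fuller minimal sketch of such a JRXML is given below; the page size, band heights, column width, and the field name col1 are illustrative placeholders, not values taken from the question:

```xml
<jasperReport xmlns="http://jasperreports.sourceforge.net/jasperreports"
              name="report" pageWidth="595" pageHeight="842"
              columnCount="3" printOrder="Horizontal"
              columnWidth="178" leftMargin="20" rightMargin="20"
              topMargin="20" bottomMargin="20">
    <field name="col1" class="java.lang.String"/>
    <!-- columnHeader repeats at the top of each of the 3 columns -->
    <columnHeader>
        <band height="20">
            <staticText>
                <reportElement x="0" y="0" width="178" height="20"/>
                <text><![CDATA[Col1]]></text>
            </staticText>
        </band>
    </columnHeader>
    <detail>
        <band height="20">
            <textField>
                <reportElement x="0" y="0" width="178" height="20"/>
                <textFieldExpression><![CDATA[$F{col1}]]></textFieldExpression>
            </textField>
        </band>
    </detail>
</jasperReport>
```

With printOrder="Horizontal", consecutive records flow left to right across the 3 columns before wrapping to the next row, which produces the repeated Col1/Col2/Col3 layout asked for.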

Related

Is there any way to load a comma-separated string into a single column in Hive?

When trying to insert a comma-separated string into a single column in Hive, I get the error:
'TOK_STRINGLITERALSEQUENCE not supported in insert/values'
insert into table table_name values('llu'/t'ghf'/t'a,b,c,d'/t'gh,edf,ghu,kjhl'/t'1')
(/t represents a tab delimiter.)
Expected results
col1 col2 col3 col4 col5
llu ghf a,b,c,d gh,edf,ghu,kjhl 1
I'm not sure why you're using tab delimiters in the insert statement. This worked for me in Hive version 1.2.1:
create table test (col1 STRING, col2 STRING, col3 STRING, col4 STRING, col5 STRING);
insert into table test values('llu','ghf','a,b,c,d','gh,edf,ghu,kjhl','1');
select * from test;
+------------+------------+------------+------------------+------------+--+
| test.col1  | test.col2  | test.col3  | test.col4        | test.col5  |
+------------+------------+------------+------------------+------------+--+
| llu        | ghf        | a,c,b,d    | gh,edf,ghu,kjhl  | 1          |
+------------+------------+------------+------------------+------------+--+

How to find duplicated columns with all values in spark dataframe?

I'm preprocessing my data (2000K+ rows) and want to count the duplicated columns in a Spark dataframe, for example:
 id | col1 | col2 | col3 | col4 |
----+------+------+------+------+
  1 |    3 |  999 |    4 |  999 |
  2 |    2 |  888 |    5 |  888 |
  3 |    1 |  777 |    6 |  777 |
In this case, col2 and col4 hold identical values, which is what I'm interested in, so the duplicate count should increase by 1.
I tried toPandas(), transposing, and then duplicateDrop() in pyspark, but it's too slow.
Is there a function that could solve this?
Any idea would be appreciated, thank you.
So you want to count the number of duplicate values based on the columns col2 and col4? This should do the trick below.
val dfWithDupCount = df.withColumn("isDup", when($"col2" === $"col4", 1).otherwise(0))
This will create a new dataframe with a new boolean column saying that if col2 is equal to col4, then enter the value 1 otherwise 0.
To find the total number of rows in each group, group the new dataframe by isDup and count:
import org.apache.spark.sql.functions._
val grouped = dfWithDupCount.groupBy("isDup").count()
grouped.show()
Apologies if I misunderstood you. You could probably use the same solution if you were trying to match any of the columns together, but that would require nested when statements.
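If the goal is instead to count every pair of columns that match on all rows (not just col2 vs col4), the pairwise check can be sketched outside Spark in plain Python; the data mirrors the question's example, and count_duplicate_column_pairs is a hypothetical helper, not part of any Spark API:

```python
from itertools import combinations

def count_duplicate_column_pairs(rows, columns):
    """Return the pairs of columns whose values are identical on every row.

    rows    -- list of dicts mapping column name -> value
    columns -- column names to compare pairwise
    """
    duplicate_pairs = []
    for a, b in combinations(columns, 2):
        # A pair counts as duplicated only if the two columns agree on all rows.
        if all(row[a] == row[b] for row in rows):
            duplicate_pairs.append((a, b))
    return duplicate_pairs

# Data from the question: col2 and col4 are identical on every row.
rows = [
    {"id": 1, "col1": 3, "col2": 999, "col3": 4, "col4": 999},
    {"id": 2, "col1": 2, "col2": 888, "col3": 5, "col4": 888},
    {"id": 3, "col1": 1, "col2": 777, "col3": 6, "col4": 777},
]
print(count_duplicate_column_pairs(rows, ["col1", "col2", "col3", "col4"]))
# [('col2', 'col4')]
```

In Spark itself, the same per-pair check can be expressed without leaving the cluster, e.g. df.filter($"col2" =!= $"col4").isEmpty, which avoids the slow toPandas() round trip.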

KDB: why am I getting a type error when upserting?

I specified the columns to be of type String. Why am I getting the following error:
q)test: ([key1:"s"$()] col1:"s"$();col2:"s"$();col3:"s"$())
q)`test upsert(`key1`col1`col2`col3)!(string "999"; string "693"; string "943"; string "249")
'type
[0] `test upsert(`key1`col1`col2`col3)!(string "999"; string "693"; string "943"; string "249")
To do exactly this, you can remove the types of the list you defined in test:
q)test: ([key1:()] col1:();col2:();col3:())
q)test upsert (`key1`col1`col2`col3)!("999";"693";"943";"249")
key1 | col1 col2 col3
-----| -----------------
"999"| "693" "943" "249"
The reason you are getting a type error is that "s" corresponds to a list of symbols, not a list of characters. You can check this with .Q.ty:
q).Q.ty `symbol$()
"s"
q).Q.ty `char$()
"c"
It is (generally) not a great idea to use nested lists of chars as keys; you might find it better to type them as ints ("i") or longs ("j"), as in:
test: ([key1:"j"$()] col1:"j"$();col2:"j"$();col3:"j"$())
Having the keys as ints/longs makes the upsert function behave nicely. Also note that a table is a list of dictionaries, so each dictionary can be upserted individually, as well as a whole table:
q)`test upsert (`key1`col1`col2`col3)!(9;4;6;2)
`test
q)test
key1| col1 col2 col3
----| --------------
9 | 4 6 2
q)`test upsert (`key1`col1`col2`col3)!(8;6;2;3)
`test
q)test
key1| col1 col2 col3
----| --------------
9 | 4 6 2
8 | 6 2 3
q)`test upsert (`key1`col1`col2`col3)!(9;1;7;4)
`test
q)test
key1| col1 col2 col3
----| --------------
9 | 1 7 4
8 | 6 2 3
q)`test upsert ([key1: 8 7] col1:2 4; col2:9 3; col3:1 9)
`test
q)test
key1| col1 col2 col3
----| --------------
9 | 1 7 4
8 | 2 9 1
7 | 4 3 9
You have a few issues:
an array of chars in quotes is already a string, so there is no need to write string "abc"
string "aaa" will split the string into a list of one-character strings
your initially defined column types are symbols ("s"), not strings
This will allow you to insert as symbols:
q)test: ([key1:"s"$()] col1:"s"$();col2:"s"$();col3:"s"$())
q)`test upsert(`key1`col1`col2`col3)!`$("999"; "693"; "943"; "249")
`test
This will keep them as strings:
q)test: ([key1:()] col1:();col2:();col3:())
q)`test upsert(`key1`col1`col2`col3)!("999"; "693"; "943"; "249")
`test
Have a look at the differences in the meta of the two tables.
HTH,
Sean

Oracle: How to group records by certain columns before fetching results

I have a table in Redshift that looks like this:
col1 | col2 | col3 | col4 | col5 | col6
=======================================
123 | AB | SSSS | TTTT | PQR | XYZ
---------------------------------------
123 | AB | SSTT | TSTS | PQR | XYZ
---------------------------------------
123 | AB | PQRS | WXYZ | PQR | XYZ
---------------------------------------
123 | CD | SSTT | TSTS | PQR | XYZ
---------------------------------------
123 | CD | PQRS | WXYZ | PQR | XYZ
---------------------------------------
456 | AB | GGGG | RRRR | OPQ | RST
---------------------------------------
456 | AB | SSTT | TSTS | PQR | XYZ
---------------------------------------
456 | AB | PQRS | WXYZ | PQR | XYZ
I have another table that also has a similar structure and data.
From these tables, I need to select values that don't have 'SSSS' in col3 and 'TTTT' in col4 in either of the tables. I'd also need to group my results by the values in col1 and col2.
Here, I'd like my query to return:
123,CD
456,AB
I don't want 123, AB to be in my results, since one of the rows corresponding to 123, AB has SSSS and TTTT in col3 and col4 respectively. That is, I want to omit items that have SSSS and TTTT in col3 and col4 in either of the two tables I'm looking up.
I am very new to writing queries to extract information from a database, so please bear with my ignorance. I was told to explore GROUP BY and ORDER BY, but I am not sure I understand their usage well enough yet.
The query I have looks like:
SELECT * from table1 join table2 on
table1.col1 = table2.col1 AND
table1.col2 = table2.col2
WHERE
col3 NOT LIKE 'SSSS' AND
col4 NOT LIKE 'TTTT'
GROUP BY col1,col2
However, this query throws an error: "col5 must appear in the GROUP BY clause or be used in an aggregate function".
I'm not sure how to proceed. I'd appreciate any help. Thank you!
It seems you also want DISTINCT results. In this case a solution with MINUS is probably as efficient as any other (and, remember, MINUS automatically also means DISTINCT):
select col1, col2 from table_name -- enter your column and table names here
minus
select col1, col2 from table_name where col3 = 'SSSS' and col4 = 'TTTT'
;
No need to group by anything!
With that said, here is a solution using GROUP BY. The HAVING condition uses a non-trivial aggregate function: it is a COUNT(), but what is counted is a CASE expression that encodes the requirement. Note that the aggregate function in the HAVING clause does not need to appear in the SELECT list.
select col1, col2
from table_name
group by col1, col2
having count(case when col3 = 'SSSS' and col4 = 'TTTT' then 1 else null end) = 0
;
You should use the EXCEPT operator.
EXCEPT and MINUS are two different versions of the same operator.
Here is the syntax of what your query should look like
SELECT col1, col2 FROM table1
EXCEPT
SELECT col1, col2 FROM table1 WHERE col3 = 'SSSS' AND col4 = 'TTTT';
One important consideration is whether your desired answer requires the AND or the OR operator. Do you want to see the records where col3 = 'SSSS' but col4 has a value other than 'TTTT'?
If the answer is no, use the version below:
SELECT col1, col2 FROM table1
EXCEPT
SELECT col1, col2 FROM table1 WHERE col3 = 'SSSS' OR col4 = 'TTTT';
You can learn more about the MINUS and EXCEPT operators in the Amazon Redshift documentation.
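To make the set-difference approach concrete, here is a small self-contained sketch using Python's sqlite3 module (SQLite supports EXCEPT as well; the table name, columns, and data follow the question):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE table1 (col1 INT, col2 TEXT, col3 TEXT, col4 TEXT)")
conn.executemany(
    "INSERT INTO table1 VALUES (?, ?, ?, ?)",
    [
        (123, "AB", "SSSS", "TTTT"),  # this row disqualifies the (123, AB) group
        (123, "AB", "SSTT", "TSTS"),
        (123, "AB", "PQRS", "WXYZ"),
        (123, "CD", "SSTT", "TSTS"),
        (123, "CD", "PQRS", "WXYZ"),
        (456, "AB", "GGGG", "RRRR"),
        (456, "AB", "SSTT", "TSTS"),
        (456, "AB", "PQRS", "WXYZ"),
    ],
)

# All distinct (col1, col2) groups, minus the groups containing a SSSS/TTTT row.
rows = conn.execute(
    """
    SELECT col1, col2 FROM table1
    EXCEPT
    SELECT col1, col2 FROM table1 WHERE col3 = 'SSSS' AND col4 = 'TTTT'
    """
).fetchall()
print(sorted(rows))  # [(123, 'CD'), (456, 'AB')]
```

Because EXCEPT (like MINUS) already returns distinct rows, the (123, AB) group disappears entirely even though only one of its rows matched the exclusion condition.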

No borders in table in org-mode

This is how I wrote my table in org-mode:
| col1 | col2 | col3 |
|------+------+------|
| val1 | val2 | val3 |
| val4 | val5 | val6 |
This is the output I'm getting from org-export-as-pdf:
What I want is borders for the table. The org-mode version I'm using is 7.9.3f.
UPDATE:
With #+ATTR_LaTeX: align=|c|c|c|, I get the following table:
UPDATE:
Solved it by putting horizontal lines above and below the table using C-u C-c - and C-c - respectively.
If you want vertical lines, you need to specify them explicitly, with something like:
#+ATTR_LaTeX: align=|c|c|c|
in your old version of Org mode, or:
#+ATTR_LaTeX: :align |c|c|c|
in Org mode 8.