I have a very large number of tables in an Oracle database. I would like to somehow generate a simple Jasper report (version 6.0.3) for each of them.
One line with the name of the table as a header, and under it a plain table displaying all columns of the table. An option to exclude some columns from a predefined list would be welcome.
Any advice? Does anyone have experience with this?
Thanks in advance.
My idea is to use some ETL tool to extract the table specifications directly from the database and map them somehow into the XML files.
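For the extraction step, a plain JDBC query against Oracle's user_tab_columns dictionary view may already be all the "ETL" that is needed. A minimal sketch of what I mean (connection details, the excluded-column list, and the class name are placeholders):

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

public class TableSpecExtractor {

    // Hypothetical list of columns to leave out of every report.
    private static final Set<String> EXCLUDED = new HashSet<>(
            Arrays.asList("CREATED_BY", "UPDATED_BY"));

    public static void main(String[] args) throws Exception {
        try (Connection con = DriverManager.getConnection(
                "jdbc:oracle:thin:@//dbhost:1521/ORCL", "user", "pass");
             PreparedStatement ps = con.prepareStatement(
                     "SELECT table_name, column_name FROM user_tab_columns "
                   + "ORDER BY table_name, column_id");
             ResultSet rs = ps.executeQuery()) {
            while (rs.next()) {
                String table = rs.getString("table_name");
                String column = rs.getString("column_name");
                if (!EXCLUDED.contains(column)) {
                    // Each surviving table/column pair would be fed into
                    // whatever generates the per-table JRXML.
                    System.out.println(table + " -> " + column);
                }
            }
        }
    }
}
```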
There is a cloud-based tool that generates the JRXML transparently based on the data structure; you can check it out at http://flashreport.io.
It supports simple XML and JSON as input, but does not allow excluding specific columns. You would have to do that in your ETL tool.
You can use iReport, which is a design tool for generating jrxml files (the JasperReports XML report definitions): see the iReport tutorial. You just have to create a data source (in this case a connection to your DB) and construct your report design by dragging and dropping tables/columns into it (mapped from the underlying data source).
I've personally been working with iReport, but nowadays the Jaspersoft community is putting its effort into another tool: Jaspersoft Studio, which seems to be the future replacement for iReport.
I have a connection to an SAP HANA database. I've created a personal DB called simply "database", which I want to fill with two CSV files that I have on my laptop.
How can I do this?
Do I need to create the tables with all the columns beforehand?
One of the problems is that my CSV files consist of 130 columns.
It is impractical to create all those columns from scratch.
Thank you in advance for your help.
You can simply use the data import function in the SAP HANA Studio front end for that.
The wizard lets you define the target table structure based on the data found in the CSV files.
Check this out: Import CSV into SAP HANA, express edition using the SAP HANA Tools for Eclipse.
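If you prefer SQL over the wizard, HANA also has a server-side IMPORT FROM CSV FILE statement that can be run over JDBC. A rough sketch (host, credentials, schema, table, and file path are all placeholders; the CSV must be readable by the HANA server, and the target table must already exist, e.g. created by the wizard):

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class HanaCsvImport {
    public static void main(String[] args) throws Exception {
        // Placeholder host/credentials; the HANA JDBC driver (ngdbc.jar)
        // must be on the classpath.
        try (Connection con = DriverManager.getConnection(
                "jdbc:sap://hanahost:30015/", "MYUSER", "MyPassword1");
             Statement st = con.createStatement()) {
            // IMPORT FROM reads the file on the HANA server, not on your
            // laptop, so the CSV has to be uploaded there first.
            st.execute("IMPORT FROM CSV FILE '/usr/sap/data/file1.csv' "
                     + "INTO \"database\".\"FILE1_DATA\" "
                     + "WITH RECORD DELIMITED BY '\\n' FIELD DELIMITED BY ','");
        }
    }
}
```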
I am new to Talend and need guidance on the scenario below:
We have a set of 10 JSON files with different structures/schemas that need to be loaded into 10 different tables in a Redshift database.
Is there a way we can write a generic script/job that iterates through each file and loads it into the database?
For example:
File Name: abc_< date >.json
Table Name: t_abc
File Name: xyz< date >.json
Table Name: t_xyz
and so on.
Thanks in advance
With the Talend Enterprise version you can benefit from dynamic schemas. However, in my experience JSON files are usually nested structures, so you'd have to figure out how to flatten them; once that's done it becomes a 1:1 load. With Open Studio this will not work, because dynamic schemas are not available.
Basically, what you could do is write some Java code that transforms your JSON into CSV. Then use either psql from the command line, or, if your Talend ships a new enough PostgreSQL JDBC driver, invoke the client-side \COPY from it to load the data. If the column order of your file matches that of the database table, it should work without needing to specify how many columns you have, so it's dynamic, but the data never "flows" through Talend.
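A rough sketch of that Java step using Jackson (it assumes each file is a JSON array of flat objects and that values contain no commas or quotes; real data would need proper CSV escaping, and the file names are placeholders):

```java
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;

import java.io.File;
import java.io.PrintWriter;
import java.util.Iterator;
import java.util.Map;

public class JsonToCsv {
    public static void main(String[] args) throws Exception {
        ObjectMapper mapper = new ObjectMapper();
        // One JSON array of flat objects per file, e.g. [{"id":1,...},...]
        JsonNode rows = mapper.readTree(new File("abc_20170101.json"));
        try (PrintWriter out = new PrintWriter("abc.csv")) {
            boolean headerWritten = false;
            for (JsonNode row : rows) {
                if (!headerWritten) {
                    // Build the header from the first record's field names.
                    StringBuilder header = new StringBuilder();
                    Iterator<String> names = row.fieldNames();
                    while (names.hasNext()) {
                        if (header.length() > 0) header.append(',');
                        header.append(names.next());
                    }
                    out.println(header);
                    headerWritten = true;
                }
                StringBuilder line = new StringBuilder();
                Iterator<Map.Entry<String, JsonNode>> fields = row.fields();
                while (fields.hasNext()) {
                    if (line.length() > 0) line.append(',');
                    line.append(fields.next().getValue().asText());
                }
                out.println(line);
            }
        }
    }
}
```

The resulting CSV can then be fed to psql's \COPY as described above.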
A not-so-elegant but theoretically possible solution: if Redshift supports JSON functions (Postgres does), you can create a staging table with 2 columns: filename and content. Once the whole content is in this staging table, an INSERT ... SELECT statement can transform the JSON into tabular format and insert it into the final table.
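A hedged sketch of that approach over JDBC (the connection URL and all table/column names are hypothetical; Redshift's json_extract_path_text returns text, so casts may be needed for non-varchar columns):

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class StagingToTable {
    public static void main(String[] args) throws Exception {
        try (Connection con = DriverManager.getConnection(
                "jdbc:redshift://mycluster:5439/dev", "user", "password");
             Statement st = con.createStatement()) {
            // staging(filename, content) holds one JSON object per row;
            // pick the rows belonging to one file pattern and unpack them.
            st.executeUpdate(
                "INSERT INTO t_abc (id, name) "
              + "SELECT json_extract_path_text(content, 'id'), "
              + "       json_extract_path_text(content, 'name') "
              + "FROM staging WHERE filename LIKE 'abc_%'");
        }
    }
}
```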
However, with your toolset you probably have no choice other than to load these files with one job per file; I'd suggest one dedicated job for each file. They would each look for their own files and be triggered/scheduled individually, or be part of a bigger job where you scan the folders and trigger the right job for the right file.
My requirement is to build a tree-structured report using iReport. I have a database table XXX, which has supervisor and reportedTo columns to maintain the relation among employees.
How can I generate this relation using the iReport designer? Is there any way?
iReport is not going to do this for you. It's going to be highly dependent on the DBMS you have (see recursive queries) or on your ability to generate the dataset in Java.
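If your DBMS supports recursive queries, the tree can be produced in SQL and handed to the report as a flat result set with a level column. A sketch under stated assumptions (hypothetical emp_id/emp_name/reported_to columns on XXX, placeholder connection URL; Oracle and SQL Server accept this plain WITH form, PostgreSQL needs WITH RECURSIVE):

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class EmployeeTree {
    public static void main(String[] args) throws Exception {
        // Recursive CTE: start at the root employees and walk down.
        String sql =
            "WITH emp_tree (emp_id, emp_name, reported_to, lvl) AS ( "
          + "  SELECT emp_id, emp_name, reported_to, 1 FROM XXX "
          + "  WHERE reported_to IS NULL "
          + "  UNION ALL "
          + "  SELECT e.emp_id, e.emp_name, e.reported_to, t.lvl + 1 "
          + "  FROM XXX e JOIN emp_tree t ON e.reported_to = t.emp_id) "
          + "SELECT emp_id, emp_name, lvl FROM emp_tree";
        try (Connection con = DriverManager.getConnection(
                 "jdbc:oracle:thin:@//dbhost:1521/ORCL", "user", "pass");
             Statement st = con.createStatement();
             ResultSet rs = st.executeQuery(sql)) {
            while (rs.next()) {
                // The lvl column can drive indentation in the report band.
                StringBuilder indent = new StringBuilder();
                for (int i = 1; i < rs.getInt("lvl"); i++) indent.append("  ");
                System.out.println(indent + rs.getString("emp_name"));
            }
        }
    }
}
```

Note that without something like a SEARCH DEPTH FIRST clause (where supported) the row order is not guaranteed to follow the tree, so a real report may need to sort by a path column.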
I use jrxml files designed in iReport for Jasper reports.
I have used database-specific functions and DML in them, like date formatting, string concatenation, the concatenation operator (||), etc.
My question is: is there any way or plugin to make the jrxml files database-portable?
Thanks in advance,
Kalaiselvan.
You are using JDBC, so your reports are already kind of portable unless you use some vendor-specific SQL functions or features.
You could write your OWN data source in JasperReports (implement the JRDataSource interface) and provide your own layer of database independence. It shouldn't be that hard.
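A minimal sketch of such a data source, reading from a list of row maps (how you populate the list, via JDBC, a file, or anything else, is exactly where the database-independence layer would live; the class name is made up):

```java
import net.sf.jasperreports.engine.JRDataSource;
import net.sf.jasperreports.engine.JRException;
import net.sf.jasperreports.engine.JRField;

import java.util.Iterator;
import java.util.List;
import java.util.Map;

public class MapListDataSource implements JRDataSource {

    private final Iterator<Map<String, Object>> rows;
    private Map<String, Object> current;

    public MapListDataSource(List<Map<String, Object>> data) {
        this.rows = data.iterator();
    }

    @Override
    public boolean next() throws JRException {
        if (!rows.hasNext()) {
            return false;
        }
        current = rows.next();
        return true;
    }

    @Override
    public Object getFieldValue(JRField field) throws JRException {
        // Field names declared in the JRXML map to keys in the row map.
        return current.get(field.getName());
    }
}
```

JasperReports also ships JRMapCollectionDataSource, which does essentially this out of the box.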
Each report is filled from a data source such as a database, but you knew that. Since the report is filled by fetching data from a specific database with specific queries, if you want to make your .jrxml files database-portable (or your .jasper files, for that matter), you will need to turn your data source and SQL queries into parameters that are fed into your report from your program. It is pretty straightforward to make the data source and SQL query parameters using iReport.
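For example, the database-specific fragment can be passed in as a parameter and spliced into the report query with JasperReports' $P!{...} syntax. A sketch (the report path, JDBC URL, and DATE_FMT parameter are placeholders; the JRXML would declare DATE_FMT as a java.lang.String parameter and use it in its queryString):

```java
import net.sf.jasperreports.engine.JasperCompileManager;
import net.sf.jasperreports.engine.JasperFillManager;
import net.sf.jasperreports.engine.JasperPrint;
import net.sf.jasperreports.engine.JasperReport;

import java.sql.Connection;
import java.sql.DriverManager;
import java.util.HashMap;
import java.util.Map;

public class PortableFill {
    public static void main(String[] args) throws Exception {
        JasperReport report =
                JasperCompileManager.compileReport("report.jrxml");

        // The JRXML query could read:
        //   SELECT $P!{DATE_FMT} AS created_str FROM orders
        // so each database gets its own expression injected at fill time.
        Map<String, Object> params = new HashMap<>();
        params.put("DATE_FMT", "TO_CHAR(created, 'YYYY-MM-DD')"); // Oracle flavor

        try (Connection con = DriverManager.getConnection(
                "jdbc:oracle:thin:@//dbhost:1521/ORCL", "user", "pass")) {
            JasperPrint print =
                    JasperFillManager.fillReport(report, params, con);
            // Export 'print' to PDF/HTML/etc. as needed.
        }
    }
}
```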
I am using ADO.NET OleDb for inserting and fetching data from an Excel workbook. I want to make the first column in the Excel sheet bold, and I want to add comments. I am currently achieving this through the Interop.Excel Application class.
I don't want to use Interop. Is there any way to achieve this through an ADO.NET query itself, or some other way? My application is a C# Windows application.
There is no way through ADO.NET, any more than there is a way of making a SQL Server column bold. ADO.NET treats Excel as a data source; formatting is something quite different and requires knowledge of the Excel spreadsheet format, such as you'd get via Interop. There are probably other libraries you can use if you search...