Excel template translator JETT: more data in an auto-generated sheet - jett

When using the JETT template translator, if there is more data than one sheet supports, will the extra data be displayed in the next sheet? If yes, will it include the table header?

No, it won't.
Since you're mentioning max rows per sheet limit, I assume you're working with the XLS format where every sheet is limited to 65536 rows.
Then I would strongly advise you to switch to XLSX format, since its limit is around 1 million rows per sheet.
Also, if you're inserting so much data per sheet, you should consider streaming the data directly to the sheet using POI's SXSSF API. If you manipulate huge XLSX spreadsheets with hundreds of thousands of rows directly with JETT, you're likely to hit memory problems - but because of POI's XSSF, not because of JETT itself.
If you're stuck with XLS format, you'll have to do the pagination and insert extra sheets yourself before inserting data with JETT.
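A minimal sketch of the manual pagination this answer describes: split the data into per-sheet pages, repeating the header on each page, before handing each page to JETT. The function name and the list-of-lists row representation are illustrative, not part of JETT's API.

```python
# Split rows into pages that fit one XLS sheet, repeating the header on
# each page (which also answers the original question: the header won't
# carry over unless you add it yourself). The 65536-row limit includes
# the header row, hence max_rows - 1 data rows per page.

def paginate(rows, header, max_rows=65536):
    """Yield (header + chunk) page lists small enough for one XLS sheet."""
    data_rows = max_rows - 1  # one slot is taken by the repeated header
    for start in range(0, len(rows), data_rows):
        yield [header] + rows[start:start + data_rows]
```

Each yielded page would then be bound to its own sheet in the template.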

Related

Is there any way to save lots of data on Sharepoint by just using REST api (or any client-side solution)?

I have been asked to develop a web application, to be hosted on SharePoint 2013, that has to work with lots of data. It basically consists of a huge form used to save and edit a lot of information.
Unfortunately, due to some work restrictions, I do not have access to the backend, so it has to be done entirely client-side.
I am already aware of how to programmatically create SharePoint lists with site columns and save data to them with REST.
The problem is, I need to create a SharePoint list (to be used as a database) with at least 379 site columns (fields), of which 271 have to be single lines of text and 108 multiple lines of text, and by doing so I think I would exceed the threshold limit (too many site columns in a single list).
Is there any way I could make this work? Any other solution on how to save big amounts of data on Sharepoint, by only using client-side solutions (e.g. REST)? Maybe there is a way to save a XML or JSON file in any way on Sharepoint through REST?
I don't remember if there is actually a limit regarding columns in SP 2013. There is certainly a limit when you use lookup columns (up to 12 columns in one view), but since you are using only text columns this should not be blocking... and there is a limit regarding the number of rows that may be present in one view (5,000 for a normal user, 20,000 for an admin user).
Best to check all here https://learn.microsoft.com/en-us/sharepoint/install/software-boundaries-and-limits
As described by MS:
The sum of all columns in a SharePoint list cannot exceed 8,000 bytes.
Also, you may create one Note column and store all data in a JSON structure saved in that column, or create a Library instead of a list and store JSON documents in it (just be aware that by default the JSON format is blocked from being stored in SharePoint, and you need to change those settings in Central Admin for the application pool -> link). Also be aware that with this approach you will not be able to use many OOB SharePoint features, like column filters, which might be handy.
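The "one Note column" idea above can be sketched as follows: serialize the whole form into a JSON string and save it in a single multi-line text field, instead of creating 379 separate site columns. This is only a sketch of the payload construction; the list item type and the "FormData" field name are hypothetical, and the endpoint shape follows standard SharePoint 2013 REST conventions.

```python
import json

def build_item_payload(form_values, list_item_type="SP.Data.BigFormListItem"):
    """Pack an entire form into one Note-column value for a SP list item.

    list_item_type and the "FormData" field name are illustrative;
    they would depend on your actual list definition.
    """
    return {
        "__metadata": {"type": list_item_type},  # required by SP 2013 verbose OData
        "Title": form_values.get("Title", "form entry"),
        "FormData": json.dumps(form_values),     # whole form as one JSON blob
    }

# The payload would then be POSTed client-side to
# /_api/web/lists/getbytitle('BigForm')/items with the usual
# X-RequestDigest and "application/json;odata=verbose" headers.
```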

How to remove words from a document on a column-by-column basis instead of whole lines in Word

Perhaps a stupid question, but I have a document with a large number of numerical values arranged in columns (although not in Word's actual column formatting), and I want to delete certain columns while leaving one intact. Here's a link to a part of my document.
Data
As can be seen, there are four columns and I only want to keep the 3rd column, but when I select any of this in Word, it selects the whole line. Is there a way I can select data in Word as a column rather than as whole lines? If not, can this be done in other word processing programs?
Generally, spreadsheet apps or subprograms are what you need for deleting and modifying data in column or row format.
Microsoft's spreadsheet equivalent is Excel, part of the Microsoft Office Suite that Word came with. I believe Google Docs has a free spreadsheet tool online as well.
I have not looked at the uploaded file, but if it is small enough, you might be able to paste one row of data at a time into a spreadsheet, and then do your operation on the column data all at once.
There may be other solutions to this problem, but that's a start.
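Another start, if scripting is an option: pull just the third column out of the pasted text directly. This sketch assumes the "columns" are plain text separated by spaces or tabs, which is a guess since the linked file isn't shown here.

```python
# Extract the third whitespace-separated column from each line of text,
# skipping lines that are too short to have one.

def third_column(text):
    out = []
    for line in text.splitlines():
        parts = line.split()      # splits on any run of spaces/tabs
        if len(parts) >= 3:
            out.append(parts[2])  # columns are 0-indexed
    return out
```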

Working with huge csv files in Tableau

I have a large csv file (1000 rows x 70,000 columns) which I want to union with 2 smaller csv files (since these csv files will be updated in the future). In Tableau, working with such a large csv file results in very long processing times and sometimes causes Tableau to stop responding. I would like to know what the better ways of dealing with such large csv files are, i.e. splitting the data, converting csv to another file type, connecting to a server, etc. Please let me know.
The first thing you should ensure is that you are accessing the file locally and not over a network. Sometimes it is minor, but in some cases that can cause some major slow down in Tableau reading the file.
Beyond that, your file is pretty wide and should be normalized somewhat, so that you get more rows and fewer columns. Tableau will most likely read it in faster because it has fewer columns to analyze (data types, etc.).
If you don't know how to normalize the CSV file, you can use a tool like: http://www.convertcsv.com/pivot-csv.htm
Once you have the file normalized and connected in Tableau, you may want to extract it inside of Tableau for improved performance and file compression.
The problem isn't the size of the csv file: it is the structure. Almost anything trying to digest a csv will expect lots of rows but not many columns. Usually columns define the type of data (eg customer number, transaction value, transaction count, date...) and the rows define instances of the data (all the values for an individual transaction).
Tableau can happily cope with hundreds (maybe even thousands) of columns and millions of rows (I've happily ingested 25 million row CSVs).
Very wide tables usually emerge because you have a "pivoted" analysis, with one set of data categories along the columns and another along the rows. For effective analysis you need to undo the pivoting (or derive the data from its source unpivoted).
Cycle through the complete table (you can even do this in Excel VBA, despite the number of columns, by reading the CSV directly line by line rather than opening the file) and convert the first row (which is probably column headings) into a new column, so that each new row contains every combination of original row label and column header, plus the relevant data value from the relevant cell in the CSV file. The new table will be 3 columns wide but will hold all the data from the CSV (assuming the CSV was structured the way I assumed). If I've misunderstood the structure of the file, you have a much bigger problem than I thought!

How to automatically create new worksheet when data exceeds 65536 rows when exporting using OfficeWriter?

I have a report that exceeds 65536 rows of data. From my understanding (correct me if I'm wrong here), an OfficeWriter template can only render this much data (65536 rows), and the remaining rows will be removed. Is there any way to automatically add a new worksheet to the exported Excel file to accommodate the remaining rows using OfficeWriter?
Dave,
There are a couple ways of doing this.
Use the continue modifier. The continue modifier will let you overflow your data from one worksheet to another; see the documentation here.
Use the XLSX file format, not xls. The 65536-row ceiling you mention is a limitation of the xls file format. The XLSX file format easily supports over 1 million rows per worksheet.
Lastly, look at the MaxRows property on DataBindingProperties. I am going by memory and do not have OfficeWriter installed at the moment, but depending on your version of OW there may be a bug: in some versions I believe MaxRows defaults to 65536, so even if you are using XLSX the output may appear to get truncated. You can work around this by setting MaxRows to a larger number and using XLSX.
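The row-limit arithmetic behind these answers, i.e. how many worksheets a report of a given size needs per file format, can be sketched as follows (the limit values are the standard per-sheet maxima for xls and xlsx):

```python
import math

# Per-sheet row limits for the two Excel file formats discussed above.
SHEET_LIMITS = {"xls": 65536, "xlsx": 1048576}

def sheets_needed(total_rows, fmt="xls"):
    """How many worksheets a report needs under a format's row limit."""
    return math.ceil(total_rows / SHEET_LIMITS[fmt])
```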
Hope this helps.

What is the ideal approach to export reports to Excel and CSV using JasperReports?

I am using iReport Designer to export reports into PDF and CSV formats. For the PDF format, everything seems perfect, but when I use the same design to export to CSV, the whole layout goes haywire. I will document all the necessary research I have gathered. Let's have a look at the report format in PDF and then CSV.
PDF Format
CSV Format
Here is the research gathered.
The PDF format produces pixel-perfect reports, whereas CSV is a data-only format.
We can use CSVMetaDataExporter in order to just extract the data, setting the column names, types, and data using export parameters. Though I have not used this second option yet.
So my basic question is: if we want to use the same template to export CSV or Excel, we will obviously run into alignment and width issues. I exported the report to Excel as well, and in the Excel format the results were not at all satisfactory. So in this context, is JasperReports really a correct choice for the Excel and CSV formats? If it is, what is the ideal approach to deal with such output formats?
In my professional opinion, no. Don't even bother trying to keep the same template format when your output will change between visual (PDF/on-screen/print) and structured (CSV/Excel) formats.
Alex K mentioned the Advanced Excel Features, and when used well they can generate output that will match Excel on screen. However, your design of the elements must be very tight, meaning: avoid spanning cells and absolutely positioned elements, and use snap to grid or snap to other elements.
If your client/user requires the report to look good and be useable in Excel, then you may very well have to design for an Excel format.