I want to convert a spreadsheet (e.g. .xls or from LibreOffice Calc) to some text format, e.g. .csv, without evaluating formulas, so that the formulas are stored in the text file. I know that LibreOffice has an option "Save cell formula instead of calculated values" when saving as .csv, and according to "How to export spreadsheet to CSV without evaluating formulas" Excel can do this too, but I'd like to do it on the command line. I know that ssconvert from the Gnumeric package can convert on the command line, but as far as I can see there's no option to keep the formulas.
The bigger picture is that I want to write a script that takes two versions of an .ods file, converts them and shows the differences. When only one cell has actually changed but many other cells depend on it, I want to see only the real change.
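A minimal sketch of one way to get such a diff-friendly dump straight from an .ods file (in ODF spreadsheets the formula is stored in the table:formula attribute of each cell inside content.xml); it assumes the CPAN modules Archive::Zip and XML::Twig are available and does not handle repeated-column compression or covered cells, so treat it as a starting point:

```perl
#!/usr/bin/perl
# Sketch only: dumps an .ods sheet row by row, printing the stored formula
# where a cell has one and the displayed text otherwise, so two dumps can be
# compared with a plain "diff". Assumes Archive::Zip and XML::Twig are
# installed; repeated-column compression and covered cells are not handled.
use strict;
use warnings;
use Archive::Zip qw(:ERROR_CODES);
use XML::Twig;

my $ods = shift @ARGV or die "usage: $0 file.ods\n";

my $zip = Archive::Zip->new();
$zip->read($ods) == AZ_OK or die "Cannot read $ods as a zip archive\n";
my $content = $zip->contents('content.xml')
    or die "No content.xml found inside $ods\n";

my $twig = XML::Twig->new();
$twig->parse($content);

for my $row ($twig->descendants('table:table-row')) {
    my @cells;
    for my $cell ($row->children('table:table-cell')) {
        my $formula = $cell->att('table:formula');   # e.g. "of:=[.A1]+[.B1]"
        push @cells, defined $formula ? $formula : $cell->text;
    }
    print join(';', @cells), "\n";
}
```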
I have used xls2csv under Cygwin. A quick Google search shows many implementations; I would start there.
http://search.cpan.org/~ken/xls2csv-1.07/script/xls2csv
The complete output text file is hundreds of lines long, with relevant nuclear cross sections and a plethora of other data that I do not need for this particular problem. I am trying to extract the columns of data under "BURNUP" and the first "K-INF" from the file I attached and place them into a separate file. I am a newbie, and I have a similar Perl script from a professor; I have tried to adapt it to the information I am looking for, but the only output I get is the two print statements. Any suggestions?
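Without seeing the attached file it is hard to be precise, but the usual pattern for this kind of extraction is: find the header line that contains both BURNUP and K-INF, then capture the numeric rows that follow until the table ends. A minimal sketch, where the filenames and the column positions (BURNUP first, K-INF second) are assumptions you would adjust:

```perl
#!/usr/bin/perl
# Sketch only: pulls the BURNUP column and the first K-INF column out of a
# large output listing. Assumes a header line containing both "BURNUP" and
# "K-INF", followed by whitespace-separated numeric rows; adjust the column
# indices to match the real file.
use strict;
use warnings;

my ($infile, $outfile) = ('output.txt', 'burnup_kinf.txt');   # placeholder names

open my $in,  '<', $infile  or die "Cannot open $infile: $!";
open my $out, '>', $outfile or die "Cannot open $outfile: $!";

my $in_table = 0;
while (my $line = <$in>) {
    if ($line =~ /BURNUP/ && $line =~ /K-INF/) {   # header row of the table
        $in_table = 1;
        next;
    }
    next unless $in_table;

    my @fields = split ' ', $line;
    if (@fields >= 2 && $fields[0] =~ /^[-+]?[\d.]+(?:[Ee][-+]?\d+)?$/) {
        print {$out} "$fields[0] $fields[1]\n";    # BURNUP and first K-INF
    } else {
        $in_table = 0;                             # non-numeric line ends the table
    }
}
close $in;
close $out;
```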
I am trying to use the xlsread function to read a 6000x2700 spreadsheet (an .xlsx file).
I have two questions:
First, when I use something like
[num,txt,~]=xlsread(input_file,input_sheet,'A1:CYY6596')
MATLAB keeps showing 'Busy' and stops responding (while I can open the file in Excel within 30 seconds).
Is there any solution if I don't want to loop through ranges of the .xlsx file? In other words, can I just dump a spreadsheet of this size into MATLAB using xlsread?
Alternatively, maybe I can use loops to read these files range by range, but I cannot identify the last column of each spreadsheet unless I read the whole file first. If I cannot identify the last column, it is hard to set up the loops and do my interpretation of the file.
So my second question is: is there a way to identify the last column of the spreadsheet without reading the whole spreadsheet?
Thanks.
EDIT: However, if I run similar code that only reads the first 400 columns ('A1:RY6596') of the spreadsheet, the problem doesn't happen.
Which version of MATLAB are you using?
MATLAB has problems loading big Excel files.
Convert the Excel file to CSV and use M = csvread(filename).
You can also try converting the .xlsx file to .xls.
You can try the tool on the File Exchange.
I have the following problem:
1) Consider a dataset in Stata whose variables are of type float, double, or byte. The observations are all "numbers".
2) I want to save the dataset in .xls format; hence I type in Stata export excel using ".../A.xls"
3) When I open A.xls in Excel, the cells have format "General"
4) I want to load A.xls in Matlab but I get the error Unreadable Excel file: XLS File contains unicode text which is not yet supported
5) If, instead, before going to Matlab, I apply the format "Numbers" to the cells in Excel, A.xls can be easily loaded in Matlab.
Any suggestion on how to go directly from Stata to MATLAB in this particular case?
With Stata 14, you can save the file as xlsx instead of xls. In this case, the error message on OSX disappears. On Windows, xls seems to work fine as well.
Is the problem you're having just from xlsread? Did you try readtable?
I want to automate Excel using Perl to do the following task(s):
For a list of Excel .xls files, do the following:
Open the file
Set Format to CSV
Save the file under the original filename and directory, but replace the extension "xls" with "csv"
Close the file
End
I found out how to open files, and even how to save them, but I did not find how to change the file format / save as a different format. There should be no user dialogs popping up; it should be fully automated. I can generate the list of Excel files myself; a parameterized "find" or maybe "dir" should suffice.
If you are using Excel automation, a great help is Excel itself: use the VBA environment (Alt+F11) to get help on the Excel objects you want to use.
The Object Browser (F2) is very valuable.
Workbook.SaveAs([Filename], [FileFormat], [Password], [WriteResPassword], [ReadOnlyRecommended], [CreateBackup], [AccessMode As XlSaveAsAccessMode = xlNoChange], [ConflictResolution], [AddToMru], [TextCodepage], [TextVisualLayout], [Local])
Searching for CSV in the Object Browser will show the Excel constants with their values, since you probably cannot use these Excel constants by name in Perl.
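For example, from Perl you can drive that SaveAs call through Win32::OLE. A minimal sketch, assuming Excel and the Win32::OLE module are installed (if the constant import is unavailable, xlCSV is simply the value 6):

```perl
#!/usr/bin/perl
# Sketch only: converts each .xls file named on the command line to .csv via
# Excel automation, with no dialogs. Requires Excel plus the Win32::OLE
# module; Excel usually wants absolute paths here.
use strict;
use warnings;
use Win32::OLE;
use Win32::OLE::Const 'Microsoft Excel';   # imports xlCSV (value 6) and friends

my $excel = Win32::OLE->new('Excel.Application')
    or die "Cannot start Excel: " . Win32::OLE->LastError();
$excel->{DisplayAlerts} = 0;               # suppress "overwrite?" prompts

for my $xls (@ARGV) {
    (my $csv = $xls) =~ s/\.xls$/.csv/i;   # same name and directory, .csv extension
    my $wb = $excel->Workbooks->Open($xls) or next;
    $wb->SaveAs($csv, xlCSV);              # FileFormat = xlCSV
    $wb->Close(0);                         # close without saving again
}

$excel->Quit;
```

Feed it the file list from your "find" or "dir" run and it does the whole batch unattended.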
See Spreadsheet::ParseExcel and xls2csv; they will help you.
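If you would rather not automate Excel itself, a pure-Perl sketch of the same conversion with Spreadsheet::ParseExcel could look like this (first worksheet only, and no quoting of embedded commas, so treat it as a starting point):

```perl
#!/usr/bin/perl
# Sketch only: reads an .xls file with Spreadsheet::ParseExcel and writes the
# first worksheet as naive CSV (no quoting of embedded commas or newlines).
use strict;
use warnings;
use Spreadsheet::ParseExcel;

my $xls = shift @ARGV or die "usage: $0 file.xls\n";
(my $csv = $xls) =~ s/\.xls$/.csv/i;

my $parser   = Spreadsheet::ParseExcel->new();
my $workbook = $parser->parse($xls) or die $parser->error(), "\n";

my ($worksheet) = $workbook->worksheets();            # first sheet only
my ($row_min, $row_max) = $worksheet->row_range();
my ($col_min, $col_max) = $worksheet->col_range();

open my $out, '>', $csv or die "Cannot write $csv: $!";
for my $row ($row_min .. $row_max) {
    my @values;
    for my $col ($col_min .. $col_max) {
        my $cell = $worksheet->get_cell($row, $col);
        push @values, defined $cell ? $cell->value() : '';
    }
    print {$out} join(',', @values), "\n";
}
close $out;
```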
I have a crystal report which contains a list of absolutely referenced text files. There is one text file referenced in each body line.
e.g.
line1 c:\file1.txt
line2 c:\file2.txt
Is there any way to display the contents of these files in Crystal?
i.e. I would like each Crystal body line to show the text from the referenced text file.
I'm using Crystal reports 11 with a non-standard database connector (dataflex).
You would need to set up a file DSN (in XP it's under Control Panel/Administrative Tools/Data Sources (ODBC)) and then use the file DSN (Microsoft Text Driver) as the datasource for an ODBC (RDO) connection.
I set this test scenario up on mine like the following:
**File 1**
column1
1row1
1row2
1row3
**File 2**
column1
2row1
2row2
2row3
I set up the file DSN to point to the C: drive, and in the datasource screen I added file1.txt and file2.txt to the selected tables. The easiest thing to do then is to clear the links between the tables so that it pulls every row. It will warn you that there are multiple starting points; I don't generally recommend this, but it will work in this case, and since it's not reporting off a database it probably isn't the end of the world. If you disregard the starting-point message and add the fields to the report, when you run it you should get the following output:
1row1 2row1
1row1 2row2
1row1 2row3
1row2 2row1
1row2 2row2
1row2 2row3
1row3 2row1
1row3 2row2
1row3 2row3
From this you can change your grouping to get the output that you need.
You can also use this same connection with subreports instead of doing the linking: have the main report pull the info from file1.txt and then put a subreport in the report footer that pulls from file2.txt. This option won't have the text collated, but you'd still have it in the same report.
Hope this helps some.
It's easier than you think. I just set one up myself before writing this to make sure I was giving you the right steps. Using CR version XI and a .txt file, I followed these steps:
For each text file you want to import, make a subsection in your report (e.g. DetailsA, DetailsB, etc.). If your list of text files is constantly changing (and I don't think it is, based on your description), you'll need another method.
Make sure your text file is comma delimited and the first row contains field names. If these text files are actually text (i.e. not tables), then just put a dummy variable name in the first row so Crystal will see the text as a table of data with just 1 row.
For each text file you want to display, create a new Subreport (Insert->Subreport)
In the database selection menu, go to "Create New Connection"->"Access/Excel (DAO)"
Under 'database type', you'll see a 'text' option at the bottom of the screen.
Choose your file.
Relax! (I'm in a good mood this morning, don't know why)
I guess if you have a function that takes a file name as an argument and returns the contents of that file, you could use that function in a Crystal Reports formula.
I am not familiar with the current CR; it has been years since I last used it (version 8). In the versions I did use, such a function was not built in. What you had to do back then was create a UFL (user function library) containing the functions you needed. If I remember correctly, you had to do this using COM.
In this day and age, I guess you can extend CR using some other mechanism, perhaps writing .NET code?
I suggest you search the CR documentation for the term UFL.
Another suggestion, then:
Create a new table FILECONTENTS (filename varchar primary key, contents blob)
Create a script that, on a schedule, populates this table with the filenames and contents of all the files (assuming that there is a finite number of files, and that you have a way of knowing about them); see the sketch after this list
Modify the report datasource query to join it with the FILECONTENTS table, and add the contents field to the report.
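A sketch of the kind of script meant in the second step, assuming a DBI-capable connection (the DSN, credentials and file location are placeholders):

```perl
#!/usr/bin/perl
# Sketch only: refreshes the FILECONTENTS(filename, contents) table described
# above. The DSN, credentials and file location are placeholders; run it from
# cron or the Windows Task Scheduler.
use strict;
use warnings;
use DBI;

my $dbh = DBI->connect('dbi:ODBC:ReportDB', 'user', 'password',   # hypothetical DSN
                       { RaiseError => 1, AutoCommit => 1 });

my @files = glob('C:/reports/*.txt');                             # hypothetical location

my $insert = $dbh->prepare(
    'INSERT INTO FILECONTENTS (filename, contents) VALUES (?, ?)'
);

for my $file (@files) {
    open my $fh, '<', $file or do { warn "Cannot open $file: $!"; next };
    my $contents = do { local $/; <$fh> };    # slurp the whole file
    close $fh;

    # filename is the primary key, so replace any existing row for this file
    $dbh->do('DELETE FROM FILECONTENTS WHERE filename = ?', undef, $file);
    $insert->execute($file, $contents);
}

$dbh->disconnect;
```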
You could set up a file DSN, but this is geared toward tabular file data, not text.
How big are these text files? Do you want to display the entire contents of each file?
There is probably no easy way to dynamically read in a file from within Crystal. You will most likely have to push a dataset to the report that contains the file contents.