Using the importXlsxSheet function from the Smartsheet Node.js SDK to import a workbook into Smartsheet, I've read that the function will only import the first sheet/tab of any given workbook, and I've verified this with a test workbook. However, the "production" workbook I'm trying to upload fails with the message "You may only import sheets with up to 5000 rows." This workbook has multiple tabs/sheets, and some of them do in fact have more than 5000 rows, but the first tab (the one I want to import) has fewer than 5000 rows.
So what is going on here? If it only imports the first tab of any given workbook, why does it care that other tabs have more than 5000 rows?
This is similar to the behavior when importing an Excel file via the Smartsheet UI in your browser, which also does not allow importing an Excel file where any tab has more than 5000 rows: you get an error that at least one of the tabs has more than 5000 rows and the file can't be imported. I can pass along your feedback that you'd like the import to still succeed when the first tab has fewer than 5000 rows.
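In the meantime, one workaround is to strip the workbook down to its first tab before uploading. A minimal Node.js sketch, assuming the SheetJS xlsx package alongside the smartsheet SDK client; the file names, the extractFirstTab helper, and the query parameters shown are illustrative:

    // Minimal sketch: copy only the first tab into a temporary workbook,
    // then import that file with the SDK. Assumes: npm install xlsx smartsheet
    const XLSX = require('xlsx');
    const client = require('smartsheet').createClient({
      accessToken: process.env.SMARTSHEET_TOKEN  // token source is illustrative
    });

    function extractFirstTab(inPath, outPath) {
      const wb = XLSX.readFile(inPath);
      const first = wb.SheetNames[0];
      const single = XLSX.utils.book_new();
      XLSX.utils.book_append_sheet(single, wb.Sheets[first], first);
      XLSX.writeFile(single, outPath);
    }

    extractFirstTab('production.xlsx', 'first-tab-only.xlsx');
    client.sheets.importXlsxSheet({
      path: 'first-tab-only.xlsx',
      queryParameters: { sheetName: 'Imported sheet', headerRowIndex: 0 }
    }).then(function (result) { console.log(result); });

Since only the first tab is present in the temporary file, the row counts of the other tabs never reach the importer's validation.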
I've created a script to mass-produce copies of a Google Sheet from a master sheet. The script names the copies according to data in a separate sheet.
Within the template sheet, I've set row 2 as a named range, and what I'd like the script to do is also change the data in that row based on data I have in the master sheet.
I have previously been advised that this is possible, but I confess I have no clue how to code this into my script!
Is anyone able to offer any code which might do the job?
Many thanks
Kerry
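A minimal Apps Script sketch of one way to do this. It assumes the master sheet's tab is called 'Master' with the new file name in column A and the row-2 values in the columns after it, and that the template's named range is called 'Row2Range'; those names, and the template file ID, are all placeholders:

    // Minimal sketch: copy the template once per master row, rename the
    // copy from column A, and overwrite the named range covering row 2.
    function makeCopies() {
      var master = SpreadsheetApp.getActiveSpreadsheet().getSheetByName('Master');
      var template = DriveApp.getFileById('TEMPLATE_FILE_ID'); // placeholder ID
      master.getDataRange().getValues().forEach(function (row) {
        var copy = template.makeCopy(row[0]);         // file name from column A
        var ss = SpreadsheetApp.openById(copy.getId());
        var target = ss.getRangeByName('Row2Range');  // the named range on row 2
        // setValues needs a 2-D array whose width matches the range exactly
        target.setValues([row.slice(1, 1 + target.getNumColumns())]);
      });
    }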
I am working on a data-sync project between two systems based on a CSV file. The file contains persons, badges and profiles, which need to be imported into an access control system. Now I am facing a challenge.
The behavior of the importer is as follows:
The service checks the defined directory for a CSV file and imports the data if one is available. If there are multiple files, it fetches them together and imports them as one.
Inside the CSV it is possible that a person has two badges, for which a second row is created. The importer sees that as a duplicate entry, imports only the first entry in the file, and ignores the rest (but writes them to the log file).
I found out that if the duplicated entries are separated into multiple files and the imports run at different scheduled times, the additional badges are assigned to the person and are not marked as duplicates (one file per badge).
Fixing that in the importer itself would require a lot of changes, so I am trying to find a workaround: iterate over the CSV, check for duplicates, and create additional files where needed, so I can import those files at another time and make sure everything is imported.
Does anyone know how to do this in PowerShell?
Example:
Ori-File:
Person1,Badge1,Profile-1
Person1,Badge2,Profile-2
Person1,Badge3,Profile-3
Expected result:
File1:
Person1,Badge1,Profile-1
File2:
Person1,Badge2,Profile-2
File3:
Person1,Badge3,Profile-3
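A minimal PowerShell sketch of that split. It assumes the CSV has no header row and that the first column identifies the person; the column names given to Import-Csv are mine. Rows are dealt out so that no output file contains the same person twice:

    # Minimal sketch: put each person's i-th badge row into file i.
    $rows   = Import-Csv -Path 'Ori-File.csv' -Header 'Person','Badge','Profile'
    $groups = $rows | Group-Object -Property Person
    $max    = ($groups | Measure-Object -Property Count -Maximum).Maximum
    for ($i = 0; $i -lt $max; $i++) {
        $slice = foreach ($g in $groups) {
            if ($i -lt $g.Count) { $g.Group[$i] }  # i-th row of each person, if any
        }
        # Note: Export-Csv writes a header line; strip it if the importer
        # expects headerless files (e.g. ConvertTo-Csv | Select-Object -Skip 1).
        $slice | Export-Csv -Path ("File{0}.csv" -f ($i + 1)) -NoTypeInformation
    }

File1 then carries every person's first badge, File2 the second, and so on, matching the expected result above.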
I have three Excel files and one database connection that I need to append as part of my flow. All four datasets in the pre-append stage have just one column.
When I try to use tUnite, I get an error on the tFileInputExcel components - see the screenshot. Moreover, I cannot join the database connection with tUnite.
What am I doing wrong?
I think the problem is with the tFileExist components (I think that's what they are, on the left with the "if" links coming out), because each of them is trying to start a new flow. Once you join them with the tUnite, there can be only one start to the flow, and that goes to the start of the first branch of the merge order.
You can move the "if" logic elsewhere. Another idea is to send the output from each Excel input into a tHashOutput (linked together), then use a tHashInput to write to your DB.
I have a number of Excel files where there is a line of text (and a blank row) above the header row of the table.
What would be the best way to process the files so that I can extract the text from that row AND include it as a column when appending multiple files? Is it possible without having to process each file twice?
Example
This file was created on machine A on 01/02/2013
Task|Quantity|ErrorRate
0102|4550|6 per minute
0103|4004|5 per minute
And end up with the data from multiple similar files:
Task|Quantity|ErrorRate|Machine|Date
0102|4550|6 per minute|machine A|01/02/2013
0103|4004|5 per minute|machine A|01/02/2013
0467|1264|2 per minute|machine D|02/02/2013
I put together a small, crude sample of how it can be done. I call it crude because (a) it is not dynamic (you can add more files to process, but you need to know how many in advance of building your job), and (b) it shows the basic concept but would require more work to suit your needs. For example, in my test files I simply have "MachineA" or "MachineB" in the first line; you will need to parse that data out to obtain the machine name and the date.
But here is how my sample works. Each Excel file is set up as two inputs: for the header, one tFileInputExcel is configured to read only the first line, while the body tFileInputExcel is configured to start reading at line 4.
In the tMap they are combined (not joined) into the output schema. This is done for both the Machine A and Machine B Excels, then those tMaps are combined with a tUnite for the final output.
As you can see in the tLogRow output, the data is combined and includes the header info.
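For the parsing step mentioned above, here is a minimal sketch in plain Java (the kind of thing you could drop into a tJavaRow or adapt into a tMap expression); the exact wording of the header line is an assumption based on the example:

    // Minimal sketch: pull machine name and date out of the header line.
    import java.util.regex.Matcher;
    import java.util.regex.Pattern;

    public class HeaderParser {
        public static void main(String[] args) {
            String header = "This file was created on machine A on 01/02/2013";
            Pattern p = Pattern.compile("created on (.+) on (\\d{2}/\\d{2}/\\d{4})");
            Matcher m = p.matcher(header);
            if (m.find()) {
                String machine = m.group(1); // "machine A"
                String date    = m.group(2); // "01/02/2013"
                System.out.println(machine + "|" + date);
            }
        }
    }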
Is there a way to convince Crystal Reports to export a page / group / whatever to separate worksheets when exporting to Excel (Data Only)? I'm using the CR that came with VS2008 (version 10.5).
Thanks.
According to the documentation, you cannot export a report directly to multiple worksheets in a single Excel workbook.
When Excel's limit of 65536 rows is reached, though, the exporter does create a new worksheet, but you are not in control :)
Update
To create your own Excel merger (a rough C# sketch follows these steps):
PRE: Make sure you have the Office (Excel) SDK libraries installed.
PRE: Place the files that need to be merged in a single directory.
In a VS2008 solution:
Create a new, empty Excel workbook (variable: objNewWorkbook).
Loop through the files in the directory (where you placed the Excel files) and for each item:
  Load the file as an Excel workbook (variable: objWorkbookLoop).
  Create a new worksheet in objNewWorkbook, optionally named after objWorkbookLoop's file (variable: objNewWorksheetLoop).
  Copy the data from (probably Sheet1 in) objWorkbookLoop to objNewWorksheetLoop.
Finally, save objNewWorkbook to a file.
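A minimal C# sketch of those steps. It uses C# 4-style optional COM arguments for brevity (on VS2008 / C# 3 you must pass Type.Missing for each omitted argument), and it copies each source sheet wholesale instead of creating an empty worksheet and copying data into it, which ends with the same result; paths and names are illustrative:

    // Minimal sketch: merge the first sheet of every workbook in a folder
    // into one new workbook. Needs a reference to Microsoft.Office.Interop.Excel.
    using System.IO;
    using Excel = Microsoft.Office.Interop.Excel;

    class ExcelMerger
    {
        static void Main()
        {
            var app = new Excel.Application();
            Excel.Workbook objNewWorkbook = app.Workbooks.Add();
            foreach (string file in Directory.GetFiles(@"C:\merge", "*.xls"))
            {
                Excel.Workbook objWorkbookLoop = app.Workbooks.Open(file);
                Excel.Worksheet source = (Excel.Worksheet)objWorkbookLoop.Sheets[1];
                // Copy the whole sheet in after the new workbook's last sheet
                source.Copy(After: objNewWorkbook.Sheets[objNewWorkbook.Sheets.Count]);
                objWorkbookLoop.Close(false);
            }
            // The default empty sheet from Workbooks.Add() is still present;
            // delete it here if you don't want it in the output.
            objNewWorkbook.SaveAs(@"C:\merge\merged.xls");
            app.Quit();
        }
    }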
One of the things everybody ignores is that Excel automation is not an acceptable solution. Yes, it works (almost always), but even Microsoft recommends against using it for unattended execution: http://support.microsoft.com/kb/257757
The only safe way I know to export a Crystal report to multiple worksheets is to create a grouped report and burst it with R-Tag Report Manager. This tool does not use Excel automation, so you can run your reports at any time, including on a server; but if you are currently using other software to run your reports, you will need to switch to this one (it is not an extension).
I know this thread is an old one, but I can see links to it without a real answer. Hopefully this will help somebody.