Google Apps Script: Export Dashboard to CSV

I've created a Dashboard similar to the example provided on the Google Apps Script Help Center. It contains controls which filter the spreadsheet that contains the data. I would like to add an option to the dashboard that would allow users to export the filtered view to a new CSV file or Google Sheets file saved in their Drive; however, I've no idea how to do this. Any suggestions?

Here's a code snippet from the Google tutorial:
function saveAsCSV() {
  // Prompts the user for the file name
  var fileName = Browser.inputBox("Save CSV file as (e.g. myCSVFile):");
  // Check that the file name entered wasn't empty
  if (fileName.length !== 0) {
    // Add the ".csv" extension to the file name
    fileName = fileName + ".csv";
    // Convert the range data to CSV format
    var csvFile = convertRangeToCsvFile_(fileName);
    // Create a file in the Docs List with the given name and the CSV data
    DocsList.createFile(fileName, csvFile);
  } else {
    Browser.msgBox("Error: Please enter a CSV file name.");
  }
}
View the full tutorial and code here.
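Note that DocsList has since been deprecated. Here is a minimal sketch of the same idea using the current DriveApp service, assuming the filtered view is the active sheet (the function name and the simple comma-join are illustrative, not from the tutorial):

function saveFilteredViewAsCsv() {
  var fileName = Browser.inputBox("Save CSV file as (e.g. myCSVFile):");
  if (fileName.length === 0) {
    Browser.msgBox("Error: Please enter a CSV file name.");
    return;
  }
  var sheet = SpreadsheetApp.getActiveSheet();
  // Keep only the rows that the dashboard's filter leaves visible
  var rows = sheet.getDataRange().getValues().filter(function(row, i) {
    return !sheet.isRowHiddenByFilter(i + 1);
  });
  // Naive CSV join: does not quote fields containing commas or newlines
  var csv = rows.map(function(row) { return row.join(","); }).join("\n");
  // Creates the CSV file in the root of the user's Drive
  DriveApp.createFile(fileName + ".csv", csv, MimeType.CSV);
}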

Related

How to change the file name of a Google Form using Apps Script

For a Google Spreadsheet it is easy to change the filename using this script:
var file = SpreadsheetApp.openById(Idfile);
file.rename("new name");
By analogy, I expected this to work for changing the filename of a form:
var form = FormApp.openById(Idform);
form.rename("new name");
However, this does not work!
Does anyone know how to change the filename of a Google Form using Apps Script?
How about this method?
DriveApp.getFileById(id).setName("new name");
This method can also rename a Spreadsheet.
Reference:
setName(name)
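For context, FormApp's Form class has no rename method; its setTitle() changes the form's displayed title rather than the Drive filename, which is why the DriveApp call above is what renames the file. A small sketch (with a hypothetical id variable):

// Renames the underlying Drive file (what you see in the Drive file list)
DriveApp.getFileById(id).setName("new name");
// By contrast, this only changes the title shown at the top of the form
FormApp.openById(id).setTitle("new title");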

Is there a way to read an Excel file using Dataflow

Is there a way to read an Excel file stored in a GCS bucket using Dataflow?
I would also like to know if we can access the metadata of an object in GCS using Dataflow. If so, how?
CSV files are a common export format from Excel, and they can be split and read line by line, so they are ideal for Dataflow. You can use TextIO.Read to pull in each line of the file, then parse each line as a CSV record.
If you want to use a binary Excel format instead, then I believe you would need to read in the entire file and use a library to parse it. I recommend using CSV files if you can.
As for reading the GCS metadata: I don't think you can do this with TextIO, but you could call the GCS API directly to access the metadata. If you only do this for a few files at the start of your program, it will work and not be too expensive. If you need to read many files like this, you'll be adding an extra RPC per file.
Be careful not to read the same file multiple times; I suggest reading each file's metadata once and then writing the metadata out to a side input. Then in one of your ParDos you can access the side input for each file.
Useful links:
ETL & Parsing CSV files in Cloud Dataflow
https://cloud.google.com/dataflow/java-sdk/JavaDoc/com/google/cloud/dataflow/sdk/io/TextIO.Read
https://cloud.google.com/dataflow/model/par-do#side-inputs
For reference, here is an example that reads an Excel workbook from a GCS bucket with Apache POI and writes each sheet out as a local CSV file:

import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.nio.channels.Channels;
import java.util.Iterator;
import com.google.cloud.ReadChannel;
import org.apache.poi.openxml4j.exceptions.InvalidFormatException;
import org.apache.poi.ss.usermodel.Cell;
import org.apache.poi.ss.usermodel.Row;
import org.apache.poi.ss.usermodel.Sheet;
import org.apache.poi.ss.usermodel.Workbook;
import org.apache.poi.ss.usermodel.WorkbookFactory;

private static final int BUFFER_SIZE = 64 * 1024;

private static void printBlob(com.google.cloud.storage.Storage storage, String bucketName,
        String blobPath) throws IOException, InvalidFormatException {
    try (ReadChannel reader = storage.reader(bucketName, blobPath)) {
        InputStream inputStream = Channels.newInputStream(reader);
        Workbook wb = WorkbookFactory.create(inputStream);
        StringBuffer data = new StringBuffer();
        for (int i = 0; i < wb.getNumberOfSheets(); i++) {
            String fName = wb.getSheetAt(i).getSheetName();
            File outputFile = new File("D:\\excel\\" + fName + ".csv");
            FileOutputStream fos = new FileOutputStream(outputFile);
            // Use the generic Sheet interface so both .xls and .xlsx workbooks work
            Sheet sheet = wb.getSheetAt(i);
            Iterator<Row> rowIterator = sheet.iterator();
            data.delete(0, data.length());
            while (rowIterator.hasNext()) {
                // Get each row
                Row row = rowIterator.next();
                data.append('\n');
                // Iterate through each column of the row
                Iterator<Cell> cellIterator = row.cellIterator();
                while (cellIterator.hasNext()) {
                    Cell cell = cellIterator.next();
                    // Check the cell format
                    switch (cell.getCellType()) {
                        case Cell.CELL_TYPE_NUMERIC:
                            data.append(cell.getNumericCellValue() + ",");
                            break;
                        case Cell.CELL_TYPE_STRING:
                            data.append(cell.getStringCellValue() + ",");
                            break;
                        case Cell.CELL_TYPE_BOOLEAN:
                            data.append(cell.getBooleanCellValue() + ",");
                            break;
                        case Cell.CELL_TYPE_BLANK:
                            data.append("" + ",");
                            break;
                        default:
                            data.append(cell + ",");
                    }
                }
            }
            fos.write(data.toString().getBytes());
            // Close the stream so the CSV file is flushed to disk
            fos.close();
        }
    }
}
You should be able to read the metadata of a GCS file by using the GCS API, but you would need the filenames. You can do this with a ParDo or other transform over a PCollection<String> that holds the filenames.
We don't have any default readers for Excel files. You can parse a CSV file using a text input (see ETL & Parsing CSV files in Cloud Dataflow).
I'm not very knowledgeable about Excel and how the file format is stored. If you want to process one file at a time, you can use a PCollection<String> of filenames and then use some library to parse each Excel file.
If an Excel file can be split into easily-parallelizable parts, I'd suggest you take a look at this doc (https://beam.apache.org/documentation/io/authoring-overview/). (If you are still using the Dataflow SDK, it should be similar.) It may be worth splitting into smaller chunks before reading to get more parallelism out of your pipeline. In that case you could use IOChannelFactory to read from the file.

Mirth: Send/transfer files like PDF or ZIP

I want to transfer a PDF/ZIP file through Mirth.
I am using a File Reader connector as the source and a File Writer as the destination connector.
Can anyone help me with how to send/transfer a PDF/ZIP file?
Set Incoming data: Delimited text
File type: Binary
The outgoing file type also has to be Binary, otherwise the data is corrupted.
The outgoing template has to be ${message.rawData}
See the screenshots for more info:
Channel settings [summary]
Channel settings [Source]
Channel settings [Destination]
// Java IO and Apache Commons IO are available to Mirth's JavaScript
importPackage(java.io);
importPackage(org.apache.commons.io);

var source = "D:/ftproot/PDF/Source";
var fileName = $('fieldId') + ".pdf";
var srcpath = source + "/" + fileName;
var directory = "D:/ftproot/PDF/Target";
var outFileName = $('fieldId') + ".pdf";
var destination = directory + "/" + outFileName;

var inputFile = new File(srcpath);
var outputFile = new File(destination);
// Copy the PDF from the source path to the destination path
FileUtils.copyFile(inputFile, outputFile);
To transfer your PDF file from one location to another, this is all you need: place the above code in your destination transformer.
The code picks up the PDF file from D:/ftproot/PDF/Source and copies it to the other location, D:/ftproot/PDF/Target.
You can read the file in Mirth using:
importPackage(java.io);
importPackage(org.apache.commons.io);
and copy the PDF file using:
FileUtils.copyFile(inputFile, outputFile);
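A slightly more defensive sketch (srcpath and destination as defined above; Mirth's built-in logger is assumed) checks that the source file exists before copying:

importPackage(java.io);
importPackage(org.apache.commons.io);

var inputFile = new File(srcpath);
if (inputFile.exists()) {
    // Copy only when the source PDF is actually present
    FileUtils.copyFile(inputFile, new File(destination));
} else {
    // logger is available in Mirth JavaScript contexts
    logger.error('Source PDF not found: ' + srcpath);
}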

Dynamically changing CSV data source using ApplyLogOnInfo

I have a .rpt file that I created by setting its data source to a text (CSV) file using the (Access/Excel (DAO)) option.
Now I want the same .rpt file loaded by C# code; each time, my C# code will change the input file, and I want a new report generated based on the data in the new text file.
I am using the following code, and when I export the report to a PDF document it still displays the data from the old input file.
I have checked off the options in the .rpt file that say "save data with report" and "verify on first refresh".
What am I missing here?
CODE:
cryRpt = new ReportDocument();
cryRpt.Load(reportfile);
Tables tables = cryRpt.Database.Tables;
TableLogOnInfo tableLogonInfo;
foreach (Table table in cryRpt.Database.Tables)
{
    tableLogonInfo = table.LogOnInfo;
    tableLogonInfo.TableName = "MYdata_BS_NEW#csv";
    table.Location = "MYdata_BS_NEW#csv";
    table.ApplyLogOnInfo(tableLogonInfo);
}
cryRpt.Refresh();
// After this I export the report to a PDF document.

Mirth: How to get source file directory from file reader channel

I have a File Reader channel picking up an XML document. By default, a File Reader channel populates 'originalFilename' in the channel map, which only gives me the name of the file, not the full path. Is there any way to get the full path, without having to hard code something?
You can get any of the Source reader properties like this:
var sourceFolder = Packages.com.mirth.connect.server.controllers.ChannelController.getInstance().getDeployedChannelById(channelId).getSourceConnector().getProperties().getProperty('host');
I posted it in the Mirth forums with a list of the other properties you can access:
http://www.mirthcorp.com/community/forums/showthread.php?t=2210
You could put the directory in a channel deploy script:
globalChannelMap.put("pickupDirectory", "/Mirth/inbox");
then use that map in both your source connector:
${pickupDirectory}
and in another channel script:
function getFileLastModified(fileName) {
    var directory = globalChannelMap.get("pickupDirectory").toString();
    var fullPath = directory + "/" + fileName;
    var file = new Packages.java.io.File(fullPath);  // 'new' is required to construct the Java File
    // HH (24-hour clock) rather than hh, so afternoon timestamps are unambiguous
    var formatter = new Packages.java.text.SimpleDateFormat("yyyyMMddHHmmss");
    formatter.setTimeZone(Packages.java.util.TimeZone.getTimeZone("UTC"));
    return formatter.format(file.lastModified());
}
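For example, you could call it from a transformer like this (hypothetical usage; 'originalFilename' is the value the File Reader puts in the channel map):

// Stamp the source file's last-modified time into the channel map
// so downstream steps can use it
var lastModified = getFileLastModified($('originalFilename'));
channelMap.put('sourceFileLastModified', lastModified);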
Unfortunately, there is no variable or method for retrieving the file's full path. Of course, you probably already know the path, since you would have had to provide it in the Directory field. I experimented with using the preprocessor to store the path in a channel variable, but the Directory field is unable to reference variables. Thus, you're stuck having to hard code the full path everywhere you need it.