Executing an SPSS Modeler stream with Python - spss-modeler

I need help with a Python script to run a stream in SPSS Modeler. Right now I am using the code below to export data into an Excel file, and it works, but it requires one manual step before the data is exported to the Excel file.
stream = modeler.script.stream()                 # get the stream in SPSS Modeler
output1 = stream.findByType("excelexport", "1")  # find the Excel export node named "1"
results = []
output1.run(results)                             # run it - but here I have to press a button to finish the execution (have a look at the screenshots)
output1 = stream.findByType("excelexport", "2")  # this is the next step: the export node named "2"
results = []
output1.run(results)
I would like to fully automate the stream. Please help me! Thanks a lot!

I can help you only using the legacy script. I have several Excel export nodes in my streams, and they are saved according to the month and year of reference.
set Excel_1.full_filename = "\\PATH" >< "TO" >< ".xlsx"
execute Excel_1
Take a look at the image, because Stack Overflow is messing with the written code.
And to fully automate it, you also have to set all the passwords in the initial nodes, for example:
set Database.username = "TEST"
set Database.password = "PASSWORD"

On the stream properties window -> Execution tab, have you selected 'Run this script' on stream execution?
If you make this selection, you can run the stream and produce your output without even opening the SPSS Modeler user interface (via Modeler batch).
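For reference, the Python scripting calls from the question can also be collected into a single loop over the export nodes, so the script itself has no manual steps between exports. This is only a minimal sketch, meant to run as a stream script inside Modeler; it reuses just the calls already shown in the question, and the node names "1" and "2" are taken from there.
stream = modeler.script.stream()

# Run every Excel export node in turn; extend the list with the
# names of any further excelexport nodes in the stream.
for node_name in ["1", "2"]:
    output_node = stream.findByType("excelexport", node_name)
    results = []
    output_node.run(results)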

How to use the output of a Databricks activity in a future activity inside ADF?

I have a Databricks activity in ADF and I pass the output with the below code:
dbutils.notebook.exit(message_json)
Now, I want to use this output for the next Databrick activity.
From my research, I think I should add the previous output to the base parameters of the second activity. Am I right?
And another question: how can I use this output inside the Databricks notebook?
Edit: The output is JSON, as in the screenshot below.
As per the documentation, you can consume the output of a Databricks Notebook activity in Data Factory by using an expression such as @{activity('databricks notebook activity name').output.runOutput}.
If you are passing a JSON object, you can retrieve values by appending property names.
Example: @{activity('databricks notebook activity name').output.runOutput.PropertyName}.
I reproduced the issue and it's working fine.
Below is the sample notebook.
import json
dates = ['2017-12-11', '2017-12-10', '2017-12-09', '2017-12-08', '2017-12-07']
return_json = json.dumps(dates)
dbutils.notebook.exit(return_json)
This is what the Notebook2 activity setting looks like:
Pipeline ran successfully.
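On the receiving side, one way to use that value inside the second Databricks notebook is to expose the base parameter as a widget and parse the JSON. This is only a minimal sketch; the parameter name dates_json is an assumption and must match whatever base parameter name you configure on the Notebook2 activity.
import json

# Assumption: the Notebook2 activity passes
# @{activity('databricks notebook activity name').output.runOutput}
# as a base parameter named "dates_json".
dbutils.widgets.text("dates_json", "[]")               # declare the widget with a default
dates = json.loads(dbutils.widgets.get("dates_json"))  # read and parse the passed value

print(dates)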

SpreadsheetGear - Save Specific Workbook Sheet to CSV

I am opening an existing Excel file using SpreadsheetGear, using the following code:
SpreadsheetGear.IWorkbook xlBook = SpreadsheetGear.Factory.GetWorkbook(fileName, System.Globalization.CultureInfo.CurrentCulture);
xlBook.SaveAs(fileNameCSV, SpreadsheetGear.FileFormat.CSV);
This works, but the saved CSV file contains the wrong sheet.
Can anyone help with a code snippet on how to open an Excel file in SpreadsheetGear, then save only a SPECIFIC sheet to a CSV file?
Please note I am working with SpreadsheetGear and want a solution for that library. Thanks!
The IWorksheet interface includes a SaveAs(...) method for just this purpose:
using SpreadsheetGear;
using System.Globalization;
...
IWorkbook xlBook = Factory.GetWorkbook(fileName, CultureInfo.CurrentCulture);
xlBook.Worksheets["My Sheet"].SaveAs(fileNameCSV, FileFormat.CSV);
I'll also mention that there is an IRange.SaveAs(...) method as well, if you want to save just a particular range to CSV / UnicodeText (tab-delimited).

Writing a string to a specific dir using Chaquopy 4.0.0

I am trying a proof of concept here:
Using Chaquopy 4.0.0 (with Python 2.7.15), I am trying to write a string to a file in a specific folder (getFilesDir()) using Python, then read it back in via Android.
To check whether the file was written, I am checking for the file's length (see code below).
I am expecting to get a length larger than 0 (to verify that the file has indeed been written to the specified location), but I keep getting 0.
Any help would be greatly appreciated!!
main.py:
import os.path
save_path = "/data/user/0/$packageName/files/"
name_of_file = raw_input("test")
completeName = os.path.join(save_path, name_of_file+".txt")
file1 = open(completeName, "w")
toFile = raw_input("testAsWell")
file1.write(toFile)
file1.close()
OnCreate:
if (!Python.isStarted()) {
    Python.start(new AndroidPlatform(this));
    File file = new File(getFilesDir(), "test.txt");
    Log.e("TEST", String.valueOf(file.length()));
}
It's not clear whether you've based your app on the console example, so I'll give an answer for both cases.
If you have based your app on the console example, then the code in onCreate will run before the code in main.py, and the file won't exist the first time you start the activity. It should exist the second time: if it still doesn't, try using the Android Studio file explorer to see what's in the files directory.
If you haven't based your app on the console example, then you'll need to execute main.py manually, like this:
Python.getInstance().getModule("main");
Also, without the input UI which the console example provides, you won't be able to read anything from stdin. So you'll need to do one of the following:
Base your app on the console example; or
Replace the raw_input calls with a hard-coded file name and content (see the sketch after this list); or
Create a normal Android UI with a text box or something, and get input from the user that way.
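For that second option, here is a minimal sketch of main.py with the raw_input calls replaced by hard-coded values. It keeps the save_path placeholder from the question, so the $packageName part still has to be replaced with the real package name (or with a path passed in from getFilesDir()); the file name test.txt matches the one checked in onCreate.
import os.path

# Assumption: same placeholder path as in the question; substitute your
# actual package name, or pass the value of getFilesDir() in from Java.
save_path = "/data/user/0/$packageName/files/"
complete_name = os.path.join(save_path, "test.txt")  # hard-coded file name

with open(complete_name, "w") as f:
    f.write("testAsWell")                            # hard-coded content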

How to call Java code in a Talend job

I'm new to Talend. I have Java code which needs to get data from files, and I want to use it in a Talend job. I am now facing the problem of how to use this Java code in Talend: I created a routine, but I'm having trouble creating the JAR files, and I also don't know how I should use this routine in my job.
I don't know exactly what you want to do, but normally you would use the built-in Talend components for reading a file.
Depending on your File you are going to read you can use:
tFileInputRaw - for reading a file line by line
tFileInputDelimited - for reading CSV files (getting a set of columns)
tFileInputExcel - for XLS/XLSX files (getting a set of columns)
If you want to use your code anyway, you have to make your routine available to your job. To do that, close your job, right-click on the job and choose "Setup routine dependencies". You should be able to add a routine by clicking the green "+" button.
After that you are able to use your functions in a tJava or tJavaRow component with routines.ClassName.functionName().

How to add input parameters for MATLAB application execution

I'm writing a script in MATLAB and I need to execute a program (written in C) as one of the lines (it generates an output file).
My current code is:
!collect2.exe infile.csv <-- I want to be able to change this to a variable but I can't
My question is, is there a way for me to either:
A. put a variable in place of infile.csv such as
!collect2.exe filedir
or
B. run multiple files without a variable
Thanks in advance :)
Edit:
filedir = input('What is the directory with quotes?');
!cd /
cmdString = ['cd ', filedir];
system(cmdString);
Edit #2:
Never mind, I fixed the issue. Thanks for all of your help!
Use the system function.
Example:
filename = 'infile.csv';
cmdString = ['collect2.exe ', filename];
system(cmdString);
You can create a variable called filename; per your example it would be
filename='infile.csv';
Or alternatively, have you tried using this to launch multiple files at once?
!collect.exe {infile.csv,infile1.csv,infile2.csv};