How to export statistics_log data cumulatively to the desired Excel sheet in AnyLogic?

I have selected to export tables at the end of model execution to an Excel file, and I would like that data to accumulate on the same Excel sheet after every stop and start of the model. As of now, every stop and start just exports that 1 run's data and overwrites what was there previously. I may be approaching the method of exporting multiple runs wrong/inefficiently but I'm not sure.

The best method is to export the raw data, as you do (if it is not too large).
However, two improvements:
1. Manage your output data yourself, i.e. do not rely on the standard export tables but write only the data that you really need. Check this help article to learn how to write your own data.
2. In your custom output data tables, add additional identification columns such as date_of_run. I often use iteration and replication columns to also identify which run the data stems from (see the sketch after this list).
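A minimal sketch of such a write using AnyLogic's database query DSL; the table my_stats_table, its columns, and the currentIteration/currentReplication/myKpi values are assumptions to adapt to your own output table:
// Hypothetical custom output table "my_stats_table" with identification columns
insertInto(my_stats_table)
    .columns(my_stats_table.date_of_run,
             my_stats_table.iteration,
             my_stats_table.replication,
             my_stats_table.kpi_value)
    .values(new Date(),          // when this run was executed
            currentIteration,    // which iteration of the experiment
            currentReplication,  // which replication within that iteration
            myKpi)               // the value you actually want to keep
    .execute();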
Custom CSV approach
An alternative approach is to create your own CSV file programmatically; this is possible with Java code. Then you can create a new one (with a custom filename) after any run:
First, define a "Text file" element (it is referred to as file in the code below).
Then, use the code below to create your own CSV with a custom name and write to it:
File outputDirectory = new File("outputs");
outputDirectory.mkdir();
String outputFileNameWithExtension = outputDirectory.getPath() + File.separator + "output_file.csv";
file.setFile(outputFileNameWithExtension, Mode.WRITE_APPEND);

// create header
file.println("col_1" + "," + "col_2");

// write data from the dbase table
List<Tuple> rows = selectFrom(my_dbase_table).list();
for (Tuple row : rows) {
    file.println(row.get(my_dbase_table.col_1) + "," +
                 row.get(my_dbase_table.col_2));
}
file.close();
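If, instead of appending everything to one file, you prefer a separate file per run, one option is to put a timestamp into the name; a small sketch reusing the same file element (the name pattern is an assumption):
// Hypothetical variant: one CSV per run, distinguished by a timestamp in the file name
String runStamp = new java.text.SimpleDateFormat("yyyyMMdd_HHmmss").format(new java.util.Date());
file.setFile(outputDirectory.getPath() + File.separator + "output_" + runStamp + ".csv", Mode.WRITE_APPEND);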

Related

Issue with loading multiple SQL query results sets in datatables with Powershell

I am doing 2 separate SQL queries on separate databases / connections in a PowerShell script. The goal is to export the results of both queries into a single CSV file.
What I am doing now is:
# Create a data table for Clients
$ClientsTable = new-object "System.Data.DataTable"
# Create text commands
$ClientsCommand1 = $connection1.CreateCommand()
$ClientsCommand1.CommandText = $ClientsQuery1
$ClientsCommand2 = $connection2.CreateCommand()
$ClientsCommand2.CommandText = $ClientsQuery2
# Get Clients results
$ClientsResults1 = $ClientsCommand1.ExecuteReader()
$ClientsResults2 = $ClientsCommand2.ExecuteReader()
# Load Clients in data table
$ClientsTable.Load($ClientsResults1)
$ClientsTable.Load($ClientsResults2)
# Export Clients data table to CSV
$ClientsTable | export-csv -Encoding UTF8 -NoTypeInformation -delimiter ";" "C:\test\clients.csv"
where $connection1 and $connection2 are open System.Data.SqlClient.SqlConnection objects.
Both queries work fine and both output data with exactly the same column names. If I export the 2 result sets to 2 separate CSV files, all is fine.
But loading the results in the data table as above fails with the following message:
Failed to enable constraints. One or more rows contain values violating non-null, unique, or foreign-key constraints.
If instead I switch the order in which I load data into the data tables, like
$ClientsTable.Load($ClientsResults2)
$ClientsTable.Load($ClientsResults1)
(load the second result set before the first one), the error goes away and my CSV is generated without any problem, containing the data from both queries. I cannot think of why appending the data one way would trigger this error while the other way works fine.
Any idea?
I'm skeptical that reversing the order works. More likely, it's doing something like appending to the CSV file that was already created by the first attempt.
It is possible, though, that different primary key definitions in the original data could produce the results you're seeing. DataTable.Load() can do unexpected things when pulling data from an additional source. It will try to merge the data rather than simply append it, using different matching strategies depending on the overload and arguments. If the primary key inferred for one result set causes nothing to match (so no records merge), while the primary key for the other causes everything to match, that might explain it.
If you want to just append the results, what you want to do instead is Load() the first result into the DataTable, export to CSV, clear the table, load the second result into the table, and then export again in append mode, as sketched below.
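A minimal sketch of that sequence, reusing the commands from the question; Export-Csv -Append requires PowerShell 3.0 or later:
# Load and export the first result set (this writes the header row)
$ClientsTable = New-Object "System.Data.DataTable"
$ClientsTable.Load($ClientsCommand1.ExecuteReader())
$ClientsTable | Export-Csv -Encoding UTF8 -NoTypeInformation -Delimiter ";" "C:\test\clients.csv"

# Drop the rows (the inferred schema stays) and load the second result set
$ClientsTable.Clear()
$ClientsTable.Load($ClientsCommand2.ExecuteReader())

# -Append adds the new rows to the existing file without repeating the header
$ClientsTable | Export-Csv -Append -Encoding UTF8 -NoTypeInformation -Delimiter ";" "C:\test\clients.csv"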

Load the temp table data into a text file using temp table's handle

I have created a temp table. I want to load all the data of the temp table, including the field names, into a text file using the temp table's handle. What can I do?
Using the default buffer handle of the temp table (hTable:DEFAULT-BUFFER-HANDLE), you can loop through the fields of the table:
DO i = 1 TO hBufferHandle:NUM-FIELDS:
    /* e.g. hBufferHandle:BUFFER-FIELD(i):NAME or :BUFFER-VALUE */
    ...
END.
You would do that twice, once to output the field names as headers to your text file, and then once for each record of the temp table to export the values.
You will have to handle things like extents.
You will have to deal with data types, making decisions on what to do, depending on what you have in your table.
In theory it's not very complex code, and you could write a simple reusable library to do the work.
Use the documentation to find the full syntax; a rough sketch of the whole approach follows.
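A rough, untested sketch, assuming hTable is the temp-table handle and a semicolon-delimited output file; extents, data types and unknown values are deliberately not handled here:
DEFINE VARIABLE hBuffer AS HANDLE  NO-UNDO.
DEFINE VARIABLE hQuery  AS HANDLE  NO-UNDO.
DEFINE VARIABLE i       AS INTEGER NO-UNDO.

hBuffer = hTable:DEFAULT-BUFFER-HANDLE.

OUTPUT TO "tt-export.txt".

/* header line: field names */
DO i = 1 TO hBuffer:NUM-FIELDS:
    PUT UNFORMATTED hBuffer:BUFFER-FIELD(i):NAME.
    IF i < hBuffer:NUM-FIELDS THEN PUT UNFORMATTED ";".
END.
PUT SKIP.

/* one line per record */
CREATE QUERY hQuery.
hQuery:SET-BUFFERS(hBuffer).
hQuery:QUERY-PREPARE("FOR EACH " + hBuffer:NAME).
hQuery:QUERY-OPEN().
REPEAT:
    hQuery:GET-NEXT().
    IF hQuery:QUERY-OFF-END THEN LEAVE.
    DO i = 1 TO hBuffer:NUM-FIELDS:
        PUT UNFORMATTED STRING(hBuffer:BUFFER-FIELD(i):BUFFER-VALUE).
        IF i < hBuffer:NUM-FIELDS THEN PUT UNFORMATTED ";".
    END.
    PUT SKIP.
END.
hQuery:QUERY-CLOSE().
DELETE OBJECT hQuery.

OUTPUT CLOSE.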

How to merge multiple output from single table using talend?

I have an Employee table in a database which has a gender column, and I want to filter the Employee data by gender into three columns in Excel.
I'm getting this output using the Talend job below (Structure 1).
I want to optimize the structure above, but trying to do so I have been stuck with another scenario: here I'm getting the Employee data gender-wise, but in three different files. Is there any way to achieve the same Excel result from one SQL input and, after the mapping, get it in a single output Excel file?
Structure 2:
NOTE: I don't want to use the same input table many times. I want to get the same output using a single table and a single output Excel file, so please suggest a component that would be useful for me.
Thanks in advance!

Count the number of rows for each file along with the file name in Talend

I have built a job that reads the data from a file and, based on the unique values of a particular column, splits the data set into many files.
I am able to achieve that requirement with the job below.
Now, to this job which splits the output into multiple files, I want to add a subjob that gives me two columns.
In the first column I want the names of the files that I created in my main job, and in the second column I want the number of rows each created output file has.
To achieve this I used tFlowMeter, and to catch the count result I used tFlowMeterCatcher, which gives me the correct row count for the corresponding output files, but puts the last file name into all the rows generated for the counts.
How can I get the correct file names and the corresponding row counts?
If you use the following directions, your job will in the end have additional components like so:
Use a tJavaFlex directly after the tFileOutputDelimited on main. It should look like this:
Start Code: int countRows = 0;
Main Code: countRows = countRows + 1;
End Code: globalMap.put("rowCount", countRows);
Connect this component OnComponentOk with the first component of a new subjob. This subjob holds a tFixedFlowInput, a tJavaRow and a tBufferOutput.
The tFixedFlowInput is just here so that the OnComponentOk can be connected, nothing has to be altered. In tJavaRow you put the following:
output_row.filename = (String)globalMap.get("row7.newColumn");
//or whatever is your row variable where the filename is located
output_row.rowCount = (Integer)globalMap.get("rowCount");
In the schema of the tJavaRow, add the two columns used above: filename (String) and rowCount (Integer).
Simply add a tBufferOutput now at the end of the first subjob.
Now create another new subjob with a tBufferInput and whatever components you need to process and store the data. Connect the very first component of your job via OnSubjobOk to the tBufferInput component. I used a tLogRow to show the result (with my randomly created fake data):
.---------------+--------.
| LogFileData |
|=--------------+-------=|
|filename |rowCount|
|=--------------+-------=|
|fileblerb1.txt |27 |
|fileblerb29.txt|14 |
|fileblerb44.txt|20 |
'---------------+--------'
NOTE: Keep in mind that if you add a header to the file (Include Header checked in tFileOutputDelimited), the job might need to be changed (simply set int countRows = 1; or whatever you would need). I did not test this case.
You can use the tFileProperties component to store the generated file name in an intermediate Excel file after the first subjob, and use this Excel file in your second subjob.
Thanks!

Export ado.net dataset into excel worksheets using softartisans Excelwriter

I have an ADO.NET DataSet that has three DataTables; let's say the DataSet is named Customer and the tables are Accounts, Purchases and Profile. I would like to export them using SoftArtisans ExcelWriter to worksheets using templates.
Example
DataSet myDataSet = new DataSet();
myDataSet.Tables.Add("Customer");
myDataSet.Tables.Add("Accounts");
myDataSet.Tables.Add("Purchases");
xlt.BindData(myDataSet,null , xlt.CreateDataBindingProperties());
I would like to export these tables into separate Excel worksheets.
The BindData method of OfficeWriter's ExcelTemplate object has a number of overloads. The one that accepts a DataSet does not automatically import every DataTable in the DataSet; it only imports the first one. If you want to use every DataTable in the DataSet, you should use the overload that takes a DataTable (see the documentation). For example:
//The 2nd parameter is the dataSource name that must correspond with the first
//part of the datamarker in your template workbook (i.e. %%=Customer.FirstName)
xlt.BindData(DS.Tables["Customer"], "Customer", xlt.CreateDataBindingProperties());
You could also do something in a loop, like this:
foreach (DataTable table in myDataSet.Tables)
{
    xlt.BindData(table, table.TableName, xlt.CreateDataBindingProperties());
}