Import daily repeating schedule in AnyLogic from Excel

I was wondering if there is an option to import an Excel database, as shown in the figure below, into a schedule in AnyLogic. I know this is possible when you use dates, but this is a repeating schedule that is the same every Monday (see figure 2). I can of course type it in by hand, but I was wondering if there is a faster way by just importing the Excel file as a database.

Related

SAP incremental data load in Azure Data Factory

I'm trying to implement an Extractor pipeline in ADF, with several Copy Data activities (SAP ERP Table sources). To save some processing time, I'd like to have some deltas (incremental load). What's the best way to implement this?
What I'm trying at the moment is just to use the "RFC table options" in each Copy Data activity. However, this seems to be quite limited (only very simple queries are allowed). Also, each SAP ERP table requires a different query. I found three different situations regarding table field formats:
Timestamp in milliseconds (e.g. COVP);
Timestamp in YYYYMMDDHHMMSS (e.g. FAGLFLEXA);
Last change date and last change time, in separate fields (e.g. CATSDB)
Has anyone ever tried this? What would you advise?
Thanks!
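To make the delta idea concrete: the "RFC table options" string appears to accept a simple OPTIONS-style filter, so each table needs its own filter built to match its change-tracking fields. Here is a rough Python sketch of how the three filter variants could be generated from a watermark; the field names (TS_MS, TS_YMD, CHG_DATE, CHG_TIME) are placeholders, not the real SAP columns.

    # Sketch only: build per-table delta filters for the "RFC table options"
    # of each Copy Data activity. The field names below are placeholders for
    # the real change-tracking columns of COVP, FAGLFLEXA and CATSDB.
    from datetime import datetime

    watermark = datetime(2023, 1, 1)  # last successful load, e.g. kept in a control table

    def filter_millisecond_ts(field="TS_MS"):
        # case 1: timestamp with millisecond precision
        return f"{field} GE '{watermark:%Y%m%d%H%M%S}000'"

    def filter_yyyymmddhhmmss(field="TS_YMD"):
        # case 2: timestamp stored as YYYYMMDDHHMMSS
        return f"{field} GE '{watermark:%Y%m%d%H%M%S}'"

    def filter_date_plus_time(date_field="CHG_DATE", time_field="CHG_TIME"):
        # case 3: last change date and last change time in separate fields
        d, t = f"{watermark:%Y%m%d}", f"{watermark:%H%M%S}"
        return f"{date_field} GT '{d}' OR ( {date_field} EQ '{d}' AND {time_field} GE '{t}' )"

The generated strings could then be passed to each Copy Data activity as a pipeline parameter.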

Any way to import a CSV file to PostgreSQL without creating new tables from scratch?

I have two files and want to import them into PostgreSQL using pgAdmin 4. The only way I know is to create the tables and define every column manually from scratch, but this will take a lot of time and I have to hope I don't miss anything. Is there a way to simply import the files so that PostgreSQL treats the headers as the columns and creates the tables for me, saving me tons of time? I'm new to all of this, so my question may be phrased badly, but this is the best I can ask.
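If a small script is an option alongside pgAdmin 4, pandas can do exactly that: it reads the headers, creates the table and loads the rows in one call. A minimal sketch, where the connection string, file names and table names are placeholders:

    # Minimal sketch: let pandas create each table from the CSV headers.
    # Connection string, file names and table names are placeholders.
    import pandas as pd
    from sqlalchemy import create_engine

    engine = create_engine("postgresql+psycopg2://user:password@localhost:5432/mydb")

    for csv_path, table_name in [("file1.csv", "table_one"), ("file2.csv", "table_two")]:
        df = pd.read_csv(csv_path)
        # to_sql creates the table (column names taken from the headers,
        # types inferred from the data) and inserts all rows
        df.to_sql(table_name, engine, if_exists="replace", index=False)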

What is the best way to get data from the AS400 into Excel with specific fields?

We have a screen that shows a number of fields from different tables. I need to extract those fields from the tables and put the data into an Excel sheet. How can I do this?
Is this a "one-time" data-transfer, or will it be an ongoing, automated process?
IMHO, for a "one-time" data-extraction, the easiest way to accomplish that is using ODBC. Historically, I've used ODBC to import the data into Microsoft Access. From there, it's extremely easy to export the data into Excel.
For a regularly-occurring, automated method, I think using the CpyToImpF command works best. It takes a little trial and error to get the process working, but once you've got it set up, it can run in a regularly scheduled job to export the data. (Google the syntax for the command, and try it yourself.)
HTH,
Dave
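For what it's worth, the ODBC route Dave describes can also be scripted end to end, skipping the Access step. A rough sketch, assuming the IBM i Access ODBC driver is installed; the system name, credentials, library, table and field names are placeholders:

    # Rough sketch: pull the specific fields over ODBC and write them to Excel.
    # Driver name, system, credentials, library/table and fields are placeholders.
    import pyodbc
    import pandas as pd

    conn = pyodbc.connect(
        "DRIVER={IBM i Access ODBC Driver};SYSTEM=my-as400;UID=user;PWD=password"
    )
    df = pd.read_sql("SELECT FIELD1, FIELD2, FIELD3 FROM MYLIB.MYTABLE", conn)
    df.to_excel("extract.xlsx", index=False)  # needs openpyxl installed
    conn.close()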
There is a tool called iEXL which will create .xlsx spreadsheets natively on the AS400.
If you are after only the data from one screen, we will help you, and the price would be adjusted to take this into account.
WWW.iEXLSOFTWARE.COM

How to import datasets as CSV files to Power BI using the REST API?

I want to automate the import process in Power BI, but I can't find how to publish a CSV file as a dataset.
I'm using a C# solution for this.
Is there a way to do that?
You can't directly import CSV files into a published dataset in the Power BI Service. The AddRowsAPIEnabled property of datasets published from Power BI Desktop is false, i.e. this API is disabled. Currently the only way to enable this API is to create a push dataset programmatically using the API (or create a streaming dataset from the site). In this case you will be able to push rows to it (read the CSV file and push batches of rows, using C# or some other language, even PowerShell), and you will be able to create reports using this dataset. However, there are lots of limitations, and you should take care of cleaning up the dataset (to avoid reaching the limit of 5 million rows; you can't delete "some" of the rows, only truncate the whole dataset), or make it basicFIFO and lower the limit to 200k rows.
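To illustrate the push-dataset route: the question mentions C#, but the REST calls are the same from any language, so here is the shape of it as a Python sketch. The access token, dataset name, columns, CSV layout and batch size are all placeholders.

    # Sketch of the push-dataset approach: create a dataset with
    # defaultMode "Push", then push the CSV rows in batches.
    # Token, names, columns and the CSV layout are placeholders.
    import csv
    import requests

    token = "<Azure AD access token>"
    headers = {"Authorization": f"Bearer {token}", "Content-Type": "application/json"}
    base = "https://api.powerbi.com/v1.0/myorg"

    dataset_def = {
        "name": "SalesFromCsv",
        "defaultMode": "Push",
        "tables": [{
            "name": "Sales",
            "columns": [
                {"name": "Date", "dataType": "DateTime"},
                {"name": "Amount", "dataType": "Double"},
            ],
        }],
    }
    dataset_id = requests.post(f"{base}/datasets", json=dataset_def, headers=headers).json()["id"]

    with open("sales.csv", newline="") as f:
        rows = list(csv.DictReader(f))  # values come in as strings; convert types as needed
    for i in range(0, len(rows), 10000):  # push in batches of rows
        requests.post(
            f"{base}/datasets/{dataset_id}/tables/Sales/rows",
            json={"rows": rows[i:i + 10000]},
            headers=headers,
        )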
However, a better solution would be to automate the import of these CSV files into some database and make the report read the data from there. For example, import these files into Azure SQL Database or Databricks and use that as the data source for your report. You can then schedule the refresh of this dataset (in case you use imported data) or use DirectQuery.
After a Power BI update, it is now possible to import the dataset without importing the whole report.
So what I do is import the new dataset and update the parameters that I set up for the CSV file source (stored in a data lake).
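That parameter update can also be driven through the REST API rather than the Service UI; a sketch of what the calls could look like, with the workspace and dataset IDs, the parameter name and the token as placeholders:

    # Sketch: point the CSV-source parameter at a new data lake path, then
    # refresh the dataset. IDs, parameter name and token are placeholders.
    import requests

    token = "<Azure AD access token>"
    headers = {"Authorization": f"Bearer {token}", "Content-Type": "application/json"}
    base = "https://api.powerbi.com/v1.0/myorg"
    group_id, dataset_id = "<workspace-id>", "<dataset-id>"

    body = {"updateDetails": [{"name": "CsvFilePath", "newValue": "https://<lake>/raw/latest.csv"}]}
    requests.post(
        f"{base}/groups/{group_id}/datasets/{dataset_id}/Default.UpdateParameters",
        json=body, headers=headers,
    )

    # trigger a refresh so the report reads the new file
    requests.post(f"{base}/groups/{group_id}/datasets/{dataset_id}/refreshes", headers=headers)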

Using Postgres to replace CSV files (pandas to load data)

I have been saving files as .csv for over a year now and connecting those files to Tableau Desktop for visualization for some end-users (who use Tableau Reader to view the data).
I think I have settled on migrating to PostgreSQL, and I will be using the pandas to_sql method to fill it up.
I get 9 different files each day and I process each of them (I currently consolidate them into monthly files in .csv.bz2 format) by adding columns, calculations, replacing information, etc.
I create two massive CSV files out of those processed files using pd.concat and pd.merge, and Tableau is connected to these. The files are literally overwritten every day when new data is added, which is time consuming.
Is it okay to still do my file joins and concatenation with pandas and export the output data to postgres? This will be my first time using a real database and I am more comfortable with pandas compared to learning SQL syntax and creating views or tables. I just want to avoid overwriting the same csv files over and over (and some other csv problems I run into).
Don't worry too much about normalization. A properly normalized database will usually be more efficient and easier to handle than a non-normalized one. On the other hand, if you dump non-normalized CSV data into a database, your import functions will be a lot more complicated if you do a proper normalization. I would recommend taking one step at a time. Start by just loading the processed CSV files into Postgres. I am pretty sure all processing after that will be a lot easier and quicker than doing it with CSV files (just make sure you set up the right indexes). Once you get used to using the database, you can start doing more processing there.
Just remember, one thing a database is really good at is picking out the subset of data you want to work on. Try as much as possible to avoid pulling huge amounts of data out of the database when you only intend to work on a subset of it.
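To make those last points concrete (daily appends, an index on the column you filter by, and pulling only a subset), here is a sketch of what that could look like with pandas and SQLAlchemy; the table and column names are made up:

    # Sketch of the suggested workflow: append the day's processed data,
    # index the column you filter on, and query only the subset you need.
    # Table and column names are made up.
    import pandas as pd
    from sqlalchemy import create_engine, text

    engine = create_engine("postgresql+psycopg2://user:password@localhost:5432/reporting")

    # daily load: append new rows instead of rewriting a giant CSV
    daily = pd.read_csv("processed_2024-05-01.csv.bz2")
    daily.to_sql("daily_metrics", engine, if_exists="append", index=False)

    # one-time: index the column used for filtering
    with engine.begin() as conn:
        conn.execute(text(
            "CREATE INDEX IF NOT EXISTS idx_daily_metrics_date ON daily_metrics (report_date)"
        ))

    # later: pull only the slice you actually need, not the whole table
    recent = pd.read_sql(
        "SELECT * FROM daily_metrics WHERE report_date >= '2024-04-01'", engine
    )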