Reading newest rows from an updated database table in AnyLogic 8

In my project I have to keep inserting new rows into a table based on some logic. After this, I want the rows of the updated table to be fetched each time an event is triggered.
But the problem is that the new rows aren't accessible. The table is always updated after I close the current simulation. A similar case was posted last year, but the answer wasn't clear, and my reputation score is too low to comment on it. Does anyone know whether AnyLogic 8.1.0 PLE supports reading newly updated database table records at runtime? Or is there some other workable solution?

This works correctly in AnyLogic (at least in the latest 8.2.3 version), so I suspect there is another problem with your model.
I just tested it:
set up a simple 2-column database table;
list its contents (via query) at model startup;
update values in all rows (and add a bunch of rows) via a time 1 event;
list its contents (via query) via a time 2 event.
All the new and updated rows show correctly (including when viewing the table in AnyLogic, even when I do this during the simulation, pausing it just after the changes).
Note that if you're checking the database contents via the AnyLogic client, you need to close and reopen the table to see the changes if you were already viewing it when you started the run. This view does auto-update when you close the experiment, so I suspect that is what you were seeing: the rows had been added (and would be there when/if you queried them later in the model), but the table in the AnyLogic client only shows the changes when you close/reopen it or when the experiment is closed.
Since you used the SQL syntax (rather than the QueryDSL alternative syntax) to do your inserts, I also checked with both options (and everything works the same in either case).
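For reference, here is a minimal sketch of what the insert-then-requery pattern might look like in an AnyLogic event's action field. The table and column names (my_table, id, value) are placeholders, and the exact names of the built-in database functions (executeStatement, insertInto, selectFrom) should be verified against your AnyLogic version's help:

    // Sketch: action code for a time-1 event that inserts rows, followed by a
    // time-2 event that re-queries. Table/column names are placeholders, and
    // Tuple is QueryDSL's com.querydsl.core.Tuple, available in action fields.

    // Option A - raw SQL insert (function name to be checked for your version):
    executeStatement("INSERT INTO my_table (id, value) VALUES (1, 42.0);");

    // Option B - QueryDSL-style insert via the generated table descriptor:
    insertInto(my_table)
        .columns(my_table.id, my_table.value)
        .values(2, 43.0)
        .execute();

    // In the later event, query again - newly inserted rows should be returned:
    List<Tuple> rows = selectFrom(my_table).list();
    for (Tuple row : rows) {
        traceln(row.get(my_table.id) + " -> " + row.get(my_table.value));
    }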
The table is always updated after I close the current simulation
Do you mean when you close the experiment?
It might help if you can show the logic/syntax you are using for your database inserts and your queries.

Related

DB2 updated rows since last check

I want to periodically export data from db2 and load it in another database for analysis.
In order to do this, I would need to know which rows have been inserted/updated since the last time I've exported things from a given table.
A simple solution would probably be to add a timestamp to every table and use that as a reference, but I don't have such a TS at the moment, and I would like to avoid adding it if possible.
Is there any other solution for finding the rows which have been added/updated after a given time (or something else that would solve my issue)?
There is an easy option for a timestamp in Db2 (for LUW) called
ROW CHANGE TIMESTAMP
This is managed by Db2 and can be defined as HIDDEN, so existing SELECT * FROM queries will not retrieve the new column, which would otherwise cause extra costs.
Check out the Db2 CREATE TABLE documentation
This functionality was originally added for optimistic locking but can be used for such situations as well.
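To make this concrete, here is a rough JDBC sketch of how the column could be added and then used for an incremental export; the table and column names (orders, rct) and the connection details are placeholders:

    import java.sql.*;

    // Sketch (untested): add a hidden, Db2-maintained ROW CHANGE TIMESTAMP
    // column to an existing Db2 LUW table, then fetch only the rows that
    // changed since the last export. All names are placeholders.
    public class RowChangeExport {
        public static void main(String[] args) throws SQLException {
            try (Connection con = DriverManager.getConnection(
                    "jdbc:db2://localhost:50000/SAMPLE", "user", "password")) {

                // One-time DDL; IMPLICITLY HIDDEN keeps the column out of
                // SELECT * results, so existing queries are unaffected.
                try (Statement st = con.createStatement()) {
                    st.execute("ALTER TABLE orders ADD COLUMN rct TIMESTAMP"
                            + " NOT NULL GENERATED ALWAYS FOR EACH ROW ON UPDATE"
                            + " AS ROW CHANGE TIMESTAMP IMPLICITLY HIDDEN");
                }

                // Incremental read: rows inserted or updated after the last export.
                String sql = "SELECT * FROM orders"
                        + " WHERE ROW CHANGE TIMESTAMP FOR orders > ?";
                try (PreparedStatement ps = con.prepareStatement(sql)) {
                    ps.setTimestamp(1, Timestamp.valueOf("2024-01-01 00:00:00"));
                    try (ResultSet rs = ps.executeQuery()) {
                        while (rs.next()) {
                            // ... write the row to the analysis database ...
                        }
                    }
                }
            }
        }
    }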
There is a similar concept for Db2 z/OS, but you will have to check that out yourself, as I have not tried it.
Of course there are other ways to solve this, such as replication, etc.
That is not possible if you do not have a timestamp column; with a timestamp you can tell which rows are new or modified.
You can also use the Time Travel feature in order to get the new values, but that also implies a timestamp column.
Another option is to put the tables in append mode and then fetch the rows after a given one. However, this approach is not reliable after a reorg, and it affects performance and space utilisation.
One possible option is to use SQL replication, but that needs extra tables for staging.
Finally, another option is to read the logs with the db2ReadLog API, but that implies custom development. Simply applying the archived logs to the new database is also possible; however, that database will remain in roll-forward pending state.
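For completeness, the Time Travel option mentioned above relies on system-period temporal tables (Db2 10.1 and later); a rough sketch of the setup and queries, with all names hypothetical and the syntax worth verifying against your Db2 level:

    // Sketch (requires Db2 10.1+ system-period temporal tables; names are
    // placeholders). One-time setup: add the period columns, create a history
    // table, and enable versioning.
    String[] setup = {
        "ALTER TABLE orders ADD COLUMN sys_start TIMESTAMP(12) NOT NULL"
            + " GENERATED ALWAYS AS ROW BEGIN",
        "ALTER TABLE orders ADD COLUMN sys_end TIMESTAMP(12) NOT NULL"
            + " GENERATED ALWAYS AS ROW END",
        "ALTER TABLE orders ADD COLUMN trans_id TIMESTAMP(12)"
            + " GENERATED ALWAYS AS TRANSACTION START ID",
        "ALTER TABLE orders ADD PERIOD SYSTEM_TIME (sys_start, sys_end)",
        "CREATE TABLE orders_history LIKE orders",
        "ALTER TABLE orders ADD VERSIONING USE HISTORY TABLE orders_history"
    };

    // Rows inserted or updated since a given time are simply those whose
    // system period began after it:
    String changedSince =
        "SELECT * FROM orders WHERE sys_start > TIMESTAMP('2024-01-01-00.00.00')";

    // Time Travel proper: the table as it looked at some past point in time.
    String asOf =
        "SELECT * FROM orders FOR SYSTEM_TIME AS OF TIMESTAMP('2024-01-01-00.00.00')";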

Hyperion RDBMS Table

As we know, details of every job are stored in the RDBMS table Hsp_Job_Status. But unfortunately this table gets truncated each time we restart services. As per business requirements, we needed to keep a record of the BRs (business rules) launched by users, along with their details. So we developed a workaround and created a trigger on the table that inserted each new row/update into a backup table. This was working fine until now.
Recently, after a restart, the values of the Job_Id (i.e. the primary key) stopped appearing in order: the series restarted from an earlier number. It had been running through the 106XX range, but after the restart the numbering started again from 100XX. As Hsp_Job_Status was truncated during the restart, there was no duplicate-primary-key issue in that table, but it did create duplicate values in the backup table, which has broken the backup table and the procedure we use.
Usually the series continues even after the table is truncated, so something may have gone wrong during the restart. Can you please suggest what I should check and do to resolve this issue?
Thanks in advance.
Partial answer: the simple solution is to insert an instance prefix to the Job_Id, and on service startup increment the active instance. The instance table can then include details from startup/shutdown events to help drive SLA metrics. Unfortunately, I don't know how you would go about implementing such a scheme, since it's been many years since I've spoken any SQL dialects.
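A rough sketch of how that scheme might be wired up in JDBC, under the assumption of an instance table with an identity column; every name here (service_instance, job_backup, started_at) is hypothetical:

    import java.sql.*;

    // Sketch: the instance-prefix idea from the partial answer above.
    // On every service startup a new instance row is created, and the backup
    // table is keyed on (instance_id, job_id), so a Job_Id series that restarts
    // from a lower number after a service restart can no longer collide.
    public class InstancePrefix {

        // Call once at service startup; assumes service_instance has an
        // identity/auto-increment primary key.
        static long registerInstance(Connection con) throws SQLException {
            try (Statement st = con.createStatement()) {
                st.executeUpdate(
                        "INSERT INTO service_instance (started_at)"
                        + " VALUES (CURRENT_TIMESTAMP)",
                        Statement.RETURN_GENERATED_KEYS);
                try (ResultSet keys = st.getGeneratedKeys()) {
                    keys.next();
                    return keys.getLong(1);
                }
            }
        }

        // The trigger on Hsp_Job_Status would then copy rows into something like:
        //   CREATE TABLE job_backup (
        //       instance_id NUMBER NOT NULL,
        //       job_id      NUMBER NOT NULL,
        //       -- ... the copied Hsp_Job_Status columns ...
        //       PRIMARY KEY (instance_id, job_id)
        //   );
    }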

Why do all my 'Sum of Units' rows have the same value in PowerPivot?

I'm pretty new to PowerPivot and have a problem.
I created an SSIS project (.dtsx) to import around 10 million rows of data and an Analysis Services Tabular Project (.bim) to process the data model.
Up until today, everything worked as expected, but after making a schema change to add further columns to a table and updating the model, I now have a problem. When opening the existing connection in Business Intelligence Development Studio (BIDS) to update the schema changes, I was told that I would have to drop and reload the Sales and Returns tables as they were related.
Now, when I try to filter on a particular attribute, the Sales 'Sum of Units' column always displays the total sum of units for every row, instead of the correct values. I remember having this problem once when I was building the system, but it went away after re-processing the tables in BIDS... this time however, no amount of processing is making any difference.
I'm really hoping that this is a common problem and that someone has a nice easy solution for me, but I'll take whatever I can get at this stage. I'd also quite like to understand what is causing this. Many thanks in advance.
For anyone with a similar problem, I found the answer.
Basically, I had made a schema change and BIDS told me that I had to drop my SalesFact and ReturnsFact tables before updating the model with the new database schema. The problem was that I did not realise that relationships had been set up on these tables and so after re-adding them, the model was missing its relationships to the other tables... that's why all rows showed the same value.
The fix was to put the model into design view and to create relationships between the tables by clicking and dragging between them.
I knew it was something simple.

Populating a layout using a table from ODBC

I have been able to set up ODBC in FileMaker and added a table, a MySQL table, to FileMaker's relationship diagram.
I want to set up a layout to view the entire contents of the table; at the moment it displays far fewer records than there should be, initially only 3 total (where it should be 150).
However, if I go to Find Mode and search for the id of one of the records that is not displayed, it is subsequently appended to the table (so I then have 4 "total" records).
How can I display the entire contents of this table?
You don't state what OS you're on nor what your ODBC driver is, but assuming it's a Mac with the Actual Technologies driver, it sounds like the driver isn't registered.
From http://www.actualtech.com/product_opensourcedatabases.php
Downloaded driver has all features turned on, except that it will only return 3 rows from any query until the driver is registered
pft221 is correct about the recordset result, but you will often find that you need to periodically run the script step "Refresh Window" with [Flush cached SQL data] checked to refresh rows of data if the data source is being actively updated.

Update table instantly or “Bulk” Update in database later? And is it advisable?

I have a question regarding a semi-constant update in a database. In short, it concerns a checkout function on a web page; each time the checkout function is invoked, it performs five steps.
I want to try to optimize this function and have my eye on a step where I update a table each time a checkout is performed. I take the information retrieved from the shopping cart and then update the table in question.
I do have some indexes on the table; the gain from them is greater than the cost of maintaining them, so this is a cost I’m willing to take.
Now, my question is: could it, performance-wise, be better not to update the table instantly, but instead to collect the items from every checkout and save them somewhere (maybe in a file), and then at a specific time (or several times) each day take this file and update the table with the new information?
That got me thinking about whether there is some sort of bulk update that could take a file, hashmap, array (or something else) and apply it in one go.
And I’m using IBM DB2 version 9.7
Mestika
You would lose the ability to do transactions, or to recover from a failure after a midway step, so I would avoid that approach. You could instead use prepared statements together with the batch updates offered by JDBC 2.0, where multiple statements are submitted to the database as a single unit.
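A minimal sketch of that batched, transactional variant (the table and column names, such as checkout_item, are made up; DB2 9.7's JDBC driver supports standard JDBC batching):

    import java.sql.*;

    // Sketch: batches all of a checkout's rows into one round trip inside a
    // single transaction, so either every row is written or none is.
    // Table/column names (checkout_item, order_id, sku, qty) are placeholders.
    public class CheckoutBatch {
        static void saveCheckout(Connection con, long orderId,
                java.util.Map<String, Integer> cart) throws SQLException {
            String sql = "INSERT INTO checkout_item (order_id, sku, qty)"
                    + " VALUES (?, ?, ?)";
            boolean oldAutoCommit = con.getAutoCommit();
            con.setAutoCommit(false);                 // make the batch one transaction
            try (PreparedStatement ps = con.prepareStatement(sql)) {
                for (java.util.Map.Entry<String, Integer> e : cart.entrySet()) {
                    ps.setLong(1, orderId);
                    ps.setString(2, e.getKey());
                    ps.setInt(3, e.getValue());
                    ps.addBatch();                    // queue, don't execute yet
                }
                ps.executeBatch();                    // single round trip to DB2
                con.commit();
            } catch (SQLException ex) {
                con.rollback();                       // keep the checkout atomic
                throw ex;
            } finally {
                con.setAutoCommit(oldAutoCommit);
            }
        }
    }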