I have been able to set up ODBC in FileMaker and added a MySQL table to FileMaker's relationship diagram.
I want to set up a layout to view the entire contents of the table. At the moment it displays far fewer records than there should be: initially only 3 in total (where it should be 150).
However, if I go to Find Mode and type in the id of one of the records that is not displayed, it is subsequently appended to the table (so I now have 4 "Total" records).
How can I display the entire contents of this table?
You don't say which OS you're on or which ODBC driver you're using, but assuming it's a Mac with the Actual Technologies driver, it sounds like the driver isn't registered.
From http://www.actualtech.com/product_opensourcedatabases.php
Downloaded driver has all features turned on, except that it will only return 3 rows from any query until the driver is registered
pft221 is correct about the record set result, but you will often find that you need to periodically perform the script step "Refresh Window" with [Flush cached SQL data] checked to refresh rows of data if the data source is being actively updated.
In my project I have to keep inserting new rows into a table based on some logic. After this, each time an event is triggered, I want the rows of the updated table to be fetched.
But the problem is that the new rows aren't accessed; the table is always updated after I close the current simulation. A similar case was posted last year, but the answer wasn't clear, and due to a low reputation score I am unable to comment on it. Does anyone know whether AnyLogic 8.1.0 PLE supports reading newly updated database table records at runtime, or is there some other workable solution?
This works correctly in AnyLogic (at least in the latest 8.2.3 version) so I suspect there is another problem with your model.
I just tested it:
set up a simple 2-column database table;
list its contents (via query) at model startup;
update values in all rows (and add a bunch of rows) via a time 1 event;
list its contents (via query) via a time 2 event.
All the new and updated rows show correctly (including when viewing the table in AnyLogic, even when I do this during the simulation, pausing it just after the changes).
Note that, if you're checking the database contents via the AnyLogic client, you need to close/reopen the table to see the changes if you were already in it when starting the run. This view does auto-update when you close the experiment, so I suspect that is what you were seeing. Basically, the rows had been added (and will be there when/if you query them later in the model) but the table in the AnyLogic client only shows the changes when closing/reopening it or when the experiment is closed.
Since you used the SQL syntax (rather than the QueryDSL alternative syntax) to do your inserts, I also checked with both options (and everything works the same in either case).
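For reference, a minimal sketch of the kind of statements being discussed, run against a hypothetical two-column table (my_table, id and value are stand-ins for whatever your model actually uses):
-- Insert made mid-run, e.g. from a time 1 event
INSERT INTO my_table (id, value) VALUES (3, 'new row');
-- Query made later in the same run, e.g. from a time 2 event;
-- it should see the row inserted above
SELECT id, value FROM my_table;
If a query like the second one doesn't see the new row, the problem is in how the model issues the statements, not in AnyLogic's runtime database.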
The table is always updated after I close the current simulation
Do you mean when you close the experiment?
It might help if you can show the logic/syntax you are using for your database inserts and your queries.
I can run a query on views in SQL Developer 3.1.07 and it returns the results I expect. My co-worker, who is in Mexico using the same user, can connect to the same database, sees the same views, runs the same query and gets no results, even from a simple "select * from VIEWNAME" query. The column headers display, but no data. If he selects a view from the connections window and selects the DATA tab no data displays. This user does not have access to any tables on this specific database.
I'm not certain he is running the same version of SQL Developer, but it's not far off. I have checked as many settings in SQL Developer as I think could be the issue, but see no significant difference between his settings and mine.
Connecting to another database allows him to access data in both tables and views.
Any thoughts on what we're missing?
I know I'm a few years late, but check whether the underlying view filters on something that is based on localisation! I just had this issue, and it turned out to be a statement like this that was causing it:
SELECT *
FROM sometable
WHERE language = userenv('LANG')
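If you suspect the same cause, a quick check (this is standard Oracle, nothing specific to the view above) is to compare what each client's session reports:
-- The value the view's filter would see for this session
SELECT userenv('LANG') FROM dual;
-- The session's full set of NLS settings
SELECT * FROM nls_session_parameters;
If the two machines return different values here, that explains the differing results.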
Copy the JDBC folder from your Oracle home over to your co-worker's machine. We had the same issue and replacing the JDBC folder fixed it.
I faced the same issue, which was resolved when I checked the 'Skip NLS Settings' box. My query had returned zero results earlier, but when I ran the same query again, I could see the table rows.
Since your co-worker is in a different country, the NLS settings (related to the language) are most probably the culprit here.
I was facing the same issue. It turned out that an update to the database from my SQL Developer had not been committed to the main database. That's why I was getting results for that query in my SQL Developer, but from AWS it was returning empty results. When I chatted with the DBA, he found stale data. After I committed the data from my SQL Developer, the database was actually updated.
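In other words, if you change data in a SQL Developer session, other connections won't see it until you commit (table and column names here are just for illustration):
-- Visible only to this session until committed
UPDATE sometable SET price = 10 WHERE id = 42;
COMMIT;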
I have a recurring problem that feels so basic, yet I cannot solve it or find a solution online. I'm really hoping someone has something simple.
I have multiple situations where I have relatively large tables stored in Postgres (v8.4) and I want to be able to easily display them for my testers to review. The tables always have character varying fields that go well beyond the 255-character max that Access wants to display in a Text field; they should become Memo fields. The data also has every possible separator imaginable already in it (tab, carriage return, semicolon, pipe, etc.), so extracting it to Excel or the like will never work smoothly. The easiest thing WOULD be using ODBC to link the table into an Access DB and viewing it there... except that when I link or import, Access translates the fields to Text. I've tried settings on the ODBC side, but nothing can get those fields to be Memo.
I'll take a cleaner way to extract to Excel, a better way to view it in Access... just anything that consistently gets the entire table somewhere low-level and user-friendly where they can review it. Suggestions?
Better late than never..
I just ran into this problem with Access 2010 and Postgres 9.1. There is a setting in the Postgres ODBC driver that you have to change. In the ODBC Data Source Administrator, select the data source that you set up and click the 'Configure...' button.
Click the 'Datasources' button.
Uncheck the 'Text as LongVarChar' checkbox.
In Access, you may have to delete the linked tables and re-add them. I tried relinking and one table updated properly and one did not. After deleting and re-adding, I had both working.
Try setting the text data type for all columns that you want to have the Memo data type. I checked this with PostgreSQL 9.0 (64-bit) and psqlodbc_09_00_0310 (32-bit, so I created a User DSN under C:\Windows\SysWOW64\odbcad32.exe), and as far as I can see, all columns with the text type become Memo, as opposed to a character(6) column, which gets the Text data type in Access.
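If your columns are currently bounded varchars, the conversion is a one-liner in Postgres (table and column names here are placeholders for yours):
-- text columns are reported as LongVarChar by the ODBC driver,
-- which Access maps to Memo
ALTER TABLE mytable ALTER COLUMN description TYPE text;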
I know it is Christmas Eve, so it is a perfect time to find hardcore programmers online :).
I have a SQLite db file that contains over 10K records. I generate the db from a MySQL database, and I have built the SQLite db within my iPhone application the usual way.
The records contain information about products, their prices, shops and the like. This info is of course not static; I use an automatic scheme to populate and keep updating my MySQL db.
Now, how can I update the iPhone app's SQLite database with the new information available in the MySQL db? The db structure is still the same, but the records contain new information.
Thanks.
Ahed
info:
libsqlite3.0,
iPhone OS 3.1,
mysql 2005,
Mac OS X 10.6.2
There is a question you need to answer first: how do you determine the set of changed records in your MySQL database?
Or, more specifically, given that the MySQL database is in state A, some transactions occur and now it is in state B, how do you know what changed between A and B?
Bottom line: you need a schema in MySQL that enables this. Once you have answered that question, you can answer the "how do I sync?" question.
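A common way to make that question answerable, assuming you are free to alter the MySQL schema (the table name here is illustrative), is to timestamp every row on modification:
-- Track when each row was last changed (MySQL)
ALTER TABLE products
  ADD COLUMN last_modified TIMESTAMP NOT NULL
    DEFAULT CURRENT_TIMESTAMP
    ON UPDATE CURRENT_TIMESTAMP;
-- Everything changed since a device's last sync
SELECT * FROM products WHERE last_modified > '2009-12-20 00:00:00';
Note that a timestamp column alone won't capture deletes; the log-table approach in another answer handles those as well.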
I have a similar application.
I am using Push Notification to let my users know there is new or updated data available.
Each time a record on the server is updated, I store a sequential record-number alongside the record.
Each UDID that's registered has a "last updated" number associated with it that contains the highest record-number it has ever downloaded.
When any given device comes to get its updates, all database records greater than the UDID's last-updated record-number stored on the server are sent to the device. If everything goes OK, the last-updated record-number for the UDID is set to the last record number sent.
The user has the option to fetch all records and refresh his database if he feels any need to sync his device to the entire database.
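The server-side query this implies is something like the following (table and column names are my own illustration, not the actual schema):
-- Rows added or changed since this device last synced
SELECT * FROM records
WHERE record_number > :last_updated_for_udid
ORDER BY record_number;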
Seems to be working well.
-t
You can find many other similar questions by searching for "iphone synchronization":
https://stackoverflow.com/search?q=iphone+synchronization
I'm going to assume that the data is going only from MySQL to SQLite, and not in the reverse direction.
There are a few ways I could imagine doing this. The first is to just redownload the entire database during every update. Another way, which I'm describing below, is to create a "log" table to record the modifications to your master table, and then download just the new log entries when doing the update.
I would create a new "log" table in your MySQL database to log changes to the table needing synchronization. The log could contain a "revision" column to track the order in which changes were made, a "type" column to specify whether it was an insert, update, or delete, the row id of the affected row, and finally the entire set of columns from your master table.
You could automate the creation of log entries by using stored procedures as wrappers to modify your master table.
With only 10k records, I wouldn't expect this log table to grow to be that huge.
You would then keep track, in SQLite, of the latest revision downloaded from MySQL. To update the table, you would download all log entries after that revision and apply them to your SQLite table.
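As a concrete sketch of that design (all names are illustrative; the name and price columns stand in for your master table's actual columns):
-- Log of changes to the master table (MySQL)
CREATE TABLE master_log (
  revision INT NOT NULL AUTO_INCREMENT PRIMARY KEY, -- order of changes
  type     CHAR(1) NOT NULL,   -- 'I' insert, 'U' update, 'D' delete
  row_id   INT NOT NULL,       -- id of the affected master row
  name     VARCHAR(255),       -- copies of the master table's columns
  price    DECIMAL(10,2)
);
-- The device remembers the highest revision it has applied,
-- then pulls everything newer:
SELECT * FROM master_log WHERE revision > 1234 ORDER BY revision;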
I'm really at a loss as to how to proceed.
I have a very large database, and the table I'm accessing has approx. 600,000 records. This database is accessed using an accounting application, which provides the report with the SQL query by which this report accesses the database.
My report has a linked subreport with restrictions that are placed in the report header. When this report is run, the average time to refresh using a very basic query is 36 minutes. When adding two more items to the query, the report takes 2.5 hours.
Here is what I've tried:
cleaned up the report, leaving only the absolutely necessary items - no difference
removed most formulas (removing the remaining formulas makes no time difference)
tried editing the SQL query - wasn't allowed because of the accounting application
tried flipping subreport and main report - didn't work
added other groupings - no difference
removed groupings - no difference
checked all the servers for lack of temp disc space - no issue
tried "on demand" subreport - no change
checked Parameters (discrete vs. range) and it is as it should be
tried bursting indexes, grouping on server, etc. - no difference
the report requires 2 passes. I've tried getting it down to one pass unsuccessfully.
There must be something I'm missing.
There do not appear to be any other modifications to the report using regular Crystal functions. Is there any way to speed up access to the data without having to go through all 600,000 records? The SQL query that accesses this data is long and makes many requests, and it is not something I can change.
Can I add something (a formula?) that nullifies these requests? I'm reaching now...
A couple of things we have had success with are adding indexes to the database and, instead of importing tables into the report, writing a stored procedure to retrieve the desired results.
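For example, an index on the columns the report's selection criteria actually hit can cut the scan down dramatically (table and column names are placeholders, and the right columns depend on your query):
-- Index the columns used in the report's WHERE/JOIN clauses
CREATE INDEX ix_invoice_date ON invoices (invoice_date, customer_id);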
If indexes and stored procedures don't get you where you need to be, you have reached the "denormalise until it works" part of life with a database. You might want to look at creating an MI database with tables optimized for your reporting needs, and some data transformation scripts that can extract the data from production to your MI database. Depending on what your back end is, Oracle / MS have tools to help you do this.
We use Crystal Reports with a billing system, and we had queries in the database that took over 1.5 hours to complete. This doesn't even take into account the rendering/formatting of the reports.
We created materialized views and force the client to refresh them daily. A materialized view is basically a database view that holds the returned dataset; the dataset is not refreshed unless you explicitly tell it to refresh.
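Assuming an Oracle back end (materialized views are an Oracle feature; SQL Server's rough equivalent is indexed views), a minimal sketch with made-up names:
-- Snapshot of the expensive query, refreshed only on demand
CREATE MATERIALIZED VIEW mv_billing_summary
  REFRESH COMPLETE ON DEMAND
  AS SELECT account_id, SUM(amount) AS total_billed
     FROM billing_detail
     GROUP BY account_id;
-- Run daily by the client (e.g. from SQL*Plus)
EXEC DBMS_MVIEW.REFRESH('MV_BILLING_SUMMARY');
The report then selects from mv_billing_summary instead of re-running the aggregation each time.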
Do you know what the SQL query is? If so, you can move the report outside the accounting application and paste the query directly into the Command in the Database Expert. I've had to do this in a couple of cases with another application I work with.