Slow SQLite access on iPhone

I'm seeing quite slow data retrieval from an SQLite database on my iPhone, and perhaps someone has an alternative idea to explain this. From what I have tracked down so far, sqlite3_step(statement) is sometimes unusually slow. When retrieving e.g. 50 rows from the database, this step normally takes a few milliseconds, but sometimes it takes several seconds.
My database is not small (80 MB) and my theory is that the reason is paging. But can anyone think of another reason for this?

Do you have a proper index on that table? Queries can be very slow if a full table scan is required to answer them; the SQLite documentation on query optimization has some guidance on how to structure your queries and indexes.
You may simply need to add an index to your table. Don't forget to reindex the table after you add the index (you'll need a tool for this; there's a free Firefox add-on called "SQLite Manager" that does a pretty decent job of it).
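For illustration, a minimal sketch of what that looks like in SQLite (the table and column names here are invented; index whatever columns your slow query filters or sorts on):
-- hypothetical table "mytable" with a "name" column used in the WHERE clause
CREATE INDEX IF NOT EXISTS idx_mytable_name ON mytable(name);
-- verify that the query now uses the index instead of a full table scan
EXPLAIN QUERY PLAN SELECT * FROM mytable WHERE name = 'foo';
With a suitable index in place, sqlite3_step no longer has to walk the whole 80 MB file to find matching rows.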

Related

How to cache duplicate queries in PostgreSQL

I have some expensive queries (each of them takes around 90 seconds). The good news is that my queries don't change much, so most of them are duplicates. I am looking for a way to cache the query results in PostgreSQL. I have searched for an answer but could not find one (some answers are outdated and others are not clear).
I use an application that is connected to Postgres directly.
The query is a simple SQL query which returns thousands of rows.
SELECT * FROM Foo WHERE field_a<100
Is there any way to cache a query result for at least a couple of hours?
It is possible to cache expensive queries in Postgres using a technique called a "materialized view"; however, given how simple your query is, I'm not sure this will give you much gain.
You may be better off caching this information directly in your application, in memory, or, if possible, caching a further-processed set of data rather than the raw rows.
ref:
https://www.postgresql.org/docs/current/rules-materializedviews.html
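A minimal sketch of that approach, using the query from the question (the view name is invented):
CREATE MATERIALIZED VIEW foo_under_100 AS
SELECT * FROM Foo WHERE field_a < 100;
-- the view stores a snapshot; refresh it whenever the cache may be stale,
-- e.g. from a cron job every couple of hours
REFRESH MATERIALIZED VIEW foo_under_100;
-- readers hit the precomputed rows instead of re-running the 90-second query
SELECT * FROM foo_under_100;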
Depending on what your application looks like, a TEMPORARY TABLE might work for you. It is only visible to the connection that created it and it is automatically dropped when the database session is closed.
CREATE TEMPORARY TABLE tempfoo AS
SELECT * FROM Foo WHERE field_a<100;
The downside to this approach is that you get a snapshot of Foo when you create tempfoo. You will not see any new data that gets added to Foo when you look at tempfoo.
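If that becomes a problem, one possible workaround (sticking with the names from the example above) is to rebuild the snapshot from the same session whenever you want fresh data:
-- throw away the stale snapshot and rebuild it from the live table
DROP TABLE IF EXISTS tempfoo;
CREATE TEMPORARY TABLE tempfoo AS
SELECT * FROM Foo WHERE field_a<100;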
Another approach: if you have access to the database, you may be able to significantly speed up your queries by adding an index on the field you filter on (field_a in your example).
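For example (the index name is arbitrary):
-- lets Postgres satisfy "field_a < 100" with an index scan
-- instead of reading every row in Foo
CREATE INDEX idx_foo_field_a ON Foo (field_a);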

Query log equivalent for Progress/OpenEdge

Short story: A report running against a Progress database (OpenEdge Release 10.1C03) takes hours to complete. I suspect that it does not take advantage of existing data indexes. I would like to understand how it scans the data, so that I can then try to add an index that will make it run faster.
Source code of the report is not available. The code is native Progress 4GL, not SQL.
If it were an SQL database, I would try to dump the SQL queries and go from there. With 4GL I did not find any such functionality. Is it possible to somehow peek at what gets executed at the low level?
What else can be done if there is no source code?
Thanks!
There are several things you can do:
If I recall correctly 10.1C should have the _usertablestat and _userindexstat virtual system tables available. These allow you to observe, at runtime, what tables and indexes are being accessed by a particular session. You can either write your own 4GL program to query them or you can use the screens in PROMON, R&D, 3 "Other Displays", 5 "I/O Operations by User by Table" and 6 "I/O Operations by User by Index". That will show you what tables and indexes are actually in use and how much use they are getting. If the observed data seems wrong it will probably give you a clue. (If the VSTs are missing it might be because the db was upgraded from an older version -- add them with proutil dbname -C updatevsts.)
You could also use the session startup parameters -clientlog "filename" and -logentrytypes QryInfo to obtain more detailed information about the queries being executed.
Keep in mind that Progress is not SQL. Unlike most SQL databases, the 4GL uses a static, compile-time optimizer: index selection happens when the code is compiled. So unless you can recompile (and since you don't seem to have the source, that seems unlikely) you won't be able to improve things by adding missing indexes. You might, however, at least be able to show the person who does have the source where the problem is.
Another tool that can help is the profiler. This will identify where in the code the time is being spent. That can also be good information to provide to the original vendor if they need help finding the problem. For more information on the profiler: http://dbappraise.com/ppt/profiler.pptx

sqlite versus csv file for iPhone

We have about 10 SQLite files that get downloaded in our app, each of which contains about 4000 rows. We process that data and display it in a table view. We are running into speed and memory issues when scrolling through the table view.
We were wondering whether we could get better performance with CSV files or some other format instead of SQLite. I have read that XML or JSON won't help, since the number of records is huge and the parsing time would go up.
Please suggest.
First, don't assume that SQLite is your bottleneck. I made that same assumption in my own application and spent days trying to optimize the database access, only to run Instruments against it and find that I had a slow string-processing routine in my interface that was bogging things down.
Use Time Profiler and Object Allocations first to verify where your hotspots are in code. SQLite is ridiculously fast.
That said, with 4000 rows, you will probably run into memory issues at the least if you try to load all of them into an array for display to the screen. My recommendation would be to import that data into a Core Data SQLite database and use an NSFetchedResultsController with a batch size set for its fetch request to be slightly larger than the number of rows displayed onscreen.
Core Data will handle the loading / unloading of batched data this way, meaning that only a small part of the database is loaded into memory at once. This can lead to a tremendous speedup (particularly on the initial load) and will significantly reduce memory usage. It also does it using a trivial amount of code.
A properly indexed SQLite database will run circles around any flat file, especially if you have a lot of records. Also try consolidating those 10 files into 1 database, so you can perform joins on indexed columns and use clever tricks such as views. Right now it seems like you're pulling data from 10 different databases and manually comparing/processing them, which would of course take a lot of time and memory.
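A hedged sketch of what that consolidation could look like in SQLite (the file, table and column names are invented for illustration):
-- copy the rows from one of the downloaded files into the main database
ATTACH DATABASE 'feed2.sqlite' AS feed2;
INSERT INTO shows SELECT * FROM feed2.shows;
DETACH DATABASE feed2;
-- index the columns you filter or join on, then hide the join behind a view
CREATE INDEX IF NOT EXISTS idx_episodes_show_id ON episodes(show_id);
CREATE VIEW IF NOT EXISTS show_listing AS
  SELECT s.title, e.air_date
  FROM shows AS s
  JOIN episodes AS e ON e.show_id = s.id;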
It is going to depend on the application and on how you are using and querying the data. Profile it, confirm whether SQLite is or isn't the problem, and then attack whatever the profiling turns up.
Profilers: Shark, or some other profiling solution.

A question about the gain of indexes versus the cost of inserts & updates in a database

I have a question about the fine line between the cost of an index on a table that is growing steadily in size every month and the gain that index brings to my queries.
The situation is that I have two tables, Table1 and Table2. Each table grows slowly but regularly each month (about 100 new rows for Table1 and a couple of rows for Table2).
My concrete question is whether to keep an index or to drop it. I have measured that a covering index on Table2 improves some of my SELECT queries rather a lot, but again, I have to weigh the pros and cons, and I'm having a really hard time deciding.
For Table1 an index might not be necessary, because SELECT queries against it are not that common.
I would appreciate any suggestions, tips or just good advice on what a good solution would be.
By the way, I'm using IBM DB2 version 9.7 as my database system.
Sincerely
Mestika
Any additional index will make your inserts slower and your queries faster.
To take a smart decision, you will have to measure exactly by how much, with the amount of data that you expect to see. If you have multiple clients accessing the database at the same time, it may make sense to write a small multithreaded application that simulates the maximum load, both for inserts and for queries.
Your results will depend on the nature of your data and on the hardware you are running. If you want to know the best answer for your use case, there is no way around testing accurately yourself, with your data and your hardware.
Then you will have to ask yourself:
Which query performance do I need?
If the query performance is good enough without the index anyway, easy: Don't add the index!
Which insert performance do I need?
Can it drop below the needed limit with the additional index? If not, easy: Add the index!
If you discover that you absolutely need the index for query performance and you can't get the required insert performance with the index, you may need to buy better hardware. Solid state discs can do wonders for database servers and they are getting affordable.
If your system is running fine for everyone anyway, worry less, let it run as is.
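For reference, the "covering" index mentioned in the question is just an ordinary index that happens to contain every column a query touches, so DB2 can answer the query from the index alone. A hedged sketch (the column names are invented):
-- if the hot query is, say, SELECT col_a, col_b FROM Table2 WHERE col_a = ?
CREATE INDEX idx_table2_cover ON Table2 (col_a, col_b);
-- and if measurement shows the insert cost is too high, drop it again
DROP INDEX idx_table2_cover;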

iPhone SDK SQLite lookup performance for 40k+ records

What is the best way to get this thing done:
I have a huge table with 40k+ records (TV show titles) in SQLite and I want to do real-time lookups against this table. For example, when the user searches for a show, I read SQLite and filter the records after every keystroke as the search terms are entered (like Google's search suggestions).
My performance target is 100 milliseconds. A few things I have thought of are creating indexes and splitting the data into multiple tables.
However, I would really appreciate any suggestions for achieving this in the fastest possible time so I can avoid any UI refresh delays; it would be awesome to have feedback from coders who have already done something similar.
Things to do:
Index fields appropriately.
Limit yourself to only 10-15 records on the initial query—that should be enough to populate the top of the table view.
If you don't need to sort, don't. If you do need to sort, sort on an indexed field.
Do as much as you can in SQLite rather than your own code.
Do as little as you can overall.
You'll likely find what I have: SQLite and the iPhone are actually amazingly capable as long as you don't do anything really dumb.
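A hedged sketch of the per-keystroke query this adds up to (the table and column names are invented; note that SQLite only uses an index for a LIKE prefix pattern under the right collation settings, so an explicit range comparison on the indexed column is the safer bet):
-- one-time setup: index the column you search on
CREATE INDEX IF NOT EXISTS idx_shows_title ON shows(title);
-- per keystroke: indexed prefix lookup, fetching only enough rows
-- to fill the visible part of the table view
SELECT title
FROM shows
WHERE title >= 'ba' AND title < 'bb'   -- user has typed the prefix 'ba'
ORDER BY title                         -- sorts on the indexed column
LIMIT 15;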
Keep "perceived performance" in mind - doing lookups right after a key is hit is could be somewhat expensive. How many milliseconds does it take a user to hit a key, though? You can probably get away with not updating the resultlist until the user hasn't typed anything for several hundred milliseconds. (For really fast users, perhaps update every X hundred millisecodns while he's still typing).
How do you know the performance will be bad? 40k rows is not that much, even for an iPhone... try it on the phone before you optimize.
Avoid doing any joins, and try to use paging so that you keep the amount of data returned to a minimum. Perhaps you could try loading the whole thing into memory, then sorting it and doing a binary search? If it is just a list of show titles, it should fit.