SQLite fetching slow on iOS device but fast on simulator - iPhone

I have a database with 3 tables, each having 700,000 records. I have added a search feature in the app which uses this query...
const char *sqlstatement = "select * from artist where name like ?";
sqlite3_stmt *compliedstatement;
if (sqlite3_prepare_v2(database, sqlstatement, -1, &compliedstatement, NULL) == SQLITE_OK)
{
    // Bind '%searchText%' as the LIKE pattern (literal % signs are escaped as %%).
    sqlite3_bind_text(compliedstatement, 1, [[NSString stringWithFormat:@"%%%@%%", self.searchBar.text] UTF8String], -1, SQLITE_STATIC);
This was OK on the simulator but took about a minute on the iOS device. So I used SQLite Manager to add indexes to the table columns; the database size grew from 76 MB to 166 MB, but now the query takes about 1 to 2 seconds on the simulator and about 10 to 15 seconds on the device. So it's an improvement, but are there any suggestions to improve it further? No, I cannot use Core Data at this point.

The first point to note is that SQLite will not use an index for LIKE clauses, even ones of the form LIKE '...%' (e.g. LIKE 'Fred%'): LIKE is case-insensitive by default, while the collation used by the index is case-sensitive, so the optimisation does not apply. The improvement in performance that you observe comes from SQLite being able to scan the index rather than having to plough through the whole table.
15 seconds for 700,000 records is not bad; in fact it is about what I get myself for a very similar query. To get any improvement on this, you would have to look into doing a full-text search (FTS). This involves a lot more work than simply adding an index, but it may be the way to go for you.
If the query returns a lot of rows, and you are using these to populate a table view, there may be another performance issue due to having to process so many rows, even if you are only pulling off the row ids. My solution was to limit the number of rows fetched to about 5,000, on the grounds that nobody would want to scroll through more than that.
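As a rough sketch, using the table and column from the question, that cap is just a LIMIT on the same query:

select * from artist where name like ? limit 5000;  -- stop after 5,000 matches instead of materialising every row

Anything past the cap can be fetched later if the user really scrolls that far.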

Use the FTS feature of SQLite. FTS is enabled by default and will solve the query performance issue with the "like". You need to add alle rows to an FTS virtual table. Then use "match" instead of "like". See here the documentation: http://www.sqlite.org/fts3.html. You can expect performance to be in ms instead of seconds.
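A minimal sketch of that approach, assuming the artist/name schema from the question (FTS4 shown; FTS3 is almost identical):

-- One-off setup: build a virtual table over the searchable column,
-- keeping docid in sync with the source table's rowid.
CREATE VIRTUAL TABLE artist_fts USING fts4(name);
INSERT INTO artist_fts(docid, name) SELECT rowid, name FROM artist;

-- Search with MATCH instead of LIKE; 'fred*' matches tokens starting with "fred".
SELECT a.*
FROM artist a
WHERE a.rowid IN (SELECT docid FROM artist_fts WHERE name MATCH 'fred*');

Note that MATCH works on whole tokens (with an optional * prefix operator), so it behaves slightly differently from a %term% substring LIKE.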

Related

Statistics of all/many tables in FileMaker

I'm writing a kind of summary page for my FileMaker solution.
For this, I have defined a "statistics" table, which uses formula fields with ExecuteSQL to gather info from most tables, such as the number of records, recently changed records, etc.
This strangely takes a long time - around 10 seconds when I have a total of about 20k records in about 10 tables. The same SQL on any database system shouldn't take more than a fraction of a second.
What could the reason be, what can I do about it and where can I start debugging to figure out what's causing all this time?
The actual code looks like this:
SQLAusführen ( "SELECT COUNT(*) FROM " & _Stats::Table ; "" ; "" )
SQLAusführen ( "SELECT SUM(\"some_field_name\") FROM " & _Stats::Table ; "" ; "" )
Where "_Stats" is my statistics table, which has a string field "Table" where I store the names of the other tables.
So each row in this _Stats table should have the stats for the table named in the "Table" field.
Update: I'm not using FileMaker server, this is a standalone client application.
We can definitely talk about why it may be slow. Usually this has mostly to do with the size and complexity of your schema. That is "usually", as you have found.
Can you use the DDR (Database Design Report) instead? Much will depend on what you are actually doing with this data. Tools like FMPerception will also give you many of the stats you are looking for. Again, it depends on what you are doing with it.
Also, can you post your actual calculation? Is the statistics table using unstored calculations? Is the statistics table related to any of the other tables? These are a couple of things that will affect how ExecuteSQL performs.
One thing to keep in mind: whether it's ExecuteSQL, a Perform Find, or a relationship, it's all the same basic query under the hood. So if it would be slow done one way, it's likely going to be slow with any other directly related approach.
Taking these one at a time:
All records count.
Placing an unstored calc in the target table allows you to get the count of the records through the relationship, without triggering a transfer of all records to the client. You can get the value from the first record in the relationship. It's a super light way to get that info compared to using Count(), which requires FileMaker to touch every record on the other side.
Sum of Records Matching a Value.
Using a field on the _Stats table with a relationship to the target table will reduce how much work FileMaker has to do to give you an answer.
Then having a summary field in the target table to sum the records may prove more efficient than using an aggregate function. The summary field will also only sum the records that match the relationship. (Just don't show that field on any of your layouts if you don't need it.)
ExecuteSQL is fastest when it can just rely on a simple index lookup. Once you get outside of that, it's primarily about testing to find the sweet-spot. Typically, I will use ExecuteSQL for retrieving either a JSON object from a user table, or verifying a single field value. Once you get into sorting and aggregate functions, you step outside of the optimizations of the function.
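For illustration only (hypothetical table and field names), a call that stays inside that sweet spot looks like:

ExecuteSQL ( "SELECT SettingsJSON FROM Users WHERE UserID = ?" ; "" ; "" ; $$CurrentUserID )

A single indexed equality lookup returning one value is about as cheap as ExecuteSQL gets.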
Also note: if you have an open record (that is, you as the current user have a record open), FileMaker Server doesn't know what data you have on the client side, so it sends ALL of the records. That's why I asked whether you were using unstored calcs with ExecuteSQL. It can seem slow when you can't control when the calculations fire. Often I will put the updating of that data into a scheduled script.

Postgres query shows under the 'Most time consuming' section of Heroku

I am using the query below for a search:
I guess the current time is not bad, but I am still looking for more optimizations. Also, in the analyze report the nested loop and nested loop joins show up in red. It would be great to get an idea of how to reduce that. I was thinking of adding an index for the search key. It would be great if I could get more suggestions to improve this. I have added the EXPLAIN ANALYZE result from 3 executions, which ran in production.
You could try adding ingredients.name or ingredients.code to an existing index, or creating a new index, so that more rows are filtered during the index scan on ingredients.
You should also avoid using a function on a column, such as LOWER(ingredients.name), to make sure the right index is used.
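As a sketch (the index names are made up), either an expression index that matches the LOWER() call, or a wider index on the filtered columns, could help here:

-- Lets the planner use an index even though the query filters on LOWER(name).
CREATE INDEX idx_ingredients_lower_name ON ingredients (LOWER(name));

-- Alternatively, widen the index so more filtering happens during the index scan.
CREATE INDEX idx_ingredients_name_code ON ingredients (name, code);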

Can I have more than 250 columns in the result of a PostgreSQL query?

Note that the PostgreSQL website mentions a limit on the number of columns of between 250 and 1600, depending on column types.
Scenario:
Say I have data in 17 tables each table having around 100 columns. All are joinable through primary keys. Would it be okay if I select all these columns in a single select statement? The query would be pretty complex but can be programmatically generated. The reason for doing this is to get denormalised data to populate a web page. Please do not ask why though :)
Quite obviously, if I do create table table1 as (<the complex select statement>), I will be hitting the limit mentioned on the website. But do simple queries also face the same restriction?
I could probably find this out by doing the exercise myself. In the next few days I probably will. However, if someone has an idea about this and the problems I might face by doing a single query, please share the knowledge.
I can't find definitive documentation to back this up, but I have received the following error using JDBC on PostgreSQL 9.1 before:
org.postgresql.util.PSQLException: ERROR: target lists can have at most 1664 entries
As I say though, I can't find the documentation for that, so it may vary by release.
I've found confirmation: the maximum is 1664.
This is one of the metrics available in the INFORMATION_SCHEMA.SQL_SIZING table.
SELECT * FROM INFORMATION_SCHEMA.SQL_SIZING
WHERE SIZING_NAME = 'MAXIMUM COLUMNS IN SELECT';

Optimize SQLite query that has multiple parameters using FMDB

I have a query that starts pretty simple:
SELECT zudf16, zudf17, zudf18, zudf19, zudf20, zitemid, zItemName, zPhotoName, zBasePrice FROM zskus WHERE zmanufacturerid=? AND zudf16=? ORDER BY zItemID
As choices are made I build on it:
SELECT zudf16, zudf17, zudf18, zudf19, zudf20, zitemid, zItemName, zPhotoName, zBasePrice FROM zskus WHERE zmanufacturerid=? AND zudf16=? AND zudf19=? AND zudf20=? ORDER BY zItemID
For each parameter I add, the query drops in performance, especially on iPad 1. Each of these columns has an index on it.
How can I increase the performance of this query as it gets more targeted, rather than losing performance?
This is across around 60,000 records that match the initial ManufacturerID, out of a total of about 200,000 rows in the database.
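One thing that might be worth trying (a sketch only, not from the original post): a single composite index covering the columns that are filtered together, since SQLite generally uses only one index per table in a query like this and separate single-column indexes do not combine well:

CREATE INDEX idx_zskus_filter ON zskus (zmanufacturerid, zudf16, zudf19, zudf20);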

Efficient way to insert millions of rows, convert the data and deal with it, on PostgreSQL+PostGIS

I have a big collection of data I want to use for user search later.
Currently I have 200 million resources (~50 GB). For each, I have a latitude and longitude. The goal is to create a spatial index to be able to do spatial queries on it.
So for that, the plan is to use PostgreSQL + PostGIS.
My data is in a CSV file. I tried to use a custom function to avoid inserting duplicates, but after days of processing I gave up. I found a way to load it into the database fast: with COPY it takes less than 2 hours.
Then, I need to convert latitude+longitude to geometry format. For that I just need to do:
ST_SetSRID(ST_MakePoint(longi::double precision, lat::double precision), 4326)
After some checking, I saw that for 200 million resources, I have 50 million points. So I think the best way is to have a table "TABLE_POINTS" that will store all the points, and a table "TABLE_RESOURCES" that will store the resources with a point_key.
So I need to fill "TABLE_POINTS" and "TABLE_RESOURCES" from the temporary table "TABLE_TEMP" without keeping duplicates.
For "POINTS" I did:
INSERT INTO TABLE_POINTS (point)
SELECT DISTINCT ST_SetSRID(ST_MakePoint(longi::double precision,lat::double precision),4326)
FROM TABLE_RESOURCES
I don't remember how much time it took, but I think it was matter of hours.
Then, to fill "RESOURCES", I tried:
INSERT INTO TABLE_RESOURCES (...,point_key)
SELECT DISTINCT ...,point_key
FROM TABLE_TEMP, TABLE_POINTS
WHERE ST_SetSRID(ST_MakePoint(longi::double precision,lat::double precision),4326) = point;
but again it takes days, and there is no way to see how far along the query is ...
Also, something important: the number of resources will continue to grow, currently by about 100K per day, so storage should be optimized to keep access to the data fast.
So if you have any ideas for the loading or for optimizing the storage, you are welcome to share them.
Look into optimizing Postgres first (i.e. google the Postgres unlogged, WAL and fsync options). Second, do you really need the points to be unique? Maybe just have one table with resources and points combined and don't worry about duplicate points, as it seems your duplicate lookup may be what's slow.
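A sketch of the kind of settings meant here (these trade durability for load speed, so use them only for the initial bulk load; table and column names follow the question):

-- An unlogged table skips WAL, which speeds up the initial COPY considerably.
CREATE UNLOGGED TABLE table_temp (longi text, lat text);  -- plus the other CSV columns

-- Per session: don't wait for the WAL flush on each commit.
SET synchronous_commit = off;

-- fsync is a server-wide setting in postgresql.conf, not per session;
-- turning it off risks corruption on a crash, so revert it after loading.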
For DISTINCT to work efficiently, you'll need a database index on those columns for which you want to eliminate duplicates (e.g. on the latitude/longitude columns, or even on the set of all columns).
So first insert all data into your temp table, then CREATE INDEX (this is usually faster than creating the index beforehand, as maintaining it during insertion is costly), and only afterwards do the INSERT INTO ... SELECT DISTINCT.
An EXPLAIN <your query> can tell you whether the SELECT DISTINCT now uses the index.
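Putting that together as a sketch (lower-cased versions of the table names from the question; the exact column list is assumed):

-- 1. Bulk load with COPY first, 2. create the index afterwards, 3. then deduplicate.
CREATE INDEX idx_table_temp_lon_lat ON table_temp (longi, lat);
ANALYZE table_temp;

INSERT INTO table_points (point)
SELECT ST_SetSRID(ST_MakePoint(longi::double precision, lat::double precision), 4326)
FROM (SELECT DISTINCT longi, lat FROM table_temp) AS dedup;

-- Check the plan before running the full statement:
EXPLAIN SELECT DISTINCT longi, lat FROM table_temp;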