My iPhone project doesn't fetch more than 5 rows of the SQLite table - iphone

When I try to fetch data from the SQLite database table, the NSArray has a capacity of 100 and the count array has a capacity of 9. count[5] returns rubbish data that isn't in the table at all, even though the first 5 records are returned correctly.
if (sqlite3_open([dbPath UTF8String], &database) == SQLITE_OK) {
    const char *sql2 = "select * from Bcars";
    sqlite3_stmt *selectstatment;
    if (sqlite3_prepare_v2(database, sql2, -1, &selectstatment, NULL) == SQLITE_OK) {
        while (sqlite3_step(selectstatment) == SQLITE_ROW) {
            // fetch the id
            count[i++] = sqlite3_column_int(selectstatment, 0);
            carobject.primarykey = sqlite3_column_int(selectstatment, 0);
            [ar addObject:[NSString stringWithUTF8String:(char *)sqlite3_column_text(selectstatment, 1)]];
            [ar1 addObject:[NSString stringWithUTF8String:(char *)sqlite3_column_text(selectstatment, 2)]];
        }
    }
} else {
    sqlite3_close(database);
    self.statustext.text = @"database closed";
}
self.statustext.text = [[NSString alloc] initWithFormat:@"%d", count[4]]; /* when I try to return count[5] it gives me a rubbish value!! */
self.searchtext.text = (NSString *)[ar objectAtIndex:5]; // an error occurs here!!

There are a couple of issues with your code:
You have not initialized the i counter, so I presume it has been initialized outside the snippet?
How do you know how much data is actually available in the database? The count array has to be large enough to hold all the rows, or you will overwrite unallocated memory. Maybe you know the size beforehand; otherwise you should find out, e.g. by executing a SQL statement such as SELECT COUNT(*) FROM Bcars.
Are you absolutely sure that the database holds more than 5 entries? The code just assumes this, but never checks the value of i or [ar count] to confirm you actually received that many entries.
Make sure you clean up after execution by calling sqlite3_finalize().
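The COUNT(*) suggestion can be sketched with Python's built-in sqlite3 module (the table name matches the question; the columns and data are made up for illustration): ask the database how many rows exist before sizing any buffer, instead of guessing.

```python
import sqlite3

# Hypothetical in-memory stand-in for the Bcars table from the question.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Bcars (id INTEGER PRIMARY KEY, make TEXT, model TEXT)")
conn.executemany("INSERT INTO Bcars VALUES (?, ?, ?)",
                 [(i, "make%d" % i, "model%d" % i) for i in range(1, 8)])

# Ask the database how many rows exist before allocating anything.
(row_count,) = conn.execute("SELECT COUNT(*) FROM Bcars").fetchone()
print(row_count)  # 7

# Now it is safe to fetch exactly that many rows.
rows = conn.execute("SELECT * FROM Bcars").fetchall()
assert len(rows) == row_count
```

In the Objective-C code, the result of that COUNT(*) query would tell you both how large the count array must be and whether index 5 is even valid.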

Related

sqlite3 update not working in ios

I am trying to update a SQLite db. This is the code I am using:
if (sqlite3_open([dbPath UTF8String], &database) == SQLITE_OK)
{
    const char *sql = "update tms set name = ?, place = ?, stars = ? where id = ?";
    sqlite3_stmt *selectStatement;
    // prepare the update statement
    int returnValue = sqlite3_prepare_v2(database, sql, -1, &selectStatement, NULL);
    if (returnValue == SQLITE_OK)
    {
        // -1 lets sqlite compute the byte length of the NUL-terminated UTF-8 string
        sqlite3_bind_text(selectStatement, 1, [[payloadDict valueForKey:@"userName"] UTF8String], -1, SQLITE_STATIC);
        sqlite3_bind_text(selectStatement, 2, [[payloadDict valueForKey:@"locName"] UTF8String], -1, SQLITE_STATIC);
        sqlite3_bind_int(selectStatement, 3, [[payloadDict valueForKey:@"starCount"] integerValue]);
        sqlite3_bind_int(selectStatement, 4, [[payloadDict valueForKey:@"rowid"] integerValue]);
        int success = sqlite3_step(selectStatement);
        if (success == SQLITE_DONE)
        {
            isExist = TRUE;
        }
        else {
            //NSAssert1(0, @"Error: Failed to Update %s", sqlite3_errmsg(database));
        }
        sqlite3_finalize(selectStatement);
    }
}
I am getting the value 101 (SQLITE_DONE) as success when sqlite3_step is executed, but the database is not updated with the new values.
How can I do this properly?
Thanks
I agree with @ott's excellent suggestion of making sure the database is located in the Documents directory (though I would have expected that to give you an error).
I'd also double-check the value returned by [[payloadDict valueForKey:@"rowid"] integerValue] to make sure it matches a value in the id column of one of the existing rows in your table. If it doesn't match anything, sqlite3_step will return SQLITE_DONE even though nothing was updated.
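That "success with zero effect" behavior is easy to demonstrate with Python's built-in sqlite3 module (table and values mirror the question; the data is made up): an UPDATE whose WHERE clause matches nothing still completes without error.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tms (id INTEGER PRIMARY KEY, name TEXT, place TEXT, stars INTEGER)")
conn.execute("INSERT INTO tms VALUES (1, 'old name', 'old place', 3)")

# An UPDATE whose WHERE clause matches no row still "succeeds"
# (the C API would return SQLITE_DONE) -- it just changes zero rows.
missed = conn.execute("UPDATE tms SET name = 'new' WHERE id = ?", (999,)).rowcount
print(missed)  # 0

# The same UPDATE with an id that actually exists changes one row.
hit = conn.execute("UPDATE tms SET name = 'new' WHERE id = ?", (1,)).rowcount
print(hit)  # 1
```

In the C API the equivalent check is sqlite3_changes(database) after sqlite3_step: if it returns 0, your WHERE clause matched nothing.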
Also note that you might want to make sure the id values are stored as numeric values, not text strings. SQLite is pretty lax about letting you store values in whatever data type you originally supplied when you first inserted the data, regardless of how the table was defined, and I'm not sure a WHERE clause looking for a numeric match will always succeed if the data was originally stored as a text value. If you used an id column definition like id INTEGER PRIMARY KEY AUTOINCREMENT, where the system defined the values for you automatically, this isn't an issue; but if you populated the id column manually, it's worth double-checking. (SQLite generally does a good job of interpreting strings as numbers on the fly, but there are some problematic edge cases: for example, if you stored the string value "5,127" in a numeric field and later try to retrieve its numeric value, SQLite won't know what to do with the comma and will interpret the numeric value as 5, not as 5127.)
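The "5,127" pitfall can be reproduced directly with Python's built-in sqlite3 module (column names are made up for illustration): a value that cannot be parsed as a number is stored as text despite the INTEGER declaration, and numeric conversion stops at the comma.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER)")  # column declared INTEGER

# "5,127" cannot be parsed as a number, so sqlite stores it as text
# despite the INTEGER column declaration (type affinity, not enforcement).
conn.execute("INSERT INTO t VALUES ('5,127')")
(stored_type,) = conn.execute("SELECT typeof(id) FROM t").fetchone()
print(stored_type)  # text

# Converting it to a number parses only up to the comma:
(as_int,) = conn.execute("SELECT CAST(id AS INTEGER) FROM t").fetchone()
print(as_int)  # 5, not 5127
```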

Fastest way to insert many rows into sqlite db on iPhone

Can someone explain the best way to insert a lot of data on the iPhone using FMDB? I see things like the beginTransaction command. I'm honestly not sure what it or setShouldCacheStatements does. I followed the code my coworker wrote so far, and this is what it looks like:
BOOL oldshouldcachestatements = _db.shouldCacheStatements;
[_db setShouldCacheStatements:YES];
[_db beginTransaction];
NSString *insertQuery = [[NSString alloc] initWithFormat:@"INSERT INTO %@ values(null, ?, ?, ?, ?, ?, ?, ?);", tableName];
[tableName release];
BOOL success;
bPtr += 2;
int *iPtr = (int *)bPtr;
int numRecords = *iPtr++;
for (NSInteger record = 0; record < numRecords; record++) {
    NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
    // Some business logic to read the binary stream
    NSNumber *freq = aFreq > 0 ? [NSNumber numberWithDouble:100 * aFreq / 32768] : [NSNumber numberWithDouble:-1.0];
    // these fields were calculated in the business logic section
    success = [_db executeUpdate:insertQuery,
               cID,
               [NSNumber numberWithInt:position],
               [NSString stringWithFormat:@"%@%@", [self stringForTypeA:typeA], [self stringForTypeB:typeB]], // these methods are switch statements that look up the decimal number and return a string
               [NSString stringWithFormat:@"r%i", rID],
               [self stringForOriginal:original],
               [self stringForModified:modified],
               freq];
    [pool drain];
}
[outerPool drain];
[_db commit];
[_db setShouldCacheStatements:oldshouldcachestatements];
Is this the fastest I can do? Is the writing the limitation of sqlite? I saw this: http://www.sqlite.org/faq.html#q19 and wasn't sure if this implementation was the best with FMDB, or if there was anything else I can do. Some other coworkers mentioned something about bulk inserts and optimizing them, but I'm honestly not sure what that means since this is my first sqlite encounter. Any thoughts or directions I can research? Thanks!
First of all, in most cases you do not have to be concerned about the performance of sqlite3 unless you are using it completely wrong.
The following things boost the performance of INSERT statements:
Transactions
As you already mentioned, transactions are the most important feature. Especially if you have a large number of queries, transactions will speed up your INSERTs by roughly 10 times.
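A rough sketch of that speed-up, using Python's built-in sqlite3 module against a real file on disk (a :memory: database would hide the per-commit disk-sync cost that transactions save; the table and row counts are made up for illustration):

```python
import os, sqlite3, tempfile, time

path = os.path.join(tempfile.mkdtemp(), "bench.db")
conn = sqlite3.connect(path, isolation_level=None)  # autocommit mode
conn.execute("CREATE TABLE t (x INTEGER)")

start = time.time()
for i in range(100):                     # one implicit transaction per INSERT
    conn.execute("INSERT INTO t VALUES (?)", (i,))
per_statement = time.time() - start

start = time.time()
conn.execute("BEGIN")
for i in range(100):                     # all INSERTs share one transaction
    conn.execute("INSERT INTO t VALUES (?)", (i,))
conn.execute("COMMIT")
batched = time.time() - start

(total,) = conn.execute("SELECT COUNT(*) FROM t").fetchone()
print(total)  # 200
# batched is usually far smaller than per_statement on a real disk,
# because only one commit has to be flushed instead of one hundred.
```

FMDB's beginTransaction/commit pair is doing exactly what the explicit BEGIN/COMMIT does here.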
PRAGMA Config
Sqlite3 provides several mechanisms that protect your database from corruption in worst-case scenarios. In some situations this protection is not needed; in others it is absolutely essential. The following sqlite3 commands may speed up your INSERT statements. With them, a normal crash of your app will not corrupt the database, but a crash of the OS could.
PRAGMA synchronous=OFF -- may cause corruption if the OS fails
PRAGMA journal_mode=MEMORY -- Insert statements will be written to the disk at the end of the transaction
PRAGMA cache_size = 4000 -- If your SELECTs are really big, you may need to increase the cache
PRAGMA temp_store=MEMORY -- Attention: increases RAM use
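These PRAGMAs can be issued like any other statement, and read back to confirm they took effect; a minimal sketch with Python's built-in sqlite3 module (the numeric read-back values are how sqlite reports the settings):

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# The PRAGMAs from the list above, issued before the bulk insert.
# Remember the caveat: synchronous=OFF risks corruption if the OS crashes.
conn.execute("PRAGMA synchronous=OFF")
conn.execute("PRAGMA journal_mode=MEMORY")
conn.execute("PRAGMA cache_size=4000")
conn.execute("PRAGMA temp_store=MEMORY")

# PRAGMAs without a value act as queries, so the settings can be verified:
(synchronous,) = conn.execute("PRAGMA synchronous").fetchone()
(temp_store,) = conn.execute("PRAGMA temp_store").fetchone()
print(synchronous, temp_store)  # 0 2  (0 = OFF, 2 = MEMORY)
```

In the C API the same strings go through sqlite3_exec before you start preparing the INSERT statements.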
Deactivate Indexes
Any SQL index slows an INSERT statement down. Check if your table has some indexes:
.indices <table_name>
If so, DROP the INDEX before the transaction and re-CREATE it afterwards.
One Select
I do not see a way of using a bulk insert as you are generating new data. However, you could collect the data and perform just one INSERT statement. This may boost your performance dramatically, but it also raises the possibility of failure (a syntax error, for instance).
One hack meets another hack: As sqlite3 does not support this directly, you have to use the UNION command to collect all insert statements accordingly.
INSERT INTO 'tablename'
      SELECT 'data1' AS 'column1', 'data2' AS 'column2'
UNION SELECT 'data3', 'data4'
UNION SELECT 'data5', 'data6'
UNION SELECT 'data7', 'data8'
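The UNION trick runs as written; a quick check with Python's built-in sqlite3 module (table and column names copied from the snippet above). One caveat worth knowing: plain UNION also removes duplicate rows, so UNION ALL is safer when duplicates must survive, and since SQLite 3.7.11 a multi-row VALUES list does the same job directly.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tablename (column1 TEXT, column2 TEXT)")

# The UNION hack from the answer, verbatim:
conn.execute("""
    INSERT INTO tablename
          SELECT 'data1' AS 'column1', 'data2' AS 'column2'
    UNION SELECT 'data3', 'data4'
    UNION SELECT 'data5', 'data6'
    UNION SELECT 'data7', 'data8'
""")
(n,) = conn.execute("SELECT COUNT(*) FROM tablename").fetchone()
print(n)  # 4

# Since SQLite 3.7.11, multi-row VALUES achieves the same thing:
conn.execute("INSERT INTO tablename VALUES ('data9', 'data10'), ('data11', 'data12')")
(n2,) = conn.execute("SELECT COUNT(*) FROM tablename").fetchone()
print(n2)  # 6
```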
Statement Caching
I would suggest avoiding statement caching, as there is an unfixed issue with this feature (and, as far as I know, it does not influence performance dramatically).
Memory Management
The last point I'd like to mention is about ObjC. Compared to basic operations, memory management is very expensive. Maybe you could avoid some stringWithFormat: or numberWithDouble: calls by preparing those variables outside the loop.
Summary
All in all, I don't think that you will have a problem with the speed of sqlite3 if you simply use transactions.
I found it really difficult to find a concrete code example on how to insert many rows really quickly. After much experimentation with FMDB and the help of the answer above, here's what I'm using at the moment:
[_objectController.dbQueue inDatabase:^(FMDatabase *db)
{
    [db open];
    [db setShouldCacheStatements:YES];
    NSArray *documentsJSONStream = responseObject[@"details"][@"stream"];
    static NSString *insertSQLStatment = @"INSERT INTO documents (`isShared`,`thumbnailURLString`,`userID`,`version`,`timestamp`,`tags`,`title`,`UDID`,`username`,`documentURLString`,`shareURLString`) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)";
    [db beginTransaction];
    for (NSDictionary *dJ in documentsJSONStream)
    {
        [db executeUpdate:insertSQLStatment withArgumentsInArray:@[dJ[@"isShared"], dJ[@"thumbnailURLString"], dJ[@"userID"], dJ[@"version"], dJ[@"timestamp"], dJ[@"tags"], dJ[@"title"], dJ[@"UDID"], dJ[@"username"], dJ[@"documentURLString"], dJ[@"shareURLString"]]];
    }
    [db commit];
    [db close];
}];
Using inTransaction gave me strange "FMDB is not open" errors. If there are any suggestions on how to improve this, please let me know.

iPhone SDK: loading UITableView from SQLite

I think I have a good handle on UITableViews and on getting/inserting data from/to a SQLite db. I am struggling with an architectural question.
My application saves 3 values per row in the database, and there can be many, many rows. How would I load them into the table?
From all the tutorials I have seen, at some point the entire database is loaded into an NSMutableArray or similar object by performing a SELECT statement.
Then when
-(UITableViewCell *) tableView: (UITableView *) tableView cellForRowAtIndexPath: (NSIndexPath *) indexPath
is called, the required rows are doled out from the previously loaded NSMutableArray (or similar structure).
But what if I have thousands of rows? Why would I pre-load them?
Should I just query database each time cellForRowAtIndexPath is called? If so, what would I use as an index?
Each row in the table will have an AUTOINCREMENT index, but since some rows may be deleted, the index will not correspond to the row's position in the table (in SQL I may have something like this, with the row with index 3 missing):
1 Data1 Data1
2 Data2 Data2
4 Data3 Data3
Thanks
I solve this by reading just what the table cell needs from my db into the datasource array, for all the existing db entries. The objects stay in the array unless they need to be removed.
For example, one of my apps reads 1,700 rows; for each row it creates an object, assigns an NSUInteger (the autoincrement value) and an NSString (the name of the object, which will be displayed in the cell), and puts them into the datasource array. This whole process takes only about 200-300 milliseconds. You'll have to test whether it takes too long for 10,000+ entries, but for a few thousand entries it should be okay to just read it all. I remember that memory consumption is also quite low, though I can't look up exactly how much ATM.
Then, when the user taps a row, I query the datasource array to find the object he just tapped, and this object then loads all its other values from the database, which it can do since it knows its own database key/id (it "hydrates" itself).
For completeness sake (using FMDB):
FMResultSet *res = [db executeQuery:@"SELECT my_key, my_name FROM table"];
while ([res next]) {
    NSDictionary *dict = [res resultDict];
    MyObj *obj = [MyObj obj];
    obj.id = [dict objectForKey:@"my_key"];
    obj.name = [dict objectForKey:@"my_name"];
    [datasourceArray addObject:obj];
}
Here is what I did, which worked perfectly (I have tested with 3k rows in the DB). The app is EZBudget.
I read only the indexes into the array, but I do so for the entire table, however many rows it has.
In -(UITableViewCell *) tableView: (UITableView *) tableView cellForRowAtIndexPath: (NSIndexPath *) indexPath I do a SELECT using only the index looked up via indexPath.
If a certain row needs to be deleted (a transaction is deleted), I delete the index from the array and delete the corresponding row from the table.
When a record is added to the table, SQLite returns the autoincremented index for the newly added record. I take this index and add it to the index array.
So my index array is always in sync with the entire table. At any point I can SELECT an entire record by an index in the array.
Thanks everyone for suggestions!
You can always do one select statement to fetch just the 'autoincrement indices' in the order you want them, and keep them in an array (you'll have an array with thousands of integers, should be fine). Then, in 'cellForRowAtIndexPath', get the right 'autoincrement index' from your array and fetch all the data you need (I'm calling 'id' your 'autoincrement index' below...).
NSArray *ids; // Put the result of 'select id from datatable' here
- (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath {
// Fetch full row where autoincrement index is: [ids objectAtIndex:indexPath.row]
Of course, you can even use an int[] (instead of an NSArray *) if you want to save on calls to intValue and such... You can allocate your int[] using the result of 'select count(*) from datatable'.
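The keys-up-front pattern both answers describe can be sketched with Python's built-in sqlite3 module (table and column names are made up; the deleted row reproduces the "gap in the ids" situation from the question):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE datatable (id INTEGER PRIMARY KEY AUTOINCREMENT, a TEXT, b TEXT)")
conn.executemany("INSERT INTO datatable (a, b) VALUES (?, ?)",
                 [("Data%d" % i, "Data%d" % i) for i in (1, 2, 3)])
conn.execute("DELETE FROM datatable WHERE id = 2")  # ids now have a gap: 1, 3

# Load only the keys up front (the "ids" array from the answer):
ids = [r[0] for r in conn.execute("SELECT id FROM datatable ORDER BY id")]
print(ids)  # [1, 3]

# The cellForRowAtIndexPath equivalent: table row index -> id -> full row,
# fetched on demand so the whole table never lives in memory.
def row_for_index(row_index):
    return conn.execute("SELECT * FROM datatable WHERE id = ?",
                        (ids[row_index],)).fetchone()

print(row_for_index(1))  # (3, 'Data3', 'Data3')
```

Deleting a row then means removing one entry from ids and one row from the table, so the array and the table stay in sync exactly as described above.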

iPhone Sqlite Performance Problem

Hey guys, here is the low down.
I have one table, consisting of a primary key (col1), text (col2), and text (col3). Basically a map. This table contains about 200k rows. It takes me about 1.x seconds to retrieve a single row (which is all I want). I'm basically using select * from table where col2 = 'some value'.
I've tried creating an index on all three columns, on each column individually, and on col2 and col3, but none of this has improved my situation at all.
I'm wondering, is this normal? I haven't come across any posts of people complaining about slow sqlite performance for big tables, so I'm wondering what I'm doing wrong.
Any help would be greatly appreciated.
I would say that this is absolutely not typical.
Even when you have a large table, an access via an index should be rather fast.
What you could do: create only one index, on col2 (that is the only one you need for this select!).
Then use "EXPLAIN SELECT ..." to get information on what SQLite makes of it. The result is not easy to read, but with some experience it is possible to see whether the index is used. You could also post the result here.
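The friendlier variant, EXPLAIN QUERY PLAN, makes the index check easy; a sketch with Python's built-in sqlite3 module (the table mirrors the question's col1/col2/col3 layout, and the exact wording of the plan text varies between SQLite versions):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (col1 INTEGER PRIMARY KEY, col2 TEXT, col3 TEXT)")

# Without an index on col2, the plan is a full table scan.
before = " ".join(r[-1] for r in
                  conn.execute("EXPLAIN QUERY PLAN SELECT * FROM t WHERE col2 = 'x'"))
print(before)  # e.g. "SCAN t" (wording varies by SQLite version)

conn.execute("CREATE INDEX idx_col2 ON t (col2)")
after = " ".join(r[-1] for r in
                 conn.execute("EXPLAIN QUERY PLAN SELECT * FROM t WHERE col2 = 'x'"))
print(after)   # e.g. "SEARCH t USING INDEX idx_col2 (col2=?)"
```

If the plan still says SCAN after you created the index, the index is not being used, which is exactly the symptom worth posting.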
I solved the problem. When I created a new sqlite database file and added it to the project, Xcode didn't properly recompile; it was still using the old file. I had to remove the old database from the project, remove the compiled version on the computer, clean the project, then compile it and confirm it crashed because the database was missing. Then I again removed the compiled files, cleaned, and re-added the new sqlite database.
This is why there was no performance improvement whatsoever even after I created the index...
Strange; would this be considered a bug in Xcode?
I've made this class a singleton (called SQLAdapter), and it contains two methods: one to copy the database if needed, and the other to execute my SQL code.
Here is the SQL method. This was the first time I coded in Obj-C, so just ignore the string-append methods; I'm changing them as we speak...
- (NSString *)getMapping:(NSString *)test {
    // Our return string
    NSString *res = test;
    // Set up the database object
    sqlite3 *database;
    NSString *sqlStmnt;
    if (direction) sqlStmnt = @"select * from table where col1 = '";
    else sqlStmnt = @"select * from table where col2 = '";
    NSString *tStmt = [sqlStmnt stringByAppendingString:test];
    NSString *sqlState = [tStmt stringByAppendingString:@"'"];
    const char *sqlStatement = [sqlState UTF8String];
    // Open the database from the user's filesystem
    if (sqlite3_open([databasePath UTF8String], &database) == SQLITE_OK) {
        // Compile the SQL statement
        sqlite3_stmt *compiledStatement;
        if (sqlite3_prepare_v2(database, sqlStatement, -1, &compiledStatement, NULL) != SQLITE_OK) {
            NSAssert1(0, @"Error: during prepare '%s'.", sqlite3_errmsg(database));
        }
        // the looked-up value is already embedded in the statement above,
        // so there is nothing to bind here
        if (sqlite3_step(compiledStatement) == SQLITE_ROW) { // we got a match
            // return the desired language translation
            res = [NSString stringWithUTF8String:(char *)sqlite3_column_text(compiledStatement, (direction) ? 2 : 1)];
        }
        sqlite3_finalize(compiledStatement); // Release the compiled statement from memory
    }
    sqlite3_close(database);
    return res;
}
Pretty much exactly the same way that the SQLiteBooks project does it if I'm not mistaken...

Best way to reference image through out app?

My application is database driven. Each row contains the main content column I display in a UIWebView. Most of the rows (the content column) have a reference to image1 and some to image2. I convert these to base64 and add the image string into the row. However, if either image changes, it means I have to go back through all the rows and update the base64 string.
I can provide a unique string in the row content such as {image1}. It means I'll have to search through the entire content for that row and replace with the base64 version of the image. These images are also always at the bottom of the row content. Not sure how having to go through all content first before replacing will affect performance. Is there a better way to do this?
I hope I am understanding your question correctly.
If the images are not very large, then it is probably OK to just use the LIKE keyword, as in:
sqlite3_stmt *statement = nil;
if (statement == nil)
{
    const char *sql = [[NSString stringWithFormat:@"SELECT imageContent FROM imageDatabase WHERE imageContent LIKE '%@%@%@'", @"%", imageValue, @"%"] UTF8String];
    if (sqlite3_prepare_v2(db, sql, -1, &statement, NULL) != SQLITE_OK) {
        //NSAssert1(0, @"Error: failed to prepare statement with message '%s'.", sqlite3_errmsg(db));
        return;
    }
}
while (sqlite3_step(statement) == SQLITE_ROW) {
    // query was successful
    // perform some action on the resulting data
}
sqlite3_finalize(statement);
statement = nil;
If you set imageValue to image1, image2, or whatever, that will give you the item you are looking for from the database without having to do string manipulation in code. I am assuming you know SQL, so sorry if this is redundant, but the above will search your imageDatabase for anything containing the image1 or image2 imageValue. Once you find the row, you can update it, or you can use a WHERE clause with the UPDATE SQL statement, but I find that a bit dangerous due to the possibility of inadvertently updating multiple rows without first checking the content to make sure it is what you want.
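The same LIKE search can be sketched with Python's built-in sqlite3 module (the table name matches the snippet above; the placeholder string and rows are made up). Binding the wildcard pattern as a parameter also avoids assembling the SQL with string formatting:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE imageDatabase (imageContent TEXT)")
conn.execute("INSERT INTO imageDatabase VALUES ('header text {image1} footer')")
conn.execute("INSERT INTO imageDatabase VALUES ('no placeholder here')")

# Same pattern as the LIKE '%...%' above, but bound as a parameter
# instead of spliced into the SQL with stringWithFormat:.
image_value = "{image1}"
rows = conn.execute(
    "SELECT imageContent FROM imageDatabase WHERE imageContent LIKE ?",
    ("%" + image_value + "%",)).fetchall()
print(len(rows))  # 1 -- only the row containing the placeholder matches
```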
Also, if you are doing database updates with this, you will find a major performance boost by wrapping your inserts and updates with transactions like:
const char *sql = "BEGIN TRANSACTION;";
char *errMsg;
sqlite3_exec(db, sql, nil, 0, &errMsg);
// ... perform your INSERTs / UPDATEs here ...
const char *commit = "COMMIT;";
sqlite3_exec(db, commit, nil, 0, &errMsg);
Grouping the statements in a transaction lets SQLite commit to disk once instead of once per statement. I have seen insert and update queries get twice as fast with transactions.
If the database is very large, the LIKE search will have a significant performance cost, but doing the string manipulation in memory would have a large memory cost instead. If you use SQLite's LIKE approach, the string search is done serially on disk and has less of a memory hit.
Once you have found the specific item, you can do the regular-expression search and replace on just that particular string, keeping your code's memory footprint smaller.
Why not have the images in a table, with image_ID (a unique integer) and image_data (a blob)? Then in your main table, store just the image_ID, and do a join if you need the actual image?
On an alternative interpretation of your question (if that answer doesn't make sense to you): why not break the content into three fields, the stuff before the image, the image, and the stuff after? Store the image_ID for the middle part (not the data; get that with an SQL JOIN on the image table). Then build the final content with concatenation.
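The normalized-image idea can be sketched with Python's built-in sqlite3 module (the image_ID/image_data schema follows the suggestion above; table names and blob contents are made up). The payoff is that replacing an image touches exactly one row, no matter how many content rows reference it:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE images (image_ID INTEGER PRIMARY KEY, image_data BLOB)")
conn.execute("CREATE TABLE content (row_ID INTEGER PRIMARY KEY, body TEXT, image_ID INTEGER)")
conn.execute("INSERT INTO images VALUES (1, ?)", (b"old image bytes",))
conn.execute("INSERT INTO content VALUES (10, 'row body', 1)")
conn.execute("INSERT INTO content VALUES (11, 'another row', 1)")

# Updating the image in one place updates it for every referencing row:
conn.execute("UPDATE images SET image_data = ? WHERE image_ID = 1", (b"new bytes",))

rows = conn.execute("""
    SELECT c.body, i.image_data
    FROM content c JOIN images i ON c.image_ID = i.image_ID
    ORDER BY c.row_ID
""").fetchall()
print(rows[0])  # ('row body', b'new bytes')
```

This removes the need to rewrite base64 strings inside every row's content when an image changes.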