iPhone Web App database: SQLite and MySQL

I am making a planner application for the iPhone that can work online to store tasks in a MySQL server. However, when I attempt to synchronise the two databases I have a problem. The issue seems to be that I can't insert more than one set of values at once into the iPhone database:
INSERT INTO planner (title, duedate, submitdate, subject, info) VALUES ('Poster', '21092010', '28092010', 'chemistry', 'elements poster'), ('Essay', '22092010', '25092010', 'english', 'essay on shakespeare')
This does not work. There is no error or anything like that; it simply does nothing, or sometimes inserts the first row but not the second. Perhaps I am going about this the wrong way, so to explain the situation:
I have an array of items, each with these five properties (call them 1, 2, 3, 4 and 5), and I need the whole array inserted into the local database.
People on this site seem to be able to do this so I hope you can help,
Thanks,
Tom Ludlow

The SQLite INSERT syntax only supports single-row inserts. This should not be a problem.
Why? Because you should be using parameterized queries, not concatenating a giant string together and hoping that you've done all the "escaping" properly so that there are no SQL injection vulnerabilities. Additionally, sticking everything into the statement increases parsing overheads (you've spent all that effort escaping things, and now SQLite has to spend some more effort to un-escape things).
The suggested way to use a statement is something like this:
sqlite3_exec(db, "BEGIN", NULL, NULL, NULL);
sqlite3_prepare_v2(db, "INSERT INTO planner (title,duedate,submitdate,subject,info) VALUES (?,?,?,?,?)", -1, &stmt, NULL);
For each row you want to insert:
sqlite3_bind_*() the five parameters (bound parameters are 1-based, so 1, 2, 3, 4, 5).
sqlite3_step(). It should return SQLITE_DONE.
sqlite3_reset() (so you can reuse the statement) and sqlite3_clear_bindings() (for good measure).
Then, when all rows are done:
sqlite3_finalize() to destroy the statement.
sqlite3_exec(db, "COMMIT", NULL, NULL, NULL);
I've wrapped the inserts in a transaction to increase performance (outside of a transaction, each INSERT runs in its own transaction, which I've found to be significantly slower...).
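Putting those steps together, a minimal sketch in C might look something like the following (the Task struct, the insert_tasks() helper and the error handling are illustrative assumptions, not code from the question):

/* Minimal sketch of the steps above. Assumes an open sqlite3 *db handle; the Task
   struct and insert_tasks() name are made up for illustration. Error handling is
   kept to a bare minimum. */
#include <sqlite3.h>
#include <stdio.h>

typedef struct {
    const char *title, *duedate, *submitdate, *subject, *info;
} Task;

static int insert_tasks(sqlite3 *db, const Task *tasks, int count)
{
    sqlite3_stmt *stmt = NULL;

    sqlite3_exec(db, "BEGIN", NULL, NULL, NULL);
    if (sqlite3_prepare_v2(db,
            "INSERT INTO planner (title,duedate,submitdate,subject,info) VALUES (?,?,?,?,?)",
            -1, &stmt, NULL) != SQLITE_OK) {
        fprintf(stderr, "prepare failed: %s\n", sqlite3_errmsg(db));
        sqlite3_exec(db, "ROLLBACK", NULL, NULL, NULL);
        return -1;
    }

    for (int i = 0; i < count; i++) {
        /* Bound parameters are 1-based; SQLITE_TRANSIENT tells SQLite to copy the strings. */
        sqlite3_bind_text(stmt, 1, tasks[i].title,      -1, SQLITE_TRANSIENT);
        sqlite3_bind_text(stmt, 2, tasks[i].duedate,    -1, SQLITE_TRANSIENT);
        sqlite3_bind_text(stmt, 3, tasks[i].submitdate, -1, SQLITE_TRANSIENT);
        sqlite3_bind_text(stmt, 4, tasks[i].subject,    -1, SQLITE_TRANSIENT);
        sqlite3_bind_text(stmt, 5, tasks[i].info,       -1, SQLITE_TRANSIENT);

        if (sqlite3_step(stmt) != SQLITE_DONE)
            fprintf(stderr, "insert failed: %s\n", sqlite3_errmsg(db));

        sqlite3_reset(stmt);           /* reuse the statement for the next row */
        sqlite3_clear_bindings(stmt);
    }

    sqlite3_finalize(stmt);
    sqlite3_exec(db, "COMMIT", NULL, NULL, NULL);
    return 0;
}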
For an Objective-C wrapper around sqlite, you might try FMDB (it has a reasonably nice wrapper around sqlite3_bind_*(), except it uses SQLITE_STATIC when it should probably be using SQLITE_TRANSIENT or retaining/copying its arguments).

Have you tried to split your inserts so you only insert a single row at a time?
tc hints at this in his answer, though using native code.
Try looking at this example with two inserts:
/* Substitute with your openDatabase call */
var db = openDatabase('yourdb', '1.0', 'Planner DB', 2 * 1024 * 1024);
db.transaction(function (tx) {
  tx.executeSql('INSERT INTO planner (title, duedate, submitdate, subject, info) VALUES ("Poster", "21092010", "28092010", "chemistry", "elements poster")');
  tx.executeSql('INSERT INTO planner (title, duedate, submitdate, subject, info) VALUES ("Essay", "22092010", "25092010", "english", "essay on shakespeare")');
});
/Mogens

Related

Does PostgreSQL have the equivalent of an Oracle ArrayBind?

Oracle has the ability to do bulk inserts by passing arrays as bind variables. The database then does a separate row insert for each member of the array:
http://www.oracle.com/technetwork/issue-archive/2009/09-sep/o59odpnet-085168.html
Thus if I have an array:
string[] arr = { "1", "2", "3" };
And I pass this as a bind to my SQL:
insert into my_table(my_col) values (:arr)
I end up with 3 rows in the table.
Is there a way to do this in PostgreSQL w/o modifying the SQL? (i.e. I don't want to use the copy command, an explicit multirow insert, etc)
The nearest thing you can use is:
insert into my_table(my_col) SELECT unnest(:arr)
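Purely for illustration, here is roughly how that can be driven from C with libpq, passing the whole array as one text parameter (the connection setup is assumed, and my_col is assumed to be an integer column as in the example above):

/* Sketch only: assumes an already-connected PGconn *conn. */
#include <libpq-fe.h>
#include <stdio.h>

static void insert_array(PGconn *conn)
{
    const char *paramValues[1] = { "{1,2,3}" };   /* a PostgreSQL array literal, sent as text */

    PGresult *res = PQexecParams(conn,
        "INSERT INTO my_table(my_col) SELECT unnest($1::int[])",
        1,           /* one parameter */
        NULL,        /* let the server infer the parameter type from the cast */
        paramValues,
        NULL, NULL,  /* text format, so no lengths or format codes needed */
        0);          /* ask for text results */

    if (PQresultStatus(res) != PGRES_COMMAND_OK)
        fprintf(stderr, "insert failed: %s", PQerrorMessage(conn));
    PQclear(res);
}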
PgJDBC supports COPY, and that's about your best option. I know it's not what you want, and it's frustrating that you have to use a different row representation, but it's about the best you'll get.
That said, you will find that if you prepare a statement then addBatch and executeBatch, you'll get pretty solid performance. Sufficiently so that it's not usually worth caring about using COPY. See Statement.executeBatch. You can create "array bind" on top of that with a trivial function that's a few lines long. It's not as good as server-side array binding, but it'll do pretty well.
No, you cannot do that in PostgreSQL.
You'll either have to use a multi-row INSERT or a COPY statement.
I'm not sure which language you're targeting, but in Java, for example, this is possible using Connection.createArrayOf().
Related question / answer:
error setting java String[] to postgres prepared statement

Postgres multi-update with multiple `where` cases

Excuse what seems like it could be a duplicate. I'm familiar with multiple updates in Postgres... but I can't seem to figure out a way around this one...
I have a photos table with the following columns: id (primary key), url, sort_order, and owner_user_id.
We would like to allow our interface to allow the user to reorder their existing photos in a collection view. In which case when a drag-reorder interaction is complete, I am able to send a POST body to our API with the following:
req.body.photos = [{id: 345, order: 1}, {id: 911, order: 2}, ...<etc>]
In which case I can turn around and run the following query in a loop for each item in the array:
photos.forEach(function (item) {
  db.runQuery('update photos set sort_order=$1 where id=$2 and owner_user_id=$3', [item.order, item.id, currentUserId])
})
It's generally frowned upon to run database queries inside loops, so if there's any way this can be done with one query, that would be fantastic.
Much thanks in advance.
Running a select query inside of a loop is definitely questionable, but I don't think multiple updates are necessarily frowned upon if the data you are updating doesn't natively reside in the database. Doing these as separate transactions, however, might be.
My recommendation would be to wrap all known updates in a single transaction. This is not only kinder to the database (compile once, execute many, commit once), but this is an ACID approach to what I believe you are trying to do. If, for some reason, one of your updates fails, they will all fail. This prevents you from having two photos with an order of "1."
I didn't recognize your language, but here is an example of what this might look like in C#:
NpgsqlConnection conn = new NpgsqlConnection(connectionString);
conn.Open();
NpgsqlTransaction trans = conn.BeginTransaction();
NpgsqlCommand cmd = new NpgsqlCommand("update photos set sort_order=:SORT where id=:ID",
    conn, trans);
cmd.Parameters.Add(new NpgsqlParameter("SORT", NpgsqlDbType.Integer));
cmd.Parameters.Add(new NpgsqlParameter("ID", NpgsqlDbType.Integer));
foreach (var photo in photos)
{
    cmd.Parameters[0].Value = photo.SortOrder;
    cmd.Parameters[1].Value = photo.Id;
    cmd.ExecuteNonQuery();
}
trans.Commit();
conn.Close();
I think in Perl, for example, it would be even simpler -- turn off DBI AutoCommit and commit after the inserts.
CAVEAT: Of course, add error trapping -- I was just illustrating what it might look like.
Also, I changed your update SQL. If "Id" is the primary key, I don't think you need the additional owner_user_id=$3 clause to make it work.

insert based on value in first row

I have a fixed file that I am importing into a single column with data similar to what you see below:
ABC$ WC 11683
11608000163118430001002010056788000000007680031722800315723
11683000486080280000002010043213000000007120012669100126691
ABC$ WC 000000020000000148000
ABC$ WC 11683
1168101057561604000050200001234000000027020023194001231940
54322010240519720000502000011682000000035640006721001067210
1167701030336257000050200008765000000023610029066101151149
11680010471244820000502000011680000000027515026398201263982
I want to split and insert this data into another table but I want to do so as long as the '11683' is equal to a column value in a different table + 1. I will then increment that value (not seen here).
I tried the following:
declare @blob as varchar(5)
declare @Num as varchar(5)
set @blob = substring(sdg_winn_blob.blob, 23,5)
set @Num = (Cnum.num + 1)
IF @blob = @Num
INSERT INTO SDG_CWF
(
GAME,SERIAL,WINNER,TYPE
)
SELECT convert(numeric, substring(blob,28, 5)),convert(numeric, substring(blob, 8, 9)),
(Case when (substring(blob, 6,2)='10') then '3'
when (substring(blob, 6,2)='11') then '4'
else substring(blob, 7, 1)
End),
(Case when (substring(blob, 52,2)='10') then '3'
when (substring(blob, 52,2)='11') then '4'
else substring(blob, 53, 1)
End)
FROM sdg_winn_blob
WHERE blob not like 'ABC$%'
else
print 'The Job Failed'
The insert works fine until I try to check to see if the number at position (23, 5) is the same as the number in the Cnum table. I get the error:
Msg 4104, Level 16, State 1, Line 4
The multi-part identifier "sdg_winn_blob.blob" could not be bound.
Msg 4104, Level 16, State 1, Line 5
The multi-part identifier "Cnum.num" could not be bound.
It looks like you may be used to a procedural, object oriented style of coding. SQL Server wants you to think quite differently...
This line:
set @blob = substring(sdg_winn_blob.blob, 23,5)
Is failing because SQL interprets it in isolation. Within just that line, you haven't told SQL what the object sdg_winn_blob is, nor its member blob.
Since those things are database tables / columns, they can only be accessed as part of a query including a FROM clause. It's the FROM that tells SQL where these things are.
So you'll need to replace that line (and the immediate next one) with something like the following:
Select @blob = substring(sdg_winn_blob.blob, 23,5)
From sdg_winn_blob
Where...
Furthermore, as far as I can tell, your whole approach here is conceptually iterative: you're thinking about this in terms of looking at each line in turn, processing it, then moving onto the next. SQL does provide facilities to do that (which you've not used here), but they are very rarely the best solution. SQL prefers (and is optimised for) a set based approach: design a query that will operate on all rows in one go.
As it stands I don't think your query will ever do quite what you want, because you're expecting iterative behaviour that SQL doesn't follow.
The way you need to approach this if you want to "think like SQL Server" is to construct (using just SELECT type queries) a set of rows that has the '11683' type values from the header rows, applied to each corresponding "data" row that you want to insert to SDG_CWF.
Then you can use a SQL JOIN to link this row set to your Cnum table and ascertain, for each row, whether it meets the condition you want in Cnum. This set of rows can then just be inserted into SDG_CWF. No variables or IF statement involved (they're necessary in SQL far less often than some people think).
There are multiple possible approaches to this, none of them terribly easy (unless I'm missing something obvious). All will need you to break your logic down into steps, taking your initial set of data (just a blob column) and turning it into something a bit closer to what you need, then repeating. You might want to work this out yourself but if not, I've set out an example in this SQLFiddle.
I don't claim that example is the fastest or neatest (it isn't) but hopefully it'll show what I mean about thinking the way SQL wants you to think. The SQL engine behind that website is using SQL 2008, but the solution I give should work equally well on 2005. There are niftier possible ways if you get access to 2012 or later versions.

Catch-all-search, dynamic SQL?

I asked a question yesterday about a procedure we're trying to re-write/optimize in our application. It's off of a search form with a bunch of criteria the user can specify. 40 parameters, 3 of which are long strings of Guids that I am passing into a UDF that returns a Table variable, all 3 of which we JOIN into our main FROM statement.
We did much of this query using Dynamic SQL, and one of the main reasons we're re-writing the whole thing is because it's Dynamic SQL. Everything I've ever read says Dynamic SQL is bad, especially for execution plans and optimization. Then I started coming across articles like these two...
Sometimes the Simplest Solution isn't the Best Solution
Erland Sommarskog - Dynamic SQL Conditions in T-SQL
I've always thought Dynamic SQL was bad for security and optimization, and we've tried removing it from our system wherever possible. Now we're restructuring the most-executed query in our system (the main search query) and we thought stripping out all the Dynamic SQL was going to help.
Basically replacing
IF (@Param1 IS NOT NULL)
    SET @SQLString = @SQLString + ' AND FieldX = @Param1'
...execute the @SQLString
with one large SQL block that has
WHERE (@Param1 IS NULL OR FieldX = @Param1)
Reading those two articles, it seems like this is going to work against me. I can't use RECOMPILE because we're still on 2k5, and even if we could, this stored procedure is very high-use. Do I really want to write this query in Dynamic SQL? How can it be faster if no execution plans can be stored?

sqlite fetching on ios device slow but fast on simulator

I have a database with 3 tables having 700,000 records each. I have added a search feature in the app which uses the query...
const char *sqlstatement = "select * from artist where name like ?";
sqlite3_stmt *compiledstatement;
if (sqlite3_prepare_v2(database, sqlstatement, -1, &compiledstatement, NULL) == SQLITE_OK)
{
    sqlite3_bind_text(compiledstatement, 1, [[NSString stringWithFormat:@"%%%@%%", self.searchBar.text] UTF8String], -1, SQLITE_STATIC);
    while (sqlite3_step(compiledstatement) == SQLITE_ROW) {
        // read each matching row's columns here
    }
    sqlite3_finalize(compiledstatement);
}
This was OK on the simulator but took about a minute on the iOS device. So I used SQLite Manager to add indexes to the table columns; the database size grew from 76 MB to 166 MB, but now this query takes about 1 to 2 seconds on the simulator and about 10 to 15 seconds on the device. So it's an improvement, but are there any suggestions to improve it further? No, I cannot use Core Data at this point in time.
The first point to note is that SQLite will not use an index for LIKE clauses, even if they are of the form LIKE '...%' (e.g. LIKE 'Fred%'), the point being that most collation sequences are case-sensitive. The improvement in performance that you observe is due to SQLite using what is called an index search - it can search the index rather than having to plough through the whole table.
15 seconds for 700,000 records is not bad, in fact it is about what I am getting myself, for a very similar query. To get any improvement on this, you would have to look into doing a full text search (FTS). This involves a lot more work than simply adding an index, but it may be the way to go for you.
If the query returns a lot of rows, and you are using these to populate a table view, there may be another performance issue due to having to process so many rows, even if you are only pulling off the row ids. My solution was to limit the number of rows fetched to about 5,000, on the grounds that nobody would want to scroll through more than that.
Use the FTS feature of SQLite. FTS is enabled by default and will solve the query performance issue with the "like". You need to add all rows to an FTS virtual table, then use "match" instead of "like". See the documentation here: http://www.sqlite.org/fts3.html. You can expect performance in milliseconds instead of seconds.
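As a rough sketch of what the FTS route could look like with the same sqlite3 C API (artist_fts is an assumed table name, and the one-off population step and prefix query are illustrative, not taken from the question):

#include <sqlite3.h>
#include <stdio.h>

/* Sketch only. "db" is an open sqlite3 handle; artist_fts is an assumed table name. */

/* One-off setup (e.g. after creating or upgrading the database): index the existing rows. */
static void build_fts_index(sqlite3 *db)
{
    sqlite3_exec(db,
        "CREATE VIRTUAL TABLE artist_fts USING fts3(name);"
        "INSERT INTO artist_fts(rowid, name) SELECT rowid, name FROM artist;",
        NULL, NULL, NULL);
}

/* Search with MATCH instead of LIKE; "fred*" matches tokens starting with "fred". */
static void search_artists(sqlite3 *db, const char *searchText)
{
    char term[256];
    sqlite3_stmt *stmt = NULL;

    snprintf(term, sizeof term, "%s*", searchText);
    if (sqlite3_prepare_v2(db,
            "SELECT rowid, name FROM artist_fts WHERE name MATCH ?",
            -1, &stmt, NULL) == SQLITE_OK)
    {
        sqlite3_bind_text(stmt, 1, term, -1, SQLITE_TRANSIENT);
        while (sqlite3_step(stmt) == SQLITE_ROW) {
            /* read each matching row here, e.g. sqlite3_column_text(stmt, 1) */
        }
        sqlite3_finalize(stmt);
    }
}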