What is the best way to do in-app SQLite data migration? As I am new to iPhone development, I created my app using SQLite. Please help with this concern.
You create a database upgrade procedure and have it execute the first time the application starts after an update.
This way, any new table is created through a CREATE statement when the application loads for the first time.
It also preserves all the data the user already has in the current database. This is the most commonly recommended approach, since there is no additional requirement to migrate your existing database anywhere.
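A minimal sketch of such an upgrade procedure in SQLite, using the user_version pragma to track the schema version (the table and column names here are placeholders):

-- Returns 0 for a fresh database; compare it against your target version.
PRAGMA user_version;

-- If the reported version is below the target, apply the upgrade, e.g.:
CREATE TABLE IF NOT EXISTS Settings (
    id    INTEGER PRIMARY KEY,
    key   TEXT NOT NULL,
    value TEXT
);

-- Record that the upgrade has been applied.
PRAGMA user_version = 1;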
Also please refer to my answer in this link:
Don't want to replace the old database when app is updated
Hope this helps you.
If you need more help then please feel free to contact me.
EDIT:
If you need to make changes to an existing table that already contains data, follow these steps:
1) Rename the existing table
Say the table name was 'TestTable'; rename it to 'TestTableOld'.
2) Create a new table called 'TestTable' that has the new columns you want and any other alterations made.
3) Copy data from 'TestTableOld' to 'TestTable' with the query like:
INSERT INTO TestTable (col1, col2, ..., coln) SELECT col1, col2, ..., coln FROM TestTableOld;
4) Drop the table 'TestTableOld'
Follow the above four-step procedure: it works like an ALTER statement and preserves all the data from your original table, as sketched below.
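Putting the four steps together, a sketch in SQLite, assuming hypothetical columns col1 and col2 plus a new column col3 (wrapping the steps in a transaction so a failure cannot leave the table half-migrated):

BEGIN TRANSACTION;

-- 1) Rename the existing table out of the way.
ALTER TABLE TestTable RENAME TO TestTableOld;

-- 2) Create the new table with the desired schema.
CREATE TABLE TestTable (
    col1 INTEGER PRIMARY KEY,
    col2 TEXT,
    col3 TEXT DEFAULT ''   -- the newly added column
);

-- 3) Copy the old data across; col3 takes its default.
INSERT INTO TestTable (col1, col2)
SELECT col1, col2 FROM TestTableOld;

-- 4) Drop the renamed original.
DROP TABLE TestTableOld;

COMMIT;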
Hope you get it.
I've managed to CREATE PUBLICATION ( and corresponding SUBSCRIPTION ) for a set of tables in my database. It's all working wonderfully. Now someone wants to add a column to one of the base tables (ayayay) and I can't seem to find any documentation regarding how to (most-easily) manage this situation and I'm hoping someone can lead me in the right direction.
Adding a column is easy:
First, add the column on the subscriber. That won't break replication; the new column will simply remain empty there.
Then, add the column on the publisher.
If you change the set of tables that is replicated, don't forget to run
ALTER SUBSCRIPTION ... REFRESH PUBLICATION;
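For example, assuming a replicated table named orders and a new column note:

-- 1) On the subscriber first (an extra column on the subscriber is harmless):
ALTER TABLE orders ADD COLUMN note text;

-- 2) Then on the publisher:
ALTER TABLE orders ADD COLUMN note text;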
Good morning!
I am using T-SQL. There is one table that was created years back by another employee, and I am not sure how it is populated. I checked the dependencies, and it just gives the name of the table. I want to know how the columns/data get populated in that table. For example, "Freight.load" is the table name.
Thank you in advance!
Setting up a trace will catch the inserts. Or you could check who has insert permissions on the table.
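A sketch of the permissions check, assuming the name really is schema Freight, table load:

-- List principals that have been granted INSERT on the table.
SELECT pr.name AS principal_name,
       pe.permission_name,
       pe.state_desc
FROM sys.database_permissions AS pe
JOIN sys.database_principals AS pr
  ON pe.grantee_principal_id = pr.principal_id
WHERE pe.class = 1                        -- object or column
  AND pe.major_id = OBJECT_ID('Freight.load')
  AND pe.permission_name = 'INSERT';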
I am using Eclipse, Java, and a Derby database. I want to experiment with changing values that rewrite one of the tables in the db. Before starting the change, I would like to copy that particular table (not in code) so that I can restore the original data if necessary. So far, googling and searching this site hasn't produced an answer. In Eclipse there is an option to export the db, but it calls it a connection, so I am not sure what would happen.
If you're not sure about how to connect to the database and issue sql statements, you will need to learn about JDBC. This is a good place to start.
If you're asking about the SQL, it's pretty straightforward. You can create a table based on a SELECT statement.
e.g.
create table table2 as select * from table1 with no data;
Derby is a little strange in this area. You must specify the with no data clause, and the created table will be empty. You can then issue an insert to populate the new table if you wish.
insert into table2 select * from table1;
The new table will not have indexes; you will need to create them if you want them. It might retain the primary key, so check that if you're testing against it. If it doesn't retain the primary key, create the primary key before inserting data into the table.
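For example, assuming table1 (and therefore table2) has an integer column id that should be the primary key:

-- Recreate the primary key on the copy before reloading data.
-- (Derby requires the column to be NOT NULL; a column copied from a primary key will be.)
ALTER TABLE table2 ADD CONSTRAINT table2_pk PRIMARY KEY (id);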
In Eclipse there is an option to export the db but it calls it a connection so I am not sure what would happen.
If what Eclipse does isn't clear to you, you can just as well zip your entire database directory (the Derby system directory, i.e. the location set by the derby.system.home property) into an archive. The database must not be running while you make the backup.
I'm working on an application that imports data from Access to SQL Server 2008. Currently, I'm using a stored procedure to import the data individually by record. I can't go with a bulk insert or anything like that because the data is inserted into two related tables...I have a bunch of fields that go into the Account table (first name, last name, etc.) and three fields that will each have a record in an Insurance table, linked back to the Account table by the auto-incrementing AccountID that's selected with SCOPE_IDENTITY in the stored procedure.
Performance isn't very good due to the number of round trips to the database from the application. For this and some other reasons I'm planning to instead use a staging table and import the data from there. Reading up on my options for approaching this, a cursor that executes the same insert stored procedure on the data in the staging table would make sense. However it appears that cursors are evil incarnate and should be avoided.
Is there any way to insert data into one table, retrieve the auto-generated IDs, then insert data for the same records into another table using the corresponding ID, in a set-based operation? Or is a cursor my only option here?
Look at the OUTPUT clause. You should be able to add it to your INSERT statement to do what you want.
BTW, if you need to output columns into the second table that weren't inserted into the first one, use MERGE instead of INSERT (as suggested in the comment to the original question), since its OUTPUT clause supports referencing other columns from the source table(s). Otherwise, sticking with INSERT is more straightforward, and it still gives you access to the inserted identity column.
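A sketch of the MERGE variant, with hypothetical staging, Account, and Insurance schemas; the ON 1 = 0 predicate never matches, so every staging row is inserted:

-- Capture each new AccountID alongside that row's insurance columns.
DECLARE @map TABLE (AccountID int, Ins1 varchar(100), Ins2 varchar(100), Ins3 varchar(100));

MERGE INTO dbo.Account AS tgt
USING dbo.AccountStaging AS src
   ON 1 = 0
WHEN NOT MATCHED THEN
    INSERT (FirstName, LastName)
    VALUES (src.FirstName, src.LastName)
OUTPUT inserted.AccountID, src.Ins1, src.Ins2, src.Ins3
  INTO @map (AccountID, Ins1, Ins2, Ins3);

-- Unpivot the three insurance columns into one row each, keyed by the new AccountID.
INSERT INTO dbo.Insurance (AccountID, InsuranceName)
SELECT m.AccountID, v.Ins
FROM @map AS m
CROSS APPLY (VALUES (m.Ins1), (m.Ins2), (m.Ins3)) AS v(Ins)
WHERE v.Ins IS NOT NULL;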
I experimented with inserting multiple records into related tables using data binding. So, try this!
Hopefully it is helpful. Follow this link, How to insert record into related tables, for more information.
I have a need to change the length of CHAR columns in tables in a PostgreSQL v7.4 database. This version did not support the ability to directly change the column type or size using the ALTER TABLE statement. So, directly altering a column from a CHAR(10) to CHAR(20) for instance isn't possible (yeah, I know, "use varchars", but that's not an option in my current circumstance). Anyone have any advice/tricks on how to best accomplish this? My initial thoughts:
-- Save the table's data in a new "save" table.
CREATE TABLE save_data AS SELECT * FROM table_to_change;
-- Drop the columns from the first column to be changed on down.
ALTER TABLE table_to_change DROP column_name1; -- for each column starting with the first one that needs to be modified
ALTER TABLE table_to_change DROP column_name2;
...
-- Add the columns back, using the new size for the CHAR column
ALTER TABLE table_to_change ADD column_name1 CHAR(new_size); -- for each column dropped above
ALTER TABLE table_to_change ADD column_name2...
-- Copy the data back from the "save" table
UPDATE table_to_change
SET column_name1=save_data.column_name1, -- for each column dropped/readded above
column_name2=save_data.column_name2,
...
FROM save_data
WHERE table_to_change.primary_key=save_data.primary_key;
Yuck! Hopefully there's a better way? Any suggestions appreciated. Thanks!
Not PostgreSQL, but in Oracle I have changed a column's type by:
Add a new column with a temporary name (e.g., TMP_COL) and the new data type (e.g., CHAR(20))
run an update query: UPDATE TBL SET TMP_COL = OLD_COL;
Drop OLD_COL
Rename TMP_COL to OLD_COL
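The same four steps work in PostgreSQL 7.4, which supports ADD COLUMN, DROP COLUMN, and RENAME COLUMN; a sketch, assuming a table tbl with a CHAR(10) column old_col to be widened to CHAR(20):

ALTER TABLE tbl ADD COLUMN tmp_col CHAR(20);        -- new column at the new size
UPDATE tbl SET tmp_col = old_col;                   -- copy the data across
ALTER TABLE tbl DROP COLUMN old_col;                -- remove the old column
ALTER TABLE tbl RENAME COLUMN tmp_col TO old_col;   -- take over the old name

Note that the rebuilt column ends up last in the column order.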
I would dump the table contents to a flat file with COPY, drop the table, recreate it with the correct column setup, and then reload (with COPY again).
http://www.postgresql.org/docs/7.4/static/sql-copy.html
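A sketch of that sequence with the placeholder names from the question (the file path and column sizes are assumptions):

-- Dump the table to a flat file (server-side COPY runs as superuser; \copy in psql does not).
COPY table_to_change TO '/tmp/table_to_change.dat';

-- Recreate the table with the widened CHAR columns.
DROP TABLE table_to_change;
CREATE TABLE table_to_change (
    primary_key  INTEGER PRIMARY KEY,
    column_name1 CHAR(20),
    column_name2 CHAR(20)
);

-- Reload the saved data.
COPY table_to_change FROM '/tmp/table_to_change.dat';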
Is it acceptable to have downtime while performing this operation? Obviously what I've just described requires making the table unusable for a period of time, how long depends on the data size and hardware you're working with.
Edit: But COPY is quite a bit faster than INSERTs and UPDATEs. According to the docs, you can make it even faster by using BINARY mode. BINARY makes the dump less compatible with other PostgreSQL installs, but you won't care about that, because you only want to load the data back into the same instance you dumped it from.
The best approach to your problem is to upgrade pg to something less archaic :)
Seriously. 7.4 is going to be removed from "supported versions" pretty soon, so I wouldn't wait for it to happen with 7.4 in production.