Searching by exact value in an existing varchar column vs. adding a new boolean column - PostgreSQL

Which solution is better for performance in a Postgres database:
search by exact value of an existing varchar column
create a new boolean column in the existing table and search by its value
The table contains a user id, the date of the planned job (reservation), the date the job started, the date the job finished, and the job name.
For now it has around 10k records and it grows by around 2k/year.
We want to allow jobs without an earlier reservation but still keep a log of the start/end dates, so when the backend detects this type of job it creates a record with a specific name.
There are two ways to see these records: users see only planned records, while admins see all of them.
We can do it either by excluding records with this specific name for users, or by creating a boolean column whose value is true if the job was created by the backend and false otherwise.
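At ~10k rows growing by 2k/year either approach is cheap; for reference, a minimal sketch of the boolean-column variant, assuming hypothetical table and column names:

-- "jobs" and "created_by_backend" are illustrative names
ALTER TABLE jobs ADD COLUMN created_by_backend boolean NOT NULL DEFAULT false;

-- Partial index covering the user-facing query (planned jobs only)
CREATE INDEX jobs_planned_idx ON jobs (user_id) WHERE NOT created_by_backend;

-- Users' view; admins simply drop the boolean filter
SELECT * FROM jobs WHERE user_id = 42 AND NOT created_by_backend;

One argument for the boolean beyond raw speed: it keeps working even if the special job name is ever renamed.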

Related

iSQLOutput - Update only Selected columns

My flow is simple and I am just reading a raw file into a SQL table.
At times the raw file contains data corresponding to existing records. I do not want to insert a new record in that case and would only want to update the existing record in the SQL table. The challenge is, there is a 'record creation date' column which I initialize at the time of record creation. The update operation overwrites that column too. I just want to avoid overwriting that column, while updating the other columns from the information coming from the raw file.
So far I am having no idea about how to do that. Could someone make a recommendation?
I defaulted the creation column to auto-populate in the SQL database itself, and I changed my flow to update only the remaining columns. The Talend job is now not touching that column. Problem solved.
Yet another reminder of 'Simplification is underrated'. :)
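For anyone doing this purely in SQL rather than in the Talend flow, the same idea can be sketched like this (PostgreSQL 9.5+ syntax; the table and column names are hypothetical):

-- The database owns the creation timestamp via the DEFAULT
CREATE TABLE IF NOT EXISTS raw_import (
    id            integer PRIMARY KEY,
    payload       text,
    creation_date timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP
);

-- Insert new rows; on conflict update everything except creation_date
INSERT INTO raw_import (id, payload)
VALUES (1, 'row from the raw file')
ON CONFLICT (id) DO UPDATE
SET payload = EXCLUDED.payload;  -- creation_date deliberately not listed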

Merging Data in FMPro with modification of ID values

We are attempting to merge multiple datasets created in FileMaker Pro.
These datasets have multiple tables, and each entry within each table has a local ID that is used to relate entries between tables. The local ID values were serially generated, but some of them are repeated between the different datasets, even though the records they refer to are not equivalent.
How can the ID values be updated in the data that is being imported to remove these overlaps without destroying the relationships that depend on them?
If you have access to the original database, you can try to migrate the IDs over to UUIDs or something else unique before exporting. This has to be done manually, either by cut/paste by hand or by a script.
Such a script would have to do the following (a rough SQL analogue is sketched after the list):
Loop through the parent records
For each record go to the related records
Generate a UUID with the Get(UUID) function and put it in a variable
Replace the parent ID in the related record with this variable
Return to the parent record and replace the record ID with the variable.
Move to the next record.
Repeat until all records have been updated.
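For comparison, the same remap can be sketched in SQL for engines that support it; FileMaker itself would use the scripted loop above, and "parent"/"child" are hypothetical stand-ins for the related tables:

-- PostgreSQL syntax; gen_random_uuid() needs PostgreSQL 13+ or the pgcrypto extension
ALTER TABLE parent ADD COLUMN new_id uuid NOT NULL DEFAULT gen_random_uuid();
ALTER TABLE child  ADD COLUMN new_parent_id uuid;

-- Point each child at its parent's freshly generated UUID
UPDATE child c
SET    new_parent_id = p.new_id
FROM   parent p
WHERE  c.parent_id = p.id;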

Select new or updated records from a table in PostgreSQL

Is it possible to get all the new or updated records from one table in PostgreSQL by a specified date?
Something like this:
Select NEW OR UPDATED FROM anyTable WHERE dt_insert or dt_update = '2015-01-01'
Thanks
You can only do this if you added a trigger-maintained field that keeps track of the last change time.
There is no internal row timestamp in PostgreSQL, so in the absence of a trigger-maintained timestamp for the row, there's no way to find rows changed/added after a certain time.
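Such a trigger is a few lines of plpgsql; a minimal sketch, reusing the dt_update name from the question:

-- Change-timestamp column maintained by the trigger below
ALTER TABLE anyTable ADD COLUMN dt_update timestamptz NOT NULL DEFAULT now();

CREATE OR REPLACE FUNCTION touch_dt_update() RETURNS trigger AS $$
BEGIN
    NEW.dt_update := now();
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER anytable_touch_dt_update
BEFORE UPDATE ON anyTable
FOR EACH ROW EXECUTE PROCEDURE touch_dt_update();

-- With that in place the question's query becomes:
SELECT * FROM anyTable WHERE dt_update >= '2015-01-01';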
PostgreSQL does have internal information on the transaction ID that wrote a row, stored in the xmin hidden column. There's no record of which transaction ID committed when, though, until PostgreSQL 9.5, which keeps track of this if and only if the new track_commit_timestamp setting is turned on. Additionally, PostgreSQL eventually clears the creator transaction ID information from a tuple because it re-uses transaction IDs, so it only works for fairly recent transactions.
In other words: it's kind-of possible in a rough way, if you understand the innards of the database, but should really only be used for forensic and recovery purposes.
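For what it's worth, the hidden column itself is queryable if you ever need it for such forensics:

-- xmin is the ID of the transaction that wrote each row version
SELECT xmin, * FROM anyTable;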

Insert record in table if it does not exist in iPhone app

I am obtaining a JSON array from a URL and inserting the data into a table. Since the contents of the URL are subject to change, I want to make a second connection to the URL, check for updates, and insert new records into my table using sqlite3.
The issues that I face are:
1) My table doesn't have a primary key
2) The URL lists the changes on the same day. Hence, if I run my app multiple times, I get duplicate entries when I insert values into my database. I want to check for and remove entries duplicated within the same day. The problem could be solved by adding a constraint, but since the URL itself has duplicated values, I find that difficult.
The only way I can see to do it, if you have no primary key or anything unique to each record, is to go through the new entries as they arrive and, for each one, check whether the exact same data already exists in the database. If it doesn't, you add it; if it does, you skip over it.
You could even create a unique key yourself for each entry, as a concatenation of every column of the table. That way you can quickly check whether an entry already exists in the database.
I see two possibilities depending on your setup:
You have a column set up as UNIQUE (this can be through a PRIMARY KEY or not). In this case, you can use the ON CONFLICT clause:
http://www.sqlite.org/lang_conflict.html
If you find this construct a little confusing, you can instead use "INSERT OR REPLACE" or "INSERT OR IGNORE" as described here:
http://www.sqlite.org/lang_insert.html
You do not have a column set up as UNIQUE. In this case, you will need to SELECT first to verify for duplicate data, and based on the result INSERT, UPDATE, or do nothing.
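Combining this with the concatenated-key idea above, the whole check collapses into one statement; a SQLite sketch with hypothetical table and column names:

-- row_key concatenates the columns and acts as the manufactured unique key
CREATE TABLE IF NOT EXISTS items (
    row_key TEXT UNIQUE,
    payload TEXT
);

-- Rows whose row_key already exists are silently skipped
INSERT OR IGNORE INTO items (row_key, payload)
VALUES ('2015-06-01|job42|done', 'raw json for this entry');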
A more common & robust way to handle this is to associate a timestamp with each data item on the server. When your app interrogates the server it provides the timestamp corresponding to the last time it synced. The server then queries its database and returns all values that are timestamped later than the timestamp provided by the app. Then it also returns a new timestamp value for the app to store, to use on the next sync.
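Concretely, the server-side half of that scheme is a single query; a sketch with hypothetical names, where the literal stands in for the timestamp the app sent:

-- Return everything changed since the app's last sync
SELECT * FROM items WHERE updated_at > '2015-06-01 00:00:00' ORDER BY updated_at;
-- The max(updated_at) of this result set becomes the app's new stored timestamp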

Limited number of records in table SQLite

Q 1) I want to insert a limited number of records in my db table, let's say 10. If I add an 11th record, the oldest record should be deleted and the 11th added as a new record.
How can I know which is the oldest record in my table, so that I can delete it and add the new record?
Q 2) I want to insert a maximum of 2 records in my table. My first record is a default record; if the user doesn't provide a second record, I will use my default record. My second record is changeable: the user enters it. Now if the user wants to change the second record, how can I change it?
sql = "update abc set name = ? where id = ?", newName, existingId
Like the above query? But how can I know existingId in this case?
First: add a date field with default value CURRENT_TIMESTAMP and then delete from mytable where date = (select min(date) from mytable), or something like that. But you'd better use some other routine than SQLite.
Second: if you've got only two rows and want to change them, you can, surely, hardcode your ids, but it would be an ugly solution. You can use config files or something like that, or add a boolean default column to the table and distinguish the rows by its value.
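Putting the first answer together, a minimal SQLite sketch for Q1 (the table mirrors the question's abc; the column names are illustrative):

-- AUTOINCREMENT makes id strictly increasing, so the smallest id is the oldest row
CREATE TABLE IF NOT EXISTS abc (
    id         INTEGER PRIMARY KEY AUTOINCREMENT,
    name       TEXT,
    created_at TEXT NOT NULL DEFAULT CURRENT_TIMESTAMP
);

INSERT INTO abc (name) VALUES ('newest record');

-- Keep only the 10 newest rows; ordering by id avoids ties in created_at
DELETE FROM abc
WHERE id NOT IN (SELECT id FROM abc ORDER BY id DESC LIMIT 10);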