I'm currently working on a PostgreSQL DB structure and I'm having a hard time figuring out how to solve a problem.
The DB will be recording data regarding architectural objects.
The main table, "object", has attributes that describe the object, such as type, location, etc.
One of these attributes is a serial named object_num.
Another table is called "code" which contains a code made of three letters corresponding to the town where the mission is conducted.
Example:
I'm working on an architectural inventory for the city of Paris. The code_name will be PRS, and the first entry (i.e. the first architectural entity: house, bridge, etc.) will be associated with object_num 001.
So PRS001 will be a unique identifier referring to this specific architectural entity.
As things go on, I might end up with quite a few entries, say up to PRS745.
Say this mission isn't finished yet but a new one starts for the city of Bordeaux, where BDX is going to identify the inventory. It would be great if the identifier for the first entry were BDX001 rather than BDX746 (plain auto-increment).
Along the same lines, it would also be nice if, when going back to the Paris mission after a few Bordeaux records (say up to BDX211), the next value continued from the Paris counter (PRS745) rather than from the Bordeaux one (BDX211).
So, is it possible to reset the value of a serial to 1 when a new code is used?
And is it possible to resume the serial increment from the last value used for a specific code?
I guess you can perform this task with constraints and checks, but I'm not really familiar with these and am a bit lost...
Thanks for your help,
Yrkoutsk
You could create a separate sequence for each code_name and grab your auto-increment based on the code_name:
CREATE SEQUENCE PRS START 1;
CREATE SEQUENCE BDX START 1;

-- for a Paris record, pull the next number from the PRS sequence
-- and pad it to three digits:
insert into your_table (object_num, code_name, other_data)
values ('PRS' || lpad(nextval('PRS')::text, 3, '0'), 'PRS', 'other data');
You will have to create a new sequence every time you add a new code_name, otherwise the db will throw an error when you try to access the nonexistent sequence.
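If you'd rather not do that by hand, one possible sketch is to create the sequence from a trigger on the code table and build the identifier from a trigger on the object table (this assumes object_num is a text column; the sequence, function, and trigger names below are made up):

CREATE OR REPLACE FUNCTION create_code_sequence() RETURNS trigger AS $$
BEGIN
    -- one sequence per code, e.g. prs_seq for PRS
    EXECUTE format('CREATE SEQUENCE %I START 1', lower(NEW.code_name) || '_seq');
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER make_code_sequence
    AFTER INSERT ON code
    FOR EACH ROW EXECUTE PROCEDURE create_code_sequence();

CREATE OR REPLACE FUNCTION set_object_num() RETURNS trigger AS $$
BEGIN
    -- build identifiers like PRS001 from the per-code sequence
    NEW.object_num := NEW.code_name ||
        lpad(nextval(lower(NEW.code_name) || '_seq')::text, 3, '0');
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER assign_object_num
    BEFORE INSERT ON object
    FOR EACH ROW EXECUTE PROCEDURE set_object_num();

With that in place, inserting a row into code sets up its sequence, and inserts into object get their PRS001-style identifier automatically.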
I can't think of a better title, so feel free to make a suggestion once you understand the issue.
I was given a table to work with that I need to call from another table:
Name
Month
Type
Value
For each record in the main table I need to pull the one "Value" that corresponds to it. Which value to pull is determined by all three of the other fields. So for example, if a record in the main table is:
Name: Google
Date: 3/17/2016
Type: M
Then I need to pull the value for the record in the other table where the Name is "Google", the month is "3", and the type is "M".
I was able to do this successfully (if slowly) using an ExecuteSQL command in a calculation field, with a ton of nested If statements for the names (I have yet to figure out how to pass the record's data directly into the ExecuteSQL statement; it breaks when I try). I would prefer to just grab the data directly.
I can't switch over to the other layout because I need to see all of the records at once. I can't do a simple relationship because there isn't a real relationship; it's like there are three foreign keys working in tandem, and I only know how to use one to call the data.
Any idea on how to do this more simply?
Some ideas I've had, but I'm not sure if they will work:
Using a calculation field as a related field to dynamically point to the row by code (concatenate the three relevant fields into a type of code). Not sure if you can connect two tables by a calculation field.
Doing that same thing when calling the data into the table in the first place, adding a code to create a single primary key.
Here are my relationships:
I can't do a simple relationship because there isn't a real
relationship, it's like there are three foreign keys working in tandem
and I only know how to use one to call the data.
Simply define a relationship with three predicates - i.e. three pairs of match fields.
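For what it's worth, the ExecuteSQL route you tried can also work without nested Ifs: field values can be passed through the ? placeholders instead of being built into the query string. A sketch, where the table occurrence and field names are only placeholders for your own:

ExecuteSQL ( "SELECT \"Value\" FROM ValueTable WHERE \"Name\" = ? AND \"Month\" = ? AND \"Type\" = ?" ; "" ; "" ; MainTable::Name ; Month ( MainTable::Date ) ; MainTable::Type )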
I have a table called transactions. Within that is a field called ipn_type. I would like to create separate table occurrences for the different ipn types I may have.
For example, one value for ipn_type is "dispute". In the past I would create a global field called "rel_dispute" and I would populate that with the value of "dispute". Then I could create a new table occurrence of the transactions table, and make a relationship based on transactions::ipn_type = transactions::rel_dispute. This way only the dispute records would show up in my new table occurrence.
Not long ago, somebody pointed out to me that this is no longer necessary, and there is a simpler way to setup such a relationship to create a new table occurrence. I can't for the life of me remember how that was done, though.
Any information on this would be greatly appreciated. Thanks!
To show a found set of only one type, you must either perform a find or use the Go to Related Record script step to show only related records. What you describe as your previous setup fits the latter.
The simpler way is to perform a find - either on demand, or by a script triggered OnLayoutEnter.
The new 'easy' way is probably:
using one base relationship only and
filtering only the displayed portal by type. This can be done with a global field or a global variable containing the current display type (e.g. the filter calculation sketched below). Multiple portals with different filter conditions are possible as well.
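For example, the portal filter calculation could be as simple as (the global variable name is just illustrative):

transactions::ipn_type = $$displayType

with $$displayType set to "dispute" (or whichever type the layout should currently show).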
~jens
I'm currently working on a project that involves keeping track of users and their actions in my database (PostgreSQL as the RDBMS), and I have run into an issue when trying to perform COUNT(*) on occurrences of each user. What I want is to be able to count, efficiently, the number of times each user appears across all records, and also to look at those counts over a particular date range.
So, the problem is how to count the total number of times a user appears in the table's contents, and how to count that total over a date range.
What I've tried
As you might know, Postgres doesn't handle COUNT(*) through indexes very well, so we have to consider other ways to reduce the number of records it looks at in order to speed up the query. So my first approach is to create a table that keeps track of the number of times a user has a log message associated with them, and on what day (similar to the idea behind a materialized view, but I don't want to continually refresh a materialized view with my count query). Here is what I've come up with:
-- "user" has to be double-quoted: it is a reserved word in PostgreSQL
CREATE TABLE users_counts("user" varchar(65536), counter int default 0, day date);

CREATE RULE inc_user_date_count
AS ON INSERT TO main_table
DO ALSO UPDATE users_counts SET counter = counter + 1
    WHERE users_counts."user" = NEW."user" AND day = NEW.date_::date;
What this does is, every time a new record is inserted into my 'main_table', we update the users_counts table, incrementing the counter of the row whose day equals the new record's date and whose user name matches.
NOTE: the date_ column in 'main_table' is a timestamp, so I must cast the new record's date_ to a DATE type.
The problem is, if the user column value doesn't already exist in my new table 'users_counts' for the current day, then nothing is updated.
Here is my question:
How do I write the rule so that it checks whether a row exists for the user on the current day; if so, increment that counter, otherwise insert a new row with the user, the day, and a counter of 1?
I would also like to know whether my approach makes sense, or whether there are ideas I'm missing that I just haven't thought about. As my database grows, counting becomes increasingly inefficient, so I want to avoid any performance bottlenecks.
EDIT 1: I was able to figure this out by creating a separate RULE, but I'm not sure if it is correct:
CREATE RULE test_insert AS ON INSERT TO main_table
DO ALSO INSERT INTO users_counts("user", counter, day)
    SELECT NEW."user", 1, NEW.date_::date
    WHERE NOT EXISTS (SELECT 1 FROM users_counts
                      WHERE "user" = NEW."user" AND day = NEW.date_::date);
Basically, the insert happens only if the user doesn't already exist in my cached table users_counts, and the first rule above updates the count.
What I'm unsure of is how I know which rule is called first, the update rule or the insert. And there must be a better way; how do I combine the two rules? Can this be done with a function?
It is true that PostgreSQL is notoriously slow when it comes to count(*) queries. However, if you have a WHERE clause that limits the number of entries, the query will be much faster. If you are using PostgreSQL 9.2 or newer, such a query can be just as fast as it is in MySQL thanks to index-only scans, which were added in 9.2, but it's best to EXPLAIN ANALYZE your query to make sure.
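For example, a per-user count over a date range could look like this (column names are taken from the question, with "user" quoted because it is a reserved word; the date range and index are only illustrative):

-- an index on (date_, "user") gives 9.2+ a chance to use an index-only scan;
-- verify with EXPLAIN ANALYZE
SELECT "user", count(*)
FROM main_table
WHERE date_ >= '2016-01-01' AND date_ < '2016-02-01'
GROUP BY "user";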
Does my solution make sense?
Very much so, provided that your EXPLAIN ANALYZE shows that index-only scans are not being used. Trigger-based solutions like the one you have adopted find wide usage. But, as you have realized, the problem of the initial state arises (whether to do an update or an insert).
which rule is called first
Multiple rules on the same table and same event type are applied in
alphabetical name order.
from http://www.postgresql.org/docs/9.1/static/sql-createrule.html
The same applies to triggers. If you want a particular rule to be executed first, change its name so that it comes earlier in the alphabetical order.
how do I combine the two rules?
One solution is to modify your rule to perform an upsert (look right at the bottom of that page for a sample upsert). The other is to populate the counter table with initial values. The trick is to create the trigger at the same time to avoid errors. This blog post explains it really well.
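For reference, a sketch of such an upsert, modeled on the example in the PostgreSQL documentation; it assumes a unique index on users_counts("user", day) and would be called from a single rule or trigger on main_table instead of the two separate rules:

CREATE OR REPLACE FUNCTION upsert_user_count(p_user varchar, p_day date)
RETURNS void AS $$
BEGIN
    LOOP
        -- first try to bump an existing counter
        UPDATE users_counts SET counter = counter + 1
            WHERE "user" = p_user AND day = p_day;
        IF FOUND THEN
            RETURN;
        END IF;
        -- no row yet, so try to insert one; if another session inserts
        -- it concurrently we get unique_violation and loop to retry
        BEGIN
            INSERT INTO users_counts("user", counter, day)
            VALUES (p_user, 1, p_day);
            RETURN;
        EXCEPTION WHEN unique_violation THEN
            -- do nothing, and loop to try the UPDATE again
        END;
    END LOOP;
END;
$$ LANGUAGE plpgsql;

On PostgreSQL 9.5 and later the same effect can be achieved more simply with INSERT ... ON CONFLICT ... DO UPDATE.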
While the initial setup will be slow, each individual insert will probably be faster. The two opposing factors are the slowness of a WHERE NOT EXISTS query vs. the overhead of catching an exception.
Tip: A block containing an EXCEPTION clause is significantly more
expensive to enter and exit than a block without one. Therefore, don't
use EXCEPTION without need.
Source: the PostgreSQL documentation page linked above.
I have designed a simple database to keep track of company contacts. Now, I am building a form to allow employees to add contacts to the database.
On the form itself, I have all the columns except the primary key (contactID) tied to a text box. I would like the contactID value to be (the total number of entered contacts + 1) when the Add button is clicked. Basically, the first contact entered will have a contactID of 1 (0 + 1 = 1). Maybe the COUNT command factors in?
So, I am looking for assistance with what code I should place in the .Click event. Perhaps it would help to know how similar FoxPro is to SQL.
Thanks
The method you recommend for assigning ContactIDs is not a good idea. If two people are using the application at the same time, they could each create a record with the same ContactID.
My recommendation is that you use VFP's AutoIncrementing Integer capability. That is, set the relevant column to be Integer (AutoInc) in the Table Designer. Then, each new row gets the next available value, but you don't have to do any work to make it happen.
There are various ways to do this. Probably the simplest is to attempt to lock the table with flock() when saving, and if successful do:
CALCULATE MAX(id_field) TO lnMax
Then when inserting your new record use lnMax+1 as the id_field value. Don't forget to
UNLOCK ALL
... after saving. You'll want to ensure that 'id_field' has an index tag on it, and that you handle the case where someone else might have the table locked.
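Putting that together, a rough sketch of the Click code (the table alias, field, and text box names are only placeholders, and it doesn't handle an empty table or retry when the lock fails):

IF FLOCK("contacts")
    CALCULATE MAX(contactID) TO lnMax    && highest existing id; an index tag on contactID keeps this fast
    INSERT INTO contacts (contactID, name) ;
        VALUES (lnMax + 1, thisform.txtName.Value)
    UNLOCK ALL                           && release the lock once the record is saved
ELSE
    MESSAGEBOX("The contact table is currently locked - please try again.")
ENDIF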
You can also do it more 'automagically' with a stored procedure.
I am obtaining a JSON array from a URL and inserting the data into a table. Since the contents of the URL are subject to change, I want to make a second connection to the URL, check for updates, and insert the new records into my table using sqlite3.
The issues that I face are:
1) My table doesn't have a primary key
2) The URL lists the changes for the same day. Hence, if I run my app multiple times, I get duplicate entries when I insert the values into my database. I want some check so that duplicated entries for a given day are not kept. The problem could be solved by adding a constraint, but since the URL itself contains duplicated values, I find it difficult.
The only way I can see to do it, if you have no primary key or anything else unique to each record, is that when you get your new data in, you go through the new entries and for each one check whether the exact same data already exists in the database. If it doesn't, you add it; if it does, you skip it.
You could even create a unique key yourself for each entry, as a concatenation of every column of the table. That way you can quickly check whether the entry already exists in the database.
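For example (table and column names here are hypothetical), a composite UNIQUE index across all the columns plays the role of that concatenated key without storing an extra field, and lets the database reject the duplicates for you:

CREATE UNIQUE INDEX IF NOT EXISTS entries_dedup
    ON entries (day, name, value);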
I see two possibilities depending on your setup:
You have a column set up as UNIQUE (whether through a PRIMARY KEY or not). In this case, you can use the ON CONFLICT clause:
http://www.sqlite.org/lang_conflict.html
If you find this construct a little confusing, you can instead use "INSERT OR REPLACE" or "INSERT OR IGNORE" as described here:
http://www.sqlite.org/lang_insert.html
You do not have a column set up as UNIQUE. In this case, you will need to SELECT first to check for duplicate data, and based on the result INSERT, UPDATE, or do nothing.
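A minimal sketch of the first option (all names are made up): once a UNIQUE constraint is in place, INSERT OR IGNORE silently skips rows that already exist, while INSERT OR REPLACE would overwrite them instead:

CREATE TABLE IF NOT EXISTS entries (
    day   TEXT,
    name  TEXT,
    value TEXT,
    UNIQUE (day, name, value)
);

INSERT OR IGNORE INTO entries (day, name, value)
VALUES ('2016-03-17', 'example', '42');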
A more common and robust way to handle this is to associate a timestamp with each data item on the server. When your app interrogates the server, it provides the timestamp of its last sync. The server then queries its database and returns all values timestamped later than the timestamp provided by the app, and it also returns a new timestamp value for the app to store and use on the next sync.
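On the server side, that query can be as simple as the following (hypothetical table and column names, with :last_sync bound to the timestamp the app sent):

SELECT * FROM items WHERE updated_at > :last_sync ORDER BY updated_at;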