I'm new to PostgreSQL.
I was wondering: how do I make a column a drop-down list?
So I've got a table called Student. There's a column in there called "student_type", which indicates whether the student is a part-time student, a full-time student, or a sandwich-course student.
So I want to make "student_type" a drop-down list with 3 choices: "part time", "full time", and "sandwich".
How do I do this?
(I'm using pgAdmin to create the database, by the way.)
A drop-down is a client-side thing and should be dealt with accordingly. But as far as the relational database is concerned, there should be a student_type relation with id and type columns, which you would query like this:
select st.id, st.type
from student_type st
inner join student s on s.type_id = st.id
group by st.id, st.type
order by st.type
The inner join is to make sure you don't show an option that does not exist in the student table and would therefore produce an empty result if chosen. On the client side, the id should be the option value and the type the option text.
If there is no student_type relation as a consequence of bad db design or if you are only allowed to query a denormalized view, you can still use the student relation:
select distinct student_type
from student
order by student_type
In this case the student_type will be both the option value and the option text.
I think you used MS Access before. It is not possible to create such a drop-down list using pgAdmin, but it is possible to limit the acceptable values in the "student_type" field in a few different ways. Still, this will not be a drop-down or combo box field.
For example (a sketch of the constraint and domain options follows the list):
You can use a dictionary table and then a foreign key,
You can use a constraint to check the inserted value,
You can use a domain (the field is of your domain's type, and the domain is based on a proper constraint),
You can use a trigger (before insert or update),
etc.
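As a rough sketch of the constraint and domain options (all names here are illustrative, not from your schema; the two statements below are alternatives, pick one):

-- Option: a CHECK constraint on the existing column
ALTER TABLE student
    ADD CONSTRAINT student_type_check
    CHECK (student_type IN ('part time', 'full time', 'sandwich'));

-- Option: a domain carrying the same constraint
CREATE DOMAIN student_type_dom AS text
    CHECK (VALUE IN ('part time', 'full time', 'sandwich'));

ALTER TABLE student
    ALTER COLUMN student_type TYPE student_type_dom;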
To put this a different way, Postgres is a database, not a user-interface
library. It doesn't have dropdown lists, text boxes, labels and all that. It is not directly usable by a human that way.
What PG can do is provide a stable source for data used by such widgets. Each widget library has its own rules for how to bind (that is, connect) to a data source. Some let you directly connect the visual component, in this case, the dropdown widget, to a database, or better, a database cursor.
With Postgres, you can create a cursor, which is an in-memory window into the result of a SELECT query of some kind, that your widget or favorite programming language binds to. In your example, the cursor or widget binding would be to the result of a "SELECT student_type FROM student_type" query.
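For instance, a minimal sketch of such a cursor in psql (assuming a student_type lookup table like the one suggested below; the cursor name is made up):

BEGIN;

DECLARE student_type_cur CURSOR FOR
    SELECT student_type
    FROM student_type
    ORDER BY student_type;

-- the widget library (or your own code) fetches rows from the cursor as needed
FETCH ALL FROM student_type_cur;

COMMIT;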
As an aside, the values for "student_type" should not be stored only in
"student". You should have a normalized table structure, which here would give you a "student_type" table that holds the three choices, one per row. (There
are other ways to do this.) The values you specified would be the primary key column. (Alternatively, you'd have those values in a UNIQUE column with a
surrogate key as the primary key, but that's probably overkill for a simple
lookup table.) The "student.student_type" column would then be a foreign key
into the "student_type.student_type" column.
In a database project, I have a users table which has a computed value avg_service_rating. And there is another table called services with all the services associated with the user and the ratings for each service. Is there a computationally light way in which I can maintain the avg_service_rating without updating it every time an INSERT is done on the services table? Perhaps something like a generated column but with a function call instead? Any direct advice or links to resources will be greatly appreciated as well!
CREATE TABLE users (
    username VARCHAR PRIMARY KEY,
    avg_service_ratings NUMERIC, -- is it possible to store some function call for this column?
    ...
);
CREATE TABLE service (
    username VARCHAR NOT NULL REFERENCES users (username),
    service_date DATE NOT NULL,
    rating INTEGER,
    PRIMARY KEY (username, service_date)
);
If the values should be consistent, a generated column won't fit the bill, since it is only recomputed if the row itself is modified.
I see two solutions:
Have a trigger on the service table that updates the users table whenever a rating is added or modified (a sketch follows below). That slows down data modifications, but not your queries.
Turn users into a view. The original users table would be renamed, and it loses the avg_service_rating column, which is computed on the fly by the view.
To make the illusion perfect, create an INSTEAD OF INSERT OR UPDATE OR DELETE trigger on the view that modifies the underlying table. Then your application does not need to be changed.
With this solution you pay a certain price both on SELECT and on data modifications, but the latter price will be lower, since you don't have to modify two tables (and users might receive fewer modifications than services). An added advantage is that you avoid data duplication.
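A rough sketch of the trigger option, using the table and column names from the question (untested against your schema, so treat it as a starting point):

CREATE OR REPLACE FUNCTION refresh_avg_service_rating() RETURNS trigger AS $$
BEGIN
    UPDATE users
    SET avg_service_ratings = (
        SELECT avg(rating)
        FROM service
        WHERE username = COALESCE(NEW.username, OLD.username)
    )
    WHERE username = COALESCE(NEW.username, OLD.username);
    RETURN NULL;  -- return value is ignored for AFTER row triggers
END;
$$ LANGUAGE plpgsql;

-- PostgreSQL 11+ syntax; use EXECUTE PROCEDURE on older versions
CREATE TRIGGER service_rating_refresh
AFTER INSERT OR UPDATE OR DELETE ON service
FOR EACH ROW EXECUTE FUNCTION refresh_avg_service_rating();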
A generated column would only be useful if the source data is in the same table row.
Otherwise your options are a view (where you could call a function or calculate the value via a subquery), or an AFTER UPDATE OR INSERT trigger on the service table, which updates users.avg_service_ratings. With a trigger, if you get a lot of updates on the service table you'd need to consider possible concurrency issues, but it would mean the figure doesn't need to be calculated every time a row in the users table is accessed.
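A minimal sketch of the view variant (the view name is mine; table and column names are from the question):

CREATE VIEW users_with_rating AS
SELECT u.username,
       (SELECT avg(s.rating)
        FROM service s
        WHERE s.username = u.username) AS avg_service_ratings
FROM users u;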
I have a table from which I want to UPSERT into another; when I try to launch the query, I get the "cannot affect row a second time" error. So I tried to see if I have some duplicates in my first table regarding the field with the UNIQUE constraint, and I have none. I must be missing something, but since I cannot figure out what (and my query is a bit complex because it includes some JOINs), here is the query; the field with the UNIQUE constraint is "identifiant_immeuble":
with upd(a,b,c,d,e,f,g,h,i,j,k) as(
select id_parcelle, batimentimmeuble,etatimmeuble,nb_loc_hab_ma,nb_loc_hab_ap,nb_loc_pro, dossier.id_dossier, adresse.id_adresse, zapms.geom, 1, batimentimmeuble2
from public.zapms
left join geo_pays_gex.dossier on dossier.designation_siea=zapms.id_dossier
left join geo_pays_gex.adresse on adresse.id_voie=(select id_voie from geo_pays_gex.voie where (voie.designation=zapms.nom_voie or voie.nom_quartier=zapms.nom_quartier) and voie.rivoli=lpad(zapms.rivoli,4,'0'))
and adresse.num_voie=zapms.num_voie
and adresse.insee=zapms.insee_commune::integer
)
insert into geo_pays_gex.bal2(identifiant_immeuble, batimentimmeuble, id_etat_addr, nb_loc_hab_ma, nb_loc_hab_ap, nb_loc_pro, id_dossier, id_adresse, geom, raccordement, batimentimmeuble2)
select a,b,c,d,e,f,g,h,i,j,k from upd
on conflict (identifiant_immeuble) do update
set batimentimmeuble=excluded.batimentimmeuble, id_etat_addr=excluded.id_etat_addr, nb_loc_hab_ma=excluded.nb_loc_hab_ma, nb_loc_hab_ap=excluded.nb_loc_hab_ap, nb_loc_pro=excluded.nb_loc_pro,
id_dossier=excluded.id_dossier, id_adresse=excluded.id_adresse,geom=excluded.geom, raccordement=1, batimentimmeuble2=excluded.batimentimmeuble2
;
As you can see, I use several intermediary tables in this query: one storing the street names (voie), one related to it storing the addresses (adresse, basically numbers related through a foreign key to the street names table), and another storing some other data related to the project names (dossier).
I don't know what other information I could give to help find an answer; I guess it is better that I do not share the actual content of my tables since it may touch on some privacy regulations or such.
Thanks for your attention.
EDIT: I found a workaround by deleting from the bal2 table the entries present in the zapms table, like so:
delete from geo_pays_gex.bal2 where bal2.identifiant_immeuble in (select id_parcelle from zapms);
It is not entirely satisfying though, since I would have preferred to keep track of the data creator and the date of creation, as well as of the fact that the data has been modified (I have some fields to store this information), and here I simply erase all this history... And I have another table with the primary key of the bal2 table as a foreign key. I am still creating the DB so I can afford to truncate this table, but in production it wouldn't be possible since I would lose some data.
I have to create a database for a venue.
A client books a room for an event. The problem is that the clients don't always provide their name, their email, and their phone number. Most of the time it's either name and email or name and phone. It's rarely all 3 but it happens.
I need to store each of these in their respective attribute (name, email, phone). But the way they give me their info, I have a lot of null values.
What can I do with these nulls? I've been told that it's better to not have nulls. I also need to normalize my table after that.
SQL treats NULL specially per its version of 3VL (3-valued logic). Normalization & other relational theory does not. However, we can translate SQL designs into relational designs and back. (Assume no duplicate rows here.)
Normalization happens to relations and is defined in terms of operators that don't treat NULL specially. The term "normalization" has two common distinct meanings: putting a table into "1NF" and putting it into "higher NFs (normal forms)". NULL doesn't affect "normalization to 1NF". "Normalization to higher NFs" replaces a table by smaller tables that natural join back to it.
For purposes of normalization you could treat NULL like a value that is allowed in the domain of a nullable column in addition to the values of its SQL type. If our SQL tables have no NULLs then we can interpret them as relations, SQL join as relational join, etc. But if you decompose where a nullable column was shared between components, then realize that to reconstruct the original in SQL you have to SQL join on same-named columns being equal or both NULL. And you won't want such CKs (candidate keys) in an SQL database. E.g. you can't declare one as an SQL PK (primary key), because that means UNIQUE NOT NULL. E.g. a UNIQUE constraint involving a nullable column allows multiple rows that have a NULL in that column, even if the rows have the same values in every column. E.g. NULLs in SQL FKs cause them to be satisfied (in various ways per MATCH mode), not to fail from not appearing in the referenced table. (But DBMSs idiosyncratically differ from standard SQL.)
Unfortunately decomposition might lead to a table with all CKs containing NULL, so that we have nothing to declare as SQL PK or UNIQUE NOT NULL. The only sure solution is to convert to a NULL-free design. After then normalizing we might want to reintroduce some nullability in the components.
In practice, we manage to design tables so that there is always a set of NULL-free columns that we can declare as CK, via SQL PK or UNIQUE NOT NULL. Then we can get rid of a nullable column by dropping it from the table and adding a table with that column and the columns of some NULL-free CK: If the column is non-NULL for a row in the old design then a row with its CK subrow and column value go in the added table; otherwise it is NULL in the old design and no corresponding row is in the added table. (The original table is a natural left join of the new ones.) Of course, we also have to modify queries from the old design to the new design.
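A rough sketch of that decomposition for the booking example (all names are mine, purely illustrative):

-- Old design: client(client_id PK, name, email NULL, phone NULL)
CREATE TABLE client (
    client_id integer PRIMARY KEY,
    name      text NOT NULL
);

CREATE TABLE client_email (
    client_id integer PRIMARY KEY REFERENCES client (client_id),
    email     text NOT NULL
);

CREATE TABLE client_phone (
    client_id integer PRIMARY KEY REFERENCES client (client_id),
    phone     text NOT NULL
);

-- The old table is the natural left join of the new ones:
SELECT c.client_id, c.name, e.email, p.phone
FROM client c
LEFT JOIN client_email e USING (client_id)
LEFT JOIN client_phone p USING (client_id);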
We can always avoid NULLs via a design that adds a boolean column for each old nullable column and makes the old column NOT NULL. The new column says, for a given row, whether the old column was NULL in the old design; when it is true, the old column holds some single value that we pick for that purpose for that type throughout the database. Of course, we also have to modify queries from the old design to the new design.
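A sketch of that variant, again with made-up names and the empty string as the value picked for "unknown":

CREATE TABLE client (
    client_id     integer PRIMARY KEY,
    name          text NOT NULL,
    email         text NOT NULL DEFAULT '',
    email_unknown boolean NOT NULL DEFAULT true,  -- true: email would have been NULL; the column then holds the placeholder ''
    phone         text NOT NULL DEFAULT '',
    phone_unknown boolean NOT NULL DEFAULT true
);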
Whether you want to avoid NULL is a separate question. Your database might in some way be "better" or "worse" for your application with either design. The idea behind avoiding NULL is that it complicates the meanings of queries, hence complicates querying, in a perverse way, compared to the complication of more joins from more NULL-free tables. (That perversity is typically managed by removing NULLs in query expressions as close to where they appear as possible.)
PS Many SQL terms including PK & FK differ from the relational terms. SQL PK means something more like superkey; SQL FK means something more like foreign superkey; but it doesn't even make sense to talk about a "superkey" in SQL:
Because of the resemblance of SQL tables to relations, terms that involve relations get sloppily applied to tables. But although you can borrow terms and give them SQL meanings--value, table, FD (functional dependency), superkey, CK (candidate key), PK (primary key), FK (foreign key), join, and, predicate, NF (normal form), normalize, 1NF, etc--you can't just substitute those SQL meanings for those words in RM definitions, theorems or algorithms and get something sensible or true. Moreover SQL presentations of RM notions almost never actually tell you how to soundly apply RM notions to an SQL database. They just parrot RM presentations, oblivious to whether their use of SQL meanings for terms makes things nonsensical or invalid.
First of all, there is nothing wrong with nulls in a database, and they are made exactly for this purpose, where attributes are unknown. Avoiding nulls in a database is advice that makes little sense in my opinion.
So you'd have three (or four) values - name (first/last), email address, and phone number - identifying a client. You can have them in a table and add a constraint to it ensuring that at least one of these columns is always filled, e.g. coalesce(name, email, phone) is not null. This makes sure a booking cannot be done completely anonymously.
From your explanation it is not clear whether you will always have the same information from a client. Can it happen that a client books a room giving their name and later books another room giving their phone instead? Or will the client be looked up in the database, their name found, and the two bookings assigned to them? In the latter case you can have a clients table holding all the information you have got so far, and the booking will contain the client record ID as a reference to this data. In the former case you may not want to have a clients table, because you cannot tell whether two clients (Jane Miller and mrsx@gmail.com) are really two different clients or actually only one.
The tables I see so far:
room (room_id, ...)
venue (venue_id, ...)
client (client_id, name, email, phone)
booking (venue_id, room_id, client_id, ...)
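A minimal sketch of the client table with such a constraint (column types are assumptions):

CREATE TABLE client (
    client_id serial PRIMARY KEY,
    name      text,
    email     text,
    phone     text,
    CONSTRAINT client_not_anonymous
        CHECK (coalesce(name, email, phone) IS NOT NULL)
);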
I'm creating a schema for a database using MySQL's workbench. One of my tables contains fields for a personId, as well as a national id number if they have one (which they may not).
The personId field is the one used as a unique identifier throughout the schema, so I've ticked the "PK" and "NN" options for it. Now I'd like to be able to ensure that the system won't allow a new insert with a different personId if it has the same national id as an entity that already exists. However, national ids are not primary keys and may in fact be null.
I've been looking at the 'UQ' option, but I can't find clear documentation on what it actually does. I'm worried it'll create the numbers automatically when I actually want them to be inserted by a user or left null. Does anyone know?
UQ tags a field as a unique key. This enforces uniqueness in a given field, except for NULLs. This is exactly what I need for my national id field.
From http://dev.mysql.com/doc/refman/5.5/en/create-table.html :
A UNIQUE index creates a constraint such that all values in the index must be distinct. An error occurs if you try to add a new row with a key value that matches an existing row. For all engines, a UNIQUE index permits multiple NULL values for columns that can contain NULL.
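In DDL terms, ticking UQ on the column roughly corresponds to something like this (table and column names are illustrative, not Workbench's exact output):

CREATE TABLE person (
    person_id   INT NOT NULL,
    national_id VARCHAR(20) NULL,
    PRIMARY KEY (person_id),
    UNIQUE KEY uq_national_id (national_id)
);

-- Several rows may have national_id = NULL,
-- but duplicate non-NULL national_id values are rejected.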
I need to create a report that shows, for each master row, two sets of child (detail) rows. In database terms you can think of this as table 'A' having a 1:M relationship to table 'B' and also a 1:M relationship to table 'C'. So, for each row from table 'A', I want to display a list of child rows from 'B' (preferably in one section) and a list of child rows from 'C' (preferably in another section). I would also prefer to avoid the use of sub-reports, if possible. How can I best accomplish this?
Thanks.
I think I understand your question correctly, i.e. for a given row in Table A, you want the details section(s) to show all connected rows in Table B and then all connected rows in Table C (for any number of rows in B or C, including zero). I only know of two solutions to this, neither of which is straightforward.
The first is, as you've guessed, the disliked subreport option.
The second involves some additional work in the database; specifically, creating a view that combines the entries in Table B and Table C into one table, which can then be used in the main report as a linkable object to report on and you can group on Table A as desired. How straightforward this is will depend on how similar the structures of B and C are. If they were effectively identical, the view could contain something simple like
SELECT 'B' AS DetailType, Field1, Field2, FieldLinkedToTableA
FROM TableB
UNION ALL
SELECT 'C' AS DetailType, Field1, Field2, FieldLinkedToTableA
FROM TableC
However, neither option will scale well to reports with lots of results (unless your server supports indexing the view).
This is exactly what Crystal was made for :)
Make a blank .rpt and connect to your data sources as you normally would.
Go to Report->Group Expert and choose your grouping field (aka Index field).
You will now see a Group Header section in your design view. This is your "Master row". The Details sections will be your "Child rows".
In the example image below, this file is grouped by {Client}. For client "ZZZZ", there are 2 records, so there are 2 details sections while all the other clients only have 1 details section.
Edit
Based on your response, how about this:
In your datasource (or perhaps using some kind of intermediary like MS Access), start writing SQL as follows.
Make a subquery left joining the primary key of TblA and the foreign key of TblB. Add a third column containing a constant, e.g. "TblB"
Make a subquery left joining the primary key of TblA and the foreign key of TblC. Add a third column containing a different constant, e.g. "TblC"
Union those 2 queries together. That'll be your "index table" of your crystal report.
In Crystal, you can have multiple grouping levels. So group first by your constant column, then by the TblA primary key, then by the foreign key.
This way, all the results from TblB will be displayed first, then TblC. And with a little work, tables B & C won't even have to have the same field definitions.
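A rough sketch of that union query (every table and column name here is a placeholder):

SELECT a.pk_a, b.fk_a AS child_key, 'TblB' AS detail_type
FROM TblA a
LEFT JOIN TblB b ON b.fk_a = a.pk_a
UNION ALL
SELECT a.pk_a, c.fk_a AS child_key, 'TblC' AS detail_type
FROM TblA a
LEFT JOIN TblC c ON c.fk_a = a.pk_a

In Crystal you would then group first on detail_type, then on pk_a, then on child_key, as described above.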
You can use or create columns that are used for grouping, then group on the table A column, then the table B column, then C. (Crystal's group is not the same as t-sql "group by". In Crystal, it's more of a sorting than a grouping.)