The SQL language, and PostgreSQL 9+ in particular, has many ways to do the same thing... But in many circumstances (see the Context Notes section for a rationale) we need to "cut diversity" and opt for a standard way.
There is a tendency to adopt the text data type instead of varchar. It would be "the standard way to express strings" in PostgreSQL (!), avoiding time lost in project discussions and in casts between similar formats...
But how can we use text while preserving a size-limit constraint?
I use CHECK (char_length(field) < N) and have had no problem changing the limit in a live environment, so it is perhaps the best way... Is it?
Some variations: in general, what is the best choice?
1. In CREATE TABLE:
1.1. CHECK after the data type, just like a default value definition. Is this the best practice?
1.2. CHECK after all column definitions. Usual for multi-column declarations like CHECK (char_length(col1) < N1 AND char_length(col2) < N2).
1.2.1. Some people also like to express all the individual CHECKs there, so as not to "pollute" the column declarations.
2. Use in a trigger: is there some advantage?
3. Other ways... Is there another relevant one?
1.1, 1.2, 2 or 3: what is the best practice?
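For reference, a minimal sketch of variations 1.1 and 1.2 (hypothetical table and column names):

-- 1.1: column constraint, inline with the data type
CREATE TABLE t1 (
    col1 text CHECK (char_length(col1) < 100)
);

-- 1.2: table constraint, after all column definitions
CREATE TABLE t2 (
    col1 text,
    col2 text,
    CHECK (char_length(col1) < 100 AND char_length(col2) < 50)
);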
CONTEXT NOTES
In projects and teams with KISS or convention-over-configuration demands, we need "good practice" recommendations... I was looking for one, in the context of CREATE TABLE ... text/varchar and project maintenance. No unbiased "good practice" recommendation is on the horizon: Stack Overflow votes are the only reasonable record of this kind of recommendation.
Convention scope
(edit) For individual use, of course, as #ConsiderMe commented, "no matter what you choose, as long as you stick with it throughout the entire time there will be no problem with it".
This question, on the other hand, is about "SQL community" or "PostgreSQL community" best-practice conventions.
I like to keep the code as short as possible, so I'd go with length(string) in the CHECK constraint. I don't see a particular use for char_length in this case; it just takes up more "code space".
Internally, they are both textlen anyway.
You should be careful about characters that take more than one byte. In such cases I would use octet_length. As an example, consider the character ą, which returns 1 when asked for length and 2 when asked for octet_length. It has been a pain doing migrations between database systems with different length enforcement.
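For example, in PostgreSQL (the byte count assumes a UTF-8 server encoding):

SELECT length('ą') AS chars, octet_length('ą') AS bytes;
-- chars = 1 (one character), bytes = 2 (two bytes in UTF-8)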
I believe that a good source of "best practices" is to follow the documentation.
It says that a CHECK constraint written inline with a column definition is a column constraint, bound to that particular column.
It also mentions the table constraint, which is written separately from any column definition and can enforce data correctness across several columns.
Basically, in the projects I'm involved in, I follow this rule for readability and maintenance purposes.
I wouldn't even consider creating a trigger for such things. To me, triggers are designed for much more complex tasks; I don't see a reason to enforce simple data-correctness rules in them.
I can't think of any other solution that would be as basic as the standard ones while still doing its simple job like those mentioned above.
The Depesz article on which this reasoning was based is outdated. The only argument against varchar(N) was that changing N required a table rewrite, and as of Postgres 9.2, this is no longer the case.
This gives varchar(N) a clear advantage, as increasing N is basically instantaneous, while changing the CHECK constraint on a text field will involve re-checking the entire table.
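A minimal sketch of the difference (hypothetical table t with a string column name; PostgreSQL 9.2+):

-- varchar(N): increasing the limit is a metadata-only change, effectively instantaneous
ALTER TABLE t ALTER COLUMN name TYPE varchar(100);

-- text + CHECK: swapping the constraint re-validates every existing row
ALTER TABLE t DROP CONSTRAINT name_len;
ALTER TABLE t ADD CONSTRAINT name_len CHECK (char_length(name) <= 100);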
Related
Some of the Users in my database will also be Practitioners.
This could be represented by either:
an is_practitioner flag in the User table
a separate Practitioner table with a user_id column
It isn't clear to me which approach is better.
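A minimal sketch of the two options (PostgreSQL-style DDL; any column beyond those named in the question is illustrative):

-- Option 1: a flag on the User table
CREATE TABLE users (
    id              serial PRIMARY KEY,
    email           text NOT NULL,
    is_practitioner boolean NOT NULL DEFAULT false
);

-- Option 2: a separate Practitioner table sharing the user's id
CREATE TABLE practitioners (
    user_id integer PRIMARY KEY REFERENCES users (id)
    -- practitioner-only columns would go here
);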
Advantages of flag:
fewer tables
only one id per user (hence no possibility of confusion about which id to use in other tables)
flexibility (I don't have to decide whether fields are Practitioner-only or not)
possible speed advantage for finding User-level information for a practitioner (e.g. e-mail address)
Advantages of new table:
no nulls in the User table
clearer as to what information pertains to practitioners only
speed advantage for finding practitioners
In my case specifically, at the moment, practitioner-related information is generally one-to-many (such as the locations they can work in, or the shifts they can work, etc.). I would not be at all surprised if it turns out I need to store simple one-to-one attributes for practitioners as well.
Questions
Are there any other considerations?
Is either approach superior?
You might want to consider the fact that someone who is a practitioner today may be something else tomorrow. (And by that I don't mean not being a practitioner.) Say a consultant, an author, or whatever the variants are in your subject domain; you might want to keep track of their latest status in the Users table. So it might make sense to have a ProfType field (type of professional practice) or equivalent. This way you have all the advantages of having a flag: you could keep it as a string field and leave it as a blank string, or fill it with other Prof.Type codes as your requirements grow.
You mention that having a new table has the advantage of making it easy to find practitioners. No, you are better off with a WHERE clause on the users table for that.
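For example (a sketch assuming the is_practitioner flag from the question; the partial index is PostgreSQL-specific):

-- An index restricted to practitioners keeps the lookup cheap.
CREATE INDEX users_practitioner_idx ON users (id) WHERE is_practitioner;
SELECT id, email FROM users WHERE is_practitioner;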
Your last paragraph (one-to-many), however, may tilt the whole choice in favour of a separate table. You might also want to consider the likely number of records, likely growth, the criticality of complicated queries, etc.
I tried to draw the two scenarios, with some notes inside the image. It's really only a draft, just to help you "see" the various entities. Maybe you have already done something like this; in that case, please disregard my answer. As Whirl stated in his last paragraph, you should consider other things too.
Personally, I would go for a separate table, as long as you can already identify some extra data that makes sense only for a Practitioner (e.g. full professional title, university, hospital, or any other entity the Practitioner is associated with).
That way, if in the future you discover more data that makes sense only for the Practitioner, and/or identify another distinct "subtype" of User (e.g. Intern), you can just add fields to the Practitioner subtable, or a new table for the Intern.
It might be advantageous to use a User Type field as suggested by #Whirl Mind above.
I think this is just one example of having to identify different types of objects in your DB, and for that I refer to one of my previous answers here: Designing SQL database to represent OO class hierarchy
When I define a CHECK constraint on a table, I find that the stored condition clause can differ from what I entered.
Example:
ALTER TABLE T1 ADD CONSTRAINT C1 CHECK (field1 IN (1, 2, 3))
Looking at what is stored:
SELECT cc.definition
FROM sys.check_constraints cc
INNER JOIN sys.objects o ON o.object_id = cc.parent_object_id
WHERE cc.type = 'C' AND cc.name = 'C1';  -- filter on the constraint name, not the table name
I see:
([field1]=(3) OR [field1]=(2) OR [field1]=(1))
Whilst these are equivalent, they are not the same text.
(A similar behaviour occurs when using a BETWEEN clause).
My reason for wishing this did not happen is that I am trying to programmatically ensure that all my CHECK constraints are correct, by comparing the text I would use to define the constraint with what is stored in sys.check_constraints, and dropping and recreating the constraint if they differ.
However, in these cases they are always different, and so the program would always think it needs to recreate the constraint.
Question is:
Is there any known reason why SQL Server does this translation? Is it just removing a bit of syntactic sugar and storing the clause in a simpler form?
Is there a way to avoid the behaviour (other than to write my constraint clauses in the long form to match what SQL Server would change it to)?
Is there another way to tell if my check constraint is 'out of date' and needs recreating?
Is there any known reason why SQL Server does this translation? Is it just removing a bit of syntactic sugar and storing the clause in a simpler form?
I'm not aware of any reasons documented in the Books Online, or elsewhere. However, my guess is that it's normalized for purposes that are internal to SQL Server. It might allow SQL Server to be a bit lenient in defining the expression (such as using Database for a column name), while guaranteeing that the column names are always appropriately escaped for whatever engine needs to parse the expression (i.e., [Database]).
Is there a way to avoid the behaviour (other than to write my constraint clauses in the long form to match what SQL Server would change it to)?
Probably not. But if your constraints aren't terribly complicated, is re-writing the constraint clauses in the long form such a bad idea?
Is there another way to tell if my check constraint is 'out of date' and needs recreating?
Before I answer this directly, I'd point out that there's a bit of programming philosophy involved here. The API that SQL Server provides for the text of a CHECK constraint only guarantees that you'll get something equivalent to the original expression. While you could certainly build some fancy methods to try to ensure that you'll always be able to reproduce SQL Server's normalized version of the expression, there's no guarantee that Microsoft won't change its normalization rules in the future. And indeed, there's probably no guarantee that two equivalent expressions will always be normalized identically!
So, I'd first advise you to re-examine your architecture, and see if you can accomplish the same result without having to rely on undocumented API behavior.
Having said that, there are a number of methods outlined in this question (and answer).
Another alternative, which is a bit more brute-force but perhaps acceptable, would be to always assume that the expression is "out of date" and simply drop and re-create the constraint every time you check. Unless you expect these constraints to become out of date frequently (or the tables are quite large), this seems a decent solution. You could probably even run it in a transaction, so that if the new constraint is already violated, you simply roll back the transaction and report the error.
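A sketch of that brute-force approach, using the constraint from the question (THROW requires SQL Server 2012+):

BEGIN TRY
    BEGIN TRANSACTION;
    ALTER TABLE T1 DROP CONSTRAINT C1;
    ALTER TABLE T1 ADD CONSTRAINT C1 CHECK (field1 IN (1, 2, 3));
    COMMIT TRANSACTION;
END TRY
BEGIN CATCH
    -- Existing rows violate the re-created constraint: undo the drop/re-create and report.
    IF @@TRANCOUNT > 0 ROLLBACK TRANSACTION;
    THROW;
END CATCH;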
I recently found myself using some rather lengthy names for the tables and views involved in a development piece, which got me wondering whether it's possible to create client/database/server level aliases for objects.
Say, for example, I have a view named dbo.vAlphaBetaGammaDelta. Is there a way (with or without IntelliSense) to create a reference to it named dbo.vABGD?
If not, would there be any downsides to creating a view of a view or of a single table, aside from the maintenance necessary if/when the table schema changes?
I should note that these aliases/views would not be intended for use in other objects, but for the alleviation and prevention of carpal tunnel during day-to-day troubleshooting and delving xD
SQL Server allows for the creation of synonyms. That seems to be what you are looking for: http://msdn.microsoft.com/en-us/library/ms177544.aspx
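For example, using the names from the question:

CREATE SYNONYM dbo.vABGD FOR dbo.vAlphaBetaGammaDelta;
SELECT * FROM dbo.vABGD;  -- resolves to the underlying view
DROP SYNONYM dbo.vABGD;   -- dropping the synonym leaves the view untouched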
However, as #MitchWheat mentioned, this seems to be going in the wrong direction. There are a few quite good SSMS plugins available that provide auto-completion of long object names (e.g. SQL Prompt). Incidentally, those products have trouble with synonyms...
There are many cases where you would like to have synonyms.
Let me state just one for a start:
You have a well-defined, hypothetical table name: GlobalStatisticalRecord. Hundreds of lines of code and objects (keys, indexes, etc.) in SQL and elsewhere refer to this table name.
After 5 years of usage, the abbreviation GSR has become customary not only among the technical people but also among the business users. So, to stress it again, GSR is now even more recognizable than GlobalStatisticalRecord. However, for new people coming into the technical team it is good to keep GlobalStatisticalRecord as the table name, since it nicely describes what the table is all about. Now, when writing a quick ad-hoc query, which may not be from your tool of choice with all the IntelliSense features you are accustomed to, these aliases really save you time (and your "life" at 2 a.m. when you are frantically trying to diagnose a production problem).
Please, if you never faced a case when you would need this, just don't assume that there is none.
I stressed the ad-hoc adjective, since I agree that in permanent queries (stored procedures, etc.), for the reasons you pointed out, it is advisable to use the full table names.
Today I've been presented with a fun challenge and I want your input on how you would deal with this situation.
So the problem is the following (I've converted it to demo data as the real problem wouldn't make much sense without knowing the company dictionary by heart).
We have a decision table that has a minimum of 16 conditions. Because it is an impossible feat to manage all of them (2^16 possibilities), we've decided to list only the exceptions. Like this:
As an example I've only added 10 conditions but in reality there are (for now) 16. The basic idea is that we have one baseline (the default) which is valid for everyone and all the exceptions to this default.
Example:
You have a foreigner who is also a pirate.
If you go through all the exceptions one by one, condition by condition, you remove the exceptions that have at least one failing condition. In the end you'll be left with the following two exceptions that are valid for our case; the match is on the IsPirate and IsForeigner conditions. But as you can see, there are 2 results here, well, 3 actually if you count the default.
Our solution
What we came up with to solve this is that in the GUI where you add these exceptions, an algorithm should run that checks for such cases and forces you to define the exception more specifically. This is still only a theory and hasn't been tested, but we think it could work this way.
My Question
I'm looking for alternative solutions that make the rules manageable and prevent the problem I've shown in the example.
Your problem seems to be the resolution of conflicting rules. When multiple rules match your input (your foreigner and pirate) and they end up recommending different things (your cangetjob and cangetevicted), you need a strategy for resolving this conflict.
What you mentioned is one way of resolution: remove the conflict in the first place. However, this may not always be possible, and it is not always desirable, because when a user adds a new rule that conflicts with a set of old rules (which he/she did not write), the user may not know how to revise it to remove the conflict.
Another possible resolution method is prioritization. Mark a priority on each rule (based on things like the user's own authority etc.), sort the matching rules according to priority, and apply in ascending sequence of priority. This usually works and is much simpler to manage (e.g. everybody knows that the top boss's rules are final!)
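A sketch of priority-based resolution in SQL (hypothetical table and column names; PostgreSQL-style, with NULL standing in for the "?" don't-care marker, and the single highest-priority match winning):

CREATE TABLE rules (
    priority     int PRIMARY KEY,  -- higher number = higher priority
    is_pirate    boolean,          -- NULL = "?" (condition doesn't matter)
    is_foreigner boolean,
    can_get_job  boolean NOT NULL
);

-- Resolve "a foreigner who is also a pirate": take the highest-priority match.
SELECT can_get_job
FROM rules
WHERE (is_pirate IS NULL OR is_pirate = true)
  AND (is_foreigner IS NULL OR is_foreigner = true)
ORDER BY priority DESC
LIMIT 1;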
Prioritization may also be used to mark a certain rule as a "global override". In your example, you may want to make "IsPirate" an override rule, meaning that it overrides the settings for normal people. In other words, once you're a pirate, you're treated differently. This makes it very easy to design a system in which a bunch of normal business rules govern 90% of the cases, and a set of "exceptions" is treated differently, automatically overriding certain things. In this case, you should also consider making "?" available in the output columns as well.
One other possible resolution method is to include attributes in each of your conditions. For example, certain conditions must have no "zeros" in order to pass (? doesn't matter). Some conditions must have at least one "one" in order to pass. In other words, mark each condition as either "AND", "OR", or "XOR". Some popular file-system security uses this model. For example, CanGetJob may be AND (you want to be stringent on rights-to-work). CanBeEvicted may be OR -- you may want to evict even a foreigner if he is also a pirate.
An enhancement of the AND/OR method is to provide a threshold that the total result must exceed before passing that condition. For example, if you put CanGetJob at a threshold of 2, then it must get at least two 1's in order to return 1. This is sometimes useful for conditions that are not clearly black-and-white.
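A sketch of the threshold idea (hypothetical condition columns; the boolean-to-integer casts are PostgreSQL-style):

-- CanGetJob passes only when at least 2 of the three conditions hold.
SELECT (is_citizen::int + has_permit::int + has_degree::int) >= 2 AS can_get_job
FROM applicants;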
You can mix resolution methods: e.g. first prioritize, then use AND/OR to resolve rules with similar priorities.
The possibilities are limitless and really depend on what your actual needs are.
To me this problem is reminiscent of a business rules engine, where there is no known algorithm to derive the outputs from the inputs (e.g. using boolean logic) and the user (typically some sort of administrator) has to define all or some of the logic himself.
This might sound like a bit of overkill, but OTOH it provides virtually limitless extension capabilities: you don't have to code any new business logic, just define a new rule set.
As I understand your problem, you are looking for a nice way to visualise the editing of these rules. But this all depends on your programming language and the tool you select for it. Java, for example, has JBoss Drools. Quoting their page:
Drools Guvnor provides a (logically centralized) repository to store your business knowledge, and a web-based environment that allows business users to view and (within certain constraints) possibly update the business logic directly.
You could possibly use this generic tool or write your own.
Everything depends on what your actual rules will look like. Rules like 'IF has an even number of these properties THEN' would be painful to represent in this format, whereas rules like 'IF pirate and not geek THEN' are easy.
You can 'avoid the ambiguity' by stating that you'll always take the first actual match; in other words, your rules have a priority. You'd then want to flag rules which have no effect because they are 'shadowed' by rules higher up. They're not hard to find, so it's something your program should do.
Your interface could also indicate groups of rules where rules within the group can be in any order without changing the outcomes. This will add clarity to what the rules are really saying.
If some of your outputs are relatively independent of the others, you will also get a more compact and much clearer table by allowing question marks in the output. In that design the scan for first matching rule is done once for each output. Consider for example if 'HasChildren' is the only factor relevant to 'Can Be Evicted'. With question marks in the outputs (= no effect) you could be halving the number of exception rules.
My background for this is circuit logic design, not business logic. What you're designing is similar to, but not the same as, a PLA. As long as your actual rules are close to sum of products then it can work well. If your rules aren't, for example the 'even number of these properties' rule, then the grid like presentation will break down in a combinatorial explosion of cases. Your best hope if your rules are arbitrary is to get a clearer more compact presentation with either equations or with diagrams like a circuit diagram. To be avoided, if you can.
If you are looking for a Decision Engine with a GUI, then you can try this one: http://gandalf.nebo15.com/
We just released it; it's open source and production-ready.
You probably need some kind of inference engine. Think about doing it in Prolog.
Recently there was a big debate during a code review session on the use of constants.
The developers had used constants for the following purposes:
Each and every message key used in the i18N application was declared as a constant. The application contained around 3000 message keys and hence the same number of constants.
Each and every database column name was declared as a constant. There were around 5000 column names and still counting..
Does it make sense to have such a huge number of constants in any application?
IMHO, common sense should prevail. Message keys just don't need to be declared as constants. We already have one level of indirection, so why add one more?
Regarding database column names, I have mixed opinions. If a column is being used in multiple classes, does it make sense to declare it as a global constant?
Please pour in with your thoughts...
If I18N message keys aren't defined as constants, how do you enforce consistency? How do you automatically differentiate between a typo and a missing value? How do you audit to make sure that all I18N keys are fulfilled in each new language file?
As to database columns, you could definitely use some indirection - if your application knows about column names, you've got a binding problem. So there, you might consider a config file with the actual column names - but of course, you would want to refer to the column names by symbolic keys, which should be defined as auditable constants, just like the I18N keys.
I think it is good practice to declare the message keys used for i18n as constants.
I don't see much benefit in doing the same for the DB columns if you have a well-designed persistence layer.
This depends on the programming language, I think.
In PHP it's not uncommon to use defines, a.k.a. constants, for such things, while I'd not use this in Java or C#.
In most projects we tried to extract the SQL into templates, so not only the table and column names were configurable but the whole SQL statement. We used Velocity for basic templating mechanics like variables, small loops, ...
Regarding the language constants:
Another layer doesn't make much sense to me, but you have to choose your identifiers for the language translation carefully. Using the whole English sentence as the key may end up in a lot of work for the translators if you, for example, fix the wording of the English sentence without changing its meaning; all translators would then have to update their files.
If the constant is used in multiple places and the compiler really catches the problem, yes.