Null value in Database - RDBMS

Null value means
No value
Inapplicable, unassigned, unknown, or unavailable
Which is true?

It's all about the context in which it's used. A null means there is no value, but the reason for this depends on the domain in which it is being used. In many cases the items you've listed are all valid uses of a null.

It can mean any of those things (and it is not always obvious which), which is one argument against using nulls at all.
See: http://en.wikipedia.org/wiki/Null_(SQL)#Controversy

From Wikipedia:
Null is a special marker used in Structured Query Language (SQL) to indicate that a data value does not exist in the database. Introduced by the creator of the relational database model, E. F. Codd, SQL Null serves to fulfill the requirement that all true relational database management systems (RDBMS) support a representation of "missing information and inapplicable information". Codd also introduced the use of the lowercase Greek omega (ω) symbol to represent Null in database theory. NULL is also an SQL reserved keyword used to identify the Null special marker.

Obviously you have the DB definition of what null means, but to an application it can mean anything. I once worked on a strange application (disclaimer: I didn't design it) that used null in a junction table to represent all of the options (allegedly it was designed this way to "save space"). This was a DB design for user and role management, so null in this case meant the user was in all roles. That's one for The Daily WTF. :-)
Like many people I tend to avoid using nulls where realistically possible.

null indicates that a data value does not exist in the database, thus representing missing information.
It also allows for three-valued logic: true, false, and unknown.

The only answer supported by SQL semantics is "unknown." If it meant "no value," then
'Hi there' = NULL
would return FALSE, but it returns NULL. This is because the NULL value in the expression means an unknown value, and the unknown value could very well be 'Hi there' as far as the system knows.
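A minimal sketch of that three-valued logic at work (PostgreSQL syntax, which allows SELECT without a FROM clause; the logic itself is standard SQL):
SELECT CASE
         WHEN 'Hi there' = NULL THEN 'true'
         WHEN NOT ('Hi there' = NULL) THEN 'false'
         ELSE 'unknown'
       END;
-- returns 'unknown': both comparisons evaluate to UNKNOWN, so neither branch fires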

NULL is a representation that a field has not had a value set, or has been reset to NULL.
It is not unknown or unavailable.
Note that when looking for NULL values, do not use '=' in a WHERE clause; use 'is', e.g.:
select * from User where username is NULL;
Not:
select * from User where username = NULL;

NULL, in the relational model, means Unknown. It's a mark that appears instead of a value wherever a value can appear in SQL.

Null means nothing, unknown, and no value.
It does not mean unavailable or inapplicable.

Null is a testable state of a column in a row, but it has no value itself.
For example:
An int can only be ..., 0, 1, 2, 3, ... and also NULL.
A datetime can only be a valid date... and also NULL.
A bit can only be 0 or 1... and also NULL.
A varchar can be a string... and also NULL.
See the pattern?
You can make a column NOT NULL-able to force it to always take a value.
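For instance, a minimal sketch (the table and column names here are hypothetical):
CREATE TABLE person
(
id int NOT NULL,  -- must always hold a value
name varchar(50)  -- nullable by default: may be NULL
);
INSERT INTO person (id) VALUES (1);                -- succeeds; name is stored as NULL
INSERT INTO person (id, name) VALUES (NULL, 'x');  -- fails: id is declared NOT NULL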

The NULL SQL keyword is used to represent either a missing value or a value that is not applicable in a relational table

all :-)
If you want to add a semantic meaning to your field, add an ENUM:
create TABLE myTable
(
myfield varchar(50),
myfieldType enum ('OK','NoValue','InApplicable','Unassigned','Unknown','Unavailable') NOT NULL
)

Related

How to prevent Entity Framework from converting empty strings to null in database-first approach

I have to insert empty strings into a non-nullable varchar field in an Oracle DB.
The property of the object I'm trying to save is set to an empty string, but when I call SaveChanges I get an error because EF converts my empty string to null.
I know that in the code-first approach you can use ConvertEmptyStringToNull=false: is there a way to achieve the same behavior with the database-first approach?
It appears that in Oracle (at least for now) the empty string is treated as null.
Therefore there is no way to save an empty string in a varchar field.
Note: Oracle Database currently treats a character value with a length of zero as null. However, this may not continue to be true in future releases, and Oracle recommends that you do not treat empty strings the same as nulls.
Source
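You can see the behavior directly in Oracle itself (a minimal sketch using the standard dual table):
SELECT CASE WHEN '' IS NULL THEN 'empty string is treated as null'
            ELSE 'empty string is a real value' END AS result
FROM dual;
-- current Oracle releases return 'empty string is treated as null'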

Is there any case of table rewrite on add nullable column?

On the one hand the documentation states clearly that
When a column is added with ADD COLUMN and a non-volatile DEFAULT is specified, the default is evaluated at the time of the statement and the result stored in the table's metadata. That value will be used for the column for all existing rows. If no DEFAULT is specified, NULL is used. In neither case is a rewrite of the table required.
https://www.postgresql.org/docs/12/sql-altertable.html
On the other hand, I've heard that because of the tuple header structure, in some cases a table might be rewritten.
The documentation states that there are three fields responsible for null value storage behavior:
The null bitmap is only present if the HEAP_HASNULL bit is set in t_infomask. If it is present it begins just after the fixed header and occupies enough bytes to have one bit per data column (that is, the number of bits that equals the attribute count in t_infomask2). In this list of bits, a 1 bit indicates not-null, a 0 bit is a null. When the bitmap is not present, all columns are assumed not-null.
https://www.postgresql.org/docs/12/storage-page-layout.html
I used this information and the pageinspect extension to see what happens to tuple headers when nullable columns are added or deleted. In every case, when I add or delete a nullable column, the existing tuples never change their headers.
So I cannot find any evidence for the claim that a table might be rewritten. The fields t_infomask, t_infomask2, and t_bits never change their values.
Am I missing something? Can I say that adding nullable columns to high-volume tables is absolutely safe in terms of disk activity (excluding the fact of exclusive locking)?
The behavior changed in PostgreSQL v11. Before that, adding a column with a non-NULL default would rewrite the table.
The point is that from v11 on, the tuple and its header don't have to be modified; adding such a column is just a metadata change. You won't see that with pageinspect. NULL values are never stored in the tuple; there is just the NULL bitmap, which is present if one of the table's columns is nullable.
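A minimal sketch of the kind of check described above (assuming the pageinspect extension is available; the table name t is hypothetical):
CREATE EXTENSION IF NOT EXISTS pageinspect;
CREATE TABLE t (a int);
INSERT INTO t VALUES (1);
ALTER TABLE t ADD COLUMN b int;  -- nullable, no default: a metadata-only change
-- inspect the tuple headers on the first heap page;
-- t_infomask, t_infomask2 and t_bits are unchanged for the existing row
SELECT lp, t_infomask, t_infomask2, t_bits
FROM heap_page_items(get_raw_page('t', 0));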

How to set transform_null_equals postgres parameter on a connection?

First some context, in case it helps with your understanding of the why piece and sometimes leads to other good answers. I love the setting transform_null_equals because of the usual split between
a NULL column value, where I am told null means unknown, and
value = null in a WHERE clause, where null means null.
The setting in the title basically changes Postgres so that null in a WHERE clause AND a column value BOTH mean 'unknown'. I can then say WHERE c.col = null (which means: find any rows WHERE c.col is unknown) and I can also do WHERE c.col = 'value'.
In this way, in languages that have null, I can do c.col = variable, and the variable can be null for unknown or a value for something that is known. Perfect!
I realize this is a violation of the spec, but it makes our team super fast (which is way more important in our business)... we have had fewer bugs, and WAY WAY simpler queries... OMG, way simpler.
Now, we set this on the user, but I want to set it via the connection instead, so that when someone installs Postgres it just magically works without them having to remember to set the setting.
How to do this in JDBC?
Even better, how to do it in a Hikari pool?
You may have less trouble writing your queries with transform_null_equals, but I doubt that they will perform better, since this will just replace = NULL with IS NULL before the query is optimized.
Anyway, you can use the options parameter in a JDBC connection string to supply the parameter to the server process:
jdbc:postgresql:dbname?user=myuser&password=mypwd&options=-ctransform_null_equals%3Don
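If you would rather not touch the connection string, a plain-SQL alternative (a sketch; the database and role names mydb and myuser are hypothetical) is to persist the setting at the database or role level, so that every new connection picks it up:
ALTER DATABASE mydb SET transform_null_equals = on;
ALTER ROLE myuser SET transform_null_equals = on;  -- the per-user variant the question already uses
-- effect once the setting is active:
SET transform_null_equals = on;
SELECT 1 WHERE NULL = NULL;  -- rewritten to NULL IS NULL, so this returns a row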

tSQLt FakeTable fails when user-defined data types are used

I got the following error when I tried to fake a table that uses user-defined data types:
COLLATE clause cannot be used on user-defined data types.
It looks like it's a known issue in tSQLt.
https://groups.google.com/forum/?fromgroups#!topic/tsqlt/AS-Eqy6BjlA
Besides altering table definition, is there a workaround? Thanks.
Thanks to Chris Francisco, there is now a patch:
FYI I ran into the same issue and was able to get around it by digging into the tSQLt code a bit. There's a function named Private_GetFullTypeName. The last column returned by the function is named "Collation", which returns an empty string '' if the variable @CollationName is null. If you add "OR is_user_defined = 1" to the condition for the empty string, you can fake tables with user-defined data types.
I haven't run into any weird side effects from this alteration, but I'm no expert on the inner workings of tSQLt (or on tSQLt in general), so this may break functionality elsewhere.
In other words:
CASE WHEN @CollationName IS NULL OR is_user_defined = 1 THEN ''
ELSE ' COLLATE ' + @CollationName
END

Null db values and defaults

I have 2 fields that I'm adding to an existing database table with data in it. One is a bit and one is an int. If I am setting defaults for both, should I just set them to NOT NULL, since there is no case where they would be null?
If you will ever need to store data where you need the ability to indicate "we don't know" then you may consider allowing null values.
For example, I store data from remote sensors. When I am unable to retrieve the sensor data, such as due to network problems, I use null.
If, however, you require that a value always be present, then you should use the NOT NULL constraint.
Yes, that would do the trick. If you add those columns as NOT NULL to a table with existing data and don't specify a default value, you'll definitely get an error from the DB.
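A minimal sketch of the resulting statements (T-SQL syntax; the table and column names are hypothetical):
-- adding NOT NULL columns with defaults to a table that already has rows:
-- existing rows receive the default values, so no error is raised
ALTER TABLE dbo.Orders ADD IsArchived bit NOT NULL DEFAULT (0);
ALTER TABLE dbo.Orders ADD RetryCount int NOT NULL DEFAULT (0);
-- without the DEFAULT clauses, either ALTER would fail on a non-empty table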