Why does TYPO3 use smallint(5) instead of tinyint(1) for storing BOOLEAN values in the database? - boolean

In TYPO3 8.7.x the datatype for booleans in the MySQL database is smallint(5), and I wonder why it is not tinyint(1), for example for fields like "deleted" and "hidden". So, is there a good reason not to use tinyint(1) for storing boolean values in my own extension?

I just had the same question. It seems tinyint is MySQL-specific; Doctrine DBAL maps it to smallint to stay database-agnostic. (Learned on Slack, thanks to Christian Kuhn!)
smallint is a signed integer, while tinyint is unsigned; as Doctrine's documentation puts it:
Not all of the database vendors support unsigned integers, so such an assumption might not be propagated to the database.
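So for your own extension it is easiest to follow the core convention and declare flag fields as smallint in ext_tables.sql. A minimal sketch (the table and field names here are made up):
CREATE TABLE tx_myext_domain_model_item (
    uid int(11) unsigned NOT NULL auto_increment,
    hidden smallint(5) unsigned DEFAULT '0' NOT NULL,   -- boolean flag stored as 0/1
    deleted smallint(5) unsigned DEFAULT '0' NOT NULL,  -- boolean flag stored as 0/1
    PRIMARY KEY (uid)
);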


Creating a postgres column which allows all datatypes

I want to create a logging table which tracks changes in a certain table, like so:
CREATE TABLE logging.zaak_history (
    event_id   bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    tstamp     timestamp DEFAULT NOW(),
    schemaname text,
    tabname    text,
    columnname text,
    operation  text,
    who        text DEFAULT current_user,
    new_val    <any_type>,
    old_val    <any_type>
);
However, the column that I want to track can take different datatypes, such as text, boolean and numeric. Is there a datatype that supports this?
Currently I am thinking about storing it as jsonb, as the JSON formatting will take care of the datatype, but I was wondering if there is a better way.
There is no Postgres data type that isn't strongly typed; the "any" data type that exists is a pseudo-type and cannot be used as a column type (it can only be used in functions, etc.).
You could store the binary representation of your data, because every type does have a binary representation.
Your approach of using JSON seems more flexible, as you can also store meta data (such as type information).
However, I recommend looking at how other people have solved the same issue for alternative ideas. For example, most wikis store a copy of the entire record for history, which is easy to reconstruct, can be referenced independently, and has no typing issues.
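If you do go the jsonb route, the table from the question could simply use jsonb for both value columns. A minimal sketch (the trigger that fills the table is left out; to_jsonb() wraps whatever value is being logged):
CREATE TABLE logging.zaak_history (
    event_id   bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    tstamp     timestamp DEFAULT now(),
    schemaname text,
    tabname    text,
    columnname text,
    operation  text,
    who        text DEFAULT current_user,
    new_val    jsonb,  -- holds text, boolean, numeric, ... as JSON values
    old_val    jsonb
);
-- example: INSERT INTO logging.zaak_history (schemaname, tabname, columnname, operation, new_val)
--          VALUES ('public', 'zaak', 'amount', 'UPDATE', to_jsonb(42));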

MySQL to PostgreSQL table create conversion - charset and collation

I want to migrate from MySQL to PostgreSQL. My CREATE TABLE query looks like this:
CREATE TABLE IF NOT EXISTS conftype
(
    CType  char(1) NOT NULL,
    RegEx  varchar(300) default NULL,
    ErrStr varchar(300) default NULL,
    Min    integer default NULL,
    Max    integer default NULL,
    PRIMARY KEY (CType)
) ENGINE=InnoDB DEFAULT CHARSET=latin1 COLLATE=latin1_bin;
What is the converted form of this query? I am confused by the DEFAULT CHARSET=latin1 COLLATE=latin1_bin part. How can I convert it?
That part means that the table uses the Latin-1 (ISO 8859-1) character set and the Latin-1 binary sort order. In PostgreSQL the character set is database-wide; there is no option to set it at table level.
You could create a mostly compatible database with:
CREATE DATABASE databasenamegoeshere WITH ENCODING 'LATIN1' LC_COLLATE='C'
LC_CTYPE='C' TEMPLATE=template0;
However, I would personally consider a MySQL->PostgreSQL port a good opportunity to switch to UTF-8/Unicode as well.
The character set is defined when you create the database, you can't overwrite that per table in Postgres.
A non-standard collation can be defined only on column level in Postgres, not on table level. I think(!) that the equivalent to latin1_bin in MySQL would be the "C" collation in Postgres.
So if you do need a different collation, you need something like this
RegEx varchar(300) default NULL collate "C",
ErrStr varchar(300) default NULL collate "C",
min and max are reserved words in SQL, so you shouldn't use them as column names. Although using them will work, I strongly suggest you find different names to avoid problems in the future.
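Put together, the converted statement could look roughly like this (a sketch; it keeps the original column names and uses the "C" collation assumption from above):
CREATE TABLE IF NOT EXISTS conftype
(
    CType  char(1) NOT NULL,
    RegEx  varchar(300) COLLATE "C" DEFAULT NULL,
    ErrStr varchar(300) COLLATE "C" DEFAULT NULL,
    Min    integer DEFAULT NULL,
    Max    integer DEFAULT NULL,
    PRIMARY KEY (CType)
);
-- the MySQL-specific ENGINE and CHARSET/COLLATE table options are simply dropped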

T-SQL implicit conversion between 2 varchars

I have some T-SQL (SQL Server 2008) that I inherited and am trying to find out why some of the queries are running really slowly. In the Actual Execution Plan I have three clustered index scans which are costing me 19%, 21% and 26%, so this seems to be the source of my problem.
The contents of the fields are usually numeric (but some job numbers have an alpha prefix)
The database design (vendor supplied) is pretty poor. The max length of a job number in their application is 12 characters, but in the tables that are joined it is defined as varchar(50) in some places and varchar(15) in others. My parameter is a varchar(12), but I get the same thing if I change it to a varchar(50).
The node contains this:
Predicate: [Live_Costing].[dbo].[TSTrans].[JobNo] as [sts1].[JobNo]=CONVERT_IMPLICIT(varchar(50),[#JobNo],0)
sts1 is a derived table, but the table it pulls jobno from is a varchar(50)
I don't understand why it's doing an implicit conversion between 2 varchars. Is it just because they are different lengths?
I'm fairly new to execution plans.
Is there an easy way to figure out which node in the exec plan relates to which part of the query?
Is the predicate the join clause?
Regards
Mark
Some variables can have a collation. Regardless, you need to verify your collations, which can be specified at server, DB, table, and column level.
First, check your collation between tempdb and the vendor supplied database. It should match. If it doesn't, it will tend to do implicit conversions.
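A quick way to compare the two database collations (Live_Costing is the database name taken from the plan in the question):
SELECT name, collation_name
FROM sys.databases
WHERE name IN ('tempdb', 'Live_Costing');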
Assuming you cannot modify the vendor supplied code base, one or more of the following should help you:
1) Predefine your temp tables and specify the same collation for the key field as in the db in use, rather than tempdb.
2) Provide collations when doing string comparisons (see the sketch after this list).
3) Specify collation for key values if using "select into" with a temp table
4) Make sure your collations on your tables and columns match your database collation (VERY important if you imported only specific tables from a vendor into an existing database.)
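For point 2, forcing an explicit collation in the comparison could look like this (a sketch only; the temp table #Jobs is a made-up stand-in for the derived table in the question):
SELECT sts1.JobNo
FROM dbo.TSTrans AS sts1
JOIN #Jobs AS j
    ON sts1.JobNo = j.JobNo COLLATE DATABASE_DEFAULT;  -- force both sides to the database collation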
If you can change the vendor supplied code base, I would suggest reviewing the cost for making all of your char keys the same length and NOT varchar. Varchar has an overhead of 10. The caveat is that if you create a fixed length character field not null, it will be padded to the right (unavoidable).
Ideally, you would have int keys, and only use varchar fields for user interaction/lookup:
create table Products(ProductID int not null identity(1,1) primary key clustered, ProductNumber varchar(50) not null)
alter table Products add constraint uckProducts_ProductNumber unique(ProductNumber)
Then do all joins on ProductID rather than ProductNumber; just filtering on ProductNumber would be perfectly fine.

Why don't we have a boolean datatype in Firebird?

Unless I'm totally wrong, we have no boolean datatype (1 bit) in Firebird, nor even in SQL Server. Why? I think boolean is useful in various situations... and has very low space consumption...
Firebird 3 introduces the boolean datatype. See the Firebird 3 release notes, BOOLEAN data type. You can get Firebird 3 from http://www.firebirdsql.org/en/firebird-3-0/
See also the original announcement: http://asfernandes.blogspot.com/2010/12/introducing-boolean-datatype.html
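In Firebird 3 the type can then be used directly, for example (table and column names are just examples):
CREATE TABLE sometable (
    somefield BOOLEAN DEFAULT FALSE NOT NULL  -- native boolean, Firebird 3+
);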
You have to create a domain for it:
CREATE DOMAIN D_BOOLEAN
AS smallint
CHECK (VALUE IS NULL OR VALUE IN (0, 1));
and then
alter table sometable add somefield d_boolean
It works perfectly in our DB :)
Firebird has booleans, in the form of the bit data type.
http://www.firebirdsql.org/manual/migration-mssql-data-types.html
FTA:
Converting the bit data type
The bit data type is used to hold a single boolean value, 0 or 1. MS SQL does not support assigning NULL to these fields. InterBase can emulate this with an INTEGER or a CHAR(1) data type.
The acceptable values can be restricted using domains. For more information on Firebird domains, see the Data Definition documentation.

Database Schema for Machine Tags?

Machine tags are more precise tags: http://www.flickr.com/groups/api/discuss/72157594497877875. They allow a user to basically tag anything as an object in the format
object:property=value
Any tips on an RDBMS schema that implements this? Just wondering if anyone has already dabbled with this. I imagine the schema is quite similar to implementing RDF triples in an RDBMS.
Unless you start trying to get into some optimisation, you'll end up with a table with Object, Property and Value columns, each record representing a single triple.
For anything more complicated, I'd suggest looking at the documentation for Jena, Sesame, etc.
If you want to continue with the RDBMS approach, then the following schema might work:
CREATE TABLE predicates (
    id        INT PRIMARY KEY,
    namespace VARCHAR(255),
    localName VARCHAR(255)
)
CREATE TABLE values (
    subject   INT,
    predicate INT,
    value     VARCHAR(255)
)
The predicates table holds the tag definitions and the values table holds the values.
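For example, a machine tag like geo:lat=57.64 on subject 1 would map onto these tables roughly like this (a sketch; note that values is a reserved word in most SQL dialects, so the table name may need quoting):
-- one row per distinct object:property
INSERT INTO predicates (id, namespace, localName) VALUES (1, 'geo', 'lat');
-- one row per tagged subject, with the value stored as text
INSERT INTO "values" (subject, predicate, value) VALUES (1, 1, '57.64');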
But Mat is also right. If there are more requirements then it's probably feasible to use an RDF engine with SQL persistence support.
I ended up implementing this schema