Create table, starting with digit in postgresql - postgresql

Can you give me a suggestion for creating a table whose name starts with a digit in PostgreSQL?

Use double quotes, e.g.:
t=# create table "42 Might be not The be$t idea" (i serial);
CREATE TABLE
t=# \d+ "42 Might be not The be$t idea"
Table "public.42 Might be not The be$t idea"
Column | Type | Modifiers | Storage | Stats target | Descript
ion
--------+---------+-----------------------------------------------------------------------------+---------+--------------+---------
----
i | integer | not null default nextval('"42 Might be not The be$t idea_i_seq"'::regclass) | plain | |
Please look closely at what this leads to. Using mixed case, special characters, or a leading digit in relation names is generally considered bad practice. Even though Postgres understands and works with such relation names, you run the risk of hitting bugs in other software.
Without experience you will most probably shoot yourself in the foot. E.g. pg_dump -t "badName" won't work: bash interprets the double quotes itself, and it is meant to work this way, so you have to specify pg_dump -t '"badName"' to find the table. And if you merely fail to find the table, you are lucky. The real disaster is when you have badname and Badname in the same schema.
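The root cause is identifier folding: unquoted names are folded to lower case, while quoted names are kept verbatim. A minimal sketch of the badname/Badname trap (the table names are hypothetical):
CREATE TABLE badname (i int);    -- unquoted: stored folded as badname
CREATE TABLE "Badname" (i int);  -- quoted: case preserved, a second, distinct table
SELECT * FROM Badname;           -- unquoted reference folds to badname (first table!)
SELECT * FROM "Badname";         -- only the quoted form reaches the second table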
The fact that it is doable does not mean you should jump into using it.

Postgres is adding a space at the beginning and end of all fields

SLES 12 SP3
Postgres 10.8
I have duplicated a table to migrate data from a DB2 instance. The fields are all of type CHAR, VARCHAR, or TIMESTAMP. I originally tried to use \COPY to pull the data in from a pipe delimited file. But, it put a space at the beginning and end of all of the fields, even if this caused the field to be longer than it is defined. I found a claim online that this was a known issue with \COPY. At that point, I dropped the table, used sed and some other tools to convert the pipe delimited data into an SQL INSERT statement. I again had a leading and trailing space in every field.
There are a lot of columns, but an example of what I have follows:
FLD1 CHAR(6) PRIMARY KEY
FLD2 VARCHAR(8)
FLD3 TIMESTAMP
I am using the short form of INSERT.
INSERT INTO my_table VALUES
('123456', '12345678', '2021-01-01 12:34:56');
But when I do a SELECT, I get (note the leading and trailing spaces):
123456 | 12345678 | 2021-01-01 12:34:56 |
I would point out that the first two fields are now longer than they are defined by 2 characters.
Does anyone know how I might fix this?
The -A argument to psql gives me the desired result.
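For context: psql's default aligned output mode pads every value with spaces for display; the stored data is unaffected. -A switches to unaligned output, so the raw values show through. A minimal sketch (mydb is a placeholder):
$ psql -A -d mydb -c 'SELECT * FROM my_table'
fld1|fld2|fld3
123456|12345678|2021-01-01 12:34:56
(1 row)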

Last Accessed Date and Last modified Date in postgresql [duplicate]

On a development server I'd like to remove unused databases. To decide that, I need to know whether a database is still used by someone or not.
Is there a way to get the last access or modification date of a given database, schema or table?
You can do it by checking the last modification time of the table's file.
In PostgreSQL, every table corresponds to one or more OS files, like this:
select relfilenode from pg_class where relname = 'test';
The relfilenode is the file name of table "test". You can then find the file in the database's directory.
In my test environment:
cd /data/pgdata/base/18976
ls -l -t | head
The last command lists all files ordered by last modification time.
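Instead of assembling the path by hand, pg_relation_filepath() returns it relative to the data directory; a small sketch:
select pg_relation_filepath('test');  -- e.g. base/18976/16385
show data_directory;                  -- the directory that path is relative to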
There is no built-in way to do this - and all the approaches that check the file mtime described in other answers here are wrong. The only reliable option is to add triggers to every table that record a change to a single change-history table, which is horribly inefficient and can't be done retroactively.
If you only care about "database used" vs "database not used" you can potentially collect this information from the CSV-format database log files. Detecting "modified" vs "not modified" is a lot harder; consider SELECT writes_to_some_table(...).
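For the log-file route, settings along these lines in postgresql.conf enable CSV logging (log_connections is included on the assumption that a mere connection counts as "used"):
log_destination = 'csvlog'
logging_collector = on
log_connections = on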
If you don't need to detect old activity, you can use pg_stat_database, which records activity since the last stats reset. e.g.:
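A query along these lines produces the record shown below (using psql's expanded display; the datname is from this example):
\x
select * from pg_stat_database where datname = 'regress';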
-[ RECORD 6 ]--+------------------------------
datid          | 51160
datname        | regress
numbackends    | 0
xact_commit    | 54224
xact_rollback  | 157
blks_read      | 2591
blks_hit       | 1592931
tup_returned   | 26658392
tup_fetched    | 327541
tup_inserted   | 1664
tup_updated    | 1371
tup_deleted    | 246
conflicts      | 0
temp_files     | 0
temp_bytes     | 0
deadlocks      | 0
blk_read_time  | 0
blk_write_time | 0
stats_reset    | 2013-12-13 18:51:26.650521+08
so I can see that there has been activity on this DB since the last stats reset. However, I don't know anything about what happened before the stats reset, so if I had a DB showing zero activity since a stats reset half an hour ago, I'd know nothing useful.
PostgreSQL 9.5 lets us track the timestamp of the last commit.
1. Check whether commit timestamp tracking is on or off using the following query:
show track_commit_timestamp;
2. If it returns "on", go to step 4; otherwise modify postgresql.conf:
cd /etc/postgresql/9.5/main/
vi postgresql.conf
and change
track_commit_timestamp = off
to
track_commit_timestamp = on
3. Restart the postgres service / system and repeat step 1.
4. Use the following queries to track the last commit:
SELECT pg_xact_commit_timestamp(xmin), * FROM YOUR_TABLE_NAME;
SELECT pg_xact_commit_timestamp(xmin), * FROM YOUR_TABLE_NAME WHERE COLUMN_NAME=VALUE;
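To boil that down to a single "last modified" value per table, a sketch (assuming track_commit_timestamp is on and a hypothetical table some_table; rows whose commit timestamp was never recorded come back as NULL, which max() ignores):
SELECT max(pg_xact_commit_timestamp(xmin)) AS last_modified FROM some_table;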
My way to get the modification date of my tables:
Python Function
CREATE OR REPLACE FUNCTION py_get_file_modification_timestamp(afilename text)
RETURNS timestamp without time zone AS
$BODY$
import os
import datetime
return datetime.datetime.fromtimestamp(os.path.getmtime(afilename))
$BODY$
LANGUAGE plpythonu VOLATILE
COST 100;
SQL Query
SELECT
schemaname,
tablename,
py_get_file_modification_timestamp('*postgresql_data_dir*/*tablespace_folder*/'||relfilenode)
FROM
pg_class
INNER JOIN
pg_catalog.pg_tables ON (tablename = relname)
WHERE
schemaname = 'public'
I'm not sure if things like vacuum can mess up this approach, but in my tests it's a pretty accurate way to find tables that are no longer used, at least for INSERT/UPDATE operations.
I guess you should activate some log options. You can get information about logging in PostgreSQL here.

TYPO3 RTE: Saving mathematical/greek symbols doesn't work

I need to display some mathematical/greek symbols in the RTE and later in the frontend. Inserting them via copy/paste or the "Insert characters" option works great, but as soon as I save the text, the inserted symbol gets replaced with a question mark and T3 throws the following error:
1: These fields of record 56 in table "tt_content" have not been saved correctly: bodytext! The values might have changed due to type casting of the database.
I think there is an issue with the character set of T3 or my DB, but I don't know where to start looking.
Tested on my 7.6.8 and it seems to work OK. When I log in to my MySQL server and run this query:
SELECT default_character_set_name FROM information_schema.SCHEMATA
WHERE schema_name = "7_6_local_typo3_org";
(7_6_local_typo3_org is database name) it returns:
+-----------------------------+
| default_character_set_name  |
+-----------------------------+
| utf8                        |
+-----------------------------+
1 row in set (0.00 sec)
and also collation:
SELECT default_collation_name FROM information_schema.SCHEMATA
WHERE schema_name = "7_6_local_typo3_org";
+-------------------------+
| default_collation_name  |
+-------------------------+
| utf8_general_ci         |
+-------------------------+
1 row in set (0.00 sec)
Then also I have in my my.cnf (mysql config file):
character-set-server = utf8
collation-server = utf8_general_ci
A similar problem occurs when pasting HTML with UTF icons into a Raw-HTML content element in TYPO3 8.7.x, but it works when I encode the symbols, for example:
<span class="menuicon">&#8986;</span>
Possible reasons for the error message
1: These fields of record X in table "tt_content" have not been saved correctly: bodytext! The values might have changed due to type casting of the database.
in a TYPO3 installation (example installation's version: 10.4.20) can be:
- the MySQL/MariaDB tables of this TYPO3 installation are using an inappropriate/outdated character set and/or collation (Step 1 below).
- this TYPO3 installation is not yet configured to use utf8mb4 for the database (Step 2 below).
TYPO3 supports utf8mb4 since at least version 9.5. With it comes proper Unicode support, including emojis, mathematical symbols, and Greek letters (e.g. ⌚∰β) in CKEditor bodytext.
I migrated my TYPO3 installation's database and configuration to utf8mb4 in the following way, getting rid of the aforementioned error message and saving and displaying Unicode multibyte characters correctly.
Be sure to apply these migrations in a test environment first, then check existing content and test usual content editing scenarios before applying these migrations on a production system to make sure MySQL/MariaDB converted between the character sets correctly and without data loss (truncation).
Step 1
Update TYPO3 database tables to use utf8mb4 as character set and utf8mb4_unicode_ci as collation.
The following bash one-liner loops over all tables in database typo3 and applies these updates. It assumes MySQL/MariaDB root privileges, a password-less socket connection, and a TYPO3 database (table_schema) named typo3. Adapt accordingly. Tested successfully on
Debian 11 MariaDB Server (10.5.12-MariaDB-0+deb11u1)
Ubuntu 20.04 LTS MySQL Server (8.0.27-0ubuntu0.20.04.1)
for tbl in $(mysql --disable-column-names --batch -e 'select distinct TABLE_NAME from information_schema.tables where table_schema="typo3" and table_type="BASE TABLE";'); do echo "updating table $tbl" && mysql -e "ALTER TABLE typo3.${tbl} CONVERT TO CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci;"; done
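To verify the conversion afterwards, a quick check of the resulting table collations (same assumption of a database named typo3):
SELECT TABLE_NAME, TABLE_COLLATION FROM information_schema.TABLES WHERE TABLE_SCHEMA = 'typo3';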
To ensure that during this conversion (from a "smaller" encoding to the up-to-four-bytes-per-character utf8mb4 encoding) no (string) data gets lost/truncated, MySQL/MariaDB automatically adapts a text/string column's datatype to a larger text/string datatype, e.g. from TEXT to MEDIUMTEXT.
To restore some TYPO3 (extension) table's column back to its specified datatype, visit TYPO3 backend -> Maintenance -> Analyze Database Structure. This tool will allow you to restore those columns' original (smaller) datatypes. This may cause data truncation. I'm not sure whether TYPO3 warns if truncation actually occurs; though, assuming the TYPO3 (extension) developers had utf8mb4 in mind when specifying/designing a column's datatype and the user-provided content of a particular database cell is not too large, truncation should not happen (overview of text/string datatype sizes).
Step 2
Configure TYPO3 to use utf8mb4. For example, when leveraging typo3conf/AdditionalConfiguration.php, have the following configurations in AdditionalConfiguration.php:
// ...
$GLOBALS['TYPO3_CONF_VARS']['DB']['Connections']['Default']['charset'] = 'utf8mb4';
$GLOBALS['TYPO3_CONF_VARS']['DB']['Connections']['Default']['tableoptions']['charset'] = 'utf8mb4';
$GLOBALS['TYPO3_CONF_VARS']['DB']['Connections']['Default']['tableoptions']['collate'] = 'utf8mb4_unicode_ci';
// ...

PostgreSQL: How to modify the text before \copy it

Let's say I have some customer data like the following saved in a text file:
|Mr |Peter |Bradley |72 Milton Rise |Keynes |MK41 2HQ |
|Mr |Kevin |Carney |43 Glen Way |Lincoln |LI2 7RD | 786 3454
I copied the aforementioned data into my customer table using the following command:
\copy customer(title, fname, lname, addressline, town, zipcode, phone) from 'customer.txt' delimiter '|'
However, as it turns out, there are some extra space characters before and after various parts of the data. What I'd like to do is call trim() before copying the data into the table - what is the best way to achieve this?
Is there a way to call trim() on every value of every row and avoid inserting unclean data in the first place?
Thanks,
I think the best way to go about this is to add a BEFORE INSERT trigger to the table you're inserting into. This way, you can write a stored procedure that executes before every record is inserted and trims whitespace (or does any other transformations you may need) on any columns that need it. When you're done, simply remove the trigger (or leave it, which will improve data integrity if you never want that whitespace in those columns). I think explaining how to create a trigger and stored procedure in PostgreSQL is probably outside the scope of this question, but I will link to the documentation for each and sketch an example below.
I think this is the best way because it is simpler than parsing through a text file or writing shell code to do this. This kind of sanitization is the kind of thing triggers do very well and very simply.
Creating a Trigger
Creating a Trigger Function
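A minimal sketch of that approach, assuming the customer table and the column list from the \copy command in the question:
CREATE OR REPLACE FUNCTION trim_customer_fields() RETURNS trigger AS $$
BEGIN
    -- trim() removes leading and trailing spaces from each incoming value
    NEW.title       := trim(NEW.title);
    NEW.fname       := trim(NEW.fname);
    NEW.lname       := trim(NEW.lname);
    NEW.addressline := trim(NEW.addressline);
    NEW.town        := trim(NEW.town);
    NEW.zipcode     := trim(NEW.zipcode);
    NEW.phone       := trim(NEW.phone);
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER trim_customer_before_insert
    BEFORE INSERT ON customer
    FOR EACH ROW EXECUTE PROCEDURE trim_customer_fields();
With the trigger in place, the \copy from the question can be rerun unchanged and the rows arrive trimmed.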
I have a somewhat similar use case in one of my projects. My input files:
have the number of lines in the file as the last line;
need line numbers added to every line;
need a file_id added to every line.
I use the following piece of shell code:
FACT=$( dosql "TRUNCATE tab_raw RESTART IDENTITY;
COPY tab_raw(file_id,lnum,bnum,bname,a_day,a_month,a_year,a_time,etype,a_value)
FROM stdin WITH (DELIMITER '|', ENCODING 'latin1', NULL '');
$(sed -e '$d' -e '=' "$FILE"|sed -e 'N;s/\n/|/' -e 's/^/'$DSID'|/')
\.
VACUUM ANALYZE tab_raw;
SELECT count(*) FROM tab_raw;
" | sed -e 's/^[ ]*//' -e '/^$/d'
)
dosql is a shell function that executes psql with the proper connectivity info and runs everything given to it as an argument.
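The original dosql isn't shown; a minimal sketch of what it might look like (connection details are assumed to come from the PG* environment variables):
dosql() {
    printf '%s\n' "$1" | psql --quiet --no-align --tuples-only -v ON_ERROR_STOP=1
}
Feeding the script through stdin matters here, because the inline COPY ... FROM stdin data (terminated by \.) is read from the same stream.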
As a result of this operation I will have the $FACT variable holding the total count of inserted records (for error detection).
Later I do another dosql call:
dosql "SET work_mem TO '800MB';
SELECT tab_prepare($DSID);
VACUUM ANALYZE tab_raw;
SELECT tab_duplicates($DSID);
SELECT tab_dst($DSID);
SELECT tab_gaps($DSID);
SELECT tab($DSID);"
to analyze and move the data from the auxiliary table into the final tables.

Postgresql order by - danish characters is expanded

I'm trying to make an "order by" statement in a SQL query work. But for some reason the Danish special characters are expanded instead of being ordered by their own value.
SELECT roadname FROM mytable ORDER BY roadname
The result:
Abildlunden
Æblerosestien
Agern Alle 1
The result in the middle should be the last.
The locale is set to Danish, so it should know the value of the Danish special characters.
What is the collation of your database? (You might also want to give the PostgreSQL version you are using) Use "\l" from psql to see.
Compare and contrast:
steve#steve#[local] =# select * from (values('Abildlunden'),('Æblerosestien'),('Agern Alle 1')) x(word)
order by word collate "en_GB";
word
---------------
Abildlunden
Æblerosestien
Agern Alle 1
(3 rows)
steve#steve#[local] =# select * from (values('Abildlunden'),('Æblerosestien'),('Agern Alle 1')) x(word)
order by word collate "da_DK";
word
---------------
Abildlunden
Agern Alle 1
Æblerosestien
(3 rows)
The database collation is set when you create the database cluster, from the locale you have set at the time. If you installed PostgreSQL through a package manager (e.g. apt-get) then it is likely taken from the system-default locale.
You can override the collation used in a particular column, or even in a particular expression (as done in the examples above). However if you're not specifying anything (likely) then the database default will be used (which itself is inherited from the template database when the database is created, and the template database collation is fixed when the cluster is created)
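As a concrete illustration of those two override levels, a sketch (assuming the collation is named da_DK on your platform; it may be da_DK.utf8):
SELECT roadname FROM mytable ORDER BY roadname COLLATE "da_DK";        -- per-expression
ALTER TABLE mytable ALTER COLUMN roadname TYPE text COLLATE "da_DK";   -- per-column default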
If you want to use da_DK as your default collation throughout, and it's not currently your database default, your simplest option might be to dump the database, then drop and re-create the cluster, specifying the collation to initdb (or pg_createcluster or whatever tool you use to create the server)
BTW the question isn't well-phrased. PostgreSQL is very much not ignoring the "special" characters; it is correctly expanding "Æ" into "AE", which is a correct rule for English. Collating "Æ" at the end is actually more like the unlocalised behaviour.
Collation documentation: http://www.postgresql.org/docs/current/static/collation.html