Inputted string must be exactly 10 characters - postgresql

I am creating a column in postgres that must only accept 10 characters. VARCHAR(10) works in the sense that it allows data to be a maximum of 10 characters; however, I can't seem to figure out a way to define a minimum.
How can one do this using PostgreSQL 11.5?

Add a constraint to the table that checks the length:
ALTER TABLE tbl ADD CONSTRAINT check_min_len CHECK (length(col) >= 10);
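Combined with the varchar(10) cap, that check enforces exactly 10 characters. A small sketch (the table and constraint names are illustrative):
CREATE TABLE tbl (
    col varchar(10) NOT NULL,
    CONSTRAINT check_min_len CHECK (length(col) >= 10)
);
INSERT INTO tbl VALUES ('0123456789');  -- exactly 10 characters: accepted
INSERT INTO tbl VALUES ('short');       -- fewer than 10: violates the CHECK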

Is it possible to limit character length with byte size for Postgres?

I would just like to know whether it is possible to limit a character column's length by byte size in Postgres. Any advice would be appreciated.
Postgres version: 9.2.17
The length limit for a varchar column is in characters based on the encoding of the database. Unless you want to change your database to use a single-byte encoding (which I would strongly discourage), there is no direct way to do this.
What you can do is to use a check constraint that converts the character value to a byte array based on a specific encoding and then checks the length of the array:
alter table the_table
  add constraint check_byte_length
  check ( length(convert_to(the_column, 'UTF-8')) <= 42 );
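To illustrate, sticking with the answer's 42-byte example: a multibyte string can pass a character-based limit yet still exceed the byte limit, and the constraint catches it.
-- 'ä' is one character but two bytes in UTF-8
insert into the_table (the_column) values (repeat('a', 30));  -- 30 bytes: accepted
insert into the_table (the_column) values (repeat('ä', 30));  -- 60 bytes: rejected by the check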

POSTGRESQL:autoincrement for varchar type field

I'm switching from MongoDB to PostgreSQL and was wondering how I can implement the same concept MongoDB uses for uniquely identifying each row by its ObjectId.
After the migration, the already existing unique fields in our database are stored as a character type. I am looking for minimal source code changes.
Is there any way in PostgreSQL to generate an auto-incrementing unique ID for each insert into a table?
The closest thing to MongoDB's ObjectId in PostgreSQL is the uuid type. Note that ObjectId has only 12 bytes, while UUIDs have 128 bits (16 bytes).
You can convert your existing IDs by appending (or prepending) e.g. '00000000' to them:
alter table some_table
alter id_column
type uuid
using (id_column || '00000000')::uuid;
It would be best if you could do this while migrating the schema + data. If you can't do it during the migration, you need to update your IDs (while they are still varchars, so the change propagates to the referencing columns), drop the foreign keys, do the ALTER ... TYPE, and then re-apply the foreign keys, as sketched below.
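A minimal sketch of that order, assuming hypothetical table, column, and constraint names, and assuming the foreign key was declared with ON UPDATE CASCADE (otherwise the referencing rows must be updated explicitly):
begin;
-- 1. pad the 24-hex-digit ObjectIds to 32 digits while both sides are still varchar;
--    ON UPDATE CASCADE propagates the new values to child_table
update some_table set id_column = id_column || '00000000';
-- 2. drop the foreign key before changing the types
alter table child_table drop constraint child_table_some_table_fk;
-- 3. change the type on both sides
alter table some_table alter id_column type uuid using id_column::uuid;
alter table child_table alter parent_id type uuid using parent_id::uuid;
-- 4. re-apply the foreign key
alter table child_table
  add constraint child_table_some_table_fk
  foreign key (parent_id) references some_table (id_column);
commit;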
You can generate various UUIDs (for default values of the column) with the uuid-ossp module.
create extension "uuid-ossp";
alter table some_table
alter id_column
set default uuid_generate_v4();
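With that default in place, inserts that omit the ID column get a fresh v4 UUID automatically; other_column below is a hypothetical placeholder:
insert into some_table (other_column)
values ('example')
returning id_column;  -- returns the generated uuid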
Use a sequence as a default for the column:
create sequence some_id_sequence
start with 100000
owned by some_table.id_column;
The start with value should be bigger than your current maximum number.
Then use that sequence as a default for your column:
alter table some_table
alter id_column set default nextval('some_id_sequence')::text;
The better solution would be to change the column to an integer column. Storing numbers in a text (or varchar) column is a really bad idea.
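If you can make that change, a minimal sketch (it assumes every stored value is numeric text and reuses the sequence from above):
-- drop the old text default so it does not interfere with the type change
alter table some_table alter id_column drop default;
-- convert the stored digits to bigint
alter table some_table alter id_column type bigint using id_column::bigint;
-- re-attach the sequence, now without the ::text cast
alter table some_table alter id_column set default nextval('some_id_sequence');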

Prevent non-collation characters in a NVarChar column using constraint?

Little weird requirement, but here it goes. We have a CustomerId VarChar(25) column in a table. We need to make it NVarChar(25) to work around issues with type conversions.
But we don't want to allow non-Latin characters to be stored in this column. Is there any way to place such a constraint on the column? I'd rather let the database handle this check. In general we're OK with NVarChar for all of our strings, but some columns, like IDs, are not good candidates for it because of the possibility of look-alike strings from different languages.
Example:
CustomerId NVarChar(25) - PK
Value 1: BOPOH
Value 2: ВОРОН
Those two strings are different (the second one is Cyrillic).
I want to prevent this entry scenario: I want to make sure Value 2 cannot be saved into the field.
Just in case it helps somebody: not sure it's the most "elegant" solution, but I placed a constraint like this on those fields:
ALTER TABLE [dbo].[Carrier] WITH CHECK ADD CONSTRAINT [CK_Carrier_CarrierId] CHECK ((CONVERT([varchar](25),[CarrierId],(0))=[CarrierId]))
GO
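This works because converting the NVarChar value to VarChar maps characters outside the column's code page to '?', so the round-trip equality fails for Cyrillic input. A quick hypothetical check, assuming a Latin1 collation:
SELECT CONVERT(varchar(25), N'ВОРОН', 0);  -- returns '?????' under a Latin1 collation
SELECT CONVERT(varchar(25), N'BOPOH', 0);  -- returns 'BOPOH' (Latin letters survive)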

get index from postgresql sequence using liquibase

Which column attribute should I use in order to get the value from a postgresql sequence? valueNumeric? valueComputed?
As far as I understand, the value of the attribute should be nextval('simple_id_seq').
In PostgreSQL, sequence values are created as INTEGER or BIGINT.
Often this is done by using SERIAL or BIGSERIAL as the column type, which indirectly creates a sequence of int or bigint and sets the default value of the column to nextval(sequence).
In a result set of table data the column contains int or bigint.
Normally there is no need to call nextval(sequence) yourself: the default fills the column on INSERT automatically (the column should not appear in the INSERT statement).
Refer to http://www.postgresql.org/docs/9.3/static/datatype-numeric.html
If you do not want to use SERIAL or BIGSERIAL as suggested by @double_word_distruptor, use valueComputed.
With valueComputed you are telling Liquibase you are passing a function like nextval('simple_id_seq') and it will not try to parse it as a number or do any quoting.
You may also be able to use valueSequenceNext="simple_id_seq" to gain a little cross-database compatibility.
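A minimal hypothetical changeSet showing both options (the table, column, and sequence names are assumptions):
<changeSet id="insert-example" author="example">
    <!-- valueComputed passes the expression through unparsed -->
    <insert tableName="simple_table">
        <column name="id" valueComputed="nextval('simple_id_seq')"/>
        <column name="name" value="first row"/>
    </insert>
    <!-- valueSequenceNext is the more portable spelling -->
    <insert tableName="simple_table">
        <column name="id" valueSequenceNext="simple_id_seq"/>
        <column name="name" value="second row"/>
    </insert>
</changeSet>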

Increasing the size of character varying type in postgres without data loss

I need to increase the size of a character varying(60) field in a postgres database table without data loss.
I have this command
alter table client_details alter column name set character varying(200);
Will this command increase the field size from 60 to 200 without data loss?
The correct query to change the data type limit of the particular column:
ALTER TABLE client_details ALTER COLUMN name TYPE character varying(200);
Referring to this documentation, there would be no data loss; ALTER COLUMN only casts the old data to the new type, and a cast between character types should be fine. But I don't think your syntax is correct; see the documentation I mentioned earlier. I think you should be using this syntax:
ALTER [ COLUMN ] column TYPE type [ USING expression ]
And as a note, wouldn't it be easier to just create a table, populate it and test :)
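For instance, a quick throwaway test (hypothetical table) shows the data survives the widening:
CREATE TABLE t (name character varying(60));
INSERT INTO t VALUES ('some name');
ALTER TABLE t ALTER COLUMN name TYPE character varying(200);
SELECT name FROM t;  -- still returns 'some name'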
Yes. But it will rewrite the table and lock it exclusively for the duration of the rewrite; any query trying to access the table will wait until the rewrite finishes.
Consider changing the type to text and using a check constraint to limit the size instead: changing a constraint does not rewrite or lock the table. A sketch of that approach follows.
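Reusing the question's table and column names (the constraint name is illustrative):
ALTER TABLE client_details ALTER COLUMN name TYPE text;
ALTER TABLE client_details
  ADD CONSTRAINT name_max_length CHECK (length(name) <= 200);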
You can use the SQL command below:
ALTER TABLE client_details
ALTER COLUMN name TYPE varchar(200);
From the PostgreSQL 9.2 Release Notes, E.15.3.4.2:
Increasing the length limit for a varchar or varbit column, or removing the limit altogether, no longer requires a table rewrite.
Changing the column size in PostgreSQL 9.1: when changing a varchar column to a larger size, a table rewrite is required; a lock is held on the table during the rewrite, and users cannot access the table until the rewrite is done.
Table name: userdata
Column name: acc_no
ALTER TABLE userdata ALTER COLUMN acc_no TYPE varchar(250);