How do you manually insert a uuid type field in a Postgres SQL DB?

I have a Postgres table with a column called user_uuid of type uuid. I also have a valid UUID value that I want to insert manually into that column, but I can't seem to find a way to do it.
This is an example of the statement I'm trying to execute:
insert into my_table (account_number, type, user_uuid) values ('1252', 'residential', 'dOfa6513-aOfd-4e78-9941-724b22804e9f');
I've tried appending ::UUID, which I read somewhere might work, and enclosing the UUID text value in curly brackets instead of single quotes. Neither has worked, and the docs are not helpful either. The error I get is the following:
invalid input syntax for type uuid: 'dOfa6513-aOfd-4e78-9941-724b22804e9f'

The UUID you're trying to insert is not a valid UUID.
You can check its validity here: https://www.freecodeformat.com/validate-uuid-guid.php
This is one example of a valid UUID: a8adfa00-6680-49b3-bf94-caa8c3f1d823.
You can try passing it into your insert query to check that it works.

There are two occurrences of the letter O in your UUID.
They should be the digit 0 (zero) instead, to make it a proper hexadecimal string: d0fa6513-a0fd-4e78-9941-724b22804e9f
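A quick way to check a candidate UUID without an online validator is Python's standard-library uuid module, which rejects any non-hexadecimal character. A minimal sketch; the two example strings are the ones from the question:

```python
import uuid

def is_valid_uuid(s: str) -> bool:
    """Return True if s parses as a UUID; the letter O is not a hex digit."""
    try:
        uuid.UUID(s)
        return True
    except ValueError:
        return False

print(is_valid_uuid("dOfa6513-aOfd-4e78-9941-724b22804e9f"))  # letter O: False
print(is_valid_uuid("d0fa6513-a0fd-4e78-9941-724b22804e9f"))  # digit 0: True
```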

Postgresql 15 - Trailing junk after numeric literal error

After updating to PostgreSQL 15, I realised that even though I have a column that accepts the UUID data type, it throws an error like the one below whenever I try to insert UUID data into the table:
Script :
INSERT INTO public.testing (uuid, rating) VALUES (${uuid}, ${rating})
Error:
error running query error: trailing junk after numeric literal at or near "45c"
Postgresql 15 release note:
Prevent numeric literals from having non-numeric trailing characters (Peter Eisentraut)
Is there any solution for this issue? Or there an alternative data type that allows storing UUID into my table?
It seems that you forgot the single quotes around the UUID, so that the PostgreSQL parser took the value for a subtraction and complained that there were letters mixed in with the digits. This may throw a different error on older PostgreSQL versions, but it won't do the right thing either.
Be careful about SQL injection when you quote the values.
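A way to avoid both problems at once (the missing quotes and the injection risk) is to validate the UUID client-side and pass both values as bound parameters, rather than interpolating them into the SQL string. A minimal Python sketch; the table and column names are taken from the question, and the actual driver call is only indicated in a comment:

```python
import uuid

def build_insert(user_uuid: str, rating: int):
    """Validate the UUID, then return the statement and parameters
    for the driver to bind (e.g. cur.execute(sql, params) with psycopg)."""
    sql = "INSERT INTO public.testing (uuid, rating) VALUES (%s, %s)"
    params = (str(uuid.UUID(user_uuid)), rating)  # raises ValueError on a bad UUID
    return sql, params

sql, params = build_insert("a8adfa00-6680-49b3-bf94-caa8c3f1d823", 5)
print(sql, params)
```

Because the driver quotes the bound values itself, neither the "trailing junk" error nor SQL injection can occur.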

Why can't Update set change data type Postgres

I have a CSV which contains numbers stored as strings, for example 1,200. When loaded in, these are stored as VARCHAR.
I'd like to store them as integers, so I tested the below:
update data
set stringy_number = replace (stringy_number,',','')::integer
This runs and removes the comma from the number but doesn't change the column type. I then tried:
update data
set stringy_number::integer = replace (stringy_number,',','')::integer
This threw a syntax error, at which point I switched to the following, which worked. But I don't understand why I can't change a column's data type along with an update:
alter table data
alter column stringy_number type integer using replace(stringy_number,',','')::integer;
update works on the values. You can cast from one data type to another, but the result is still cast back to the underlying column type.
For example, you can save a "number" in a text column because it is easy to cast a number to text; you cannot save a letter in a numeric column because that cast cannot (easily) be done.
alter column works on the column type itself. When changing the type, you can supply a custom transformation (the using clause) so the old data can be converted to match the new data type.
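The per-value transformation that the using clause performs can be checked in isolation; here is the same replace-then-cast step sketched in Python:

```python
def to_int(stringy_number: str) -> int:
    """Mirror replace(stringy_number, ',', '')::integer from the using clause."""
    return int(stringy_number.replace(",", ""))

print(to_int("1,200"))  # 1200
```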

Combining a non-*-projection and "returning" in Slick

I have two working statements to insert a row into a table named document with Slick.
The first one inserts all columns, taken from the row object and returns the ID generated by Postgres:
(Tables.Document returning Tables.Document.map(_.id)) +=
DocumentRow(id=-1, name="Name", fulltext="The text")
The second one ignores the column named fulltext and only inserts the name but does not return the generated ID:
Tables.Document.map(r => (r.name)) += ("Name")
How can I combine both (limiting the insert to a subset of columns and returning the generated ID at the same time)?
Background:
The reason why I want to exclude the fulltext column from the insert is the fact that it is of Postgres type tsvector, but the generated Slick code treats it as a String. At insert time the value (even if null or None) is converted into some text type which is incompatible with tsvector and raises an exception. I found no solution to insert a tsvector without an additional library. Please comment if you think there is and I should be rather following this path.
Although I believe the right way is to fix the tsvector issue, I don't have enough experience with Postgres to help you with it. As for your workaround, you can do it, and the code should look something like this:
(Tables.Document.map(r => (r.name)) returning Tables.Document.map(_.id)) += ("Name")
If you split it into parts, you can see that you first create a Query as in your second example, but then, rather than applying += to it immediately, you first chain it with returning and only then call +=.
P.S. What is the issue with using an additional Postgres-aware library?

How to determine which column is implicated in "value too long for type character varying"?

I'm programmatically adding data to a PostgreSQL table using Python and psycopg - this is working fine.
Occasionally though, a text value is too long for the containing column, so I get the message:
ERROR: value too long for type character varying(1000)
where the number is the width of the offending column.
Is there a way to determine which column has caused the error? (Aside from comparing each column's length to see whether it is 1000)
Many thanks to @Tometzky, whose comment pointed me in the right direction.
Rather than trying to determine which column caused the problem after the fact, I modified my Python script to ensure that the value was truncated before inserting into the database.
1. Access the table's schema using select column_name, data_type, character_maximum_length from information_schema.columns where table_name='test'
2. When building the INSERT statement, use the schema definition to identify character fields and truncate them if necessary.
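The two steps can be sketched as follows; the max_lengths dictionary stands in for a hypothetical result of the information_schema query above, since the sketch has no live database connection:

```python
# Hypothetical result of the information_schema.columns query:
# character_maximum_length per column, None for non-character types.
max_lengths = {"name": 1000, "comment": 5, "score": None}

def truncate_row(row: dict, max_lengths: dict) -> dict:
    """Clip string values to their column's declared maximum length."""
    out = {}
    for col, val in row.items():
        limit = max_lengths.get(col)
        if limit is not None and isinstance(val, str):
            val = val[:limit]
        out[col] = val
    return out

print(truncate_row({"name": "ok", "comment": "123456", "score": 7}, max_lengths))
```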
I don't think there's an easy way.
I tried setting VERBOSITY in psql, as I assumed this would help, but unfortunately it doesn't (on 9.4):
psql
\set VERBOSITY verbose
dbname=> create temporary table test (t varchar(5));
CREATE TABLE
dbname=> insert into test values ('123456');
ERROR: 22001: value too long for type character varying(5)
LOCATION: varchar, varchar.c:623
This might be something that warrants discussion on the mailing list, as you are not the only one with this problem.

how to understand column types from sql file?

I am not very familiar with column types. From another country, with another system, they sent me an SQL file and claim there is an image in it. I guess it is a byte array; however, I couldn't insert it into PostgreSQL. When I try to insert, it says:
LINE 1: ...ES ('00246c4e-1bc8-4dde-bb89-e9dee69990d5', '0', 0xffa0ffa40...
^
********** Error **********
ERROR: syntax error at or near "xffa0f
Could you please help me create the related table with its column properties?
I know it is not a good question, but here is the start of the SQL file:
INSERT INTO `fps` VALUES ('00246c4e-1bc8-4dde-bb89-e9dee69990d5', '0', 0xffa0ffa4003a0907000932d325cd000ae0f3199a010a41eff19a010b8e2......
What is the type of 0xffa0ff....?
'00246c4e-1bc8-4dde-bb89-e9dee69990d5' is a UUID.
'0' is just a character string. There are a few different string types to choose from. However, if all of these values are integers, you may want to create the column as an INTEGER instead.
0xff... is a hex string, though not in a format that Postgres will recognise. You can store this data in a bytea column, but in order for the INSERT to succeed, you will need to modify the script, replacing, for example,
0xab...ef
with
'\xab...ef'
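Rewriting every 0x… literal in the dump by hand is tedious, so the substitution can be scripted with a regular expression. A sketch, assuming the dump only uses 0x for binary literals and never inside quoted strings:

```python
import re

def fix_hex_literals(sql: str) -> str:
    """Rewrite MySQL-style 0xDEADBEEF literals as Postgres '\\xdeadbeef' bytea strings."""
    return re.sub(r"\b0x([0-9a-fA-F]+)\b",
                  lambda m: "'\\x" + m.group(1).lower() + "'",
                  sql)

stmt = "INSERT INTO fps VALUES ('00246c4e-1bc8-4dde-bb89-e9dee69990d5', '0', 0xffa0ffa4003a09);"
print(fix_hex_literals(stmt))
```

Note that the backticks around the table name in the original dump are MySQL syntax too and would need to be removed (or replaced with double quotes) for Postgres.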