PostgreSQL Error - Issue with Real Data Type - postgresql

The following is the error I get when trying to insert a number into a column with data type real in a Postgres DB:
ERROR: invalid input syntax for type real: "40960.00"
CONTEXT: COPY table_of_interest, line 1, column col_1: "40960.00"
I am a bit of a noob when it comes to Postgres; I have much more experience with Oracle, MySQL, and Microsoft SQL Server. I can't figure out why this does not insert. In the CSV it is inserting from, the column is just a basic number column, and the value does not contain the double quotes shown in the error message.
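(Aside: "40960.00" is itself valid input for real, so one thing worth checking is whether the CSV field carries an invisible character, such as a UTF-8 BOM at the start of the file or a non-breaking space. A quick pure-Python check, using a hypothetical field value for illustration:)

```python
# Hypothetical field value for illustration: a BOM hiding in front of the number.
field = "\ufeff40960.00"

def suspicious_chars(s):
    """Return any characters that are not plain ASCII digits, '.', '+' or '-'."""
    return [c for c in s if c not in "0123456789.+-"]

print(suspicious_chars(field))  # the hidden character shows up here
print(repr(field))              # repr() also exposes hidden characters
```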
I am also getting the following error when trying to insert via Python:
TypeError: not all arguments converted during string formatting
This is the first row I want to insert, as an example (the error above still occurs):
(Timestamp('2019-01-31 00:00:00'),
Timestamp('2018-10-03 00:00:00'),
'APP-552498',
'Company Name Lawyer',
'Funded',
36500,
1095.0,
1.35,
49275.0,
15509.0,
251.0,
'Daily',
1825.0,
196.31,
78,
0.0,
'Law Offices',
NaT,
'',
'CO',
8.4,
'Company Name',
0.7647,
38003.68,
7154.34,
'West',
33766.0,
'N')
I set the first two columns as type date, every column containing a number as real, and the strings as character varying. Of course, the column that has NaT is also a date (and, accordingly, a datetime in Python).
The following is the python code:
df_vals = [tuple(x) for x in df.values]
c.execute("""INSERT INTO schema.table VALUES (%s)""", df_vals[0])
c.executemany("""INSERT INTO schema.table VALUES (%s)""", df_vals)
where c is a cursor already created from conn.cursor().
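One likely source of the TypeError above: VALUES (%s) supplies a single placeholder while the row tuple has many values, and psycopg2 expects one %s per column. A minimal sketch of building a matching placeholder list (the table name and shortened row below are stand-ins from the question):

```python
# Shortened stand-in for df_vals[0]; the real row has many more columns.
row = ("APP-552498", 36500, 1095.0, "Daily")

# Build one %s per value in the row, then join them into the statement.
placeholders = ", ".join(["%s"] * len(row))
sql = f"INSERT INTO schema.table VALUES ({placeholders})"
print(sql)  # -> INSERT INTO schema.table VALUES (%s, %s, %s, %s)
# With a live connection this would be:
#   c.execute(sql, row)
#   c.executemany(sql, df_vals)
```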

Related

T-SQL Error converting data type on join, on non-joining field

A query is written to join data from two tables and group it. If the two lines defining the join are commented out, the query returns correctly. Here is the query (with the join commented out):
SELECT tblTurbineLocations.TurbineLayoutProjectID as ProjectID
,TurbineLayoutNumber
,Count(HubHeight) as NumTurbines
,tblTurbineLocations.WTCode
FROM [TurbineLayout].[dbo].[tblTurbineLocations]
--LEFT OUTER JOIN [TurbineModel].[dbo].[tblTurbineModels]
--ON str(tblTurbineLocations.WTCode) = str(tblTurbineModels.WTCode) --Need to force string conversion to avoid data type conflict.
WHERE tblTurbineLocations.TurbineLayoutProjectID = 2255
AND tblTurbineLocations.TurbineLayoutNumber IN (406, 407)
GROUP BY tblTurbineLocations.TurbineLayoutProjectID ,tblTurbineLocations.TurbineLayoutNumber ,tblTurbineLocations.WTCode
Removing the two commented join lines then attempts a join on the WTCode field. This returns the following error:
Msg 8114, Level 16, State 5, Line 2
Error converting data type nvarchar to float.
The error points to line 2, rather than the line containing the join. The error says that nvarchar cannot be converted to float. However, the column in line 2, tblTurbineLocations.TurbineLayoutProjectID, is not an nvarchar; it is an int.
Reviewing the other columns in the query, none are of type nvarchar save for the joining column, WTCode (nvarchar(11) in one table, nvarchar(5) in the other). Both are cast as strings to avoid a different error (which that cast resolves):
Cannot resolve the collation conflict between "Latin1_General_CI_AS" and "SQL_Latin1_General_CP1_CI_AS" in the equal to operation.
WTCode is not being cast as a float in the query.
What is the error in my code or my approach?
As indicated by Larnu in the comments:
The str function expects a float, so it raises this error when it is fed an nvarchar. To resolve the collation conflict, COLLATE can instead be applied after the join condition to specify which collation to use for the nvarchar fields. Thus the following works:
SELECT tblTurbineLocations.TurbineLayoutProjectID as ProjectID
,TurbineLayoutNumber
,Count(HubHeight) as NumTurbines
,tblTurbineLocations.WTCode
FROM [TurbineLayout].[dbo].[tblTurbineLocations]
LEFT OUTER JOIN [TurbineModel].[dbo].[tblTurbineModels]
ON tblTurbineLocations.WTCode = tblTurbineModels.WTCode
COLLATE Latin1_General_CS_AS_KS_WS
WHERE tblTurbineLocations.TurbineLayoutProjectID = 2255
AND tblTurbineLocations.TurbineLayoutNumber IN (406, 407)
GROUP BY tblTurbineLocations.TurbineLayoutProjectID ,tblTurbineLocations.TurbineLayoutNumber ,tblTurbineLocations.WTCode

Convert jsonb column to a user-defined type

I'm trying to convert each row in a jsonb column to a type that I've defined, and I can't quite seem to get there.
I have an app that scrapes articles from The Guardian Open Platform and dumps the responses (as jsonb) in an ingestion table, into a column called 'body'. Other columns are a sequential ID, and a timestamp extracted from the response payload that helps my app only scrape new data.
I'd like to move the response dump data into a properly-defined table, and as I know the schema of the response, I've defined a type (my_type).
I've been referring to section 9.16, "JSON Functions and Operators", in the Postgres docs. I can get a single record as my type:
select * from jsonb_populate_record(null::my_type, (select body from data_ingestion limit 1));
produces:

     id     |     type     |     sectionId      | ...
------------+--------------+--------------------+-----
 example_id | example_type | example_section_id | ...

(abbreviated for concision)
If I remove the limit, I get an error, which makes sense: the subquery would be providing multiple rows to jsonb_populate_record which only expects one.
I can get it to do multiple rows, but the result isn't broken into columns:
select jsonb_populate_record(null::my_type, body) from reviews_ingestion limit 3;
produces:
jsonb_populate_record
(example_id_1,example_type_1,example_section_id_1,...)
(example_id_2,example_type_2,example_section_id_2,...)
(example_id_3,example_type_3,example_section_id_3,...)
This is a bit odd; I would have expected to see column names, since that is, after all, the point of providing the type.
I'm aware I can do this by using Postgres JSON querying functionality, e.g.
select
body -> 'id' as id,
body -> 'type' as type,
body -> 'sectionId' as section_id,
...
from reviews_ingestion;
This works, but it seems quite inelegant, and I also lose the data types.
I've also considered aggregating all rows in the body column into a JSON array, so as to be able to supply this to jsonb_populate_recordset but this seems a bit of a silly approach, and unlikely to be performant.
Is there a way to achieve what I want, using Postgres functions?
Maybe you need this - to break my_type record into columns:
select (jsonb_populate_record(null::my_type, body)).*
from reviews_ingestion
limit 3;
-- or whatever other query clauses here
i.e. select everything from these my_type records. All column names and types are then in place.
Here is an illustration. My custom type is delmet, and the CTE t loosely mimics data_ingestion.
create type delmet as (x integer, y text, z boolean);
with t(i, j, k) as
(
values
(1, '{"x":10, "y":"Nope", "z":true}'::jsonb, 'cats'),
(2, '{"x":11, "y":"Yep", "z":false}', 'dogs'),
(3, '{"x":12, "y":null, "z":true}', 'parrots')
)
select i, (jsonb_populate_record(null::delmet, j)).*, k
from t;
Result:

 i | x  |  y   |   z   |    k
---+----+------+-------+---------
 1 | 10 | Nope | true  | cats
 2 | 11 | Yep  | false | dogs
 3 | 12 |      | true  | parrots

(the y cell in row 3 is empty because its value is null)

psycopg2 - TypeError: not all arguments converted during string formatting

I'm using python 3.8 and psycopg2
I'm trying to insert a record into the database.
I have a function that builds a query and returns a list with two values: the query string and the values.
As a test, I hard-coded the exact value of query[1], and that worked without error, but when I use query[1] as the values instead of the literal value itself, I get this error:
TypeError: not all arguments converted during string formatting
In my log I have these values for the query list, the result of my query construction function:
['INSERT INTO country (code, name, flag, update_time) VALUES(%s,%s,%s,%s)', "('US', 'USA', 'https://example.com/flags/us.svg', 1596551810)"]
query[0]
INSERT INTO country (code, name, flag, update_time) VALUES(%s,%s,%s,%s)
query[1]
('US', 'USA', 'https://example.com/flags/us.svg', 1596551810)
This is the code snippet:
cursor = connection.cursor()
query_insert = query[0]
query_values = tuple(query[1])
cursor.execute(query_insert, (query_values))
I tried converting the values to a tuple and wrapping them in parentheses, but the error persists.
If I hard-code the value of query[1] in my code as the values, it works fine, so I suppose the error is in the values part of the cursor.execute parameters.
Any help is welcome !
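Judging from the logged output, query[1] is a string that merely looks like a tuple, so tuple(query[1]) splits it into individual characters and the argument count no longer matches the four %s placeholders. A minimal sketch of the mismatch and one way to recover a real tuple (the cleaner fix would be to make the query-building function return a tuple instead of its string representation):

```python
import ast

# query[1] exactly as it appears in the question's log: a *string*.
raw = "('US', 'USA', 'https://example.com/flags/us.svg', 1596551810)"

# tuple() over a string yields one element per character, not four values.
chars = tuple(raw)

# ast.literal_eval parses the string back into a genuine 4-element tuple.
values = ast.literal_eval(raw)

print(len(chars), len(values))  # character count vs. the expected 4 values
print(values)
```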

How to insert value into uuid column in Postgres?

I have a table with a uuid column, and some of the rows are missing that data. I need to insert data into this uuid column. The data is entered manually, so we are suffixing the UUID with data from another column to differentiate rows, but this gives me an error.
UPDATE schema.table
SET uuid_column = CONCAT ('f7949f56-8840-5afa-8c6d-3b0f6e7f93e9', '-', id_column)
WHERE id_column = '1234';
Error: [42804] ERROR: column "uuid_column" is of type uuid but expression is of type text
Hint: You will need to rewrite or cast the expression.
Position: 45
I also tried
UPDATE schema.table
SET uuid_column = CONCAT ('f7949f56-8840-5afa-8c6d-3b0f6e7f93e9', '-', id_column)::uuid
WHERE id_column = '1234';
Error: [22P02] ERROR: invalid input syntax for uuid: "f7949f56-8840-5afa-8c6d-3b0f6e7f93e9-1234"
A UUID consists of 16 bytes, which you see displayed in hexadecimal notation.
You cannot have a UUID with fewer or more bytes.
I recommend using the type bytea if you really need to do such a thing.
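If the goal is simply a distinct but reproducible identifier per row, rather than a literal suffix, one possible alternative is deriving a new UUID from the existing one plus the id value, e.g. with Python's uuid5 (the base UUID and id value below are taken from the question):

```python
import uuid

# Deriving a deterministic UUID from the base UUID and the row's id,
# instead of appending "-1234" (which is no longer a valid UUID).
base = uuid.UUID("f7949f56-8840-5afa-8c6d-3b0f6e7f93e9")
id_column = "1234"

derived = uuid.uuid5(base, id_column)
print(derived)  # a valid 16-byte UUID, stable for the same (base, id) pair
```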

Fixing column "columnname" does not exist pgsql in database. Double quote vs single quote error

I have a table review(movie_id, user_id, reviewtext, date, time, likes, status).
I get the error
column "exist" does not exist LINE 1: INSERT INTO review values ($1, $2, $3,$4,$5 ,0,"exist") ^ )
when I try to insert values into a PostgreSQL database. I cannot modify the code anymore, so is there any way to make this work by altering the database, for example by adding a column?
The code to insert is as follows:
$query = $this->db->prepare('INSERT INTO review values (:movieid, :userid, :review,:date,:time ,0,"exist")');
$result = $query->execute(Array(':movieid' => $movieid, ':userid' => $userid, ':review' => $review, ':date' => $date, ':time' => $time));
I understand that the proper fix is to use single quotes around 'exist' (in PostgreSQL, double quotes denote identifiers such as column names, while single quotes denote string literals), but the only thing I can do is alter the database.
No, you can't.
If you had used a proper insert with named columns:
insert into review (column1, column2, column3) values (....)
then it would theoretically be possible by adding a column named "exist" and a trigger. But that would be very far from a sane solution.