I'm new to PostgreSQL and I'm trying to migrate my application from MySQL.
I have a table with the following structure:
Table "public.tbl_point"
Column | Type | Modifiers | Storage | Description
------------------------+-----------------------+-----------+----------+-------------
Tag_Id | integer | not null | plain |
Tag_Name | character varying(30) | not null | extended |
Quality | integer | not null | plain |
Execute | integer | not null | plain |
Output_Index | integer | not null | plain |
Last_Update | abstime | | plain |
Indexes:
"tbl_point_pkey" PRIMARY KEY, btree ("Tag_Id")
Triggers:
add_current_date_to_tbl_point BEFORE UPDATE ON tbl_point FOR EACH ROW EXECUTE PROCEDURE update_tbl_point()
Has OIDs: no
When I run the following query through a C program using libpq:
UPDATE tbl_point SET "Execute"=0 WHERE "Tag_Id"=0
I get the following error:
ERROR: record "new" has no field "last_update"
CONTEXT: PL/pgSQL function "update_tbl_point" line 3 at assignment
I get exactly the same error when I try to change the value of "Execute" or any other column using pgAdminIII.
Everything works fine if I change the column name from "Last_Update" to "last_update".
I found the same problem with other tables in my database, and it always happens with abstime or timestamp columns.
Your update_tbl_point function is probably doing something like this:
new.last_update = current_timestamp;
but it should be using new."Last_Update", so fix your trigger function.
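For reference, a minimal sketch of what a corrected trigger function could look like; the body is an assumption, since the original update_tbl_point isn't shown, and the quoted column name is the only point:

CREATE OR REPLACE FUNCTION update_tbl_point() RETURNS trigger AS $$
BEGIN
    -- double quotes preserve the mixed-case name the column was created with
    NEW."Last_Update" := now();  -- now() should coerce to abstime here; adjust if your version complains
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;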
Unquoted column names are folded to lower case in PostgreSQL (the opposite of what the SQL standard says, mind you), but double-quoted identifiers keep their case:
Quoting an identifier also makes it case-sensitive, whereas unquoted names are always folded to lower case. For example, the identifiers FOO, foo, and "foo" are considered the same by PostgreSQL, but "Foo" and "FOO" are different from these three and each other. (The folding of unquoted names to lower case in PostgreSQL is incompatible with the SQL standard, which says that unquoted names should be folded to upper case. Thus, foo should be equivalent to "FOO" not "foo" according to the standard. If you want to write portable applications you are advised to always quote a particular name or never quote it.)
So, if you do this:
create table pancakes (
    Eggs integer not null
)
then you can do any of these:
update pancakes set eggs = 11;
update pancakes set Eggs = 11;
update pancakes set EGGS = 11;
and it will work because all three forms are normalized to eggs. However, if you do this:
create table pancakes (
    "Eggs" integer not null
)
then you can do this:
update pancakes set "Eggs" = 11;
but not this:
update pancakes set eggs = 11;
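With the quoted definition, the unquoted form fails with an error along the lines of:

ERROR: column "eggs" of relation "pancakes" does not exist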
The usual practice with PostgreSQL is to use lower case identifiers everywhere so that you don't have to worry about it. I'd recommend the same naming scheme in other databases as well: having to quote everything just leaves you with a mess of double quotes (standard), backticks (MySQL), and brackets (SQL Server) in your SQL, and that won't make you any friends.
I have several CSVs with varying field names that I am copying into a Postgres database from an S3 data source. Quite a few of them contain empty strings, "", which I would like to convert to NULLs at import. When I attempt to copy, I get an error along the lines of this (the same issue occurs for other data types, integer, etc.):
psycopg2.errors.InvalidDatetimeFormat: invalid input syntax for type date: ""
I have tried using FORCE_NULL (field1, field2, field3) and this works for me, except I would like to do FORCE_NULL (*) and apply it to all of my columns, as I have a lot of fields I am bringing in that I'd like this applied to.
Is this available?
This is an example of my csv:
"ABC","tgif","123","","XyZ"
Use psycopg2's COPY functions, in this case copy_expert:
cat empty_str.csv
1, ,3,07/22/2022
2,test,4,
3,dog,,07/23/2022
create table empty_str_test(id integer, str_fld varchar, int_fld integer, date_fld date);
import psycopg2

con = psycopg2.connect("dbname=test user=postgres host=localhost port=5432")
cur = con.cursor()
with open("empty_str.csv") as csv_file:
    # In CSV format, unquoted empty fields are read as NULL by default
    cur.copy_expert("COPY empty_str_test FROM STDIN WITH csv", csv_file)
con.commit()
select * from empty_str_test ;
 id | str_fld | int_fld |  date_fld
----+---------+---------+------------
  1 |         |       3 | 2022-07-22
  2 | test    |       4 |
  3 | dog     |         | 2022-07-23
From the documentation for COPY:
NULL
Specifies the string that represents a null value. The default is \N (backslash-N) in text format, and an unquoted empty string in CSV format. You might prefer an empty string even in text format for cases where you don't want to distinguish nulls from empty strings. This option is not allowed when using binary format.
copy_expert allows you to specify the CSV format. If you use copy_from, it will use the text format.
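One caveat: the CSV default only reads unquoted empty fields as NULL, and the question's sample row has quoted empty strings (""), which is exactly what FORCE_NULL is for. A sketch against the example table above (newer PostgreSQL releases also accept FORCE_NULL * in place of the column list):

COPY empty_str_test FROM STDIN WITH (FORMAT csv, FORCE_NULL (str_fld, int_fld, date_fld));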
I have a column in a Postgresql table that is unique and character varying(10) type. The table contains old alpha-numeric values that I need to keep. Every time a new row is created from this point forward, I want it to be numeric only. I would like to get the max numeric-only value from this table for this column then create a new row with that max value incremented by 1.
Is there a way to query this table for the max numeric value only for this column?
For example, if this column currently has the values:
1111
A1111A
1234
1234A
3331
B3332
C-3333
33-D33
3**333*
Is there a query that will return 3333, AKA cut out all the non-numeric characters from the values and then perform a MAX() on them?
Not precisely what you're asking, but something that I think will work better for you.
To go over all the values in the column, strip out the non-digit characters, cast to integer, and return the max:
SELECT MAX(regexp_replace(my_column, '[^0-9]', '', 'g')::int) FROM public.foobar;
This gets you your max value... say 2999.
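One caveat, as a hedged sketch: if a value contains no digits at all, regexp_replace leaves an empty string, which fails the integer cast; NULLIF turns it into NULL so MAX() simply skips it:

SELECT MAX(NULLIF(regexp_replace(my_column, '[^0-9]', '', 'g'), '')::int)
FROM public.foobar;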
Now, going forward, consider making the default for your column a serial-like value and converting it to text. That way you set the "MAX" once, and then let Postgres do all the work for future values.
-- create a simple integer sequence
CREATE SEQUENCE public.foobar_my_column_seq
    INCREMENT 1
    MINVALUE 1
    MAXVALUE 9223372036854775807
    START 1
    CACHE 1;  -- CACHE must be at least 1; CACHE 0 is rejected
-- use the new sequence as the default value for the column __and__ convert to text
ALTER TABLE foobar
    ALTER COLUMN my_column
    SET DEFAULT nextval('public.foobar_my_column_seq'::regclass)::text;
-- initialize "next value" of sequence to whatever is larger than
-- what you already have in your data ... say 3000:
ALTER SEQUENCE public.foobar_my_column_seq RESTART WITH 3000;
Because you're simply setting a default, your existing alpha-numeric values are left unchanged.
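A quick sanity check of the new default, assuming the other columns of foobar (if any) have defaults of their own:

INSERT INTO foobar DEFAULT VALUES RETURNING my_column;  -- '3000'
INSERT INTO foobar DEFAULT VALUES RETURNING my_column;  -- '3001'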
I figured it out. The following query works.
select text_value, regexp_replace(text_value, '[^0-9]+', '') as new_value from the_table;
Result:
 text_value   |  new_value
--------------+-------------
 4*215474     | 4215474
 740024       | 740024
 4*100535     | 4100535
 42356        | 42356
 CASH         |
 4*215474     | 4215474
 740025       | 740025
 740026       | 740026
 4*5089655798 | 45089655798
 4*15680      | 415680
 4*224034     | 4224034
 4*265718708  | 4265718708
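To get the single max value the question asked for, the cleaned expression can be wrapped in MAX(). A sketch: note that a value like 45089655798 overflows a 32-bit integer, so this casts to bigint, and NULLIF skips rows like CASH that leave nothing after stripping:

select MAX(NULLIF(regexp_replace(text_value, '[^0-9]+', ''), '')::bigint) as max_value
from the_table;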
I have a column that I want to get an average of; the column is varchar(200). I keep getting this error. How do I convert the column to numeric and get its average?
Values in the column look like
16,000.00
15,000.00
16,000.00 etc
When I execute
select CAST((COALESCE( bonus,'0')) AS numeric)
from tableone
... I get
ERROR: invalid input syntax for type numeric:
The standard way to represent (as text) a numeric in SQL is something like:
16000.00
15000.00
16000.00
So, your commas in the text are hurting you.
The most sensible way to solve this problem would be to store the data just as a numeric instead of using a string (text, varchar, character) type, as already suggested by a_horse_with_no_name.
However, assuming this is done for a good reason, such as having inherited a design you cannot change, one possibility is to get rid of every character that is not a minus sign, digit, or period before casting to numeric.
Let's assume this is your input data
CREATE TABLE tableone
(
    bonus text
) ;
INSERT INTO tableone(bonus)
VALUES
('16,000.00'),
('15,000.00'),
('16,000.00'),
('something strange 25'),
('why do you actually use a "text" column if you could just define it as numeric(15,0)?'),
(NULL) ;
You can remove all the extraneous characters with regexp_replace and the proper regular expression ([^-0-9.]), applied globally:
SELECT
    CAST(
        COALESCE(
            NULLIF(
                regexp_replace(bonus, '[^-0-9.]+', '', 'g'),
                ''),
            '0')
        AS numeric)
FROM
    tableone ;
| coalesce |
| -------: |
| 16000.00 |
| 15000.00 |
| 16000.00 |
| 25 |
| 150 |
| 0 |
See what happens to the 15,0 in the last test string: it comes out as 150 (this may NOT be what you want).
Check everything at dbfiddle here
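To come back to the original question, the same cleaned expression can feed AVG(); a sketch against the sample table above:

SELECT AVG(CAST(COALESCE(NULLIF(regexp_replace(bonus, '[^-0-9.]+', '', 'g'), ''), '0') AS numeric)) AS avg_bonus
FROM tableone ;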
I'm going to go out on a limb and say that it might be because you have empty strings rather than NULLs in your column; this would result in the error you are seeing. Try wrapping the column name in a NULLIF:
SELECT CAST(coalesce(NULLIF(bonus, ''), '0') AS integer) as new_field
But I would really question a schema that stores numeric values in a varchar column...
I create a table gs:
foo=> create table gs as select generate_subscripts('{{1,2,3},{4,5,6}}'::integer[],2);
I alias the table and the column:
foo=> select s.a from gs s(a);
a
---
1
2
3
(3 rows)
If I only alias the table, I see composite types:
foo=> select s from gs s;
s
-----
(1)
(2)
(3)
(3 rows)
But when I only alias a function as if it were a table, I do not see composite types; instead it behaves as if I had aliased both the table and the column:
foo=> select s from generate_subscripts('{{1,2,3},{4,5,6}}'::integer[],2) s;
s
---
1
2
3
(3 rows)
I do not understand why I do not see composite types instead.
Set-returning functions (also known as table functions, or SRFs) are treated differently from actual relations (tables, views, etc.), especially when they return only one column (of a base type). From the documentation:
If no table_alias is specified, the function name is used as the table name; in the case of a ROWS FROM() construct, the first function's name is used.
If column aliases are not supplied, then for a function returning a base data type, the column name is also the same as the function name. For a function returning a composite type, the result columns get the names of the individual attributes of the type.
What is not covered, though, is the case when you supply a table alias but not a column alias (for a single-column SRF). In that case, the column alias will be the same as the table alias, so you can't access the function's row type (composite type) explicitly; its reference is hidden by the column alias.
select s from generate_subscripts('{{1,2,3},{4,5,6}}'::integer[],2) s;
-- "s" is both a column alias and a table alias here, so:
select s.s from generate_subscripts('{{1,2,3},{4,5,6}}'::integer[],2) s;
-- is also valid
More intriguing, however, is that when you use both an explicit table alias and an explicit column alias for a single-column SRF, the table alias also becomes a column alias (its type will be the base type, not a composite row type):
select s, a from generate_subscripts('{{1,2,3},{4,5,6}}'::integer[],2) s(a);
+---+---+
| s | a |
+---+---+
| 1 | 1 |
| 2 | 2 |
| 3 | 3 |
+---+---+
I'm not sure, though, whether it's a bug or just an undocumented "feature".
http://rextester.com/VJOBI47962
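If you actually need the composite row type back, one workaround (a sketch) is to wrap the SRF in a subquery, so that the outer alias names a real relation again:

select t from (select * from generate_subscripts('{{1,2,3},{4,5,6}}'::integer[],2) s(a)) t;
-- t is a genuine row type here: (1), (2), (3)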
I have two database tables:
# \d table_1
      Table "public.table_1"
   Column   |  Type   | Modifiers
------------+---------+-----------
 id         | integer |
 value      | integer |
 date_one   | date    |
 date_two   | date    |
 date_three | date    |

# \d table_2
     Table "public.table_2"
   Column   |  Type   | Modifiers
------------+---------+-----------
 id         | integer |
 table_1_id | integer |
 selector   | text    |
The values in table_2.selector can be one of 'one', 'two', or 'three', and are used to select one of the date columns in table_1.
My first implementation used a CASE:
SELECT value
FROM table_1
INNER JOIN table_2 ON table_2.table_1_id = table_1.id
WHERE CASE table_2.selector
          WHEN 'one' THEN
              table_1.date_one
          WHEN 'two' THEN
              table_1.date_two
          WHEN 'three' THEN
              table_1.date_three
          ELSE
              table_1.date_one
      END BETWEEN ? AND ?
The values for selector are such that I could identify the column of interest as eval(date_#{table_2.selector}), if PL/pgSQL allows evaluation of strings as expressions.
The closest I've been able to find is EXECUTE string, which evaluates entire statements. Is there a way to evaluate expressions?
In a PL/pgSQL function you can build any expression dynamically. That does not help in the case you describe, however: a query's text must be fixed before it is executed, while your choice of column is made per row, while the query runs.
Your query is the best approach. You may try to use a function, but it will not bring any benefit, as the essence of the issue remains unchanged.
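For illustration only, a hypothetical sketch of the EXECUTE route the question asked about: format() with %I can splice in a column name that is known before the query runs, but it cannot vary per joined row, which is why the CASE version remains the right tool:

CREATE OR REPLACE FUNCTION values_in_range(sel text, d1 date, d2 date)
RETURNS SETOF integer AS $$
BEGIN
    -- the column name is fixed once per call, not per row of table_2
    RETURN QUERY EXECUTE format(
        'SELECT t1.value
           FROM table_1 t1
           JOIN table_2 t2 ON t2.table_1_id = t1.id
          WHERE t1.%I BETWEEN $1 AND $2',
        'date_' || sel)
    USING d1, d2;
END;
$$ LANGUAGE plpgsql;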