Unable to insert data into postgresql table using flat file with copy command - postgresql

My table structure is
company=# \d address
Table "public.address"
Column | Type | Modifiers
----------+-----------------------+-----------
name | character varying(80) |
age | integer |
dob | date |
village | character varying(8) |
locality | character varying(80) |
district | character varying(80) |
state | character varying(80) |
pin | integer |
and I have the following data in the flat file (*.txt file):
insert into address(name,age,dob,village,locality,district,state,pin)
values('David',43,'1972-10-23','Elchuru','Addanki','Prakasam','AP',544421);
insert into address(name,age,dob,village,locality,district,state,pin)
values('George',53,'1962-10-23','London','London','LN','LN',544421);
insert into address(name,age,dob,village,locality,district,state,pin)
values('David',28,'1982-10-23','Ongole','Ongole','Prakasam','AP',520421);
Now I am trying to load it into my table 'address' using the following query in the psql shell:
copy address from 'C:/P Files/address_data.txt';
Error is:
company=# copy address from 'C:/P Files/address_data.txt';
ERROR: value too long for type character varying(80)
CONTEXT: COPY address, line 1, column name: "insert into address(name,age,dob,village,locality,district,state,pin) values('David',43,'1972-10-23'..."
Please suggest the modifications needed to the above query.

You don't have a data file. You have a file with a set of commands.
You can use the psql command to execute the inserts.
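For example, either of these would run the INSERT statements (a sketch, assuming the database is named company as in your prompt):
psql -d company -f "C:/P Files/address_data.txt"
or, from inside psql:
company=# \i 'C:/P Files/address_data.txt'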
A data file would look more like this:
David,43,1972-10-23,Elchuru,Addanki,Prakasam,AP,544421
George,53,1962-10-23,London,London,LN,LN,544421
David,28,1982-10-23,Ongole,Ongole,Prakasam,AP,520421
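Note that this sample is comma-separated, while COPY defaults to PostgreSQL's tab-delimited text format, so you would also need to tell COPY to parse CSV, e.g.:
copy address from 'C:/P Files/address_data.txt' with (format csv);
(WITH (FORMAT csv) needs PostgreSQL 9.0 or later; older versions accept WITH CSV.)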

Related

unique constraint values in postgres

I applied a unique constraint over email on a Postgres USER table. The problem that I am facing now is that the constraint seems to remember each value I insert (or try to insert), no matter whether a record with that value currently exists or not.
I.e., the table:
| id | user            |
|----|-----------------|
| 1  | mail#gmail.com  |
| 2  | mail2#gmail.com |
If I insert mail3#gmail.com, delete the row, and try to insert mail3#gmail.com again, it says:
sqlalchemy.exc.IntegrityError: (psycopg2.errors.UniqueViolation) duplicate key value violates unique constraint "email"
My doubt is: does the unique constraint guarantee that the value is always new, or that there is only one record of that value in the column?
The documentation says the second, but experience shows the first.
More details:
| Column | Type | Nullable |
|------------------|-----------------------------|----------|
| id | integer | not null |
| email | character varying(100) | |
| password | character varying(100) | |
| name | character varying(1000) | |
| lastname | character varying(1000) | |
| dni | character varying(20) | |
| cellphone | character varying(20) | |
| accepted_terms | boolean | |
| investor_test | boolean | |
| validated_email | boolean | |
| validated_cel | boolean | |
| last_login_at | timestamp without time zone | |
| current_login_at | timestamp without time zone | |
| last_login_ip | character varying(100) | |
| current_login_ip | character varying(100) | |
| login_count | integer | |
| active | boolean | |
| fs_uniquifier | character varying(255) | not null |
| confirmed_at | timestamp without time zone | |
Indexes:
"bondusers_pkey" PRIMARY KEY, btree (id)
"bondusers_email_key" UNIQUE CONSTRAINT, btree (email)
"bondusers_fs_uniquifier_key" UNIQUE CONSTRAINT, btree (fs_uniquifier)
Insert Statement:
INSERT INTO bondusers (email, password, name, lastname, dni, cellphone, accepted_terms, investor_test, validated_email, validated_cel, last_login_at, current_login_at, last_login_ip, current_login_ip, login_count, active, fs_uniquifier, confirmed_at) VALUES ('mail3#gmail.com', '$pbkdf2-sha256$29000$XyvlfI8x5vwfYwyhtBYi5A$Hhfrzvqs94MjTCmDOVmmnbUyf7ho4kLEY8UYUCdHPgM', 'mail', 'mail3', '123123123', '1139199196', false, false, false, false, NULL, NULL, NULL, NULL, NULL, true, '1c4e60b34a5641f4b560f8fd1d45872c', NULL);
ERROR: duplicate key value violates unique constraint "bondusers_fs_uniquifier_key"
DETAIL: Key (fs_uniquifier)=(1c4e60b34a5641f4b560f8fd1d45872c) already exists.
but when:
select * from bondusers where fs_uniquifier = '1c4e60b34a5641f4b560f8fd1d45872c';
result is 0 rows
I assume that if you run the INSERT, DELETE, INSERT directly within Postgres command line it works OK?
I noticed your error references SQLAlchemy (sqlalchemy.exc.IntegrityError), so I think it may be that and not PostgreSQL. Within a transaction SQLAlchemy's Unit of Work pattern can re-order SQL statements for performance reasons.
The only ref I could find was here https://github.com/sqlalchemy/sqlalchemy/issues/5735#issuecomment-735939061 :
if there are no dependency cycles between the target tables, the flush proceeds as follows:
<snip/>
a. within a particular table, INSERT operations are processed in the order in which objects were add()'ed
b. within a particular table, UPDATE and DELETE operations are processed in primary key order
So if you have the following within a single transaction:
INSERT x
DELETE x
INSERT x
when you commit it, it's probably getting reordered as:
INSERT x
INSERT x
DELETE x
I have more experience with this problem in Java/Hibernate. The SQLAlchemy docs do claim its unit of work pattern is "Modeled after Fowler's "Unit of Work" pattern as well as Hibernate, Java's leading object-relational mapper", so it is probably relevant here too.
To supplement Ed Brook's insightful answer, you can work around the problem by flushing the session after deleting the record:
import sqlalchemy as sa
# Session is assumed to be your configured sessionmaker
with Session() as s, s.begin():
    u = s.scalars(sa.select(User).where(User.user == 'a')).first()
    s.delete(u)
    s.flush()  # forces the DELETE to be emitted before the INSERT below
    s.add(User(user='a'))
Another solution would be to use a deferred constraint, so that the state of the index is not evaluated until the end of the transaction:
class User(Base):
    ...
    __table_args__ = (
        sa.UniqueConstraint('user', deferrable=True, initially='deferred'),
    )
but note, from the PostgreSQL documentation:
deferrable constraints cannot be used as conflict arbitrators in an INSERT statement that includes an ON CONFLICT DO UPDATE clause.
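For reference, the plain-SQL equivalent on the table above would be something like the following sketch, reusing the constraint name from your \d output:
ALTER TABLE bondusers DROP CONSTRAINT bondusers_email_key;
ALTER TABLE bondusers ADD CONSTRAINT bondusers_email_key
    UNIQUE (email) DEFERRABLE INITIALLY DEFERRED;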

How to change column type in PostgreSQL from text to array and cast only non-null values?

I have a table in PostgreSQL:
| id | country | type |
|----|---------|------|
| 1  | USA     | FOO  |
| 2  | null    | BAR  |
I want to change the column type for the country column from text to array and cast to the new type only non-null values to have the table look as follows:
| id | country | type |
|----|---------|------|
| 1  | {USA}   | FOO  |
| 2  | null    | BAR  |
So far, I have come up with this expression, which casts every value to an array, so for the 2nd row I end up with an array containing a null value ({NULL}) rather than null.
ALTER TABLE my_table
ALTER COLUMN country TYPE TEXT[]
USING ARRAY[country];
How can I use the USING expression to cast only non-null values?
Simply do
ALTER TABLE my_table
ALTER COLUMN country TYPE TEXT[]
USING string_to_array(country, '');
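This works because an empty delimiter makes the whole string a single array element, and string_to_array() returns NULL for NULL input, so existing nulls are preserved. A quick check:
SELECT string_to_array('USA', '');  -- {USA}
SELECT string_to_array(NULL, '');   -- NULL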
You can use a CASE expression:
ALTER TABLE my_table
ALTER COLUMN country TYPE TEXT[]
USING CASE
    WHEN country IS NULL THEN NULL
    ELSE ARRAY[country]
END;

Select rows where column matches any IP address in inet[] array

I'm trying to display rows that have at least one value in an inet[] type column.
I really don't know any better, so it seemed easiest to use something like the query below; but it returns results with {}, which I guess counts as empty according to the inet[] type, yet not as null from the perspective of the IS NOT NULL query?
peering_manager=# select asn,name,potential_internet_exchange_peering_sessions from peering_autonomoussystem where potential_internet_exchange_peering_sessions is not null order by potential_internet_exchange_peering_sessions limit 1;
asn | name | potential_internet_exchange_peering_sessions
------+---------------------------------+----------------------------------------------
6128 | Cablevision Systems Corporation | {}
(1 row)
peering_manager=#
So, trying to dig a little more into it, I thought I could try to match the existence of any valid IP address in the inet[] column. However, I'm getting an error, and I don't understand what it's referring to or how to resolve it to achieve the desired results:
peering_manager=# select asn,name,potential_internet_exchange_peering_sessions from peering_autonomoussystem where potential_internet_exchange_peering_sessions << inet '0.0.0.0/0';
ERROR: operator does not exist: inet[] << inet
LINE 1: ...here potential_internet_exchange_peering_sessions << inet '0...
^
HINT: No operator matches the given name and argument type(s). You might need to add explicit type casts.
peering_manager=#
Maybe it's saying that the << operator is invalid for the inet[] type, or that << is an invalid operation when comparing an inet value against a value stored as inet[]? Or something else?
In any event, I'm kind of lost. Maybe there's a better way to do this?
Here's the table, and a sample of the data set I'm working with.
peering_manager=# \d peering_autonomoussystem;
Table "public.peering_autonomoussystem"
Column | Type | Modifiers
----------------------------------------------+--------------------------+-----------------------------------------------------------------------
id | integer | not null default nextval('peering_autonomoussystem_id_seq'::regclass)
asn | bigint | not null
name | character varying(128) | not null
comment | text | not null
ipv6_max_prefixes | integer | not null
ipv4_max_prefixes | integer | not null
updated | timestamp with time zone |
irr_as_set | character varying(255) |
ipv4_max_prefixes_peeringdb_sync | boolean | not null
ipv6_max_prefixes_peeringdb_sync | boolean | not null
irr_as_set_peeringdb_sync | boolean | not null
created | timestamp with time zone |
potential_internet_exchange_peering_sessions | inet[] | not null
contact_email | character varying(254) | not null
contact_name | character varying(50) | not null
contact_phone | character varying(20) | not null
Indexes:
"peering_autonomoussystem_pkey" PRIMARY KEY, btree (id)
"peering_autonomoussystem_asn_ec0373c4_uniq" UNIQUE CONSTRAINT, btree (asn)
Check constraints:
"peering_autonomoussystem_ipv4_max_prefixes_check" CHECK (ipv4_max_prefixes >= 0)
"peering_autonomoussystem_ipv6_max_prefixes_check" CHECK (ipv6_max_prefixes >= 0)
Referenced by:
TABLE "peering_directpeeringsession" CONSTRAINT "peering_directpeerin_autonomous_system_id_691dbc97_fk_peering_a" FOREIGN KEY (autonomous_system_id) REFERENCES peering_autonomoussystem(id) DEFERRABLE INITIALLY DEFERRED
TABLE "peering_internetexchangepeeringsession" CONSTRAINT "peering_peeringsessi_autonomous_system_id_9ffc404f_fk_peering_a" FOREIGN KEY (autonomous_system_id) REFERENCES peering_autonomoussystem(id) DEFERRABLE INITIALLY DEFERRED
peering_manager=#
peering_manager=# select asn,name,potential_internet_exchange_peering_sessions from peering_autonomoussystem limit 7;
asn | name | potential_internet_exchange_peering_sessions
-------+---------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
37662 | WIOCC | {2001:504:1::a503:7662:1,198.32.160.70}
38001 | NewMedia Express Pte Ltd | {2001:504:16::9471,206.81.81.204}
46562 | Total Server Solutions | {2001:504:1::a504:6562:1,198.32.160.12,2001:504:16::b5e2,206.81.81.81,2001:504:1a::35:21,206.108.35.21,2001:504:2d::18:80,198.179.18.80,2001:504:36::b5e2:0:1,206.82.104.156}
55081 | 24Shells Inc | {2001:504:1::a505:5081:1,198.32.160.135}
62887 | Whitesky Communications | {2001:504:16::f5a7,206.81.81.209}
2603 | NORDUnet | {2001:504:1::a500:2603:1,198.32.160.21}
6128 | Cablevision Systems Corporation | {}
(7 rows)
You can use array_length(). On empty arrays or nulls this returns NULL.
...
WHERE array_length(potential_internet_exchange_peering_sessions, 1) IS NOT NULL
...
It is better to compare the array length with an integer:
...
WHERE array_length(potential_internet_exchange_peering_sessions, 1) > 0
...
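Another option, if you are on PostgreSQL 9.4 or later, is cardinality(), which returns 0 rather than NULL for empty arrays, so the comparison reads more directly:
SELECT asn, name, potential_internet_exchange_peering_sessions
FROM peering_autonomoussystem
WHERE cardinality(potential_internet_exchange_peering_sessions) > 0;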

Cannot Save Grafana Datasource - not-null Postgres Error

I no longer seem to be able to save new Datasources in Grafana.
In particular I am trying to add a new InfluxDB database as a datasource. When hitting the Add button it pops up a Problem! Failed to add datasource error in the UI.
The Grafana logs show the following:
t=2018-07-17T09:59:32+0000 lvl=eror msg="Failed to add datasource" logger=context userId=0 orgId=1 uname= error="pq: null value in column \"id\" violates not-null constraint"
Checking the Database logs (PostgreSQL) there is a related error:
2018-07-19 07:12:46 UTC:10.204.145.134(36768):admin#grafana:[477]:DETAIL: Failing row contains (null, 1, 0, influxdb, jenkins, proxy, http://localhost:8086, root, root, jenkins, f, , , f, {}, 2018-07-19 07:12:46, 2018-07-19 07:12:46, f, {}).
2018-07-19 07:12:46 UTC:10.204.145.134(36768):admin#grafana:[477]:STATEMENT: INSERT INTO "data_source"
As you can see, the UI seems to be trying to insert null as the id, which produces the error.
Although we recently migrated databases (from one PG instance to another, same version), the config did not change and there don't appear to have been any other ill effects.
EDIT: It seems this actually affects any database operation Grafana tries to perform when adding new resources. I just had a dev try to import a new dashboard, and the PostgreSQL logs show a similar error:
2018-07-19 08:05:07 UTC:10.204.25.220(34412):sharedadmin#grafana:[14263]:DETAIL: Failing row contains (null, 1, pcs-again, PCS-AGAIN, {"__requires":[{"id":"grafana","name":"Grafana","type":"grafana"..., 1, 2018-07-19 08:05:07, 2018-07-19 08:05:07, -1, -1, 0, ).
After much delving we managed to find the answer. The issue lies with the AWS Database Migration Service (DMS) we used to migrate from one RDS instance to another. It seems that DMS does not handle PostgreSQL-to-PostgreSQL migrations well; some caveats can be found in the docs here.
In the case of Grafana the streaming replication did not pick up the column modifiers. One of the migrated tables:
grafana-> \d data_source
Table "public.data_source"
Column | Type | Modifiers
---------------------+--------------------------------+-----------
id | integer | not null
org_id | bigint | not null
version | integer | not null
type | character varying(255) | not null
name | character varying(190) | not null
access | character varying(255) | not null
url | character varying(255) | not null
password | character varying(255) |
user | character varying(255) |
database | character varying(255) |
basic_auth | boolean | not null
basic_auth_user | character varying(255) |
basic_auth_password | character varying(255) |
is_default | boolean | not null
json_data | text |
created | timestamp(6) without time zone | not null
updated | timestamp(6) without time zone | not null
with_credentials | boolean | not null
secure_json_data | text |
Indexes:
"data_source_pkey" PRIMARY KEY, btree (id)
and the corresponding table from a non-migrated instance:
grafana=> \d data_source
Table "public.data_source"
Column | Type | Modifiers
---------------------+-----------------------------+-----------------------------------------------------------
id | integer | not null default nextval('data_source_id_seq1'::regclass)
org_id | bigint | not null
version | integer | not null
type | character varying(255) | not null
name | character varying(190) | not null
access | character varying(255) | not null
url | character varying(255) | not null
password | character varying(255) |
user | character varying(255) |
database | character varying(255) |
basic_auth | boolean | not null
basic_auth_user | character varying(255) |
basic_auth_password | character varying(255) |
is_default | boolean | not null
json_data | text |
created | timestamp without time zone | not null
updated | timestamp without time zone | not null
with_credentials | boolean | not null default false
secure_json_data | text |
Indexes:
"data_source_pkey1" PRIMARY KEY, btree (id)
"UQE_data_source_org_id_name" UNIQUE, btree (org_id, name)
"IDX_data_source_org_id" btree (org_id)
The moral of the story is that DMS is not suitable for all databases; read the documentation thoroughly, and in some cases using the native PostgreSQL tools is better.
In order to fix this particular issue, we dropped the database (after making sure we had exports of all the dashboards), re-created it, then restarted Grafana.
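If dropping the database is not an option, restoring the missing default by hand should also work; a sketch for data_source (the sequence name is illustrative, and the setval() call keeps nextval() clear of ids that already exist):
CREATE SEQUENCE data_source_id_seq OWNED BY data_source.id;
SELECT setval('data_source_id_seq',
              COALESCE((SELECT max(id) FROM data_source), 0) + 1,
              false);
ALTER TABLE data_source ALTER COLUMN id
    SET DEFAULT nextval('data_source_id_seq');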
Please check your PSQL tables data_source and user_auth.
grafana=# \d data_source
Table "public.data_source"
Column | Type | Collation | Nullable | Default
---------------------+-----------------------------+-----------+----------+-----------------------------------------
id | integer | | not null | nextval('data_source_id_seq'::regclass) <=== (probably this default is fine)
grafana=# \d user_auth
Table "public.user_auth"
Column | Type | Collation | Nullable | Default
----------------------+-----------------------------+-----------+----------+---------------------------------------
id | bigint | | not null | <=== (probably the default is blank)
Log on as the grafana user in the grafana PSQL DB and run:
Alter table user_auth:
alter table user_auth
    alter column id TYPE integer,
    alter column id SET not null;
CREATE SEQUENCE user_auth_id_seq
START WITH 1
MINVALUE 1
NO MAXVALUE
CACHE 1;
CMD:
grafana=# CREATE SEQUENCE user_auth_id_seq
grafana-# START WITH 1
grafana-# MINVALUE 1
grafana-# NO MAXVALUE
grafana-# CACHE 1;
CREATE SEQUENCE
grafana=# ALTER TABLE user_auth ALTER COLUMN id SET DEFAULT nextval('user_auth_id_seq');
ALTER TABLE
grafana=# \d user_auth
Table "public.user_auth"
Column | Type | Collation | Nullable | Default
----------------------+-----------------------------+-----------+----------+---------------------------------------
id | integer | | not null | nextval('user_auth_id_seq'::regclass) <=== (check it is ok now)
Test nextval:
grafana=# SELECT nextval('user_auth_id_seq');
nextval
---------
1
(1 row)
grafana=# SELECT nextval('user_auth_id_seq');
nextval
---------
2
(1 row)
grafana=# SELECT nextval('user_auth_id_seq');
nextval
---------
3
(1 row)
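One caveat worth checking: if user_auth already contains rows, align the new sequence with the current maximum id first, otherwise nextval() will hand out values that collide with existing keys:
SELECT setval('user_auth_id_seq',
              COALESCE((SELECT max(id) FROM user_auth), 0) + 1,
              false);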
Try to connect with LDAP.
Regards

Accessing postgres data structure

I have a table in my postgres database which has data structured strangely. Here is an example of the data structure (the data column appears to hold serialized YAML):
id | 1
name | name
data | :type: information
| :url: url
| :platform:
| android: ''
| iphone: ''
created_at | 2016-07-29 11:39:44.938359
updated_at | 2016-08-22 12:24:32.734321
How do I change data > platform > android, for example?
Just did some more research and found this which did the trick:
postgresql - replace all instances of a string within text field
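For the record, a sketch of that approach: since the data column holds serialized YAML as text, a plain string replacement can rewrite one key's value (the table name and the new value here are hypothetical):
UPDATE my_table
SET data = replace(data, $$android: ''$$, $$android: 'com.example.app'$$)
WHERE id = 1;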