This is the dummy function I wrote to update the counter.
from sqlalchemy import create_engine, MetaData
from sqlalchemy.orm import sessionmaker

def updateTable(tableName, visitorId, dtWithZone):
    db_uri = app.config["SQLALCHEMY_DATABASE_URI"]
    engine = create_engine(
        db_uri,
        connect_args={"options": "-c timezone={}".format(dtWithZone.timetz().tzinfo.zone)},
    )
    # create session
    Session = sessionmaker()
    Session.configure(bind=engine)
    session = Session()
    meta = MetaData(engine, reflect=True)
    table = meta.tables[tableName]
    print dir(table)
    # update row to database
    row = session.query(table).filter(
        table.c.visitorId == visitorId).first()
    print 'original:', row.count
    row.count = row.count + 1
    print "updated {}".format(row.count)
    session.commit()
    session.close()
but when it reaches the line row.count = row.count + 1 it throws this error:
AttributeError: can't set attribute
This is the table:
\d visitorinfo;
Table "public.visitorinfo"
    Column    |           Type           | Modifiers
--------------+--------------------------+-----------
 platform     | character varying(15)    |
 browser      | character varying(10)    |
 visitorId    | character varying(10)    | not null
 language     | character varying(10)    |
 version      | character varying(20)    |
 cl_lat       | double precision         |
 cl_lng       | double precision         |
 count        | integer                  |
 ip           | character varying(20)    |
 visitor_time | timestamp with time zone |
Indexes:
"visitorinfo_pkey" PRIMARY KEY, btree ("visitorId")
What am I doing wrong? Why is it saying it can't set the attribute?
Part of the updated code:
# update row to database
row = session.query(table).filter(
    table.c.visitorId == visitorId).first()
print 'original:', row.count
val = row.count
row.count = val + 1
print "updated {}".format(row.count)
The rows returned by session.query(table) on a plain reflected Table are immutable result tuples, not ORM-mapped objects, so you can't assign to row.count. Use an UPDATE query instead:
UPDATE public.visitorinfo SET count = count + 1 WHERE "visitorId" = 'VisitorID';
(Note the column is named count, not counter, and the mixed-case "visitorId" needs double quotes in PostgreSQL.) Make sure that last 'VisitorID' is filled in from your application.
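If you would rather stay in SQLAlchemy than drop to raw SQL, here is a minimal sketch of the same statement using SQLAlchemy Core (assuming the engine and reflected table objects from the question):

from sqlalchemy import update

# Core-style UPDATE: the increment happens inside the database,
# so there is no read-modify-write on an immutable row object.
stmt = (
    update(table)
    .where(table.c.visitorId == visitorId)
    .values(count=table.c.count + 1)
)
with engine.begin() as conn:  # commits on success, rolls back on error
    conn.execute(stmt)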
I applied a unique constraint over the email column of a Postgres USER table. The problem I am facing now is that the constraint seems to register each value I insert (or try to insert), no matter whether a record with that value currently exists or not.
i.e., for this table:

 id | user
----+-----------------
  1 | mail@gmail.com
  2 | mail2@gmail.com
If I insert mail3@gmail.com, delete the row, and try to insert mail3@gmail.com again, it says:
sqlalchemy.exc.IntegrityError: (psycopg2.errors.UniqueViolation) duplicate key value violates unique constraint "email"
My doubt is: does the unique constraint guarantee that the value is always new, or that there is only one record of that value in the column?
The documentation says the second, but experience shows the first.
More details:
| Column | Type | Nullable |
|------------------|-----------------------------|----------|
| id | integer | not null |
| email | character varying(100) | |
| password | character varying(100) | |
| name | character varying(1000) | |
| lastname | character varying(1000) | |
| dni | character varying(20) | |
| cellphone | character varying(20) | |
| accepted_terms | boolean | |
| investor_test | boolean | |
| validated_email | boolean | |
| validated_cel | boolean | |
| last_login_at | timestamp without time zone | |
| current_login_at | timestamp without time zone | |
| last_login_ip | character varying(100) | |
| current_login_ip | character varying(100) | |
| login_count | integer | |
| active | boolean | |
| fs_uniquifier | character varying(255) | not null |
| confirmed_at | timestamp without time zone | |
Indexes:
"bondusers_pkey" PRIMARY KEY, btree (id)
"bondusers_email_key" UNIQUE CONSTRAINT, btree (email)
"bondusers_fs_uniquifier_key" UNIQUE CONSTRAINT, btree (fs_uniquifier)
Insert Statement:
INSERT INTO bondusers (email, password, name, lastname, dni, cellphone, accepted_terms, investor_test, validated_email, validated_cel, last_login_at, current_login_at, last_login_ip, current_login_ip, login_count, active, fs_uniquifier, confirmed_at) VALUES ('mail3@gmail.com', '$pbkdf2-sha256$29000$XyvlfI8x5vwfYwyhtBYi5A$Hhfrzvqs94MjTCmDOVmmnbUyf7ho4kLEY8UYUCdHPgM', 'mail', 'mail3', '123123123', '1139199196', false, false, false, false, NULL, NULL, NULL, NULL, NULL, true, '1c4e60b34a5641f4b560f8fd1d45872c', NULL);
ERROR: duplicate key value violates unique constraint "bondusers_fs_uniquifier_key"
DETAIL: Key (fs_uniquifier)=(1c4e60b34a5641f4b560f8fd1d45872c) already exists.
but when:
select * from bondusers where fs_uniquifier = '1c4e60b34a5641f4b560f8fd1d45872c';
result is 0 rows
I assume that if you run the INSERT, DELETE, INSERT directly within the Postgres command line it works OK?
I noticed your error references SQLAlchemy (sqlalchemy.exc.IntegrityError), so I think the cause may be there rather than in PostgreSQL. Within a transaction, SQLAlchemy's Unit of Work pattern can re-order SQL statements for performance reasons.
The only reference I could find was here: https://github.com/sqlalchemy/sqlalchemy/issues/5735#issuecomment-735939061
if there are no dependency cycles between the target tables, the flush proceeds as follows:
<snip/>
a. within a particular table, INSERT operations are processed in the order in which objects were add()'ed
b. within a particular table, UPDATE and DELETE operations are processed in primary key order
So if you have the following within a single transaction:
INSERT x
DELETE x
INSERT x
when you commit it, it's probably getting reordered as:
INSERT x
INSERT x
DELETE x
I have more experience with this problem in Java/Hibernate, but the SQLAlchemy docs do claim its unit of work pattern is "Modeled after Fowler's "Unit of Work" pattern as well as Hibernate, Java's leading object-relational mapper," so it is probably relevant here too.
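To illustrate the reordering, here is a minimal sketch of the failing pattern (assuming import sqlalchemy as sa plus the mapped User class with a unique user column and the configured Session used in the answer below):

with Session() as s, s.begin():
    old = s.scalars(sa.select(User).where(User.user == 'a')).first()
    s.delete(old)            # DELETE x
    s.add(User(user='a'))    # INSERT x
# At flush time the INSERT for this table is emitted before the DELETE,
# so the unique index sees a duplicate even though the transaction's
# net effect would have been fine.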
To supplement Ed Brook's insightful answer, you can work around the problem by flushing the session after deleting the record:
with Session() as s, s.begin():
    u = s.scalars(sa.select(User).where(User.user == 'a')).first()
    s.delete(u)
    s.flush()  # forces the DELETE to hit the database before the INSERT below
    s.add(User(user='a'))
Another solution would be to use a deferred constraint, so that the state of the index is not evaluated until the end of the transaction:
class User(Base):
    ...
    __table_args__ = (
        sa.UniqueConstraint('user', deferrable=True, initially='deferred'),
    )
but note, from the PostgreSQL documentation:
deferrable constraints cannot be used as conflict arbitrators in an INSERT statement that includes an ON CONFLICT DO UPDATE clause.
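For an existing table such as bondusers above, a sketch of the equivalent change in plain PostgreSQL (assuming you are free to drop and recreate the constraint under its existing name):

-- Recreate the unique constraint as deferrable, checked at commit time.
ALTER TABLE bondusers DROP CONSTRAINT bondusers_email_key;
ALTER TABLE bondusers ADD CONSTRAINT bondusers_email_key
    UNIQUE (email) DEFERRABLE INITIALLY DEFERRED;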
Postgres returns an empty result if one of the outcomes is NULL.
For a scenario, consider a table:

table: books

 id | title | is_free
----+-------+---------
  1 | A     | true
  2 | B     | false
select 'some_text' as col, b.title
from (select title from books
      where id = 3) as b;
In this case, the number of rows returned is 0.
col | title |
(0 rows)
How can I get NULL as the return value instead?
col | title |
some_text | NULL |
(1 row)
Use the subquery in a different way, as a scalar subquery:
select 'some_text' as col,
       (select title from books where id = 3) as title;
A scalar subquery that matches no rows evaluates to NULL instead of removing the outer row.
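Against the sample data above (no book has id = 3), this should produce something like:

    col    | title
-----------+-------
 some_text |
(1 row)

(psql prints NULL as an empty cell by default.)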
I have a table named async_data whose id column has auto-increment defined. But in production I am seeing some insert queries fail with:
PG::NotNullViolation: ERROR: null value in column "id" violates not-null constraint
Rails migration file:
class CreateAsyncData < ActiveRecord::Migration[5.0]
  def change
    create_table :async_data do |t|
      t.integer :request_id
      t.integer :sourced_contact_id
      t.integer :data_source_id
      t.boolean :is_enriched
      t.column :requested_params, :json
      t.text :q
      t.datetime :fetched_at
      t.timestamps
    end
  end
end
CREATE TABLE public.async_data (
    id integer NOT NULL,
    request_id integer,
    sourced_contact_id integer,
    data_source_id integer,
    is_enriched boolean DEFAULT false,
    requested_params json,
    fetched_at timestamp without time zone,
    created_at timestamp without time zone NOT NULL,
    updated_at timestamp without time zone NOT NULL,
    q text,
    is_processed boolean DEFAULT false NOT NULL,
    is_data_pushed boolean DEFAULT false NOT NULL
);
\d async_data;
Table "public.async_data"
      Column       |            Type             | Collation | Nullable |                Default
-------------------+-----------------------------+-----------+----------+----------------------------------------
 id                | integer                     |           | not null | nextval('async_data_id_seq'::regclass)
 request_id        | integer                     |           |          |
 source_company_id | integer                     |           |          |
 data_source_id    | integer                     |           |          |
 is_enriched       | boolean                     |           |          |
 requested_params  | json                        |           |          |
 q                 | text                        |           |          |
 fetched_at        | timestamp without time zone |           |          |
 created_at        | timestamp without time zone |           | not null |
 updated_at        | timestamp without time zone |           | not null |
Indexes:
"async_data_pkey" PRIMARY KEY, btree (id)
--
-- Name: async_data_id_seq; Type: SEQUENCE; Schema: public; Owner: -
--
CREATE SEQUENCE public.async_data_id_seq
    AS integer
    START WITH 1
    INCREMENT BY 1
    NO MINVALUE
    NO MAXVALUE
    CACHE 1;
I want to reproduce the same issue in the dev environment and want to know why a nil id was generated.
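One way to reproduce this in dev (an assumption about the cause, not a confirmed diagnosis): the sequence default only fires when id is omitted from the INSERT, so a statement that explicitly supplies NULL for id bypasses nextval() and raises exactly this violation:

-- Explicitly supplying NULL overrides the column default, so
-- nextval('async_data_id_seq') is never called for this row.
INSERT INTO async_data (id, request_id, created_at, updated_at)
VALUES (NULL, 1, now(), now());
-- ERROR: null value in column "id" violates not-null constraint

ORMs or bulk-insert code paths that include the id column explicitly can emit such statements.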
I'm trying to display rows that have at least one value in an inet[] type column.
I really don't know any better, so it seemed easiest to use something like the following, but it returns rows with {}, which I guess counts as empty for the inet[] type but is not NULL from the perspective of the IS NOT NULL query?
peering_manager=# select asn,name,potential_internet_exchange_peering_sessions from peering_autonomoussystem where potential_internet_exchange_peering_sessions is not null order by potential_internet_exchange_peering_sessions limit 1;
asn | name | potential_internet_exchange_peering_sessions
------+---------------------------------+----------------------------------------------
6128 | Cablevision Systems Corporation | {}
(1 row)
peering_manager=#
So, digging a little more into it, I thought that if I could match the existence of any valid IP address in the inet[] column, that would work. However, I'm getting an error and I don't understand what it's referring to or how to resolve it to achieve the desired result:
peering_manager=# select asn,name,potential_internet_exchange_peering_sessions from peering_autonomoussystem where potential_internet_exchange_peering_sessions << inet '0.0.0.0/0';
ERROR: operator does not exist: inet[] << inet
LINE 1: ...here potential_internet_exchange_peering_sessions << inet '0...
^
HINT: No operator matches the given name and argument type(s). You might need to add explicit type casts.
peering_manager=#
Maybe it's saying that the << operator is invalid for the inet[] type, or that << is an invalid operation when comparing an inet value against a column stored as inet[]? Or something else?
In any event, I'm kind of lost. Maybe there's a better way to do this?
Here's the table, and a sample of the data set I'm working with.
peering_manager=# \d peering_autonomoussystem;
Table "public.peering_autonomoussystem"
Column | Type | Modifiers
----------------------------------------------+--------------------------+-----------------------------------------------------------------------
id | integer | not null default nextval('peering_autonomoussystem_id_seq'::regclass)
asn | bigint | not null
name | character varying(128) | not null
comment | text | not null
ipv6_max_prefixes | integer | not null
ipv4_max_prefixes | integer | not null
updated | timestamp with time zone |
irr_as_set | character varying(255) |
ipv4_max_prefixes_peeringdb_sync | boolean | not null
ipv6_max_prefixes_peeringdb_sync | boolean | not null
irr_as_set_peeringdb_sync | boolean | not null
created | timestamp with time zone |
potential_internet_exchange_peering_sessions | inet[] | not null
contact_email | character varying(254) | not null
contact_name | character varying(50) | not null
contact_phone | character varying(20) | not null
Indexes:
"peering_autonomoussystem_pkey" PRIMARY KEY, btree (id)
"peering_autonomoussystem_asn_ec0373c4_uniq" UNIQUE CONSTRAINT, btree (asn)
Check constraints:
"peering_autonomoussystem_ipv4_max_prefixes_check" CHECK (ipv4_max_prefixes >= 0)
"peering_autonomoussystem_ipv6_max_prefixes_check" CHECK (ipv6_max_prefixes >= 0)
Referenced by:
TABLE "peering_directpeeringsession" CONSTRAINT "peering_directpeerin_autonomous_system_id_691dbc97_fk_peering_a" FOREIGN KEY (autonomous_system_id) REFERENCES peering_autonomoussystem(id) DEFERRABLE INITIALLY DEFERRED
TABLE "peering_internetexchangepeeringsession" CONSTRAINT "peering_peeringsessi_autonomous_system_id_9ffc404f_fk_peering_a" FOREIGN KEY (autonomous_system_id) REFERENCES peering_autonomoussystem(id) DEFERRABLE INITIALLY DEFERRED
peering_manager=#
peering_manager=# select asn,name,potential_internet_exchange_peering_sessions from peering_autonomoussystem limit 7;
asn | name | potential_internet_exchange_peering_sessions
-------+---------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
37662 | WIOCC | {2001:504:1::a503:7662:1,198.32.160.70}
38001 | NewMedia Express Pte Ltd | {2001:504:16::9471,206.81.81.204}
46562 | Total Server Solutions | {2001:504:1::a504:6562:1,198.32.160.12,2001:504:16::b5e2,206.81.81.81,2001:504:1a::35:21,206.108.35.21,2001:504:2d::18:80,198.179.18.80,2001:504:36::b5e2:0:1,206.82.104.156}
55081 | 24Shells Inc | {2001:504:1::a505:5081:1,198.32.160.135}
62887 | Whitesky Communications | {2001:504:16::f5a7,206.81.81.209}
2603 | NORDUnet | {2001:504:1::a500:2603:1,198.32.160.21}
6128 | Cablevision Systems Corporation | {}
(7 rows)
You can use array_length(). On empty arrays or nulls this returns NULL.
...
WHERE array_length(potential_internet_exchange_peering_sessions, 1) IS NOT NULL
...
Better to compare the array length with an integer; since NULL > 0 is not true, empty and NULL arrays are still excluded:
...
WHERE array_length(potential_internet_exchange_peering_sessions, 1) > 0
...
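As a side note, on PostgreSQL 9.4 or later (an assumption about the server version here), cardinality() gives the same filter without array_length()'s NULL-for-empty quirk:

...
-- cardinality() returns 0 for an empty array rather than NULL
WHERE cardinality(potential_internet_exchange_peering_sessions) > 0
...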
I have this query (PostgreSQL 9.1):
=> update tbp setĀ super_answer = null where packet_id = 18;
ERROR: syntax error at or near "="
I don't get it. I'm really at a loss here.
Table "public.tbp"
    Column    |  Type  | Modifiers
--------------+--------+-----------
 id           | bigint | not null
 super_answer | bigint |
 packet_id    | bigint |
It turned out I had copied in an invisible Unicode whitespace character (a non-breaking space, \xa0) and Postgres didn't like it.
In a Python console:
>>> u'update "tbp" setĀ "super_answer"=null where "packet_id" = 18'
u'update "tbp" set\xa0"super_answer"=null where "packet_id" = 18'
Life can be strange sometimes.
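If you want to guard against this in the future, here is a small sketch (Python 3, unlike the Python 2 console above; the cleanup approach is an illustration, not a standard recipe) that reports non-ASCII characters and strips the non-breaking space before the SQL reaches the server:

import unicodedata

sql = 'update "tbp" set\xa0"super_answer"=null where "packet_id" = 18'

# Report every non-ASCII character with its Unicode name.
for ch in sorted(set(sql)):
    if ord(ch) > 127:
        print(repr(ch), unicodedata.name(ch, 'UNKNOWN'))  # '\xa0' NO-BREAK SPACE

clean = sql.replace('\xa0', ' ')  # swap the non-breaking space for a plain one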