Sequence does not start with an initial number when using the Apartment gem - apartment-gem

I am trying to start a sequence at an initial number in my tenants, but only the public schema gets this.
Take a look at my migration:
class CreateDisputes < ActiveRecord::Migration[5.0]
  def change
    create_table :disputes, id: :uuid do |t|
      ...
      t.integer :code
      ...
    end

    execute %{
      CREATE SEQUENCE disputes_code_seq INCREMENT BY 1
      NO MINVALUE NO MAXVALUE
      START WITH 1001 CACHE 1
      OWNED BY disputes.code;

      ALTER TABLE ONLY disputes
        ALTER COLUMN code SET DEFAULT nextval('disputes_code_seq'::regclass);
    }
    ...
  end
end
Thanks!

I am no expert in the Apartment gem, but Apartment is not creating the disputes_code_seq sequence in the tenant's schema.
The workaround for this is to uncomment the following line in config/initializers/apartment.rb:
# Apartment can be forced to use raw SQL dumps instead of schema.rb for creating new schemas.
# Use this when you are using some extra features in PostgreSQL that can't be represented in
# schema.rb, like materialized views etc. (only applies with use_schemas set to true).
# (Note: this option doesn't use db/structure.sql, it creates SQL dump by executing pg_dump)
#
config.use_sql = true
With config.use_sql set to true, the Apartment migration will create the sequence for each tenant. Here are the logs from the migration and the rails console for reference.
The following is the migration log:
ubuntu@ubuntu-xenial:~/devel/apartment/testseq$ rails db:migrate
== 20170224161015 CreateDisputes: migrating ===================================
-- create_table(:disputes)
-> 0.0035s
-- execute("\n CREATE SEQUENCE disputes_code_seq INCREMENT BY 1\n NO MINVALUE NO MAXVALUE\n START WITH 1001 CACHE 1\n OWNED BY disputes.code;\n\n ALTER TABLE ONLY disputes\n ALTER COLUMN code SET DEFAULT nextval('disputes_code_seq'::regclass);\n ")
-> 0.0012s
== 20170224161015 CreateDisputes: migrated (0.0065s) ==========================
[WARNING] - The list of tenants to migrate appears to be empty. This could mean a few things:
1. You may not have created any, in which case you can ignore this message
2. You've run `apartment:migrate` directly without loading the Rails environment
* `apartment:migrate` is now deprecated. Tenants will automatically be migrated with `db:migrate`
Note that your tenants currently haven't been migrated. You'll need to run `db:migrate` to rectify this.
The following is the log of tenant creation and adding a row to disputes:
irb(main):001:0> Apartment::Tenant.create('tenant2')
<output snipped for brevity>
irb(main):005:0> Apartment::Tenant.switch!('tenant2')
=> "\"tenant2\""
irb(main):006:0> d = Dispute.new
=> #<Dispute id: nil, code: nil, created_at: nil, updated_at: nil>
irb(main):007:0> d.save
(0.2ms) BEGIN
SQL (0.6ms) INSERT INTO "disputes" ("created_at", "updated_at") VALUES ($1, $2) RETURNING "id" [["created_at", 2017-02-25 03:09:49 UTC], ["updated_at", 2017-02-25 03:09:49 UTC]]
(0.6ms) COMMIT
=> true
irb(main):008:0> d.reload
Dispute Load (0.3ms) SELECT "disputes".* FROM "disputes" WHERE "disputes"."id" = $1 LIMIT $2 [["id", 1], ["LIMIT", 1]]
=> #<Dispute id: 1, code: 1001, created_at: "2017-02-25 03:09:49", updated_at: "2017-02-25 03:09:49">
As you can see in the following log, code continues with the next sequence value.
irb(main):009:0> d = Dispute.new
=> #<Dispute id: nil, code: nil, created_at: nil, updated_at: nil>
irb(main):010:0> d.save
(0.3ms) BEGIN
SQL (0.6ms) INSERT INTO "disputes" ("created_at", "updated_at") VALUES ($1, $2) RETURNING "id" [["created_at", 2017-02-25 03:11:13 UTC], ["updated_at", 2017-02-25 03:11:13 UTC]]
(0.5ms) COMMIT
=> true
irb(main):011:0> d.reload
Dispute Load (0.5ms) SELECT "disputes".* FROM "disputes" WHERE "disputes"."id" = $1 LIMIT $2 [["id", 2], ["LIMIT", 1]]
=> #<Dispute id: 2, code: 1002, created_at: "2017-02-25 03:11:13", updated_at: "2017-02-25 03:11:13">
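If some tenants were created before config.use_sql was enabled, those schemas will still be missing the sequence. A minimal sketch of how to add it by hand from psql (assuming a tenant schema named tenant2; adjust the schema name per tenant):

-- tenant2 is a hypothetical tenant schema that predates use_sql
CREATE SEQUENCE tenant2.disputes_code_seq INCREMENT BY 1
    NO MINVALUE NO MAXVALUE
    START WITH 1001 CACHE 1
    OWNED BY tenant2.disputes.code;

ALTER TABLE ONLY tenant2.disputes
    ALTER COLUMN code SET DEFAULT nextval('tenant2.disputes_code_seq'::regclass);

Alternatively, dropping and re-creating such tenants after enabling use_sql should have the same effect, since the schema is then rebuilt from the pg_dump output.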

Related

How do I use the Class::DBI->sequence() method to fill the 'id' field automatically in Perl?

I'm following the Class::DBI example.
I created the cd table like this in my MariaDB database:
CREATE TABLE cd (
    cdid INTEGER PRIMARY KEY,
    artist INTEGER,  # references 'artist'
    title VARCHAR(255),
    year CHAR(4)
);
The primary key cdid is not set to auto-increment; I want to use a sequence in MariaDB. So I configured the sequence:
mysql> CREATE SEQUENCE cd_seq START WITH 100 INCREMENT BY 10;
Query OK, 0 rows affected (0.01 sec)
mysql> SELECT NEXTVAL(cd_seq);
+-----------------+
| NEXTVAL(cd_seq) |
+-----------------+
| 100 |
+-----------------+
1 row in set (0.00 sec)
And set up the Music::CD class to use it:
Music::CD->columns(Primary => qw/cdid/);
Music::CD->sequence('cd_seq');
Music::CD->columns(Others => qw/artist title year/);
After that, I try these inserts:
# NORMAL INSERT
my $cd = Music::CD->insert({
    cdid   => 4,
    artist => 2,
    title  => 'October',
    year   => 1980,
});

# SEQUENCE INSERT
my $cd = Music::CD->insert({
    artist => 2,
    title  => 'October',
    year   => 1980,
});
The "normal insert" succeed, but the "sequence insert" give me this error:
DBD::mysql::st execute failed: You have an error in your SQL syntax; check the manual that
corresponds to your MariaDB server version for the right syntax to use near ''cd_seq')' at line
1 [for Statement "SELECT NEXTVAL ('cd_seq')
"] at /usr/local/share/perl5/site_perl/DBIx/ContextualFetch.pm line 52.
I think the quotation marks ('') are causing the error, because when I run the command "SELECT NEXTVAL (cd_seq)" (without quotation marks) in the mysql client, it works (see above). I tried all the combinations (', ", `, no quotation marks), but still no luck.
Any idea?
My versions: perl 5.30.3, 10.5.4-MariaDB
The documentation for sequence() says this:
If you are using a database with AUTO_INCREMENT (e.g. MySQL) then you do not need this, and any call to insert() without a primary key specified will fill this in automagically.
MariaDB is based on MySQL. Therefore you do not need the call to sequence(). Use the AUTO_INCREMENT keyword in your table definition instead.
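For illustration, a minimal sketch of the cd table using AUTO_INCREMENT instead of a sequence (the AUTO_INCREMENT table option sets the starting value; the increment-by-10 behaviour of the original sequence is not reproduced here):

CREATE TABLE cd (
    cdid INTEGER NOT NULL AUTO_INCREMENT PRIMARY KEY,
    artist INTEGER,  # references 'artist'
    title VARCHAR(255),
    year CHAR(4)
) AUTO_INCREMENT = 100;

With this definition the Music::CD->sequence('cd_seq') call can simply be removed, and insert() fills in cdid automatically.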

How to update or insert into the same table in DB2

I am trying to update a row if it exists, or insert it if it does not exist, in the same table in DB2 (v9.7).
I have one table, "V_OPONAROC" (the schema is SQLDBA), which contains three columns, two of which form the primary key: IDESTE (PK), IDEPOZ (PK), OPONAR.
My case is: if the data (OPONAR) where IDESTE = 123456 AND IDEPOZ = 0 does not exist, then insert a new row; if it exists, then update (OPONAR). I have tried this:
MERGE INTO SQLDBA.V_OPONAROC AS O1
USING (SELECT IDESTE, IDEPOZ, OPONAR
       FROM SQLDBA.V_OPONAROC
       WHERE IDESTE = 123456 AND IDEPOZ = 0) AS O2
ON (O1.IDESTE = O2.IDESTE)
WHEN MATCHED THEN
    UPDATE SET
        OPONAR = 'test text'
WHEN NOT MATCHED THEN
    INSERT (IDESTE, IDEPOZ, OPONAR)
    VALUES (123456, 0, 'test new text')
Executing the code above, I get this error:
Query 1 of 1, Rows read: 0, Elapsed time (seconds) - Total: 0,013, SQL query: 0,013, Reading results: 0
Query 1 of 1, Rows read: 3, Elapsed time (seconds) - Total: 0,002, SQL query: 0,001, Reading results: 0,001
Warning: DB2 SQL Warning: SQLCODE=100, SQLSTATE=02000, SQLERRMC=null, DRIVER=4.21.29
SQLState: 02000
ErrorCode: 100
I figured it out by using "SYSIBM.SYSDUMMY1". The original statement fails because the USING subquery selects from the same table with the same filter, so when the target row does not exist the source set is empty and neither MERGE branch fires; SYSIBM.SYSDUMMY1 guarantees the source always contains exactly one row:
MERGE INTO SQLDBA.V_OPONAROC AS O1
USING (SELECT 1 AS IDESTE, 2 AS IDEPOZ, 3 AS OPONAR FROM SYSIBM.SYSDUMMY1) AS O2
ON (O1.IDESTE = 123456 AND O1.IDEPOZ = 0)
WHEN MATCHED THEN
    UPDATE SET
        O1.OPONAR = 'test text'
WHEN NOT MATCHED THEN
    INSERT (O1.IDESTE, O1.IDEPOZ, O1.OPONAR)
    VALUES (123456, 0, 'test new text')
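An equivalent pattern that avoids SYSIBM.SYSDUMMY1 is to feed the new values in through a VALUES row constructor. This is a sketch of a common DB2 idiom, not something tested against the table above:

MERGE INTO SQLDBA.V_OPONAROC AS O1
USING (VALUES (123456, 0, 'test text')) AS O2 (IDESTE, IDEPOZ, OPONAR)
ON (O1.IDESTE = O2.IDESTE AND O1.IDEPOZ = O2.IDEPOZ)
WHEN MATCHED THEN
    UPDATE SET
        O1.OPONAR = O2.OPONAR
WHEN NOT MATCHED THEN
    INSERT (IDESTE, IDEPOZ, OPONAR)
    VALUES (O2.IDESTE, O2.IDEPOZ, O2.OPONAR)

This keeps the key values and the new OPONAR text in one place instead of repeating them in the ON clause and both branches.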

Make insert into a view work with EF6 and SQLite

I created an entity based on an SQLite view, and I want to insert data into it using EF6. I altered the .edmx file to make EF see the view as a table and to define a primary key on the view.
The problem is that when my program tries to insert data into the view (which of course has an INSTEAD OF INSERT trigger that inserts the data into the two underlying tables of the view), I get this exception:
Store update, insert, or delete statement affected an unexpected
number of rows (0). Entities may have been modified or deleted since
entities were loaded. Handling optimistic concurrency exceptions.
The underlying query called from EF is this:
Opened connection at 12/07/2019 16:58:18 +02:00
Started transaction at 12/07/2019 16:58:18 +02:00
INSERT INTO [testcasesView]([name], [deviceversion_id], [user_id], [timestamp], [comment], [static_result], [description], [precondition], [action], [expected_result], [reviewed], [automated])
VALUES (@p0, @p1, @p2, @p3, NULL, @p4, @p5, @p6, @p7, @p8, @p9, @p10);
SELECT [id]
FROM [testcasesView]
WHERE last_rows_affected() > 0 AND [id] = last_insert_rowid()
;
-- @p0: 'asd' (Type = String)
-- @p1: '67' (Type = Int64)
-- @p2: '20' (Type = Int64)
-- @p3: '1562943498' (Type = Int64)
-- @p4: 'as' (Type = String)
-- @p5: 'asd' (Type = String)
-- @p6: 'asd' (Type = String)
-- @p7: 'asd' (Type = String)
-- @p8: 'asd' (Type = String)
-- @p9: '0' (Type = Int64)
-- @p10: '0' (Type = Int64)
-- Executing at 12/07/2019 16:58:18 +02:00
-- Completed in 1 ms with result: SQLiteDataReader
Closed connection at 12/07/2019 16:58:18 +02:00
Disposed transaction at 12/07/2019 16:58:18 +02:00
I understand why, but I don't know if I can solve it. The problem is that, since the real insert happens inside the trigger (I am not posting its code because it is just two simple inserts), the query
SELECT [id]
FROM [testcasesView]
WHERE last_rows_affected() > 0 AND [id] = last_insert_rowid()
returns no rows, so EF thinks that no rows were affected.
Is there a statement or code that I can add to my trigger that makes EF see that the insert happened?

nextval(seq_name) not fetching correct value from DB

I have a Flask app with SQLAlchemy tied to a Postgres DB. All components are working, with reads fully functional. I have a simple model:
class School(db.Model):
    __tablename__ = 'schools'

    id = db.Column(db.Integer, db.Sequence('schools_id_seq'), primary_key=True)
    name = db.Column(db.String(80))
    active = db.Column(db.Boolean)
    created = db.Column(db.DateTime)
    updated = db.Column(db.DateTime)

    def __init__(self, name, active, created, updated):
        self.name = name
        self.active = active
        self.created = created
        self.updated = updated
which is working on this Postgres table:
CREATE SEQUENCE schools_id_seq;

CREATE TABLE schools(
    id int PRIMARY KEY NOT NULL DEFAULT nextval('schools_id_seq'),
    name varchar(80) NOT NULL,
    active boolean DEFAULT TRUE,
    created timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
    updated timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP
);

ALTER SEQUENCE schools_id_seq OWNED BY schools.id;
When I perform an insert on this table from psql, all is well:
cake=# select nextval('schools_id_seq');
nextval
---------
65
(1 row)
cake=# INSERT INTO schools (id, name, active, created, updated) VALUES (nextval('schools_id_seq'),'Test', True, current_timestamp, current_timestamp);
INSERT 0 1
resulting in:
66 | Test | 0 | t | 2016-08-25 14:12:24.928456 | 2016-08-25 14:12:24.928456
But when I try the same insert from Flask, the stack trace complains about a duplicate id, even though it is using nextval to get that value:
sqlalchemy.exc.IntegrityError: (psycopg2.IntegrityError) duplicate key value violates unique constraint "schools_pkey"
DETAIL: Key (id)=(7) already exists.
[SQL: "INSERT INTO schools (id, name, active, created, updated) VALUES (nextval('schools_id_seq'), %(name)s, %(active)s, %(created)s, %(updated)s) RETURNING schools.id"] [parameters: {'active': True, 'name': 'Testomg', 'updated': datetime.datetime(2016, 8, 25, 14, 10, 5, 703471), 'created': datetime.datetime(2016, 8, 25, 14, 10, 5, 703458)}]
Why would the SQLAlchemy call to nextval not return the same next value that the same call within the Postgres DB yields?
UPDATE: @RazerM told me about the echo=True param that I didn't know about. With
app.config['SQLALCHEMY_ECHO']=True
I got the following from a new insert (note that on this try it fetched 10, when it should be 67):
2016-08-25 14:47:40,127 INFO sqlalchemy.engine.base.Engine select version()
2016-08-25 14:47:40,128 INFO sqlalchemy.engine.base.Engine {}
2016-08-25 14:47:40,314 INFO sqlalchemy.engine.base.Engine select current_schema()
2016-08-25 14:47:40,315 INFO sqlalchemy.engine.base.Engine {}
2016-08-25 14:47:40,499 INFO sqlalchemy.engine.base.Engine SELECT CAST('test plain returns' AS VARCHAR(60)) AS anon_1
2016-08-25 14:47:40,499 INFO sqlalchemy.engine.base.Engine {}
2016-08-25 14:47:40,594 INFO sqlalchemy.engine.base.Engine SELECT CAST('test unicode returns' AS VARCHAR(60)) AS anon_1
2016-08-25 14:47:40,594 INFO sqlalchemy.engine.base.Engine {}
2016-08-25 14:47:40,780 INFO sqlalchemy.engine.base.Engine show standard_conforming_strings
2016-08-25 14:47:40,780 INFO sqlalchemy.engine.base.Engine {}
2016-08-25 14:47:40,969 INFO sqlalchemy.engine.base.Engine BEGIN (implicit)
2016-08-25 14:47:40,971 INFO sqlalchemy.engine.base.Engine INSERT INTO schools (id, name, active, created, updated) VALUES (nextval('schools_id_seq'), %(name)s, %(active)s, %(created)s, %(updated)s) RETURNING schools.id
2016-08-25 14:47:40,971 INFO sqlalchemy.engine.base.Engine {'name': 'Testing', 'created': datetime.datetime(2016, 8, 25, 14, 47, 38, 785031), 'active': True, 'updated': datetime.datetime(2016, 8, 25, 14, 47, 38, 785050)}
2016-08-25 14:47:41,064 INFO sqlalchemy.engine.base.Engine ROLLBACK
sqlalchemy.exc.IntegrityError: (psycopg2.IntegrityError) duplicate key value violates unique constraint "schools_pkey"
DETAIL: Key (id)=(10) already exists.
[SQL: "INSERT INTO schools (id, name, active, created, updated) VALUES (nextval('schools_id_seq'), %(name)s, %(active)s, %(created)s, %(updated)s) RETURNING schools.id"] [parameters: {'updated': datetime.datetime(2016, 8, 25, 14, 54, 18, 262873), 'created': datetime.datetime(2016, 8, 25, 14, 54, 18, 262864), 'active': True, 'name': 'Testing'}]
Well, the solution is simple in that case, though it doesn't explain why, because for that we would have to look at the entire environment, which you cannot show us or it would take too long. Try inserting records until the sequence reaches 67; subsequent inserts should then apply without any error, because the sequence will have reached the proper value. Of course, you can also try adding a server_default option to the id property first:
server_default=db.Sequence('schools_id_seq').next_value()
So
seq = db.Sequence('schools_id_seq')
And in a class:
id = db.Column(db.Integer, seq, server_default=seq.next_value(), primary_key=True)
SQLAlchemy mentions this in the following way:
Sequence was originally intended to be a Python-side directive first and foremost so it’s probably a good idea to specify it in this way as well.
Sequences are always incremented, so both your select statement and SQLAlchemy incremented the value.
As stated in Sequence Manipulation Functions:
Advance the sequence object to its next value and return that value. This is done atomically: even if multiple sessions execute nextval concurrently, each will safely receive a distinct sequence value.
If a sequence object has been created with default parameters, successive nextval calls will return successive values beginning with 1. Other behaviors can be obtained by using special parameters in the CREATE SEQUENCE command; see its command reference page for more information.
Important: To avoid blocking concurrent transactions that obtain numbers from the same sequence, a nextval operation is never rolled back; that is, once a value has been fetched it is considered used and will not be returned again. This is true even if the surrounding transaction later aborts, or if the calling query ends up not using the value. For example an INSERT with an ON CONFLICT clause will compute the to-be-inserted tuple, including doing any required nextval calls, before detecting any conflict that would cause it to follow the ON CONFLICT rule instead. Such cases will leave unused "holes" in the sequence of assigned values. Thus, PostgreSQL sequence objects cannot be used to obtain "gapless" sequences.
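A short psql sketch of the behaviour described above (the sequence name matches the question; the specific values are only illustrative):

-- nextval consumes a value even if the surrounding transaction rolls back
BEGIN;
SELECT nextval('schools_id_seq');   -- returns e.g. 67
ROLLBACK;
SELECT nextval('schools_id_seq');   -- returns 68; 67 is gone for good

-- if the sequence has fallen behind ids that were inserted explicitly,
-- a common repair is to advance it past the current maximum:
SELECT setval('schools_id_seq', (SELECT MAX(id) FROM schools));

After the setval call, the next nextval returns MAX(id) + 1, so inserts that rely on the sequence no longer collide with existing keys.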

SQLAlchemy: Problems Migrating to PostgreSQL from SQLite (e.g. sqlalchemy.exc.ProgrammingError:)

I am having difficulties migrating a working script from SQLite to PostgreSQL. I am using SQLAlchemy. When I run the script, it raises the following error:
raise exc.DBAPIError.instance(statement, parameters, e, connection_invalidated=is_disconnect)
sqlalchemy.exc.ProgrammingError: (ProgrammingError) can't adapt 'INSERT INTO cnn_hot_stocks (datetime, list, ticker, price, change, "pctChange") VALUES (%(datetime)s, %(list)s, %(ticker)s, %(price)s, %(change)s, %(pctChange)s)' {'price': Decimal('7.94'), 'list': 'active', 'datetime': datetime.datetime(2012, 6, 23, 11, 45, 1, 544361), 'pctChange': u'+1.53%', 'ticker': u'BAC', 'change': Decimal('0.12')}
The insert call works well when using the SQLite engine, but I want to use PostgreSQL to take advantage of the native Decimal type for keeping financial data correct. I copied the script and just changed the DB engine to my PostgreSQL server. Any advice on how to troubleshoot this error would be greatly appreciated by this SQLAlchemy newbie... I think I am up a creek on this one! Thanks in advance!
Here are my relevant code segments and table descriptions:
dbstring = "postgresql://postgres:postgres#localhost:5432/algo"
db = create_engine(dbstring)
db.echo = True # Try changing this to True and see what happens
metadata = MetaData(db)
cnn_hot_stocks = Table('cnn_hot_stocks', metadata, autoload=True)
i = cnn_hot_stocks.insert() # running log from cnn hot stocks web-site
def scrape_data():
try:
html = urllib2.urlopen('http://money.cnn.com/data/hotstocks/').read()
markup, errors = tidy_document(html)
soup = BeautifulSoup(markup,)
except Exception as e:
pass
list_map = { 2 : 'active',
3 : 'gainer',
4 : 'loser'
}
# Iterate over 3 tables on CNN hot stock web-site
for x in range(2, 5):
table = soup('table')[x]
for row in table.findAll('tr')[1:]:
timestamp = datetime.now()
col = row.findAll('td')
ticker = col[0].a.string
price = Decimal(col[1].span.string)
change = Decimal(col[2].span.span.string)
pctChange = col[3].span.span.string
log_data = {'datetime' : timestamp,
'list' : list_map[x],
'ticker' : ticker,
'price' : price,
'change' : change,
'pctChange' : pctChange
}
print log_data
# Commit to DB
i.execute(log_data)
TABLE:
cnn_hot_stocks = Table('cnn_hot_stocks', metadata,  # log of stocks data on cnn hot stocks lists
    Column('datetime', DateTime, primary_key=True),
    Column('list', String),  # loser/gainer/active
    Column('ticker', String),
    Column('price', Numeric),
    Column('change', Numeric),
    Column('pctChange', String),
)
My reading of the documentation is that you have to use numeric instead of decimal.
PostgreSQL has no type named decimal (it's accepted as an alias for numeric, but not a very full-featured one), and SQLAlchemy seems to expect numeric as the type it can use for abstraction purposes.