I'm using the SQLAlchemy (1.0) ORM with a PostgreSQL database. Let's say I have this line in my class:
serialid = Column(Integer, Sequence('journal_seq'), unique=True)
I realize that this means there is a special "table" (a sequence) in my database that holds the last used / next available integer for serialid. Can I, from the ORM, get the value of that integer (without incrementing it - otherwise I could just call next_value)? And is there a guarantee that the next serialid will have exactly that value?
I'd like to make a journal item with a serialid, and also make another item referring to that same serialid, but I'd like the two of them to be committed (or rolled back) together - and until I commit the journal item, I don't know what its serialid is. Maybe there is a cleaner way of doing this without knowing the current sequence value. A relationship would be great, but I don't know how to set it up.
(I know there is a question that asks the same thing when you control the SQL directly. I'd like to do the same from the SQLAlchemy ORM.)
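To the guarantee question: no - nextval() is atomic across sessions, so peeking at the sequence's current value doesn't reserve the next one for you, and a concurrent session can take it. One way to sidestep the peek entirely is to draw the value yourself, once, inside the transaction, and hand it to both rows. A minimal sketch, assuming illustrative Journal and JournalRef models and a placeholder DSN:

from sqlalchemy import create_engine, select, Column, Integer, Sequence
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import sessionmaker

Base = declarative_base()
journal_seq = Sequence('journal_seq')

class Journal(Base):                          # illustrative model
    __tablename__ = 'journal'
    id = Column(Integer, primary_key=True)
    serialid = Column(Integer, journal_seq, unique=True)

class JournalRef(Base):                       # illustrative referring table
    __tablename__ = 'journal_ref'
    id = Column(Integer, primary_key=True)
    serialid = Column(Integer)                # refers to Journal.serialid

engine = create_engine('postgresql://user:pass@localhost/mydb')  # placeholder DSN
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()

# Draw the next value once; nextval() is atomic, so no other session gets it.
serialid = session.execute(select([journal_seq.next_value()])).scalar()
session.add_all([Journal(serialid=serialid), JournalRef(serialid=serialid)])
session.commit()                              # session.rollback() would undo both rows

Because the value is assigned explicitly, the column's sequence default never fires, and both rows live or die with the one transaction.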
I have a plain text index that sucks data from MySQL and inserts it into Manticore in the format I need (e.g. converting datetime strings to timestamps, CONCATing some fields, etc.).
I then want to create a second plain text index based off this data to group it further. This will save me from having to re-run the normalisation that was done for the first index on INSERT, and will make it easier for me to query in the future.
For example, my first index is a list of all phone calls that have been made / received (telephone number, duration, agent). The second index should group them by Year-Month-Day in such a way that I can see how many calls each agent made on each day. This means I end up with idx_phone_calls and idx_phone_calls_by_date.
Currently, I generate the first index from MySQL, then get Manticore to query itself (by setting the MySQL host to localhost). It works, but it feels as though I should be able to query Manticore directly from within the index definition. However, I'm struggling to find out whether that's possible.
Is there a better way to do it?
Well, Sphinx/Manticore has its own GROUP BY function, so maybe you can just run the final query against the original index anyway and avoid the need for the second index.
Sphinx's aggregation is (in some ways) more powerful than MySQL's, and can do some 'super aggregation' functions (like WITHIN GROUP ORDER BY).
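A hedged sketch of that approach, in Python via pymysql - searchd speaks the MySQL protocol on its SQL listener, conventionally port 9306. The call_time timestamp attribute is an assumption, and YEARMONTHDAY() plus multi-column GROUP BY need a reasonably recent Sphinx/Manticore:

import pymysql

# Connect to searchd's SphinxQL listener; Manticore usually accepts any credentials.
conn = pymysql.connect(host='127.0.0.1', port=9306)
try:
    with conn.cursor() as cur:
        cur.execute(
            "SELECT agent, YEARMONTHDAY(call_time) AS ymd, COUNT(*) AS calls "
            "FROM idx_phone_calls "
            "GROUP BY agent, ymd "
            "ORDER BY ymd ASC "
            "LIMIT 1000"          # SphinxQL defaults to LIMIT 20
        )
        for agent, ymd, calls in cur.fetchall():
            print(agent, ymd, calls)
finally:
    conn.close()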
But otherwise there is no direct way to create one index off another (e.g. there is no CREATE TABLE idx_phone_calls_by_date SELECT ... FROM idx_phone_calls ...).
Your 'solution' of directing indexer to query the data from searchd is good. In general this should be pretty efficient, particularly on localhost, where there is little overhead. It also maintains the logical separation of searchd being for queries and indexer being for, well, building indexes.
I write lots of code using SQLAlchemy on top of Postgres 9.3. I often have to do an insert after checking that the record does not already exist. To do so, I do the following:
c = session.query(ClassName).filter(ClassName.id == new.id).count()
if c == 0:
    session.add(new)
    session.commit()
This is sort of tedious. Is there any way to set up SQLAlchemy + Postgres to handle that checking automatically? I'm not necessarily looking for a uniqueness index in Postgres (which will throw an error if the record already exists) so much as an "add" operation that knows what to do if a record is already there.
Why not define your own "add" operation that knows what to do if a record is already there?
def addIfNotExist(session, new):
    # Look for an existing record with the same id before adding.
    c = session.query(ClassName).filter(ClassName.id == new.id).count()
    if not c:
        session.add(new)
        session.commit()
    else:
        pass  # put other code here if needed

addIfNotExist(session, new)
Without putting a unique index on id, this is the most direct thing I can think of, as there isn't (to my knowledge) a built-in way of doing what you want to do.
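That said, session.merge() comes close to a built-in version: it SELECTs by primary key and INSERTs if the row is absent. Note the difference from the helper above - if the row does exist, merge() copies the new object's state onto it (an UPDATE) rather than leaving it untouched:

# merge() round-trips by primary key: INSERT if missing, otherwise UPDATE
# the existing row with new's state.
merged = session.merge(new)
session.commit()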
I have generic code that is used to retrieve DDL information from a Firebird database (FB 2.1). It generates SQL code like
SELECT * FROM MyTable where 'c' <> 'c'
I cannot change this code. Actually, if that matters, it is inside Report Builder 10.
The fact is that some tables in my database are becoming a little too populated (>1M records) and that query is starting to take too long to execute.
If I try to execute
SELECT * FROM MyTable where SomeIndexedField = SomeImpossibleValue
it will obviously use that index and run very quickly.
Well, it wouldn't be that hard for the database to figure out that this is an impossible match and do some sort of optimization to avoid testing it against each row.
Is there any way to make my Firebird database optimize that search?
Because the filter condition is a negative proposition (and also doesn't refer to a column to search on, only a value compared to another value), Firebird needs to do a full table scan (without using any index) to confirm that no records meet your criteria.
If you can't change the query, you need to wait for the upcoming 3.0 version, which will implement the BOOLEAN data type and therefore should start to evaluate such "constant" fake comparisons in advance (maybe the client library will do this evaluation before sending the statement to the server?).
Classic issue: new framework, thus a new problem.
PostgreSQL + Scala + ScalaQuery. I have a Master table with a serial (autoincrement) id and a Slave table, also with a serial id.
I need to insert one master record and several slaves. I have to do it within a transaction (to be able to cancel everything), so I cannot run a query after inserting the master to find out its id. As far as I can see, ScalaQuery's "insert" method does not return any reference to the inserted master record.
So how do I do it?
The ScalaQuery examples cover this, but without autoincremented fields, so that solution (pre-set ids) is not applicable here.
If I understand correctly, this is not possible in an automatic way for now. If one is not afraid, it can be done like this, obtaining the id of the last insert (per each master record insertion):
postgreSQL function for last inserted ID
Then using it in SQ:
http://groups.google.com/group/scalaquery/browse_thread/thread/faa7d3e5842da82e
This code shows the MySql way. I'm posting it to the list for posterity's sake.
val scopeIdentity = SimpleFunction.nullary[Long]("LAST_INSERT_ID") // original line was cut off; LAST_INSERT_ID() matches "the MySql way"

val inserted = Actions.insert("cat", "eats", "dog")

// Print out the count of inserted records.
println(inserted)

// Print out the primary key for the last inserted record.
println(Query(scopeIdentity).first)

// Regards, Bryan
But since for autoincremented fields you have to use projections that exclude the autoinc field, and then insert tuples instead of named record types, there is a question whether it isn't worth waiting until ScalaQuery supports this directly.
Note: I am a ScalaQuery newbie; I might just be misinforming you.
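For reference, PostgreSQL itself can hand back the generated id inside the transaction via INSERT ... RETURNING, which is what makes the master/slaves pattern safe regardless of the ORM. A minimal sketch in Python/psycopg2 (not ScalaQuery; table and column names are hypothetical):

import psycopg2

conn = psycopg2.connect('dbname=mydb')        # placeholder DSN
try:
    with conn:                                # one transaction: commit on success, rollback on error
        with conn.cursor() as cur:
            cur.execute("INSERT INTO master (name) VALUES (%s) RETURNING id", ('m1',))
            master_id = cur.fetchone()[0]     # serial id known before COMMIT
            for name in ('s1', 's2', 's3'):
                cur.execute("INSERT INTO slave (master_id, name) VALUES (%s, %s)",
                            (master_id, name))
finally:
    conn.close()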
This is probably a super simple question, but I'm struggling to come up with the right keywords to find it on Google.
I have a Postgres table that has among its contents a column of type text named content_type. It stores what type of entry is held in that row.
There are only about 5 different types, and I decided I want to change one of them to display as something else in my application (I had been directly displaying these).
It struck me as funny that my view layer is being dictated by my database model, so I decided to convert the types stored in my database from strings to integers, and to enumerate the possible types in my application with constants that map them to their display names. That way, if I ever get the urge to change a category name again, I can do it by altering a single constant. I also have a hunch that storing integers might be somewhat more efficient than storing text in the database.
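As a hypothetical sketch of that application side (the type names here are made up, since the question doesn't list them):

# Integer codes stored in content_type; display names live in one constant.
ARTICLE, VIDEO, AUDIO, IMAGE, LINK = 1, 2, 3, 4, 5

DISPLAY_NAMES = {
    ARTICLE: 'Article',
    VIDEO: 'Video',
    AUDIO: 'Audio',
    IMAGE: 'Image',
    LINK: 'Link',
}

def display_name(content_type):
    # Renaming a category for display is now a one-constant change.
    return DISPLAY_NAMES[content_type]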
First, a quick threshold question: is this a good idea? Any feedback, or anything I missed?
Second, and my main question: what's the Postgres command I could enter to make an alteration like this? I'm thinking I could start by renaming the old content_type column to old_content_type and then creating a new integer column content_type. However, what command would look at a row's old_content_type and fill in the new content_type column based on it?
If you're finding that you need to change the display values, then yes, it's probably a good idea not to store them in the database. Integers are also more efficient to store and search, but I really wouldn't worry about that unless you've got millions of rows.
You just need to run an update to populate your new column:
update table_name
set content_type = case when old_content_type = 'a' then 1
                        when old_content_type = 'b' then 2
                        else 3
                   end;
If you're on Postgres 8.4 then using an enum type instead of a plain integer might be a good idea.
Ideally you'd have these fields refer to a table containing the definitions of each type, via a foreign key constraint. That way you know your database is clean and has no invalid values (i.e. you have referential integrity).
There are many ways to handle this:
1. Having a lookup table for each field that can take one of a number of values (i.e. like an enum) is the most obvious - see the sketch after this list - but it breaks down when you have a table that requires many such attributes.
2. You can use the Entity-Attribute-Value model, but beware that this is all too easy to abuse and causes problems when things grow.
3. You can use my implementation, PET (Parameter Enumeration Tables), which is a halfway house between 1 and 2.
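A minimal sketch of option 1, using SQLAlchemy since that's what other parts of this page use (table and column names are illustrative):

from sqlalchemy import Column, ForeignKey, Integer, Text
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class ContentType(Base):
    # The lookup table: one row per allowed type, display name alongside.
    __tablename__ = 'content_types'
    id = Column(Integer, primary_key=True)
    display_name = Column(Text, nullable=False, unique=True)

class Entry(Base):
    __tablename__ = 'entries'
    id = Column(Integer, primary_key=True)
    # The FK constraint guarantees only defined types can ever be stored.
    content_type_id = Column(Integer, ForeignKey('content_types.id'), nullable=False)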