How can I listen for the creation of a specific model and create a new one (on a different table) based on this? - postgresql

I have a User model with a referral_key attribute. I'd like to create a ReferralKeyRecord upon creation of a user. I've read tons of documentation and StackExchange answers to no avail.
This answer uses after_insert(), but I am not trying to alter or validate the class being inserted; I am trying to add a new object from a completely different model, and session.add() isn't supported inside that event.
This answer is closer to what I want, but the accepted answer (ultimately) uses after_flush(), which is far too general: I don't want to listen for events fired whenever anything is flushed to the DB; I want one that fires when a specific model is created.
And something like the following...
@event.listens_for(User, 'after_flush')
def create_referral_record(mapper, connection, target):
    session.add(ReferralRecord(key=target.referral_key))
    session.commit()
... results in No such event 'after_flush' for target '<class 'models.User'>'. I've looked through SQLAlchemy's documentation (the Core events and the ORM events) and see no event that indicates a specific model has been created. The closest thing in the Mapper events is after_insert(), and the closest thing in the Session events is after_flush(). I imagine this is a pretty common thing to need, and would thus be surprised if there wasn't an easy event to listen for. I assume it'd be something like:
@event.listens_for(User, 'on_creation')
def create_referral_record(session, instance):
    record = ReferralRecord(key=instance.referral_key)
    session.add(record)
    session.commit()
Does anyone know better than I do?

Or why not create the Referral inside the User constructor?
from sqlalchemy.orm import Session, relationship
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy import Column, Integer, ForeignKey, create_engine

Base = declarative_base()

class User(Base):
    __tablename__ = 'user'

    id = Column(Integer(), primary_key=True)
    referral = relationship('Referral', uselist=False)

    def __init__(self):
        self.referral = Referral()

class Referral(Base):
    __tablename__ = 'referral'

    id = Column(Integer(), primary_key=True)
    user_id = Column(Integer(), ForeignKey('user.id'), nullable=False)

engine = create_engine('sqlite:///:memory:')
Base.metadata.create_all(engine)

session = Session(bind=engine)
session.add(User())
session.commit()

print(session.query(User).all())
print(session.query(Referral).all())

You can use the after_flush session event, and inside the event handler you can access the session's new objects (using session.new).
Example:
from sqlalchemy.orm import Session
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy import Column, Integer, ForeignKey, create_engine, event

Base = declarative_base()

class User(Base):
    __tablename__ = 'user'

    id = Column(Integer(), primary_key=True)

class Referral(Base):
    __tablename__ = 'referral'

    id = Column(Integer(), primary_key=True)
    user_id = Column(Integer(), ForeignKey('user.id'), nullable=False)

engine = create_engine('sqlite:///:memory:')
Base.metadata.create_all(engine)

session = Session(bind=engine)

@event.listens_for(session, 'after_flush')
def session_after_flush(session, flush_context):
    # After the flush, newly inserted Users already have their primary
    # keys populated, so the Referral can point at obj.id.
    for obj in session.new:
        if isinstance(obj, User):
            session.add(Referral(user_id=obj.id))

session.add(User())
session.commit()

print(session.query(User).all())
print(session.query(Referral).all())
Running this outputs:
[<__main__.User object at 0x00000203ABDF5400>]
[<__main__.Referral object at 0x00000203ABDF5710>]
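If you specifically want the hook tied to the User model rather than to the session, a mapper-level after_insert listener also works; this is a minimal sketch, assuming the models above, and it uses the connection handed to the listener with a Core insert, since session.add() is not supported inside mapper events:

from sqlalchemy import event

@event.listens_for(User, 'after_insert')
def create_referral(mapper, connection, target):
    # target is the just-inserted User; its primary key is already
    # populated at this point. Insert the Referral row on the same
    # connection so it joins the surrounding transaction.
    connection.execute(
        Referral.__table__.insert().values(user_id=target.id)
    )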

Related

How to extend django's default User in mongodb?

I'm using MongoDB as my database and am trying to extend Django's built-in User model.
Here's the error I'm getting:
django.core.exceptions.ValidationError: ['Field "auth.User.id" of model container:"<class \'django.contrib.auth.models.User\'>" cannot be of type "<class \'django.db.models.fields.AutoField\'>"']
Here's my models.py:
from djongo import models
from django.contrib.auth.models import User

class Profile(models.Model):
    user = models.EmbeddedField(model_container=User)
    mobile = models.PositiveIntegerField()
    address = models.CharField(max_length=200)
    pincode = models.PositiveIntegerField()
Using EmbeddedField is not a good idea here, because it duplicates user data in the database: you would have a user in the users collection, and the same data embedded inside documents of the Profile collection.
Just keep the user id in the model and query separately:
class Profile(models.Model):
    # CharField requires a max_length; 24 fits a hex ObjectId.
    user_id = models.CharField(max_length=24)  # or models.TextField()
    mobile = models.PositiveIntegerField()
    address = models.CharField(max_length=200)
    pincode = models.PositiveIntegerField()
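Resolving the related user then takes one extra, explicit query; a minimal sketch (profile_id is a hypothetical value, and User is Django's auth model):

from django.contrib.auth.models import User

# Look the user up separately, by the id string stored on the profile.
profile = Profile.objects.get(pk=profile_id)  # profile_id: hypothetical
user = User.objects.get(pk=profile.user_id)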
It is as simple as what the documentation defines.
First, the model_container must be a djongo model; the User model you are passing is Django's own model, not a djongo one.
Second, make your model_container model abstract by declaring it so in its Meta class, as given below:
from djongo import models

class Blog(models.Model):
    name = models.CharField(max_length=100)

    class Meta:
        abstract = True

class Entry(models.Model):
    blog = models.EmbeddedField(model_container=Blog)
    headline = models.CharField(max_length=255)
Ref: https://www.djongomapper.com/get-started/#embeddedfield
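Applied to the question, that means embedding an abstract stand-in instead of the concrete auth User; a sketch adapting the pattern above (the UserInfo model and its fields are illustrative, not from the original post):

from djongo import models

class UserInfo(models.Model):
    # Abstract djongo model, so it is valid as a model_container.
    username = models.CharField(max_length=150)
    email = models.EmailField()

    class Meta:
        abstract = True

class Profile(models.Model):
    user = models.EmbeddedField(model_container=UserInfo)
    mobile = models.PositiveIntegerField()
    address = models.CharField(max_length=200)
    pincode = models.PositiveIntegerField()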

EmbeddedDocumentSerializer runs query for every ReferenceField

I have the following models and serializer; the goal is for only one query to run when the serializer runs:
Models:
import datetime

from mongoengine import (Document, EmbeddedDocument, ObjectIdField,
                         EmailField, StringField, DateTimeField,
                         ReferenceField, EmbeddedDocumentListField)

class Assignee(EmbeddedDocument):
    id = ObjectIdField(primary_key=True)
    assignee_email = EmailField(required=True)
    assignee_first_name = StringField(required=True)
    assignee_last_name = StringField()
    assignee_time = DateTimeField(required=True, default=datetime.datetime.utcnow)
    user = ReferenceField('MongoUser', required=True)
    user_id = ObjectIdField(required=True)

class MongoUser(Document):
    email = EmailField(required=True, unique=True)
    password = StringField(required=True)
    first_name = StringField(required=True)
    last_name = StringField()
    assignees = EmbeddedDocumentListField(Assignee)
Serializers:
class AssigneeSerializer(EmbeddedDocumentSerializer):
    class Meta:
        model = Assignee
        fields = ('assignee_first_name', 'assignee_last_name', 'user')
        depth = 0

class MongoUserSerializer(DocumentSerializer):
    assignees = AssigneeSerializer(many=True)

    class Meta:
        model = MongoUser
        fields = ('id', 'email', 'first_name', 'last_name', 'assignees')
        depth = 2
When checking the Mongo profiler I see 2 queries for the MongoUser document. If I remove the assignees field from the MongoUserSerializer, there is only one query.
As a workaround I've tried using the user_id field to store only the ObjectId, and changed AssigneeSerializer to:
class AssigneeSerializer(EmbeddedDocumentSerializer):
    class Meta:
        model = Assignee
        fields = ('assignee_first_name', 'assignee_last_name', 'user_id')
        depth = 0
But again there are 2 queries. I think EmbeddedDocumentSerializer fetches all the fields, issuing a query for each ReferenceField, and the fields tuple is only applied after those queries have run.
How to use ReferenceField and not run a separate query for each reference when serializing?
I ended up with a workaround: not using ReferenceField. Instead I am using ObjectIdField:

# user = ReferenceField("MongoUser", required=True)  # removed now
user = ObjectIdField(required=True)
And changed the value assignment as follows:
- if assignee.user == MongoUser:
+ if assignee.user == MongoUser.id:
It is not the best way, since we lose the ReferenceField functionality, but it is better than the serializer issuing 30 queries.
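If the actual user documents are needed later, they can still be resolved in a single round trip instead of one query per assignee; a minimal sketch, assuming the models above (mongo_user is a hypothetical MongoUser instance):

# Collect the stored ObjectIds, then fetch all referenced users at
# once with an $in query instead of dereferencing one by one.
user_ids = [a.user for a in mongo_user.assignees]
users_by_id = {u.id: u for u in MongoUser.objects(id__in=user_ids)}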
It's a very interesting question and I think it is related to Mongoengine's DeReference policy: https://github.com/MongoEngine/mongoengine/blob/master/mongoengine/dereference.py.
Namely, your mongoengine Documents have a method MongoUser.objects.select_related() with a max_depth argument, which should be large enough for Mongoengine to traverse 3 levels of depth (MongoUser -> assignees -> Assignee -> user) and cache all the related MongoUser objects for the current MongoUser instance. We should probably call this method somewhere in DRF-Mongoengine's DocumentSerializers to prefetch the relations, but currently we don't.
See this post about classical DRF + the Django ORM, which explains how to fight the N+1 requests problem by prefetching in classical DRF. Basically, you need to override the get_queryset() method of your ModelViewSet to use the select_related() method:
from rest_framework_mongoengine.viewsets import ModelViewSet

class MongoUserViewSet(ModelViewSet):
    def get_queryset(self):
        queryset = MongoUser.objects.all()
        # Set up eager loading to avoid N+1 selects; select_related
        # returns the dereferenced result, so keep the return value.
        queryset = queryset.select_related(max_depth=3)
        return queryset
Unfortunately, I don't think the current implementation of ReferenceField in DRF-Mongoengine is smart enough to handle these querysets appropriately. Maybe ComboReferenceField will work?
Still, I've never used this feature and haven't had time to play with these settings myself, so I'd be grateful if you shared your findings.

Lift-mapper - inserting items to database

I am trying to add an item to an H2 database. My code is:
class Test extends LongKeyedMapper[Test] with IdPK {
  def getSingleton = Test

  object name extends MappedString(this, 100)
}
and Test.create.name("some_name").id(2).save, but I always get java.lang.Exception: Do not have permissions to set this field. What am I doing wrong? The connection is of course open, and I have permission to access the data in the database.
IdPK extends MappedLongIndex, which is not writable by default; that's why it restricts you from setting the field. Usually you let the DB generate the PK ID automatically, via an autoincrement field (Postgres, MySQL), a trigger plus a sequence (Oracle), etc., so in most common scenarios you don't need to set this field yourself. To be able to set it anyway, add an override like this to your field:
override def writePermission_? = true

sqlalchemy NoSuchTableError when table exists in database

I am using PostgreSQL + Psycopg2 with SQLAlchemy. I've already created my database "new_db" using the pgAdmin III tool, and a new schema in it called "new_db_schema". Under this schema are all the tables I need. My code looks like this:
from sqlalchemy import create_engine, Column, String, Boolean
from sqlalchemy.ext.declarative import declarative_base

engine_text = 'postgresql+psycopg2://root:12345@localhost:5432/db_name'
database_engine = create_engine(engine_text, echo=True)

Base = declarative_base(database_engine)

class Some_table(Base):
    __tablename__ = 'some_table'  # This table exists in the database.
    __table_args__ = {'autoload': True}

    col1 = Column(String, primary_key=True)
    col2 = Column(Boolean)

    def __init__(self, col1_name, col2_name):
        self.col1 = col1_name
        self.col2 = col2_name

if __name__ == "__main__":
    a = Some_table('blah', 'blah')
Now when I try to run this code, I get sqlalchemy.exc.NoSuchTableError: some_table.
Since I already have the database set up with all the tables, I would like to autoload them while creating the classes. Am I missing something here? I need to write such classes for all the tables present in the database. Any help would be greatly appreciated.
Thanks
You can either:
Use schema-qualified table names: __tablename__ = 'new_db_schema.some_table'. You then have to use them everywhere, including in string arguments to ForeignKey etc. (see the sketch after the event example below).
Alter the search_path in the database: SET search_path TO new_db_schema;. This SQL command has session scope, so you have to issue it at the start of every connection, using SQLAlchemy's event system.
Like this:
from sqlalchemy import create_engine, event

def init_search_path(connection, conn_record):
    cursor = connection.cursor()
    try:
        cursor.execute('SET search_path TO new_db_schema;')
    finally:
        cursor.close()

engine = create_engine('postgresql+psycopg2://...')
event.listen(engine, 'connect', init_search_path)
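For the first option, SQLAlchemy can also carry the schema as a separate argument instead of a dotted table name; a minimal sketch, reusing the Some_table class from the question:

class Some_table(Base):
    __tablename__ = 'some_table'
    # 'schema' tells SQLAlchemy which schema to reflect the table
    # from, without renaming the table itself.
    __table_args__ = {'schema': 'new_db_schema', 'autoload': True}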

tastypie - List related resources keys instead of urls

When I have a related Resource, I would like to list foreign keys instead of a URL to that resource. How is that possible, aside from dehydrating it?
I'm not sure it's possible without dehydrating the field. I usually have utility functions that handle the dehydration of foreign key and many-to-many relationships, something like this:
# api_utils.py

def many_to_many_to_ids(bundle, field_name):
    field_ids = getattr(bundle.obj, field_name).values_list('id', flat=True)
    field_ids = map(int, field_ids)
    return field_ids

def foreign_key_to_id(bundle, field_name):
    field = getattr(bundle.obj, field_name)
    field_id = getattr(field, 'id', None)
    return field_id
And apply them to the fields like so:
# api.py
from functools import partial

class CompanyResource(CommonModelResource):
    categories = fields.ManyToManyField(CompanyCategoryResource, 'categories')

    class Meta(CommonModelResource.Meta):
        queryset = Company.objects.all()

    dehydrate_categories = partial(many_to_many_to_ids, field_name='categories')

class HotDealResource(CommonModelResource):
    company = fields.ForeignKey(CompanyResource, 'company')

    class Meta(CommonModelResource.Meta):
        queryset = HotDeal.objects.all()

    dehydrate_company = partial(foreign_key_to_id, field_name='company')
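If you prefer explicit methods over partials, tastypie's documented dehydrate_FOO convention works the same way; a sketch for the company field, reusing the helper above:

class HotDealResource(CommonModelResource):
    company = fields.ForeignKey(CompanyResource, 'company')

    class Meta(CommonModelResource.Meta):
        queryset = HotDeal.objects.all()

    def dehydrate_company(self, bundle):
        # Replace the resource URI with the raw foreign key id.
        return foreign_key_to_id(bundle, field_name='company')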