I have a database and the tables in this database are interconnected.
I am using Seam and EJB to process data inside these tables in the database. My backend database is PostgreSQL.
Now what I am trying to do is delete data from one table, but I am getting a PostgreSQL error telling me that I am violating a foreign-key constraint.
I understand that I can delete the record logically instead
- a situation where I keep the row and just set a flag marking it as deleted.
But I don't know how to do this. I know this is simple, but pardon me, I don't know it. Any help will be appreciated. Below is the code that I am using. Thank you for your help.
public void delete() throws java.sql.SQLException {
    System.out.println("I got here FIRST");
    user = em.find(Subscriber.class, subscriber.getId()); // ADDED LATER
    users.remove(subscriber.getId());
    em.remove(subscriber);
    userList();
}
From what I can see at first glance, you perhaps want to delete the user you are querying?
Thus, change em.remove(subscriber); to em.remove(user); // which you load from the find method
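That is, roughly (a sketch of the corrected method; the null check is my addition):

public void delete() throws java.sql.SQLException {
    // load the managed instance and remove that, not the detached reference
    user = em.find(Subscriber.class, subscriber.getId());
    if (user != null) {
        em.remove(user);
    }
    userList(); // refresh the list afterwards, as in your original code
}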
Update
Without knowing what kind of flag you want to check against, let me demonstrate how you can do this:
Let's assume User has a boolean field called disabled, and you want to remove only disabled users.

if (user.isDisabled()) {
    em.remove(user);
}

So you only remove users if the flag is true.
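If by deleting logically you mean keeping the row and only flagging it (which also avoids the foreign-key violation), here is a minimal sketch, assuming Subscriber has a boolean deleted field - the field name is my invention:

public void softDelete() {
    Subscriber managed = em.find(Subscriber.class, subscriber.getId());
    if (managed != null) {
        managed.setDeleted(true); // hypothetical flag field; flag instead of removing,
                                  // so no foreign-key constraint fires
        // inside a transaction the change is flushed automatically at commit
    }
    userList();
}

Every query that lists subscribers then needs a condition like where s.deleted = false, so flagged rows disappear from the UI.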
Related
Problem
I'm building a web app where each user needs to have segregated data (due to confidentiality), but with exactly the same data structures/tables.
Looking around, I think this concept is called multi-tenancy, and it seems as though a good solution is one schema per tenant.
I think sqlalchemy 1.1 implemented some support for this with
session.connection(execution_options={
    "schema_translate_map": {"per_user": "account_one"}})
However, this seems to assume the schema and tables are already created.
I'm not sure how many tenants I'm going to have, so I need to create the schemas, and the tables within them, on the fly when a user's account is created.
Solution
What I've come up with feels like a bit of a hack, which is why I'm posting here to see if there's a better solution.
To create schemas on the fly I'm using
if not engine.dialect.has_schema(engine, user.name):
    engine.execute(sqlalchemy.schema.CreateSchema(user.name))
And then directly afterwards I'm creating the tables using
table = TableModel()
table.__table__.schema = user.name
table.__table__.create(db.session.bind)
With TableModel defined as
class TableModel(Base):
    __tablename__ = 'users'
    __table_args__ = {'schema': 'public'}

    id = db.Column(
        db.Integer,
        primary_key=True
    )
    ...
I'm not too sure whether to inherit from Base or db.Model - db.Model seems to automatically create the table in public, which I want to avoid.
Bonus question
Once the schemas are created, if, down the line, I need to add tables to all the schemas - what's the best way to manage that? Does Flask-Migrate natively handle that?
Thanks!
If anyone sees this in the future: this solution seems to broadly work; however, I've recently run into a problem.
This line
table.__table__.schema = user.name
seems to create some odd behaviour where the value of user.name persists in other areas of the app, so if you switch users, the table from the previous user is incorrectly queried.
I'm not totally sure why this happens, and still investigating how to fix it.
I am wondering what the best practice would be for updating a record using JPA. I have currently devised my own pattern, but I suspect it is by no means the best practice. What I do is essentially look to see if the record is in the db; if I don't find it, I call the entityManager.persist(object) method; if it does exist, I call the entityManager.merge(object) method.
The reason that I ask is that I found out that the merge method already looks to see if the record is in the database; if it is not in the db, it proceeds to add it, and if it is, it makes the necessary changes. Also, do you need to nest the merge call in getTransaction().begin() and getTransaction().commit()? Here is what I have so far...
try {
    launchRet = emf.find(QuickLaunch.class, launch.getQuickLaunchId());
    if (launchRet != null) {
        launchRet = emf.merge(launch);
    }
    else {
        emf.getTransaction().begin();
        emf.persist(launch);
        emf.getTransaction().commit();
    }
}
If the entity you're trying to save already has an ID, then it must exist in the database. If it doesn't exist, you probably don't want to blindly recreate it, because it means that someone else has deleted the entity, and updating it doesn't make much sense.
The merge() method persists an entity that is not persistent yet (doesn't have an ID or version), and updates the entity if it is persistent. You thus don't need to do anything other than calling merge() (and returning the value returned by this call to merge()).
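In code, the whole find/persist/merge dance then collapses to a single call - a sketch, assuming an injected EntityManager named em:

public QuickLaunch save(QuickLaunch launch) {
    // merge() inserts the entity when it is new and updates it otherwise;
    // keep working with the returned managed instance, not the argument
    return em.merge(launch);
}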
A transaction is a functional atomic unit of work. It should be demarcated at a higher level (in the service layer). For example, transferring money from one account to another needs both account updates to be done in the same transaction, to make sure both changes either succeed or fail. Removing money from one account and failing to add it to the other would be a major bug.
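For illustration, with EJB container-managed transactions that demarcation lives on the service method rather than around individual calls; AccountService and Account here are made up for the example:

import java.math.BigDecimal;
import javax.ejb.Stateless;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;

@Stateless
public class AccountService {

    @PersistenceContext
    private EntityManager em;

    // one atomic unit of work: both updates commit together or roll back together
    public void transfer(long fromId, long toId, BigDecimal amount) {
        Account from = em.find(Account.class, fromId);
        Account to = em.find(Account.class, toId);
        from.setBalance(from.getBalance().subtract(amount));
        to.setBalance(to.getBalance().add(amount));
        // no explicit begin()/commit(): the container wraps the method in a transaction
    }
}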
I have a complex reporting application that allows clients to log in and view reports for their client data. There are several sections of the application where there are database calls, using various controllers. I need to make sure that client A doesn't get client B's information via header manipulation.
The system authenticates users and assigns them a clientID and roleID. If your roleID > 1, that means you work for the company hosting the data, and you can see all client info. I want to create a catch-all that basically works like this:
if ($roleID > 1) {
    // ...send query to database
} else {
    if (/* does this query select a record with a clientID other than my $auth->clientID? */) {
        // do not execute query
    } else {
        // execute query
    }
}
The problem is, I want this to run for every query that goes to the server... how can I place this code as a "roadblock" between the application and the DB? I already use Zend_Profiler to look at queries, so I know it is somehow possible, but I cannot discern this from the Profiler code...
I can always write an authentication function and pass selected queries through it that way, but this catch-all would be easier to implement across all of the calls and would be future-proof. Any help is appreciated.
This is an application design fault.
You should use a service architecture - a single entry point for queries, implemented as a service, with all the checks inside it.
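Your app is PHP/Zend, but the shape is language-agnostic; here is a rough sketch of such a single entry point (in Java, all names illustrative, and it assumes baseSql carries no WHERE clause of its own):

import java.sql.*;

public class ClientQueryService {

    private final Connection con;
    private final int roleId;
    private final int clientId;

    public ClientQueryService(Connection con, int roleId, int clientId) {
        this.con = con;
        this.roleId = roleId;
        this.clientId = clientId;
    }

    // every read in the application goes through here, so the tenancy
    // check cannot be forgotten in an individual controller
    public ResultSet findReports(String baseSql) throws SQLException {
        if (roleId > 1) {
            return con.createStatement().executeQuery(baseSql); // staff: unrestricted
        }
        PreparedStatement ps =
            con.prepareStatement(baseSql + " WHERE client_id = ?");
        ps.setInt(1, clientId); // clients: the filter is forced, not caller-supplied
        return ps.executeQuery();
    }
}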
If this is something you want run on every query, I'd suggest extending Zend_Db_Select and overriding either the query() or assemble() functions to add in your logic. You'll also want to add a way for it to be aware of your $auth object.
Another option is to extend your database adapter so you can intercept the queries directly. IMO, you should try to do this at the application level, though.
Depending on your database server, you can put a trace on the DB side.
Here's an example for Oracle:
http://orafaq.com/wiki/SQL_Trace
Say that I have a User table in my read database (I use SQL Server). In a regular read/write database I can put a unique index on the table to make sure that two users aren't added to the table with the same email address.
So if I try to add a user with an email address that already exists in my table for a different user, SQL Server will throw an exception back.
In CQRS I can't do that, since if I decouple the writes to my read database from the domain model by putting them on an asynchronous queue, I won't get the exception thrown back to me; I will return "OK" to the UI, and the user will think that he was added to the database, when in fact he will never be added to the read database.
I can do a search in the read database, checking whether there is already a user with that email address, and if there is one, throw an exception back to the UI. But if two users press the save button at the same time, I will do two checks against the database, see that there isn't any user with that email address, and send back that it's okay for both. Both go on my queue, and later one will fail (by hitting the unique index).
Am I supposed to load all users from my event source (it's a SQL Server) and then do the check on that collection, to see if I have a user that already has this email address? That sounds a bit crazy to me...
How have you people solved it?
The only way I can see is to not use an asynchronous queue but a synchronous one, but that will affect performance really badly, especially when you have many "read storages" to write to...
Need some help here...
Searching for CQRS Set Based Validation will give you solutions to this issue.
Greg Young posted about the business impact of embracing eventual consistency http://codebetter.com/gregyoung/2010/08/12/eventual-consistency-and-set-validation/
Jérémie Chassaing posted about discovering missing aggregate roots in the domain http://thinkbeforecoding.com/post/2009/10/28/Uniqueness-validation-in-CQRS-Architecture
Related stack overflow questions:
How to handle set based consistency validation in CQRS?
CQRS Validation & uniqueness
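One common mitigation you will find under that search term - sketched here with illustrative names, not taken verbatim from the posts above - is to keep a small reservation table with a unique index on the email column and write to it synchronously on the command side, before the command is accepted:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class EmailReservationService {

    private final Connection con; // command-side connection

    public EmailReservationService(Connection con) {
        this.con = con;
    }

    // email_reservations has a unique index on email, so two concurrent
    // registrations for the same address cannot both succeed
    public boolean tryReserve(String email) throws SQLException {
        try (PreparedStatement ps = con.prepareStatement(
                "INSERT INTO email_reservations (email) VALUES (?)")) {
            ps.setString(1, email);
            ps.executeUpdate();
            return true; // reserved: accept the command and queue the event
        } catch (SQLException e) {
            // 2601/2627 are SQL Server's duplicate-key error codes
            if (e.getErrorCode() == 2601 || e.getErrorCode() == 2627) {
                return false; // reject immediately instead of returning "OK"
            }
            throw e;
        }
    }
}

The read side stays eventually consistent; only this one small set-validation check is synchronous.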
0x80040237 Cannot insert duplicate key.
I'm trying to write an import routine for MSCRM4.0 through the CrmService.
This has been successful up until this point. Initially I was just letting CRM generate the primary keys of the records, but my client wanted the ability to set the key of our custom entity to predefined values. Potentially this enables us to know what data was created by our installer, and what data was created post-install.
I tested to ensure that the GUIDs can be set when calling the CrmService.Update() method, and the results indicated that records were created with our desired values. I ran my import and everything seemed successful. While modifying my validation code for the import files, I deleted the data (through the CRM browser interface) and tried to re-import. Unfortunately it now throws a duplicate key error.
Why is this error being thrown? Does the CRM interface delete the record, or does it still exist but stay hidden from the user's eyes? Is there a way to ensure that a deleted record is permanently deleted and the GUID becomes free? In a live environment these GUIDs would never have existed, but during my development I need these imports to be successful.
By the way, considering I'm having this issue, does this imply that statically setting GUIDs is not a recommended practice?
As far as I can tell, entities are soft-deleted, so it would not be possible to reuse that GUID unless you (or the deletion service) deleted the entity out of the database.
For example, in the LeadBase table you will find a field called DeletionStateCode; a value of 0 implies the record has not been deleted.
A value of 2 marks the record for deletion. There's a deletion service that runs every 2(?) hours to physically delete those records from the table.
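If you want to see which records are soft-deleted and still holding their GUIDs, you can check the base table directly - a sketch; the connection details are placeholders, and you would query your own custom entity's base table the same way:

import java.sql.*;

public class DeletedRecordCheck {
    public static void main(String[] args) throws SQLException {
        try (Connection con = DriverManager.getConnection(
                 "jdbc:sqlserver://crmserver;databaseName=Org_MSCRM", "user", "pass");
             Statement st = con.createStatement();
             // DeletionStateCode = 2 means "waiting for the deletion service"
             ResultSet rs = st.executeQuery(
                 "SELECT LeadId FROM LeadBase WHERE DeletionStateCode = 2")) {
            while (rs.next()) {
                System.out.println("Soft-deleted, GUID still taken: " + rs.getString(1));
            }
        }
    }
}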
I think Zahir is right, try running the deletion service and try again. There's some info here: http://blogs.msdn.com/crm/archive/2006/10/24/purging-old-instances-of-workflow-in-microsoft-crm.aspx
Zahir is correct.
After you import and delete the records, you can kick off the deletion service at a time you choose with this tool. That will make it easier to test imports and reimports.