JPA model update [1.1.1] - jpa

I'm sure this is something very simple, but for the life of me I can't find the correct keywords on Google.
Basically, I've updated a couple of models since my last deployment. Dev is set up with the jpa.ddl default setting of create-drop. Now, I've read that prod isn't supposed to run with jpa.ddl=update, so does that mean I have to manually script a schema change? I couldn't find any documentation describing the correct way.
I am also using playapps, so the database is set up there. I set up SSL, so I should have SQL access via the command line; however, I was having difficulty figuring out the syntax for modifying the DB. I'm used to a GUI environment such as phpMyAdmin or Microsoft's SQL Server Management Studio.
The errors I'm getting are specifically the following two (when running the application after uploading to prod):
Unsuccessful: alter table PhotoSlide add index FK57E3FABF5C905145 (aPhoto_id), add constraint FK57E3FABF5C905145 foreign key (aPhoto_id) references StorePhoto (id)
Cannot add or update a child row: a foreign key constraint fails (play/#sql-2e29_32, CONSTRAINT FK57E3FABF5C905145 FOREIGN KEY (aPhoto_id) REFERENCES StorePhoto (id))

You can achieve that by using the migration module. More details here:
http://www.playframework.org/modules/migrate
The flow is as follows: you push new code to prod --> you run migrations --> you restart the server.
More documentation at:
https://github.com/dcardon/play-migrate/blob/master/documentation/manual/home.textile
As of version 1.2.4, Play also supports database evolutions out of the box:
http://www.playframework.org/documentation/1.2.4/evolutions
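With evolutions enabled, the schema change would live in a numbered script such as db/evolutions/2.sql. The sketch below is an assumption based on the errors quoted above, not your actual schema: the "Cannot add or update a child row" message means existing PhotoSlide rows reference StorePhoto ids that no longer exist, so those rows have to be cleaned up before the constraint can be added (the example NULLs them out, which assumes the column is nullable — adapt as needed):

```sql
# --- !Ups

# Clean up orphaned references first; the FK cannot be created while
# aPhoto_id values point at missing StorePhoto rows.
UPDATE PhotoSlide SET aPhoto_id = NULL
    WHERE aPhoto_id NOT IN (SELECT id FROM StorePhoto);

ALTER TABLE PhotoSlide
    ADD CONSTRAINT FK_PhotoSlide_StorePhoto
    FOREIGN KEY (aPhoto_id) REFERENCES StorePhoto (id);

# --- !Downs

ALTER TABLE PhotoSlide DROP FOREIGN KEY FK_PhotoSlide_StorePhoto;
```

Play records which evolution scripts have been applied, so the same script runs once per environment instead of relying on jpa.ddl=update in prod.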

Related

Change fields in table under /models folders does not update table schema

I'm having trouble changing fields in my database table. I rewrote them in my Sequelize model definition files (the model that defines the table structure).
But when I add a column to this table, the error tells me I have to add a missing column attribute which I deleted before.
The migration file is also modified, and then I ran the command sequelize db:migrate.
Is there anything I missed? I am pretty new to databases and ORMs. Please give me some advice, thank you!
I'm pretty new too, but I'd say you also have to reflect the changes in the migration file and then run sequelize db:migrate to make them take effect in the database.
Actually, the error came from somewhere else trying to access attributes that no longer exist. I tried to migrate my database using the commands (sequelize db:migrate or npx sequelize-cli db:migrate) and they did work. If the schema did not update, check whether sync() is used. More info can be found here: https://sequelize.org/master/manual/model-basics.html#extending--a-href-----class-lib-model-js-model-html--model--a-
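For reference, editing the model file alone doesn't change an existing table; each change needs a matching migration that sequelize-cli can apply. A minimal sketch of such a migration file — the table name, column name, and filename below are hypothetical placeholders, not taken from the question:

```javascript
// migrations/20240101000000-add-status-to-users.js
// Hypothetical example: "Users" and "status" are placeholder names.
'use strict';

module.exports = {
  // Applied by `sequelize db:migrate`.
  up: async (queryInterface, Sequelize) => {
    await queryInterface.addColumn('Users', 'status', {
      type: Sequelize.STRING,
      allowNull: true,
    });
  },
  // Applied by `sequelize db:migrate:undo`.
  down: async (queryInterface, Sequelize) => {
    await queryInterface.removeColumn('Users', 'status');
  },
};
```

The model definition then has to declare the same `status` attribute, otherwise reads and writes through the model will disagree with the actual table.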

Missing table pubsub_state in ejabberd

I have ejabberd working well but my logs are filling up with references to a missing table pubsub_state. I have a bunch of other tables with that prefix but not that one. Where can I find the definition so that I can add it?
I am using PostgreSQL as my back end.
The definition is in pg.sql but it's strange it wasn't created when you created the other tables...
https://github.com/processone/ejabberd/blob/master/sql/pg.sql#L243
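Assuming command-line access to the server, one way to apply just the missing definition is to fetch the schema file that matches your ejabberd version and paste only the pubsub_state block into psql. The database name and user below are examples; adjust them to your installation:

```shell
# Fetch the upstream schema file (master shown here; use the tag
# matching your ejabberd version so the table layout is correct).
curl -sO https://raw.githubusercontent.com/processone/ejabberd/master/sql/pg.sql

# Locate the definition and the lines that follow it.
grep -n -A 10 'CREATE TABLE pubsub_state' pg.sql

# Then connect and paste just that CREATE TABLE statement at the prompt.
psql -U ejabberd -d ejabberd
```

Running the whole pg.sql file would error on the tables that already exist, which is why copying out just the one statement is safer.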

Talend Open Studio : creating table in MySQL dynamically

I am trying to find out how to do dynamic creation of tables in MySQL using Talend.
In other ETL tools such as Pentaho they have a specific component called "metadata" to do this.
So my use case is the following:
1) Create database manually in MySQL
2) Use Talend to read CSV Header info, and use this as the fields of a table to be created in MySQL using Talend.
I have searched and could not find anything for this specific feature online.
[Note : using Talend Open Studio for Big Data Version: 6.4.1]
UPDATE:
I have made progress on this, but running into issues trying to generate a primary key on the MySQL output using the NumericSequence function.
My data does not have natively a primary key, but wish to include one:
I have a screenshot of the tMap attached here:
I am also getting a compile/build error. It seems it does not like the primary key generation.
See attached image.
And here also is the tMySQLOutput settings for Primary Key:
More Work Done:
I have changed the name of the primary key in the tMySQL component to match the name in the tMap output component, and I get the same compile error.
I will attach this error here:
To create MySQL tables dynamically in Talend, you can use one of the "Action on table" options in the tMysqlOutput component (in your case, "Create table...").
https://help.talend.com/reader/4I8tDQGtrOPDl5MXAS3Q~w/aDNKleHXlevILu9pnbCoNg
Don't forget to correctly define the PK fields for further inserts, updates, and deletes.
Then, if necessary, you can retrieve the DDL (and DML) script(s) through your favorite database tool (MySQL Workbench, DBeaver, or another).
I hope this answers your question.
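For reference, with a "Create table" action the component builds the CREATE TABLE from the schema you defined (here, from the CSV header). A rough sketch of the kind of DDL that results — the table and column names below are made up for illustration:

```sql
-- Illustrative only: shaped like the DDL tMysqlOutput generates from
-- its schema when "Create table if does not exist" is selected.
CREATE TABLE IF NOT EXISTS my_csv_table (
    id INT NOT NULL,        -- generated sequence used as the key
    col1 VARCHAR(255),      -- columns taken from the CSV header
    col2 VARCHAR(255),
    PRIMARY KEY (id)
);
```

For the generated key itself, the usual tMap expression is Numeric.sequence("s1", 1, 1), with the output column typed as an integer; a type mismatch between that expression and the output column is a common cause of the kind of compile error described above.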

Using "GO" in a SSDT post-deployment script

My SQL Server 2008 R2 database gets deployed from a SSDT project I created. One strange requirement in this case is to change one of the primary key columns to or from IDENTITY depending on where it's deployed. I'm not wild about the requirement but that's not my question.
What I have done is set up a post-deployment script that should run conditionally, depending on where the deployment is occurring. So far, so good.
In my script, I am doing an ALTER TABLE ADD command to set up a new column, populate it, and rename it to the old column. After this, I try to do an UPDATE on the new column and it fails because the column doesn't exist. I put a GO statement right under the ALTER TABLE ADD command, and when I test-run the script in isolation this solves the problem; everything works great.
-- Add a clone of the ID column with no Identity constraint
ALTER TABLE Communication ADD CommunicationIdNoIdentity INT NOT NULL;
GO
UPDATE Communication
SET CommunicationIdNoIdentity = CommunicationID;
However it appears this is not legal in the context of a post-deployment script. With the GO in place I get a build error right there:
Error: SQL72007: The syntax check failed 'Unexpected end of file occurred.' in the batch near
How can I get around this? The BuildAction on the file is set to "None" which I think is correct.
UPDATE: Although the error is emanating from the child script, I finally got it back to the parent. If I do this:
IF #IsDeploymentToDatacenter = 'TRUE'
:r .\FixIdColumnInCommunicationTable.SQL
it works. If I do this (what I had originally), it fails with the errors I described, among others:
IF #IsDeploymentToDatacenter = 'TRUE'
BEGIN
:r .\FixIdColumnInCommunicationTable.SQL
END
I would be tempted to handle it a different way: you could keep the identity in the project but use a deployment contributor to stop it being deployed in the environments where you don't need it.
You should be able to use my generic one or write one yourself:
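Another workaround, if you want to stay inside the post-deployment script: GO is only a batch separator, and SSDT won't accept it inside an included script's BEGIN...END block, but wrapping each statement in EXEC gives each one its own batch so no GO is needed. This is a sketch based on the statements in the question; the $(IsDeploymentToDatacenter) sqlcmd variable is an assumption standing in for whatever flag the project actually uses:

```sql
IF '$(IsDeploymentToDatacenter)' = 'TRUE'
BEGIN
    -- Each EXEC string compiles and runs as its own batch, so the new
    -- column already exists by the time the UPDATE is compiled.
    -- Note: on a table that already has rows, ADD ... NOT NULL needs a
    -- DEFAULT, e.g. INT NOT NULL DEFAULT 0.
    EXEC (N'ALTER TABLE Communication ADD CommunicationIdNoIdentity INT NOT NULL;');
    EXEC (N'UPDATE Communication SET CommunicationIdNoIdentity = CommunicationID;');
END
```

The trade-off is that the statements become strings, so you lose build-time validation of them in the project.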

Dynamic auditing of data with PostgreSQL trigger

I'm interested in using the following audit mechanism in an existing PostgreSQL database.
http://wiki.postgresql.org/wiki/Audit_trigger
but would like (if possible) to make one modification. I would also like to log the primary key's value where it could be queried later. So, I would like to add a field named something like "record_id" to the "logged_actions" table. The problem is that every table in the existing database has a different primary key field name. The good news is that the database has a very consistent naming convention: it's always <table name>_id. So, if a table is named "employee", the primary key is "employee_id".
Is there any way to do this? Basically, I need something like OLD.FieldByName(x) or OLD[x] to get the value out of the id field and put it into the record_id field in the new audit record.
I do understand that I could just create a separate, custom trigger for each table that I want to keep track of, but it would be nice to have it be generic.
edit: I also understand that the key value does get logged in either the old/new data fields. But, what I would like would be to make querying for the history easier and more efficient. In other words,
select * from audit.logged_actions where table_name = 'xxxx' and record_id = 12345;
another edit: I'm using PostgreSQL 9.1
Thanks!
You didn't mention your version of PostgreSQL, which is very important when writing answers to questions like this.
If you're running PostgreSQL 9.0 or newer (or able to upgrade) you can use this approach as documented by Pavel:
http://okbob.blogspot.com/2009/10/dynamic-access-to-record-fields-in.html
In general, what you want is to reference a dynamically named field in a record-typed PL/PgSQL variable like 'NEW' or 'OLD'. This has historically been annoyingly hard, and is still awkward but is at least possible in 9.0.
Your other alternative - which may be simpler - is to write your audit triggers in plperlu, where dynamic field references are trivial.
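To make the 9.0+ approach concrete: the hstore extension can convert a whole row into key/value pairs, which gives exactly the dynamic field access needed for the <table name>_id convention described above. A sketch (function name and the bigint cast are illustrative assumptions; integrate the lookup into the wiki's audit trigger rather than using this standalone):

```sql
-- Requires PostgreSQL 9.0+ and: CREATE EXTENSION hstore;
CREATE OR REPLACE FUNCTION audit.log_pk_example() RETURNS trigger AS $$
DECLARE
    pk_name  text := TG_TABLE_NAME || '_id';  -- e.g. 'employee_id'
    pk_value bigint;
BEGIN
    -- hstore(OLD) turns the row into key => value pairs, so a column
    -- whose name is only known at runtime can be fetched with ->.
    pk_value := (hstore(OLD) -> pk_name)::bigint;
    -- ...then include pk_value in the INSERT INTO audit.logged_actions
    -- as the record_id column.
    RETURN OLD;
END;
$$ LANGUAGE plpgsql;
```

This avoids both per-table triggers and the plperlu dependency, at the cost of requiring the hstore extension.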