Good morning,
I looked in the forum here and could not find the answer. If I overlooked it, I apologize...
I just joined an existing project team using WSO2 Identity Server 5.6 and the API Gateway.
I understand that WSO2 Identity Server is made up of several components, among them OpenLDAP (which contains a Berkeley DB) and a PostgreSQL database.
The current backup/restore procedure simply tars the whole directory that contains all files related to WSO2 (including the directories that contain database files), without stopping WSO2.
I'm a bit doubtful about this kind of backup process. Is that the right thing to do?
If not, what would the right procedure be?
If I understand correctly, PostgreSQL is only used to store WSO2 'internal state data', so backing it up may not be useful. So I'm thinking that maybe an export of OpenLDAP (via the slapcat command) would be enough.
Backing up OpenLDAP is probably not enough. Depending on how the WSO2 components (IS + APIM) are installed, you may also have H2 DBs for the local registry, Solr indexes for the UI, Velocity templates for API deployments, and/or Synapse XMLs for the APIs.
I recommend first comparing your installation directories and files against the vanilla zips, so you know how it is configured, before changing your backup process.
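As a general rule, for the database-backed parts it is safer to take consistent exports while the services run (slapcat for OpenLDAP, pg_dump for PostgreSQL) than to tar live database files; the file-based artifacts (Synapse XMLs, templates, and so on) can be tarred as ordinary files. Here is a minimal sketch of such a dump step in Java (WSO2's own platform language); the paths and the database/user names are hypothetical, and both tools are assumed to be on the PATH:

```java
import java.io.File;
import java.io.IOException;

// Minimal sketch: take consistent dumps instead of tar-ing live DB files.
// Hypothetical values: /backups as target, "wso2" / "wso2carbon" as the
// PostgreSQL user and database. slapcat may also need -f/-F to point at
// your slapd configuration, and usually has to run as root.
public class Wso2DumpBackup {

    static void run(String... command) throws IOException, InterruptedException {
        Process p = new ProcessBuilder(command)
                .inheritIO()   // surface the tools' output on our console
                .start();
        if (p.waitFor() != 0) {
            throw new IOException("Command failed: " + String.join(" ", command));
        }
    }

    public static void main(String[] args) throws Exception {
        new File("/backups").mkdirs();

        // OpenLDAP: slapcat writes a consistent LDIF export and is safe to
        // run against the Berkeley DB backend while slapd is up.
        run("slapcat", "-l", "/backups/openldap.ldif");

        // PostgreSQL: pg_dump takes a consistent snapshot of a running DB.
        run("pg_dump", "-U", "wso2", "-f", "/backups/wso2carbon.sql", "wso2carbon");
    }
}
```

Restoring is then slapadd for the LDIF and psql (or pg_restore) for the dump, which is far more predictable than untarring raw database files.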
Related
I develop a Spring Boot REST API project using JDBC, and the database is PostgreSQL. I added authorization with Keycloak. I want to use User Federation because I would like to authenticate against the users in my PostgreSQL DB. How can I set that up, and what are the alternatives if I don't use User Federation?
I faced the same problem recently. I have different clients with different RDBMSs, so I decided to address the problem in a way that lets me reuse my solution across multiple clients.
I published my solution as a multi-RDBMS implementation (Oracle, MySQL, PostgreSQL, SQL Server) to solve simple database federation needs, supporting bcrypt and several other types of hashes.
Just build and deploy this solution on Keycloak and configure it through the admin console, providing the JDBC connection string, login, password, the required SQL queries, and the type of hash used.
Feel free to clone, fork or do whatever you need to solve your issue.
GitHub repo:
https://github.com/opensingular/singular-keycloak-database-federation
I'm doing a similar development, but with Oracle and JSF.
I created a project with three classes:
one implementing UserStorageProvider, UserLookupProvider and CredentialInputValidator
one implementing UserStorageProviderFactory
one extending AbstractUserAdapter
Then I created another project which builds an EAR file containing the JAR generated by the first project, plus the driver JAR (PostgreSQL's in your case) inside a lib folder.
Finally, the EAR file is copied into the /opt/jboss/keycloak/standalone/deployments/ folder of the Keycloak server, where it gets auto-deployed as an SPI. You then need to add this provider in the User Federation section of the Keycloak admin console.
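To make the three-class layout concrete, here is a heavily trimmed skeleton against the pre-Quarkus (WildFly-era) Keycloak SPI, which matches the /opt/jboss/keycloak path above; newer Keycloak releases flipped some of these method signatures. The class names and the commented-out SQL are hypothetical, and a real provider needs JDBC wiring, error handling, and password-hash checking:

```java
import org.keycloak.component.ComponentModel;
import org.keycloak.credential.CredentialInput;
import org.keycloak.credential.CredentialInputValidator;
import org.keycloak.models.KeycloakSession;
import org.keycloak.models.RealmModel;
import org.keycloak.models.UserModel;
import org.keycloak.models.credential.PasswordCredentialModel;
import org.keycloak.storage.StorageId;
import org.keycloak.storage.UserStorageProvider;
import org.keycloak.storage.UserStorageProviderFactory;
import org.keycloak.storage.adapter.AbstractUserAdapter;
import org.keycloak.storage.user.UserLookupProvider;

// 1) The provider: looks users up in the external DB and validates passwords.
public class DbUserStorageProvider
        implements UserStorageProvider, UserLookupProvider, CredentialInputValidator {

    private final KeycloakSession session;
    private final ComponentModel model;

    public DbUserStorageProvider(KeycloakSession session, ComponentModel model) {
        this.session = session;
        this.model = model;
    }

    @Override
    public UserModel getUserByUsername(String username, RealmModel realm) {
        // Hypothetical: SELECT ... FROM users WHERE username = ? via JDBC,
        // then wrap the found row in the adapter below (null if absent).
        return new ExternalUser(session, realm, model, username);
    }

    @Override
    public UserModel getUserById(String id, RealmModel realm) {
        // Federated ids look like "f:<component id>:<external id>".
        return getUserByUsername(StorageId.externalId(id), realm);
    }

    @Override
    public UserModel getUserByEmail(String email, RealmModel realm) {
        return null; // add an email lookup if your schema supports it
    }

    @Override
    public boolean supportsCredentialType(String credentialType) {
        return PasswordCredentialModel.TYPE.equals(credentialType);
    }

    @Override
    public boolean isConfiguredFor(RealmModel realm, UserModel user, String credentialType) {
        return supportsCredentialType(credentialType);
    }

    @Override
    public boolean isValid(RealmModel realm, UserModel user, CredentialInput input) {
        // Hypothetical: compare input.getChallengeResponse() against the
        // (hashed) password column of your users table.
        return false;
    }

    @Override
    public void close() { }
}

// 2) The factory: tells Keycloak how to instantiate the provider.
class DbUserStorageProviderFactory
        implements UserStorageProviderFactory<DbUserStorageProvider> {

    @Override
    public DbUserStorageProvider create(KeycloakSession session, ComponentModel model) {
        return new DbUserStorageProvider(session, model);
    }

    @Override
    public String getId() {
        return "db-user-storage"; // the name shown in the User Federation UI
    }
}

// 3) The adapter: a read-only UserModel view over an external user row.
class ExternalUser extends AbstractUserAdapter {

    private final String username;

    ExternalUser(KeycloakSession session, RealmModel realm,
                 ComponentModel model, String username) {
        super(session, realm, model);
        this.username = username;
    }

    @Override
    public String getUsername() {
        return username;
    }
}
```

For the JAR to be picked up, the factory also has to be listed in a META-INF/services/org.keycloak.storage.UserStorageProviderFactory file.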
In the Application Map feature of Azure Application Insights, it seems that the PostgreSQL DB dependency is not shown by default, whereas Azure Storage queues and blobs are shown, as are other HTTP dependencies. This doc by Microsoft doesn't explain why either.
Does anyone know why and when this feature will be available?
I believe it's because, for .NET, PostgreSQL is not an auto-collected dependency. You will have to wire it up manually, according to this article.
I'm developing an application with a PostgreSQL DB. It also stores files in the file system and keeps their paths in the DB. I want an open-source solution for backing up the app state, including both the database and the file storage.
Mandatory requirements:
supports backing up a PostgreSQL DB while it is running
supports backing up a folder
supports compression
Optional requirements:
can view, create, and restore backups in a web console (important)
supports plugins or custom backup/restore tasks
supports other data stores such as MySQL
supports retention
I've looked at projects like Barman or Amanda, but it seems each one solves only part of the problem.
Should I develop the solution myself?
The application is developed in Java, if it matters.
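For scale: the three mandatory requirements by themselves are only a few lines of Java if you drive pg_dump (which backs up a running database) and compress the file folder yourself, so the real question is how much you need the web console, plugins, and retention. A rough sketch, with hypothetical paths, database name, and user, and pg_dump assumed to be on the PATH:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.List;
import java.util.stream.Collectors;
import java.util.zip.ZipEntry;
import java.util.zip.ZipOutputStream;

// Rough sketch of the three mandatory requirements in plain Java.
// Hypothetical values: database "appdb", user "app", files in /var/app/files.
public class AppBackup {

    public static void main(String[] args) throws Exception {
        Path target = Paths.get("/backups/" + System.currentTimeMillis());
        Files.createDirectories(target);

        // 1) Online PostgreSQL backup: pg_dump snapshots a running database.
        //    -Fc writes the compressed custom format, restorable via pg_restore.
        Process dump = new ProcessBuilder(
                "pg_dump", "-U", "app", "-Fc",
                "-f", target.resolve("appdb.dump").toString(), "appdb")
                .inheritIO().start();
        if (dump.waitFor() != 0) throw new IOException("pg_dump failed");

        // 2) + 3) Back up the file folder, with compression, as a zip archive.
        Path source = Paths.get("/var/app/files");
        List<Path> files;
        try (var stream = Files.walk(source)) {
            files = stream.filter(Files::isRegularFile).collect(Collectors.toList());
        }
        try (ZipOutputStream zip = new ZipOutputStream(
                Files.newOutputStream(target.resolve("files.zip")))) {
            for (Path p : files) {
                zip.putNextEntry(new ZipEntry(source.relativize(p).toString()));
                Files.copy(p, zip);
                zip.closeEntry();
            }
        }
    }
}
```

Note that a dump taken this way is only crash-consistent with the folder copy; if files and DB rows must match exactly, pause writes or snapshot at the storage layer.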
So we're using the new SSDT Microsoft released; pretty cool stuff. We keep a database project under version control with all the schemas, plus an offline database for development, and we can later deploy to a SQL Azure database. We're using EF in development, so my question is: where does the EDMX fit in? Should we update the EDMX file from the offline database or directly from the online SQL Azure? What's the best practice here?
I would say that in your case "the production database is the truth", so I would update from SQL Azure. There's no right answer though, really.
Incidentally, in the early betas of SSDT it was possible to have a reference from an EDMX to an SSDT project, so your source code became the truth (which, in my opinion, is preferable) and the EDMX knew it was always working against "the truth". Unfortunately they ditched this feature and there are no signs of it returning.
For EF to work correctly, the EDMX file has to be in sync with the database you are connecting to. It's hard to answer your question without knowing the development process you follow, but I would imagine you use SQL Azure in production and develop against an on-premises database. Therefore one copy of the EDMX file will be used on the production server. In the development environment you have a "living" copy of the EDMX file that is changed as needed when the local database changes. When you are ready to ship, you deploy your app (including the EDMX file) to the production environment that uses SQL Azure.
If, in your development environment, you update the EDMX file from SQL Azure instead, things will break or behave incorrectly wherever the schema of the database in Azure differs from the schema of your local database.
We want to restore the database that we received from the client as a backup in our development environment, but we are unable to restore it successfully. Can anyone walk us through the steps involved in this restore process? Thanks in advance.
Vijay, if you plan to make a new database out of checkpoints (+journals) made on another (physical) server, then I must disappoint you - it is going to be a painful process. Follow the instructions at http://docs.actian.com/ingres/10.0/migration-guide/1375-upgrading-using-upgradedb (the process is basically the same as for upgradedb). However, if the architecture of the development server is different (say the backup was made on a 32-bit system and the development machine is POWER6-based), then it is impossible to make your development copy of the database with this method.
On top of all this, this method of restoring backups is not officially supported by Actian.
My recommendation is to use the 'unloaddb' tool on the production server, export the database into some directory, scp that directory to your development server, and then use the generated 'copy.in' file to create the development database. NOTE: this is the way supported by Actian, and you can find more details on this page: http://docs.actian.com/ingres/10.0/migration-guide/1610-how-you-perform-an-upgrade-using-unloadreload. This is the preferred way of migrating databases across platforms.
It really depends on how the database has been backed up and provided to you.
In Ingres there is a snapshot (called a checkpoint) that can be restored into a congruent environment, but that can be quite involved.
There is also the output of the copydb and unloaddb commands, which can be reloaded into another database. Things to look out for here are changes in machine architecture, or paths that may have been embedded in the scripts.
Do you know how the database was backed up?