I am using PostgreSQL 8.3 and have included SymmetricDS 1.5.1 in my application. Replication works fine from client to server, but replication from server to client does not happen.
I am new to SymmetricDS. Can anyone please give me a checklist for verifying whether my SymmetricDS setup is configured correctly?
Thank you very much.
Your description is very general, so it's not easy to come up with suggestions based on it. It could be that you have only set up one-way replication, that you have set up the root to both push and pull but not manually created a row for the client in NODE_SECURITY, or something else entirely.
I suggest you first verify your configuration against the SymmetricDS user guide. From there, I'd have a look at the log files (SymmetricDS usually gives some sort of sensible hint, although not always) and see if they say anything. Last, I'd try the SymmetricDS forum, as you are most likely to find people there who can answer detailed technical questions.
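As a quick first check, you could also query the SymmetricDS configuration tables directly. A minimal sketch follows; the table and column names are from memory and vary between SymmetricDS versions, so treat them as assumptions and check your version's schema:

    -- Is there a group link that moves data from the server group to the
    -- client group? (data_event_action: 'P' = push, 'W' = wait for pull)
    SELECT * FROM sym_node_group_link;

    -- Does each client node have a security row, so the server will
    -- exchange data with it?
    SELECT node_id, registration_enabled, initial_load_enabled
    FROM sym_node_security;

If the group link only covers the client-to-server direction, or a client node is missing its security row, that would explain replication working one way but not the other.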
Good luck!
I am a beginner with CAS. I want to reset the Principal attributeMap after logging in successfully, and I found no solution in similar questions. Can anyone help me? Thanks for your advice!
CAS version: 6.1.6
The reason you can't "find a solution in similar questions" is that:
It cannot be done without a great deal of coding.
It's a bad idea. You cannot change the verified subject identity after it has been verified. Once the credentials are verified and the attributes are collected, that collection is final.
Rather than asking what is possible, it would be best if you described why you want to do this, and then folks can help you with alternatives once your use case and objectives are clearer.
I'm wondering if it's possible to set up Keycloak in high-availability mode. If so, could you give some advice?
Yes, it's possible.
Have you checked the Keycloak documentation on this topic?
https://www.keycloak.org/docs/latest/server_installation/index.html#_clustering
https://www.keycloak.org/docs/latest/server_installation/#_operating-mode (e.g. Standalone Clustered Mode)
If you need additional help, please add more information to your question. But it would be nice if you read the documentation first :-)
In the next few days, I will have a DB with more than 400 GB of data. I would like to know what a good option would be for splitting the data and log files. Also: is it necessary to create distinct filegroups?
Thanks.
This is one of those "it depends" questions: what is the exact issue you are trying to solve here? A 400 GB file is not really a problem until it becomes a problem.
If you're experiencing issues with I/O throughput, then you might get performance improvements by splitting the data into different files and putting them on separate drives. Putting the log file on a different set of drives is also recommended to improve performance, but if you're not having I/O performance problems, then why bother?
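If you do decide to split things up, the mechanics are straightforward. A minimal T-SQL sketch, with hypothetical database, filegroup, file, and path names (the sizes are placeholders too; tune everything to your environment):

    -- Add a second filegroup and put its data file on a separate drive
    ALTER DATABASE MyBigDb ADD FILEGROUP FG_Data2;

    ALTER DATABASE MyBigDb
    ADD FILE (
        NAME = 'MyBigDb_Data2',
        FILENAME = 'E:\SQLData\MyBigDb_Data2.ndf',
        SIZE = 50GB,
        FILEGROWTH = 5GB
    ) TO FILEGROUP FG_Data2;

    -- Objects can then be created on (or rebuilt onto) the new filegroup:
    -- CREATE CLUSTERED INDEX IX_Orders ON dbo.Orders (OrderId)
    --     WITH (DROP_EXISTING = ON) ON FG_Data2;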
There is a lot of talk about best practices and so on with regard to setting up SQL Server, and there are a few things that are good to follow as a general rule, but if you already have something set up and working and users aren't shouting, why make work for yourself by changing things?
Whenever you make a change like this in SQL Server, make sure you know what problem you are trying to solve, make sure what you're doing is likely to improve or solve that problem, and then take measurements to verify that what you have done has actually improved things.
On the PostgreSQL wiki, the "Replication, Clustering, and Connection Pooling" page (http://wiki.postgresql.org/wiki/Replication,_Clustering,_and_Connection_Pooling) gives the following example of a replication requirement:
"Your users take a local copy of the database with them on laptops when they leave the office, make changes while they are away, and need to merge those with the main database when they return. Here you'd want an asynchronous, lazy replication approach, and will be forced to consider how to handle conflicts in cases where the same record has been modified both on the master server and on a local copy"
And that's pretty much my case. But, unfortunately, the same page says: "(...) A great source for this background is in the Postgres-R Terms and Definitions for Database Replication. The main theoretical topic it doesn't mention is how to resolve conflict resolution in lazy replication cases like the laptop situation, which involves voting and similar schemes."
What I want to know is where I can find material on how to resolve this kind of situation, and which would be the best way to do this in PostgreSQL.
I will have to check into RubyRep, but it seems like Bucardo might be a more widely supported option.
Gabriel Weinberg has an EXCELLENT tutorial on his site for how he uses Bucardo. The guy runs his own search engine called DuckDuckGo and there are quite a few tips and tricks that are optimized for his use cases.
http://www.gabrielweinberg.com/blog/2011/05/replicating-postgresql-with-bucardo.html
Just answering my own question, if anyone ever finds it: I'm using Rubyrep http://www.rubyrep.org/ and it's working.
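For anyone who wants a concrete picture of what resolving such conflicts can look like, here is a minimal "last writer wins" sketch in plain SQL. The table names (items on the master, laptop_items for the returning copy) and the updated_at column are hypothetical; it assumes every change stamps updated_at:

    -- Apply the laptop's changes only where the laptop row is newer;
    -- otherwise the master's version wins and the laptop change is dropped.
    UPDATE items AS m
    SET    body       = l.body,
           updated_at = l.updated_at
    FROM   laptop_items AS l
    WHERE  m.id = l.id
      AND  l.updated_at > m.updated_at;

Tools like Bucardo and rubyrep let you choose this kind of policy (newest wins, one side always wins, or a custom handler) through configuration rather than hand-written SQL.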
I have a discussion DB, and I need a large amount of test data in different sample sizes. Please see the ready-made SELECT, JOIN, and CREATE queries (please scroll down in the link).
How can I automatically generate test data for the DB?
How can I generate test data in different sample sizes?
Is there a ready-made tool?
Here are a couple of suggestions for free tools that generate test data:
Databene Benerator: supports many JDBC-capable database brands, uses an XML format compatible with DbUnit, GPL license.
Super Smack: originally a load-testing tool for MySQL, it also supports PostgreSQL and includes a mock data generator.
A current version of Super Smack appears to be available here
I asked a similar question here on StackOverflow in February, and the two choices above seemed like the best options.
I know this question is super dated, but I was looking for the answer to this exact question today and I came across this:
http://wiki.postgresql.org/wiki/Sample_Databases
Out of the options listed (including built-in tools like pgbench), pgFoundry has several compelling options that work perfectly for the test cases I am working on.
I thought it might help someone like me, so there it is.
I'm not sure how to automatically generate data and insert it into the database (I'm sure you could pull it off with a Python script or something), but if you're just looking for endless blabbering to stick into a DB, this should be helpful.
I'm not a Postgres person, but in many other DBs I've used, a simple mechanism for generating large quantities of test data is a cross join: joining two tables without a join condition multiplies their row counts, as the sketch below shows.
Here's a nice blog post on it (SQL Server-specific, though).
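Since this question is Postgres-flavored, here is a minimal sketch of the same idea in PostgreSQL, using the built-in generate_series function; the table and column names are hypothetical:

    -- Two small inputs cross-joined produce 1,000 x 1,000 = 1,000,000 rows
    CREATE TABLE test_posts AS
    SELECT (gs1 - 1) * 1000 + gs2                 AS id,
           'Test post body #' || gs1 || '-' || gs2 AS body
    FROM generate_series(1, 1000) AS gs1
    CROSS JOIN generate_series(1, 1000) AS gs2;

Scaling the sample up or down is just a matter of changing the bounds of the two series.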