Connection of PostgreSQL database with Corda - postgresql

How do you connect a PostgreSQL database (using pgAdmin) to Corda instead of the H2 database?
What changes need to be made in the node.conf file before the nodes are brought up?

As mentioned in the comments, all you need is to add the following to the node.conf file after you have generated your node.
dataSourceProperties = {
    dataSourceClassName = "org.postgresql.ds.PGSimpleDataSource"
    dataSource.url = "jdbc:postgresql://[HOST]:[PORT]/postgres"
    dataSource.user = [USER]
    dataSource.password = [PASSWORD]
}
database = {
    transactionIsolationLevel = READ_COMMITTED
}
And please remember to wrap all the string values in double quotes (e.g. "Username", "Password").
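For example, a filled-in block might look like this. This is a minimal sketch; the host, port, and credentials below are placeholders for illustration, not values from the question:
dataSourceProperties = {
    dataSourceClassName = "org.postgresql.ds.PGSimpleDataSource"
    dataSource.url = "jdbc:postgresql://localhost:5432/postgres"
    dataSource.user = "corda_user"
    dataSource.password = "corda_password"
}
database = {
    transactionIsolationLevel = READ_COMMITTED
}
Note that the user and password values are quoted here, per the point above.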

Related

Is there a working example of Akka.net Persistence with MongoDb out there anywhere?

I am attempting to configure Akka.Net with journal persistence to MongoDb, but it is throwing an exception that I can't quite figure out. Is there a reference example out there anywhere that I can look at to see how this is supposed to work? I would have expected the examples in the unit tests to fill this need for me, but the tests are missing for the MongoDb implementation of persistence. :(
Here's the error I am getting:
Akka.Actor.ActorInitializationException : Exception during creation --->
System.TypeLoadException : Method 'ReplayMessagesAsync' in type
'Akka.Persistence.MongoDb.Journal.MongoDbJournal' from assembly
'Akka.Persistence.MongoDb, Version=1.0.5.2, Culture=neutral, PublicKeyToken=null'
does not have an implementation.
and here is my HOCON for this app:
--- Edit: Thanks for the tip, Horusiath; based on that I updated to this HOCON, and the Sqlite provider works, but the MongoDb one is still giving an error.
akka {
  actor {
    provider = "Akka.Remote.RemoteActorRefProvider, Akka.Remote"
  }
  remote {
    helios.tcp {
      port = 9870 #bound to a specific port
      hostname = localhost
    }
  }
  persistence {
    publish-plugin-commands = on
    journal {
      #plugin = "akka.persistence.journal.sqlite"
      plugin = "akka.persistence.journal.mongodb"
      mongodb {
        class = "Akka.Persistence.MongoDb.Journal.MongoDbJournal, Akka.Persistence.MongoDb"
        connection-string = "mongodb://localhost/Akka"
        collection = "EventJournal"
      }
      sqlite {
        class = "Akka.Persistence.Sqlite.Journal.SqliteJournal, Akka.Persistence.Sqlite"
        plugin-dispatcher = "akka.actor.default-dispatcher"
        connection-string = "FullUri=file:Sqlite-journal.db?cache=shared;"
        connection-timeout = 30s
        schema-name = dbo
        table-name = event_journal
        auto-initialize = on
        timestamp-provider = "Akka.Persistence.Sql.Common.Journal.DefaultTimestampProvider, Akka.Persistence.Sql.Common"
      }
    }
    snapshot-store {
      #plugin = "akka.persistence.snapshot-store.sqlite"
      plugin = "akka.persistence.snapshot-store.mongodb"
      mongodb {
        class = "Akka.Persistence.MongoDb.Snapshot.MongoDbSnapshotStore, Akka.Persistence.MongoDb"
        connection-string = "mongodb://localhost/Akka"
        collection = "SnapshotStore"
      }
      sqlite {
        class = "Akka.Persistence.Sqlite.Snapshot.SqliteSnapshotStore, Akka.Persistence.Sqlite"
        plugin-dispatcher = "akka.actor.default-dispatcher"
        connection-string = "FullUri=file:Sqlite-journal.db?cache=shared;"
        connection-timeout = 30s
        schema-name = dbo
        table-name = snapshot_store
        auto-initialize = on
      }
    }
  }
}
So, back to my original question: is there a working MongoDb sample that I can examine to learn how this is intended to work?
Configuration requires providing fully qualified type names with assemblies. Try specifying the class as "Akka.Persistence.MongoDb.Journal.MongoDbJournal, Akka.Persistence.MongoDb" (you probably don't need the double quotes either, as it's not an inline string).
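In other words, only the class line needs the assembly-qualified name. A minimal sketch of just the journal section, with the connection string and collection name kept as in the question for illustration:
akka.persistence.journal {
    plugin = "akka.persistence.journal.mongodb"
    mongodb {
        # type name followed by the assembly name
        class = "Akka.Persistence.MongoDb.Journal.MongoDbJournal, Akka.Persistence.MongoDb"
        connection-string = "mongodb://localhost/Akka"
        collection = "EventJournal"
    }
}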
An old thread, but here's a large sample I put together years ago using Akka.Persistence.MongoDb + Clustering: https://github.com/Aaronontheweb/InMemoryCQRSReplication

Slick 3.0.1 limit connections to db

I'm looking at doing something as simple as limiting the number of connections that Slick 3.0.1 opens to a Postgres DB.
The following doesn't work, since after a while the number of connections grows to 18, for example:
source-db = {
    dataSourceClass = "org.postgresql.ds.PGSimpleDataSource"
    properties = {
        url = "jdbc:postgresql://..."
        user = "..."
        password = "..."
    }
    numThreads = 1
    maxConnections = 5
}
If you are in a Play application, you are probably using HikariCP. To change the settings, you need to add something like this to the configuration:
hikaricp {
    minimumIdle = 2
    maximumPoolSize = 5
}
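Where exactly that block goes depends on how the pool is wired up. As an assumption, with Play 2.4+'s built-in JDBC configuration it would typically be nested under the database entry, roughly like this (check the documentation for your Play version for the exact path):
db {
    default {
        hikaricp {
            minimumIdle = 2
            maximumPoolSize = 5
        }
    }
}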

LogFile class error "You cannot execute this operation since the object has not been created."

We wanted to automate a few management operations for a new SQL Server installation, so we started looking into the
LogFile Class
But this class doesn't let us run the Alter() method to change the log file location. It also doesn't let us add a new file and drop an existing file. Does anyone know the internals of this class? :)
NOTE: I know we can run a SQL query, run ALTER DATABASE MODIFY FILE, copy the files, and restart the database. This question is specific to this class.
I also tried to alter an existing file instead of creating a new one and dropping the existing one, and it throws the same error.
ERROR
"{"Drop failed for LogFile 'DBAUtility_log'. "}"
{"You cannot execute this operation since the object has not been created."}
class Program
{
    static void Main(string[] args)
    {
        Server srv = new Server("xx");
        Database db = default(Database);
        db = srv.Databases["DBAUtility"];
        //LogFile LF = new LogFile(srv.Databases.ItemById(0),'DBAUtility_log');
        //Console.WriteLine("DB:", srv.Databases.Count());
        Console.WriteLine(srv.Name);
        Console.WriteLine("DBName" + srv.Databases.ItemById(5).ToString());

        LogFile lf = new LogFile();
        Console.WriteLine("LF:" + lf.ToString());
        lf.Parent = db;
        lf.Name = "DBAUtility_NEWLOG";
        lf.FileName = "M:\\DBFiles\\SQLlog\\1\\DBAUtility_1.ldf";
        lf.Create();

        LogFile lf2 = new LogFile();
        lf2.Parent = db;
        lf2.Name = "DBAUtility_log";
        lf2.FileName = "C:\\Install\\DBAUtility_1.ldf";
        lf2.Drop(); //ERROR HERE
    }
}
Create a log file for this database:
$LogFileName = $db.name + '_Log'
$LogFile = New-Object ('Microsoft.SqlServer.Management.SMO.LogFile') ($db, $LogFileName)
$db.LogFiles.Add($LogFile)
$LogFile.FileName = $LogFileName + '.ldf'
$LogFile.Size = $logfilesize * 1024
$LogFile.GrowthType = 'KB'
$LogFile.Growth = $logfilegrowth * 1024
$LogFile.MaxSize = -1
$db.Create()
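As for the Drop() failure in the question itself: the error suggests lf2 is a brand-new LogFile object that was never created on (or fetched from) the server, so SMO has nothing to drop. A hedged C# sketch of the usual pattern, fetching the existing file from the database's LogFiles collection instead of constructing a new one (names taken from the question, untested):
// Look up the log file that already exists on the server,
// rather than constructing a fresh, uncreated LogFile object.
LogFile existing = db.LogFiles["DBAUtility_log"];
if (existing != null)
{
    existing.Drop(); // drops the file SQL Server already knows about
}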

Saving to new cluster returns error

I'm creating clusters dynamically in Xtend/Java:
for (int i : 0 ..< DistributorClusters.length) {
    val clusterName = classnames.get(i) + clusterSuffix;
    database.command(
        new OCommandSQL('''ALTER CLASS «classnames.get(i)» ADDCLUSTER «clusterName»''')).execute();
}
Then I create the oRole and grant the security to the new oRole:
val queryOroleCreation = '''INSERT INTO orole SET name = '«clusterSuffix»', mode = 0, inheritedRole = (SELECT FROM orole WHERE name = 'Default')''';
val ODocument result = database.command(new OCommandSQL(queryOroleCreation)).execute();
for (int i : 0 ..< classnames.length) {
    database.command(
        new OCommandSQL(
            '''GRANT ALL ON database.cluster.«classnames.get(i)»«clusterSuffix» TO «clusterSuffix»''')).
        execute();
}
Finally, I try to save a JsonObject to one of the newly created clusters. I checked in the database and the cluster exists.
val doc = new ODocument();
doc.fromJSON(jsonToSave.toString());
val savedDoc = database.save(doc, "ClassName"+clusterSuffix);
database.commit();
But Orient returns the following error:
SEVERE: java.lang.IllegalArgumentException: Cluster name 'cluster:ClassNameclusterSuffix' is not configured
My question:
What causes that exception? And can you add values to a newly created cluster?
Edit
The doc object contains references to other classes, e.g.:
{
    #class:"Customer",
    #version:0,
    name:"Kwik-E-Mart",
    user : {
        #class:"User",
        #version:0,
        username: "Apu",
        firstName:"Apu",
        lastName:"Nahasapeemapetilon"
    }
}
The user gets created in the default cluster, but the customer throws the exception.
You should remove the "cluster:" part. The second parameter of the method is the "Name of the cluster where to save"; it doesn't need any special prefix.
So:
val savedDoc = database.save(doc, "ClassName"+clusterSuffix);
should just work
I found out that using a query works fine (source).
The following code worked on the first try:
val query = '''INSERT INTO ClassName CLUSTER «"ClassName"+clusterSuffix» CONTENT «jsonToSave.toString()»'''
val ODocument savedDoc = database.command(new OCommandSQL(query)).execute();

How to set up Slave Database configuration in vBulletin?

How do you set up a slave database configuration in vBulletin? I set it up like this:
$config['Database']['dbtype'] = 'mysql';
$config['Database']['dbname'] = 'xyz';
$config['Database']['tableprefix'] = 'vbulletin1_';
$config['Database']['technicalemail'] = 'xyz#abc.com';
$config['Database']['force_sql_mode'] = false;
$config['MasterServer']['servername'] = 'xyz';
$config['MasterServer']['port'] = 3306;
$config['MasterServer']['username'] = 'x';
$config['MasterServer']['password'] = 'xxxx';
$config['MasterServer']['usepconnect'] = 0;
$config['SlaveServer']['servername'] = 'abc';
$config['SlaveServer']['port'] = 3306;
$config['SlaveServer']['username'] = 'a';
$config['SlaveServer']['password'] = 'xxxx';
$config['SlaveServer']['usepconnect'] = 0;
This depends only on your slave DB credentials. "Slave DB" means that you have a replicated DB on your host (vBulletin can't create this; it has to be set up separately on your server). So, if you do not have a replicated DB, you should not set up a slave DB.
A Master-Slave setup is for performance. You send write queries to the master server, and most read queries to the slave server. It helps improve performance because write queries lock tables/rows depending on the database table type, and reads do not.