How to set up Slave Database configuration in vBulletin? - vbulletin

How do I set up the slave database configuration in vBulletin? I set it up like this:
$config['Database']['dbtype'] = 'mysql';
$config['Database']['dbname'] = 'xyz';
$config['Database']['tableprefix'] = 'vbulletin1_';
$config['Database']['technicalemail'] = 'xyz@abc.com';
$config['Database']['force_sql_mode'] = false;
$config['MasterServer']['servername'] = 'xyz';
$config['MasterServer']['port'] = 3306;
$config['MasterServer']['username'] = 'x';
$config['MasterServer']['password'] = 'xxxx';
$config['MasterServer']['usepconnect'] = 0;
$config['SlaveServer']['servername'] = 'abc';
$config['SlaveServer']['port'] = 3306;
$config['SlaveServer']['username'] = 'a';
$config['SlaveServer']['password'] = 'xxxx';
$config['SlaveServer']['usepconnect'] = 0;

This depends only on your slave DB credentials. A "slave DB" means that you have a replicated database on your host (vBulletin cannot create the replica; it has to be set up at the database/hosting level). So if you do not have a replicated database, you should not set up a slave DB.
A master-slave setup is for performance. You send write queries to the master server and most read queries to the slave server. It helps improve performance because write queries lock tables or rows (depending on the storage engine), while reads do not.
For more details, see the vBulletin forum.
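As a rough illustration of that read/write split (this is not vBulletin's internal query routing, just a minimal Python sketch assuming the pymysql driver and the made-up credentials from the config above):

import pymysql

# two separate connections, mirroring the MasterServer / SlaveServer blocks above
master = pymysql.connect(host='xyz', port=3306, user='x', password='xxxx', database='xyz')
slave  = pymysql.connect(host='abc', port=3306, user='a', password='xxxx', database='xyz')

def run_query(sql, params=None):
    # reads can be served by the replica; anything that modifies data must go to the master
    conn = slave if sql.lstrip().upper().startswith('SELECT') else master
    with conn.cursor() as cur:
        cur.execute(sql, params)
        conn.commit()
        return cur.fetchall()

run_query("SELECT COUNT(*) FROM vbulletin1_post")                                   # served by the slave
run_query("UPDATE vbulletin1_user SET posts = posts + 1 WHERE userid = %s", (1,))   # served by the master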

Related

"consecutive SC failures" on gem5 simple config script

I am new to gem5 and I ran into a problem while trying to write a simple multi-core system configuration script. My script is based on the example scripts given at: http://learning.gem5.org/book/part1/cache_config.html
When I try to add more than one dcache to the system (one for each core), I get an endless stream of this warning message:
warn: 186707000: context 0: 10000 consecutive SC failures.
with the count incremented by 10000 each time.
I tried looking at gem5's bundled configuration scripts se.py and CacheConfig.py, but I still can't see what I'm missing. I know I could just simulate this configuration using se.py, but I am trying to do it myself as practice and to get a deeper understanding of the gem5 simulator.
Some additional info: I am running gem5 in SE mode and trying to simulate a simple multi-core system using RISC-V cores.
This is my code:
import m5
from m5.objects import *
from Caches import *
#system config
system = System(cpu = [TimingSimpleCPU(cpu_id=i) for i in xrange(4)])
system.clk_domain = SrcClockDomain()
system.clk_domain.clock = '1GHz'
system.clk_domain.voltage_domain = VoltageDomain()
system.mem_mode = 'timing'
system.mem_ranges = [AddrRange('512MB')]
system.cpu_voltage_domain = VoltageDomain()
system.cpu_clk_domain = SrcClockDomain(clock = '1GHz',voltage_domain= system.cpu_voltage_domain)
system.membus = SystemXBar()
system.l2bus = L2XBar()
multiprocess = [Process(cmd = 'tests/test-progs/hello/bin/riscv/linux/hello', pid = 100 + i) for i in xrange(4)]
#cpu config
for i in xrange(4):
    system.cpu[i].icache = L1ICache()
    system.cpu[i].dcache = L1DCache()
    system.cpu[i].icache_port = system.cpu[i].icache.cpu_side
    system.cpu[i].dcache_port = system.cpu[i].dcache.cpu_side
    system.cpu[i].icache.mem_side = system.l2bus.slave
    system.cpu[i].dcache.mem_side = system.l2bus.slave
    system.cpu[i].createInterruptController()
    system.cpu[i].workload = multiprocess[i]
    system.cpu[i].createThreads()
system.l2cache = L2Cache()
system.l2cache.cpu_side = system.l2bus.master
system.l2cache.mem_side = system.membus.slave
system.system_port = system.membus.slave
system.mem_ctrl = DDR3_1600_8x8()
system.mem_ctrl.range = system.mem_ranges[0]
system.mem_ctrl.port = system.membus.master
root = Root(full_system = False , system = system)
m5.instantiate()
print ("Begining Simulation!")
exit_event = m5.simulate()
print('Exiting @ tick {} because {}'.format(m5.curTick(), exit_event.getCause()))

Connection of PostgreSQL database with corda

How do I connect a PostgreSQL database (managed via pgAdmin) to Corda instead of the default H2 database?
What changes need to be made in the node.conf file before the nodes are brought up?
As mentioned in the comments, all you need to do is add the following to the node.conf file after you have generated your node:
dataSourceProperties = {
    dataSourceClassName = "org.postgresql.ds.PGSimpleDataSource"
    dataSource.url = "jdbc:postgresql://[HOST]:[PORT]/postgres"
    dataSource.user = [USER]
    dataSource.password = [PASSWORD]
}
database = {
    transactionIsolationLevel = READ_COMMITTED
}
And please remember to wrap all the string values in double quotes (e.g. "Username", "Password").
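If the node then fails to start, it can be worth confirming outside Corda that the host, port and credentials in dataSource.url are actually reachable. A quick stand-alone check, sketched here in Python with the psycopg2 driver (the driver choice and all placeholder values are assumptions):

import psycopg2

# use the same host/port/database/user/password that you put into node.conf
conn = psycopg2.connect(host="localhost", port=5432, dbname="postgres",
                        user="corda_user", password="secret")
with conn.cursor() as cur:
    cur.execute("SELECT version()")
    print(cur.fetchone()[0])  # if this prints, the credentials and network path are fine
conn.close()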

Database for numerical data from physics simulation

I work in theoretical physics and I do a lot of computer simulations. An important part of my duty is the analysis of the results. I run simulations and store the numerical results in files with simple names. Typically I end up with a lot of data files with very similar names, and after a while I no longer remember which parameters a given file corresponds to. I was thinking that maybe there is a better way to store numerical results from a simulation, e.g. some database (SQL, MongoDB, etc.) where I could put comments about the parameters of the program, names, dates, etc. - a sort of library of numerical data, so that I have everything in one place, well organized. Do you know of anything like this? How do you store your numerical data from computer simulations?
More details
A typical procedure looks like this. Let's say we want to simulate the time evolution of the three-body problem. We have three bodies of different masses interacting through Newtonian forces. I want to test how these objects move in space depending on the relative mass values and the initial positions - 6 parameters. I run the simulation for one choice of parameters and save it in a file such as three_body_m1=0p1_m2=0p3_(the rest).dat - all double precision, in total 1+3*3 (3D) columns of data in one file. Then I launch gnuplot, Python, etc. and visualize the results. In principle there is no relation between the data from different simulations, but I can use them to make comparison plots.
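A minimal sketch of such a catalogue, using Python's built-in sqlite3 module; the table layout, parameter names and file names are assumptions based on the three-body example above:

import sqlite3, time

con = sqlite3.connect("simulations.db")
con.execute("""CREATE TABLE IF NOT EXISTS runs (
                   id INTEGER PRIMARY KEY,
                   m1 REAL, m2 REAL, m3 REAL,   -- relative masses
                   datafile TEXT,               -- path to the .dat file with the trajectories
                   comment TEXT,
                   created TEXT)""")

def register_run(m1, m2, m3, datafile, comment=""):
    # record one simulation run: its parameters, where the raw data lives, and a free-form note
    con.execute("INSERT INTO runs (m1, m2, m3, datafile, comment, created) VALUES (?, ?, ?, ?, ?, ?)",
                (m1, m2, m3, datafile, comment, time.strftime("%Y-%m-%d %H:%M")))
    con.commit()

register_run(0.1, 0.3, 1.0, "three_body_m1=0p1_m2=0p3_m3=1p0.dat", "close-encounter test")

# later: find every run with m1 = 0.1 and open the corresponding files for a comparison plot
for datafile, comment in con.execute("SELECT datafile, comment FROM runs WHERE m1 = ?", (0.1,)):
    print(datafile, comment)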
Within the same Node.js context, you can:
Stream the big xyz data file to the server using the socket.io-stream + fs modules and save the filename + parameters to a database using the mongodb module (at most a page of coding, more for complex server-to-server talking).
If the data fits in RAM and you don't have to save it immediately, you can use the redis module to send everything to the server cache easily (as key-value pairs such as data -> xyzData, parameters -> simulationParameters and user -> name_surname) and read it back at high speed. If other processes on the server need the data as a file, you can stream to a ramdisk instead and get most of the RAM bandwidth as a file cache (needs more RAM of course, but it's fast).
mongodb is slow (even with optimizations) for saving millions of particles' xyz data, but it is the easiest and quickest to install for saving and sharing parameters.
Using all of them together could be better:
Saving: stream the file to physical disk using socket.io-stream and fs; send the parameters to mongodb.
Loading: check redis to see whether the user is registered and whether the data is in the cache; if yes, get it from there; if not, stream it from physical disk and also save some of it to redis at the same time (a sketch of this load path follows below).
Editing: check whether a cached copy exists; if yes, edit it (another server-side process can flush that cache to physical disk later); if not, update the physical disk directly.
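A minimal sketch of that cache-then-disk load path, written in Python here just for brevity, with a plain dict standing in for redis and a made-up file name:

# dict standing in for the redis cache: run name -> raw bytes
cache = {}

# write a tiny dummy data file so the sketch is self-contained
with open("three_body_demo.dat", "w") as f:
    f.write("0.0 1.0 2.0 3.0\n")

def load(run_name, datafile):
    if run_name in cache:                 # cache hit: serve from RAM
        return cache[run_name]
    with open(datafile, "rb") as f:       # cache miss: stream from physical disk
        data = f.read()
    cache[run_name] = data                # populate the cache on the way out
    return data

load("demo_run", "three_body_demo.dat")   # first call reads the disk
load("demo_run", "three_body_demo.dat")   # second call is served from the cache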
The communication scheme could be:
the data server talks to the cache server to see whether there are any pending writes/reads/edits, and consumes jobs from there.
the compute server talks to the cache server to produce read/write/edit jobs or to consume compute jobs.
clients can talk to the cache server for reading only.
admins can also place their own data, produce compute jobs, or read data.
the compute server, data server and cache server can easily live on the same computer or be moved to other computers, thanks to Node.js and its countless modules such as redis, socket.io-stream, fs, ejs, express (for the clients, for example), etc.
a cache server can offload some data to another cache server and keep a redirection (or some mapping of the data) to it.
a cache server can talk to N data servers and M compute servers at the same time, as long as the RAM holds out.
Slow network? You can use gzip compression to shrink the data on the fly with just 3-5 lines of extra code at both ends (a short sketch of the idea follows below).
On a tight budget?
Node.js runs on a Raspberry Pi (as a data server, maybe?).
An Nvidia GTX 660 can work with an Intel Galileo (compute server?) using Node.js with some extra native modules for OpenCL (could be hard to implement; connecting and powering the GPU and the Galileo may not be easy either, but it should be much faster than a cluster of Raspberry Pi boards for fp32 number crunching).
You can bypass the cache layer; RAM is expensive for now.
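The answer mentions gzip only in passing; as a rough illustration (in Python's standard gzip module rather than Node's zlib, and with made-up file names), on-the-fly compression and decompression really are only a few lines per side:

import gzip, shutil

# dummy data file so the sketch runs on its own
with open("xyz_data.dat", "w") as f:
    f.write("0.0 1.0 2.0 3.0\n" * 1000)

# sending side: compress on the fly before the bytes go over the wire
with open("xyz_data.dat", "rb") as src, gzip.open("xyz_data.dat.gz", "wb") as dst:
    shutil.copyfileobj(src, dst)

# receiving side: decompress back into a plain file
with gzip.open("xyz_data.dat.gz", "rb") as src, open("xyz_data_restored.dat", "wb") as dst:
    shutil.copyfileobj(src, dst)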
              data server cluster
                     \
                      \
                       \                        client
                        \               client    /
                         \                  \    /
                          \                  \  /
          mainframe cache and database server ----- compute cluster
                         |                     \
                         |                      \
                support cache server           admin
A very simple example that sends some files to another computer (or the same one):
var pipeline_n = 8;
var fs = require("fs");
// server part accepting files
{
    var io = require('socket.io').listen(80);
    var ss = require('socket.io-stream');
    var path = require('path');
    var ctr = 0;
    var ctr2 = 0;
    io.of('/user').on('connection', function (socket) {
        var z1 = new Date();
        for (var i = 0; i < pipeline_n; i++) {
            ss(socket).on('data' + i.toString(), function (stream, data) {
                var t1 = new Date();
                stream.pipe(fs.createWriteStream("m://bench_server" + ctr + ".txt"));
                ctr++;
                stream.on("finish", function (p) {
                    var len = stream._readableState.pipes.bytesWritten;
                    var t2 = new Date();
                    ctr2++;
                    if (ctr2 == pipeline_n) {
                        var z2 = new Date();
                        console.log(len * pipeline_n);
                        console.log((z2 - z1));
                        console.log("throughput: " + ((len * pipeline_n) / ((z2 - z1) / 1000.0)) / (1024 * 1024) + " MB/s");
                    }
                });
            });
        }
    });
}
// client or another server part sending a file
// (you can change it to do parts of the same file instead of the same file n times),
// just dummy file-sending code to stress the other server
for (var i = 0; i < pipeline_n; i++) {
    var io = require('socket.io-client');
    var ss = require('socket.io-stream');
    var socket = io.connect('http://127.0.0.1/user');
    var stream = ss.createStream();
    var filename = 'm://bench.txt'; // ramdrive or cluster of hdd raid
    ss(socket).emit('data' + i.toString(), stream, { name: filename });
    fs.createReadStream(filename).pipe(stream);
}
Here is a test of insert vs. bulk-insert performance in mongodb (this could be the wrong way to benchmark, but it is simple; just uncomment the part you want to benchmark):
var mongodb = require('mongodb');
var client = mongodb.MongoClient;
var url = 'mongodb://localhost:2019/evdb2';
client.connect(url, function (err, db) {
    if (err) {
        console.log('fail:', err);
    } else {
        console.log('success:', url);
        var collection = db.collection('tablo');
        var bulk = collection.initializeUnorderedBulkOp();
        db.close(); // comment this out while running one of the benchmarks below

        //benchmark insert
        //var t = 0;
        //t = new Date();
        //var ctr = 0;
        //for (var i = 0; i < 1024 * 64; i++)
        //{
        //    collection.insert({ x: i + 1, y: i, z: i * 10 }, function (e, r) {
        //        ctr++;
        //        if (ctr == 1024 * 64)
        //        {
        //            var t2 = 0;
        //            db.close();
        //            t2 = new Date();
        //            console.log("insert-64k: " + 1000.0 / ((t2.getTime() - t.getTime()) / (1024 * 64)) + " insert/s");
        //        }
        //    });
        //}

        // benchmark bulk insert
        //var t3 = new Date();
        //for (var i = 0; i < 1024 * 64; i++)
        //{
        //    bulk.insert({ x: i + 1, y: i, z: i * 10 });
        //}
        //bulk.execute();
        //var t4 = new Date();
        //console.log("bulk-insert-64k: " + 1000.0 / ((t4.getTime() - t3.getTime()) / (1024 * 64)) + " insert/s");
        // db.close();
    }
});
Be sure to set up the mongodb and/or redis servers before running this. Also run "npm install module_name" for the necessary modules (here socket.io, socket.io-client, socket.io-stream and mongodb; fs and path are built in) from the Node.js command prompt.

Slick 3.0.1 limit connections to db

I'm trying to do something as simple as limiting the number of connections that Slick 3.0.1 opens to a Postgres DB.
The configuration below doesn't work: after a while the number of connections climbs to 18, for example.
source-db = {
    dataSourceClass = "org.postgresql.ds.PGSimpleDataSource"
    properties = {
        url = "jdbc:postgresql://..."
        user = "..."
        password = "..."
    }
    numThreads = 1
    maxConnections = 5
}
If you are in a Play application, you are probably using HikariCP. To change the settings, you need to add something like this to the configuration:
hikaricp {
    minimumIdle = 2
    maximumPoolSize = 5
}

Logfile Class error "You cannot execute this operation since the object has not been created."

We wanted to automate a few management operations for new SQL Server installations, so we started looking into the
LogFile Class
But this class doesn't let us run the Alter() method to change the log file location. It also doesn't let us add a new file and drop an existing one. Does anyone know the internals of this class :) ?
NOTE: I know we can run a SQL query with ALTER DATABASE MODIFY FILE, copy the files, and restart the database. This question is specific to this class.
I also tried to alter an existing file instead of creating a new one and dropping the existing one, and it throws the same error.
ERROR
Drop failed for LogFile 'DBAUtility_log'.
You cannot execute this operation since the object has not been created.
class Program
{
    static void Main(string[] args)
    {
        Server srv = new Server("xx");
        Database db = default(Database);
        db = srv.Databases["DBAUtility"];
        //LogFile LF = new LogFile(srv.Databases.ItemById(0),'DBAUtility_log');
        //Console.WriteLine("DB:", srv.Databases.Count());
        Console.WriteLine(srv.Name);
        Console.WriteLine("DBName" + srv.Databases.ItemById(5).ToString());

        LogFile lf = new LogFile();
        Console.WriteLine("LF:" + lf.ToString());
        lf.Parent = db;
        lf.Name = "DBAUtility_NEWLOG";
        lf.FileName = "M:\\DBFiles\\SQLlog\\1\\DBAUtility_1.ldf";
        lf.Create();

        LogFile lf2 = new LogFile();
        lf2.Parent = db;
        lf2.Name = "DBAUtility_log";
        lf2.FileName = "C:\\Install\\DBAUtility_1.ldf";
        lf2.Drop(); // ERROR HERE
    }
}
# Create a log file for this database ($db, $logfilesize and $logfilegrowth are assumed to be set earlier in the script)
$LogFileName = $db.name + '_Log'
$LogFile = New-Object ('Microsoft.SqlServer.Management.SMO.LogFile') ($db, $LogFileName)
$db.LogFiles.Add($LogFile)
$LogFile.FileName = $LogFileName + '.ldf'
$LogFile.Size = $logfilesize * 1024
$LogFile.GrowthType = 'KB'
$LogFile.Growth = $logfilegrowth * 1024
$LogFile.MaxSize = -1
$db.Create()