DBIx::Class::Manual::Intro
suggests connecting to the database as follows:
my $schema = MyApp::Schema->connect(...);
explicitly providing connection details such as the password.
I want to connect to the same database from multiple different scripts, and it would be unwise to hard-code the same connection parameters into each program separately.
What is the "official" way to create a connection method with fixed connection details?
I realize that I can write something like this:
package MyApp::Schema;
use base qw/DBIx::Class::Schema/;
sub my_connect {
    # call the inherited connect() with the fixed connection details
    $_[0]->SUPER::connect(...);
}
1;
Is this approach recommended?
I realize that providing different connection details may be useful for testing scripts, but in reality we do not yet use testing scripts, so this is currently irrelevant for our team.
Put your connection details in a config file, then create a utility that reads the config and returns the connection, either as a method like you showed or as a factory-type function. Make the config dependent on the environment and you'll get testing capabilities for free; a sketch follows below.
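A minimal sketch of the config-file approach, assuming Config::Any is available; the file path, environment variable, and config keys here are illustrative, not part of DBIx::Class:

package MyApp::Schema;
use strict;
use warnings;
use base qw/DBIx::Class::Schema/;
use Config::Any;

# Read dsn/user/password from a per-environment config file and
# return a connected schema, so no script hard-codes credentials.
sub connect_from_config {
    my $class = shift;
    my $file  = $ENV{MYAPP_DB_CONFIG} || '/etc/myapp/db.conf';
    my $cfg   = Config::Any->load_files({ files => [$file], use_ext => 1 })
                           ->[0]{$file};
    return $class->connect( @{$cfg}{qw/dsn user password/} );
}

1;

Each script then calls MyApp::Schema->connect_from_config, and pointing MYAPP_DB_CONFIG at a different file is all a future test setup would need.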
This question is about pg-promise and its recommended usage pattern, and is based on the following assumption:
It does not make sense to create more than a single pgp instance if they are connecting to the same DB (this is also enforced by the helpful warning message "Creating a duplicate database object for the same connection.").
Given:
I have 2 individual packages which need a DB connection. Currently they take a connection string in their constructors from outside and create the connection object inside themselves, which leads to the duplicate-connection-object warning. That is fair, as they both talk to the same DB, and there is room for optimisation here (since I am in control of those packages).
Then: To prevent this, I thought of implementing dependency injection, for which I pass a resolve function into each library's constructor that gives it the DB connection object.
Issue: There are some settings which live at the top level, like type parsers, helpers, and transaction modes, which may differ for each of these packages. What is the recommendation for such settings, or is there a better pattern to address these issues?
E.g.:
const pg = require('pg-promise');
const instance = pg({ schema: 'public' });

// UTC date parser that one library requires and the other doesn't:
instance.pg.types.setTypeParser(1114, str => str);

const constring = "";
const resolveFunctionPackage1 = () => instance(constring);
const resolveFunctionPackage2 = () => instance(constring);
To sum up: What is the best way to implement dependency injection for pg-promise?
I have 2 individual packages which need DB connection, currently they take connection string in constructor from outside and create connection object inside them
That is a serious design flaw, and it is never going to work well. Any independent package that uses a database must be able to reuse an existing connection pool, which is the most valuable resource when it comes to connection usage. Blindly duplicating a connection pool inside an independent module will use up the available physical connections and hinder the performance of all other modules that need to use the same physical connections.
If a third-party library supports pg-promise, it should be able to accept an instantiated db object for accessing the database.
And if the third-party library supports only the base driver, it should at least accept an instantiated Pool object. In pg-promise, the db object exposes the underlying Pool object via db.$pool.
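A minimal sketch of such injection; the module, its method, and the connection details are hypothetical, not part of pg-promise:

// reporting.js - a hypothetical reusable module that accepts an
// already-instantiated pg-promise db object instead of a connection string.
class Reporting {
    constructor(db) {
        this.db = db; // reuses the application's connection pool
    }
    countUsers() {
        return this.db.one('SELECT count(*) FROM users', [], r => +r.count);
    }
}
module.exports = Reporting;

// app.js - the client application owns the single db object
// (connection string is a placeholder):
const pgp = require('pg-promise')();
const db = pgp('postgres://user:pass@localhost:5432/mydb');
const reporting = new Reporting(db);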
What happens when they want to set conflicting type parsers?
There will be a conflict, because pg.types is a singleton from the underlying driver, so it can only be configured in one way. It is an unfortunate limitation.
The only way to avoid it is for reusable modules to never re-configure the parsers; that should only be done within the actual client application.
UPDATE
Strictly speaking, one should avoid splitting the database-access layer of an application into multiple modules; a number of problems can follow from that.
But specifically for the separation of type parsers, the library supports setting custom type parsers on the pool level (see the pg-promise documentation for an example). Note that this update is just for TypeScript, i.e. in JavaScript clients it has been working for a while.
So you can still have your separate module create its own db object, but I would advise that you then limit its connection pool size to the minimum, like 1:
const moduleDb = pgp({
    // ...connection details...
    max: 1, // limit the pool size to just 1 connection
    types: /* your custom type parsers */
});
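A sketch of what such a pool-level types object could look like, assuming the node-postgres convention that it exposes a getTypeParser function; the OID is taken from the question's example, and the names here are illustrative:

const pgTypes = require('pg-types'); // node-postgres default parsers

// Module-local parsers: keep timestamps (OID 1114) as raw strings for
// this module only, deferring all other types to the defaults.
const moduleTypes = {
    getTypeParser: (oid, format) =>
        oid === 1114 ? value => value : pgTypes.getTypeParser(oid, format)
};

const moduleDb = pgp({
    // ...connection details...
    max: 1,
    types: moduleTypes
});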
I am new to SSIS. I have created variables for the connection strings (both source and destination). While generating the config file, which property do I need to select? Could you please help me with this?
It's not necessary to create variables for a connection string.
There are a few things you will need to provide for us to give you an exact answer:
The type of database you are connecting to.
What type of authentication you use to connect to it.
As the connection manager dialog (pictured in the original answer) shows, when setting up a connection manager for an OLE DB connection you simply need to provide the server name and then the type of authentication.
If the connection is successful, you should be able to select the database you wish to connect to. You can also use Test Connection to make sure the connection is working.
Let me know if you have any other issues.
Thanks, Gav
I have a small application written in Go that connects to a PostgreSQL database on another server, utilizing database/sql and lib/pq. When I start the application, it goes through and establishes that all the database tables and indexes exist. As part of this process, it issues a SET search_path TO preferredschema,public command. Then, for the remainder of the database access, I do not have to specify the schema.
From what I've determined from debugging it, when database/sql reconnects (no network is perfect), the application begins failing because the search path isn't set. Is there a way to specify commands that should be executed when it reconnects? I've searched for an event that might be able to be leveraged, but have come up empty so far.
Thanks!
From the fine manual:
Connection String Parameters
[...]
In addition to the parameters listed above, any run-time parameter that can be set at backend start time can be set in the connection string. For more information, see http://www.postgresql.org/docs/current/static/runtime-config.html.
Then if we go over to the PostgreSQL documentation, you'll see various ways of setting connection parameters such as config files, SET commands, command line switches, ...
While the desired behavior isn't exactly spelled out, it is suggested that you can put anything you'd SET right into the connection string:
connStr := "dbname=... user=... search_path=preferredschema,public"
// -----------------------------^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
and since that's all there is for configuring the connection, it should be used for every connection (including reconnects).
The Connection String Parameters section of the pq documentation also tells you how to quote and escape things if whatever preferredschema really is needs it or if you have to grab a value at runtime and add it to the connection string.
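A minimal sketch of this in context; the host, database name, and credentials are placeholders:

package main

import (
    "database/sql"
    "fmt"
    "log"

    _ "github.com/lib/pq" // registers the "postgres" driver
)

func main() {
    // search_path is part of the connection string, so every new
    // physical connection in the pool (including reconnects) gets it.
    connStr := "host=db-server dbname=mydb user=myuser password=secret " +
        "sslmode=disable search_path=preferredschema,public"

    db, err := sql.Open("postgres", connStr)
    if err != nil {
        log.Fatal(err)
    }
    defer db.Close()

    var path string
    if err := db.QueryRow("SHOW search_path").Scan(&path); err != nil {
        log.Fatal(err)
    }
    fmt.Println("search_path =", path) // preferredschema, public
}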
We have a postgres database which a lot of scripts connect to. Crucially, there is not a username per-script; there are a (small) number of usernames which are shared around the place.
When troubleshooting or optimising performance, it would be very useful to know which server-side SQL process corresponds (or corresponded, past tense) to which script.
I am thinking of something like:
host=db-server dbname=whatever clientID=script1.py
I suspect the answer is "no", but my google-fu is weak.
You can explore using the application_name parameter; it shows up in pg_stat_activity and, depending on your logging configuration, can be written to the server log.
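A minimal sketch, reusing the asker's hypothetical connection line but with the real parameter name:

host=db-server dbname=whatever application_name=script1.py

Each backend then identifies itself, so you can map server processes back to scripts:

SELECT pid, application_name, state FROM pg_stat_activity;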
Is it possible to set up HSQLDB in such a way that the files with the db information are written into memory instead of to actual files? I want to use hsqldb to export some data structures together with hibernate mappings. It is, however, not possible to write temporary files, so I need to generate the files in memory and return a stream with their contents as a response.
Setting hsqldb to use nio seems not to be a solution, because there is no way to get hold of those files before they get written onto the filesystem.
What I'm thinking of is a protocol handler for hsqldb, but I didn't find a suitable solution yet.
To describe it in other words: a hack solution would be to pass hsqldb a stream or several streams. It would then write data into those streams during its operation. After all data is written, the user of the db could then use those streams to send it back over the network.
Yes, of course; we use it all the time for integration testing.
Use this URL: jdbc:hsqldb:mem:aname
See the HSQLDB documentation on memory-only databases for more details.
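A minimal sketch of using such an in-memory database; the table name and contents are arbitrary:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class InMemoryHsqldbDemo {
    public static void main(String[] args) throws Exception {
        // The mem: protocol keeps the whole database in memory; nothing
        // is written to the filesystem. (Older HSQLDB versions may need
        // Class.forName("org.hsqldb.jdbcDriver") first.)
        try (Connection conn = DriverManager.getConnection(
                     "jdbc:hsqldb:mem:aname", "sa", "");
             Statement st = conn.createStatement()) {
            st.execute("CREATE TABLE t (id INT PRIMARY KEY, name VARCHAR(50))");
            st.execute("INSERT INTO t VALUES (1, 'hello')");
        }
    }
}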
DbUnit offers a handy database dump method as part of its package:

// database connection
Class.forName("org.hsqldb.jdbcDriver");
Connection jdbcConnection = DriverManager.getConnection(
        "jdbc:hsqldb:sample", "sa", "");
IDatabaseConnection connection = new DatabaseConnection(jdbcConnection);

// full database export
IDataSet fullDataSet = connection.createDataSet();
FlatXmlDataSet.write(fullDataSet, new FileOutputStream("full.xml"));
See the DbUnit FAQ for more details. Of course there are routines to restore the data, as that is actually the purpose of the package: preparing a test database for integration testing. Usually we do this with an annotation, but you'll have to use the API for that.
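Since the question asks for a stream rather than a file, note that FlatXmlDataSet.write accepts any OutputStream, so the export can stay entirely in memory:

// Export into memory instead of a file; the resulting bytes can be
// sent back over the network as the response body.
ByteArrayOutputStream out = new ByteArrayOutputStream();
FlatXmlDataSet.write(fullDataSet, out);
byte[] xmlBytes = out.toByteArray();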