I have a small application written in Go that connects to a PostgreSQL database on another server, utilizing database/sql and lib/pq. When I start the application, it goes through and establishes that all the database tables and indexes exist. As part of this process, it issues a SET search_path TO preferredschema,public command. Then, for the remainder of the database access, I do not have to specify the schema.
From what I've determined from debugging it, when database/sql reconnects (no network is perfect), the application begins failing because the search path isn't set. Is there a way to specify commands that should be executed when it reconnects? I've searched for an event that might be able to be leveraged, but have come up empty so far.
Thanks!
From the fine manual:
Connection String Parameters
[...]
In addition to the parameters listed above, any run-time parameter that can be set at backend start time can be set in the connection string. For more information, see http://www.postgresql.org/docs/current/static/runtime-config.html.
Then if we go over to the PostgreSQL documentation, you'll see various ways of setting connection parameters such as config files, SET commands, command line switches, ...
While the desired behavior isn't exactly spelled out, it is suggested that you can put anything you'd SET right into the connection string:
connStr := "dbname=... user=... search_path=preferredschema,public"
// -----------------------------^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
and since that's all there is for configuring the connection, it should be used for every connection (including reconnects).
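For instance, a minimal sketch with database/sql and lib/pq might look like this (the host, user, database, and table names are placeholders):

package main

import (
	"database/sql"
	"log"

	_ "github.com/lib/pq" // registers the "postgres" driver
)

func main() {
	// search_path travels with the connection string, so every connection
	// the pool opens (including automatic reconnects) gets it applied.
	connStr := "host=db-server user=app dbname=appdb sslmode=disable " +
		"search_path=preferredschema,public"

	db, err := sql.Open("postgres", connStr)
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// Tables can now be referenced without a schema qualifier.
	var n int
	if err := db.QueryRow("SELECT count(*) FROM sometable").Scan(&n); err != nil {
		log.Fatal(err)
	}
	log.Println("rows:", n)
}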
The Connection String Parameters section of the pq documentation also tells you how to quote and escape things if whatever preferredschema really is needs it or if you have to grab a value at runtime and add it to the connection string.
Related
I am new to SSIS. I have created variables for the connection strings (both source and destination). While generating the config file, which property do I need to select? Could you please help me with this?
It's not necessary to create variables for a connection string.
There are a few things you will need to provide so that we can give you an exact answer:
The type of database you are connecting to.
What type of authentication you use to connect to it.
If you look at the image below, when setting up a connection manager for an OLE DB connection you simply need to provide the server name and then the type of authentication.
If the connection is successful you should be able to select the database you wish to connect to. You can also test the connection to make sure it is working successfully.
Let me know if you have any other issues.
Thanks
Gav
DBIx::Class::Manual::Intro suggests connecting to the database as follows,
my $schema = MyApp::Schema->connect(...)
explicitly providing connection details such as the password.
I want to connect to the same database from multiple different scripts, and it would be unwise to code the same connection parameters into each of the programs separately.
What is the "official" way to create a connection method with fixed connection details?
I realize that I can write something like this
package MyApp::Schema;
use base qw/DBIx::Class::Schema/;
sub my_connect {
    my $class = shift;
    return $class->SUPER::connect(...);
}
1;
Is this approach recommended?
I realize that providing different connection details may be useful for testing scripts, but in reality we do not yet use testing scripts, so this is currently irrelevant for our team.
Put your connection details in a config file, then create a utility method that reads the config and returns the connection, either in the way you showed or as a factory-type function. Make the config dependent on the environment and you'll get testing capabilities for free.
I have my instance running and am able to connect remotely; however, I'm stuck on where to set this parameter to false, since the documentation states that the default is true:
failIndexKeyTooLong
Setting the 'failIndexKeyTooLong' parameter is a three-step process:
1. Go to the command console under the Tools menu item for the admin database of your database instance. This command will only work on the admin database, pictured here:
2. Once there, pick any command from the list and it will give you a short JSON text for that command.
3. Erase the command they provide (I chose 'ping') and enter the following JSON:
{
"setParameter" : 1,
"failIndexKeyTooLong" : false
}
Here is an example to help:
Note for the free plan at MongoLab: this will NOT work if you have a free plan; it only works with paid plans. If you have the free plan, you will not even see the admin database. HOWEVER, I contacted MongoLab and here is what they suggested:
Hello,
First of all, welcome to MongoLab. We'd be happy to help.
The failIndexKeyTooLong=false option is only necessary when your data
include indexed values that exceed the maximum key value length of
1024 bytes. This only occurs when Parse auto-indexes certain
collections, which can actually lead to incorrect query results. Parse
has updated their migration guide to include a bit more information
about this, here:
https://parse.com/docs/server/guide#database-why-do-i-need-to-set-failindexkeytoolong-false-
Chances are high that your migration will succeed without this
parameter being set. Can you please give that a try? If for any reason
it does fail, please let us know and we can help you on potential next
steps.
Our Dedicated and Shared Cluster plans
(https://mongolab.com/plans/pricing/) do provide the ability to toggle
this option, but because our free Sandbox plans are running on shared
server processes, with other Sandbox users, this parameter is not
configurable.
When launching your MongoDB server, you can set this parameter to false:
mongod --setParameter failIndexKeyTooLong=false
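If you run your own server and would rather flip the parameter from application code instead of the command line or a web console, here is a rough sketch using the official MongoDB Go driver (the connection URI is a placeholder, and using the Go driver at all is my assumption, not something the answers above require):

package main

import (
	"context"
	"log"

	"go.mongodb.org/mongo-driver/bson"
	"go.mongodb.org/mongo-driver/mongo"
	"go.mongodb.org/mongo-driver/mongo/options"
)

func main() {
	ctx := context.Background()

	// Placeholder URI; point this at your own mongod instance.
	client, err := mongo.Connect(ctx, options.Client().ApplyURI("mongodb://localhost:27017"))
	if err != nil {
		log.Fatal(err)
	}
	defer client.Disconnect(ctx)

	// setParameter has to be run against the admin database.
	cmd := bson.D{
		{Key: "setParameter", Value: 1},
		{Key: "failIndexKeyTooLong", Value: false},
	}
	var result bson.M
	if err := client.Database("admin").RunCommand(ctx, cmd).Decode(&result); err != nil {
		log.Fatal(err)
	}
	log.Println(result)
}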
I have written an article that helps you set up Parse Server and all its dependencies on your own server:
https://medium.com/#jcminarro/run-parse-server-on-your-own-server-using-digitalocean-b2a7d66e1205
Is there a command to know if the kdb server is busy running a query? Even better, is there a way to know the percentage completion of the query being run?
So far I've been looking at the top screen on Linux to know which server to use...
Unfortunately, not directly. The reason is the single-threaded nature of a KDB process. In practice, this is easily worked around by adding some basic logging to your server: whenever a query comes in, just log to a file the time the query came in and the time the result was returned to the user.
Take a look at the .z.pg and .z.ps functions, which are called to handle synchronous and asynchronous requests, respectively. By default they are just set to "value", which means evaluate the string and return the result. Just replace this with your own function to log events to a file or a log server.
Besides the above solution, a simpler approach is to keep checking the port.
Normally all queries run against a port, and a kdb server can listen on multiple ports for different purposes.
Details:
Use the code below to probe a port; if the port is busy, a null res is returned, and you can then kill the process on that port, restart it, or do whatever the requirement is.
The code attempts to open a handle to the port with a 3000 ms timeout; a kdb process that is blocked on a long-running query will not respond within the timeout, so null is returned.
.server.testQuery:{[inPort]
  / attempt to open a handle to the port, waiting at most 3000 ms;
  / protected evaluation returns 0N if the open fails or times out
  res:@[{hopen(x;3000)};`$"::",string inPort;0N];
  / close the handle again if the open succeeded
  if[not null res;hclose res];
  :res
 };
We have a postgres database which a lot of scripts connect to. Crucially, there is not a username per-script; there are a (small) number of usernames which are shared around the place.
When doing troubleshooting or performance optimising, it would be very useful to know which server SQL process corresponds (or corresponded, past-tense) to which script.
I am thinking of something like:
host=db-server dbname=whatever clientID=script1.py
I suspect the answer is "no", but my google-fu is weak.
You can explore using the "application_name" parameter. Each script sets its own value when it connects, and that value then shows up in pg_stat_activity (and can be written to the server log via log_line_prefix), so you can match server processes back to scripts.
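For example, a minimal Go sketch with lib/pq (connection details are placeholders; each script would pass its own name):

package main

import (
	"database/sql"
	"log"

	_ "github.com/lib/pq"
)

func main() {
	// application_name is an ordinary run-time parameter, so it can go
	// straight into the connection string; sessions opened by this script
	// will then carry the name in pg_stat_activity. Values are placeholders.
	connStr := "host=db-server dbname=whatever user=shared_user sslmode=disable " +
		"application_name=script1.py"

	db, err := sql.Open("postgres", connStr)
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	if err := db.Ping(); err != nil {
		log.Fatal(err)
	}
	// On the server: SELECT pid, application_name, state FROM pg_stat_activity;
}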