I currently use the following code to connect to a database in my Perl script:
my $dsn = 'dbi:ODBC:MYDATABASE';
my $database = 'uat_env';
my $user = 'user';
my $auth = 'password';
my $dbh = DBI->connect($dsn, $user, $auth, {
RaiseError => 1,
AutoCommit => 1
}) or die("Couldn't connect to database");
$dbh->do('use '.$database);
Now the port has changed from 1433 to 40450.
I am having trouble changing the port in the DSN. I thought this change would work but I am receiving a "DSN not found" error:
my $dsn = 'dbi:ODBC:MYDATABASE;Port=40450';
Any idea why this isn't working?
There are two formats for a DBI data source string for ODBC. You can say either
dbi:ODBC:DSN=MYDATABASE
or you can abbreviate that to
dbi:ODBC:MYDATABASE
which is what you have. If you use just the DSN then you can't add any more parameters, so your dbi:ODBC:MYDATABASE;Port=40450 is looking for DSN MYDATABASE;Port=40450 which clearly doesn't exist
The proper way to do this is to set up a new DSN which has a copy of all the parameters of MYDATABASE, but with a different port name
At a guess, I would say you may be able to write
dbi:ODBC:DSN=MYDATABASE;Port=40450
but I can't be sure and I have no way of testing
If your requirements are simple then you can supply all of the parameters instead of a DSN, like this:
dbi:ODBC:Driver={SQL Server};Server=11.22.33.44;Port=40450
but you will have to supply the correct driver if you aren't using a SQL Server ODBC connection, and other parameters may be necessary.
You should start by examining the values in the MYDATABASE DSN and go from there.
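If you do go the DSN-less route, a minimal sketch of the resulting connect call might look like this (untested; the driver name, server address and port are the placeholder values from above and must match your actual environment):

use strict;
use warnings;
use DBI;

# DSN-less connection string: driver, server and port are placeholders.
my $dsn  = 'dbi:ODBC:Driver={SQL Server};Server=11.22.33.44;Port=40450';
my $user = 'user';
my $auth = 'password';

my $dbh = DBI->connect($dsn, $user, $auth, {
    RaiseError => 1,
    AutoCommit => 1,
}) or die "Couldn't connect to database: $DBI::errstr";

# Select the working database as in the original code.
$dbh->do('use uat_env');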
How can I change settings in pg_hba.conf and postgresql.conf, either from the command line or programmatically (especially from fabric or fabtools)?
I already found set_config, but that does not seem to work for parameters which require a server restart. The parameters to change are listen_addresses in postgresql.conf and a new line in pg_hba.conf, so that connections from our sub-network will be accepted.
This is needed to write deployment scripts using fabric. It is not an option to copy template-files which then override the existing *.conf files, because the database server might be shared with other applications which bring their own configuration parameters. Thus, the existing configuration must be altered, not replaced.
Here is the currently working solution, incorporating the hint from a_horse_with_no_name. This is a snippet from our fabfile.py (it uses require from fabtools and runs against Ubuntu):
import tempfile

from fabric.api import env, get, put, sudo
from fabtools import require

db_name = env.variables['DB_NAME']
db_user = env.variables['DB_USER']
db_pass = env.variables['DB_PASSWORD']

# Require a PostgreSQL server.
require.postgres.server(version="9.4")
require.postgres.user(db_user, db_pass)
require.postgres.database(db_name, db_user)

# Listen on all addresses - use firewall to block inadequate access.
sudo(''' psql -c "ALTER SYSTEM SET listen_addresses='*';" ''', user='postgres')

# Download the remote pg_hba.conf to a temp file.
tmp = tempfile.NamedTemporaryFile()
with open(tmp.name, "w") as f:
    get("/etc/postgresql/9.4/main/pg_hba.conf", f, use_sudo=True)

# Define the necessary line in pg_hba.conf.
hba_line = "host all all {DB_ACCEPT_IP}/0 md5".format(**env.variables)

# Search for the hba_line in the existing pg_hba.conf.
with open(tmp.name, "r") as f:
    for line in f:
        if hba_line in line:
            found = True
            break
    else:
        found = False

# If it does not exist, append it and upload the modified pg_hba.conf to the remote machine.
if not found:
    with open(tmp.name, "a") as f:
        f.write(hba_line + "\n")
    put(tmp.name, "/etc/postgresql/9.4/main/pg_hba.conf", use_sudo=True)

# Restart the postgresql service, so the changes take effect.
sudo("service postgresql restart")
The aspect I don't like about this solution is that if I change DB_ACCEPT_IP, it will just append a new line and not remove the old one. I am sure a cleaner solution is possible.
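A rough sketch of one possible refinement (untested, and assuming any previously appended entry can be recognized by its "host all all" prefix): rewrite the local copy, dropping stale entries before appending the current one.

# Drop any previously appended "host all all ..." entry, then add the current one.
with open(tmp.name, "r") as f:
    lines = [line for line in f if not line.startswith("host all all")]
lines.append(hba_line + "\n")
with open(tmp.name, "w") as f:
    f.writelines(lines)
put(tmp.name, "/etc/postgresql/9.4/main/pg_hba.conf", use_sudo=True)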
I have a script which creates a Data Source using a DB2 JDBC provider in WebSphere Application Server 8.5. I am fighting with an error while running the script and I need some help, please.
My script:
def createDB2(list):
    print 'Creating DB2 Data Source...'
    for dataSource in list:
        datasourceName = dataSource[0]
        dsJNDIName = dataSource[1]
        compAuthAlias = dataSource[2]
        providerName = dataSource[3]
        dataStoreHelperClassName = dataSource[4]
        description = dataSource[5]
        serverName = dataSource[6]
        databaseMaxConnections = dataSource[7]
        databaseMinConnections = dataSource[8]
        databaseconnTimeout = dataSource[9]
        databasereapTime = dataSource[10]
        databaseunusedTimeout = dataSource[11]
        databaseagedTimeout = dataSource[12]
        # Create the data source.
        dataSourceId = AdminJDBC.createDataSourceAtScope(scope, providerName, datasourceName, dsJNDIName, dataStoreHelperClassName, serverName, [['componentManagedAuthenticationAlias', compAuthAlias], ['containerManagedPersistence', 'true'], ['description', description]])
        connectionPoolList = AdminConfig.list('ConnectionPool', dataSourceId)
        connectionPoolList = AdminUtilities.convertToList(connectionPoolList)
        connectionPoolId = connectionPoolList[0]
        AdminConfig.modify(connectionPoolId, [["maxConnections", databaseMaxConnections], ["minConnections", databaseMinConnections], ["connectionTimeout", databaseconnTimeout], ["reapTime", databasereapTime], ["unusedTimeout", databaseunusedTimeout], ["agedTimeout", databaseagedTimeout]])
    print 'Saving configuration...'
    AdminConfig.save()
    print "Configuration saved."
My input list:
[datasourceName, JNDIName, AuthAlias, providerName, dataStoreHelperClassName, description, srvName, maxConnections, minConnections, connTimeout, reapTime, unusedTimeout, agedTimeout]
I am using the same script to create an Oracle Data Source with no errors. The only difference that I know of between these processes is the serverName: for DB2 it is a server name and for Oracle it is a URL. Is there another difference that I don't know about? Does anyone see an error or a mistake in my code?
My error:
Exception: com.ibm.ws.scripting.ScriptingException com.ibm.ws.scripting.ScriptingException: com.ibm.ws.scripting.ScriptingException: WASX8018E: Cannot find a match for option value [databaseName, java.lang.String, TestSRV] for step configureResourceProperties
WASX7017E: Exception received while running file "createDataSource.py"; exception information: com.ibm.ws.scripting.ScriptingException: WASX8018E: Cannot find a match for option value [databaseName, java.lang.String, TestSRV] for step configureResourceProperties
If you need more information, please leave a comment. Thanks in advance!
EDIT 03.03.2015
I found some examples in an IBM Redbook.
Example scripts for the DB2 database type:
The following example script includes optional attributes in a string format:
AdminJDBC.createDataSourceAtScope("Cell=IBM-F4A849C57A0Cell01,Node=IBM-F4A849C57A0Node01,Server=server1", "MyTestJDBCProviderName", "newds2", "newds2/jndi", "com.ibm.websphere.rsadapter.DB2UniversalDataStoreHelper", "db1", " category=myCategory, componentManagedAuthenticationAlias=CellManager01/AuthDataAliase, containerManagedPersistence=true, description='My description', xaRecoveryAuthAlias=CellManager01/xaAliase", "serverName=localhost, driverType=4,portNumber=50000")
The following example script includes optional attributes in a list format:
AdminJDBC.createDataSourceAtScope("Cell=IBM-F4A849C57A0Cell01,Node=IBM-F4A849C57A0Node01,Server=server1", "MyTestJDBCProviderName", "newds2", "newds2/jndi", "com.ibm.websphere.rsadapter.DB2UniversalDataStoreHelper", "db1", [['category', 'myCategory'], ['componentManagedAuthenticationAlias', 'CellManager01/AuthDataAliase'], ['containerManagedPersistence', 'true'], ['description', 'My description'], ['xaRecoveryAuthAlias', 'CellManager01/xaAliase']], [['serverName', 'localhost'], ['driverType', 4], ['portNumber', 50000]])
EDIT 16.04.2015
I am using the built-in function createDataSourceAtScope and I have another example:
def createDataSourceAtScope( scope, JDBCName, datasourceName, jndiName, dataStoreHelperClassName, dbName, otherAttrsList=[], resourceAttrsList=[], failonerror=AdminUtilities._BLANK_ ):
I have to call the function as above. Does anyone see the problem? :)
The built-in scripts are in: dmgrProfile/scriptLibraries/resources/JDBC/V70
I still don't know how to fix my problem. If anyone has an idea, please leave a comment or an answer. Thank you very much!
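For reference, mapping my variables onto the list-format Redbook example above would give something like the following. This is untested; driverType 4 and portNumber 50000 are placeholder values copied from the Redbook, and dbName is a hypothetical extra entry that is not in my current input list.

# Untested sketch: resource properties passed via the separate resourceAttrsList
# argument from the signature above, as in the Redbook list-format example.
# dbName, driverType and portNumber are placeholders.
dataSourceId = AdminJDBC.createDataSourceAtScope(scope, providerName, datasourceName, dsJNDIName,
    dataStoreHelperClassName, dbName,
    [['componentManagedAuthenticationAlias', compAuthAlias],
     ['containerManagedPersistence', 'true'],
     ['description', description]],
    [['serverName', serverName], ['driverType', 4], ['portNumber', 50000]])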
I know it's too late, but I struggled with the same problem for WebSphere on Docker, so I would like to share my solution.
Command to debug the scripts on ibmcom/websphere-traditional:8.5.5.18
/opt/IBM/WebSphere/AppServer/bin/wsadmin.sh -conntype None -f exportConfig.py
Jython script
import os
import sys
newjdbc = AdminConfig.getid('/JDBCProvider:"DB2 Universal JDBC Driver Provider"/')
ds = AdminTask.createDatasource(newjdbc, '[-name NameDataSource -jndiName jdbc/NameDataSource -description "DB2 Universal Driver Datasource" -dataStoreHelperClassName com.ibm.websphere.rsadapter.DB2UniversalDataStoreHelper -containerManagedPersistence true -componentManagedAuthenticationAlias db2inst1 -configureResourceProperties [[databaseName java.lang.String SAMPLE][portNumber java.lang.Integer 50000][serverName java.lang.String 172.17.0.3]]]')
AdminConfig.create('MappingModule', ds, '[[authDataAlias db2inst1] [mappingConfigAlias "DefaultPrincipalMapping"]]')
AdminConfig.save()
I am using Dancer::Plugin::Database to connect to a database from my Dancer application. It works fine for a single connection, but when I tried multiple connections I got an error. How can I add multiple connections?
I added the following code to my config.yml file:
plugins:
  Database:
    connections:
      one:
        driver: 'mysql'
        database: 'employeedetails'
        host: 'localhost'
        port: 3306
        username: 'remya'
        password: 'remy#'
        connection_check_threshold: 10
        dbi_params:
          RaiseError: 1
          AutoCommit: 1
        on_connect_do: ["SET NAMES 'utf8'", "SET CHARACTER SET 'utf8'"]
        log_queries: 1
          two:
            driver: 'mysql'
            database: 'employeetree'
            host: 'localhost'
            port: 3306
            username: 'remya'
            password: 'remy#'
            connection_check_threshold: 10
            dbi_params:
              RaiseError: 1
              AutoCommit: 1
            on_connect_do: ["SET NAMES 'utf8'", "SET CHARACTER SET 'utf8'"]
            log_queries: 1
Then I tried to connect to the database using the following code:
my $dbh=database('one');
my $sth=$dbh->prepare("select * from table_name where id=?");
$sth->execute(1);
I got a compilation error: "Unable to parse Configuration file".
Please suggest a solution.
Thanks in advance
YAML requires consistent indentation for the keys of a hash. Remove four spaces from before "two:" and it should parse.
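For example, the two connection keys should line up like this (the remaining keys under each connection are omitted here):

plugins:
  Database:
    connections:
      one:
        driver: 'mysql'
        database: 'employeedetails'
        # ... remaining keys for connection one ...
      two:
        driver: 'mysql'
        database: 'employeetree'
        # ... remaining keys for connection two ...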
Update: I see there's been some editing of indentation; going back to the original question produces a parsing error in a different place and shows a mixture of tabs and spaces being used; try to consistently use only tabs or only spaces. You can test your file and find what line is producing the first error like so:
$ perl -we'use YAML::Syck; LoadFile "config.yml"'
Syck parser (line 19, column 16): syntax error at -e line 1, <> chunk 1.
Also make sure that your keys are all ending up in the right hash (the mixture of tabs and spaces seems to allow this to come out wrong while still parsing successfully) with:
perl -we'use YAML::Syck; use Data::Dumper; $Data::Dumper::Sortkeys=$Data::Dumper::Useqq=1; print Dumper LoadFile "config.yml"'
I ran into these errors while trying to modify a Pinax database model.
I am using Eclipse PyDev.
I have this error in PyDev:
Exception Type: TemplateSyntaxError at /
Exception Value: Caught an exception while rendering: (1146, "Table 'test1.announcements_announcement' doesn't exist")
Please, how do I correct this?
UPDATE: I asked this question and left it unresolved some months back. This week I ran into the bug again, typed the error message into Google, and hit the page with my own unanswered question, so I think I have to answer it myself and hope it helps someone who runs into the same problem in the future.
The problem is that the SQLite path is out of place, so Django (or in this case Pinax) cannot find it. To resolve that, change the database path to an absolute path, like this:
DATABASE_ENGINE = 'sqlite3' # 'postgresql_psycopg2', 'postgresql', 'mysql', 'sqlite3' or 'ado_mssql'.
DATABASE_NAME = os.path.join(PROJECT_ROOT,'dev.db' ) # Or path to database file if using sqlite3.
DATABASE_USER = '' # Not used with sqlite3.
DATABASE_PASSWORD = '' # Not used with sqlite3.
DATABASE_HOST = '' # Set to empty string for localhost. Not used with sqlite3.
DATABASE_PORT = '' # Set to empty string for default. Not used with sqlite3.
I hope that helps.
Change the sqlite3 path like this:
DATABASE_ENGINE = 'sqlite3' # 'postgresql_psycopg2', 'postgresql', 'mysql', 'sqlite3' or 'ado_mssql'.
DATABASE_NAME = os.path.join(PROJECT_ROOT,'dev.db' ) # Or path to database file if using sqlite3.
DATABASE_USER = '' # Not used with sqlite3.
DATABASE_PASSWORD = '' # Not used with sqlite3.
DATABASE_HOST = '' # Set to empty string for localhost. Not used with sqlite3.
DATABASE_PORT = '' # Set to empty string for default. Not used with sqlite3.
If your database model is missing a column, run
python manage.py syncdb
from the command line. This ensures that your models match the underlying database representation.