I followed the instructions at https://github.com/karatelabs/karate/wiki/Docker to run the karate-chrome Docker image, and it worked fine.
But when I try to connect Karate to my local Postgres server, I get the following error:
org.postgresql.util.PSQLException: Connection to localhost:5432 refused. Check that the hostname and port are correct and that the postmaster is accepting TCP/IP connections.
The Postgres server is running with the correct host and port (localhost:5432), and I'm using the following configuration for the JDBC API:
Background:
* def config = { username: 'postgres', password: 'pass', url: 'jdbc:postgresql://localhost:5432/database_name_here', driverClassName: 'org.postgresql.Driver' }
* def DbUtils = Java.type('Testapi.DbUtils')
* def db = new DbUtils(config)
Does anyone have any suggestions to solve this problem? Thank you in advance.
Note: when I use an online MySQL server, everything runs fine (with its respective configuration).
Problem
I'm trying to connect to MongoDB with NestJS (^8.2.3) and TypeORM (^0.2.28).
In the test environment, connecting to a standalone MongoDB server works. For your information, the Node mongodb library version is ^3.6.2.
Production sample code (NestJS server)
I referred to the TypeORM code to write the MongoDB options:
import { TypeOrmModule } from '@nestjs/typeorm';
import { MongoConnectionOptions } from 'typeorm/driver/mongodb/MongoConnectionOptions';

export const configForOrmModule = TypeOrmModule.forRootAsync({
  imports: [],
  useFactory: async () => {
    // username, password, port, database, authSource and the Something entity
    // are defined elsewhere in the application
    const mongodbConfig: MongoConnectionOptions = {
      type: 'mongodb',
      username,
      // for replicaSet (production)
      hostReplicaSet: 'server1.example.com:20723,server2.example.com:20723,server.example.com:20723',
      replicaSet: 'replicaSetName',
      port: Number(port),
      password: encodeURIComponent(password),
      database,
      authSource,
      synchronize: true,
      useUnifiedTopology: true,
      entities: [Something],
    };
    return mongodbConfig;
  },
  inject: [],
});
But in the production environment, when the NestJS server tries to connect to the MongoDB replica set, it gets the server selection error below in a loop, over and over again. The interesting thing was that the domain the server tried to connect to was different from the replica set hosts (e.g. another-hostname, not included in server1.example.com:20723,server2.example.com:20723,server.example.com:20723). (+ edited: the other hostname is the actual physical server pointed to by the DNS (server.example.com))
01/28/2022, 2:39:16 AM ERROR [TypeOrmModule] Unable to connect to the database. Retrying (3)...
MongoServerSelectionError: getaddrinfo ENOTFOUND <another-hostname>
at Timeout._onTimeout (/home/node/app/node_modules/mongodb/lib/core/sdam/topology.js:430:30)
at listOnTimeout (node:internal/timers:557:17)
at processTimers (node:internal/timers:500:7)
What I've tried (none of these worked):
remove the useUnifiedTopology: true option
downgrade the mongodb library version to 3.5.11 (I've read in the MongoDB community that there is some bug with topology after version 3.6)
use the host option instead of hostReplicaSet
If you need more information, please tell me. Thank you for your help.
It was a Kubernetes DNS issue. The hostReplicaSet server1.example.com:20723,... is resolved to host1 (the physical server name, without example.com), but k8s doesn't know that name, so the connection failed.
There are two options:
update the Kubernetes /etc/hosts setting to add host1 -> host1.example.com (see the sketch after this list)
or update the MongoDB hostname host1 -> host1.example.com
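A minimal sketch of the first option, assuming the entry is added to /etc/hosts inside the pod (for instance through the pod spec's hostAliases field); the IP address below is a placeholder, not a value from the original setup:

# hypothetical /etc/hosts entry: give the bare hostname the same address
# as its fully qualified name so the driver's lookup succeeds
10.0.0.11   host1 host1.example.com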
I have a Postgres DB on AWS, and currently we connect to it using the Postico client by providing the information below:
DB Host
DB Port
DB Username
DB Password
DB Name
SSH Host (it is domain)
SSH Port
SSH Private Key
While doing this I am connected to my organisation's VPN. Now I have to make the same connection from Python code, and I believe that if I can connect with Postico I should be able to connect through code as well. I have used the code below but am unable to connect to the DB and fetch records, so can anyone give an idea or sample code?
from sshtunnel import SSHTunnelForwarder
from sqlalchemy import create_engine
import pandas as pd

class Postgresql_connect:
    def __init__(self, pgres_host, pgres_port, db, ssh, ssh_user, ssh_host, ssh_pkey):
        # SSH Tunnel Variables
        self.pgres_host = pgres_host
        self.pgres_port = pgres_port
        if ssh == True:
            self.server = SSHTunnelForwarder(
                (ssh_host, 22),
                ssh_username=ssh_user,
                ssh_private_key=ssh_pkey,
                remote_bind_address=(pgres_host, pgres_port),
            )
            server = self.server
            server.start()  # start ssh server
            self.local_port = server.local_bind_port
            print(f'Server connected via SSH || Local Port: {self.local_port}...')
        elif ssh == False:
            pass

    def query(self, db, query, psql_user, psql_pass):
        engine = create_engine(f'postgresql://{psql_user}:{psql_pass}@{self.pgres_host}:{self.local_port}/{db}')
        print(f'Database [{db}] session created...')
        print(f'host [{self.pgres_host}]')
        self.query_df = pd.read_sql(query, engine)
        print('<> Query Successful <>')
        engine.dispose()
        return self.query_df
pgres = Postgresql_connect(pgres_host=p_host, pgres_port=p_port, db=db, ssh=ssh, ssh_user=ssh_user, ssh_host=ssh_host, ssh_pkey=ssh_pkey)
print(ssh_pkey)
query_df = pgres.query(db=db, query='Select * from table', psql_user=user, psql_pass=password)
print(query_df)
Connect just as you would locally after creating an SSH tunnel:
https://www.howtogeek.com/168145/how-to-use-ssh-tunneling/
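For example, here is a minimal sketch of that approach using the same sshtunnel/SQLAlchemy stack as the code above; all hostnames, credentials, and paths are placeholders, and the key point is that the engine connects to 127.0.0.1 and the tunnel's local_bind_port rather than to the remote DB host directly:

from sshtunnel import SSHTunnelForwarder
from sqlalchemy import create_engine
import pandas as pd

# placeholder values - replace with your own SSH and DB settings
ssh_host, ssh_user, ssh_pkey = 'bastion.example.com', 'ec2-user', '/path/to/key.pem'
pg_host, pg_port, db = 'mydb.xxxx.eu-west-1.rds.amazonaws.com', 5432, 'mydb'
pg_user, pg_pass = 'postgres', 'secret'

with SSHTunnelForwarder(
    (ssh_host, 22),
    ssh_username=ssh_user,
    ssh_pkey=ssh_pkey,
    remote_bind_address=(pg_host, pg_port),
) as tunnel:
    # connect to the local end of the tunnel, not to pg_host directly
    engine = create_engine(
        f'postgresql://{pg_user}:{pg_pass}@127.0.0.1:{tunnel.local_bind_port}/{db}'
    )
    df = pd.read_sql('SELECT 1 AS ok', engine)
    print(df)
    engine.dispose()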
I have tried everything to connect my Chainlink node to my PostgreSQL database, with no luck. I have scoured the interwebs for answers to no avail...
Here is the error message I am receiving:
[ERROR] failed to initialize database, got error failed to connect to `host=/tmp user=root database=`: dial error (dial unix /tmp/.s.PGSQL.5432: connect: no such file or directory)
Here is my .env file:
ROOT=/chainlink
LOG_LEVEL=debug
ETH_CHAIN_ID=42
MIN_OUTGOING_CONFIRMATIONS=2
LINK_CONTRACT_ADDRESS=0xa36085F69e2889c224210F603D836748e7dC0088
CHAINLINK_TLS_PORT=0
SECURE_COOKIES=false
GAS_UPDATER_ENABLED=true
ALLOW_ORIGINS=*
ETH_URL=wss://kovan.infura.io/ws/v3/id...
DATABASE_URL=https://chainlink-db-url://postgres:Password@chainlink-kovan:5432
I have tried every configuration of the connection string. Also, I am able to connect to the DB via pgAdmin with no problem, and the DBs are publicly accessible.
The postgresql database is on AWS.
Please change the syntax of your DATABASE_URL to:
DATABASE_URL=postgresql://"username":"password"@"public-ip-pg-server":5432/"database-name"
Just change:
"username" : you need to configure a new user, because the default/admin user postgres will not work for this.
"password" : the password of the user
"public-ip-pg-server" : the public IP address of your PostgreSQL server
"database-name" : the name of your database
PS: delete all the " characters in your syntax (;
Here is the link to the official documentation: https://docs.chain.link/docs/connecting-to-a-remote-database/
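For illustration, here is the template filled in with made-up values (the user, password, IP address, and database name are placeholders, not settings from this post):

DATABASE_URL=postgresql://chainlink_user:MySecretPassword@203.0.113.45:5432/chainlink_db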
I'm trying to connect my Deno application to MongoDB, but I get an error.
import {MongoClient} from "https://deno.land/x/mongo#v0.21.2/mod.ts";
const client = await new MongoClient();
await client.connect("mongodb+srv://deno:i4vY8AtCEhr6ReqB#sample.7jp1l.mongodb.net/deno?retryWrites=true&w=majority");
const db = client.database("notes");
export default db;
Everything seems to be fine, but when I run the app, I get this error:
error: Uncaught (in promise) Error: MongoError: "Connection failed: failed to lookup address information: nodename nor servname provided, or not known"
throw new MongoError(`Connection failed: ${e.message || e}`);
^
at MongoClient.connect (client.ts:93:15)
at async mongodb.ts:4:1
Two problems that I see:
The code snippet above only works with Mongo installed on the local machine.
The connection string uses a DNS seed list, but the current library can't resolve it to a list of hosts.
To make it work with Mongo Atlas, you need to call the connect method with different parameters and find a correct (static) host instead of a (dynamic) DNS seed list:
const client = new MongoClient();
const db = await client.connect({
  db: '<your db or collection to work with>',
  tls: true,
  servers: [
    {
      host: '<correct host - how to get the host, see below>',
      port: 27017,
    },
  ],
  credential: {
    username: '<your username>',
    password: '<your password>',
    mechanism: 'SCRAM-SHA-1',
  },
});
How to get the correct host:
Open your Cluster in Mongo Atlas
Select Connect button
Select Connect to application option
Select Driver: Node.js and Version: 2.2.12 or later
Here you will see the list of hosts following the @ character
Thank you @nthung.vlvn for the clue. Indeed, the host needs to be the primary shard. That fixed the lookup address information error, but then I had another error saying my credentials were incorrect. I had to add db: "admin" to credential:
credential: {
username: '<your username>',
password: '<your password>',
db: "admin",
mechanism: 'SCRAM-SHA-1',
}
That is weird, because I do not have an admin db in my Atlas. Anyway, it started to work.
I am trying to connect to Mongo Atlas using just the Elixir mongodb driver.
I wish there were more help or code snippets around for making these external sharded connections with replica sets. Here's the error I've been receiving:
Mongo.Protocol (#PID<0.303.0>) failed to connect: ** (Mongo.Error) tcp connect: connection refused - :econnrefused
Connection start_link:
conn = Mongo.start_link(
  database: "admin",
  seeds: [
    "server-shard-01:27017",
    "server-shard-02:27017",
    "server-shard-03:27017"
  ],
  set_name: "test-shard-0",
  username: "myuser",
  password: "mypassword",
  auth_source: "admin",
  port: 27017,
  type: "replica_set_primary",
  ssl: true
)
I couldn't find any Erlang libraries that help here either. This could actually be due to the underlying Erlang libraries.
So, as an alternative, we have implemented Ruby code that does the writes to MongoDB, and it runs as a separate container. Even though Elixir would let us run Ruby, this is still not the best-performing approach.
I would like to know if anyone else has found a solution for this yet.