NoHostAvailable connecting to Cassandra from python in Databricks environment - pyspark

from cassandra.cluster import Cluster

hostname = ['contact_point_name']   # cluster contact point(s)
port = 10350                        # non-default native transport port
cluster = Cluster(hostname, control_connection_timeout=None, port=port)
session = cluster.connect()
Error: NoHostAvailable: ('Unable to connect to any servers', {'23.96.242.234:10350': OperationTimedOut('errors=Timed out creating connection (5 seconds), last_host=None')})

The error you posted indicates that it couldn't connect to the cluster at all.
The possible causes are:
There is no network connectivity to the node(s).
The node is not listening on port 10350 at IP 23.96.242.234.
Cassandra listens for client connections on the IP set in rpc_address and the port set in native_transport_port (9042 by default). Confirm that you have the correct details, and verify that there is connectivity between your machine and the cluster using Linux tools such as telnet or nc. Cheers!
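If telnet or nc are not available in the Databricks environment, the same reachability check can be done from Python. A minimal sketch, using the host and port from the error message above (swap in 9042 if your cluster uses the default native transport port):

import socket

host = "23.96.242.234"   # contact point taken from the error message
port = 10350             # swap in 9042 if the cluster uses the default port

try:
    # Attempt a plain TCP connection with a short timeout
    with socket.create_connection((host, port), timeout=5):
        print(f"TCP connection to {host}:{port} succeeded")
except OSError as exc:
    print(f"TCP connection to {host}:{port} failed: {exc}")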

Related

Synapse Notebook throws timeout error while connecting to AWS RDS SQL Server

I am working in the Synapse Workspace and trying to connect to AWS RDS from the Synapse Notebook.
Whenever I try to connect, it throws the timeout error below:
The TCP/IP connection to the host my-host, port 1433 has failed.
Error: "connect timed out.
Verify the connection properties. Make sure that an instance of SQL Server is running on the host and accepting TCP/IP connections at the port.
Make sure that TCP connections to the port are not blocked by a firewall.
To check whether I can ping the host from the Synapse Notebook, I tried the code below:
import subprocess

# Send a single ICMP echo request to the host
temp = subprocess.Popen(
    ['ping', '-c', '1', 'my-host'], stdout=subprocess.PIPE)
output = str(temp.communicate())
print(output)
and this throws
ping statistics ---\n1 packets transmitted, 0 received, 100% packet loss
I get that this is the timeout error and the notebook cannot reach the server.
What is surprising is that if I try to connect to the same AWS RDS server by creating a linked service from the Synapse pipeline, it connects successfully.
On my source AWS RDS instance, do I need to open the firewall for Synapse notebooks specifically? Is there any endpoint that I should mention in my notebook?
Also, isn't it handled at the resource group level?
Any help is appreciated.
Thank you,
Sanket Kelkar
If you have already configured your database to listen for TCP/IP traffic on port 1433, then it could be any of the following reasons:
The JDBC connection string might be incorrect.
A firewall is blocking the incoming connection. Make sure the instance is publicly accessible; you can verify this when you check its availability.
The AWS RDS SQL Server database is not running. Ensure that its status shows "available".
Make sure you specified port 1433 when creating the SQL Server instance, and that your connection details use it (see the sketch after this list).
Check whether your DB instance can be reached under the inbound rules of your VPC security group. For more information, see Can't connect to Amazon RDS DB instance.
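Note that a failed ICMP ping does not by itself prove the TCP path is blocked, since many networks drop ICMP while still allowing TCP; attempting the actual database connection is more informative. Below is a minimal sketch of such a check. It assumes pyodbc and the "ODBC Driver 17 for SQL Server" ODBC driver are available in the Synapse Spark pool; the host, database, and credentials are placeholders:

import pyodbc

# Placeholder connection details -- replace with your RDS endpoint and credentials
conn_str = (
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=my-host,1433;"   # note the explicit port after the comma
    "DATABASE=my_database;"
    "UID=my_user;"
    "PWD=my_password;"
)

try:
    # The timeout (in seconds) applies to the login attempt
    conn = pyodbc.connect(conn_str, timeout=10)
    print("Connected successfully")
    conn.close()
except pyodbc.Error as exc:
    print(f"Connection failed: {exc}")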

Allow docker container to accept traffic from public IPs

My Docker container running on AWS EC2 is configured to accept traffic only from 172.17.0.0:5432. I'd like to change this to accept traffic from public IP addresses. Do I use 0.0.0.0:5432->5432/tcp?
How do I change this configuration? I'm SSH'd into the AWS EC2 instance.
Context: I am running Postgres in a Docker container on AWS EC2, but my connection request fails because traffic from remote machines is blocked.
import psycopg2

# Connect to the Postgres instance running in the Docker container on EC2
conn = psycopg2.connect(
    host="204.xxx.xxx.xxx",
    port="5432",
    database="name_db",
    user="postgres",
    password="xxxxxxxxxx",
)
OperationalError: could not connect to server: Connection refused
Is the server running on host "204.xxx.xxx.xxx" and accepting
TCP/IP connections on port 5432?
Have you created the appropriate security group rules for the AWS EC2 instance that hosts the Docker container?
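A plain TCP probe from the remote machine can help narrow this down: "connection refused" usually means the packets reach the host but nothing is listening on the published port (check that the container publishes it on all interfaces, e.g. with -p 5432:5432), while a timeout usually means the EC2 security group is dropping the traffic. A minimal sketch, with the EC2 public IP left as a placeholder:

import socket

host = "204.xxx.xxx.xxx"   # placeholder for the EC2 public IP
port = 5432

try:
    # Attempt a raw TCP connection with a short timeout
    with socket.create_connection((host, port), timeout=5):
        print(f"{host}:{port} is reachable and something is listening")
except ConnectionRefusedError:
    print("Connection refused: the host answered, but nothing is listening on that port")
except socket.timeout:
    print("Timed out: traffic is likely being dropped by a firewall or security group")
except OSError as exc:
    print(f"Connection failed: {exc}")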

How to connect using port forwarding to a PostgreSQL database on OpenShift 3

I have a problem connecting to a port-forwarded database on OpenShift.
The PostgreSQL pod is running.
When I connect to the container running the database and check the process and run psql directly, it works.
Next, I set up port forwarding so I can try connecting from outside the OpenShift cluster.
When I then try to connect to PostgreSQL from outside the cluster, I get the error: Connection refused.
Whether I use the IP address or the hostname / FQDN, it does not work and the error persists.
I also checked the firewall, and port 5432/TCP has been opened.
Can anyone help me with this problem?
Thanks
Note: I have already looked at the documentation below, but it did not resolve the problem.
Source Documentation:
https://www.openshift.com/blog/openshift-connecting-database-using-port-forwarding
"psql: could not connect to server: Connection refused" Error when connecting to remote database
The oc port-forward command forwards only from your loopback interface.
If you are running your client on the same machine where oc port-forward is running, then use localhost as your "Host".
If you are running your client on a different machine, then you need more network redirection to get this to work. Please see this post for more information as well as work-arounds for your problem: Access OpenShift forwarded ports from remote host
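For the first case, the client simply points at the forwarded local port. A minimal sketch, assuming the forward was started with something like oc port-forward <postgresql-pod> 5432:5432, and that the database name and credentials below are placeholders:

import psycopg2

# Connect through the oc port-forward tunnel, which listens only on loopback
conn = psycopg2.connect(
    host="localhost",
    port=5432,
    dbname="mydb",       # placeholder database name
    user="postgres",     # placeholder credentials
    password="secret",
)
print(conn.get_dsn_parameters())
conn.close()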

How to configure HAProxy to forward requests to Mongo database

Is it possible to set up HAProxy to forward requests to a Mongo database? If so, can someone provide a basic example of how to set this up in the haproxy.cfg file?
I tried this, but this doesn't work:
listen mongo
bind 10.123.45.6:27017
mode tcp
balance roundrobin
server mongo1 10.456.78.9:27017
Where 10.123.45.6 is the IP of instance w/ HAProxy installed.
Where 10.456.78.9 is the IP of instance w/ mongodb installed.
Screenshot when trying to invoke client via command line:
C:\Program Files\MongoDB\Server\3.6\bin>mongo "mongodb://10.123.45.6:27017"
MongoDB shell version v3.6.11
connecting to: mongodb://10.123.45.6:27017/?gssapiServiceName=mongodb
2020-01-23T15:53:41.707-0800 W NETWORK [thread1] Failed to connect to 10.123.45.6:27017 after 5000ms milliseconds, giving up.
2020-01-23T15:53:41.707-0800 E QUERY [thread1] Error: couldn't connect to server 10.123.45.6:27017, connection attempt failed :
connect#src/mongo/shell/mongo.js:263:13
#(connect):1:6
exception: connect failed
Your settings are right; however, I am adding an example for you:
listen port_27017
bind :27017
mode tcp
server mongodb-port 10.156.78.9:27017
It should connect, but just to confirm: from where are you trying to connect? Is it within the local network or somewhere in the cloud? Per RFC 1918, all of 10.0.0.0/8 is private address space, so if you are outside that network and trying to access 10.123.45.6, it won't work.
If it is within the network, tail the HAProxy log and see whether it is able to connect to Mongo or not.
If it is outside, you need to connect to HAProxy using its public IP address rather than the private one.
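To exercise the path through HAProxy from Python rather than the mongo shell, a short probe like the following can help. This is a sketch only; it assumes pymongo is installed and that 10.123.45.6 is reachable from wherever you run it:

from pymongo import MongoClient
from pymongo.errors import ServerSelectionTimeoutError

# Point the client at the HAProxy frontend, not at mongod directly
client = MongoClient("mongodb://10.123.45.6:27017", serverSelectionTimeoutMS=5000)

try:
    # Forces a real round trip through HAProxy to the mongod behind it
    client.admin.command("ping")
    print("Connected through HAProxy")
except ServerSelectionTimeoutError as exc:
    print(f"Could not reach MongoDB via HAProxy: {exc}")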

How to ping my local machine from an AWS EC2 instance?

I have started an ubuntu instance on AWS EC2
e.g. [ec2-user@ip-XXX-XX-XX-XX ~]$
Inside this instance, I am running a socket program for sending the data to my local system.
The program runs properly, but it is not able to connect to my local IP.
I also tried to ping my local system from the EC2 instance, but that does not work either. However, I am able to ping Google (8.8.8.8).
e.g. [ec2-user@ip-xxx-xx-xx-xx ~]$ ping xxx.xxx.xx.xx (my local IP)
I have set all inbound security group rules, such as All Traffic, All TCP, and so on.
Sorry for bad English.
Thank You
Your computer (PC) cannot be pinged from an AWS-hosted machine.
This is probably because the VM on your computer uses outbound NAT to talk to the LAN, which goes to an Internet router, which sends the packets to AWS.
The reverse route (inbound to your PC) does not exist, so a ping echo request started from an AWS machine will not work.
It is possible to get around this by opening a pass-through on your router, but generally this is not a great idea.
However, if you want to make a socket connection securely, there is a way.
First, start an ssh session with remote port forwarding. In the Linux ssh client this is done with the -R option.
For example, if your local system is running a listening service on port 80 and your remote system has the address 54.10.10.10, then
ssh -R 8080:localhost:80 ec2-user@54.10.10.10
will establish a circuit such that connections to localhost on port 8080 on the remote EC2 server are forwarded to localhost on port 80 of your local machine.
If you are not using an ssh CLI program, most ssh clients have a facility of this sort.
Note that it is necessary to keep the ssh session open in order to use the forwarded connections.
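To verify the tunnel from the EC2 side while the ssh session above is open, something along these lines can be run on the instance. This is a sketch only; it assumes the local service on port 80 speaks HTTP (for a raw socket service, a plain socket connection to localhost:8080 would do the same job):

import urllib.request

# On the EC2 instance, localhost:8080 is the remote end of the -R tunnel,
# so this request is carried back to port 80 on the local machine.
try:
    with urllib.request.urlopen("http://localhost:8080/", timeout=5) as resp:
        print("Tunnel works, got HTTP status", resp.status)
except OSError as exc:
    print("Tunnel check failed:", exc)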