Specify Cassandra's port, username, password, and host info while running Cadence - cadence-workflow

When running Cadence with an external Cassandra cluster, how can we provide Cassandra's specific port, username, and password?
With the default port 9042 and Cassandra authentication disabled, we can run Cadence with the following command: docker run -e CASSANDRA_SEEDS=10.x.x.x ubercadence/server
How do we specify a different port, username, and password?

I believe you can simply append ":9043" to the seed address to use a different port.
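For example, a hedged sketch (whether the image honors a port suffix in the seed list, and whether it exposes dedicated CASSANDRA_PORT, CASSANDRA_USER, and CASSANDRA_PASSWORD variables, depends on the docker/config_template.yaml shipped with your ubercadence/server version, so check that file first):
# Port embedded in the seed address
docker run -e CASSANDRA_SEEDS=10.x.x.x:9043 ubercadence/server
# If (and only if) your image's config template exposes these variables:
docker run -e CASSANDRA_SEEDS=10.x.x.x \
    -e CASSANDRA_PORT=9043 \
    -e CASSANDRA_USER=cassandra_user \
    -e CASSANDRA_PASSWORD=cassandra_password \
    ubercadence/server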

Related

Apache Superset remote connection to PostgreSQL database: Can't determine Superset IP

I'm running a Superset instance via Docker on a MacBook Air (2019, v11.5.2, Intel i5). I'm trying to set up a remote connection to a PostgreSQL database via an AWS endpoint. I entered the credentials via the dynamic form, like so:
HOST: {dbalias}.{xyz}.us-east-1.rds.amazonaws.com
PORT: 5432
DATABASE: {dbname}
USERNAME: {username}
PASSWORD: {password}
I'm sure that my credentials are valid because I used them to connect from both Databox and DBeaver. But when I try to connect here, Superset tells me that port 5432 is closed. A little digging tells me that this is likely a firewall issue.
I know that our database is set up to only allow connections from allowed IPs, and of course my machine's IP is whitelisted, so I assume I need to whitelist the IP that is sending the connection request (i.e., Superset). However, I cannot seem to find that information. Indeed, even Superset's PostgreSQL connection instructions seem to be incomplete vis-a-vis connecting to an AWS endpoint.
Assuming I've diagnosed the problem correctly (which is by no means a guarantee), the key question is: Where can I find my Superset instance's IP to add to my PostgreSQL IP whitelist? Relatedly, would this IP change next time I launch Superset from Docker, or will it persist?
Many thanks for any consideration.
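For what it's worth, when Superset runs under Docker Desktop on a Mac, container traffic is NATed through the host, so the address the database actually sees is normally the host machine's public IP. A quick, hedged way to check it (ifconfig.me is just one of several echo-IP services):
curl -s https://ifconfig.me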

Connect dbt to Postgres using SSH bastion

We are looking to connect dbt to Postgres using SSH bastion.
I followed the comments left under this issue, but I get a timeout error.
A few questions:
How should the profiles.yml be configured to connect via SSH? I added ssh-host but that did not get it working.
Are there any other configurations that I'd need to set up?
I just hacked my way through figuring this out, and the steps listed in the comment above were very helpful for someone with zero experience in this realm who still needs to use dbt with a bastion host. Here is specifically how I did it and some helpful resources I came across. Hopefully others will find these examples helpful.
You register a public SSH key with the remote location, tied to a
private key that lives on your machine
Github has a helpful guide for how to do this: https://docs.github.com/en/authentication/connecting-to-github-with-ssh/generating-a-new-ssh-key-and-adding-it-to-the-ssh-agent
Add the keys to ~/.ssh/config: Adding an RSA key without overwriting.
I also had to add IgnoreUnknown UseKeychain to ~/.ssh/config; a sketch of the resulting entry is shown below.
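For reference, a minimal ~/.ssh/config entry along those lines might look like this (the host alias, hostname, user, and key path are placeholders; UseKeychain is macOS-specific, which is why IgnoreUnknown UseKeychain is needed on clients that don't recognize it):
Host my-bastion
    HostName bastion.example.com
    User my_bastion_user
    IdentityFile ~/.ssh/id_ed25519
    IgnoreUnknown UseKeychain
    UseKeychain yes
    AddKeysToAgent yes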
You use a CLI tool (e.g. ssh, autossh) to "forward" a local port to
the remote location (bastion host)
To forward the local port to the bastion host, save your user/bastion host/db host into environment variables. I used Postgres so it looked like this.
ssh -l $BASTION_USER $BASTION_HOST -p 22 -N -C -L "5432:${POSTGRES_HOST}:5432";
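The environment variables referenced in that command aren't defined in the snippet; a hypothetical setup might be:
export BASTION_USER=my_bastion_user           # SSH user on the bastion (placeholder)
export BASTION_HOST=bastion.example.com       # bastion hostname (placeholder)
export POSTGRES_HOST=db.internal.example.com  # database host as seen from the bastion (placeholder)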
In profiles.yml, instead of putting the host/port of a remote
database, you put localhost and the number of the "forwarding" port
Then my ~/.dbt/profiles.yml includes this:
dev:
  type: postgres
  threads: 1
  host: localhost
  port: 5432
  user: POSTGRES_USER
  pass: POSTGRES_PWD
  dbname: POSTGRES_DB_NAME
  schema: dbt_tmp
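If you'd rather not hard-code credentials there, dbt's env_var function can pull them from the environment instead — a minimal sketch, assuming the variable names used above are exported in your shell:
  user: "{{ env_var('POSTGRES_USER') }}"
  pass: "{{ env_var('POSTGRES_PWD') }}"
  dbname: "{{ env_var('POSTGRES_DB_NAME') }}"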
Voila! Your connection is forwarded to the bastion host, authenticated
via SSH, and passed along to the database
At that point I ran dbt debug against my target and it connected with all checks passed.
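For reference, assuming the target is named dev as above, that check is just:
dbt debug --target dev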
I think you need to follow Jeremy's instructions from this comment:
The basic idea, as I remember it:
You register a public SSH key with the remote location, tied to a
private key that lives on your machine
You use a CLI tool (e.g. ssh,
autossh) to "forward" a local port to the remote location (bastion
host)
In profiles.yml, instead of putting the host/port of a remote
database, you put localhost and the number of the "forwarding" port
Voila! Your connection is forwarded to the bastion host, authenticated
via SSH, and passed along to the database
To be fair, he was also asking for definitive walkthroughs and included the caveat that this has had varying levels of success based on the particulars of the client, host, environment etc.

Connecting DBeaver to remote PostgreSQL DB via Unix socket

I recently installed https://dbeaver.io/ on a Windows PC and wish to access a database on a remote Linux server from it.
My Linux username is my_username and I also have a system user psql_user. There are also two existing PostgreSQL databases, each with the same name as its respective user. Typically, only the psql_user database is used; it is accessed by a php-fpm pool listening on a Unix socket and running as user psql_user, and as such I have configured /var/lib/pgsql/12/data/pg_hba.conf as:
# TYPE  DATABASE     USER  ADDRESS       METHOD
local   all          all                 peer
host    all          all   127.0.0.1/32  ident
host    all          all   ::1/128       ident
local   replication  all                 peer
host    replication  all   127.0.0.1/32  ident
host    replication  all   ::1/128       ident
With the above configuration, after ssh'ing onto the server, I can access the my_username database by executing psql and can also access the psql_user database by executing sudo -u psql_user psql and do not need to use a password for either.
But now, how to connect from the remote Windows PC?
To attempt to do so, I first created SSH keys without passphrases on the Windows PC for both my_username and psql_user and added the public key to each Linux user's authorized_keys (I had to manually create /home/psql_user/ because it is a system user). I can successfully PuTTY to the server as either user using the SSH keys.
Next, on the DBeaver connection settings SSH tab, I checked "Use SSH Tunnel", entered the username and private key location and the Test tunnel configuration successfully shows connected with the client version as SSH-2.0-JSCH-01.54 and server version as SSH-2.0-OpenSSH_7.4. I also made no changes to the Advanced portion of this tab such as local and remote hosts and ports, and have also left the "You can use variables in SSH parameters" at their default values.
Using my server IP in the main tab, with Authentication set to "Database Native" and the password left empty, I test the connection but get The connection attempt failed. syslog reports that the connection to the IP on port 5432 failed, which makes sense because I am set up to use Unix sockets.
So, then I change the server IP on the main tab to 127.0.0.1 (or localhost) and try again but get FATAL: Ident authentication failed for user "my_username". Okay, a little closer, but not quite there.
I think it might be because DBeaver is passing the port, so I attempt to disable this part by going to the Edit Driver tab and changing jdbc:postgresql://{host}[:{port}]/[{database}] to jdbc:postgresql://{host}/[{database}], but now get Connection to 127.0.0.1:5432 refused. Check that the hostname and port are correct and that the postmaster is accepting TCP/IP connections.
Not sure where to go next. When I PuTTY into the Linux machine, all is good, but not when connecting remotely using DBeaver, and I thought it would be the same when using SSH to connect DBeaver to the server. How can this be accomplished?
As pointed out in the other answer, DBeaver's SSH tunnel option doesn't support sockets currently. It is always TCP port based, so only connections using the host options in pg_hba.conf can be made (I've placed a feature request for SSH socket forwarding in DBeaver).
Here's how to set up forwarding of a local TCP port to a remote Unix socket. This allows you to use peer authentication over the Unix socket, so you don't have to provide a password for the PostgreSQL role:
ssh username@dbserver.example.com -L 5555:/var/run/postgresql/.s.PGSQL.5432 -fN
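With that tunnel running on the Windows PC, DBeaver's main tab would then point at the forwarded local port rather than the server's IP, along the lines of the following URL (5555 is the arbitrary local port chosen above; substitute whichever database you want to open):
jdbc:postgresql://127.0.0.1:5555/psql_user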
While I think that ssh tunnelling can be set up to connect to a unix socket rather than a port, I don't think dbeaver offers a way to do that, so you would have to set it up separately.
Although ident should also work if your server runs the identd service. I think most Linux distributions don't do that by default, but apt install oidentd (or whatever the equivalent would be for your package manager) should fix that.
The easier solution would be to just change the method from ident to md5 or scram, and assign a password (which DBeaver offers to remember).

Serving my postgres database online

I want to have a postgres database on a computer that I can use from multiple (external) computers. It will act as a trial server for me, leaving it on whenever I need it.
I researched how to do it and found out I had to forward the Postgres service to the internet. Postgres is on port 5432. I logged in to my router, which has a forwarding option. I opened up port 5432, but I can't add postgres to the list of services.
Is there a reason for that?
Actually, I found that I just have to adapt the pg_hba.conf file (just started trying). I am running Windows. Any advice is welcome; this is not my expertise. I don't understand why it would work if I just adapt the pg_hba.conf. For games or other services I have to open a port in the router. Or should I do both?
From Postgres documentation - Client authentication is controlled by a configuration file, which traditionally is named pg_hba.conf and is stored in the database cluster's data directory. (HBA stands for host-based authentication.)
Each record specifies a connection type, a client IP address range (if relevant for the connection type), a database name, a user name, and the authentication method to be used for connections matching these parameters.
So it is absolutely required to set up your pg_hba.conf for it to allow access from other computers. You will also need to set up your router and firewall settings to allow incoming connections on port 5432.
Here is what you need to do.
In postgresql.conf, change listen_addresses to:
listen_addresses = '*'
and in pg_hba.conf add this to the end of the file:
host all all 0.0.0.0/0 md5
Also make sure that port 5432 is forwarded from your router to the machine running Postgres.
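After restarting PostgreSQL so the changes take effect, you can verify from an external machine with psql (the public IP, user, and database name below are placeholders):
psql -h <your-public-ip> -p 5432 -U <username> -d <dbname>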

Google Cloud Tomcat stack MySQL access

I created a Tomcat stack instance on Google Cloud and it comes with MySQL 5.5. I can access it through the terminal by SSHing in and then running mysql -u username -p. However, when I try to access the database through MySQL Workbench, I cannot get a connection.
So I guess I need to use the connection method "Standard TCP/IP over SSH", and I need to confirm whether I have the parameters right.
SSH Hostname: the IP of my instance
SSH Username: my Google account username (the one I use to log in when setting up the gcloud SSH)
SSH Password: the password for the account
SSH Key file: ~/.ssh/google_compute_engine
MySQL Hostname: I put tomcat_rmre, which is the hostname for the database when I do "show variables" in the terminal
MySQL Server Port: the port shown in "show variables"
Username: username of the database
Password: password of the database
When I test the connection, I get Lost connection to MySQL server at 'reading initial communication packet', system error: 0
The Tomcat Click to Deploy solution does not use Cloud SQL, it just includes MySQL running on Google Compute Engine.
If you are not able to connect to the MySQL instance remotely via the "Standard (TCP/IP)" connection method it is probably because of one of:
You may have forgotten to open TCP port 3306 in your GCE firewall.
MySQL may not be listening to remote connections. Your my.cnf should contain bind-address = 0.0.0.0 and should not have skip-networking.
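A hedged sketch of the two fixes (the firewall rule name, source range, config path, and restart command are placeholders; adjust to your distribution and setup):
# Open TCP 3306 in the GCE firewall, ideally only for your own IP
gcloud compute firewall-rules create allow-mysql --allow tcp:3306 --source-ranges 203.0.113.4/32
# In /etc/mysql/my.cnf, under [mysqld], set:
#   bind-address = 0.0.0.0
# and make sure skip-networking is absent or commented out, then restart MySQL:
sudo service mysql restart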