I'm trying to configure an SSH connection from the Airflow UI in a Google Cloud Composer environment to an on-premise PostgreSQL server.
Where should I store my private key?
How do I pass the private key location to the SSH connection config?
First, you will need to add an SSH connection under:
Airflow -> Admin -> Connections -> Connection Type (SSH)
That will allow you to use this connection in an operator to access the remote instance. Add your key details to the Extra field (see the key_file & host_key parameters).
Documentation here: https://airflow.apache.org/docs/apache-airflow-providers-ssh/stable/connections/ssh.html
Adding the file to the same GCS bucket that holds the DAGs will make it reachable by the Airflow workers; in Cloud Composer the bucket's dags/ folder is mounted on the workers at /home/airflow/gcs/dags. You can create a new directory under dags and name it keys if you want.
Then you will need to design your pipeline (DAG) so that it uses this connection, and hence your private key, to reach the remote instance.
You can use the SSHOperator (the successor of the old contrib SSHExecuteOperator) or any other operator, depending on your design; a minimal sketch follows.
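A minimal sketch of such a DAG, assuming an SSH connection named ssh_onprem_postgres whose Extra field points at the synced key, e.g. {"key_file": "/home/airflow/gcs/dags/keys/id_rsa", "no_host_key_check": "true"} (the connection name and key file are placeholders, not fixed values):

from datetime import datetime

from airflow import DAG
from airflow.providers.ssh.operators.ssh import SSHOperator

with DAG(
    dag_id="onprem_postgres_over_ssh",  # placeholder name
    start_date=datetime(2023, 1, 1),
    schedule_interval=None,
    catchup=False,
) as dag:
    # Runs a command on the remote host through the SSH connection above.
    check_postgres = SSHOperator(
        task_id="check_postgres",
        ssh_conn_id="ssh_onprem_postgres",
        command="pg_isready -h localhost -p 5432",
    )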
Check this question for more helpful details:
Airflow: How to SSH and run BashOperator from a different server
Goal: not have to configure AWS CLI credentials (~/.aws) every time I create a new devcontainer with Codespaces.
I know I can't bind mount ~/.aws like I could with a local devcontainer. Is there any other mechanism that will allow me to inherit AWS credentials (or GitHub CLI credentials, etc.) from my VSCode host machine?
I am working with NestJS to build an API. I created a serverless RDS Aurora PostgreSQL cluster to use as the database.
This is my Aurora (PostgreSQL) database instance (Connectivity and Security):
This is my database configuration:
This is my security group detail:
Then I try to connect using the endpoint, database, user, etc. via Prisma in NestJS:
generator client {
  provider = "prisma-client-js"
}

datasource db {
  provider = "postgresql"
  url      = "postgresql://postgres:password@med.cluster-cnonikf1pbgi.ap-southeast-1.rds.amazonaws.com:5432/Medi?schema=public&ssl=true"
}
But when I run this command:
npx prisma migrate dev --name init
I got an error like this:
Error: P1001: Can't reach database server at `med.cluster-cnonikf1pbgi.ap-southeast-1.rds.amazonaws.com`:`5432`
Please make sure your database server is running at `med.cluster-cnonikf1pbgi.ap-southeast-1.rds.amazonaws.com`:`5432`.
I was able to connect directly to my Aurora cluster without the need for a special gateway or EC2 instance. This worked for me:
1. Make sure you have "Public access" set to "Publicly accessible". You should see this option when creating the DB, but you can also modify it once the DB has been created: go to RDS -> Databases -> select a DB instance, not the cluster (the cluster does not seem to provide this option) -> click the "Modify" button in the top right -> scroll down to the "Connectivity" section -> expand it, and you'll see the option to change this setting.
2. Ensure the VPC security group that you have assigned to your DB grants external access to it. The same "Connectivity" section from step 1 also shows the VPC security group that your DB is using; take note of its name. You can view the details of your security group from the "VPC" service config page: VPC -> Security groups -> click on your security group -> examine the inbound rules -> if needed, create a new rule via "Edit inbound rules" -> "Add rule" (a scripted equivalent is sketched below). If you want to give access to just your IP, you can choose "My IP", which will prefill your current IP address.
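If you prefer to script step 2, here is a boto3 sketch; the security group ID and CIDR below are placeholders for your own values:

import boto3

# Open the PostgreSQL port to a single IP on the DB's security group.
ec2 = boto3.client("ec2")
ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",  # placeholder: your DB's security group
    IpPermissions=[
        {
            "IpProtocol": "tcp",
            "FromPort": 5432,
            "ToPort": 5432,
            "IpRanges": [{"CidrIp": "203.0.113.10/32", "Description": "my IP"}],
        }
    ],
)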
Some resources I found helpful:
Connecting from internet into VPC
Troubleshooting Connectivity
You cannot connect to a serverless Aurora cluster outside of the VPC it is running in. You tried to access the DB from your local machine, right?
For local development you must create an EC2 instance in the same VPC as the Aurora cluster, connect to the EC2 instance with SSH, and connect from there to the database. For your local database management tools you can also set up SSH port forwarding (see the sketch below).
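For the port-forwarding setup, here is a sketch using the third-party sshtunnel package; the EC2 host, user, and key path are placeholders (the cluster endpoint is the one from the question):

# Shell equivalent: ssh -i ~/.ssh/id_rsa -L 5432:<cluster-endpoint>:5432 ec2-user@<ec2-host>
import os

from sshtunnel import SSHTunnelForwarder

tunnel = SSHTunnelForwarder(
    ("ec2-public-hostname", 22),  # placeholder: your EC2 jump host
    ssh_username="ec2-user",
    ssh_pkey=os.path.expanduser("~/.ssh/id_rsa"),
    remote_bind_address=("med.cluster-cnonikf1pbgi.ap-southeast-1.rds.amazonaws.com", 5432),
    local_bind_address=("127.0.0.1", 5432),
)
tunnel.start()
# While the tunnel is up, point Prisma/psql at 127.0.0.1:5432 instead of the cluster.
tunnel.stop()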
I would like to build a Google Cloud PostgreSQL database using the instructions here
I was able to successfully create the Postgres databases with appropriate tables and views locally.
What do I need to do in order to get the data on Google Cloud PostgreSQL? My goal is to have remote access to this data.
You have two options. The first is to use the Cloud SQL Proxy, as described here. As the shared link says, the Cloud SQL Proxy provides secure access to your instances without the need for Authorized networks or for configuring SSL.
The second option is to configure access to your instance under Authorized networks, with or without SSL; see the connection sketch below. The complete steps are listed here.
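For that second option, once your client IP is authorized, a direct connection looks like this psycopg2 sketch; the IP, credentials, and certificate paths are placeholders:

import psycopg2

conn = psycopg2.connect(
    host="34.123.45.67",       # placeholder: the instance's public IP
    port=5432,
    dbname="postgres",
    user="postgres",
    password="secret",
    sslmode="verify-ca",       # drop the ssl* arguments if you skipped SSL
    sslrootcert="server-ca.pem",
    sslcert="client-cert.pem",
    sslkey="client-key.pem",
)
conn.close()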
You can connect to Cloud SQL from a local test environment using the Cloud SQL Proxy. See quickstart-proxy-test.
The workflow is:
Your application (running locally) => Cloud SQL Proxy (running locally) => remote GCP Cloud SQL service
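A sketch of that workflow; the instance connection name and credentials are placeholders:

# Step 1 (shell): start the proxy locally, e.g.
#   ./cloud_sql_proxy -instances=my-project:us-central1:my-instance=tcp:5432
# Step 2: the application connects to the proxy on localhost.
import psycopg2

conn = psycopg2.connect(
    host="127.0.0.1",  # the local proxy, not the remote instance
    port=5432,
    dbname="postgres",
    user="postgres",
    password="secret",
)
with conn.cursor() as cur:
    cur.execute("SELECT version()")
    print(cur.fetchone()[0])
conn.close()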
I need to create a release pipeline using multi-configuration that runs steps on multiple servers over SSH (each server should be a value in the multi-configuration).
The SSH service connection parameter of the task uses a variable, which is multi-configured with the names of the service connections.
When running the release jobs, the SSH task fails with "Error: Endpoint auth data not present: 7dfbca54-6025-4265-866c-9abd76b02e81,7b595350-166f-4e45-996c-795793315182"
If my multi-configuration only has one value, it works.
From the error message, it looks like the multi-configuration variable is not being split, although the service connection IDs are detected and substituted into the variable.
Is this a bug or am I doing something wrong?
config of multi-configuration
multi-configuration variable
SSH task using multi-configuration variable
list of SSH service connection
I can reproduce your issue. I set up a variable called ServerName with hughl-api20s,hughl-api21s as the value. When I use $(ServerName) in the SSH service connection field of the SSH task, I also get Error: Endpoint auth data not present. I think this could be a bug.
As a workaround, you can run two agent jobs and select a specific service connection in each agent job.
You can report this issue to the product team in the Developer Community forum.
I have an SSH keypair: the private key lives on my local Mac, the public key lives on several AWS cloud machines.
From my Mac, I can SSH to a cloud instance; call it the "deploy server". From there, I need to deploy my application to several instances (I cannot deploy locally).
I authenticate to the other instances with my private key. I can do this either by leaving my private key on the deploy server (insecure) or by SSH agent forwarding (probably not much better).
Moreover, the deploy takes a while, so I do it in a GNU screen or tmux session; then I detach and end the SSH session with the deploy server, meaning I cannot use SSH agent forwarding (as I believe it requires the SSH connection to remain open).
What other options are available to me?
You can use a deploy key. That is a server-specific key that has read-only access to the repository.
To use this, you need to:
Generate a private key for the server (run ssh-keygen on the server; a scripted alternative is sketched after these steps)
Set it on the GitHub repo as a deploy key (https://github.com/<user>/<repo>/settings/keys). That will grant read-only permission to the repo; there is a checkbox if you also need write access to it.
Read more in this GitHub help guide. There you can see more methods for deploying from a server that accesses a repository.
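If you would rather script the key generation than run ssh-keygen by hand, here is a sketch using the cryptography package (file names are placeholders); paste the printed public key into the repo's deploy-key settings:

from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Equivalent to: ssh-keygen -t ed25519 -f deploy_key -N ""
key = Ed25519PrivateKey.generate()

with open("deploy_key", "wb") as f:
    f.write(key.private_bytes(
        encoding=serialization.Encoding.PEM,
        format=serialization.PrivateFormat.OpenSSH,
        encryption_algorithm=serialization.NoEncryption(),
    ))

public_line = key.public_key().public_bytes(
    encoding=serialization.Encoding.OpenSSH,
    format=serialization.PublicFormat.OpenSSH,
)
print(public_line.decode())  # add this line as the deploy key on GitHub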