How can I find the right inbound rule for my GitHub Action to deploy to my AWS EC2 server? - github

I just created the action for my project and configured everything there, but unfortunately I'm getting this message in the 'deploy file' step: ssh: connect to host ec2-MYIP.us-east-2.compute.amazonaws.com port 22: Operation timed out
The good news is that I know what's happening. I have to allow the following inbound rule:
Type: SSH / Protocol: TCP / Port range: 22 / Source: ::/0
It works fine when I don't limit the source IP, but obviously I don't want to leave it open like that for security reasons, so I need to find out which source to put there.
I've already tried a lot of GitHub IP addresses, but none of them worked.
Does anyone know the right source to make this work in a protected way, or how I can find it?
Action I am using: https://github.com/wlixcc/SFTP-Deploy-Action

The IP addresses of GitHub-hosted runners are documented here: https://docs.github.com/en/free-pro-team@latest/actions/reference/specifications-for-github-hosted-runners#ip-addresses
Windows and Ubuntu runners are hosted in Azure and have the same IP address ranges as Azure Data centers.
[...]
Microsoft updates the Azure IP address ranges weekly in a JSON file that you can download from the Azure IP Ranges and Service Tags - Public Cloud website. You can use this range of IP addresses if you require an allow-list to prevent unauthorized access to your internal resources.
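If you do go the allow-list route, that Azure service-tags file can be filtered down to just the ranges you care about. A minimal sketch, assuming the `values[].properties.addressPrefixes` layout of the published JSON and using a tiny inline stand-in for the real (multi-megabyte) download:

```shell
# Stand-in for the downloaded Azure service-tags JSON.
cat > service_tags.json <<'JSON'
{"values": [{"name": "AzureCloud.useast2",
             "properties": {"addressPrefixes": ["13.68.0.0/17", "20.36.192.0/18"]}}]}
JSON

# Print the CIDR ranges for one region's service tag.
python3 - <<'PY'
import json
tags = json.load(open("service_tags.json"))
for entry in tags["values"]:
    if entry["name"] == "AzureCloud.useast2":
        print("\n".join(entry["properties"]["addressPrefixes"]))
PY
```

GitHub also exposes its own IP metadata, including an `actions` key, at the REST endpoint https://api.github.com/meta, which can be filtered the same way.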

An improved answer over riQQ's: dynamically retrieve the GitHub Actions runner's IP address during your workflow using the public-ip action, and update your EC2 server's security group ingress rules before and after your SSH steps.
That way your EC2 instance's SSH port is never exposed to the public internet.
Note: You will also need to set AWS credentials on your runner with permissions to update the associated EC2 security group.
Your workflow should look something like this:
deploy:
  name: deploy
  runs-on: ubuntu-latest
  env:
    AWS_INSTANCE_SG_ID: <your-ec2-security-group-id>
  steps:
    - name: configure aws credentials
      uses: aws-actions/configure-aws-credentials@v1
      with:
        aws-access-key-id: <your-ci-aws-access-key>
        aws-secret-access-key: <your-ci-aws-secret-key>
        aws-region: <your-ec2-aws-region>
    - name: get runner ip address
      id: ip
      uses: haythem/public-ip@v1.2
    - name: whitelist runner ip address
      run: |
        aws ec2 authorize-security-group-ingress \
          --group-id $AWS_INSTANCE_SG_ID \
          --protocol tcp \
          --port 22 \
          --cidr ${{ steps.ip.outputs.ipv4 }}/32
    - name: ssh into your ec2 and do whatever
      run: |
        ...do whatever you need to do...
    - name: revoke runner ip address
      run: |
        aws ec2 revoke-security-group-ingress \
          --group-id $AWS_INSTANCE_SG_ID \
          --protocol tcp \
          --port 22 \
          --cidr ${{ steps.ip.outputs.ipv4 }}/32
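One caveat worth making explicit: by default, later steps are skipped when an earlier step fails, so a failed SSH step would leave the temporary ingress rule in place. Marking the revoke step with `if: always()` closes that gap (a sketch, reusing the revoke step exactly as above):

```yaml
    - name: revoke runner ip address
      if: always()   # run even if the deploy step failed
      run: |
        aws ec2 revoke-security-group-ingress \
          --group-id $AWS_INSTANCE_SG_ID \
          --protocol tcp \
          --port 22 \
          --cidr ${{ steps.ip.outputs.ipv4 }}/32
```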

Related

Connecting to a service running directly on a GitHub Actions Runner

I want to create a GitHub Actions workflow to automatically test web-based software. I need to manually log in with my local web browser for verification. How do I expose the runner's public IP and the correct port (let's say 3000), so that I can connect directly to the service from my local machine and verify some functions in the browser?
Here is a conceptually related workflow that is similar enough to my situation:
name: Start NPM Server
on:
  push:
    branches: [ main ]
jobs:
  start-server:
    runs-on: ubuntu-latest
    steps:
      - name: Check IP
        run: curl https://api.ipify.org
      - name: Checkout code
        uses: actions/checkout@v2
      - name: Install dependencies
        run: npm install
      - name: Start server
        run: |
          npm start
          sleep 10000
I have found the external IP, but I haven't found a way to actually create the connection and access it from my web browser. Is it even feasible to open port 3000, connect to the external IP from the "Check IP" step, and do that verification manually?

GitHub Actions secret of GCP service account not parsed correctly

I have created a GCP service account having the roles/storageAdmin role.
I have tested it locally as follows:
$ gcloud auth activate-service-account --key-file=myfile.json
$ gcloud auth configure-docker
$ docker push gcr.io/my-project-id/echoserver:1.0.1
I then create a repo-level secret with the contents of this file named GCR_SECRET and run the following action
- name: build and push to staging gcr
  id: stg_img_build
  uses: RafikFarhad/push-to-gcr-github-action@v4
  with:
    gcloud_service_key: ${{ secrets.GCR_SECRET }}
    registry: gcr.io
    project_id: $STAGING_GCR_PROJECT
    image_name: ${{ github.event.inputs.image_name }}
    image_tag: ${{ github.event.inputs.image_tag }}
This fails as follows:
Error response from daemon: Get "https://gcr.io/v2/": unknown: Unable to parse json key.
What could be causing this?
I encourage you to consider Workload Identity Federation as this will enable you to federate auth using a Google Service Account to GitHub Actions.
See Enabling keyless auth from GitHub Actions.
If you want to use RafikFarhad/push-to-gcr-github-action, note the requirement to base64-encode the key before storing it as the repository secret.
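A minimal sketch of that encoding step, assuming GNU coreutils' base64 (`-w 0` disables line wrapping so the secret stays on one line); a stand-in key file is created here in place of the real myfile.json:

```shell
# Create a stand-in for the real service-account key file.
printf '%s' '{"type":"service_account"}' > myfile.json
# Encode without line wrapping; paste the result into the GCR_SECRET repo secret.
base64 -w 0 myfile.json > myfile.json.b64
# Sanity check: decoding round-trips to the original JSON.
base64 -d myfile.json.b64
```

On macOS, the equivalent is `base64 -i myfile.json` (BSD base64 does not wrap by default).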

Is it possible to use a static IP when using GitHub Actions?

I'm using GitHub Actions as my project's CI, and I want to run unit tests when GitHub Actions builds the project. The tests need a database, and my database has an allow list: only IPs on the list can connect to it. When I run the unit tests in GitHub Actions I don't know the runner's IP address. Is it possible to use a static IP, or is there another way to solve this? I don't want any IP to be able to connect to my database, since that would be a security problem. Any suggestions?
This is currently only possible with a self-hosted runner on a VM whose IP address you control.
See also:
About self-hosted runners.
Alternatively, your GitHub action workflow may be able to adjust the firewall settings as part of the run.
Or you could use something like SQL Server LocalDB or SQLite to run the database locally on the runner. Or spin up a temporary DB in a cloud environment, open it up to the runner, and throw it away afterwards.
Or you could use a VPN client to connect the Actions runner to your environment. You can install anything you want on the runner.
You can dynamically retrieve the GitHub Actions runner's IP address during your workflow using the public-ip action and update your RDS instance's security group ingress rules before and after your unit test steps.
This will allow you to use GitHub's hosted runners with your workflow instead of hosting your own.
Note: You will need to also set AWS credentials on your runner with permissions to update the associated security group. Also, you need to make sure the RDS instance is in a public subnet with an Internet Gateway attached and security group attached to it.
Your workflow should look something like this:
deploy:
  name: deploy
  runs-on: ubuntu-latest
  env:
    AWS_INSTANCE_SG_ID: <your-rds-subnet-sg-id>
  steps:
    - name: configure aws credentials
      uses: aws-actions/configure-aws-credentials@v1
      with:
        aws-access-key-id: <your-ci-aws-access-key>
        aws-secret-access-key: <your-ci-aws-secret-key>
        aws-region: <your-rds-aws-region>
    - name: get runner ip address
      id: ip
      uses: haythem/public-ip@v1.2
    - name: whitelist runner ip address
      run: |
        # open the database port (5432 for PostgreSQL, 3306 for MySQL), not 22
        aws ec2 authorize-security-group-ingress \
          --group-id $AWS_INSTANCE_SG_ID \
          --protocol tcp \
          --port 5432 \
          --cidr ${{ steps.ip.outputs.ipv4 }}/32
    - name: connect to your rds instance and run tests
      run: |
        ...run tests...
    - name: revoke runner ip address
      run: |
        aws ec2 revoke-security-group-ingress \
          --group-id $AWS_INSTANCE_SG_ID \
          --protocol tcp \
          --port 5432 \
          --cidr ${{ steps.ip.outputs.ipv4 }}/32
Ideally though you would run your integration tests in an EC2 within the same VPC as your RDS instance to avoid publicly exposing your RDS instance.
This is in beta (September 1, 2022), but it is possible to assign a static IP range to runners:
Fixed IP ranges to provide access to runners via allow list services
Setup a fixed IP range for your machines by simply ticking a check box, this provides an IP range that can be allow listed in internal systems and in GitHub’s allow list to keep using Actions while making your GitHub environment more secure.
If your database happens to be Redis or PostgreSQL, GitHub Actions includes a built-in feature called Service Containers to spin up an ephemeral database in CI for testing purposes.
These databases are short-lived: after your job that uses it completes, the service container hosting the database is destroyed. You can either run the database in a container or directly on the virtual machine if desired.
For more info, see Creating PostgreSQL service containers in the GitHub Actions docs.
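As a sketch of what that looks like for PostgreSQL (the image tag, credentials, and health-check options below are illustrative choices, following the pattern in that docs page):

```yaml
jobs:
  test:
    runs-on: ubuntu-latest
    services:
      postgres:
        image: postgres:13
        env:
          POSTGRES_PASSWORD: postgres
        ports:
          - 5432:5432
        options: >-
          --health-cmd pg_isready
          --health-interval 10s
          --health-timeout 5s
          --health-retries 5
    steps:
      - uses: actions/checkout@v2
      - name: run tests against the ephemeral database
        run: npm test
        env:
          DATABASE_URL: postgres://postgres:postgres@localhost:5432/postgres
```

The job's steps can then reach the database at localhost:5432 for the lifetime of the job.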
If you happen to be using another database, you can do some more manual legwork to install and run it yourself on the runner.

Azure Devops MS-hosted agent IP address

We use Azure DevOps with Microsoft-hosted agents, and because we would like to apply authorized IP ranges to our AKS cluster, we need the agents' IP addresses.
To automate the process in our release pipeline, we have included an Azure CLI task with these commands:
AGENT_IP=$(curl -s https://ipinfo.io/json | jq -r .ip)
az aks update --resource-group xxx --name yyy --api-server-authorized-ip-ranges ${AGENT_IP}
None of the AGENT_IP values we get from the command are listed in the weekly JSON file.
Even though the operation executes successfully and the AGENT_IP is included in the "apiServerAccessProfile.authorizedIpRanges" section, sometimes we are not able to deploy our microservice to AKS and get the error: "Unable to connect to the server: dial tcp xx.xx.xx.xx:443: i/o timeout".
However, sometimes the deployment succeeds, even though the AGENT_IP is not listed in the weekly JSON.
Why are the IP addresses we get not in the weekly JSON file?
And why does deployment to AKS succeed only intermittently?
Please read these docs:
Allowed address lists and network connections
Agent IP ranges
I got the agent's IP address using this script:
Invoke-RestMethod -Uri ('http://ipinfo.io/'+(Invoke-WebRequest -uri "http://ifconfig.me/ip").Content)
For build pipelines I got IP addresses that were outside every IP range in the weekly file for AzureCloud.westeurope (in my case 168.63.69.117 and 137.135.240.152). However, for release pipelines I got IPs that are in the IP ranges from the weekly file:
52.157.67.128 - IP range 52.157.64.0/18
40.118.28.211 - IP range 40.118.0.0/17
But I noticed that the build agents are located in Ireland, which is the North Europe region. And yes, the IP addresses match IP ranges from North Europe:
137.135.240.152 - IP range 137.135.128.0/17
168.63.69.117 - IP range 168.63.64.0/20
I have no idea why it works like that, since I have the West Europe region in my settings.
But to sum up:
build pipelines - North Europe region
release pipelines - West Europe region
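The subnet matching above is easy to double-check mechanically. A small sketch using python3's ipaddress module for the CIDR math, with the IPs and ranges taken from this answer:

```shell
# Exit 0 if the IP ($1) falls inside the CIDR range ($2), non-zero otherwise.
in_range() {
  python3 -c "import ipaddress, sys; sys.exit(0 if ipaddress.ip_address('$1') in ipaddress.ip_network('$2') else 1)"
}

# Both build-agent IPs fall in North Europe ranges from the weekly file.
in_range 168.63.69.117 168.63.64.0/20 && echo "168.63.69.117 is in 168.63.64.0/20"
in_range 137.135.240.152 137.135.128.0/17 && echo "137.135.240.152 is in 137.135.128.0/17"
```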

Can't connect to iot.west.amazonaws.com

I am trying to set up AWS IoT on my Raspberry Pi, and when I try to describe my certificate it says it couldn't reach the endpoint URL iot.west.amazonaws.com. I can ping amazonaws.com, which is the actual domain. I've tried changing my DNS to Google's public one (8.8.8.8), but still no luck. Am I doing something wrong?
The official AWS Regions and Endpoints for AWS IoT are:
Region Endpoint
us-east-2 iot.us-east-2.amazonaws.com
us-east-1 iot.us-east-1.amazonaws.com
us-west-2 iot.us-west-2.amazonaws.com
ap-southeast-1 iot.ap-southeast-1.amazonaws.com
ap-southeast-2 iot.ap-southeast-2.amazonaws.com
ap-northeast-1 iot.ap-northeast-1.amazonaws.com
ap-northeast-2 iot.ap-northeast-2.amazonaws.com
eu-central-1 iot.eu-central-1.amazonaws.com
eu-west-1 iot.eu-west-1.amazonaws.com
eu-west-2 iot.eu-west-2.amazonaws.com
cn-north-1 iot.cn-north-1.amazonaws.com
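Note that "west" on its own does not appear anywhere in this table: it is not an AWS region, which is why iot.west.amazonaws.com cannot be reached. A sketch of building a valid endpoint from a full region name (us-west-2 is an assumed example; substitute the region your things are registered in):

```shell
# "west" alone is not a region; a full region name is required.
region="us-west-2"                      # replace with your actual region
endpoint="iot.${region}.amazonaws.com"
echo "$endpoint"                        # → iot.us-west-2.amazonaws.com
```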