I am trying to set up AWS IoT on my Raspberry Pi, and when I try to describe my certificate it says it couldn't reach the endpoint URL iot.west.amazonaws.com. I can ping amazonaws.com, which is the actual domain. I've tried changing my DNS to Google's public one (8.8.8.8), but still no luck. Am I doing something wrong?
There is no endpoint called iot.west.amazonaws.com; AWS IoT endpoints follow the pattern iot.<region>.amazonaws.com, so your CLI region has to be a full region name such as us-west-2. The official AWS Regions and Endpoints for AWS IoT are:
Region Endpoint
us-east-2 iot.us-east-2.amazonaws.com
us-east-1 iot.us-east-1.amazonaws.com
us-west-2 iot.us-west-2.amazonaws.com
ap-southeast-1 iot.ap-southeast-1.amazonaws.com
ap-southeast-2 iot.ap-southeast-2.amazonaws.com
ap-northeast-1 iot.ap-northeast-1.amazonaws.com
ap-northeast-2 iot.ap-northeast-2.amazonaws.com
eu-central-1 iot.eu-central-1.amazonaws.com
eu-west-1 iot.eu-west-1.amazonaws.com
eu-west-2 iot.eu-west-2.amazonaws.com
cn-north-1 iot.cn-north-1.amazonaws.com
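As a quick check, you could point the CLI at one of the regions above; us-west-2 is used here purely as an example, and the certificate ID is a placeholder:
# Set a valid default region for the CLI (example: us-west-2)
aws configure set region us-west-2
# Now the call resolves against iot.us-west-2.amazonaws.com
aws iot describe-certificate --certificate-id <your-certificate-id>
# Or override the region for a single call
aws iot describe-certificate --certificate-id <your-certificate-id> --region us-west-2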
I am trying to create an endpoint for an API to be deployed into an existing GKE cluster, following the instructions in Getting started with Cloud Endpoints for GKE with ESPv2.
I cloned the sample code in the repo and modified the content of openapi.yaml:
# [START swagger]
swagger: "2.0"
info:
  description: "A simple Google Cloud Endpoints API example."
  title: "Endpoints Example"
  version: "1.0.0"
host: "my-api.endpoints.my-project.cloud.goog"
I then deployed it via the command:
endpoints/getting-started (master) $ gcloud endpoints services deploy openapi.yaml
Now I can see that it has been created:
$ gcloud endpoints services list
NAME TITLE
my-api.endpoints.my-project.cloud.goog
I also have a PostgreSQL service account:
$ gcloud iam service-accounts list
DISPLAY NAME EMAIL DISABLED
my-postgresql-service-account my-postgresql-service-acco@my-project.iam.gserviceaccount.com False
In the Endpoint Service Configuration section of the documentation, it says to add the role to the service account attached to the endpoint service as follows, but I get this error:
$ gcloud endpoints services add-iam-policy-binding my-api.endpoints.my-project.cloud.goog \
  --member serviceAccount:my-postgresql-service-acco@my-project.iam.gserviceaccount.com \
  --role roles/servicemanagement.serviceController
ERROR: (gcloud.endpoints.services.add-iam-policy-binding) User [myusername@mycompany.com] does not have permission to access services instance [my-api.endpoints.my-project.cloud.goog:getIamPolicy] (or it may not exist): No access to resource: services/my-api.my-project.cloud.goog
The previous output shows the service exists, I guess. Now I am not sure how to resolve this. What permissions do I need? Who can grant me permission, and what permissions do they need to have? How can I check? Is there any other solution?
The issue got resolved after I was assigned the "Project_Admin" role. That was not ideal, as it gave me far more permissions than needed. I also tried the role "roles/endpoints.portalAdmin", but it did not help.
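If a narrower grant is preferred, someone with project owner rights could try giving just the Service Management Administrator role, which includes the getIamPolicy/setIamPolicy permissions the failing command needs. This is only a sketch, and the account email is a placeholder:
# Grant the narrower Service Management Administrator role instead of a
# project-wide admin role (run by a project owner; email is a placeholder)
gcloud projects add-iam-policy-binding my-project \
  --member user:myusername@mycompany.com \
  --role roles/servicemanagement.admin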
I am using GitHub Actions as my project's CI, and I want to run some unit tests when GitHub Actions builds my project. The unit tests need to use a database, but my database has an IP whitelist and only IPs on that list can connect to it. When I run the unit tests in GitHub Actions, I do not know the GitHub Actions IP address. Is it possible to use a static IP, or is there any other way to solve the problem? I do not want any IP to be able to connect to my database, since that would be a security problem. Any suggestions?
This is currently only possible with a self-hosted runner on a VM you can control the IP address of.
See also:
About self-hosted runners.
Alternatively, your GitHub Actions workflow may be able to adjust the firewall settings as part of the run.
Or you could use something like SQL Server LocalDB or SQLite to run the database locally on the runner. Or spin up a temporary DB in a cloud environment, open it up to the runner, and throw it away afterwards.
Or you could use a VPN client to connect the Actions runner to your environment. You can install anything you want on the runner.
You can dynamically retrieve the GitHub Actions runner's IP address during your workflow using the public-ip action and update your RDS instance's security group ingress rules before and after your unit test steps.
This will allow you to use GitHub's hosted runners with your workflow instead of hosting your own.
Note: You will also need to set AWS credentials on your runner with permissions to update the associated security group. You also need to make sure the RDS instance is in a public subnet, with an Internet Gateway attached and a security group associated with it.
Your workflow should look something like this:
deploy:
  name: deploy
  runs-on: ubuntu-latest
  env:
    AWS_INSTANCE_SG_ID: <your-rds-subnet-sg-id>
  steps:
    - name: configure aws credentials
      uses: aws-actions/configure-aws-credentials@v1
      with:
        aws-access-key-id: <your-ci-aws-access-key>
        aws-secret-access-key: <your-ci-aws-secret-key>
        aws-region: <your-rds-aws-region>
    - name: get runner ip addresses
      id: ip
      uses: haythem/public-ip@v1.2
    - name: whitelist runner ip address
      run: |
        # open your database port (5432 for PostgreSQL, 3306 for MySQL, etc.)
        aws ec2 authorize-security-group-ingress \
          --group-id $AWS_INSTANCE_SG_ID \
          --protocol tcp \
          --port 5432 \
          --cidr ${{ steps.ip.outputs.ipv4 }}/32
    - name: connect to your rds instance and run tests
      run: |
        ...run tests...
    - name: revoke runner ip address
      run: |
        aws ec2 revoke-security-group-ingress \
          --group-id $AWS_INSTANCE_SG_ID \
          --protocol tcp \
          --port 5432 \
          --cidr ${{ steps.ip.outputs.ipv4 }}/32
Ideally, though, you would run your integration tests on an EC2 instance within the same VPC as your RDS instance, to avoid publicly exposing your RDS instance.
This is in beta (as of September 1, 2022), but it is possible to assign a fixed IP range to runners:
Fixed IP ranges to provide access to runners via allow list services
Setup a fixed IP range for your machines by simply ticking a check box, this provides an IP range that can be allow listed in internal systems and in GitHub’s allow list to keep using Actions while making your GitHub environment more secure.
More details here
If your database happens to be Redis or PostgreSQL, GitHub Actions includes a built-in feature called Service Containers to spin up an ephemeral database in CI for testing purposes.
These databases are short-lived: after your job that uses it completes, the service container hosting the database is destroyed. You can either run the database in a container or directly on the virtual machine if desired.
For more info, see Creating PostgreSQL service containers in the GitHub Actions docs.
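As a rough sketch, a job with a throwaway PostgreSQL service container might look like this (the image tag, password, and test command are placeholders to adapt):
jobs:
  test:
    runs-on: ubuntu-latest
    services:
      postgres:
        image: postgres:14
        env:
          POSTGRES_PASSWORD: postgres
        ports:
          - 5432:5432
        options: >-
          --health-cmd pg_isready
          --health-interval 10s
          --health-timeout 5s
          --health-retries 5
    steps:
      - uses: actions/checkout@v3
      - name: run unit tests against the throwaway database
        env:
          DATABASE_URL: postgresql://postgres:postgres@localhost:5432/postgres
        run: |
          ...run tests...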
If you happen to be using another database, you can do some more manual legwork to install and run it yourself.
I just created the action on my project and configured everything over there, but unfortunately I'm getting a message like this in the 'deploy file' step: ssh: connect to host ec2-MYIP.us-east-2.compute.amazonaws.com port 22: Operation timed out
The good thing is that I know what's happening. I have to allow the following as an inbound rule:
Type: SSH / Protocol: TCP / Port range: 22 / Source: ::/0
As you can see, it works fine without limiting the source IP.
But obviously I don't want to do that for security reasons, so I need to find out the source I need to put there.
I've tried a lot of Github IP addresses already, but all of them were unsuccessful.
Does anyone here know what's the right source for it to work in a protected way or how can I find it?
Action I am using: https://github.com/wlixcc/SFTP-Deploy-Action
The IP addresses of GitHub-hosted runners are documented here: https://docs.github.com/en/free-pro-team@latest/actions/reference/specifications-for-github-hosted-runners#ip-addresses
Windows and Ubuntu runners are hosted in Azure and have the same IP address ranges as Azure Data centers.
[...]
Microsoft updates the Azure IP address ranges weekly in a JSON file that you can download from the Azure IP Ranges and Service Tags - Public Cloud website. You can use this range of IP addresses if you require an allow-list to prevent unauthorized access to your internal resources.
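If you want to automate pulling the runner ranges, GitHub also publishes the Actions IP ranges through its meta API; a quick sketch, assuming jq is installed on the machine running it:
# Fetch the current IP ranges used by GitHub-hosted Actions runners
curl -s https://api.github.com/meta | jq -r '.actions[]'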
An improved answer over riQQ's: dynamically retrieve the GitHub Actions runner's IP address during your workflow using the public-ip action and update your EC2 server's security group ingress rules before and after your SSH steps.
Your EC2 instance will never be exposed to public IP addresses on your SSH port.
Note: You will need to also set AWS credentials on your runner with permissions to update the associated EC2 security group.
Your workflow should look something like this:
deploy:
  name: deploy
  runs-on: ubuntu-latest
  env:
    AWS_INSTANCE_SG_ID: <your-ec2-security-group-id>
  steps:
    - name: configure aws credentials
      uses: aws-actions/configure-aws-credentials@v1
      with:
        aws-access-key-id: <your-ci-aws-access-key>
        aws-secret-access-key: <your-ci-aws-secret-key>
        aws-region: <your-ec2-aws-region>
    - name: get runner ip address
      id: ip
      uses: haythem/public-ip@v1.2
    - name: whitelist runner ip address
      run: |
        aws ec2 authorize-security-group-ingress \
          --group-id $AWS_INSTANCE_SG_ID \
          --protocol tcp \
          --port 22 \
          --cidr ${{ steps.ip.outputs.ipv4 }}/32
    - name: ssh into your ec2 and do whatever
      run: |
        ...do whatever you need to do...
    - name: revoke runner ip address
      run: |
        aws ec2 revoke-security-group-ingress \
          --group-id $AWS_INSTANCE_SG_ID \
          --protocol tcp \
          --port 22 \
          --cidr ${{ steps.ip.outputs.ipv4 }}/32
Will Kubernetes service connections in Azure DevOps work with an AKS cluster that is bound to AAD via OpenID Connect? Logging into such clusters goes through an OpenID Connect flow that involves a device login and a browser. How is this possible with Azure DevOps Kubernetes service connections?
Will Kubernetes service connections in Azure DevOps work with an AKS cluster that is bound to AAD via OpenID Connect?
Unfortunately, no, this is not supported at the moment.
According to your description, what you want to connect to with the Azure DevOps Kubernetes service connection is Azure Kubernetes Service. This means you would select Azure Subscription under Choose authentication. BUT, this connection method uses Service Principal Authentication (SPA), which is not yet supported for an AKS cluster that is bound to AAD auth.
If you connect to your AKS cluster as part of your CI/CD deployment in Azure DevOps and attempt to get the cluster credentials, you will get a warning response which informs you to log in, since the service principal cannot handle this:
WARNING: To sign in, use a web browser to open the page https://microsoft.com/devicelogin and enter the code *** to authenticate.
You should be familiar with this message: it requires you to open a browser and log in to complete the device code authentication manually. But this cannot be achieved in Azure DevOps.
There is a feature request raised on our forum asking us to expand this feature to support non-interactive login for AAD-integrated clusters. You can vote and comment there to raise the priority of this suggestion ticket, so that it can be considered for the development plan by our Product Manager as soon as possible.
Though it cannot be achieved directly, there are two workarounds you can refer to for now.
The first workaround is to change what Azure DevOps authenticates with, from the AAD client credentials to the cluster admin credentials.
Use the az aks get-credentials command and specify the --admin parameter. This bypasses the Azure AD auth, since it lets you connect with the admin credentials, which work without Azure AD.
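For example (the resource group and cluster name are placeholders):
# Retrieve the cluster admin credentials, bypassing AAD
az aks get-credentials --resource-group <your-resource-group> --name <your-aks-cluster> --admin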
But I do not recommend this method, because it ignores the authentication rules set in AAD for security. If you want a quick way to achieve what you want and are not too worried about security, you can try this.
The second one is to use Kubernetes service accounts.
You can follow this doc to create a service account; a rough sketch is shown below. Then, in Azure DevOps, we can use this service account to communicate with the AKS API. Here you also need to take the authorized IP address ranges in AKS into account.
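A minimal sketch, assuming a dedicated service-accounts namespace and a cluster-admin binding (the account name is only an example; scope the role down for production):
# Create a namespace and a service account for Azure DevOps to use
kubectl create namespace service-accounts
kubectl create serviceaccount azure-devops-deploy -n service-accounts
# Bind it to a role; cluster-admin is used here only for illustration
kubectl create clusterrolebinding azure-devops-deploy-binding \
  --clusterrole=cluster-admin \
  --serviceaccount=service-accounts:azure-devops-deploy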
After the service account is created successfully, choose Service account in the Azure DevOps service connection:
Server URL: Get it from the AKS instance (API server address) in the Azure portal, and do not forget to prepend https:// when you enter it into this service connection.
Secret: Get it by using the command:
kubectl get secret <name-of-secret> -o yaml -n service-accounts
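If you only need the decoded token out of that secret, something like this should work (the secret name is again a placeholder):
# Extract and decode the service account token from the secret
kubectl get secret <name-of-secret> -n service-accounts -o jsonpath='{.data.token}' | base64 --decode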
See this doc: Deploy Vault on Azure Kubernetes Service (AKS).
Then you can use this service connection in Azure DevOps tasks.
I've installed a Kubernetes cluster (using Google's Container Engine) and noticed a service listening on port 443 on the master server. I tried to access it, but it requires a username and password, so any idea what these credentials are?
You can read the cluster config using kubectl. This will contain the username and password for the UI.
kubectl config view
As of April 29 use:
gcloud container clusters describe [clustername]
This will give you some YAML (see here) that also contains the username and password.
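If you just want those two values on older clusters where basic auth is still enabled, a filtered describe call should work (cluster name and zone are placeholders):
# Print only the basic-auth username and password from the cluster description
gcloud container clusters describe <clustername> --zone <zone> \
  --format='value(masterAuth.username, masterAuth.password)'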
The user/password are stored in the API.
If you do:
gcloud preview container --zone <zone> clusters list
You should be able to see the user name and password for your cluster.
Note that the HTTPS cert it uses is currently signed by an internal CA (stored in your home directory), so for a web browser you will need to manually accept the certificate. We're working on making this cleaner.
You can also type
$ kubectl proxy
which will serve up the UI at http://localhost:8001/ui
Use the command below to find the Kubernetes auto-generated password.
$ kubectl config view --minify