Kubernetes: only allow pulls from a specific private registry

I have a private registry and I want to allow worker nodes (running on Azure Kubernetes Service) to be able to pull images only from this registry.
Is there a way to allow worker nodes to only pull images from a specific private registry?
I would be surprised if the only way to achieve that is through complex firewall rules.

As far as I know, Kubernetes does not have the feature you are referring to.
You can read about Pull an Image from a Private Registry, which describes how to create a secret that holds the authorization token and how to use it.
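As a hedged sketch of that approach, a minimal pod spec that pulls from the private registry via such a secret might look like this (it assumes a docker-registry secret named regcred was already created with kubectl create secret docker-registry; the pod and image names are placeholders):

apiVersion: v1
kind: Pod
metadata:
  name: private-reg-pod
spec:
  containers:
  - name: app
    image: docker.mycompany.net/app:1.0   # image hosted in the private registry
  imagePullSecrets:
  - name: regcred                         # secret holding the registry credentials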
On the other hand, I was able to find something in Docker called Content trust.
Content trust allows operations with a remote Docker registry to enforce client-side signing and verification of image tags. Content trust provides the ability to use digital signatures for data sent to and received from remote Docker registries. These signatures allow client-side verification of the integrity and publisher of specific image tags.
Currently, content trust is disabled by default. To enable it, set the DOCKER_CONTENT_TRUST environment variable to 1.
A link to the documentation is available here; you can also read about it in the Docker blog post A secure supply chain for Kubernetes, Part 2.
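As a hedged illustration (the image name is a placeholder), with content trust enabled the Docker client refuses to pull or push unsigned tags:

export DOCKER_CONTENT_TRUST=1
docker pull docker.mycompany.net/app:1.0   # fails unless this tag has signed trust data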

If you have a private registry, the tags of the images inside it must be prefixed with the registry hostname. If no hostname is present, the commands default to Docker's public registry at registry-1.docker.io. For example, if your registry hostname is docker.mycompany.net, the image tag should be docker.mycompany.net/{image-name}:{image-version}. Documentation on docker tag can be found here.
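For example, retagging a local image for the private registry and pushing it would look like this (using the same placeholders):

docker tag {image-name}:{image-version} docker.mycompany.net/{image-name}:{image-version}
docker push docker.mycompany.net/{image-name}:{image-version}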
So you just have to use the full image tag, with the hostname prefix, in your container spec. For the scenario above it would look like this:
containers:
- name: my-container
  image: docker.mycompany.net/{image-name}:{image-version}
  ports:
  - containerPort: 80


Kubernetes and Wildcard DNS records

Is it possible to set up my ingress with a wildcard so I can fetch images with repositoryname.hostname, where repositoryname is the wildcard?
I have Artifactory running on my k8s cluster. To download a remote repository in Artifactory 6.x one has to type the following:
http://<host>:<port>/artifactory/<remote-repository-name>/<artifact-path>
Is there any way I can prefix <remote-repository-name> behind <host> so that the address would be:
http://<remote-repository-name>.<host>:<port>/artifactory/<artifact-path>
or
http://<artifact-path><remote-repository-name>.<host>:<port>/artifactory
I searched everywhere in the Artifactory docs for some kind of wildcard DNS record (or subdomain). The only thing I could find was something about Docker and reverse proxies; at my work, though, we'd like to be able to pull any kind of repository this way.
Artifactory supports a sub-domain access method, which can be set up under Admin > Artifactory > HTTP Settings. You can find more on this here.
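On the Kubernetes ingress side of the question, a hedged sketch of a wildcard rule that routes every <repository-name>.example.com host to Artifactory might look like this (it assumes an ingress controller that supports wildcard hosts, a wildcard DNS record, and a matching wildcard TLS certificate; the hostname, service name, and port are placeholders):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: artifactory-wildcard
spec:
  ingressClassName: nginx
  rules:
  - host: "*.example.com"        # each <repository-name>.example.com matches this rule
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: artifactory    # placeholder service name
            port:
              number: 8081       # placeholder port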

Is it possible to use a static IP when using GitHub Actions

I am using GitHub Actions as my project's CI, and I want to run some unit tests when GitHub Actions builds the project. The unit tests need a database, but my database has a whitelist, and only IPs on the whitelist can connect to it. When I run the unit tests in GitHub Actions, I don't know the runner's IP address. Is it possible to use a static IP, or is there some other way to solve this? I don't want to let any IP connect to my database, since that would be a security problem. Any suggestions?
This is currently only possible with a self-hosted runner on a VM you can control the IP address of.
See also:
About self-hosted runners.
Alternatively, your GitHub action workflow may be able to adjust the firewall settings as part of the run.
Or you could use something like SQL Server LocalDB or SQLite to run the database locally on the runner. Or spin up a temporary DB in a cloud environment, open it up to the runner, and throw it away afterwards.
Or you could use a VPN client to connect the actions runner to your environment. You can install anything you want on the runner.
You can dynamically retrieve the GitHub Actions runner's IP address during your workflow using the public-ip action and update your RDS instance's security group ingress rules before and after your unit test steps.
This will allow you to use GitHub's hosted runners with your workflow instead of hosting your own.
Note: You will also need to set AWS credentials on the runner, with permissions to update the associated security group, and make sure the RDS instance is in a public subnet with an Internet Gateway attached and a security group assigned to it.
Your workflow should look something like this:
deploy:
  name: deploy
  runs-on: ubuntu-latest
  env:
    AWS_INSTANCE_SG_ID: <your-rds-subnet-sg-id>
  steps:
    - name: configure aws credentials
      uses: aws-actions/configure-aws-credentials@v1
      with:
        aws-access-key-id: <your-ci-aws-access-key>
        aws-secret-access-key: <your-ci-aws-secret-key>
        aws-region: <your-rds-aws-region>
    - name: get runner ip address
      id: ip
      uses: haythem/public-ip@v1.2
    - name: whitelist runner ip address
      run: |
        # note: use your database's port here (e.g. 5432 for PostgreSQL)
        aws ec2 authorize-security-group-ingress \
          --group-id $AWS_INSTANCE_SG_ID \
          --protocol tcp \
          --port 22 \
          --cidr ${{ steps.ip.outputs.ipv4 }}/32
    - name: connect to your rds instance and run tests
      run: |
        ...run tests...
    - name: revoke runner ip address
      run: |
        aws ec2 revoke-security-group-ingress \
          --group-id $AWS_INSTANCE_SG_ID \
          --protocol tcp \
          --port 22 \
          --cidr ${{ steps.ip.outputs.ipv4 }}/32
Ideally, though, you would run your integration tests on an EC2 instance within the same VPC as your RDS instance, to avoid publicly exposing the RDS instance.
This is in beta (as of September 1, 2022), but it is possible to assign a static IP address to runners:
Fixed IP ranges to provide access to runners via allow list services
Setup a fixed IP range for your machines by simply ticking a check box, this provides an IP range that can be allow listed in internal systems and in GitHub’s allow list to keep using Actions while making your GitHub environment more secure.
More details here
If your database happens to be Redis or PostgreSQL, GitHub Actions includes a built-in feature called Service Containers to spin up an ephemeral database in CI for testing purposes.
These databases are short-lived: after your job that uses it completes, the service container hosting the database is destroyed. You can either run the database in a container or directly on the virtual machine if desired.
For more info, see Creating PostgreSQL service containers in the GitHub Actions docs.
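A hedged sketch of a job using a PostgreSQL service container (the image tag, password, and test command are placeholders):

jobs:
  test:
    runs-on: ubuntu-latest
    services:
      postgres:
        image: postgres:14              # placeholder version
        env:
          POSTGRES_PASSWORD: postgres   # placeholder password
        ports:
          - 5432:5432                   # expose the DB port to the runner
        options: >-
          --health-cmd pg_isready
          --health-interval 10s
          --health-timeout 5s
          --health-retries 5
    steps:
      - uses: actions/checkout@v3
      - name: run tests
        env:
          DATABASE_URL: postgresql://postgres:postgres@localhost:5432/postgres
        run: ./run-tests.sh             # placeholder test command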
If you happen to be using another database, you can do some more manual legwork to install and run it yourself on the runner.

I want to create a user in Kubernetes with a username and password. I tried googling but could only find how to create a user using a cert and key

I am a newbie to K8s and still testing things. I have Prometheus running outside my cluster, and I am using admin creds to hit the kube API server to pull metrics into Prometheus, which is working fine at the moment.
I want to create another user just to scrape metrics. While searching, I could not find any documentation on creating a user with a user ID and password.
Also, we manage our repo in GitLab with a pipeline. Is it possible to create the user using YAML config instead of kubectl, as given in the documentation?
Thanks
Eswar
According to Prometheus docs:
Prometheus does not directly support basic authentication (aka "basic auth") for connections to the Prometheus expression browser and HTTP API. If you'd like to enforce basic auth for those connections, we recommend using Prometheus in conjunction with a reverse proxy and applying authentication at the proxy layer.
The link above includes a step-by-step guide on how to set up an nginx reverse proxy in front of Prometheus.
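As a hedged sketch, the proxy-layer auth might look like this in nginx (it assumes Prometheus listens on localhost:9090 and that a password file was created with htpasswd -c /etc/nginx/.htpasswd <username>):

server {
    listen 80;

    location / {
        auth_basic           "Prometheus";           # prompt for basic auth
        auth_basic_user_file /etc/nginx/.htpasswd;   # file holding user:hash entries
        proxy_pass           http://localhost:9090/; # forward to Prometheus
    }
}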

How to host multiple sites on Azure web app for Containers using docker compose

I would like to use a docker compose file to deploy multiple public endpoints for our Linux-hosted site.
We already have a deployed site that has images stored on a private ACR and is hosted on an Azure App Service (using Web App for Containers). It is deployed via Azure DevOps and works well.
We would however, like to use the same site to host an additional component, an api so that we would then end up with these endpoints:
https://www.example.com - the main site
https://www.example.com/api - the api
We would like to avoid a second app service or a subdomain if possible. The architecture we prefer is to use the same https certificate and ports (443) to host the api. The web site and api share a similar code base.
In the standard app service world, we could easily have deployed a virtual directory to the main app which is simple enough.
This model though seems to be more complicated when using containers.
How can we go about this? I've already had a look at this documentation: https://learn.microsoft.com/en-us/azure/app-service/containers/tutorial-multi-container-app. However, in this example, the second container is a private one - which doesn't get exposed.
Should we use a docker compose file (example please)? Or alternatively, is there a way we can use the Azure DevOps task to deploy to a virtual directory in the way that I would like? This is the task we are using for the single container deployment:
https://learn.microsoft.com/en-us/azure/devops/pipelines/tasks/deploy/azure-rm-web-app-containers?view=azure-devops
For your requirements: Web App for Containers is also a type of Web App service, and as you see, it can only expose one container to the outside (the Internet) while the others stay private. So if you want a multi-container Web App to expose multiple endpoints, such as the main site and the API site, that is not possible.
Given that a Web App only exposes one container to the outside, what you can do to achieve your purpose is build a single image and do the routing to your endpoints yourself, either in your code or through a tool such as Nginx, and then deploy that image to Web App for Containers. Only this way can you serve multiple endpoints from a single App Service.
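As a hedged sketch of the Nginx option (the ports are placeholders; it assumes the site and the API run as two processes inside the single container, with App Service terminating HTTPS in front):

server {
    listen 80;
    server_name www.example.com;

    # send /api traffic to the API process
    location /api/ {
        proxy_pass http://127.0.0.1:5001/;   # placeholder API port
    }

    # everything else goes to the main site
    location / {
        proxy_pass http://127.0.0.1:5000/;   # placeholder site port
    }
}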

How to get a Cloud Foundry service's whitelist IPs

We have a GUI that manages Cloud Foundry, and there's a link that shows an instance with an IP whitelist for an external dependency (quite large). How can I easily re-create this config as JSON and recreate it in a different Cloud Foundry environment?
It's not entirely clear what is being presented in your GUI but it sounds like it might be the application security groups. You might try running cf security-groups or cf security-group <name> to see if this information matches up with what's displayed in the GUI.
If that's what you want, you can use the following API calls to obtain the JSON data & recreate it in another environment.
1.) List all the security groups: http://apidocs.cloudfoundry.org/1.40.0/security_groups/list_all_security_groups.html
2.) List security groups applied to all applications: http://apidocs.cloudfoundry.org/1.40.0/security_group_running_defaults/return_the_security_groups_used_for_running_apps.html
3.) List security groups applied to all staging containers: http://apidocs.cloudfoundry.org/1.40.0/security_group_staging_defaults/return_the_security_groups_used_for_staging.html
4.) Retrieve a particular security group: http://apidocs.cloudfoundry.org/1.40.0/security_groups/retrieve_a_particular_security_group.html
And you can find more details about the API calls here: http://apidocs.cloudfoundry.org/
You can also run the cf cli commands above with the -v flag to show the HTTP requests being made by the CLI to obtain the information that's displayed.
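As a hedged sketch, the export-and-recreate round trip might look like this (rules.json is a placeholder file containing the rules array extracted from the exported JSON):

cf curl /v2/security_groups > security-groups.json      # export from the source environment
cf create-security-group my-security-group rules.json   # recreate in the target environment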
Hope that helps!