Is it possible to set up my ingress with a wildcard so I can fetch images with repositoryname.hostname, where repositoryname is the wildcard?
I have Artifactory running on my k8s cluster. To download from a remote repository in Artifactory 6.x, one has to use the following:
http://<host>:<port>/artifactory/<remote-repository-name>/<artifact-path>
Is there any way I can move <remote-repository-name> in front of <host> as a subdomain, so that the address would be:
http://<remote-repository-name>.<host>:<port>/artifactory/<artifact-path>
or
http://<artifact-path><remote-repository-name>.<host>:<port>/artifactory
I searched everywhere in the Artifactory docs for some kind of wildcard DNS record (or subdomain). The only thing I could find was something about Docker and reverse proxies; at my work, though, we'd like to be able to pull from any kind of repository this way.
Artifactory supports the sub-domain access method, which can be set up under Admin > Artifactory > HTTP Settings. You can find more on this here.
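For illustration, assuming the sub-domain method is enabled together with a wildcard DNS record pointing at the reverse proxy or ingress (the hostname and repository key below are made up, and the documented examples are Docker-oriented), a repository is then addressed by its key as the host prefix:
# wildcard record *.artifactory.mycompany.net resolves to the reverse proxy / ingress
docker login docker-remote.artifactory.mycompany.net
docker pull docker-remote.artifactory.mycompany.net/ubuntu:16.04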
I am a newbie to K8s and still testing things. I have Prometheus running outside my cluster. At the moment I am using admin credentials to hit the kube API server to get metrics into Prometheus, which is working fine.
I want to create another user only to scrape metrics. While searching, I could not find any documentation on creating a user with a user id and password.
Also, we manage our repo in GitLab with a pipeline. Is it possible to create the user using a YAML config instead of kubectl, as given in the documentation?
Thanks
Eswar
According to the Prometheus docs:
Prometheus does not directly support basic authentication (aka "basic auth") for connections to the Prometheus expression browser and HTTP API. If you'd like to enforce basic auth for those connections, we recommend using Prometheus in conjunction with a reverse proxy and applying authentication at the proxy layer.
In the link above there is a step-by-step guide on how to set up an nginx reverse proxy in front of Prometheus.
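For the Kubernetes side of the question (an identity that can only scrape metrics), the usual approach is a ServiceAccount plus RBAC rather than a user id and password. A minimal kubectl sketch, where every name below is an assumption:
# ServiceAccount that Prometheus will authenticate as
kubectl create serviceaccount prometheus-scraper -n monitoring
# ClusterRole that only allows GET on the /metrics endpoint
kubectl create clusterrole metrics-reader --verb=get --non-resource-url=/metrics
kubectl create clusterrolebinding prometheus-scraper-metrics \
  --clusterrole=metrics-reader --serviceaccount=monitoring:prometheus-scraper
Prometheus can then present the ServiceAccount's bearer token (bearer_token_file in the scrape config) instead of basic auth; the reverse proxy from the quote above is only needed if you want to put a password in front of Prometheus's own UI and API.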
I have installed the VSTS agent in a very locked-down environment. It makes a connection to VSTS and gets the job, but fails when downloading the artefact, giving the error:
Error: in getBuild, so retrying => retries pending : 4.
It retries 4 times and fails.
The agent is going through a proxy. I have set up the proxy using ./config --proxyurl and have also set the HTTP_PROXY and HTTPS_PROXY system environment variables.
The proxy is very restrictive in that URLs are locked down; no authentication is required. Does anybody know which URLs the agent accesses? I am hoping that a definitive list will solve the issue. If anybody knows how to get such a list, that would be great. Or maybe I have misconfigured something?
Any ideas?
Trying to run the VSTS agent through a proxy which limits sites.
According to the documentation section "I'm running a firewall and my code is in Azure Repos. What URLs does the agent need to communicate with?":
To ensure your organization works with any existing firewall or IP restrictions, ensure that dev.azure.com and *.dev.azure.com are open and update your allow-listed IPs to include the following IP addresses, based on your IP version. If you're currently allow-listing the 13.107.6.183 and 13.107.9.183 IP addresses, leave them in place, as you don't need to remove them.
And with just the organization's name or ID, you can get its base URL using the global Resource Areas REST API (https://dev.azure.com/_apis/resourceAreas). This API doesn't require authentication and provides information about the location (URL) of the organization, as well as the base URL for REST APIs, which can live on different domains.
Please check this document Best practices for working with URLs in Azure DevOps extensions and integrations for some more details.
Hope this helps.
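As a quick sanity check from the agent machine, you can send a request through the proxy to the unauthenticated Resource Areas endpoint mentioned above; the proxy address and the api-version value below are assumptions:
# lists the base URLs (domains) the organization's REST APIs live on,
# a good starting point for the proxy allow-list
curl -x http://proxy.mycompany.local:8080 \
  "https://dev.azure.com/_apis/resourceAreas?api-version=5.0-preview.1"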
I am using google-cloud-cpp (the C++ client library for Google Cloud Platform) to create/read/write to buckets. When I am working from within the organization's firewall, I have to use a proxy to be able to connect to Google Cloud.
I see that we can configure a proxy using the gcloud command line:
gcloud config set proxy/type http
gcloud config set proxy/address x.x.x.x
gcloud config set proxy/port
Can I do something similar when I use google-cloud-cpp?
If we look at the source code of the google-cloud-cpp library as found on GitHub, we seem to see that it is based on libcurl.
See:
https://github.com/googleapis/google-cloud-cpp/blob/master/google/cloud/storage/internal/curl_handle.cc
Following on from the comments by @Travis Webb, we then look at the docs for libcurl and find:
https://curl.haxx.se/libcurl/c/CURLOPT_PROXY.html
This documents the API that can be used to set proxy settings for programs that use libcurl. However, if we read deeper, we find a section on environment variables which declares that http_proxy and https_proxy can be set.
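So, as a minimal sketch under that assumption (the proxy address, port, and binary name are made up), it should be enough to export the variables before starting the program that uses google-cloud-cpp:
# libcurl reads these at runtime, so no code change is needed
export http_proxy=http://proxy.mycompany.local:3128
export https_proxy=http://proxy.mycompany.local:3128
./my_gcs_program    # hypothetical binary built against the google-cloud-cpp storage client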
I have a private registry and I want to allow worker nodes (running on Azure Kubernetes Services) to be able to pull images only from this registry.
Is there a way to allow worker nodes to only pull images from a specific private registry?
I would be surprised if the only way to achieve that is through complex firewall rules.
As far as I know, Kubernetes does not have the feature you are referring to.
You can read Pull an Image from a Private Registry, which describes how to create a Secret that holds the authorization token and how to use it.
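For reference, a minimal sketch of that approach (the registry address and credentials are placeholders):
kubectl create secret docker-registry regcred \
  --docker-server=docker.mycompany.net \
  --docker-username=<user> \
  --docker-password=<password> \
  --docker-email=<email>
# the secret is then referenced from the pod spec via imagePullSecrets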
On the other hand, I was able to find something in Docker called Content trust.
Content trust allows operations with a remote Docker registry to enforce client-side signing and verification of image tags. Content trust provides the ability to use digital signatures for data sent to and received from remote Docker registries. These signatures allow client-side verification of the integrity and publisher of specific image tags.
Currently, content trust is disabled by default. To enable it, set the DOCKER_CONTENT_TRUST environment variable to 1.
A link to the documentation is available here; you can also read about it in the Docker blog post A secure supply chain for Kubernetes, Part 2.
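Enabling it is just an environment variable on the Docker client, for example (the image reference is a placeholder):
export DOCKER_CONTENT_TRUST=1
# the pull now fails unless the tag has been signed by a trusted publisher
docker pull docker.mycompany.net/{image-name}:{image-version}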
If you have a private registry, the tags of the images inside it must be prefixed with the registry hostname. If the prefix is not present, the command uses Docker's public registry located at registry-1.docker.io by default. For example, if your registry hostname is docker.mycompany.net, the tag of the image should be docker.mycompany.net/{image-name}:{image-version}. Documentation on docker tag can be found here.
So you just have to use the full image tag with the hostname prefix in your container specs; for the scenario above it would look like this:
containers:
- name: my-container
  image: docker.mycompany.net/{image-name}:{image-version}
  ports:
  - containerPort: 80
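Per the docker tag documentation mentioned above, pushing an existing local image under that name would then be (placeholders kept from the example):
docker tag {image-name}:{image-version} docker.mycompany.net/{image-name}:{image-version}
docker push docker.mycompany.net/{image-name}:{image-version}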
I am able to create a VM from a custom image using the Azure Resource Management SDK for .NET. Now I want to download the RDP file for the virtual machine programmatically. I have searched and was able to find a REST API for Azure 'Classic' deployments which contains an API call to download the RDP file, but I can't find the same in the REST API for 'ARM' deployments. Also, I can't find any such method in the .NET SDK for Azure.
Does any way exist to achieve that? Please guide.
I don't know of a way to get the RDP file, but you can get all the information you need from the deployment itself. On the deployment, you can set outputs for the values you need, like the public IP's DNS name. See this:
https://github.com/bmoore-msft/AzureRM-Samples/blob/master/VMCSEInstallFilePS/azuredeploy.json#L213-215
If your environment is more complex (load balancers, network security groups) you need to account for port numbers, etc.
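As an illustration with the Azure CLI (the resource group, deployment name, and output name are assumptions), once the template exposes the DNS name as an output you can read it back and build the .rdp file yourself, since an RDP file is just a small text file:
az group deployment show -g myResourceGroup -n myDeployment \
  --query properties.outputs.fqdn.value -o tsv
# an .rdp file only needs the target address
echo "full address:s:<fqdn-from-above>:3389" > vm.rdp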