How do I use variable substitution in Azure Service Fabric - azure-service-fabric

I'm trying to emulate a Compose-style deployment via a Service Fabric service manifest, specifically for environment variables in the container. Static values work fine; what is not working (or at least not documented) is how to pass something from the host into the container.
In Compose, the following snippet puts the hostname variable from the container host into a container environment variable. How do I do the same in a Service Fabric manifest?
environment:
- "SHELL=powershell.exe"
- "HostName=${hostname}"

It appears to be unsupported at this time, according to the referenced GitHub issue.
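For reference, a minimal sketch of the part that does work today: static values declared per code package in ServiceManifest.xml. The package, service type, and image names below are made up for illustration.
<!-- ServiceManifest.xml sketch; names and image are placeholders -->
<ServiceManifest Name="FrontendPkg" Version="1.0.0"
                 xmlns="http://schemas.microsoft.com/2011/01/fabric">
  <ServiceTypes>
    <StatelessServiceType ServiceTypeName="FrontendType" UseImplicitHost="true" />
  </ServiceTypes>
  <CodePackage Name="Code" Version="1.0.0">
    <EntryPoint>
      <ContainerHost>
        <ImageName>myregistry.azurecr.io/frontend:latest</ImageName>
      </ContainerHost>
    </EntryPoint>
    <EnvironmentVariables>
      <EnvironmentVariable Name="SHELL" Value="powershell.exe" />
      <EnvironmentVariable Name="HostName" Value="static-only-for-now" />
    </EnvironmentVariables>
  </CodePackage>
</ServiceManifest>
The value can also be parameterized with EnvironmentOverrides and [Parameter] syntax in ApplicationManifest.xml, but that substitution happens at deployment time from application parameters, not from the container host, so there is no equivalent of Compose's ${hostname}.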

Related

How to fix error with GitLab runner inside Kubernetes cluster - try setting KUBERNETES_MASTER environment variable

I have set up two VMs that I am using throughout my journey of educating myself in CI/CD, GitLab, Kubernetes, cloud computing in general, and so on. Both VMs run Ubuntu 22.04 Server as the host OS.
VM1 - MicroK8s Kubernetes cluster
Most of the setup is "default". Since I'm not really that knowledgeable, I have only configured two pods and their respective services: one with PostGIS and the other with GeoServer. My intent is to add a third pod, the deployment of an app that I have in VM2, which will communicate with the GeoServer in order to provide a simple map web service (Leaflet + Django). All pods are exposed both within the cluster via internal IPs and externally (externalIp).
I have also installed two GitLab-related components here:
GitLab Runner with Kubernetes as executor
GitLab Kubernetes Agent
In VM2 both are visible as connected.
VM2 - GitLab
Here is where GitLab (default installation, latest version) runs. In the configuration (/etc/gitlab/gitlab.rb) I have enabled the agent server.
Initially I had the runner in VM1 configured to use Docker as the executor. I had no issues with that. However, I then thought it would be nice to try running the runner inside the cluster so that everything is encapsulated (using the internal cluster IPs without further configuration and without exposing the VM's operating system).
Both the runner and the agent show as connected, but running a pseudo-CI/CD pipeline (the one provided by GitLab, with build, test and deploy stages, each consisting of a simple echo and a wait of a few seconds) returns the following error:
Running with gitlab-runner 15.8.2 (4d1ca121)
on testcluster-k8s-runner Hko2pDKZ, system ID: s_072d6d140cfe
Preparing the "kubernetes" executor
Using Kubernetes namespace: gitlab-runner
ERROR: Preparation failed: getting Kubernetes config: invalid configuration: no configuration has been provided, try setting KUBERNETES_MASTER environment variable
Will be retried in 3s ...
Using Kubernetes namespace: gitlab-runner
ERROR: Preparation failed: getting Kubernetes config: invalid configuration: no configuration has been provided, try setting KUBERNETES_MASTER environment variable
Will be retried in 3s ...
Using Kubernetes namespace: gitlab-runner
ERROR: Preparation failed: getting Kubernetes config: invalid configuration: no configuration has been provided, try setting KUBERNETES_MASTER environment variable
Will be retried in 3s ...
ERROR: Job failed (system failure): getting Kubernetes config: invalid configuration: no configuration has been provided, try setting KUBERNETES_MASTER environment variable
I am unable to find any information regarding KUBERNETES_MASTER except in issue tickets (GitLab) and questions (SO and other Q&A platforms). I have no idea what it is or where to set it. My guess would be that it belongs in the runner's configuration on VM1, or at least in the environment of the gitlab-runner user (the user that owns the runner's userspace, with its /home/gitlab-runner directory).
The only possible solution I have found so far is to copy the .kube directory from the user that uses kubectl (in my case microk8s kubectl, since I use MicroK8s) into the home directory of the GitLab Runner user. I didn't see anything special in that directory (no hidden files) except for a cache subdirectory, hence my decision to simply create it at /home/gitlab-runner/.kube, which didn't change a thing.
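For context only (a sketch, not a verified fix): the "no configuration has been provided" message comes from the Kubernetes client library, which looks for either a kubeconfig file (KUBECONFIG, or ~/.kube/config of whatever user runs gitlab-runner) or the KUBERNETES_MASTER variable pointing at the API server. The server address, cluster name, and token below are hypothetical:
apiVersion: v1
kind: Config
clusters:
- name: microk8s-cluster
  cluster:
    server: https://192.168.56.10:16443        # hypothetical MicroK8s API server address
    certificate-authority-data: <base64-ca>    # placeholder
users:
- name: gitlab-runner
  user:
    token: <service-account-token>             # placeholder
contexts:
- name: microk8s
  context:
    cluster: microk8s-cluster
    user: gitlab-runner
current-context: microk8s
MicroK8s can print a kubeconfig in this shape with microk8s config; the key point is that /home/gitlab-runner/.kube/config needs to contain such a file (an empty .kube directory is not enough), or the runner's Kubernetes executor needs the cluster details in its config.toml.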

How to set up secrets in ECS task definition for container environment variable?

I am trying to set up the AWS ECS task definition for my Docker frontend container so that it points to an AWS backend URL.
In my .env.production:
REACT_APP_HOST=secrets.BACKEND_URL
How should I modify my secrets format or syntax so that the container environment variable I set in my ECS task definition is used correctly?
key: BACKEND_URL value:xxxxx
Thanks
You need to use the Secrets block in the ECS task definition; at run time, ECS will retrieve the secret value and inject it into the container as an environment variable.
Some docs; the approach is similar whether you use CloudFormation, the CLI, or Terraform:
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-ecs-taskdefinition.html
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-ecs-taskdefinition-secret.html
https://docs.aws.amazon.com/AmazonECS/latest/developerguide/specifying-sensitive-data-secrets.html
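A minimal CloudFormation-style sketch, assuming the secret lives in AWS Secrets Manager; the names, image, and ARNs below are made up, and the task execution role must be allowed to read the secret:
Resources:
  FrontendTaskDefinition:
    Type: AWS::ECS::TaskDefinition
    Properties:
      Family: frontend                                   # hypothetical family name
      ExecutionRoleArn: arn:aws:iam::123456789012:role/ecsTaskExecutionRole   # hypothetical role
      ContainerDefinitions:
        - Name: frontend                                 # hypothetical container name
          Image: 123456789012.dkr.ecr.us-east-1.amazonaws.com/frontend:latest  # hypothetical image
          Memory: 512                                    # example reservation
          Secrets:
            - Name: BACKEND_URL                          # env variable name inside the container
              ValueFrom: arn:aws:secretsmanager:us-east-1:123456789012:secret:backend-url  # hypothetical secret ARN
          Environment:
            - Name: NODE_ENV                             # plain, non-secret values go here
              Value: production
ECS resolves ValueFrom when the task starts and exposes BACKEND_URL as an ordinary environment variable inside the container.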

Resolve service IP in environment variable in Kubernetes

When I pass a service name in an environment variable in the YAML file, that service name stays a string; it is not resolved to a real IP address.
Example:
env:
- name: ES
value: elasticsearch
Thanks
You should be able to use it directly and it should resolve fine:
curl $ES
If you use it inside your application, it should also work.
Just consider that Kubernetes uses its internal DNS and that the "elasticsearch" name will only work inside the same namespace. In fact it will resolve to:
elasticsearch.<namespace>.svc.cluster.local.
If your Elasticsearch service is running in a different namespace, make sure you use elasticsearch.<target_namespace>.
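A short sketch of the cross-namespace form (the namespace name "data" is an assumption):
env:
- name: ES
  value: elasticsearch.data.svc.cluster.local   # fully qualified service DNS name; "data" is the assumed namespace
Either way the variable keeps holding a hostname, not an IP; the cluster DNS resolves it to the Service's ClusterIP at the moment the application opens a connection.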

OpenShift - External Database Environment Variables

This is the documentation on External Database Environment Variables. It says,
Using an external service in your application is similar to using an internal service. Your application will be assigned environment variables for the service and the additional environment variables with the credentials described in the previous step. For example, a MySQL container receives the following environment variables:
EXTERNAL_MYSQL_SERVICE_SERVICE_HOST=<ip_address>
EXTERNAL_MYSQL_SERVICE_SERVICE_PORT=<port_number>
MYSQL_USERNAME=<mysql_username>
MYSQL_PASSWORD=<mysql_password>
MYSQL_DATABASE_NAME=<mysql_database>
This part is not clear: "Your application will be assigned environment variables for the service."
How should the application be configured so that the environment variables for the service are assigned? I understand that the ones defined in the DeploymentConfig will flow into the application, e.g. in Node.js as process.env.MYSQL_USERNAME, etc. What I am not clear about is how EXTERNAL_MYSQL_SERVICE_SERVICE_HOST or EXTERNAL_MYSQL_SERVICE_SERVICE_PORT will flow in.
From Step 1 of the link that you posted: if you create a Service object with
oc expose deploymentconfig/<name>
This will automatically generate environment variables (https://docs.openshift.com/container-platform/3.11/dev_guide/environment_variables.html#automatically-added-environment-variables) for all pods in your namespace. (The environment variables may not be immediately available if the Service was added after your pods were already created...delete the pods to have them added on restart)
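For illustration, a hedged sketch of what such a Service (plus Endpoints for the external database) might look like; the name, port, and IP are assumptions:
apiVersion: v1
kind: Service
metadata:
  name: external-mysql-service        # the name determines the EXTERNAL_MYSQL_SERVICE_* prefix
spec:
  ports:
  - port: 3306                        # assumed MySQL port
---
apiVersion: v1
kind: Endpoints
metadata:
  name: external-mysql-service        # must match the Service name
subsets:
- addresses:
  - ip: 10.0.0.5                      # hypothetical external database IP
  ports:
  - port: 3306
Pods created after this Service exists automatically receive EXTERNAL_MYSQL_SERVICE_SERVICE_HOST and EXTERNAL_MYSQL_SERVICE_SERVICE_PORT (the service name uppercased, with dashes turned into underscores); MYSQL_USERNAME, MYSQL_PASSWORD and MYSQL_DATABASE_NAME still come from the env section of the DeploymentConfig or from a Secret.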

Docker Compose v3 and link environment variables

Link environment variables have been deprecated since v2. What is the alternative for discovering the random port then? I have a dockerized Java app that I can point at a data source via environment variables, but now I cannot. The vague mention that I should use the link name is not helping. Is there an alternative?
So here is the thing: --link used to create a lot of unnecessary variables that were not required at all.
Now, when you use docker-compose, you can name your service anything you want. So if you are running MySQL, you can name the service mysql or db or dbservice or anything else.
In your configs you can then use that service name (mysql, db, or dbservice). Or you can use an environment variable inside the code to pick up the service name and pass it in through your docker-compose file.
You can also have aliases for the same container under different names.
About the ports: if I have an nginx image that exposes port 8080, then I know in my config that it will always be port 8080, and hence there is no need to pass it.
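A hedged compose sketch of the same idea (image names and variable names are placeholders):
version: "3"
services:
  db:
    image: mysql:8                 # hypothetical database service; its name "db" is the hostname other services use
  app:
    image: my-java-app:latest      # hypothetical application image
    environment:
      - DB_HOST=db                 # the service name resolves through the compose network's built-in DNS
      - DB_PORT=3306               # the container-side port is fixed by the image, so it can be stated directly
Inside the Java app, reading DB_HOST and DB_PORT takes the place of the DB_PORT_3306_TCP_* style variables that --link used to inject.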