What is an example of Infrastructure as Code (IaC)? [closed] - infrastructure-as-code

I have run into the term "IaaC" (or IaC) many times. When I googled it, I found:
Infrastructure as Code (IaC) is the process of managing and provisioning computer data centers through machine-readable definition files, rather than physical hardware configuration or interactive configuration tools.
Can the YAML files used in Kubernetes be an example of IaC? Maybe even a Dockerfile could be considered as such? If not, could you give me some examples of IaC?
For example :
apiVersion: v1
kind: Service
metadata:
  name: my-nginx-svc
  labels:
    app: nginx
spec:
  type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: nginx
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-nginx
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80

There are a number of steps that IT Ops teams need to perform to release or update an application running on the internet. A few examples of these tasks are:
Provisioning new virtual machines, e.g. starting a VM with the required memory and specs.
Installing the required software and dependencies (see the sketch after this list).
Managing and scaling the infrastructure.
Repeating all of these configurations again and again.
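The "installing software" step is where configuration-management tools shine, and they are another classic IaC example. A minimal sketch (not from the original answer) in Ansible's YAML; the web host group and the nginx package are hypothetical, and a Debian-based host is assumed:
# playbook.yml - hypothetical Ansible sketch: declare the desired software
# state in a file instead of running installation commands by hand
- hosts: web
  become: true
  tasks:
    - name: Install nginx
      apt:
        name: nginx
        state: present
    - name: Ensure nginx is running and enabled at boot
      service:
        name: nginx
        state: started
        enabled: true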
Infrastructure as Code means automating the steps required to deploy our application on the internet. Since Docker and k8s let us automate the deployment process through definition files, those files are also considered Infrastructure as Code.
Example
# define services (containers) that should be running
services:
  mongo-database:
    image: mongo:3.2
    # what volumes to attach to this container
    volumes:
      - mongo-data:/data/db
    # what networks to attach this container to
    networks:
      - raddit-network
  raddit-app:
    # path to Dockerfile to build an image and start a container
    build: .
    environment:
      - DATABASE_HOST=mongo-database
    ports:
      - 9292:9292
    networks:
      - raddit-network
    # start raddit-app only after the mongo-database service has started
    depends_on:
      - mongo-database
# define volumes to be created
volumes:
  mongo-data:
# define networks to be created
networks:
  raddit-network:
This Docker Compose file starts the mongo-database dependency as well as the main application raddit-app, and specifies the port the application listens on. Running docker-compose up is then enough to bring up the whole stack, which is exactly the IaC idea: the file, not a sequence of manual commands, defines the infrastructure.
Source: Artemmkin/infrastructure-as-code-tutorial

Related

Getting started with Kubernetes - deploy docker compose

I am trying to follow the instructions in this tutorial: https://docs.docker.com/docker-for-windows/kubernetes/#use-docker-commands. I have followed these steps:
1) Enable Kubernetes in Docker Desktop.
2) Create a simple asp.net core 3.1 app in Visual Studio 2019 and add container orchestration support (Docker Compose).
3) Run the app in Visual Studio 2019 to confirm it runs successfully in Docker.
4) Run the following command at a command prompt: docker-compose build kubernetesexample
5) Run the following command at a command prompt: docker stack deploy --compose-file docker-compose.yml mystack
6) Run the following command at a command prompt: kubectl get services. Here is the result:
How do I browse to my app? I have tried to browse to: http://localhost:5100 and http://localhost:32442.
Here is my docker-compose.yml:
services:
  kubernetesexample:
    environment:
      - ASPNETCORE_ENVIRONMENT=Development
    ports:
      - "45678:80"
[Screenshot of the kubectl get services output: https://i.stack.imgur.com/FAkAZ.png]
Here is the result of running: kubectl get svc kubernetesexample-published -o yaml:
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: "2020-03-14T17:51:41Z"
  labels:
    com.docker.service.id: mystack-kubernetesexample
    com.docker.service.name: kubernetesexample
    com.docker.stack.namespace: mystack
  name: kubernetesexample-published
  namespace: default
  ownerReferences:
  - apiVersion: compose.docker.com/v1alpha3
    blockOwnerDeletion: true
    controller: true
    kind: Stack
    name: mystack
    uid: 75f037b1-661c-11ea-8b7c-025000000001
  resourceVersion: "1234"
  selfLink: /api/v1/namespaces/default/services/kubernetesexample-published
  uid: a8e6b35a-35d1-4432-82f7-108f30d068ca
spec:
  clusterIP: 10.108.180.197
  externalTrafficPolicy: Cluster
  ports:
  - name: 5100-tcp
    nodePort: 30484
    port: 5100
    protocol: TCP
    targetPort: 5100
  selector:
    com.docker.service.id: mystack-kubernetesexample
    com.docker.service.name: kubernetesexample
    com.docker.stack.namespace: mystack
  sessionAffinity: None
  type: LoadBalancer
status:
  loadBalancer:
    ingress:
    - hostname: localhost
Please note the port has now changed.
Update
Service - An abstract way to expose an application running on a set of Pods as a network service.
Rather, use the Kubernetes documentation; it has interactive, in-browser examples. I see you tried to use LoadBalancer: that type must be supported by a cloud provider or a properly set up environment. All the ways of publishing services are summarized here. Try using NodePort; a simple configuration would be e.g.:
apiVersion: v1
kind: Service
metadata:
  name: np-kubernetesexample
  labels:
    app: kubernetesexample
spec:
  type: NodePort
  ports:
  - port: 5100
    protocol: TCP
    targetPort: 5100
  selector:
    app: kubernetesexample
...from what I gather from the provided screenshots and description, please check the port and the labels. If successful, the application should be available on localhost:3xxxx, where 3xxxx is the second port shown under PORTS when you run kubectl get services (displayed as xxxx:3xxxx/TCP).
It seems to work if I change my docker-compose to this:
version: '3.4'
services:
  kubernetesexample:
    image: kubernetesexample
    ports:
      - "80:80"
    build:
      context: .
      dockerfile: Dockerfile
Then I browse on port 80, i.e. http://localhost. It does not seem to work on any other port. The video here helped me: https://www.docker.com/blog/docker-windows-desktop-now-kubernetes/

How do I publish .NET Core to Digital Ocean Kubernetes

I am trying to publish a .NET Core web app and a .NET Core API.
I have been googling and can't find a way to deploy one, let alone two, .NET Core apps to a DigitalOcean Kubernetes cluster. I have two nodes, have created a valid manifest, and have built a Docker image locally, which seems to pass validation, but I can't actually deploy it. I'm new to Kubernetes, and everything I find seems to relate to Google's or Azure's Kubernetes offerings.
I don't, unfortunately, have more information than this.
I have one. The odd thing is that DO is actually smart not to have its own docs, since it doesn't have to: you can recycle Google's and Azure's k8s documentation to work on your DO cluster. The key difference is mostly in the naming, I suppose; there could be more differences, but so far I haven't hit a single problem while applying instructions from GCP's docs.
https://nozomi.one is running on DO's k8s cluster.
Here's an awesome-dotnetcore-digitalocean-k8 for you.
Errors you may/will face:
Kubernetes - Error message ImagePullBackOff when deploy a pod
Utilising .NET Core appsettings in docker k8
Push the secret file like this (recommended only for staging or below, unless you have a very secure way to deploy it):
kubectl create secret generic secret-appsettings --from-file=./appsettings.secrets.json
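For reference (not part of the original answer), this command produces a Secret object roughly like the sketch below; the data value is a base64 placeholder, not real content:
apiVersion: v1
kind: Secret
metadata:
  name: secret-appsettings
type: Opaque
data:
  # base64-encoded content of appsettings.secrets.json (placeholder value)
  appsettings.secrets.json: eyJDb25uZWN0aW9uU3RyaW5ncyI6ICIuLi4ifQ==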
And then create a deployment configuration similar to this. Notice that we've added the appsettings in the last few lines:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: xxxxx
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: xxxxx
    spec:
      containers:
      - name: xxxxx
        image: xxxxx/xxxxxx:latest
        ports:
        - containerPort: 80
        env:
        - name: "ASPNETCORE_ENVIRONMENT"
          value: "Production"
        volumeMounts:
        - name: secrets
          mountPath: /app/secrets
          readOnly: true
      volumes:
      - name: secrets
        secret:
          secretName: secret-appsettings
Deploying this manifest is as simple as:
kubectl create -f deployment.yaml
And if you want to test locally in docker first:
docker run --rm -p 8080:8080 gcr.io/${PROJECT_ID}/test-app:v1
All in all, everything above will help you deploy your pods.
You need to understand that deploying a new project/app works in this systematic way:
Create a Deployment, which pulls the image for you and creates pods that will be deployed to the nodes.
Create a Service, which points the proper ports (and more, though I've never tried the "more") at your app(s).
This is what a Service looks like:
apiVersion: v1
kind: Service
metadata:
  name: nozweb
spec:
  type: LoadBalancer
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 80
  - name: https
    port: 443
    protocol: TCP
    targetPort: 80
  selector:
    app: nozweb
Always ensure that the Service's spec.selector.app specifically matches:
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: xxxxx
in your deployment configuration. That's how the two are linked.
Create an Ingress (optional), which will act as a reverse proxy for your .NET Core app/project. This is optional because we have Kestrel running!
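If you do add one, a minimal Ingress sketch (not from the original answer; the host name is hypothetical, and it assumes an ingress controller such as ingress-nginx is installed in the cluster) could look like:
# Hypothetical Ingress routing a public host to the nozweb service on port 80
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nozweb-ingress
spec:
  rules:
  - host: nozweb.example.com   # placeholder domain
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: nozweb
            port:
              number: 80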

Cloud Composer unable to connect to Cloud SQL Proxy service

We launched a Cloud Composer cluster and want to use it to move data from Cloud SQL (Postgres) to BQ. I followed the notes about doing this mentioned at these two resources:
Google Cloud Composer and Google Cloud SQL
https://cloud.google.com/sql/docs/postgres/connect-kubernetes-engine
We launch a pod running the cloud_sql_proxy and a service to expose the pod. The problem is that Cloud Composer cannot see the service; an ad-hoc query used to test the connection fails with:
could not translate host name "sqlproxy-service" to address: Name or service not known
Trying the service IP address instead results in the page timing out.
The -instances argument passed to cloud_sql_proxy works when used in a local environment or Cloud Shell. The log files seem to indicate no connection is ever attempted:
me@cloudshell:~ (my-proj)$ kubectl logs -l app=sqlproxy-service
2018/11/15 13:32:59 current FDs rlimit set to 1048576, wanted limit is 8500. Nothing to do here.
2018/11/15 13:32:59 using credential file for authentication; email=my-service-account@service.iam.gserviceaccount.com
2018/11/15 13:32:59 Listening on 0.0.0.0:5432 for my-proj:my-ds:my-db
2018/11/15 13:32:59 Ready for new connections
I saw a comment here (https://stackoverflow.com/a/53307344/1181412) suggesting that this may not even be supported.
Here is the YAML:
apiVersion: v1
kind: Service
metadata:
  name: sqlproxy-service
  namespace: default
  labels:
    app: sqlproxy
spec:
  ports:
  - port: 5432
    protocol: TCP
    targetPort: 5432
  selector:
    app: sqlproxy
  sessionAffinity: None
  type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sqlproxy
  labels:
    app: sqlproxy
spec:
  selector:
    matchLabels:
      app: sqlproxy
  template:
    metadata:
      labels:
        app: sqlproxy
    spec:
      containers:
      - name: cloudsql-proxy
        ports:
        - containerPort: 5432
          protocol: TCP
        image: gcr.io/cloudsql-docker/gce-proxy:latest
        imagePullPolicy: Always
        command: ["/cloud_sql_proxy",
                  "-instances=my-proj:my-region:my-db=tcp:0.0.0.0:5432",
                  "-credential_file=/secrets/cloudsql/credentials.json"]
        securityContext:
          runAsUser: 2  # non-root user
          allowPrivilegeEscalation: false
        volumeMounts:
        - name: cloudsql-instance-credentials
          mountPath: /secrets/cloudsql
          readOnly: true
      volumes:
      - name: cloudsql-instance-credentials
        secret:
          secretName: cloudsql-instance-credentials
The information you found in the answer you linked is correct - ad-hoc queries from the Airflow web server to cluster-internal services within the Composer environment are not supported. This is because the web server runs on App Engine flex using its own separate network (not connected to the GKE cluster), which you can see in the Composer architecture diagram.
Since that is the case, your SQL proxy must be exposed on a public IP address for the Composer Airflow web server to connect to it. For any services/endpoints listening on RFC1918 addresses within the GKE cluster (i.e. not exposed on a public IP), you will need additional network configuration to accept external connections.
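As a sketch only (not from the original answer, and a real security trade-off): the proxy could be exposed through a LoadBalancer Service whose loadBalancerSourceRanges field restricts which source CIDRs may connect; the range below is a placeholder.
# Hypothetical external Service for the sqlproxy pods; restrict access to the
# Composer web server's egress range before using anything like this
apiVersion: v1
kind: Service
metadata:
  name: sqlproxy-external
  namespace: default
spec:
  type: LoadBalancer
  loadBalancerSourceRanges:
  - 203.0.113.0/24   # placeholder; replace with the web server's egress range
  ports:
  - port: 5432
    protocol: TCP
    targetPort: 5432
  selector:
    app: sqlproxy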
If this is a major blocker for you, consider running a self-managed Airflow web server. Since this web server would run in the same cluster as the SQL proxy you set up, there would no longer be any issues with name resolution.

Kubernetes YAML generator UI, YAML builder for Kubernetes [closed]

Is there any tool, online or self-hosted, that takes all the values as input in a UI and generates the full declarative YAML for the following Kubernetes objects:
Deployment, with init containers, imagePullSecrets, and other options
Service
ConfigMap
Secret
DaemonSet
StatefulSet
Namespaces and quotas
RBAC resources
Edit:
I have been using kubectl create and kubectl run, but they don't support all the possible configuration options, and you still need to remember which options they do support; in a UI one would be able to select from the available options for each resource.
The closest is kubectl create .... and kubectl run ...... Run them with -o yaml --dry-run > output.yaml. This won't create the resource, but will write the resource description to the file output.yaml.
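For example (a sketch; the output below is trimmed, and newer kubectl versions spell the flag --dry-run=client):
# kubectl create deployment nginx --image=nginx --dry-run -o yaml
# prints a manifest like this without creating anything:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx
  name: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx
        name: nginx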
I found yipee.io, which supports all of the options and resources:
# Generated 2018-10-18T11:07:27.621Z by Yipee.io
# Application: nginx
# Last Modified: 2018-10-18T11:07:27.621Z
apiVersion: v1
kind: Service
metadata:
  namespace: webprod
  name: nginx
spec:
  selector:
    app: nginx
  ports:
  - port: 80
    targetPort: 8080
    name: nginx-hhpt
    protocol: TCP
    nodePort: 30003
  type: NodePort
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx
  labels:
    app: nginx
  namespace: webprod
  annotations:
    yipee.io.lastModelUpdate: '2018-10-18T11:07:27.595Z'
spec:
  selector:
    matchLabels:
      name: nginx
      component: nginx
      app: nginx
  rollbackTo:
    revision: 0
  template:
    spec:
      imagePullSecrets:
      - name: imagsecret
      containers:
      - volumeMounts:
        - mountPath: /data
          name: nginx-vol
        name: nginx
        ports:
        - containerPort: 80
          protocol: TCP
          name: http
        imagePullPolicy: IfNotPresent
        image: docker.io/nginx:latest
      volumes:
      - name: nginx-vol
        hostPath:
          path: /data
          type: Directory
      serviceAccountName: test
    metadata:
      labels:
        name: nginx
        component: nginx
        app: nginx
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 2
  replicas: 1
  revisionHistoryLimit: 3
I have tried to address the same issue using a Java client based on the most popular Kubernetes Java Client:
<dependency>
    <groupId>io.fabric8</groupId>
    <artifactId>kubernetes-client</artifactId>
    <version>4.1.3</version>
</dependency>
It allows you to set even the most exotic options, but the API is not very fluent (or I have not yet found the way to use it fluently), so the code becomes quite verbose. Building a UI on top of it is a challenge because of the extreme complexity of the model.
yipee.io sounds promising, though, but I didn't understand how to get a trial version.

Configuring different pod configuration for different environments (Kubernetes + Google Cloud or Minikube)

I have a (containerized) web service talking to an external CloudSQL service in Google Cloud. I've used the sidecar pattern, in which a Google Cloud SQL Proxy container sits next to the web service and authenticates and proxies to the external CloudSQL service. This works fine. Let's call this Deployment "deployment-api", with containers "api" + "pg-proxy".
The problem occurs when I want to deploy the application on my local minikube cluster, which needs a different configuration because the service talks to a local postgres server on my computer. If I deploy "deployment-api" as-is to minikube, it tries to run the "pg-proxy" container, which barfs, and the entire pod goes into a crash loop. Is there a way for me to selectively NOT deploy the "pg-proxy" container without having two definitions for the Pod, e.g. using selectors/labels? I do not want to move the pg-proxy container into its own deployment.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deployment-api
  namespace: ${MY_ENV}
  labels:
    app: api
    env: ${MY_ENV}
spec:
  replicas: ${REPLICAS}
  selector:
    matchLabels:
      app: api
      env: ${MY_ENV}
  template:
    metadata:
      labels:
        app: api
        env: ${MY_ENV}
    spec:
      containers:
      - name: pg-proxy
        ports:
        - containerPort: 5432
        image: gcr.io/cloudsql-docker/gce-proxy:1.11
        command: ["/cloud_sql_proxy",
                  "-instances=<redacted>:${MY_ENV}-app=tcp:5432",
                  "-credential_file=/secrets/cloudsql/${MY_ENV}-sql-credentials.json"]
        securityContext:
          runAsUser: 2  # non-root user
          allowPrivilegeEscalation: false
        volumeMounts:
        - name: ${MY_ENV}-cloudsql-instance-credentials
          mountPath: /secrets/cloudsql
          readOnly: true
      - name: api
        image: ${DOCKER_IMAGE_PREFIX}api:${TAG}
        imagePullPolicy: ${PULL_POLICY}
        ports:
        - containerPort: 50051
      volumes:
      - name: ${MY_ENV}-cloudsql-instance-credentials
        secret:
          secretName: ${MY_ENV}-cloudsql-instance-credentials
In raw Kubernetes terms? No.
But I strongly encourage you to use Helm to deploy your application(s). With Helm you can easily adapt manifests based on variables provided for each environment (or defaults). For example, with the variable postgresql.proxy.enabled: true in the defaults and
{{- if .Values.postgresql.proxy.enabled }}
- name: pg-proxy
  ...
{{- end }}
in the Helm template, you could disable this block completely in the dev environment by setting the value to false.
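As an illustration (a hypothetical chart layout, not from the original answer), the chart's defaults file would ship the proxy enabled:
# values.yaml - chart defaults; cloud deployments keep the sidecar
postgresql:
  proxy:
    enabled: true
A minikube deployment could then drop the sidecar with helm install my-api ./chart --set postgresql.proxy.enabled=false, or with a values-dev.yaml override file.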