How to develop using OpenVSCode and a remote container

I would like to use OpenVSCode for cloud development in a microservices-oriented environment.
I was thinking of the following architecture/setup:
Use K8s as the runtime environment.
OVSC and dev workloads run in dedicated, separate pods (not sidecars).
Code sharing is done via NFS, Syncthing, etc.
The documentation showcases a setup where OVSC operates/runs as the dev pod itself. When running as described above (IDE and dev workloads in separate pods), I noticed that dev-related libraries (e.g. Golang packages) are not available, since they are installed on the dev pod.
Q:
What is needed in order to support such a setup?
Is it possible to init OVSC in such a way that it will execute commands/open the terminal on a remote container by default?
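For illustration, the behaviour I'm after is for the integrated terminal to act roughly as if I had run something like the following (the pod and container names here are hypothetical):
# open a shell in the dev container from the IDE pod
kubectl exec -it dev-pod -c dev -- /bin/bash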
Thanks!

Related

Latest AWX version with docker-compose for production

I'm trying to configure an AWX runtime using Docker with Docker Compose. With the image quay.io/ansible/awx:21.7.0 it seems a little tricky. I don't want to set up Kubernetes and use the AWX Operator: I don't have the resources or tasks to justify that complexity, and it would just add redundant tools. All I need is to run AWX as a Docker process alongside some additional services in my infrastructure (for example Traefik and systemd services).
Has anyone gone down this path? I'm trying to find a production Dockerfile (I assume one is used in production, right?) and to prepare the Django environment to work inside docker-compose (env vars, networks, resources, services).
I'll be updating this post with my results. Thanks guys, I hope I'm not alone with this problem.

Override deployed Helm chart values on GKE with values from a file on the local machine?

I would like to update my deployed (GKE) Helm chart's values with the ones inside my local file; basically, I want to do this:
helm upgrade -f new-values.yml {release name} {package name or path}
So I've made all the changes inside my local file, but the deployment is inside the GKE cluster.
I've connected to my cluster via ssh, but how can I run the above command in order to perform the update if the file with the new values is on my local machine and the deployment is inside GKE cluster?
Maybe somehow via the scp command?
Solution by setting up required tools locally (you need a while or two for that)
You just need to reconfigure your kubectl client, which is pretty straightforward. When you log in to the GCP Console, go to Kubernetes Engine -> Clusters -> click on Actions (the 3 vertical dots to the right of the cluster name) -> select Connect, and copy the command, which may resemble the following one:
gcloud container clusters get-credentials my-gke-cluster --zone europe-west4-c --project my-project
It assumes you have the Cloud SDK and kubectl already installed on your local machine. If you have not, here is a step-by-step description of how to do that:
Installing Google Cloud SDK [Debian/Ubuntu] (if you use a different OS, simply choose another tab)
Installing kubectl tool [Debian/Ubuntu] (choose your OS if it is something different)
Once you run the above command on your local machine, your kubectl context will be automatically set to your GKE Cluster even if it was set before e.g. to your local Minikube instance. You can check it by running:
kubectl config current-context
OK, almost done. Did I also mention helm? Well, you will also need it. So if you have not installed it on your local machine previously, please do it now:
Install helm [Debian/Ubuntu]
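Putting it all together, once the tools are in place, the whole flow from your local machine might look like this (the release and chart names below are placeholders):
# point kubectl at the GKE cluster (the command copied from the Connect dialog)
gcloud container clusters get-credentials my-gke-cluster --zone europe-west4-c --project my-project
# verify the active context
kubectl config current-context
# upgrade the release using the local values file
helm upgrade -f new-values.yml my-release my-chart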
Alternative solution using Cloud Shell (much quicker)
If installing and configuring everything locally seems like too much hassle, you can simply use Cloud Shell (I bet you've used it before). In case you haven't, once logged in to your GCP Console, click on the Cloud Shell icon in the top bar.
Once in Cloud Shell, you can upload your local files there: simply click on More (the 3 dots again) and choose Upload a file.
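From there the steps mirror the local ones, with the advantage that gcloud, kubectl and helm come preinstalled in Cloud Shell. A rough sketch (the cluster, release and chart names are placeholders):
# inside Cloud Shell: point kubectl at the cluster
gcloud container clusters get-credentials my-gke-cluster --zone europe-west4-c --project my-project
# use the values file you just uploaded
helm upgrade -f ~/new-values.yml my-release my-chart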

How do you install Python libraries on gcloud kubernetes nodes?

I have a gcloud Kubernetes cluster running and a Google bucket that holds some data I want to run on the cluster.
In order to use the data in the bucket, I need gcsfs installed on the nodes. How do I install packages like this on the cluster using gcloud, kubectl, etc.?
Check if a recipe like "Launch development cluster on Google Cloud Platform with Kubernetes and Helm" could help.
Using Helm, you can define workers with additional pip packages:
worker:
  replicas: 8
  limits:
    cpu: 2
    memory: 7500000000
  pipPackages: >-
    git+https://github.com/gcsfs/gcsfs.git
    git+https://github.com/xarray/xarray.git
  condaPackages: >-
    -c conda-forge
    zarr
    blosc
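If you save that snippet as, say, worker-values.yaml, applying it should be the usual Helm invocation. A minimal sketch, assuming the chart from that recipe (the release name and chart reference are placeholders, not verified):
# install or upgrade the cluster with the extra worker packages
helm upgrade --install my-cluster <chart-from-the-recipe> -f worker-values.yaml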
I don't know if the suggestion given by VonC will actually work, but what I do know is that you're not really supposed to install stuff onto a Kubernetes Engine worker node. This is evident from the fact that it neither has a package manager nor allows updating individual programs separately.
Container-Optimized OS does not support traditional package managers (...) This design prevents updating individual software packages in the root filesystem independent of other packages.
Having said that, you can customize the worker nodes of a node pool via startup scripts, provided the number of nodes in that pool is static. These still work as intended, but since you can't edit the instance template being used by the node pool, you'll have to add them to the instances manually, as sketched below. So again, this is clearly not a very good way of doing things.
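For completeness, attaching a startup script to one existing node might look roughly like this; the instance name and script content are hypothetical, and the change is lost whenever the node is recreated:
# a placeholder script with whatever node customization you need
cat > node-setup.sh <<'EOF'
#!/bin/bash
# custom node setup goes here
EOF
# attach it to one node manually; it runs on the next boot
gcloud compute instances add-metadata gke-my-cluster-node-1 --zone us-central1-b --metadata-from-file startup-script=node-setup.sh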
Finally, worker nodes have something called a "Toolbox", which is basically a special container you can run to get access to debugging tools. This container is run directly on Docker, not scheduled by Kubernetes. You can customize this container image to add some extra tools to it.
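For example, getting into the Toolbox on a node is as simple as this (the node name is hypothetical):
# SSH into the worker node
gcloud compute ssh gke-my-cluster-node-1 --zone us-central1-b
# on the node: start the debugging container
toolbox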

Instructions to install addons with Kubernetes 1.6 on bare metal machine?

I have set up my Kubernetes cluster from scratch following this doc: https://kubernetes.io/docs/getting-started-guides/scratch/
My Kubernetes master and worker are working correctly, but I didn't find instructions for deploying the DNS addon.
Addons can be deployed through yaml files as well as using the addon manager. I have already installed dashboard, monitoring, DNS manually using the yaml files provided (with small modifications) in this repo.
Please note that the addon manager is pretty special; you should copy all the files into a directory and then run:
./kube-addons.sh
Btw I prefer installing addons manually instead of using addon manager.
DNS addon manual example:
Take the kubedns-controller.yaml.sed,
Replace $DNS_DOMAIN with cluster.local (use the domain specified in your setup here); you can also set it as a variable. Note there are multiple occurrences in this file (the sed one-liner below handles them all).
Then:
mv kubedns-controller.yaml.sed kubedns-deployment.yaml
kubectl create -f kubedns-deployment.yaml
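Since there are several occurrences of $DNS_DOMAIN, scripting the substitution is less error-prone than editing by hand; one possible one-liner instead of the manual edit plus mv:
# substitute the placeholder and write the final manifest in one step
sed 's/\$DNS_DOMAIN/cluster.local/g' kubedns-controller.yaml.sed > kubedns-deployment.yaml
kubectl create -f kubedns-deployment.yaml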

Google Cloud - Deploy App to Specific VM Instance

I am using Google Cloud / Google Compute to host my application. I was on Google App Engine and I am migrating my code to Google Compute in order to use a customized VM Instance.
I am using the tutorial here, and I am deploying my app using:
$ gcloud preview app deploy
I set up a custom VM instance using the "Create Instance" option at the top of my Google Cloud Console.
However, when I use the standard deploy gcloud command, my app is deployed to Managed VMs (managed by Google), and I have no control over those servers. I need to run the app on my custom VM because it has some custom OS-level software.
Any ideas on how to deploy the app to my custom VM Instance only? Even when I delete all the Managed VMs and try to deploy, the VMs are just re-created by Google.
The gcloud app deploy command can only be used to deploy the app to classic AppEngine sandboxed environment or to the Managed VMs. It cannot deploy your application to an instance running on GCE.
You will need to incorporate your own deployment method/script depending on the programming language you're using. Of course, since GCE is just an infrastructure-as-a-service environment (versus AppEngine being a platform-as-a-service), you will also need to take care of high-availability (what happens when your instance becomes unavailable?), scalability (what happens when one instance is not enough to sustain the load of your application?), load balancing and many more topics you'll need to address.
Finally, if you need to install packages on your application servers, you may consider taking the Managed VMs route. It manages all the infrastructure-related matters for you (scalability, elasticity, monitoring, etc.) and still allows you to have your own custom runtime. It's still beta, though...
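As a very rough illustration of the "own deployment method" mentioned above, for a single GCE instance it could be as simple as copying the code over and restarting the app; the instance name, path and service name here are all hypothetical:
# copy the application code to the custom VM
gcloud compute scp --recurse ./my-app my-custom-vm:~/my-app --zone us-central1-b
# restart whatever runs the app on the VM (a hypothetical systemd unit)
gcloud compute ssh my-custom-vm --zone us-central1-b --command "sudo systemctl restart my-app"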
How to create a simple static website and deploy it on a Google Cloud VM instance
Recommended: Docker and Google Cloud SDK should be installed
Step 1:
Create a folder "personal-website" with index.html and other frontend files on your local computer.
Step 2:
Inside the "personal-website" folder, create a Dockerfile with these two lines:
FROM httpd
COPY . /usr/local/apache2/htdocs/personal-website
Step 3:
Build the image with Docker and push it to Google Container Registry.
You should have the Google Cloud SDK installed, a project selected, and Docker authorized.
Select the project using these commands:
gcloud config set project [PROJECT_ID]
gcloud config set compute/zone us-central1-b
After that, run these commands:
1. export PROJECT_ID="$(gcloud config get-value project -q)"
2. docker build -t gcr.io/${PROJECT_ID}/personal-website:v1 .
3. gcloud auth configure-docker
4. docker push gcr.io/${PROJECT_ID}/personal-website:v1
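Optionally, you can smoke-test the image locally before pushing; note that with the Dockerfile above, httpd serves the site under the /personal-website/ path:
# run the image locally, then open http://localhost:8080/personal-website/
docker run --rm -p 8080:80 gcr.io/${PROJECT_ID}/personal-website:v1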
Step 4:
Create a VM instance with the container running in it.
Run this command:
gcloud compute instances create-with-container apache-vm2 --container-image gcr.io/${PROJECT_ID}/personal-website:v1
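One step the walkthrough above leaves out: the instance must also accept HTTP traffic, which on Compute Engine requires a firewall rule. A sketch (the rule name and tag are hypothetical):
# allow HTTP to instances carrying the http-server tag
gcloud compute firewall-rules create allow-http --allow tcp:80 --target-tags http-server
# tag the VM so the rule applies to it
gcloud compute instances add-tags apache-vm2 --tags http-server --zone us-central1-b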