Google Cloud - Deploy App to Specific VM Instance - deployment

I am using Google Cloud / Google Compute to host my application. I was on Google App Engine and I am migrating my code to Google Compute in order to use a customized VM Instance.
I am using the tutorial here, and I am deploying my app using:
$ gcloud preview app deploy
I set up a custom VM instance using the "Create Instance" option at the top of my Google Cloud Console.
However, when I use the standard deploy gcloud command, my app is deployed to Managed VMs (managed by Google), and I have no control over those servers. I need to run the app on my custom VM because it has some custom OS-level software.
Any ideas on how to deploy the app to my custom VM Instance only? Even when I delete all the Managed VMs and try to deploy, the VMs are just re-created by Google.

The gcloud app deploy command can only be used to deploy the app to the classic App Engine sandboxed environment or to Managed VMs. It cannot deploy your application to an instance running on GCE.
You will need to incorporate your own deployment method/script, depending on the programming language you're using. Of course, since GCE is just an infrastructure-as-a-service environment (versus App Engine being a platform-as-a-service), you will also need to take care of high availability (what happens when your instance becomes unavailable?), scalability (what happens when one instance is not enough to sustain the load of your application?), load balancing, and many more topics.
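For illustration only, a bare-bones deployment could be as simple as copying the code to the instance and starting it over SSH; the instance name, zone, and start command below are assumptions, not details from the question:
$ gcloud compute scp --recurse ./my-app my-custom-vm:~/my-app --zone us-central1-b
$ gcloud compute ssh my-custom-vm --zone us-central1-b --command "cd ~/my-app && npm install && npm start"
In practice you would wrap this in a script or a configuration-management tool and run the app under a process supervisor rather than from a foreground SSH command.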
Finally, if you need to install packages on your application servers, you may consider taking the Managed VMs route. It manages all the infrastructure-related matters for you (scalability, elasticity, monitoring, etc.) and still allows you to have your own custom runtime. It's still beta, though...

How to create a simple static website and deploy it on a Google Cloud VM instance
Recommended: Docker and Google Cloud SDK should be installed
Step 1:
Create a folder “personal-website” with index.html and your frontend files on your local computer.
Step 2:
Inside the “personal-website” folder, create a Dockerfile with the following two lines:
FROM httpd
COPY . /usr/local/apache2/htdocs/personal-website
Step 3:
Build the image with Docker and push it to Google Container Registry.
You should have the Google Cloud SDK installed, a project selected, and Docker authorized.
Select your project and default zone using these commands:
gcloud config set project [PROJECT_ID]
gcloud config set compute/zone us-central1-b
After that, run these commands:
1. export PROJECT_ID="$(gcloud config get-value project -q)"
2. docker build -t gcr.io/${PROJECT_ID}/personal-website:v1 .
3. gcloud auth configure-docker
4. docker push gcr.io/${PROJECT_ID}/personal-website:v1
Step 4:
Create a VM instance with the container running on it.
Run this command:
gcloud compute instances create-with-container apache-vm2 --container-image gcr.io/${PROJECT_ID}/personal-website:v1
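To actually reach the site you may also need to open HTTP to the instance; a quick sketch (the firewall rule name is arbitrary, and the page sits under /personal-website/ because of the COPY path in the Dockerfile):
$ gcloud compute firewall-rules create allow-http --allow tcp:80
$ gcloud compute instances describe apache-vm2 --format='value(networkInterfaces[0].accessConfigs[0].natIP)'
$ curl http://<EXTERNAL_IP>/personal-website/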

Related

How to develop using OpenVSCode and a remote container

I would like to use OpenVSCode for cloud development in a microservices-oriented environment.
I was thinking of the following architecture/setup:
Use K8s as the runtime environment.
OVSC & dev pods run as dedicated/separate pods (not sidecars).
Code sharing is done via NFS, Syncthing, etc.
The documentation showcases a setup where OVSC operates/runs as the dev pod itself. When running as described above (IDE & dev pods running as separate pods), I noticed that dev-related libraries (e.g. Golang packages) are not available, as they are installed on the dev pod.
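For context, a rough sketch of the IDE pod in the layout I have in mind (pod, image, and claim names are just placeholders; the dev pod would mount the same claim):
apiVersion: v1
kind: Pod
metadata:
  name: openvscode
spec:
  containers:
    - name: openvscode-server
      image: gitpod/openvscode-server
      volumeMounts:
        - name: workspace
          mountPath: /workspace
  volumes:
    - name: workspace
      persistentVolumeClaim:
        claimName: shared-workspace
Both pods would need to mount the same claim, which generally implies a ReadWriteMany-capable volume such as NFS, matching the code-sharing idea above.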
Q:
What is needed in order to support such a setup?
Is it possible to init OVSC in such a way that it will execute commands/open the terminal on a remote container by default?
Thanks!

How to create a new service in an existing App Engine application without affecting the existing running services

I have two running services on App Engine. One is the default service and the other is a Node.js service.
I want to deploy another service, a Python service, using Docker.
How do I do that? Does anyone have an idea of the steps, in terms of not affecting the existing services?
Is that possible using the UI, or does it have to be done using the terminal?
It's no different from when you created your second service:
Create the new service (e.g. create a folder for the service in your project folder and create a yaml file to configure the service)
Update your dispatch.yaml file to route traffic to this new service
Deploy the new service
// assuming your new service is called 'service3' and it's in a folder called 'service3'
$ gcloud app deploy --project <project> service3/service3.yaml
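Since the new service in this case is a Python service using Docker, its yaml would point at a custom runtime in the flexible environment; a minimal sketch (the service name is hypothetical, and a Dockerfile sits next to it in the service3 folder):
runtime: custom
env: flex
service: service3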
Deploy the updated dispatch.yaml
// assuming the dispatch.yaml is in your root folder
$ gcloud app deploy dispatch.yaml
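For illustration, the dispatch.yaml entry routing traffic to the new service could look like this (the URL pattern is just an example):
dispatch:
  - url: "*/service3/*"
    service: service3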
See this article explaining/tying together all the steps.

Override deployed Helm chart values on GKE with values from a file on the local machine?

I would like to update my deployed (GKE) Helm chart values with the ones that are inside my local file, basically to do this:
helm upgrade -f new-values.yml {release name} {package name or path}
So I've made all the changes inside my local file, but the deployment is inside the GKE cluster.
I've connected to my cluster via ssh, but how can I run the above command to perform the update if the file with the new values is on my local machine and the deployment is inside the GKE cluster?
Maybe somehow via the scp command?
Solution by setting up the required tools locally (you'll need a little while for that)
You just need to reconfigure your kubectl client, which can be done pretty straightforwardly. When you log in to the GCP Console -> go to Kubernetes Engine -> Clusters -> click on Actions (3 vertical dots to the right of the cluster name) -> select Connect -> copy the command, which may resemble the following one:
gcloud container clusters get-credentials my-gke-cluster --zone europe-west4-c --project my-project
It assumes you have the Cloud SDK and kubectl already installed on your local machine. If you have not, here is a step-by-step description of how to do that:
Installing Google Cloud SDK [Debian/Ubuntu] (if you use a different OS, simply choose another tab)
Installing kubectl tool [Debian/Ubuntu] (choose your OS if it is something different)
Once you run the above command on your local machine, your kubectl context will automatically be set to your GKE cluster, even if it was previously set to something else, e.g. your local Minikube instance. You can check it by running:
kubectl config current-context
OK, almost done. Did I also mention helm? Well, you will also need it. So if you have not installed it on your local machine previously, please do it now:
Install helm [Debian/Ubuntu]
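Putting it together, the whole thing run from your local machine might look like this (the release and chart names are placeholders for your own):
$ gcloud container clusters get-credentials my-gke-cluster --zone europe-west4-c --project my-project
$ kubectl config current-context   # should now point at the GKE cluster
$ helm upgrade -f new-values.yml my-release ./my-chart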
Alternative solution using Cloud Shell (much quicker)
If installing and configuring it locally seems like too much hassle, you can simply use Cloud Shell (I bet you've used it before). In case you haven't, once logged in to your GCP Console, click on the following icon:
Once logged into Cloud Shell, you can choose to upload your local files there:
simply click on More (3 dots again):
and choose Upload a file:
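Once new-values.yml is uploaded, the same upgrade can be run from Cloud Shell, which already has gcloud, kubectl, and helm preinstalled (release and chart names below are placeholders):
$ gcloud container clusters get-credentials my-gke-cluster --zone europe-west4-c --project my-project
$ helm upgrade -f ~/new-values.yml my-release ./my-chart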

Can a Service Fabric Container project pull from Docker Hub?

I have created a new Service Fabric Container project in Visual Studio that I am trying to test by publishing to the local cluster. I have created a Windows Container image that I have run locally in Docker. I pushed the image to a private registry in Docker Hub.
When I publish the project to the local cluster, it deploys, but then I get an error:
Error event: SourceId='System.Hosting', Property='Download:1.0:1.0'.
There was an error during download.Failed to download container image docker.io/(username)/(repository)
All the examples show pulling an image from Azure Container Registry. Does Service Fabric only work with ACR, or do I have to add additional configuration to my service manifest to use a private Docker Hub registry?
Edit: also, it seems unable to find the container locally. I tried using the tagged local name of the image from the local repository (I checked using "docker images" and it is there). Same result. Service Fabric should be able to find it:
Service Fabric will pull down the image (if it's not already in the local registry) and launch a container based on the arguments you provide.
from an MSDN blog post on Service Fabric
It looks like the problem is that Service Fabric does not support container deployment on Windows 10 (and my dev machine is Win10, so local development/testing is out). There are notes to this effect on the Azure Documentation but I guess I didn't notice them or glossed over them...

Run kubernetes from source and configure cloud provider

Is it possible to run Kubernetes from source (./hack/local-up-cluster.sh) and still properly configure the cloud provider with this type of setup? For example, an instance is running on AWS EC2 and all prerequisites are met, including proper exports, the aws cli, and configs, but I keep getting an error stating that the cloud provider was not found. KUBERNETES_PROVIDER=aws, the zone is set to us-west-2a, etc...
Failed to get AWS Cloud Provider. plugin.host.GetCloudProvider returned <nil> instead
I don't think hack/local-up-cluster.sh is designed to be run on a cloud provider. However, cluster/kube-up.sh is designed to work when building from source:
$ make release
$ export KUBERNETES_PROVIDER=aws
$ cluster/kube-up.sh # Uses the release built in step 1
There are lots of options which can be configured, and you can find more details here (just ignore the part about https://get.k8s.io).
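As a rough illustration, the AWS-specific knobs are environment variables read by the legacy cluster/aws scripts; the variable names can differ between releases, so treat this as a sketch rather than a definitive list:
$ export KUBERNETES_PROVIDER=aws
$ export KUBE_AWS_ZONE=us-west-2a
$ export MASTER_SIZE=m3.medium
$ export NODE_SIZE=m3.medium
$ cluster/kube-up.sh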