How to create a new service in an existing App Engine application without affecting the existing running services - google-app-engine-python

I have two running services on App Engine. One is the default service and the other is a Node.js service.
I want to deploy one more service, a Python service, using Docker.
How do I do that without affecting the existing services?
Is that possible using the UI, or does it have to be done from the terminal?

It's no different from when you created your second service:
Create the new service (e.g. create a folder for the service in your project folder and a yaml file to configure the service)
Update your dispatch.yaml file to route traffic to this new service
Deploy the new service
# assuming your new service is called 'service3' and it's in a folder called `service3`
$ gcloud app deploy --project <project> service3/service3.yaml
Deploy the updated dispatch.yaml
# assuming the dispatch.yaml is in your root folder
$ gcloud app deploy dispatch.yaml
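Since the question is about a Python service deployed with Docker, here is a minimal sketch of what those two yaml files might contain (the service name `service3`, the `/service3/*` path, and the flexible environment are illustrative assumptions):
```
# service3/service3.yaml -- App Engine flexible config for a Docker-based
# Python service; the Dockerfile lives in the same folder
runtime: custom
env: flex
service: service3

# dispatch.yaml -- route matching requests to the new service
dispatch:
  - url: "*/service3/*"
    service: service3
```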
See this article, which explains and ties together all the steps.

Related

Copy file from pod to GCS bucket

I am using GoCD for CI/CD. The result is a tar archive. I need to copy the resulting tar to a GCS bucket.
I have a gocd-agent Docker image with the Google Cloud SDK included.
I know how to use gcloud with a service account from a local machine, but not from inside a pod.
How do I use the service account assigned to the pod with gcloud auth on the pod?
The final goal is to use gsutil to copy the above-mentioned archive to a bucket in the same project.
My first thought would be to create a Secret based on the service account key, reference it in the pod yaml definition so it is mounted to a file, and then run gcloud auth from the pod using that file. There's more info in the Google Cloud docs.
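A minimal sketch of that approach, assuming a downloaded key file named key.json and a Secret called gcloud-key (all names here are illustrative):
```
# create the Secret from the service account key file
$ kubectl create secret generic gcloud-key --from-file=key.json

# pod spec excerpt mounting the Secret into the gocd-agent container
apiVersion: v1
kind: Pod
metadata:
  name: gocd-agent
spec:
  containers:
    - name: agent
      image: my-gocd-agent        # your image with the Google Cloud SDK baked in
      volumeMounts:
        - name: gcloud-key
          mountPath: /var/secrets/google
  volumes:
    - name: gcloud-key
      secret:
        secretName: gcloud-key

# inside the pod: activate the account, then copy the archive
$ gcloud auth activate-service-account --key-file=/var/secrets/google/key.json
$ gsutil cp result.tar gs://<your-bucket>/
```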
Another option, which is quite new, is to use Workload Identity. It seems you'd need to configure the GKE cluster to enable this option, and it only works for some versions of GKE.

How to authenticate an API for creating pods using Kubernetes certificates

Query: I want to authenticate users for a REST API (creating pods) using certificates in Kubernetes.
So far: I have created the user's crt file and private key file, added roles in a roles.yaml file, did the role binding in a yaml file, and added that user to the config file as well. If I try to create a pod as that user using terminal commands in that namespace, the user gets an error (permission denied), which is correct, as no permission was given to this user for creating pods.
Problem: I want to do the same thing in a Spring Boot application, but I have no idea how to do this.
You should use the fabric8 Kubernetes client or the official Kubernetes Java client library in the Spring Boot application, and use a service account (recommended) or a kubeconfig file to invoke the Kubernetes APIs.
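On the authorization side, a hedged sketch of the RBAC objects that would let such a service account create pods (all names and the namespace are assumptions):
```
# service account the Spring Boot app authenticates as
apiVersion: v1
kind: ServiceAccount
metadata:
  name: pod-creator
  namespace: my-namespace
---
# role granting pod creation in that namespace
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: create-pods
  namespace: my-namespace
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["create", "get", "list"]
---
# bind the role to the service account
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: pod-creator-binding
  namespace: my-namespace
subjects:
  - kind: ServiceAccount
    name: pod-creator
    namespace: my-namespace
roleRef:
  kind: Role
  name: create-pods
  apiGroup: rbac.authorization.k8s.io
```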

How to change OpenShift console URL and API URL

My company runs an OpenShift v3.10 cluster consisting of 3 masters and 4 nodes. We would like to change the URL of the OpenShift API and also the URL of the OpenShift web console. Which steps do we need to take to do so successfully?
We have already tried updating the openshift_master_cluster_hostname and openshift_master_cluster_public_hostname variables to new DNS names, which resolve to our F5 virtual hosts that load balance the traffic between our masters, and then started the upgrade Ansible playbook, but the upgrade fails. We have also tried to run the Ansible playbook which redeploys the cluster certificates, but after that step the status of the OpenShift nodes changes to NotReady.
We have solved this issue. What we had to do was change the URLs defined in the variables in the inventory file and then execute the Ansible playbook that updates the master configuration. The process of running that playbook is described in the official documentation.
After that we also had to update the OpenShift web console configuration map with the new URLs and then scale the web-console deployment down and back up. The process of updating the web console configuration is described here.
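A hedged sketch of that last step (the namespace, config map, and deployment names match common OpenShift 3.x defaults, but verify them in your cluster):
```
# update the console URLs in the config map
$ oc edit configmap webconsole-config -n openshift-web-console

# restart the console by scaling the deployment down and back up
$ oc scale deployment webconsole --replicas=0 -n openshift-web-console
$ oc scale deployment webconsole --replicas=1 -n openshift-web-console
```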

Setting up Spring Cloud Data Flow on Kubernetes

Do I need to install an instance of Spring Cloud Data Flow on the master server myself, or does this get installed "automatically" as part of the deployment?
This isn't quite clear to me from the description at
http://docs.spring.io/spring-cloud-dataflow-server-kubernetes/docs/current-SNAPSHOT/reference/htmlsingle/#_deploying_streams_on_kubernetes
I've followed the guide, though I removed every config for MySQL. Maybe that is required. I'm somewhat stuck, since the service is just not being assigned an external IP, and I don't see why, how to debug this, or whether I missed installing some required component.
Edit:
To clarify: I see a scdf service entry when I run
kubectl get svc
but this service never gets an external IP.
Do I need to install an instance of Spring Cloud Data Flow on the master server myself, or is this getting installed "automatically" as part of the deployment?
The Spring Cloud Data Flow server needs to be set up either outside the cluster (configured so that it knows how to connect to the Kubernetes environment) or by running the Spring Cloud Data Flow server Docker image inside Kubernetes; the latter approach is better.
Step 6 in the link you posted above runs the SCDF Docker image inside the Kubernetes cluster:
```
Deploy the Spring Cloud Data Flow Server for Kubernetes using the Docker image and the configuration settings you just modified.
$ kubectl create -f src/etc/kubernetes/scdf-config-kafka.yml
$ kubectl create -f src/etc/kubernetes/scdf-secrets.yml
$ kubectl create -f src/etc/kubernetes/scdf-service.yml
$ kubectl create -f src/etc/kubernetes/scdf-controller.yml
```
MySQL is required; that's why it's in the steps.
Spring Cloud Data Flow uses an RDBMS instead of Redis for stream/task
definitions, application registration, and for job repositories.
You can also use any of the other supported RDBMSes.
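For the missing external IP specifically, a hedged debugging sketch (the service name scdf is taken from the question):
```
# an external IP is only assigned when the service is of type LoadBalancer
# and the cluster's provider can actually provision one
$ kubectl describe svc scdf

# on clusters without a load balancer (e.g. minikube), fall back to NodePort
$ kubectl patch svc scdf -p '{"spec": {"type": "NodePort"}}'
$ kubectl get svc scdf   # note the node port, then use <node-ip>:<node-port>
```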
You can install it using Helm charts:
https://dataflow.spring.io/docs/installation/kubernetes/helm/
First install Helm, then install Spring Cloud Data Flow:
$ helm install --name my-release stable/spring-cloud-data-flow
It will install and configure the relevant pods, such as spring-cloud-dataflow-server, mysql, skipper, rabbitmq, etc.
You can also customize versions and configuration.
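A hedged example of such customization (Helm 2 syntax to match the command above; the value names are assumptions to verify against the chart):
```
# list the chart's tunable values before overriding anything
$ helm inspect values stable/spring-cloud-data-flow

# example override -- check these value names against the output above
$ helm install --name my-release --set kafka.enabled=true,rabbitmq.enabled=false stable/spring-cloud-data-flow
```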

Google Cloud - Deploy App to Specific VM Instance

I am using Google Cloud / Google Compute Engine to host my application. I was on Google App Engine and I am migrating my code to Compute Engine in order to use a customized VM instance.
I am following the tutorial here, and I am deploying my app using:
$ gcloud preview app deploy
I set up a custom VM instance using the "Create Instance" option at the top of my Google Cloud Console.
However, when I use the standard gcloud deploy command, my app is deployed to Managed VMs (managed by Google), and I have no control over those servers. I need to run the app on my custom VM because it has some custom OS-level software.
Any ideas on how to deploy the app to my custom VM instance only? Even when I delete all the Managed VMs and try to deploy, the VMs are just re-created by Google.
The gcloud app deploy command can only be used to deploy the app to the classic App Engine sandboxed environment or to Managed VMs. It cannot deploy your application to an instance running on GCE.
You will need to put together your own deployment method/script depending on the programming language you're using. Of course, since GCE is just an infrastructure-as-a-service environment (versus App Engine being a platform-as-a-service), you will also need to take care of high availability (what happens when your instance becomes unavailable?), scalability (what happens when one instance is not enough to sustain the load of your application?), load balancing, and many more topics.
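As a hedged sketch of what such a self-managed deployment could look like (the instance name, zone, paths, and restart command are all assumptions):
```
# copy the application to the custom VM
$ gcloud compute scp --recurse ./myapp/ my-custom-vm:~/myapp --zone us-central1-b

# restart the app remotely; the command depends on how the app is supervised
$ gcloud compute ssh my-custom-vm --zone us-central1-b --command "sudo systemctl restart myapp"
```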
Finally, if you need to install packages on your application servers, you may consider taking the Managed VMs route. It manages all the infrastructure-related matters for you (scalability, elasticity, monitoring, etc.) and still allows you to have your own custom runtime. It's still beta, though...
How to create a simple static website and deploy it on a Google Cloud VM instance
Recommended: Docker and the Google Cloud SDK should be installed.
Step 1:
Create a folder "personal-website" with index.html and your frontend files on your local computer.
Step 2:
Inside the "personal-website" folder, create a Dockerfile with these two lines:
FROM httpd
COPY . /usr/local/apache2/htdocs/personal-website
Step 3:
Build the image with Docker and push it to Google Container Registry.
You should have the Google Cloud SDK installed, a project selected, and Docker authorized.
Select the project using these commands:
gcloud config set project [PROJECT_ID]
gcloud config set compute/zone us-central1-b
After that, run these commands:
1. export PROJECT_ID="$(gcloud config get-value project -q)"
2. docker build -t gcr.io/${PROJECT_ID}/personal-website:v1 .
3. gcloud auth configure-docker
4. docker push gcr.io/${PROJECT_ID}/personal-website:v1
Step 4:
Create a VM instance with the container running in it.
Run this command:
1. gcloud compute instances create-with-container apache-vm2 --container-image gcr.io/${PROJECT_ID}/personal-website:v1
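To verify the result, a hedged final check (the firewall rule name is an assumption; with the Dockerfile above, the site is served under the /personal-website path):
```
# allow inbound HTTP traffic to the instance
$ gcloud compute firewall-rules create allow-http --allow tcp:80

# find the instance's external IP, then browse to http://<EXTERNAL_IP>/personal-website
$ gcloud compute instances list
```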