I would like to get shell access to the Persistent Volume I created on the Google Cloud Platform.
I tried using Google Cloud Shell for this, but to do that I need to attach the Persistent Volume through the gcloud commands, and the attach command requires an instance name. However, I don't see an instance name for Google Cloud Shell when I list the instances in gcloud.
Is it possible to get shell access to the persistent disks through Google Cloud Shell? If not, how can I get access to the Persistent Volume that I created?
Yes, all disks need to be attached to an instance before you can access them - you will need to create a Compute Engine instance and attach the persistent disk with gcloud compute instances attach-disk [INSTANCE_NAME] --disk [DISK_NAME].
Once you create the new instance, its name will show up when you run gcloud compute instances list.
You will then be able to access the disk by ssh'ing into the instance and mounting it.
The following will help with mounting:
https://cloud.google.com/compute/docs/disks/add-persistent-disk
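For example, a minimal sketch assuming a hypothetical instance named my-instance, a disk named my-data-disk, and zone us-central1-a; the device path /dev/sdb will vary, and mkfs erases the disk, so only format a new, empty disk:

gcloud compute instances attach-disk my-instance --disk my-data-disk --zone us-central1-a
gcloud compute ssh my-instance --zone us-central1-a
# On the instance: format the disk (only if it is new and empty!) and mount it.
sudo mkfs.ext4 -m 0 -E lazy_itable_init=0,lazy_journal_init=0,discard /dev/sdb
sudo mkdir -p /mnt/disks/my-data
sudo mount -o discard,defaults /dev/sdb /mnt/disks/my-data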
You don't see the Cloud Shell's instance name in your list of VMs because it isn't owned by your project (and thus, to answer your question, you won't have permission to attach your persistent disks to it). You can verify this by querying the "/zone" endpoint of the Cloud Shell's metadata server, via curl "http://metadata.google.internal/computeMetadata/v1/instance/zone" as described in the GCE docs.
As Ryank mentioned, you'd need to attach the disk to an instance owned by your project, then SSH into it.
I'm using GKE to run K8s workloads and want to add TPU support. From the GCP docs, I "need" to attach a GCS bucket so the Job can read models and store logs. However, we already create shared NFS mounts for our K8s clusters. How hard of a requirement is it to "need" GCS to use TPUs? Can shared Filestore NFS mounts work just fine? What about using GCS Fuse?
I'm trying to avoid having the cluster user know about the back-end file system (NFS vs GCS), and just know that the files they provide will be available at "/home/job". Since the linked docs show passing a gs://mybucket/some/path value for the file system parameters, I'm not sure if a /home/job value will still work. Does the TPU access the filesystem directly, and is it only compatible with GCS? Or do the nodes access the filesystem (preferring GCS) and then share the data (in memory) with the TPUs?
I'll try it out to learn the hard way (and report back), but curious if others have experience with this already.
We are containerizing a dotnet application on a GKE cluster (Windows node pool). We have a requirement where multiple pods need to access the same shared space (persistent volume), and it should support the "ReadWriteMany" AccessMode. We have explored the options below:
GCE Persistent Disk accessed through a Persistent Volume (it doesn't support ReadWriteMany; only one pod can access the disk).
Network File Share (NFS): currently not supported for Windows node pools.
Filestore fits the requirement, but it is expensive and managed by Google.
We are looking for other options that fit our requirement. Please help.
You are right that NFS isn't yet supported on Windows, at least not with the built-in NFS v4 client. As long as Windows has no NFS v4 support, the Kubernetes team cannot start this work in k8s (source).
With this constraint, the only solution I can see remains Filestore.
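If you do go with Filestore, here is a minimal sketch of creating an instance with gcloud (the instance name, zone, share name, and capacity are placeholders):

gcloud filestore instances create my-filestore --zone=us-central1-c --tier=BASIC_HDD --file-share=name=share1,capacity=1TB --network=name=default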
I've been trying to solve the same problem - accessing a shared filesystem from 2 Windows pods (an ASP.NET application on IIS + a console application). I wasn't able to use Filestore because it requires an NFS client (Install-WindowsFeature NFS-Client) and I couldn't install it into the containers (during the container build or at runtime) since it requires a computer restart - maybe I'm missing something here.
The options I've found:
If you need a simple temporary demo application that can run on a single VM, you can run both pods on a single instance: create a Persistent Disk, attach it to the instance with gcloud compute instances attach-disk, RDP into the instance, mount the disk, and provide the disk to the pods as a hostPath (see the sketch after this list).
That's the solution I'm using now.
Create an SMB share (on a separate VM or using a Docker container, https://hub.docker.com/r/dperson/samba/) and access it from the pods using New-SmbMapping -LocalPath $shareletter -RemotePath $dhcpshare -Username $shareuser -Password $sharepasswd -Persistent $true. This solution worked for my console application, but the web application couldn't access the files (even though I set the application pool in IIS to run as Local System). The SMB share could also be mounted from the instance using New-SmbGlobalMapping - the flexvolume does that: https://github.com/microsoft/K8s-Storage-Plugins/tree/master/flexvolume/windows. I haven't explored that option, and I think it would have the same problem (IIS not seeing the files).
I think the best (most secure and reliable) solution would be to set up an Active Directory Domain Controller and an SMB share on a separate VM and give the containers access to it using gMSA: https://learn.microsoft.com/en-us/virtualization/windowscontainers/manage-containers/manage-serviceaccounts
https://kubernetes.io/docs/tasks/configure-pod-container/configure-gmsa/
That doesn't seem easy though.
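A rough sketch of the first option above, assuming hypothetical disk and instance names and zone; the drive letter and folder are examples only:

gcloud compute disks create demo-data-disk --size 50GB --zone us-central1-a
gcloud compute instances attach-disk my-windows-node --disk demo-data-disk --zone us-central1-a
# Then RDP into the node, bring the disk online with a drive letter (e.g. D:),
# and reference a folder on it from both pods as a hostPath volume,
# e.g. hostPath: { path: "D:\\shared" } in the pod spec.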
I am using Auto Scaling with a Load Balancer and have attached 2 EBS volumes.
Now, whenever an instance is terminated, a snapshot of the EBS volumes is stored.
I have gone through several links but cannot find how to retrieve/mount the EBS volume when a Launch Configuration launches a new instance.
Can I get a reference or a PowerShell script to identify a volume by tag name from the volume list and mount it when the instance is initializing?
There is no automatic facility to mount an existing EBS snapshot or volume when Auto Scaling launches an instance.
Best practice for Auto Scaling is to store data off-instance, such as in Amazon S3 or Amazon EFS. This way, the data is accessible to all instances simultaneously and can be used by new instances that are launched.
There is also no automatic facility to create an EBS snapshot when an Auto Scaling instance is terminated. Rather, there is the option to Delete on Termination, which controls whether the EBS volume should be deleted when the instance is terminated. If this option is off, then the EBS volumes will remain after an instance is terminated. You could write some code (e.g. in a User Data script) that re-attaches an EBS volume to a new instance launched by Auto Scaling, but this can get messy. (For example: Which volume to attach to which instance? What happens if more instances are launched?)
Bottom line: Yes, you could write a script to do this, but it is a poor architectural design.
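If you decide to script it anyway, here is a minimal User Data sketch in bash (the equivalent cmdlets exist in the AWS Tools for PowerShell); the tag value, device name, and mount point are assumptions:

#!/bin/bash
# Look up this instance's ID and availability zone from instance metadata.
INSTANCE_ID=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)
AZ=$(curl -s http://169.254.169.254/latest/meta-data/placement/availability-zone)
REGION=${AZ%?}
# Find an available EBS volume tagged Name=shared-data in the same AZ (hypothetical tag).
VOLUME_ID=$(aws ec2 describe-volumes --region "$REGION" \
  --filters "Name=tag:Name,Values=shared-data" "Name=availability-zone,Values=$AZ" "Name=status,Values=available" \
  --query 'Volumes[0].VolumeId' --output text)
# Attach the volume, wait for the device to appear, then mount it.
aws ec2 attach-volume --region "$REGION" --volume-id "$VOLUME_ID" --instance-id "$INSTANCE_ID" --device /dev/xvdf
sleep 15
sudo mkdir -p /data
sudo mount /dev/xvdf /data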
Yes, you can attach (mount) an EBS volume to an EC2 instance using the AWS CLI command line tool. You run this command in the EC2 User Data at instance launch.
Running Commands on Your Linux Instance at Launch
AWS CLI attach-volume
Note: There is a problem with this strategy. The ASG Launch Configuration creates new EC2 instances that are identical, which means you would be attempting to attach the same EBS volume to each instance, and that will fail. You may want to consider using EFS instead.
Amazon Elastic File System
Mount EFS on EC2 using the AWS CLI
Note: Use IAM roles to provide your instances with credentials instead of storing credentials on the EC2 instance.
Once you have configured your "master" EC2 instance, create a new AMI for your ASG launch configuration.
When mounted on Amazon EC2 instances, an Amazon EFS file system provides a standard file system interface and file system access semantics, allowing you to seamlessly integrate Amazon EFS with your existing applications and tools. Multiple Amazon EC2 instances can access an Amazon EFS file system at the same time, allowing Amazon EFS to provide a common data source for workloads and applications running on more than one Amazon EC2 instance.
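For example, a minimal User Data sketch for mounting an EFS file system at boot on Amazon Linux (the file system ID fs-12345678 and the mount point are placeholders):

#!/bin/bash
# Install the EFS mount helper and mount the file system.
sudo yum install -y amazon-efs-utils
sudo mkdir -p /mnt/efs
sudo mount -t efs fs-12345678:/ /mnt/efs
# Alternatively, without the helper, mount it as plain NFS v4.1:
# sudo mount -t nfs4 -o nfsvers=4.1 fs-12345678.efs.us-east-1.amazonaws.com:/ /mnt/efs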
I'm following the Spring Cloud Data Flow "Getting Started" guide here (section 13): http://docs.spring.io/spring-cloud-dataflow-server-kubernetes/docs/current-SNAPSHOT/reference/htmlsingle/#_deploying_streams_on_kubernetes
I'm new to cloud computing, and I'm stuck at the point where I should create a disk for a MySQL DB via gcloud:
gcloud compute disks create mysql-disk --size 200 --type pd-standard
Well, that throws:
The required property [project] is not currently set.
There is one thing that I don't quite understand yet (not my main question): gcloud requires me to register a project in my Google account. I wonder how my Google account (and the cloud project there), the to-be-created disk image, and the server are related to one another.
My actual question, though, is: how can I create the disk for the master server locally without using gcloud? I don't want a cloud server connected to my Google account.
Kubernetes does not manage any remote storage on its own. You can manage local storage by mounting an emptyDir volume.
gcloud creates cloud block storage in your Google Cloud account, and on Google Container Engine (GKE) Kubernetes is configured to access these resources by ID and can mount this type of volume into your Pod.
If you're not running Kubernetes on GKE, then you can't really mount a Google Cloud volume into your pod: the resources need to be managed by the same provider.
Background
I have a Java Servlet application that runs in Tomcat, which runs in a Docker container, which runs on Google Container Engine. It is no big deal to extend the Docker image so that it also fetches and refreshes the certificates (there is only a single pod per domain, so no inter-pod communication is required). However, certbot needs to save its credentials and certificates somewhere, and the pod's filesystem seems like a bad idea because it is ephemeral and won't survive a pod restart. According to the table of storage options, Google Cloud Storage seems like a good idea: it is very cheap, the volume is auto-sized, I can access it from multiple locations (I don't need to create one disk for each individual pod, which would be pretty much empty anyway) including the web UI (the latter may be useful for debugging), and throughput and latency are really no issue for this use case.
Question
I created a bucket and now I want to access that bucket from a container. Google describes here and yet again here that I can mount the bucket using FUSE. What they don't mention is that you need to make the container privileged to use FUSE, which does not feel quite right to me. Additionally, I would need to install the whole Google Cloud SDK and set up authentication (which I am going to store... where?). But actually I don't really need FUSE access. Just downloading the config on startup and uploading it after each refresh would be enough, so something that works similarly to SCP would do...
There is gcloud, which can access files from the command line without the need for FUSE, but it still needs to be initialized with credentials somehow.
Here user326502 mentions
It won't work with zero configuration if the App Engine SDK is installed [..] As long as the container lives on a Google Compute Engine instance you can access any bucket in the same project.
He explains further that I magically don't need any credentials when I just use the library. I guess I could write my own copy application with those libraries, but the fact that I couldn't find anything like this from anyone on the net makes me feel that I am completely on the wrong track.
So how would one actually access a google cloud storage bucket from within a container (as simple as possible)?
You can use gsutil to copy from the bucket to the local disk when the container starts up.
If you are running in Google Container Engine, gsutil will use the service account of the cluster's nodes (to do this, you'll need to specify the storage-ro scope when you create your cluster).
Alternatively, you can create a new service account and generate a JSON key for it. In Container Engine, you can store that key as a Kubernetes secret, and then mount the secret in the pod that needs to use it. From that pod, you'd configure gsutil to use the service account by calling gcloud auth activate-service-account --key-file /path/to/my/mounted/secret-key.json
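For example, a minimal startup sketch (the bucket name and paths are placeholders; the activate-service-account line is only needed when using a mounted key instead of the node's service account):

#!/bin/sh
# Optional: authenticate with a mounted service-account key.
# gcloud auth activate-service-account --key-file /path/to/my/mounted/secret-key.json
# Pull the certbot state from the bucket on startup...
gsutil -m cp -r gs://my-certbot-bucket/letsencrypt /etc/
# ...and push it back to the bucket after each renewal.
gsutil -m cp -r /etc/letsencrypt gs://my-certbot-bucket/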