I have a working VM instance that I'm trying to copy, to allow redundancy behind a Google load balancer.
A test run with a dummy instance worked fine, creating a new instance from a snapshot of a running one.
Now, the real "original" instance has a persistent disk attached, and this causes a problem when starting up the cloned instance because of the (obviously) missing persistent disk mount.
The serial console output logs look like this:
* Stopping cold plug devices [ OK ]
* Stopping log initial device creation [ OK ]
* Starting enable remaining boot-time encrypted block devices [ OK ]
The disk drive for /mnt/XXXX-log is not ready yet or not present.
keys:Continue to wait, or Press S to skip mounting or M for manual recovery
As I understand it, there is no way to send any of these keystrokes to the instance. Is there any other way to overcome this issue? I know that I could unmount the disk before the snapshot, but the workflow I would like to set up is creating periodic snapshots of production servers, so unmounting disks every time before taking one would require instance downtime (plus all the unnecessary risks of doing an action that would seem pointless).
Is there a way to boot this type of cloned instance successfully, and attach a new persistent disk afterwards?
Is this happening because the original persistent disk is in use, or would the same problem occur even if the original instance were offline (for example due to a failure, in which case I would try to create a new instance from a snapshot)?
One workaround that I am using to get around the same issue is that I don't actually unmount the disk; rather, I just comment out the mount line in /etc/fstab and take the snapshot. This way my instance has no downtime or unmounted disks while snapshotting. (I am using Ubuntu 14.04 as the OS, if that matters.)
Later I fix and uncomment it when I use that snapshot on a new instance.
However, you can also look into adding the nofail option to that mount line instead of commenting it out, which is a cleaner solution.
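For reference, here is roughly what that fstab entry could look like (the UUID, mount point, and filesystem type below are hypothetical placeholders; adjust them to your own disk):

    # /etc/fstab
    # With nofail, boot continues even when this disk is absent, so a cloned
    # instance no longer hangs at the "disk drive ... not ready" prompt.
    UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  /mnt/XXXX-log  ext4  defaults,nofail  0  2

Once the new persistent disk is attached to the clone, the same entry mounts it normally on the next boot.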
By the way, I am doing a similar task: building a load-balanced setup with multiple web server nodes, each cloned from the same snapshot, with extra persistent disks mounted for uploads, data, logs, etc.
I'm a little unclear as to what you're trying to accomplish. It sounds like you're looking to periodically snapshot the data volumes of a production server so you can clone them later.
In all likelihood, you simply need to sync and fsfreeze the filesystem before you make your snapshot, rather than unmounting/remounting it. The GCP documentation has a basic example of this in the Snapshots documentation.
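As a rough sketch (the disk name, mount point, and zone below are placeholders, not values from the question):

    # Flush pending writes and freeze the filesystem so the snapshot is consistent
    sudo sync
    sudo fsfreeze -f /mnt/XXXX-log

    # Take the snapshot while the filesystem is frozen
    gcloud compute disks snapshot my-data-disk \
        --snapshot-names my-data-snap-$(date +%Y%m%d) \
        --zone us-central1-a

    # Unfreeze immediately afterwards so the application can keep writing
    sudo fsfreeze -u /mnt/XXXX-log

Keep the freeze window as short as possible; writes block while the filesystem is frozen.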
I want to learn the proper way to access processed files when restarting a Spring Batch application on Kubernetes. In particular, if the output target is a file, it is deleted together with the pod after the job fails.
We are considering using a persistent volume, or backing up the created file somewhere such as a database or an SFTP server by implementing a listener.
Does anyone have experience using persistent volumes (NFS or other solutions) for file operations? We are concerned about performance and unexpected problems. Do you have any suggestions?
Thank you.
You should not rely on the ephemeral file system of a Pod for files that should persist and survive a Job (Pod) failure.
You need to use a persistent volume for that, so that Spring Batch can find the (incomplete) output file in a restart scenario and resume writing where it left off.
If you want data persistence, you may begin by using hostPath volumes first. This will restrict which nodes your pods may be scheduled on, but it is the simplest option and gives you the best performance.
https://kubernetes.io/docs/concepts/storage/volumes/#hostpath
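As a minimal sketch (the Pod name, image, and paths below are assumptions, not from the question), a Pod using a hostPath volume looks roughly like this:

    cat <<'EOF' | kubectl apply -f -
    apiVersion: v1
    kind: Pod
    metadata:
      name: batch-worker
    spec:
      containers:
      - name: batch-worker
        image: registry.example.com/batch-worker:latest
        volumeMounts:
        - name: output
          mountPath: /data          # where the Spring Batch job writes its files
      volumes:
      - name: output
        hostPath:
          path: /mnt/batch-output   # directory on the node itself
          type: DirectoryOrCreate
    EOF

Note that the files only survive as long as that particular node does, and a restarted Pod must land on the same node to see them again.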
If you want dynamic allocation, you will need to configure storage solutions such as GlusterFS, NFS, CEPH etc.
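With dynamic provisioning, the job would instead claim storage through a PersistentVolumeClaim. A hedged sketch, assuming a StorageClass named nfs-client provided by whichever provisioner you install:

    cat <<'EOF' | kubectl apply -f -
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: batch-output
    spec:
      accessModes:
      - ReadWriteMany              # so a restarted Pod on another node can reattach
      storageClassName: nfs-client # assumption: depends on the installed provisioner
      resources:
        requests:
          storage: 10Gi
    EOF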
I have an application (JHipster Gateway, UAA, Registry, 5 microservices), and each application source builds a Docker image and pushes it to the GitLab registry. Currently I'm running everything on Rancher using a Docker Compose file. My Mongo database volumes are currently inside each container.
I need advice about volume mounts. Here are my options as I see them.
Leave data in containers, and monitor and back up.
Use external mounts and monitor volumes on the host.
If I leave the Mongo data in the containers, do I just set up a cluster so that when the internal volumes fill up, the database scales? I am looking for some guidance to help me choose between internal and external (on-host) mounts for the Mongo databases.
Thanks in advance,
David L. Whitehurst
Never store any data you care about directly in containers. There are good arguments in favor of both named volumes (native to Docker, some support in a multi-host Swarm environment, fewer host-specific dependencies) and host bind mounts (much easier to back up and maintain, possible to examine directly if needed) but use some sort of mounted storage.
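For example, either of these keeps the MongoDB data outside the container (a sketch assuming the official mongo image; the volume name and host path are placeholders):

    # Option 1: a Docker named volume
    docker volume create mongo-data
    docker run -d --name mongo -v mongo-data:/data/db mongo:4.4

    # Option 2: a host bind mount, easy to inspect and back up directly
    docker run -d --name mongo -v /srv/mongo/data:/data/db mongo:4.4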
The most important note here is that it's fairly routine to delete and recreate containers. If the software you're running or its underlying library stack has a security issue, you generally need to get (or build) an updated image, delete your existing container, and rebuild it against the new image. If data is stored only inside a container, then during this very routine delete-and-recreate operation, there's significant risk of losing data.
In principle, if you're really careful, and you have a replicated data store, you can roll this over without external volumes and not lose data. It's tricky, and takes a lot of patience; you'll be forced to take down one replica, wait for its data to be rebalanced across the other replicas, start up a new replica, wait for it to accept some of the data, and so on. If you can take a point release by stopping a container, deleting it, starting a new one with the same data store, and have it come up instantly with populated data, that's much easier to manage.
(The other corollary here is that you don't "back up containers", since they don't have any data you care about. You do back up the data stored on the host or in Docker named volumes, and you can always recreate the container from its image plus the external data.)
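For instance, a named volume can be archived with a throwaway container (a sketch; the volume and archive names are the placeholders used above):

    # Back up the named volume to a tarball in the current directory on the host
    docker run --rm -v mongo-data:/data -v "$(pwd)":/backup alpine \
        tar czf /backup/mongo-data.tgz -C /data .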
I created a project on Google Cloud a long time ago and I am currently having some problems with it. The only result I seem to be receiving is Internal Server Error.
I tried connecting to the compute instance through SSH, but it does not help much because:
as far as I remember, I used to be able to see all the code on the compute instance. It's no longer there, the home folder only has some hidden files. I am not sure where to look for the actual project files.
the only error I managed to get from a log file was: Error syncing pod 9c8e56bc-4298-11e6-ab50, skipping: failed to "StartContainer" for "postgres" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=postgres pod=postgres_default(9c8e56bc-4298-11e6-ab50)". This makes me think there are some issues with Postgres, which has a persistent disk of its own, but there seems to be no easy way to find out how much of that disk is occupied.
even though I am an admin on that project and I should receive detailed emails (with stack traces) every time there is an error, I am not receiving anything at all.
This behaviour started today, all of a sudden, and I haven't touched the project in almost 2 years, so I am completely lost.
Thanks.
How can I check the remaining size of a persistent disk on Google Cloud?
For this part, I finally found a way to do it today. I'll describe it all here with screenshots so that it is easy for anyone.
First, go to the Google Console Disks page: https://console.cloud.google.com/compute/disks
Identify the persistent disk you are interested in. In my case, this was called pg-data-disk. Click on the respective VM instance; it is listed in the "In use by" column, as in the image below:
This will open an SSH connection to the VM instance to which your persistent disk is attached. In the SSH window, run the following command: sudo lsblk. The result should look like the image below:
You will thus discover the disk's device name (in my case this was sdb), so you can now run: sudo df -h /dev/<YOUR DEVICE NAME>. This command will give you the exact disk usage, as shown below:
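In command form (the device name sdb is from my setup; yours may differ):

    # List block devices to find the attached persistent disk
    sudo lsblk

    # Show how much of the mounted disk is used (GNU df accepts the device
    # node of a mounted filesystem as well as a mount point)
    sudo df -h /dev/sdb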
As for the other part of the question: I was actually using Docker containers orchestrated by Kubernetes, and I had totally forgotten about it.
Will upgrade my RAM and get back to work.
Thank you all.
I am running an Amazon EC2 instance with a MongoDB running on it.
Since I will need it only for some time, I was wondering whether it is possible to keep just an image of the system as an Amazon Machine Image (AMI) for the period when it is not in use. Any idea?
You can actually create an AMI from your server and then terminate the server when you don't need it.
When you need it again, you can launch a new server based on the AMI you created. The downside is that the AMI may not contain your latest data, so I recommend creating the AMI right before you terminate the server.
Another alternative is to just use EBS backed storage/instances and just shutdown the instance when you don't need it. You can just start the instance when you need it. There's little cost associated with keeping an EBS volume around. Certainly much less than keeping your EC2 instance running all the time.
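A rough sketch with the AWS CLI (the instance ID, AMI ID, and instance type below are placeholders):

    # Capture an AMI right before terminating the server
    aws ec2 create-image --instance-id i-0123456789abcdef0 \
        --name "mongo-$(date +%Y%m%d)" --description "MongoDB box before shutdown"

    # Later, launch a fresh instance from that AMI
    aws ec2 run-instances --image-id ami-0123456789abcdef0 \
        --instance-type t3.medium --count 1

    # Alternative for an EBS-backed instance: just stop and start it as needed
    aws ec2 stop-instances --instance-ids i-0123456789abcdef0
    aws ec2 start-instances --instance-ids i-0123456789abcdef0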
Hope this helps.
A stopped machine is a machine that Amazon doesn't charge you for.
You get charged for:
Online time
Storage space (presumably you store the image on S3 or EBS)
Elastic IP addresses
Bandwidth
But Amazon does charge you for the AMIs you create.
So you can stop your machine and just start it when you need to use it.
I know that MongoDB can scale vertically. What about if I am running out of disk?
I am currently using EC2 with EBS. As you know, I have to assign EBS for a fixed size.
What if MongoDB grows bigger than the EBS size? Do I have to create a larger EBS volume and copy the files over?
Or should I start more MongoDB instances, each connected to a different EBS disk? In that case, I could connect to a different instance for different databases.
If you're running out of disk, you obviously need to get a bigger disk.
There are several ways to migrate your data; it really depends on how much uptime you need. The first steps, of course, involve bundling the machine and creating the new volume.
These tips go from easiest to hardest.
Can you take the database completely off-line for several minutes?
If so, do this (migration by copy; a command-level sketch follows the steps):
Mount new EBS on the server.
Stop your app from connecting to Mongo.
Shut down mongod and wait for everything to write (check the logs)
Copy all of the data files (and probably the logs) to the new EBS volume.
While the copy is happening, update your mongod start script (or config file) to point to the new volume.
Start mongod and check connection
Restart your app.
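A hedged sketch of those steps on the command line (the device name, paths, and service name are assumptions; adjust them to your setup):

    # 1. Format and mount the new EBS volume
    sudo mkfs.ext4 /dev/xvdf
    sudo mkdir -p /data/db-new
    sudo mount /dev/xvdf /data/db-new

    # 2-3. Stop the app, then stop mongod and let it flush everything
    sudo service mongod stop

    # 4. Copy the data files (and probably the logs) onto the new volume
    sudo rsync -a /data/db/ /data/db-new/

    # 5. Point mongod at the new volume (e.g. dbpath=/data/db-new in its config),
    # 6-7. then start mongod, check the connection, and restart the app
    sudo service mongod start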
Can you take the database off-line for just a few minutes?
If so, do this (slaving and switch; a sketch of the commands follows the steps):
Start up a new instance and mount the new EBS on that server.
Install / start mongod as a --slave pointing at the current database. (you may need to re-start the current as --master)
The slave will do a fresh synchronization. Once the slave is up-to-date, you'll do a "switch" (next steps).
Turn off writes from the system.
Shut down the original mongod process.
Re-start the "new" mongod as a master instead of the slave.
Re-activate system writes pointing at the new master.
Done correctly those last three steps can happen in minutes or even seconds.
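A sketch of the legacy master/slave flags this relies on (hostnames and paths are placeholders; the exact invocation depends on your MongoDB version and config):

    # On the existing server (restart it as a master if it is not one already)
    mongod --dbpath /data/db --master

    # On the new server, with the new EBS volume mounted at /data/db
    mongod --dbpath /data/db --slave --source old-mongo-host:27017

    # After the slave has caught up and writes are stopped:
    # shut down the old mongod, then restart the new one as the master
    mongod --dbpath /data/db --master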
Can you not afford any down-time?
If so, do this (master-master):
Start up a new instance and mount the new EBS on that server.
Install / start mongod as a master and a slave against the current database. (may need to re-start current as master, minimal down-time?)
The new computer should do a fresh synchronization.
Once the new computer is up-to-date, switch the system to point at the new server.
I know it seems like this last version is actually the best, but it can be a little dicey (as of this writing). The reason is simply that I've honestly had a lot of issues with "Master-Master" replication, especially if you don't start with both active.
If you plan on using this method, I highly suggest a smaller practice run first. If something bombs here, Mongo might simply wipe all of your data files which will have the effect of taking more stuff down.
If you get a good version of this please post the commands, I'd like to see it in action.
Doesn't the E in EBS stand for Elastic, meaning something like resizing on the fly?
Currently the MongoDB team is working on finishing sharding, which will allow horizontal scaling by partitioning data across different servers. Give it a month or two and it will work fine. The developers are quite good at keeping their promises.
http://api.mongodb.org/wiki/current/Sharding%20Introduction.html
http://api.mongodb.org/wiki/current/Sharding%20Limits.html
You could slave the bigger disk off the smaller one until it's caught up,
or
fsync+lock, take a filesystem snapshot, and copy it onto the bigger disk.
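A sketch of the fsync+lock variant from the mongo shell (paths are placeholders, and depending on your MongoDB version the lock may be db.runCommand({fsync:1, lock:1}) instead of the helper shown here):

    # Flush writes to disk and block new ones
    mongo --eval "db.fsyncLock()"

    # Take the filesystem/EBS snapshot or copy the data files now, e.g.:
    #   sudo rsync -a /data/db/ /mnt/bigger-disk/db/

    # Unlock as soon as the copy or snapshot is done
    mongo --eval "db.fsyncUnlock()"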
Well, I am using MongoDB now. I am pretty amazed at the performance, especially on some simple sorting.
I believe it's a good tool for simple web application logic. The remaining concern for me is how to scale and back up. I will continue to explore.
The only disadvantage is that I don't have any good tools to inspect the data stored inside. For example, I want to move my logging from MySQL into Mongo as well, but it's pretty difficult for me to view the logs. Previously, I could use a MySQL query to fetch what I wanted easily.
Anyway, it's a good tool and I will continue to use it.