How to change the path of /tmp folder in ubuntu 16 - ubuntu-16.04

I am using an Ubuntu 16.04 server. I have a couple of apps running and therefore can't reboot regularly. Because of this, my /tmp folder fills up quickly and I sometimes have problems logging in to the server.
I would like help redirecting /tmp to another path, such as /mnt, so I can clear it periodically.
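One common approach (a sketch, not from the original thread; the path /mnt/tmp is illustrative) is to bind-mount a directory on the larger disk over /tmp:

```shell
# Create a replacement tmp directory on the larger disk (path is illustrative)
sudo mkdir -p /mnt/tmp
sudo chmod 1777 /mnt/tmp   # sticky bit, same permissions as the real /tmp

# Bind-mount it over /tmp for the current boot
sudo mount --bind /mnt/tmp /tmp

# To make this persistent across reboots, add a line like this to /etc/fstab:
#   /mnt/tmp  /tmp  none  bind  0  0
```

Note that files already open under the old /tmp remain in use by running processes until they are restarted.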

Related

Rsnapshot file permission problem with network HDD over Raspberry Pi

After trying to solve this for days, I want to ask for help here:
I want to make backups with rsnapshot, which usually runs on a server and manages local backups. In my case, I want to run rsnapshot on my computer and have it manage my backups on an external hard drive. This external hard drive is connected to my Raspberry Pi and mounted on my computer with the following command:
sudo sshfs -o default_permissions,allow_other,idmap=user,IdentityFile=/home/user/.ssh/id_rsa pi@192.168.0.1:/mnt/externelHdd /mnt/backupHdd
Here, /mnt/backupHdd is the local root for rsnapshot's backup directory.
Additionally, I want to connect the external hard drive directly to my computer for bigger backup jobs. For this purpose I wrote a script that mounts the external hard drive either locally or over the network with the command above. Afterwards, it starts the rsnapshot job with sudo rsnapshot daily. When the hard drive is connected locally, everything works fine. When it's connected over sshfs, I get permission-denied errors.
Rsnapshot apparently is not allowed to manage files over sshfs when the files/directories were created while the drive was physically connected (different users: local and the Raspberry Pi). I tried to solve this with the options allow_other and idmap=user, but I think there is more to do. So I'm asking you: how can I give rsnapshot the permissions it needs?
Thanks for any help!
edit:
I get the following error:
/bin/cp: cannot create directory '/mnt/backupHdd/daily.1': Permission denied
----------------------------------------------------------------------------
rsnapshot encountered an error! The program was invoked with these options:
/usr/bin/rsnapshot daily
----------------------------------------------------------------------------
ERROR: /bin/cp -al /mnt/backupHdd/daily.0 /mnt/backupHdd/daily.1 failed (result 256, exit status 1).
ERROR: Error! cp_al("/mnt/backupHdd/daily.0/", "/mnt/backupHdd/daily.1/")
daily.0 was created while the HDD was connected to my local computer; daily.1 should be created with the HDD mounted over sshfs.
I'm assuming you're running rsnapshot as root and root owns the remote backup directory. This command:
sudo sshfs -o default_permissions,allow_other,idmap=user,IdentityFile=/home/user/.ssh/id_rsa pi@192.168.0.1:/mnt/externelHdd /mnt/backupHdd
is not going to work out as you intend. Even though you are using sudo on the local side of the connection, you're still SSHing in as "pi", meaning everything done on the far side of the connection is done by the user pi. No sshfs option can change this fact. You'd need to enable root login and then SSH in as root, or at least as some user that has full read/write access to that drive.
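A sketch of the suggested fix, assuming you have enabled PermitRootLogin on the Pi and installed your public key for root there (the address and paths are carried over from the question):

```shell
# SSH in as root instead of pi, so operations on the far side of the
# connection have full access to the drive (assumes PermitRootLogin is
# enabled on the Pi and root's authorized_keys contains the matching key)
sudo sshfs -o default_permissions,allow_other,IdentityFile=/home/user/.ssh/id_rsa \
    root@192.168.0.1:/mnt/externelHdd /mnt/backupHdd
```

Allowing root SSH login has security implications; a dedicated backup user that owns the backup directory on the Pi is a less drastic alternative.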

mount: unknown filesystem type 'vmhgfs'

I'm trying to mount my Windows shared folder in CentOS using the command:
mount -t vmhgfs .host:/shared-folder /var/www/html/
Unfortunately, I get the error:
mount: unknown filesystem type 'vmhgfs'
I tried to use:
/usr/bin/vmhgfs-fuse /mnt
but the mountpoint is not empty...
Is there any way to mount this folder in VMware Player?
Try this:
vmhgfs-fuse .host:/shared-folder /var/www/html/
You might need to run it with sudo.
Working from a MacBook Pro running Big Sur, using VMware to host a virtual machine with the CentOS 7 operating system. I had issues loading shared folders after VMware Tools were installed. What worked for me was this exact command:
sudo /usr/bin/vmhgfs-fuse .host:/ /mnt/hgfs -o subtype=vmhgfs-fuse,allow_other
Hope this saves others the trouble of tracking down this solution.
The command below works perfectly for me; it might be useful for someone.
I had already mapped the required folders in the "Shared Folders" settings, but they were not showing up.
Additionally running this command makes the Windows directories visible:
sudo /usr/bin/vmhgfs-fuse .host:/ /home/user/win -o subtype=vmhgfs-fuse,allow_other
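If you want the share mounted automatically at boot instead of running the command each time, a FUSE entry in /etc/fstab along these lines may work (the mount point is illustrative; adjust to the paths used above):

```
.host:/  /mnt/hgfs  fuse.vmhgfs-fuse  defaults,allow_other  0  0
```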

How to Mount Disk for Google Cloud Compute Engine to use with /home?

I have a VM Instance with a small 10GB boot disk running CentOS 7 and would like to mount a larger 200GB Persistent Disk to contain data relating to the /home directory from a previous dedicated server (likely via scp).
Here's what I tried:
Attempt #1, Symlinks. Might work, but some questions.
mounted the disk to /mnt/disks/my-persistent-disk
created folders on the persistent disk that mirror the folders in the old server's /home directory.
created a symlink in the /home directory for each folder, pointing to the persistent disk.
scp from old server to the VM /home/example_account for the first account. Realized scp does not follow symlinks (oops) and therefore the files went to the boot drive instead of the disk.
I suppose I could scp to /mnt/disks/my-persistent-disk and manage the symlinks and folders. Would this pose a problem? Would making an image of the VM with this configuration carry over to new instances (with autoscaling etc)?
Attempt #2, Mounting into /home.
Looking for a more 'natural' configuration that works with ftp, scp, etc., I mounted the disk at /home/example_account:
$ sudo mkdir -p /home/example_account
$ sudo mount -o discard,defaults /dev/sdc /home/example_account
$ sudo chmod a+w /home/example_account
#set the UUID for mounting at startup
$ sudo blkid /dev/sdc
$ sudo nano /etc/fstab
$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 10G 0 disk
└─sda1 8:1 0 10G 0 part /
sdc 8:32 0 200G 0 disk /home/example_account
scp from the old server to the VM into /home/example_account works fine. Yay. However, I would like to have more than just one folder in the /home directory. I suppose I could partition the disk, but this feels cumbersome and I'm not exactly sure how many accounts I will use in the future.
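For reference, the /etc/fstab line set up in the steps above would look something like this (the UUID shown is a placeholder; use the value printed by blkid for /dev/sdc):

```
UUID=your-uuid-here  /home/example_account  ext4  discard,defaults,nofail  0  2
```

The nofail option is a common precaution on cloud VMs so the instance still boots if the disk is detached.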
Attempt #3, Mount as /home
I felt the best solution was to have the persistent disk mount as the /home directory. This would allow for easily adding new accounts within /home without symlinks or disk partitions.
Attempted to move the /home directory to /home.old, but realized Google Cloud Compute Engine would not allow it since I was logged into the system.
Changed to the root user, but it still said myusername@instance was logged in and using the /home directory. As root, I issued pkill -KILL -u myusername and the SSH session terminated - apparently how Google Cloud Compute Engine works with their SSH windows.
As I cannot change the /home directory, this method does not seem viable unless there is a workaround.
My thoughts:
Ideally, I think #3 is the best solution but perhaps there is something I'm missing (#4 solution) or one of the above situations is the preferable idea but perhaps with better execution.
My question:
In short, how do I move an old server's data to a Google Cloud VM with a persistent disk?

installations disappearing on google compute engine

I'm experiencing some weird behavior on Google Compute Engine. I made a new instance with Ubuntu on it, installed a node app I'm working on, pulled code from GitHub, etc.
Then I installed mongodb and nginx. The weird thing is, every time I leave the session and reconnect, my mongodb and nginx installation files disappear.
For example, when I install nginx I find the installation in /etc/nginx, where I can see files like nginx.conf. But when I left the Compute Engine console session and reconnected later, that directory was gone. The same thing happens with mongodb.
My node installation under /home/abdul/mystuff doesn't disappear, though.
Is this normal? Is it a setting?
details:
this is an Ubuntu image (I don't know which version, and I'm not sure how to check)
using the following to install nginx:
sudo apt-get update
sudo apt-get install nginx
output of the lsblk command:
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 10G 0 disk
└─sda1 8:1 0 10G 0 part /var/lib/docker/aufs
sdb 8:16 0 5G 0 disk
└─sdb1 8:17 0 5G 0 part /home
Looks like you're running a Docker container on your instance (/var/lib/docker/aufs) and installing the software inside the container.
If you want to save changes back to the image, it is possible to use the docker commit command, but this is almost definitely not what you want.
Instead, use a Dockerfile to build images and update it whenever you want to make a change. This way you can easily recreate the image and make changes without starting from scratch. For persistence (e.g. config files and databases) use volumes, which are just directories stored outside of the Union File System, as normal directories on the host.
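A sketch of that approach (the image name and host paths are illustrative, not from the question):

```shell
# Build an image that has nginx baked in, from a Dockerfile such as:
#   FROM ubuntu:16.04
#   RUN apt-get update && apt-get install -y nginx
docker build -t my-nginx .

# Keep config and data outside the container's Union File System by
# bind-mounting host directories as volumes; they survive container removal
docker run -d \
    -v /srv/nginx/conf:/etc/nginx \
    -v /srv/mongo/data:/data/db \
    my-nginx
```

Anything written to /etc/nginx or /data/db inside the container now lands in the host directories, so reinstalling or recreating the container no longer loses it.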

Docker mongodb - add database on disk to container

I am running Docker on windows and I have a database with some entries on disk at C:\data\db.
I want to add this database to my container. I have tried numerous ways to do this but failed.
I tried: docker run -p 27017:27017 -v //c/data/db:/data/db --name mongodb devops-mongodb
In my dockerfile I have:
RUN mkdir -p /data/db
VOLUME /data/db
But this doesn't add my current database on disk to the container. It creates a fresh /data/db directory and persists the data I add to it.
The docs here https://docs.docker.com/userguide/dockervolumes/ under 'Mount a host directory as a data volume' specifically told me to execute the -v //c/data/db:/data/db but this isn't working.
Any ideas?
You're using Boot2Docker (which runs inside a Virtual Machine). Boot2Docker uses VirtualBox guest additions to make directories on your Windows machine available to Docker running inside the Virtual Machine.
By default, only the C:\Users directory (on Windows), or /Users/ directory (on OS X) is shared with the virtual machine. Anything outside those directories is not shared with the Virtual Machine, which results in Docker creating an empty directory at the specified location for the volume.
To share directories outside C:\Users\ with the Virtual Machine, you have to manually configure Boot2Docker to share them. You can find the steps needed in the VirtualBox guest additions section of the README:
If some other path or share is desired, it can be mounted at run time by doing something like:
$ mount -t vboxsf -o uid=1000,gid=50 your-other-share-name /some/mount/location
It is also important to note that in the future, the plan is to have any share which is created in VirtualBox with the "automount" flag turned on be mounted during boot at the directory of the share name (i.e., a share named home/jsmith would be automounted at /home/jsmith).
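A share outside C:\Users can be created from the Windows host with VBoxManage before mounting it as shown above (the VM name and share name here are illustrative):

```shell
# On the Windows host: add C:\data as a shared folder on the Boot2Docker VM,
# with automount enabled (VM name and share name are illustrative)
VBoxManage sharedfolder add boot2docker-vm --name "c/data" --hostpath "C:\data" --automount
```

The VM usually has to be powered off (or the share added with --transient while running) for the change to take effect.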
Please be aware that using VirtualBox guest additions has a really bad impact on performance (reading from and writing to the volume will be very slow). This can be fine for development, but it should be used with caution.