Unable to connect via SSH through either Cloud Shell or SCP (PowerShell)

Before this error happened:
I have a VM. I tried to change the permissions of all folders to 777 in order to get past an error when transferring data to Cloud Run.
That leads to "sudo: /etc/sudo.conf is world writable" and "sudo: /usr/bin/sudo must be owned by uid 0 and have the setuid bit set" when I use SSH.
I fixed it by mounting the affected disk on a temporary instance and changing the permissions back with:
chmod 755 /etc/sudo.conf
chmod 4755 /usr/bin/sudo
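Roughly, the rescue steps looked like this (instance, disk, zone, and device names are placeholders):
gcloud compute instances attach-disk rescue-instance --disk affected-disk --zone us-central1-a
# on the rescue instance, assuming the affected root partition shows up as /dev/sdb1
sudo mkdir -p /mnt/rescue
sudo mount /dev/sdb1 /mnt/rescue
sudo chmod 755 /mnt/rescue/etc/sudo.conf
sudo chmod 4755 /mnt/rescue/usr/bin/sudo
sudo umount /mnt/rescue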
Now I have 2 problems.
I am still not able to connect via SSH. I tried troubleshooting and all checks are green, and I did not have an IAP problem before.
FTP does not work either (I use PuTTYgen to create a private key and then update the VM's metadata).
Also, the 20 GB disk became 65 GB. Is this what caused the problem? Is there any way to revert to 20 GB without damaging the disk?
Right now I can still access the site and it runs fine: https://www.nasavape.com

Related

Rsnapshot file permission problem with network HDD over Raspberry Pi

After trying to solve this for days, I want to ask for help here:
I want to make backups with rsnapshot, which usually runs on a server and manages local backups. In my case, I want to run rsnapshot on my computer and let it manage my backups on an external hard drive. This external hard drive is connected to my Raspberry Pi and mounted on my computer with the following command:
sudo sshfs -o default_permissions,allow_other,idmap=user,IdentityFile=/home/user/.ssh/id_rsa pi@192.168.0.1:/mnt/externelHdd /mnt/backupHdd
Here, /mnt/backupHdd is the local root for rsnapshot's backup directory.
Additionally, I want to be able to connect the external hard drive directly to my computer for bigger backup jobs. For this purpose I wrote a script which mounts the external hard drive either locally or over the network with the command above. Afterwards, it starts the rsnapshot job with sudo rsnapshot daily. When the hard drive is connected locally, everything works fine. When it's connected over sshfs, I get permission denied errors.
Rsnapshot apparently is not allowed to manage files over sshfs when the files/directories were created while the drive was physically connected (different users: local and Raspberry Pi). I tried to solve this with the options allow_other and idmap=user, but I think there is more to do. So I'm asking you: how can I give rsnapshot the permissions it needs?
Thanks for any help!
edit:
I get the following error:
/bin/cp: cannot create directory '/mnt/backupHdd/daily.1': Permission denied
----------------------------------------------------------------------------
rsnapshot encountered an error! The program was invoked with these options:
/usr/bin/rsnapshot daily
----------------------------------------------------------------------------
ERROR: /bin/cp -al /mnt/backupHdd/daily.0 /mnt/backupHdd/daily.1 failed (result 256, exit status 1).
ERROR: Error! cp_al("/mnt/backupHdd/daily.0/", "/mnt/backupHdd/daily.1/")
daily.0 was created while the HDD was connected directly to my local computer. daily.1 should be created with the HDD mounted over sshfs.
I'm assuming you're running rsnapshot as root and that root owns the remote backup directory. This command:
sudo sshfs -o default_permissions,allow_other,idmap=user,IdentityFile=/home/user/.ssh/id_rsa pi@192.168.0.1:/mnt/externelHdd /mnt/backupHdd
is not going to work the way I think you are intending. Even though you are using sudo on the local side of the connection, you're still SSH-ing in as "pi", meaning everything done on the far side of the connection is done by the user pi. No option to sshfs can change that fact. You'd need to enable root login and then SSH in as root, or at least as some user that has full read/write access to that drive.
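For example, assuming you enable key-based root login on the Pi (PermitRootLogin prohibit-password in /etc/ssh/sshd_config, with your public key in /root/.ssh/authorized_keys), the mount would look something like:
sudo sshfs -o default_permissions,allow_other,IdentityFile=/root/.ssh/id_rsa root@192.168.0.1:/mnt/externelHdd /mnt/backupHdd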

How to change the path of the /tmp folder in Ubuntu 16

I am using Ubuntu 16.04 server. I have a couple of apps running, so I can't reboot every so often. Because of this my /tmp folder is filling up pretty fast, and sometimes I have problems logging in to the server.
I want help with redirecting /tmp to another path, like /mnt, so I will be able to clear it periodically.
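Something along these lines is what I am after, a bind mount so /tmp lives on a bigger filesystem (paths are only examples):
sudo mkdir -p /mnt/tmp
sudo chmod 1777 /mnt/tmp            # /tmp needs world-write plus the sticky bit
sudo mount --bind /mnt/tmp /tmp     # from now on, new files in /tmp land on /mnt/tmp
# to make it persistent, an /etc/fstab line such as:
# /mnt/tmp  /tmp  none  bind  0  0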

How to Mount Disk for Google Cloud Compute Engine to use with /home?

I have a VM Instance with a small 10GB boot disk running CentOS 7 and would like to mount a larger 200GB Persistent Disk to contain data relating to the /home directory from a previous dedicated server (likely via scp).
Here's what I tried:
Attempt #1, Symlinks. Might work, but some questions remain.
mounted the disk to /mnt/disks/my-persistent-disk
created folders on the persistent disk that mirror the folders in the old server's /home directory.
created a symlink in the /home directory for each folder, pointing to the persistent disk.
scp'd from the old server to /home/example_account on the VM for the first account. Realized scp does not follow symlinks (oops), and therefore the files went to the boot drive instead of the persistent disk.
I suppose I could scp to /mnt/disks/my-persistent-disk and manage the symlinks and folders. Would this pose a problem? Would making an image of the VM with this configuration carry over to new instances (with autoscaling etc)?
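Roughly, what I mean by that (account names and paths are only examples):
# make sure your user can write to the persistent disk first (e.g. chown the target directory),
# then copy straight onto the disk so nothing lands on the boot drive
scp -r olduser@old-server:/home/example_account /mnt/disks/my-persistent-disk/
# then expose it under /home via a symlink
sudo ln -s /mnt/disks/my-persistent-disk/example_account /home/example_account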
Attempt #2, Mounting into /home.
Looking for a more 'natural' configuration that works with FTP, scp, etc., I mounted the disk at /home/example_account:
$ sudo mkdir -p /home/example_account
$ sudo mount -o discard,defaults /dev/sdc /home/example_account
$ sudo chmod a+w /home/example_account
#set the UUID for mounting at startup
$ sudo blkid /dev/sdc
$ sudo nano /etc/fstab
$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 10G 0 disk
└─sda1 8:1 0 10G 0 part /
sdc 8:32 0 200G 0 disk /home/example_account
scp from the old server to /home/example_account on the VM works fine. Yay. However, I would like to have more than just one folder in the /home directory. I suppose I could partition the disk, but this feels a bit cumbersome, and I'm not exactly sure how many accounts I will use in the future.
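For reference, the kind of /etc/fstab entry I mean here (the UUID is a placeholder for the value blkid prints, and the filesystem type is assumed to be ext4):
UUID=placeholder-uuid-from-blkid  /home/example_account  ext4  discard,defaults,nofail  0  2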
Attempt #3, Mount as /home
I felt the best solution was to have the persistent disk mount as the /home directory. This would allow for easily adding new accounts within /home without symlinks or disk partitions.
Attempted to move the /home directory to /home.old but realized Google Cloud Compute Engine would not allow it since I was logged into the system.
Changed to the root user, but it still said myusername@instance was logged in and using the /home directory. As root, I issued pkill -KILL -u myusername and the SSH session terminated - apparently that is how Google Cloud Compute Engine handles its SSH windows.
As I cannot change the /home directory, this method does not seem viable unless there is a workaround.
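If there is a way around the logged-in-session problem (for example, working from the serial console or from a session whose home directory is not under /home), I imagine the steps would look roughly like this (device name and UUID are placeholders):
# copy the current /home onto the persistent disk, here assumed to be /dev/sdc
sudo mkdir -p /mnt/newhome
sudo mount /dev/sdc /mnt/newhome
sudo rsync -a /home/ /mnt/newhome/
sudo umount /mnt/newhome
# add an /etc/fstab entry that mounts the disk at /home, then remount or reboot
# UUID=placeholder-uuid  /home  ext4  discard,defaults,nofail  0  2
sudo mount /home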
My thoughts:
Ideally, I think #3 is the best solution, but perhaps there is something I'm missing (a #4 solution), or one of the above approaches is preferable with better execution.
My question:
In short, how do I move an old server's data to a Google Cloud VM with a persistent disk?

Greenplum Database: psql: could not connect to server: No such file or directory

I am bashing my head against the wall. It's been 4 days, but psql is not connecting.
We have a small Greenplum Database array. In it, we have the master node. When I try to use the psql utility,
I am getting this error:
[gpadmin@master gpseg-1]$ psql
psql: could not connect to server: No such file or directory
Is the server running locally and accepting
connections on Unix domain socket "/tmp/.s.PGSQL.5432"?
We tried searching for postmaster.pid files and removed them, but the error remains.
Use a command like ( netstat -ln; ps -ef ) | egrep '(postgres)|(postmaster)|(5432)' to try to determine whether or not an instance of the postgres server is running.
If the postmaster is not running, remove the postmaster.pid file and restart the database. While I don't use the Greenplum database, I see that instructions are here: Starting and Stopping the Greenplum Database. Do not remove the postmaster.pid file without making sure the database is not running, and note that removing the postmaster.pid file without starting the database is pointless.
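If you are unsure of the current state, a rough sketch of the usual check-and-restart sequence (run as gpadmin, assuming a standard Greenplum installation) looks like this:
$ gpstate           # report the status of the master and segments, if it can connect
$ gpstart -a        # start the database without prompting
$ gpstop -r -a      # or restart it if parts of the cluster are already up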
It may be wise to open your postgresql.conf file and see if the listen_addresses, port, unix_socket_directory, unix_socket_group, and unix_socket_permissions settings might be a source of issues.
Since the error message referenced specifically mentions the socket file, look most closely at unix_socket_directory, unix_socket_group, and unix_socket_permissions.
If unix_socket_directory is pointing somewhere other than /tmp, then various workarounds exist.
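For instance (the directory name here is hypothetical), you can point psql at that directory through the environment instead of /tmp:
$ export PGHOST=/some/other/socket/dir   # a leading / makes psql treat this as a socket directory
$ psql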
Alternatively, and presuming that the server is running, one might try to locate the socket file without looking in the postgresql.conf file, though this might make it a bit harder to address permissions, port, etc. issues. A tool like locate, find, etc., may be used in conjunction with sudo or by the root user.
$ sudo find /tmp /var -name .s.PGSQL.5432
Presuming that the location of the .s.PGSQL.5432 file is the root cause of your problem, specifying the socket file location on the psql command line is probably the most straightforward workaround. For example, if the .s.PGSQL.5432 file is in the /var/pgsql_socket directory, as it is on some systems, try this, but, of course, use the actual directory where .s.PGSQL.5432 is located:
$ psql -h /var/pgsql_socket
If the .s.PGSQL.5432 file IS in /tmp, then the problem is more likely one of permissions, and consulting the postgresql.conf file is advised, and probably the user attempting the psql command will have to be added to a group that has access to the socket file. (Remember, log out and back in after changing group membership.)
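For example (group and user names here are only placeholders), you might check which group owns the socket and then add yourself to it:
$ ls -l /tmp/.s.PGSQL.5432           # see which group owns the socket
$ sudo usermod -aG gpadmin youruser  # then log out and back in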
Though the page does not necessarily seem to directly relate to this issue, do consider the Accessing the Database help as needed.
What does gpstate show? If it cannot connect, make sure the GPDB master is running:
ps ax | grep 'M master'
If the master is running, it will also show the port the master is listening on.
For the gpadmin Linux account, look in the ~/gpAdminLogs directory. There should be one or more startup logs that you can check.
That error normally means that the database is stopped. You should never remove this file unless it was left behind after a bad crash, where the file was never cleaned out. You would normally detect that situation when you start the db again - it would complain the file already existed.
I tend to look for
ps -eaf|grep -i silent
to see the postmaster processes.
If the master is down, but the segments are up, you will need to start the master only
gpstart -m
then stop everything with
gpstop -M fast
Causes for failures should be in $MASTER_DATA_DIRECTORY/pg_log and possibly in the corresponding segment pg_log directories. There may also be core files if the master or segments had a panic.

Permission denied on network/samba share while accessing HG repo

I am using EclipsePHP on Ubuntu 10.10 and am trying to use Mercurial (hg) to work with a repository that's located on my network-connected staging server (Samba share).
When trying to refresh the repository from within Eclipse (really an hg status), I get the following error thrown in my face: abort: Operation not permitted: /media/sharename/myrepository/.hg/.dirstate*.
While trying to find out what's wrong, I went to the network share from a terminal and ran hg status - the same error occurs, so it's not only happening from within Eclipse. I tried to chmod the files from both my computer and the server - chmod 777 /media/sharename/myrepository/ -R - but nothing changes.
But when I accidentally ran sudo hg status from the repo directory, Mercurial started the fireworks and worked like a charm.
What on earth is going wrong with my computer? Why can't I run my hg commands without being root?
You can mount your network drive like this. Open /etc/fstab and add the following line:
//IP_OR_HOSTNAME/DIRECTORY_NAME /MOUNT_DIRECTORY cifs user=sambauser,pass=sambapassword,auto,exec,umask=002,gid=1000,uid=1000,file_mode=0777,dir_mode=0777 0 0
Hope it works.
chmod will not help you here, I guess. The ownership and permissions on the server side are not replicated to the client (no Unix extensions on the server), or your UID/GID differ between the two machines. You can override file ownership when mounting via:
mount -t cifs //SERVER/SHARE /MOUNTPOINT -o uid=USERNAME
This is from memory though, check man mount.cifs for details. Also, alternative networked filesystems like NFS might serve you better in this case, or try sshfs.
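If you try sshfs, a minimal example (hostname, path, and key are placeholders) would be:
sshfs -o idmap=user,IdentityFile=~/.ssh/id_rsa youruser@stagingserver:/path/to/repos /media/sharename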