gcloud component installation errors out with permission denied on Windows 10 WSL 1, even after executing with sudo - gcloud

I am facing a weird issue with gcloud component installation on WSL (v1) on my Windows 10 system.
» sudo gcloud components install beta
[sudo] password for <user>:
Your current Cloud SDK version is: 345.0.0
Installing components from version: 345.0.0
┌─────────────────────────────────────────────┐
│ These components will be installed. │
├──────────────────────┬────────────┬─────────┤
│ Name │ Version │ Size │
├──────────────────────┼────────────┼─────────┤
│ gcloud Beta Commands │ 2019.05.17 │ < 1 MiB │
└──────────────────────┴────────────┴─────────┘
For the latest full release notes, please visit:
https://cloud.google.com/sdk/release_notes
Do you want to continue (Y/n)? y
╔════════════════════════════════════════════════════════════╗
╠═ Creating update staging area ═╣
ERROR: (gcloud.components.install) [Errno 13] Permission denied: '<path>/google-cloud-sdk.staging/.install/.download': [<path>/google-cloud-sdk/.install/.download]
Ensure you have the permissions to access the file and that the file is not in use.
I checked the directories (google-cloud-sdk and google-cloud-sdk.staging); I am the owner of both.
I even switched to root and ran the command, but ended up with the same error.
Any pointers would be appreciated.
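One thing that may be worth checking (a diagnostic sketch, not a confirmed fix; SDK_DIR is a placeholder for the actual install location): a previous run under sudo can leave root-owned files in hidden subdirectories such as .install/.download, which checking ownership of only the top-level directories would not reveal:

```shell
#!/bin/sh
# List everything under the SDK tree that is NOT owned by the current
# user -- root-owned leftovers from an earlier sudo run would show here.
# SDK_DIR is a placeholder; point it at your google-cloud-sdk directory.
SDK_DIR="${SDK_DIR:-$HOME/google-cloud-sdk}"
find "$SDK_DIR" ! -user "$(id -un)" -exec ls -ld {} + 2>/dev/null
```

If anything is listed, a `sudo chown -R "$(id -un)" "$SDK_DIR"` (and the same for the matching .staging directory) should restore consistent ownership.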


Why can't I enable ingress in minikube?

I am trying to enable ingress in minikube. When I run minikube addons enable ingress, it hangs for a while and then I get the following error message:
❌ Exiting due to MK_ADDON_ENABLE: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.19.15/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
stdout:
namespace/ingress-nginx unchanged
serviceaccount/ingress-nginx unchanged
configmap/ingress-nginx-controller unchanged
configmap/tcp-services unchanged
configmap/udp-services unchanged
clusterrole.rbac.authorization.k8s.io/ingress-nginx unchanged
clusterrolebinding.rbac.authorization.k8s.io/ingress-nginx unchanged
role.rbac.authorization.k8s.io/ingress-nginx unchanged
rolebinding.rbac.authorization.k8s.io/ingress-nginx unchanged
service/ingress-nginx-controller-admission unchanged
stderr:
error: error validating "/etc/kubernetes/addons/ingress-deploy.yaml": error validating data: [ValidationError(Service.spec): unknown field "ipFamilies" in io.k8s.api.core.v1.ServiceSpec, ValidationError(Service.spec): unknown field "ipFamilyPolicy" in io.k8s.api.core.v1.ServiceSpec]; if you choose to ignore these errors, turn validation off with --validate=false
waiting for app.kubernetes.io/name=ingress-nginx pods: timed out waiting for the condition]
╭───────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ 😿 If the above advice does not help, please let us know: │
│ 👉 https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ Please also attach the following file to the GitHub issue: │
│ - /tmp/minikube_addons_2c0e0cafd16ea0f95ac51773aeef036b316005b6_0.log │
│ │
╰───────────────────────────────────────────────────────────────────────────────────────────╯
This is the minikube start command I used:
minikube start --kubernetes-version=v1.19.15 --vm-driver=docker
I have tried reinstalling minikube. It was working fine last week when I ran the same command.
If more specific information is needed please let me know and I will edit the question. Does anyone know how I could go about fixing this?
Thanks in advance.
Downgrading to minikube v1.23.2 fixed the issue, presumably because the older release ships an ingress manifest that does not use the dual-stack Service fields (ipFamilies, ipFamilyPolicy) rejected by the v1.19 API server.
A bit late, but I hope someone finds this useful. This happens because minikube could not pull the image (ingress-nginx-controller) in time. The way to check is:
kubectl get pod -n ingress-nginx
If the ingress-nginx-controller-xxxx pod (xxxx is the identifier of the pod) has a status of ImagePullBackOff or something similar, you are in this scenario.
To fix it, you will first need to describe your pod:
kubectl describe pod ingress-nginx-controller-xxxxx -n ingress-nginx
Look under Containers/controller/Image and copy its value (you don't need to copy the @sha256:... part if it contains one). You must pull it manually, but before that you should probably delete the related deployment as well:
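If the copied value does include a digest, the trailing part can be stripped with shell parameter expansion (the image reference and digest below are made up for illustration):

```shell
#!/bin/sh
# Strip a trailing @sha256:... digest from an image reference so that a
# plain tag can be passed to "docker pull". Example value is illustrative.
img="k8s.gcr.io/ingress-nginx/controller:v1.2.1@sha256:0123456789abcdef"
echo "${img%%@*}"   # prints k8s.gcr.io/ingress-nginx/controller:v1.2.1
```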
kubectl delete deployment ingress-nginx-controller -n ingress-nginx
Then pull the image from the VM itself; in my case it looks like this:
minikube ssh docker pull k8s.gcr.io/ingress-nginx/controller:v1.2.1
Wait for it, then try addons enable ingress again and see if it works; it did for me.
Which operating system are you using?
The ingress and ingress-dns addons are currently only supported on Linux. https://minikube.sigs.k8s.io/docs/drivers/docker/
You can still run minikube with the vmware driver:
# If you are using Mac:
brew install docker-machine-driver-vmware
# Start a cluster using the vmware driver:
minikube start --driver=vmware
# To make vmware the default driver:
minikube config set driver vmware
Upgrading to minikube v1.26.0 fixed the issue.
The error in my case is:
X Exiting due to MK_ADDON_ENABLE: run callbacks: running callbacks: [NewSession: new client: new client: Error creating new native config from ssh using: docker, &{[] [C:\Users\<user>\.minikube\machines\minikube\id_rsa]}: open C:\Users\<user>\.minikube\machines\minikube\id_rsa: Access is denied. waiting for app.kubernetes.io/name=ingress-nginx pods: timed out waiting for the condition]
A simple fix is to run PowerShell as Administrator (I'm on Windows).
I am using v1.26.0.
minikube delete --all --purge
minikube addons enable ingress

Permission denied when installing Anaconda3 on Raspberry Pi Network Attached Storage (NAS)

Samba
I installed Raspbian Lite and Samba on my Raspberry Pi 4b. I access the Raspberry Pi from a Linux (Ubuntu 18.04.5 LTS) client. I am using bash and ufw is inactive on both machines.
Below is my smb.conf file.
[global]
workgroup = WORKGROUP
log file = /var/log/samba/log.%m
max log size = 1000
logging = file
panic action = /usr/share/samba/panic-action %d
server role = standalone server
obey pam restrictions = yes
unix password sync = yes
passwd program = /usr/bin/passwd %u
passwd chat = *Enter\snew\s*\spassword:* %n\n *Retype\snew\s*\spassword:* %n\n *password\supdated\ssuccessfully* .
pam password change = yes
map to guest = bad user
usershare allow guests = yes
[homes]
comment = Home Directories
browseable = no
read only = yes
create mask = 0700
directory mask = 0700
valid users = %S
[printers]
comment = All Printers
browseable = no
path = /var/spool/samba
printable = yes
guest ok = yes
read only = yes
create mask = 0700
[print$]
comment = Printer Drivers
path = /var/lib/samba/printers
browseable = yes
read only = no
guest ok = no
[home]
path = /mnt/raid1
writeable = yes
create mask = 0777
directory mask = 0777
public = no
read only = no
browseable = yes
I am trying to install the latest version of Anaconda on a Linux x86_64 machine with PREFIX set to a folder that is located on a NAS. I am trying to install Anaconda as explained in the documentation.
I can install Anaconda on an external hard drive or the local hard drive without any problems. I also access the NAS from a Windows 10 (64-bit) client. When I install Anaconda on the Windows 10 client and select a folder on my NAS as the destination folder, it works too. However, when I try to install Anaconda3 on the Linux machine with PREFIX set to a folder on my NAS, I get the following error:
Unpacking payload ...
Downloads/Anaconda3-2020.11-Linux-x86_64.sh: Line 412: /media/samba/niko/anaconda3/conda.exe: Permission Denied
Downloads/Anaconda3-2020.11-Linux-x86_64.sh: Line 414: /media/samba/niko/anaconda3/conda.exe: Permission Denied
I tried installing Anaconda on the same Linux client with a different Samba account and got the same error.
I tried installing the latest version of Miniconda with both Samba users on the Ubuntu 18 client and got the same error.
I tried installing Anaconda3 on another Linux machine (Ubuntu 16.04.7 LTS) on my network with the same two Samba accounts. Unfortunately, I get the following error for both users:
PREFIX=/Path/To/anaconda3
Unpacking payload ...
0%| | 0/36 [00:00<?, ?it/s]
Could not remove or rename /$PREFIX/pkgs/libedit-3.1.20191231-h14c3975_1o4380296/pkg-libedit-3.1.20191231-h14c3975_1.tar.zst. Please remove this file manually (you may need to reboot to free file handles)
concurrent.futures.process._RemoteTraceback:
'''
Traceback (most recent call last):
File "concurrent/futures/process.py", line 368, in _queue_management_worker
File "multiprocessing/connection.py", line 251, in recv
TypeError: __init__() missing 1 required positional argument: 'msg'
'''
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "entry_point.py", line 69, in <module>
File "concurrent/futures/process.py", line 484, in _chain_from_iterable_of_lists
File "concurrent/futures/_base.py", line 611, in result_iterator
File "concurrent/futures/_base.py", line 439, in result
File "concurrent/futures/_base.py", line 388, in __get_result
concurrent.futures.process.BrokenProcessPool: A process in the process pool was terminated abruptly while the future was running or pending.
[5437] Failed to execute script entry_point
Also on the Ubuntu 16 client, I tried installing the latest version of Miniconda and I got the same error as shown above for both Samba accounts.
Below are the permissions and owners of the NAS mount point and the anaconda3 directory on my Linux client:
drwxr-xr-x niko niko mount point (niko is the user account on my Linux client Ubuntu 18)
├── drwxr-xr-x 2 niko niko anaconda3
│ └── -rwxr-xr-x 1 niko niko conda.exe
└── some folder
On the Ubuntu 16 client, it looks exactly like this, except the user's name is different, but they both have uid=1000 and gid=1000.
Here are the permissions and owners of the folders on the mount point of my hard drive on the Raspberry Pi, which can be accessed over the network using the SMB protocol:
drwxr-xr-x 7 pi pi mount point
├── some folder
├── drwx------ 2 pi pi
├── some folder
├── drwx------ 4 pi pi
│ ├── drwxrwxrwx 2 pi pi anaconda3
│ │ └── -rwxrw-rw- 1 pi pi conda.exe
│ └── drwxrwxrwx 3 pi pi
│ └── drwxrwxrwx 20 pi pi
│ ├── drwxrwxrwx 41 pi pi
│ └── -rwxrw-rw- 1 pi pi
└── some folder
Except for the anaconda3 folder, I replaced the folder and file names with the permissions (user, group, other) and the names of the user and group that own the folders and files on the Raspberry Pi. As you can see, when I am logged in as the user pi, every file, folder, and subfolder under the mount point belongs to the user pi and the group pi.
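One quick check (a sketch, not part of the original setup) to tell an ownership problem apart from a mount-option problem such as noexec: try executing a trivial script directly on the share. MNT is a placeholder for the mount point:

```shell
#!/bin/sh
# Write a tiny script onto the mount, mark it executable, and run it.
# If this fails with "Permission denied" even though the rwx bits look
# fine, the mount options (e.g. noexec), not ownership, are the problem.
MNT="${MNT:-/media/samba}"
printf '#!/bin/sh\necho ok\n' > "$MNT/exec-test.sh"
chmod +x "$MNT/exec-test.sh"
"$MNT/exec-test.sh"
rm "$MNT/exec-test.sh"
```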
Here is the line from /etc/fstab I use to automount the Samba server
//192.168.178.96/home /media/samba cifs credentials=/Path/To/My/Credentials,users,uid=1000,gid=1000 0 0
NFS
I installed and configured the NFS server on my Raspberry Pi.
Then I also tried installing the latest version of Anaconda and Miniconda on my Ubuntu 18 client using the NFS protocol. But I get the same error that I get when using the SMB protocol.
The rights and owners of the mount point of the NFS server on the Ubuntu 18 client and the mount point of the hard drives on the Raspberry Pi are identical to the two mount points mentioned in the Samba section.
Below is my /etc/exports file on the Raspberry Pi
/mnt/nfs/niko/Ubuntu 192.168.178.0/24(rw,sync,insecure,no_subtree_check,no_root_squash,anonuid=1000,anongid=1000)
Here is the line from /etc/fstab I use to automount the NFS server
192.168.178.96:/mnt/nfs/niko/Ubuntu /media/nfs nfs rw,user,hard,intr 0 0
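One detail that may matter for both fstab lines above: the user/users mount option implies noexec (along with nosuid and nodev) unless explicitly overridden, and a noexec mount produces exactly this kind of Permission denied when the installer tries to execute conda.exe. An untested sketch of the same lines with exec added:

```
//192.168.178.96/home /media/samba cifs credentials=/Path/To/My/Credentials,users,exec,uid=1000,gid=1000 0 0
192.168.178.96:/mnt/nfs/niko/Ubuntu /media/nfs nfs rw,user,exec,hard,intr 0 0
```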
Here are the permissions and owners of the anaconda-installer.sh and miniconda-installer.sh files, which are located on the Linux clients:
-rw-rw-r-- 1 username groupname Anaconda3-2020.11-Linux-x86_64.sh
-rw-rw-r-- 1 username groupname Miniconda3-latest-Linux-x86_64.sh
Thanks in advance for your help and feedback!
Note that currently /opt/anaconda must be a supported filesystem such as ext4 or xfs and cannot be an NFS mountpoint. Subdirectories of /opt/anaconda may be mounted through NFS. For example, the Object Storage service supports an NFS backend.
(https://enterprise-docs.anaconda.com/en/docs-site-5.0.6/admin-guide/install/requirements.html)

Error while building vlc-unity with Docker-Compose on Win 10

My machine:
Win 10 Pro x64 (dual Xeon with 40 cores, 64 GB RAM)
Installed 64-bit versions of GitHub Desktop and Docker Compose.
Successfully checked out https://github.com/videolan/vlc-unity.git
In a Windows admin command prompt, after cd-ing into the local directory with the vlc-unity repo, when I try to run:
docker-compose -f .gitlab-ci.yml up
I get the following error: ERROR: In file '..gitlab-ci.yml', services 'stages' must be a mapping not an array.
I have verified that the indentation is correct in the .gitlab-ci.yml file; any other suggestions?
After the comments below, I did the following (EDIT: 16 Sep 2020):
I installed Ubuntu 20 in VMware Workstation for Windows,
checked out the VLC repo under "BuildingVLC/vlc",
cd'd into "/BuildingVLC/vlc/extras/ci", and then ran
"gitlab-runner exec docker uwp64-libvlc-llvm"
which throws an error:
Runtime platform arch=amd64 os=linux pid=11242 revision=fd488787 version=13.4.0-rc1
FATAL: open .gitlab-ci.yml: no such file or directory
What am I still missing?

Firebird 2.5 Database Server on FreeBSD 11.2

I installed a Firebird database server (ver. 2.5) according to the instructions at https://www.howtoforge.com/the-perfect-database-server-firebird-2.5-and-freebsd-8.1 and got this message: "Please do not build firebird as 'root' because this may cause conflicts with SysV semaphores of running services".
Trying to compile as a normal user failed because I do not have write access to that directory.
After installing Firebird as root, when I try to create a local database I get this error:
# isql-fb
Use CONNECT or CREATE DATABASE to specify a database
SQL> CREATE DATABASE '/test/my.fdb';
Bus error (core dumped)
Can someone help me please?
The easiest way would be to install the package as the root user, for example:
# pkg install firebird25-server
If you would like to use the ports try this:
# cd /usr/ports/databases/firebird25-server
# make install clean
In either case, you will get a message like the one below. You can ignore it and continue with the installation; just wait 5 seconds and it will proceed:
> pkg install firebird25-server
Updating FreeBSD repository catalogue...
FreeBSD repository is up to date.
Updating poudriere repository catalogue...
poudriere repository is up to date.
All repositories are up to date.
Updating database digests format: 100%
The following 2 package(s) will be affected (of 0 checked):
New packages to be INSTALLED:
firebird25-server: 2.5.8_1 [FreeBSD]
firebird25-client: 2.5.8_1 [FreeBSD]
Number of packages to be installed: 2
The process will require 22 MiB more space.
5 MiB to be downloaded.
Proceed with this action? [y/N]: y
[1/2] Fetching firebird25-server-2.5.8_1.txz: 100% 2 MiB 2.4MB/s 00:01
[2/2] Fetching firebird25-client-2.5.8_1.txz: 100% 3 MiB 943.7kB/s 00:03
Checking integrity... done (0 conflicting)
[1/2] Installing firebird25-client-2.5.8_1...
[1/2] Extracting firebird25-client-2.5.8_1: 100%
[2/2] Installing firebird25-server-2.5.8_1...
===> Creating groups.
Creating group 'firebird' with gid '90'.
===> Creating users
Creating user 'firebird' with uid '90'.
###############################################################################
** IMPORTANT **
Keep in mind that if you build firebird server as 'root', this may cause
conflicts with SysV semaphores of running services.
If you want to cancel it, press ctrl-C now if you need check some things
before of build it.
###############################################################################
Here it sleeps for 5 seconds and then continues:
[2/2] Extracting firebird25-server-2.5.8_1: 100%
Message from firebird25-server-2.5.8_1:
###############################################################################
Firebird was installed.
1) Support for Super Server has been added
2) Before start the server ensure that the following line exists in /etc/services:
gds_db 3050/tcp #InterBase Database Remote Protocol
3) If you use inetd (Classic Server) then add the following line to /etc/inetd.conf
gds_db stream tcp nowait firebird /usr/local/sbin/fb_inet_server fb_inet_server
And finally restart inetd.
4) If you want to use SuperClassic Server then you must add the following lines
to /etc/rc.conf file.
firebird_enable="YES"
firebird_mode="superclassic"
5) If you want to use Super Server then you must add the following lines to
/etc/rc.conf file.
firebird_enable="YES"
firebird_mode="superserver"
Note: Keep in mind that you only can add one of them but never both modes on
the same time
6) It is STRONGLY recommended that you change the SYSDBA
password with:
# gsec -user SYSDBA -pass masterkey
GSEC> modify SYSDBA -pw newpassword
GSEC> quit
before doing anything serious with Firebird.
7) See documentation in /usr/local/share/doc/firebird/ for more information.
8) Some firebird tools were renamed for avoid conflicts with some other ports
/usr/local/bin/isql -> /usr/local/bin/isql-fb
/usr/local/bin/gstat -> /usr/local/bin/fbstat
/usr/local/bin/gsplit -> /usr/local/bin/fbsplit
9) Enjoy it ;)
To start it, add the lines indicated in point 4 or 5 of the message to /etc/rc.conf, for example:
firebird_enable="YES"
firebird_mode="superserver"
To compile it as non-root, an easy way is to change the owner of the port directory to your user, for example:
# chown -R foo:foo /usr/ports/databases/firebird25-server
Then, as your user, cd to the port and build by typing only make:
$ cd /usr/ports/databases/firebird25-server
$ make
Then switch back to root to install the port:
# make install
Here is a procedure I used to get around this issue in the past (based on FreeBSD 10.2). It is for the Firebird client but should work similarly for the server. It assumes sudo is set up for the user performing the installation.
cd /usr/ports
sudo chown non-root-user-name distfiles (was root)
cd /usr/ports/databases
sudo chown non-root-user-name firebird25-client (was root)
cd /usr/ports/databases/firebird25-client
make -DPACKAGE_BUILDING (Note: No sudo is used here! This process can take a long time.)
(Note: You may be required to supply root password on this step)
make install clean (Note: You may be required to supply root password on this step)
cd /usr/ports
sudo chown root distfiles
cd /usr/ports/databases
sudo chown root firebird25-client
As for FreeBSD 11.x and Firebird: I was seeing the same "Bus error". I have concluded for now (perhaps incorrectly) that Firebird is not yet compatible with FreeBSD 11.x. If you revert to FreeBSD 10.x, you should not see this problem.

How do I migrate my local postgresql database to azk?

There's directions for migrating an existing MySQL database to an azk image here: http://images.azk.io/#/mysql?_k=yvigvq
How can I do the same for postgresql?
Thanks!
There are basically three ways to restore a dump file to a database running inside azk (they also work with other DBs such as MySQL and MariaDB):
1 - Using a local client (graphical or command-line tool):
Before connecting to the database, you need to find out the database running port:
$ azk start postgres # Ensures the database is running
$ azk status postgres
┌───┬──────────┬───────────┬──────────────┬─────────────────┬─────────────┐
│ │ System │ Instances │ Hostname/url │ Instances-Ports │ Provisioned │
├───┼──────────┼───────────┼──────────────┼─────────────────┼─────────────┤
│ ↑ │ postgres │ 1 │ dev.azk.io │ 1-data:32831 │ - │
└───┴──────────┴───────────┴──────────────┴─────────────────┴─────────────┘
Now we can connect to the database using the host dev.azk.io and the port from the previous command (32831). The username, password and database name are defined in the Azkfile.
2 - Using azk shell and the database CLI:
Repeating the steps described above to find out the database port, you can run the following command:
$ azk shell postgres
$ PGPASSWORD=${POSTGRES_PASS} psql --host dev.azk.io --port 32831 \
    --username ${POSTGRES_USER} --dbname=${POSTGRES_DB} < dbexport.sql
(psql's --password flag only forces a password prompt and does not accept a value, so the password is passed via the PGPASSWORD environment variable instead.)
3 - Using the autoload script from the database image:
Most of the official Docker images for databases have an entrypoint script that looks for files in the folder /docker-entrypoint-initdb.d/ and runs them when the database is initialized. Given that, you can simply mount your dump files (.sql) at that location, as described in the following Azkfile:
systems({
  postgres: {
    image: { docker: "azukiapp/postgres" },
    mounts: {
      "/docker-entrypoint-initdb.d": sync("./dumps"),
    },
  },
});
After starting the postgres system with the command azk start postgres, the dump files will be run automatically.
Note: as you can see in the Postgres script and the MySQL one, the dump files can be plain text (.sql), compressed (.sql.gz) or even shell scripts (.sh).
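For example, a mounted ./dumps folder might look like this (file names are illustrative; the entrypoint processes the files in alphabetical order):

```
dumps/
├── 01-schema.sql      # plain SQL
├── 02-data.sql.gz     # decompressed, then run
└── 03-fixups.sh       # executed as a shell script
```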
I just made a PR adding the instructions to the repository of the image:
https://github.com/azukiapp/docker-postgres/pull/3
See the section "Migrating an existing PostgreSQL Server".