How to put Nextcloud in Kubernetes in maintenance mode - kubernetes

I'm trying to migrate my Nextcloud instance to a Kubernetes cluster. I've successfully deployed a Nextcloud instance using OpenEBS cStor storage. Before I can "kubectl cp" my old files to the cluster, I need to put Nextcloud in maintenance mode.
This is what I've tried so far:
Shell access to pod
Navigate to folder
Run the OCC command to put Nextcloud in maintenance mode
These are the commands I used for the OCC way:
kubectl exec --stdin --tty -n nextcloud nextcloud-7ff9cf449d-rtlxh -- /bin/bash
su -c 'php occ maintenance:mode --on' www-data
# This account is currently not available.
Any tips on how to put Nextcloud in maintenance mode would be appreciated!

The su command fails because there is no shell associated with the www-data user.
What worked for me is explicitly specifying the shell in the su command:
su -s /bin/bash www-data -c "php occ maintenance:mode --on"
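For completeness, the same thing can be run in one shot from outside the pod, which is handy in scripts. A sketch, reusing the pod name from the question and assuming the occ script sits in /var/www/html as in the official Nextcloud image:
kubectl exec -n nextcloud nextcloud-7ff9cf449d-rtlxh -- su -s /bin/bash www-data -c "php /var/www/html/occ maintenance:mode --on"
# ...kubectl cp your old files...
kubectl exec -n nextcloud nextcloud-7ff9cf449d-rtlxh -- su -s /bin/bash www-data -c "php /var/www/html/occ maintenance:mode --off"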

Related

Start an interactive shell into a SQL Server 2019 container running in an AKS pod

I am using the mssql Docker image (Linux) for SQL Server 2019. The default user is not root but mssql.
I need to perform some operations as root inside the container:
docker exec -it sql bash
mssql@7f5a78a63728:/$ sudo <command>
bash: sudo: command not found
Then I start the shell as root:
docker exec -it --user=root sql bash
root@7f5a78a63728:/# <command>
...
This works.
Now I need to do this in a container deployed in an AKS cluster
kubectl exec -it rms-sql-1-sql-server-deployment-86cc45dc5c-tgtm2 -- bash
mssql@rms-sql-1-sql-server-host:/$ sudo <command>
bash: sudo: command not found
as expected. But then:
kubectl exec -it --user=root rms-sql-1-sql-server-deployment-86cc45dc5c-tgtm2 -- bash
error: auth info "root" does not exist
So when the container is in an AKS cluster, starting a shell as root doesn't work.
I then try to ssh into the node and use docker from inside:
kubectl debug node/aks-agentpool-30797540-vmss000000 -it --image=mcr.microsoft.com/aks/fundamental/base-ubuntu:v0.0.11
Creating debugging pod node-debugger-aks-agentpool-30797540-vmss000000-xfrsq with container debugger on node aks-agentpool-30797540-vmss000000.
If you don't see a command prompt, try pressing enter.
root@aks-agentpool-30797540-vmss000000:/# docker ...
bash: docker: command not found
Looks like a Kubernetes cluster node doesn't have docker installed!
Any clues?
EDIT
The image I used locally and in Kubernetes is exactly the same,
mcr.microsoft.com/mssql/server:2019-latest untouched
David Maze rightly mentioned in a comment:
Any change you make in this environment will be lost as soon as the Kubernetes pod is deleted, including if you need to update the underlying image or if its node goes away outside of your control. Would building a custom image with your changes be a more maintainable solution?
Generally, if you want to change something permanently you have to create a new image. Everything you described behaved exactly as it was supposed to. First you exec'd into the container in Docker, then logged in as root. However, in Kubernetes it is a completely different container; perhaps a different image is used. Second, even if you made a change, it would only exist until the container dies. If you want to modify something permanently, you have to create your own image with all the components and the configuration you need. For more information, look at the Pod Lifecycle documentation.
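To make the "new image" route concrete, here is a minimal sketch; the extra package (sudo) and the registry name are placeholders, not something taken from the question:
cat > Dockerfile <<'EOF'
# same base image as in the question
FROM mcr.microsoft.com/mssql/server:2019-latest
# become root only for the build-time changes
USER root
RUN apt-get update && apt-get install -y --no-install-recommends sudo && rm -rf /var/lib/apt/lists/*
# drop back to the unprivileged user for runtime
USER mssql
EOF
docker build -t myregistry.example.com/mssql-custom:2019 .
docker push myregistry.example.com/mssql-custom:2019
Then point the AKS deployment at that image instead of the stock one.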

Install postgres extension into Bitnami container as superuser on initial startup

I am using the Bitnami Postgres Docker container and noticed that my ORM which uses UUIDs requires the uuid-ossp extension to be available. After some trial and error I noticed that I had to manually install it using the postgres superuser since my custom non-root user created via the POSTGRESQL_USERNAME environment variable is not allowed to execute CREATE EXTENSION "uuid-ossp";.
I'd like to know what a script inside /docker-entrypoint-initdb.d might look like that can execute this command into the specific database, to be more precise to automate the following steps I had to perform manually:
psql -U postgres // this requires interactive password input
\c target_database
CREATE EXTENSION "uuid-ossp";
I think that something like this should work
PGPASSWORD=$POSTGRESQL_POSTGRES_PASSWORD psql -U postgres // no interactive password input needed
\c target_database
CREATE EXTENSION "uuid-ossp";
If you want to do it on startup you need to add a file to the startup scripts. Check out the config section of their image documentation: https://hub.docker.com/r/bitnami/postgresql-repmgr/
If you're deploying it via Helm you can add your scripts in the postgresql.initdbScripts variable in values.yaml.
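For example, a minimal init script dropped into /docker-entrypoint-initdb.d might look like the sketch below; the file name is arbitrary and target_database is taken from the question:
#!/bin/bash
# /docker-entrypoint-initdb.d/10-uuid-ossp.sh -- runs once, on first startup of an empty data dir
PGPASSWORD="$POSTGRESQL_POSTGRES_PASSWORD" psql -U postgres -d target_database -c 'CREATE EXTENSION IF NOT EXISTS "uuid-ossp";'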
If the deployment is already running, you'll need to connect as the repmgr user, not the Postgres user you created. The user you created is by default NOT a superuser, for security purposes; this way most of your connections are not privileged.
For example I deployed bitnami/postgresql-ha via helm to a k8s cluster in a namespace called "data" with the release name "prod-pg". I can connect to the database with a privileged user by running
export REPMGR_PASSWORD=$(kubectl get secret --namespace data prod-pg-postgresql-ha-postgresql -o jsonpath="{.data.repmgr-password}" | base64 --decode)
kubectl run prod-pg-postgresql-ha-client \
--rm --tty -i --restart='Never' \
--namespace data \
--image docker.io/bitnami/postgresql-repmgr:14 \
--env="PGPASSWORD=$REPMGR_PASSWORD" \
--command -- psql -h prod-pg-postgresql-ha-postgresql -p 5432 -U repmgr -d repmgr
This drops me into an interactive terminal
$ ./connect-db.sh
If you don't see a command prompt, try pressing enter.
repmgr=#

Entando 6 Installation Issue

I have been trying to install Entando 6 on my Mac following the instructions on http://docs.entando.com; however, when deploying to Kubernetes I get an error with quickstart-kc-deployer. Has anyone managed to successfully go through with the installation?
(screenshot: deployment failure)
I am also new to Kubernetes and have been trying to access the logs to understand the root cause of the failure, but so far I haven't been able to. Help on that would be more than welcome as well.
Thanks.
If you're in a local development environment the best bet would be to try the new instructions at dev.entando.org. If you're installing on a cloud Kubernetes provider try the updated instructions here.
I've reproduced them here for completeness:
Install Multipass (https://multipass.run/#install)
Launch VM
multipass launch --name ubuntu-lts --cpus 4 --mem 8G --disk 20G
Open a shell multipass shell ubuntu-lts
Install k3s curl -sfL https://get.k3s.io | sh -
Download Entando custom resource definitions
curl -L -C - https://raw.githubusercontent.com/entando/entando-releases/v6.2.0/dist/qs/custom-resources.tar.gz | tar -xz
Create custom resources
sudo kubectl create -f dist/crd
Create namespace
sudo kubectl create namespace entando
Download Helm chart
curl -L -C - -O https://raw.githubusercontent.com/entando/entando-releases/v6.2.0/dist/qs/entando.yaml
Configure access to your cluster
IP=$(hostname -I | awk '{print $1}')
sed -i "s/192.168.64.25/$IP/" entando.yaml
If you want to deploy on a cloud provider (EKS, AKS, GKE) then there are new instructions under the Configuration and Operations section at
https://dev.entando.org/next/tutorials
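As for digging into the failure itself, the standard kubectl commands are the place to start; a generic sketch (the pod name and namespace are placeholders you would take from the first command's output):
sudo kubectl get pods --all-namespaces
sudo kubectl describe pod <pod-name> -n <namespace>   # the Events section usually hints at the root cause
sudo kubectl logs <pod-name> -n <namespace>           # container logs; add -p for a previously crashed run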

Create multiple Postgres instances on same machine

To test streaming replication, I would like to create a second Postgres instance on the same machine. The idea is that if it can be done on the test server, then it should be trivial to set it up on the two production servers.
The instances should use different configuration files and different data directories. I tried following the instructions here http://ubuntuforums.org/showthread.php?t=1431697 but I haven't figured out how to get Postgres to use a different configuration file. If I copy the init script, the scripts are just aliases to the same Postgres instance.
I'm using Postgres 9.3 and the Postgres help pages say to specify the configuration file on the postgres command line. I'm not really sure what this means. Am I supposed to install some client for this to work? Thanks.
I assume you can find your way around using the PostgreSQL utilities.
Create the clusters
$ initdb -D /path/to/datadb1
$ initdb -D /path/to/datadb2
Run the instances
$ pg_ctl -D /path/to/datadb1 -o "-p 5433" -l /path/to/logdb1 start
$ pg_ctl -D /path/to/datadb2 -o "-p 5434" -l /path/to/logdb2 start
Test streaming
Now you have two instances running on ports 5433 and 5434. Their configuration files are in the data directories given to initdb. Tweak them for streaming replication (a sketch follows below).
Your default installation remains untouched in port 5432.
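As a rough sketch of the "tweak" step, assuming datadb1 (port 5433) is the primary and datadb2 (port 5434) becomes the standby; these are 9.3-era settings and the paths are the examples above. Note that the standby data directory must be a physical copy of the primary, so the second initdb'd directory gets replaced by a base backup:
$ echo "wal_level = hot_standby" >> /path/to/datadb1/postgresql.conf
$ echo "max_wal_senders = 3" >> /path/to/datadb1/postgresql.conf
$ echo "host replication postgres 127.0.0.1/32 trust" >> /path/to/datadb1/pg_hba.conf   # local test only
$ pg_ctl -D /path/to/datadb1 -o "-p 5433" -l /path/to/logdb1 restart
$ rm -rf /path/to/datadb2 && pg_basebackup -D /path/to/datadb2 -h 127.0.0.1 -p 5433 -U postgres -X stream -R   # -R writes recovery.conf for streaming
$ echo "hot_standby = on" >> /path/to/datadb2/postgresql.conf
$ pg_ctl -D /path/to/datadb2 -o "-p 5434" -l /path/to/logdb2 start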
On Debian based distros you could use pg_createcluster instead of initdb:
$ pg_createcluster -u [user] -g [group] -d /path/to/data -l /path/to/log -p 5433 [version] [cluster_name]
Also pg_ctlcluster is an alternative to pg_ctl.
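Hypothetical usage of those two, assuming PostgreSQL 9.3 and a new cluster named main2:
$ pg_createcluster -u postgres -d /path/to/datadb2 -l /path/to/logdb2 -p 5433 9.3 main2
$ pg_ctlcluster 9.3 main2 start
$ pg_ctlcluster 9.3 main2 status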
Steps to create a new server instance on PostgreSQL 9.5
On command prompt run:
initdb -D Instance_Directory_path -U username -W
(prompts for password)
Once the new instance directory is created, run the command prompt as Administrator:
pg_ctl register -N service_name -D Instance_Directory_path -o "-p port_no"
After the service is registered, start the server:
pg_ctl start -D Instance_Directory_path -o "-p port_no"
To complement the other answers, for CentOS 6 and 7.
After running something like
$ initdb -D /path/to/newdb
You'll have to change at least the port configuration option and, probably, listen_addresses in the postgresql.conf config file.
Instead of starting this new instance immediately, which has been explained in the previous answers, maybe you want the new instance to run automatically on system start (in case of a shutdown, for example). To do this, since CentOS doesn't have the pg_ctl register option (it is Windows-only), you'll have to create a new service file and register it so that systemctl or service can start it up automatically.
CentOS 6
Run the following commands to locate the service's init file:
[root@machine ~]# service postgresql-9.6 edit
Usage: /etc/init.d/postgresql-9.6 {start|stop|status|restart|upgrade|condrestart|try-restart|reload|force-reload|initdb|promote}
[root@machine ~]# cd /etc/init.d # Now we know where the service file is
[root@machine init.d]# cp -p postgresql-9.6 postgresql-9.6_5433
[root@machine init.d]# vi postgresql-9.6_5433
Now you can change the PGDATA directory to the one where the new instance resides. If you're using a PostgreSQL version prior to 9.4 (which you shouldn't be by the time of this answer), you'll have to change PGPORT too, to the port the new instance listens on.
The name of the new service is up to you. I usually take the original service name and append the port number.
Now you only have to register the new service:
[root@machine init.d]# chkconfig postgresql-9.6_5433 on # service registered!
[root@machine init.d]# service postgresql-9.6_5433 start
Starting postgresql-9.6_5433 service: [ OK ]
[root@machine init.d]# service postgresql-9.6_5433 status
postgresql-9.6_5433 (pid 120993) is running...
CentOS 7
In CentOS 7, instead of service you have systemctl to control the services running on the machine, and the commands and paths change a bit. But the process is the same: create a new service file, edit it with the new location/port, register it and start it:
[root@localhost ~]# locate postgresql.service
/etc/systemd/system/multi-user.target.wants/postgresql.service
/usr/lib/systemd/system/postgresql.service
[root@localhost ~]# cd /usr/lib/systemd/system
[root@localhost ~]# cp -p postgresql.service postgresql_5433.service
[root@localhost ~]# vi postgresql_5433.service
# Change PGDATA and maybe PGPORT if PG version <9.4
[root@localhost ~]# systemctl enable postgresql_5433.service
[root@localhost ~]# systemctl start postgresql_5433.service
[root@localhost ~]# systemctl list-unit-files | grep postgres
postgresql.service enabled
postgresql_5433.service enabled

How to run a container in Kubernetes without creating Deployment or Job?

I'm trying to run an interactive Pod (container) in Kubernetes that does not create a Job or Deployment and deletes itself after completing.
The purpose of the container is to give our developers an easy way to access our database, which doesn't have a public IP address.
Currently, we are using this command:
kubectl run -i --tty proxy-pgclient --image=private-registry.com/pgclient --restart=Never --env="PGPASSWORD=foobar" -- psql -h dbhost.local -p 5432 -U pg_admin -W postgres
This works the first time you run it; however, after exiting the session, if you try to run the above again to reconnect to the database, we get:
Error from server: jobs.extensions "proxy-pgclient" already exists
Forcing the developer to delete the job with:
kubectl delete job proxy-pgclient
before they can run the command and connect again.
Is there any way of starting up an interactive container (Pod) in Kubernetes without creating a Job or Deployment object and having that container be deleted when the interactive session is closed?
Adding the "--rm" flag to the original command resulted in the Job (and Pod) being deleted at the completion of the interactive session, which is what I was after. The command then becomes:
kubectl run -i --tty --rm proxy-pgclient --image=private-registry.com/pgclient --restart=Never --env="PGPASSWORD=foobar" -- psql -h dbhost.local -p 5432 -U pg_admin -W postgres
There isn't a short kubectl command that will do exactly what you want. Instead, you can create a yaml/json file with your pod description and run kubectl create -f pod.yaml. Your pod can be set to never restart, so it will terminate once it exits.
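A minimal sketch of such a pod manifest, reusing the image and connection details from the question (untested; adjust names and values to your setup):
cat > pod.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: proxy-pgclient
spec:
  restartPolicy: Never        # terminate once psql exits, never restart
  containers:
  - name: pgclient
    image: private-registry.com/pgclient
    stdin: true
    tty: true
    env:
    - name: PGPASSWORD
      value: foobar
    command: ["psql", "-h", "dbhost.local", "-p", "5432", "-U", "pg_admin", "postgres"]
EOF
kubectl create -f pod.yaml
kubectl attach -it proxy-pgclient
kubectl delete pod proxy-pgclient   # still has to be cleaned up by hand afterwards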