QNAP Container Station Gitlab Email Server - docker-compose

I have a QNAP TS-453A NAS. In Container Station I installed sameersbn's Docker GitLab 10.4.2, but I couldn't find any manual explaining how to configure an email server so that GitLab can send emails, for example when someone forgets their password. Can anyone help me?

I installed the Sameersbn version of GitLab in Container Station as well and found it quite restrictive. My personal recommendation would be to just use the standard CE image that GitLab provides.
At the time I used the Sameersbn version of GitLab, I could not find a way to successfully configure the email server (not saying there isn't one, I just couldn't figure it out). That doesn't mean you can't do it yourself manually, though.
I would recommend that you mount your volumes somewhere on disk instead of within Container Station, so that it is easier to reconfigure any settings manually.
Here is what my docker-compose file looks like. It is very simple, and really the only things you need to care about are the volumes and where you are mounting them to.
web:
  image: 'gitlab/gitlab-ce:latest'
  restart: always
  hostname: <HOST_NAME>
  environment:
    GITLAB_OMNIBUS_CONFIG: |
      external_url '<EXTERNAL_URL>'
  ports:
    - '10080:80'   # Insecure (HTTP) port
    - '10443:443'  # Secure (HTTPS) port
    - '10020:22'   # SSH port
  volumes:
    - '/share/Gitlab/config:/etc/gitlab'   # To configure the email server we care about this one.
    - '/share/Gitlab/logs:/var/log/gitlab'
    - '/share/Gitlab/data:/var/opt/gitlab'
The one we care about is '/share/Gitlab/config:/etc/gitlab'. If you don't know much about volumes and mounting them, the mapping is pretty much '<your_local_location>:<container_location>'. So if I navigate to /share/Gitlab/config on my QNAP NAS, I will find all the configuration for my GitLab instance.
In /share/Gitlab/config you should see a file called gitlab.rb; this is a Ruby file that contains all the configuration for your GitLab instance. If you search in this file, you will find the configuration below:
### GitLab email server settings
###! Docs: https://docs.gitlab.com/omnibus/settings/smtp.html
###! **Use smtp instead of sendmail/postfix.**
# gitlab_rails['smtp_enable'] = true
# gitlab_rails['smtp_address'] = "smtp.server"
# gitlab_rails['smtp_port'] = 465
# gitlab_rails['smtp_user_name'] = "smtp user"
# gitlab_rails['smtp_password'] = "smtp password"
# gitlab_rails['smtp_domain'] = "example.com"
# gitlab_rails['smtp_authentication'] = "login"
# gitlab_rails['smtp_enable_starttls_auto'] = true
# gitlab_rails['smtp_tls'] = false
All you need to do is uncomment these lines (# marks a comment, so just remove it) and fill in your SMTP details.
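For example, a filled-in block for a hypothetical provider might look like the sketch below; the host, port, credentials and From address are placeholders that depend entirely on your email provider:
gitlab_rails['smtp_enable'] = true
gitlab_rails['smtp_address'] = "smtp.example.com"        # placeholder SMTP host
gitlab_rails['smtp_port'] = 587
gitlab_rails['smtp_user_name'] = "gitlab@example.com"    # placeholder credentials
gitlab_rails['smtp_password'] = "your-smtp-password"
gitlab_rails['smtp_domain'] = "example.com"
gitlab_rails['smtp_authentication'] = "login"
gitlab_rails['smtp_enable_starttls_auto'] = true
gitlab_rails['smtp_tls'] = false
gitlab_rails['gitlab_email_from'] = "gitlab@example.com" # optional: the From address GitLab sends with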
This will require you to reconfigure your GitLab instance, so you will need to open a shell in your GitLab container (via docker exec or SSH) and run the reconfigure command.
Essentially you need to find a way of getting to the gitlab.rb file so you can amend the SMTP email server settings; a rough sketch follows.
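This is only a sketch, assuming the gitlab/gitlab-ce container from the compose file above is simply named gitlab (substitute whatever Container Station called yours):
# Open a shell inside the running GitLab container (container name is an assumption)
docker exec -it gitlab /bin/bash

# Apply whatever you changed in /etc/gitlab/gitlab.rb
gitlab-ctl reconfigure

# Optionally, send a test mail from the Rails console to verify the SMTP settings
gitlab-rails console
# then: Notify.test_email('you@example.com', 'Test subject', 'Test body').deliver_now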
Some good reading material for installing GitLab via Docker are:
https://docs.gitlab.com/omnibus/docker/
https://docs.gitlab.com/ee/install/docker.html
https://developer.ibm.com/code/2017/07/13/step-step-guide-running-gitlab-ce-docker/
https://www.digitalocean.com/community/tutorials/how-to-build-docker-images-and-host-a-docker-image-repository-with-gitlab
(Please note that some additional configuration may be needed to allow writes to /share/Gitlab/config; you can do this with the chmod command via SSH.)
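A minimal sketch of that permissions tweak over SSH on the NAS, assuming the mount paths from the compose file above (tighten the mode if your setup allows):
chmod -R 775 /share/Gitlab/config /share/Gitlab/logs /share/Gitlab/data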

Related

Which is the correct PiHole DNS Entry

Over the last couple of weeks I moved from configuring Pi-hole by clicking around in Portainer to using stacks / docker-compose.yaml.
However, this also limited the functionality of my Pi-hole. At some point it was no longer possible to run the gravity update via the Pi-hole web interface; I always had to go to the Pi-hole console and run
pihole -g
Manually added blacklist and whitelist entries were also only taken into account after a manual update, and deactivating Pi-hole via the web interface no longer worked at all.
I was able to fix this by removing the following entries from my docker-compose file:
environment:
  PIHOLE_DNS_: 9.9.9.9#53;9.9.9.9#53
  DNS1: 9.9.9.9  # Quad9 (filtered, DNSSEC)
  DNS2: 9.9.9.9  # If we don't specify two, it will auto pick google.
security_opt:
  - no-new-privileges:true
cap_add:
  - NET_ADMIN
dns:
  - 127.0.0.1
  - 9.9.9.9
This config led to 9.9.9.9 appearing as the "Custom 1" upstream DNS server. For now I have just ticked an upstream server manually (under Settings, in the list on the left). Which of these DNS entries do I have to keep, and why does Pi-hole treat the address as a custom server rather than one of the standard DNS entries?
Are these settings stored in one of the volumes? I could not find any corresponding entries among the Portainer environment variables once I had removed them explicitly.
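For reference, a minimal sketch of how upstream resolvers are commonly declared for the official pihole/pihole image: PIHOLE_DNS_ takes a single semicolon-separated list and has superseded the older DNS1/DNS2 variables. The values below simply mirror the Quad9 address from the question, and the volume paths are illustrative:
services:
  pihole:
    image: pihole/pihole:latest
    environment:
      # single semicolon-separated list of upstream resolvers
      PIHOLE_DNS_: "9.9.9.9;9.9.9.9"
    volumes:
      - ./etc-pihole:/etc/pihole          # persisted settings (including upstreams) live here
      - ./etc-dnsmasq.d:/etc/dnsmasq.d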

Lando wtih ParcelJS: exposing port

I'm trying to use ParcelJS with Lando and there's one problem if you want HMR to work. You need to expose a port and that seems to be much harder than it should be with Lando. :(
So I know I need to do this for my ParcelJS watch command:
parcel watch dev/scripts.js --out-dir prod/ --hmr-port 6101
Then I need to expose the port I've assigned, in this case "6101", to Docker (via my Lando config file). But that's where it gets tricky, apparently because of the proxy setup Lando uses.
My current .lando.yml config is below, but it doesn't work as expected and the port is not exposed. I still get a "scripts.js:224 WebSocket connection to 'wss://testwp.lndo.site:6101/' failed:" error from my ParcelJS-generated script in my browser's dev tools:
name: testwp
recipe: wordpress
config:
  php: '8.0'
  via: nginx
  webroot: wordpress
  database: mysql:8.0
services:
  appserver:
    portforward: 6101
I saw a similar post about a problem with LocalWP which does about the same thing Lando does.
Can you maybe try adding the flag --hmr-hostname localhost?
It's either that or --hmr-hostname testwp.lndo.site.
UPDATE:
After checking the Parcel CLI docs, the flag could also be --hmr-host localhost; try that as well.
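Putting that together with the watch command from the question, a hedged sketch (whether the flag is spelled --hmr-hostname or --hmr-host depends on your Parcel version, so adjust accordingly):
parcel watch dev/scripts.js --out-dir prod/ --hmr-port 6101 --hmr-hostname testwp.lndo.site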

cloud-init ignoring static IP network configuration

I'm running the Ubuntu 18.04 cloud image and trying to configure networking through cloud-init. For some reason it ignores my network configuration when I try to assign a static IP and just falls back to DHCP. I'm not sure why, and I'm not sure how to debug it. Does anyone know if I am doing something wrong, or how I should troubleshoot this further?
Here is my config.yaml I'm using to generate my config.img:
#cloud-config
network:
  version: 2
  ethernets:
    ens2:
      dhcp4: false
      dhcp6: false
      addresses: [10.0.0.40/24]
      gateway4: 10.0.0.1
password: secret # for the 'ubuntu' user in case we can't SSH in
chpasswd: { expire: false }
ssh_pwauth: true
users:
  - default
  - name: brennan
    ssh_import_id: gh:brennancheung
    sudo: ALL=(ALL) NOPASSWD:ALL
hostname: vm
runcmd:
  - [ sh, -xc, "echo Here is the network config for your instance" ]
  - [ ip, a ]
final_message: "Cloud init is done. Woohoo!"
Everything else in the config seems to be working; it's as if cloud-init doesn't even see the network portion.
I'm attaching the .img as a CD-ROM so cloud-init can read it. You can see how I'm running it here: https://github.com/brennancheung/playbooks/blob/master/cloud-init-lab/Makefile
NOTE: Once I'm logged into the box, I can copy the network section above into the config in /etc/netplan, re-apply it, and the networking comes up fine with a static IP. So I don't think there are any obvious errors that I'm missing. This leads me to believe the issue lies with the cloud-init networking module(s) and not with netplan itself.
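For what it's worth, that manual check looks roughly like the following; on these images the generated file is typically /etc/netplan/50-cloud-init.yaml (the name may differ on yours):
# /etc/netplan/50-cloud-init.yaml -- replaced with the network section from above
network:
  version: 2
  ethernets:
    ens2:
      dhcp4: false
      dhcp6: false
      addresses: [10.0.0.40/24]
      gateway4: 10.0.0.1

Then re-apply and check:
sudo netplan apply
ip a   # the static 10.0.0.40 address should now be on ens2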
I finally figured it out. Hopefully this helps someone else.
Apparently you can't supply networking configuration in user-data. You have to specify it in the cloud provider's data source or in the metadata. To do that, you have to move the network section into its own file and build the cloud-init image with the --network-config=... option.
Ex:
cloud-localds -v --network-config=network-config-v2.yaml seed.img user-data.yaml
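Assuming network-config-v2.yaml simply carries the network section that used to live in user-data, it would look something like this (for the NoCloud datasource the v2 config is typically written without the top-level network: key):
# network-config-v2.yaml (sketch; mirrors the network section from the question)
version: 2
ethernets:
  ens2:
    dhcp4: false
    dhcp6: false
    addresses: [10.0.0.40/24]
    gateway4: 10.0.0.1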
I have the complete setup for configuring and booting a cloud instance in a local KVM if it helps anyone else out.
https://github.com/brennancheung/playbooks/tree/master/cloud-init-lab
If you notice, in /etc/cloud/cloud.cfg.d there is a file called 99-fake-cloud.cfg (or something similar). If you delete it, cloud-init will configure the network using the parameters in your user-data file (i.e. /etc/cloud/cloud.cfg).

Making nextcloud work on a prefixed path (using docker and caddy)

I'm trying to set up my own instance of Nextcloud on my server, but I'm running into a problem: I want Nextcloud to be available under https://example.com/cloud/.
Nextcloud is running in a CoreOS virtual machine called, let's say, myvm.
So this is the way I set up my Caddyfile:
example.com {
    gzip
    proxy /cloud myvm:8080 {
        transparent
        without /cloud
    }
}
I have other proxies for other services and VMs that are written similarly and work fine.
With this, and by publishing port 8080 in my docker-compose file, I can connect to the Nextcloud instance. But every time I go to example.com/cloud/ it redirects me to example.com/apps/files/ instead of example.com/cloud/apps/files/.
If I enter that last URL manually I can access Nextcloud, but the page doesn't load properly, because all of its assets are requested without the cloud/ prefix.
Is there a way to tell Nextcloud about this prefix through the docker-compose configuration? (It's the only configuration I created; it works with just that and no extra work. I use one similar to the one available here (the Apache one).)
Or maybe I can improve the Caddyfile config? (By the way, if I don't use the without option, it doesn't work at all and just returns 404 when I go to the URL.)

RabbitMQ failed to start, TCP connection succeeded but Erlang distribution failed

I'm new to this and have just started to learn and install RabbitMQ on Windows.
I installed the Erlang VM and RabbitMQ in a custom folder, not the default folder (both of them).
Then I restarted my computer.
By the way, my computer name is "NULL".
I cd into the RabbitMQ/sbin folder and run the command:
rabbitmqctl status
But the return message is:
Status of node rabbit@NULL ...
Error: unable to perform an operation on node 'rabbit@NULL'.
Please see diagnostics information and suggestions below.

Most common reasons for this are:
 * Target node is unreachable (e.g. due to hostname resolution, TCP connection or firewall issues)
 * CLI tool fails to authenticate with the server (e.g. due to CLI tool's Erlang cookie not matching that of the server)
 * Target node is not running

In addition to the diagnostics info below:
 * See the CLI, clustering and networking guides on http://rabbitmq.com/documentation.html to learn more
 * Consult server logs on node rabbit@NULL

DIAGNOSTICS
===========

attempted to contact: [rabbit@NULL]

rabbit@NULL:
  * connected to epmd (port 4369) on NULL
  * epmd reports node 'rabbit' uses port 25672 for inter-node and CLI tool traffic
  * TCP connection succeeded but Erlang distribution failed
  * Authentication failed (rejected by the remote node), please check the Erlang cookie

Current node details:
 * node name: rabbitmqcli70@NULL
 * effective user's home directory: C:\Users\Jerry Song
 * Erlang cookie hash: 51gvGHZpn0gIK86cfiS7vp==
I have tried to RESTART RabbitMQ, and what I get is:
ERROR: node with name "rabbit" already running on "NULL"
By the way, my computer name is "NULL".
And I have enabled all ports in the firewall.
https://groups.google.com/forum/#!topic/rabbitmq-users/a6sqrAUX_Fg
describes the problem, where there is a cookie mismatch on a fresh installation of RabbitMQ. The easy solution on Windows is to synchronize the cookies.
It is also described here: http://www.rabbitmq.com/clustering.html#erlang-cookie
Ensure the cookies are synchronized across locations 1, 2 and (optionally) 3 below:
1. %HOMEDRIVE%%HOMEPATH%\.erlang.cookie (usually C:\Users\%USERNAME%\.erlang.cookie for user %USERNAME%) if both the HOMEDRIVE and HOMEPATH environment variables are set
2. %USERPROFILE%\.erlang.cookie (usually C:\Users\%USERNAME%\.erlang.cookie) if HOMEDRIVE and HOMEPATH are not both set
3. For the RabbitMQ Windows service - %USERPROFILE%\.erlang.cookie (usually C:\WINDOWS\system32\config\systemprofile)
The cookie file used by the Windows service account and the one used by the user running the CLI tools must be synchronized by copying the one from the C:\WINDOWS\system32\config\systemprofile folder.
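A sketch of that copy from an elevated command prompt; the destination assumes your CLI user's cookie lives under %USERPROFILE%, so pick whichever of the locations above applies on your machine:
:: run from an elevated cmd.exe
copy /Y "C:\Windows\system32\config\systemprofile\.erlang.cookie" "%USERPROFILE%\.erlang.cookie"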
If you are using dedicated drive/folder locations for your development tools/software on Windows 10 (not the Windows default locations), one way to synchronize the Erlang cookie, as described at https://www.rabbitmq.com/cli.html, is to copy the cookie as explained below.
Please note that in my case the HOMEDRIVE and HOMEPATH environment variables are both unset.
After copying "C:\Windows\system32\config\systemprofile\.erlang.cookie" to "C:\Users\%USERNAME%\.erlang.cookie",
the error "TCP connection succeeded but Erlang distribution failed" was resolved.
Now I am able to use the "rabbitmqctl.bat status" command successfully. So there is no need to install to the default location to resolve this error; synchronizing the cookie is enough.
In my case a similar issue (authentication failed because of an Erlang cookie mismatch) was solved by copying the .erlang.cookie file from the Windows system dir, C:\Windows\system32\config\systemprofile\.erlang.cookie, to %HOMEDRIVE%%HOMEPATH%\.erlang.cookie (where %HOMEDRIVE% was set to H: and %HOMEPATH% to \ respectively).
Quick setup TODO for Windows, Erlang OTP 24 and RabbitMQ 3.8.19:
1. Download & install Erlang [OTP 24] (needs Admin rights) from:
https://www.erlang.org/downloads
2. Set ERLANG_HOME (should point to the Erlang install dir).
3. Download & install recent [3.8.19] RabbitMQ (needs Admin rights) from:
https://github.com/rabbitmq/rabbitmq-server/releases/
4. Follow: https://www.rabbitmq.com/install-windows.html and/or
https://www.rabbitmq.com/install-windows-manual.html
5. Set RABBITMQ_SERVER (should point to the RabbitMQ install dir).
6. Update %PATH% by adding: ;%RABBITMQ_SERVER%\sbin
7. Fix the Erlang-cookie issue from above; follow: https://www.rabbitmq.com/cli.html#erlang-cookie
8. Enable the Web UI by running: %RABBITMQ_SERVER%/sbin/rabbitmq-plugins.bat enable rabbitmq_management
9. Item #8 (above) gave an error because of a missing file: %USERPROFILEDIR%/AppData/Roaming/RabbitMQ/enabled_plugins -> I had to create it and run %RABBITMQ_SERVER%/sbin/rabbitmq-plugins.bat enable rabbitmq_management again!
10. A run/restart along the way might be required.
11. Finally, log in to: http://localhost:15672/ (guest:guest), or check via cURL:
curl -i -u guest:guest http://localhost:15672/api/vhosts
You should receive a response like:
HTTP/1.1 200 OK
cache-control: no-cache
content-length: 186
content-security-policy: script-src 'self' 'unsafe-eval' 'unsafe-inline';
object-src 'self'
content-type: application/json
date: Tue, 13 Jul 2021 11:21:12 GMT
server: Cowboy
vary: accept, accept-encoding, origin
[{"cluster_state":{"rabbit#hostname":"running"},"description":"Default virtual host","metadata":{"description":"Default virtual host","tags":[]},"name":"/","tags":[],"tracing":false}]
P.S. Some useful RabbitMQ CLI commands (copy-paste):
%RABBITMQ_SERVER%/sbin/rabbitmqctl start_app
%RABBITMQ_SERVER%/sbin/rabbitmqctl stop_app
%RABBITMQ_SERVER%/sbin/rabbitmqctl status
P.P.S. UPDATE: a great article on this subject: https://www.journaldev.com/11655/spring-rabbitmq
I have reinstalled RabbitMQ on my computer using the default setup folder.
Then I checked with the command:
rabbitmqctl status
It works now, so it was not a problem with the Erlang VM (meaning Erlang can be installed in another folder).
Something that I couldn't track down (like this issue) goes wrong if we don't use RabbitMQ's default setup folder (C:\Program Files\RabbitMQ Server).
If anyone finds out why, I hope you can tell me how to fix it.
How I resolved mine
It's mostly caused by a cookie mismatch on a fresh installation of RabbitMQ.
Follow these 2 steps:
1. Copy the .erlang.cookie file from C:\Windows\System32\config\systemprofile and paste it into your C:\Users\[your username] folder.
2. Run rabbitmq-service.bat stop and then rabbitmq-service.bat start.
Done. It should work now when you run 'rabbitmqctl start_app'. Good luck.
Note: if you have more than one user, put it in the correct user's folder.
On CentOS:
Add an "IP nodename" pair to /etc/hosts on each node.
Restart the rabbitmq-server service on each slave node.
Works for me.
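A rough sketch of those two steps; the addresses and node names below are placeholders for your own cluster:
# /etc/hosts on each node (IPs and hostnames are illustrative)
10.0.0.11  rabbit-node1
10.0.0.12  rabbit-node2

# then, on each slave node
sudo systemctl restart rabbitmq-server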
I got an error like this; I just stopped my RabbitMQ by killing the process holding port 25672.
Here is the syntax for Linux:
kill -9 $(lsof -t -i:25672)
Just adding my experience in case it helps others down the line.
I wrote a PowerShell .ps1 script to install and configure RabbitMQ, to be used as one of the steps to provision a server with Packer.
I wrote the code on a fresh AWS Windows Server 2016 build. It worked fine when run on the box (as administrator, from an admin PS console), but when the same code was moved over to the Packer build server it would fall over during the rabbitmqctl.bat configuration steps run via Packer, despite both using (as far as I can tell) Administrator to run the scripts.
So this worked on the coding box:
$pathvargs = {cmd.exe /c "rabbitmqctl.bat" add_user Username Password}
Invoke-Command -ScriptBlock $pathvargs
$pathvargs = {cmd.exe /c "rabbitmqctl.bat" set_user_tags User administrator}
Invoke-Command -ScriptBlock $pathvargs
$pathvargs = {cmd.exe /c "rabbitmqctl.bat" set_permissions -p "/" User "^User-.*" ".*" ".*"}
Invoke-Command -ScriptBlock $pathvargs
Write-Host "Did RabbitMQ"
But I had to precede this with...
copy "C:\Windows\system32\config\systemprofile\.erlang.cookie" "C:\Program Files\RabbitMQ Server\rabbitmq_server-3.7.17\sbin\.erlang.cookie"
copy "C:\Windows\system32\config\systemprofile\.erlang.cookie" $env:userprofile\.erlang.cookie -force
... On the Packer box.
I am guessing there is some context issue going on but I'm using
"winrm_username": "Administrator",
in the Packer builders block, so I thought this would suffice.
TL;DR - Use the Cookie even though it works without it in some instances.
I encountered the same error after installing the Erlang VM and RabbitMQ using the default installation folders on Windows 10. I managed to start the management plugin and access it via HTTP, but status failed with this error.
The cookie was fine in all folders (%HOMEDRIVE%%HOMEPATH%, %USERPROFILE%, C:\WINDOWS\system32\config\systemprofile).
I had to restart Windows to make it work. After the restart it set something up to run at startup and asked for permission to add an exception to the firewall.
In my case the file was at C:\Windows\.erlang.cookie; I just copied it to C:\Users\{USERNAME} and everything works. Thanks to everyone for the hints.
Another thing to check, after making sure the cookie file is in all the locations, is whether you installed 32-bit Erlang instead of 64-bit.
That happened to me. I removed the 32-bit Erlang, installed the 64-bit version, and rabbitmqctl status returned the expected results.