Using Certbot I have difficulties with separate certificates for multiple domains

My problem is that if I run certbot multiple times for multiple domains, only the last domain (certificate request) seems to be operational.
certbot certonly --standalone --rsa-key-size 4096 -d example1.com
certbot certonly --standalone --rsa-key-size 4096 -d example2.com
Also, the files, i.e. /etc/letsencrypt/live/example1.com, are overwritten by /etc/letsencrypt/live/example2.com. I would expect to have two folders in /etc/letsencrypt/live/, one for example1.com and one for example2.com.
My expectation would be that both certificates (and their files under /etc/letsencrypt/live/) could exist in parallel.
Before trying this, I used a single certificate for the multiple domains, with all domains listed after -d options.
certbot certonly --standalone --rsa-key-size 4096 -d example1.com -d example2.com
This worked perfectly, but does not fit my needs. I need a single certificate per domain.
I appreciate your help and hope I could make my point clear.
I am using Ubuntu 18.04 and certbot 0.27.0.

The problem was definitely on my side ... I was using a self-made script and overlooked that at the beginning it deleted the whole /etc/letsencrypt directory.
After removing this delete command, everything worked as expected.
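For reference, a minimal sketch of per-domain issuance (the domain names are placeholders; this assumes the standalone plugin can temporarily bind port 80 and that nothing wipes /etc/letsencrypt between runs):
# issue one separate certificate per domain; each gets its own lineage under /etc/letsencrypt/live/
for domain in example1.com example2.com; do
    certbot certonly --standalone --rsa-key-size 4096 --cert-name "$domain" -d "$domain"
done
ls /etc/letsencrypt/live/   # should now list example1.com and example2.com side by side
The --cert-name flag just makes each lineage name explicit; certbot would normally derive it from the first -d value anyway.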

Related

Setting up Teleport - Configure Domain Name and obtain TLS certificates using Let's Encrypt

OK, so what I want to do is set up Teleport on a remote machine so I can access it over the internet. I am following the tutorial, but I am no expert on any of this stuff. I try to copy-paste the commands from here:
https://goteleport.com/teleport/docs/quickstart/
section 1c:
certbot certonly \
--manual \
--preferred-challenges=dns \
--agree-tos \
--manual-public-ip-logging-ok \
--email foo@example.com \
-d "teleport.example.com, *.teleport.example.com"
I changed the email and domain name, as it says in the tutorial, but then it asks me to:
Please deploy a DNS TXT record under the name
_acme-challenge.teleport.example.com with the following value:
XXXX
How am I supposed to change that? And where?
Thank you!
When choosing --preferred-challenges=dns, certbot will ask you to confirm that you own the domain by placing a TXT record. You can do that if you own the domain and can access its DNS configuration.
You want to create your own domain (check Freenom to obtain one at no cost) over which you have full control and where you can add the TXT record.
Here is a reference for creating DNS records at Freenom for Microsoft.
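Once the TXT record is in place, you can check that it has propagated before letting certbot continue (the hostname is the one from the prompt above):
dig +short TXT _acme-challenge.teleport.example.com   # should print the value certbot asked you to deploy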

Self-signed certificate for BigBlueButton

I have a local server without any domain or public IP for it. I'm going to set up an SSL self-signed certificate for BigBlueButton. How can I do that on my local server?
Without host and domain names, self-signed certificates will be the only option, which means they will not be valid SSL certificates. I don't know BigBlueButton, but its documentation doesn't recommend this setup for production environments. Not every browser will accept it either.
However, if you want to give it a try, you can generate self-signed SSL certs on Linux using this command:
sudo openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout selfsigned.key -out selfsigned.crt
These options will create both a key file and a certificate. You will be asked a few questions about the server in order to embed the information correctly in the certificate.
And then you can try to adapt the instructions here.
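As a rough sketch of how the generated key and certificate could then be wired into nginx (the paths and server name below are placeholders, not BigBlueButton's actual configuration layout):
server {
    listen 443 ssl;
    server_name bbb.local;   # hypothetical local hostname
    ssl_certificate     /etc/ssl/certs/selfsigned.crt;
    ssl_certificate_key /etc/ssl/private/selfsigned.key;
    # ... the rest of the BigBlueButton nginx configuration ...
}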
I was setting up a BBB environment recently.
A self-signed certificate is no good. To get it working I had to:
use a real server setup (with Let's Encrypt) and a real domain to get real certificates
copy the certificates to my local development setup (and update the nginx config, of course)
set up /etc/hosts locally
Use a real SSL certificate. I had to:
Install BBB. Use the IP instead of a hostname. See
https://docs.bigbluebutton.org/2.2/install.html#configure-nginx-to-use-https
Example:
wget -qO- https://ubuntu.bigbluebutton.org/bbb-install.sh | bash -s -- -v bionic-230 -s 10.211.55.9 -e me@example.com -a -w
Configure nginx to use HTTPS for your real domain (the order of certificates is very important). See
https://docs.bigbluebutton.org/2.2/install.html#configure-nginx-to-use-https
Add the IP and your domain to the hosts file. Example:
10.211.55.9 example.com
Use this command to change the domain:
bbb-conf --setip example.com

rsync unable to connect error

I am trying to use rsync to upload files to my server alongside Travis and GitHub. I have this line in a deploy.sh script: rsync -avhP $f deploy@multicrew.co.uk:/var/www/test/ and whenever I try to upload the $f files I get this error:
ssh: connect to host multicrew.co.uk port 22: Cannot assign requested address
Within my .travis.yml file I have this code:
addons:
  ssh_known_hosts: multicrew.co.uk
before_install:
  - openssl aes-256-cbc -K $encrypted_8c9513462553_key -iv $encrypted_8c9513462553_iv -in deploy/deploy_rsa.enc -out /tmp/deploy_rsa -d
  - eval "$(ssh-agent -s)"
  - chmod 600 /tmp/deploy_rsa
  - ssh-add /tmp/deploy_rsa
  - chmod +x deploy/deploy.sh
after_success: "deploy/deploy.sh"
I do not know why rsync cannot assign the requested address. I have an A record set up within Cloudflare that points multicrew.co.uk to my server's IP.
The error you are getting looks like it is caused by an outstanding issue with IPv6 on Travis CI.
However because, at the time of writing, your multicrew.co.uk domain is proxied by Cloudflare and Cloudflare only proxies HTTP traffic, the suggested fix of disabling IPv6 will not work.
You'll need to either create a separate non-proxied (grey cloud) hostname to use with SSH/rsync, change the rsync command to connect directly to the server's IP address, or disable Cloudflare proxying for the multicrew.co.uk hostname.
Note that adding a non-proxied hostname in DNS will expose your server's IP address. You might want to restrict access on your server to just the Travis CI and Cloudflare IP ranges (e.g. with firewall rules or in the web server configuration).
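For example, a hedged variant of the deploy line, using a hypothetical non-proxied hostname and forcing IPv4 to sidestep the Travis IPv6 issue mentioned above:
# direct.multicrew.co.uk is a placeholder for a grey-cloud DNS record pointing at the server's IP
rsync -avhP -e "ssh -4" $f deploy@direct.multicrew.co.uk:/var/www/test/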

Validate haproxy.cfg

Is there any way to validate the HAProxy haproxy.cfg file before restarting the HAProxy service? For example, there might be a small spelling/syntax error in a larger haproxy.cfg file. I searched through several forums, but was unable to find anything related to validating haproxy.cfg files for syntax errors.
As of now, I use trial and error on a developer machine before I upload the changes to a production server.
The official HAProxy configuration file check is buried in the help output:
/usr/local/sbin/haproxy --help
There are two ways to check the haproxy.cfg syntax.
One way is /usr/local/sbin/haproxy -c -V -f /etc/haproxy/haproxy.cfg
which validates the file syntax. The -c switch stands for "check", while the others denote "verbose" and "file".
Another way is to run sudo service haproxy configtest
I hope this helps anyone looking to check the syntax of the haproxy.cfg file before restarting the service.
We are using this command:
sudo haproxy -f /etc/haproxy/haproxy.cfg -c
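One common pattern (a sketch, assuming a systemd-based install) is to gate the reload on a successful check:
sudo haproxy -c -f /etc/haproxy/haproxy.cfg && sudo systemctl reload haproxy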

wsgi nginx error: permission denied while connecting to upstream

There seem to be many questions on StackOverflow about this but unfortunately nothing has worked for me.
I'm getting a 502 Bad Gateway on nginx, and the following in the logs: connect() to ...myproject.sock failed (13: Permission denied) while connecting to upstream
I'm running uWSGI and nginx on Ubuntu, and I've been following this guide from DigitalOcean. I apparently configured uWSGI correctly, since uwsgi -s myproject.sock --http 0.0.0.0:8000 --module app --callable app worked, but I keep getting the nginx permission denied error and I have no idea why.
After coming across this question and this other one, I changed the .ini file and added the chown-socket, chmod-socket, uid and gid parameters (I also tried setting just the first two, either one, and a couple of different permission settings; even the most permissive didn't work).
This one seemed promising, but I don't believe SELinux is installed on my Ubuntu (running sudo apt-get remove selinux gives "Package 'selinux' is not installed, so not removed" and find / -name "selinux" doesn't show anything). Just in case, though, I tried what this post recommended as well. Uninstalling apparmor (sudo apt-get remove apparmor) didn't work either.
Every time I make a change, I run sudo service nginx restart, but I only see the 502 Bad Gateway error (and the permission denied error when I read the logs).
This is my nginx configuration file:
server {
    listen 80;
    server_name 104.131.110.156;

    location / {
        include uwsgi_params;
        uwsgi_pass unix:/home/user/myproject/web_server/myproject.sock;
    }
}
.conf file:
description "uWSGI server instance configured to serve myproject"
start on runlevel [2345]
stop on runlevel [!2345]
setuid user
setgid www-data
env PATH=/root/.virtualenvs/my-env/bin
chdir /home/user/myproject/web_server
exec uwsgi --ini /home/user/myproject/web_server/myproject.ini
.ini file:
[uwsgi]
module = wsgi
master = true
processes = 5
socket = /home/user/myproject/web_server/myproject.sock
chown-socket=www-data:www-data
chmod-socket = 664
uid = www-data
gid = www-data
vacuum = true
die-on-term = true
(If it helps, these are the specs of my Digital Ocean machine: Linux 3.13.0-43-generic #72-Ubuntu SMP Mon Dec 8 19:35:06 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux)
Please let me know if there's anything I can do, and thank you very much.
After following all the advice in this thread I was still getting permission errors. The final missing piece was to correct the nginx user in the /etc/nginx/nginx.conf file:
# old: user nginx;
user www-data;
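To confirm which user the worker processes actually run as after the change (a quick generic check, not specific to this setup):
ps -eo user,comm | grep nginx   # the worker processes should now show www-data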
I also followed that tutorial and ran into the same issue. After quite a bit of trial and error, the following steps allowed me to run uWSGI and nginx successfully:
My nginx config file:
server {
    listen 80;
    server_name localhost;

    location / { try_files $uri @yourapplication; }

    location @yourapplication {
        include uwsgi_params;
        uwsgi_pass unix:/PATH_TO_PROJECT/PROJECT.sock;
    }
}
My .ini file wasn't working very well, so I decided to take advantage of uWSGI's extensive command-line arguments instead. Here's what I used:
uwsgi -s /PATH_TO_PROJECT/PROJECT.sock -w wsgi:app -H /PATH_TO_PROJECT/venv --http-processes=4 --chmod-socket=666 --master &
Where:
-s /PATH_TO_PROJECT/PROJECT.sock = the location of my .sock file
-w wsgi:app = the location of my wsgi.py file and app being the name of my Flask object
-H /PATH_TO_PROJECT/venv = the location of my virtual environment
--http-processes=4 = the number of http processes for uWSGI to create
--chmod-socket=666 = the permissions to set on the socket
--master = allow uWSGI to run with its master process manager
& = run uWSGI in the background
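If you run it backgrounded like that, it can help to also pass a pidfile so the master can be stopped cleanly later (a sketch; the pidfile path is arbitrary):
uwsgi -s /PATH_TO_PROJECT/PROJECT.sock -w wsgi:app -H /PATH_TO_PROJECT/venv --http-processes=4 --chmod-socket=666 --master --pidfile /tmp/PROJECT.pid &
# later, to stop the backgrounded instance:
uwsgi --stop /tmp/PROJECT.pid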
To summarize what others have said: the permission denied error in nginx (which you can see in /var/log/nginx/error.log) is usually due to one of the following:
you are writing the .sock file in a place nginx does not have permission to access
SELinux is causing the problem
To solve 1: First, don't write the .sock file to /tmp as suggested in this Server Fault answer, because different services see a different /tmp on Fedora. You can write it to some place such as ~/myproject/mysocket.sock. The nginx user must have access to our application directory in order to access the socket file there. By default, CentOS locks down each user's home directory very restrictively, so we will add the nginx user to our user's group so that we can then open up the minimum permissions necessary to grant access.
You can add the nginx user to your user group with the following command. Substitute your own username for the user in the command:
sudo usermod -a -G $USER nginx
Now, we can give our user group execute permissions on our home directory. This will allow the Nginx process to enter and access content within:
chmod 710 /path/to/project/dir
If the permission denied error is still there, then the hack sudo setenforce 0 will do the trick.
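You can also sanity-check that the nginx user can actually traverse into the project directory (the path below is illustrative, and on Debian/Ubuntu the user is typically www-data rather than nginx):
sudo -u nginx test -x /home/user/myproject && echo "nginx can enter the directory"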
The socket path unix:/PATH_TO_PROJECT/PROJECT.sock should be placed in /tmp; this fixed my problem.
(13: Permission denied)
This indicates that Nginx was unable to connect to the uWSGI socket because of permissions problems. Usually, this happens when the socket is being created in a restricted environment or if the permissions were wrong. While the uWSGI process is able to create the socket file, Nginx is unable to access it.
This can happen if there are limited permissions at any point between the root directory (/) and the socket file. We can see the permissions and ownership values of the socket file and each of its parent directories by passing the absolute path to our socket file to the namei command:
namei -nom /PATH_TO_YOUR_SOCKET_FILE/YOUR_SOCKET.sock
The output should be similar to this (your case might have different folder names):
f: /run/uwsgi/firstsite.sock
drwxr-xr-x root root /
drwxr-xr-x root root run
drwxr-xr-x sammy www-data uwsgi
srw-rw---- sammy www-data firstsite.sock
The output displays the permissions of each of the directory components. By looking at the permissions (first column), owner (second column) and group owner (third column), we can figure out what type of access is allowed to the socket file.
In the above example, each of the directories leading up to the socket file have world read and execute permissions (the permissions column for the directories end with r-x instead of ---). The www-data group has group ownership over the socket itself. With these settings, Nginx process should be able to access the socket successfully.
If any of the directories leading up to the socket are not owned by the www-data group or do not have world read and execute permission, Nginx will not be able to access the socket. Usually, this means that the configuration files have a mistake.
So you can fix this issue by giving all the parent directories the needed permission using this command:
chmod 755 directory_name
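If several parent directories are missing the traverse bit, a small hedged loop like this (the starting path is a placeholder) can apply it all the way up the tree:
d=/PATH_TO_YOUR_SOCKET_DIR
while [ "$d" != "/" ]; do
    sudo chmod o+x "$d"   # give "others" execute (traverse) permission on each parent directory
    d=$(dirname "$d")
done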
I know it is late, but to help others overcome the issue faster, I have posted this answer. Hope it helps, good luck.
This may happen if the www-data user does not have permission to create a new socket in the given path, so use the root user.
Replace user www-data with user root in nginx.conf;
ex: #nginx.conf
#user www-data;
user root;
worker_processes auto;
pid /var/run/nginx.pid;
.............
If you have tested all the permissions and it is still not working then maybe SELinux is enabled, this will cause the same behaviour.
Run getenforce; if the result is Enforcing, then fixing permissions alone will not help.
A quick fix is to disable it with setenforce 0, but a restart is required.
Two things helped me.
I had the correct configuration in nginx, and I could even see /tmp/wsgi.sock in the folder with my own two eyes, but there was still "permission denied" or "directory does not exist":
In the file /lib/systemd/system/nginx.service, set PrivateTmp=false and restart nginx (do not forget systemctl daemon-reload to refresh the config); a drop-in alternative is sketched after this list.
Run command setenforce 0
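As an alternative to editing the packaged unit file directly, the same PrivateTmp change can be made with a systemd drop-in (a sketch, assuming a systemd-based distro):
sudo mkdir -p /etc/systemd/system/nginx.service.d
printf '[Service]\nPrivateTmp=false\n' | sudo tee /etc/systemd/system/nginx.service.d/override.conf
sudo systemctl daemon-reload && sudo systemctl restart nginx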
Bonus material:
/usr/local/bin/uwsgi --chdir /home/biohazard/myproject -s /tmp/wsgi.sock -w api:app --chmod-socket=777 --master --thunder-lock --http-processes=2 where, in api:app, api stands for /home/biohazard/myproject/api.py
And
location = /api { rewrite ^ /api/; }
location /api { try_files $uri @api; }
location @api {
    include uwsgi_params;
    uwsgi_pass unix:/tmp/wsgi.sock;
}
Where nginx is serving my http://example.org/api endpoint only
Check the user field on the first line of the nginx.conf file. By default it is www-data. Change it to user root in the nginx.conf file if you are logged in as root.
I was also getting the same issue while deploying Flask using nginx and Gunicorn.
I solved this issue by putting the .sock file in the /tmp folder.
There are many things that can cause this particular error; in my case it was the ownership of my PROJECT.sock file.
Instead of:
srwxr-xr-x 1 yourusername yourusername 0 Nov 4 22:32 PROJECT.sock
it should be:
srwxr-xr-x 1 www-data www-data 0 Nov 4 22:32 PROJECT.sock
Just run sudo chown www-data:www-data PROJECT.sock and that's it.