GitLab, Docker and sendmail ports - email

I have GitLab running through Docker using this image. In the image documentation there are instructions for configuring an optional SMTP server for emails, but little information on what happens if SMTP is not set up. The GitLab documentation indicates that sendmail is used by default, so I assume that is what happens, and for my purposes (a few private repositories with only a couple of users) I don't think I really need anything more than sendmail. I tried just ignoring the SMTP configuration and it all runs fine, but emails are not sent. I don't know enough about email servers or sendmail to know how to find the problem, but my guess is that some port it needs is blocked.
My questions:
Can anyone confirm that sendmail is used, and that I don't need to configure anything?
Is there some easy way to test sendmail locally to see if there are issues with blocked ports? All the guides I find start out with several pages of configuration details.
What ports would sendmail need open to work? Do I need to expose additional ports on the container or on my firewall?

Shuo's answer worked for me except I changed:
supervisord reload # restart the service
to
supervisorctl reload
Another approach is to build your own Docker image and update the production.rb environment file. Here's what your Dockerfile might look like.
FROM sameersbn/gitlab:7.14.0
MAINTAINER "leo.o'donnell@pearson.com"
# sed the production.rb environment file to use a configured email method converting
#
# config.action_mailer.delivery_method = :sendmail
#
# to
# config.action_mailer.delivery_method = (ENV['SMTP_DELIVERY_METHOD'] || :sendmail).to_sym
RUN sed -E -e "s/(action_mailer.delivery_method[^\:]+)([^ \t\#]+)(.*)/\1\(ENV\[\'SMTP_DELIVERY_METHOD\'\] \|\| \2\).to_sym\3/" -i config/environments/production.rb
Or you can just use my image:
docker pull leopoldodonnell/gitlab
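If you go the environment-variable route, running the image then looks something like this (a rough sketch: the SMTP_DELIVERY_METHOD variable comes from the sed above, and the image's other SMTP_* settings from its documentation still apply):
docker run -d --name gitlab -e SMTP_DELIVERY_METHOD=smtp leopoldodonnell/gitlab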

I met the same problem yesterday and upvoted your question. Now I have managed to make SMTP work instead of sendmail.
sudo docker exec -it gitlab /bin/bash # go into the container
vi /home/git/gitlab/gitlab/config/environments/production.rb # The path may not exactly match, but you can guess
now search for "email"; the delivery method is :sendmail, change it to :smtp
supervisord reload # restart the service
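If you'd rather script that edit than do it in vi, something along these lines should be equivalent (same caveat about the exact path as above):
sudo docker exec -it gitlab /bin/bash
# inside the container:
grep -n "delivery_method" /home/git/gitlab/gitlab/config/environments/production.rb
sed -i "s/delivery_method = :sendmail/delivery_method = :smtp/" /home/git/gitlab/gitlab/config/environments/production.rb
supervisorctl reload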


How to configure telnet service for yocto image

telnet is necessary in order to maintain compatibility with older software in this case. I'm working with the Yocto Rocko 2.4.2 distribution. When I try to telnet to the board I'm getting the oh-so-detailed message "connection refused".
Using the method here and the options here I modified the busybox configuration per suggestion. When the board is booted up and logged in, if you execute: telnet, it spits out usage info and a quick directory check shows that telnet is installed to /usr/bin/telnet. My guess is that the telnet client is installed but the telnet server is not running?
I need to get telnetd to start manually at least so I know it will work with an init script in place. The second reference link there suggests that 'telnetd will not be started automatically though...' and that there will need to be an init script. How can I start telnetd manually for testing?
systemctl enable telnetd
returns: Unit telnetd.service could not be found
UPDATE
telnetd is located in /usr/sbin/telnetd. I was able to manually start the telnetd service for testing from there. After manually starting the service, telnet login now works. I'm looking into writing a systemd init script to auto-start the telnetd service, so I suppose this issue is closed, unless anyone would like to offer up detailed telnet busybox configuration and setup steps as an answer to 'How to configure telnet service for yocto image'.
update
Perhaps there is something more? I created a unit file that looks like this:
[Unit]
Description=auto start telnetd
[Service]
ExecStart=/usr/sbin/telnetd
[Install]
WantedBy=multi-user.target
On reboot, systemd indicates the process executed and succeeded:
systemctl status telnetd
.
.
.
Process: 466 ExecStart=/usr/sbin/telnetd (code=exited, status=0/SUCCESS)
.
.
.
The service is not running, however: netstat -l does not list it and telnet login fails. Is there something I'm missing?
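One possibility worth checking (this is an assumption on my part, not something established above): busybox's telnetd forks itself into the background by default, so with a plain ExecStart line systemd watches the starting process, sees it exit immediately, reports status=0/SUCCESS, and considers the unit finished even though no daemon is left under its control. If your busybox build includes the -F (run in foreground) flag, a unit roughly like this sketch keeps telnetd attached to the service:
cat > /etc/systemd/system/telnetd.service <<'EOF'
[Unit]
Description=auto start telnetd
[Service]
ExecStart=/usr/sbin/telnetd -F
Restart=on-failure
[Install]
WantedBy=multi-user.target
EOF
systemctl daemon-reload
systemctl enable --now telnetd
After that, systemctl status telnetd should show the unit as active and netstat -l should list port 23.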
Last update... I think
So, following this post, I managed to get the telnet.socket service to start up on reboot.
systemctl status telnet.socket
shows that it is running and listening on port 23. Now, however, when I try to remote in with telnet I get
Connection closed by foreign host
Everything I've read so far has been talking about the xinetd service (which I do not have...). What is confusing is that, if I just navigate to /usr/sbin/ and execute telnetd, the server is up and running and I can telnet into the board, so I do not believe I'm missing any utilities or services (like the above-mentioned xinetd), but something is still not being configured correctly. Any ideas?

MongoDB: DNS issue of resolv.conf connecting to MongoDB

I want to export some data from MongoDB Atlas.
If I execute the command below, it tries to connect to localhost and export the data.
mongoexport --uri="mongodb+srv://<username>:<password>@name-of-project-x2lpw.mongodb.net/test" --collection users --out /tmp/testusers.json
Note: If you run this command from Windows CMD, it works fine
After researching the problem and with the help of a user, everything seems to point to a DNS problem and to the related resolv.conf file.
Below the original /etc/resolv.conf:
# This file is managed by man:systemd-resolved(8). Do not edit.
#
# This is a dynamic resolv.conf file for connecting local clients to the
# internal DNS stub resolver of systemd-resolved. This file lists all
# configured search domains.
#
# Run "systemd-resolve --status" to see details about the uplink DNS servers
# currently in use.
#
# Third party programs must not access this file directly, but only through the
# symlink at /etc/resolv.conf. To manage man:resolv.conf(5) in a different way,
# replace this symlink by a static file or a different symlink.
#
# See man:systemd-resolved.service(8) for details about the supported modes of
# operation for /etc/resolv.conf.
nameserver 127.0.0.53
options edns0
search name.com
At first, that resulted in a connection failure.
But if I change that address to the publicly available 1.1.1.1, as advised in this post, the connection succeeds:
# This file is managed by man:systemd-resolved(8). Do not edit.
#
# This is a dynamic resolv.conf file for connecting local clients to the
# internal DNS stub resolver of systemd-resolved. This file lists all
# configured search domains.
#
# Run "systemd-resolve --status" to see details about the uplink DNS servers
# currently in use.
#
# Third party programs must not access this file directly, but only through the
# symlink at /etc/resolv.conf. To manage man:resolv.conf(5) in a different way,
# replace this symlink by a static file or a different symlink.
#
# See man:systemd-resolved.service(8) for details about the supported modes of
# operation for /etc/resolv.conf.
nameserver 1.1.1.1
options edns0
search name.com
This resulted in a successful connection.
HOWEVER, the problem is that instead of explicitly connecting to the named MongoDB cluster, it reports connecting to localhost, which is very strange, because I did successfully export the files I was looking for from the real cluster.
In other words, the machine was connecting to the database correctly, but apparently via localhost.
Everything seems to point, according to this source and also here, to a DNS problem when connecting to MongoDB from the terminal to export collections.
Now, according to this last post, it is not advisable to change this address manually, for several reasons; therefore, right after successfully exporting the data using the 1.1.1.1 DNS server, I changed it back to the original 127.0.0.53.
However, this doesn't seem like proper behaviour, as every time I need to export data I would have to manually switch this address back and forth.
What could be the reason for this strange behavior? And therefore what could be a long term solution without manually switching between DNS addresses?
Thanks for pointing to the right direction for solving this issue.
It seems you already have the answer in the links you mentioned. I will summarize:
Install resolvconf (for Ubuntu, apt install resolvconf), add the line nameserver 8.8.8.8 to /etc/resolvconf/resolv.conf.d/base, then run sudo resolvconf -u and, to be sure, service resolvconf restart.
To verify run systemd-resolve --status.
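Put together as commands (Ubuntu, as above):
sudo apt install resolvconf
echo "nameserver 8.8.8.8" | sudo tee -a /etc/resolvconf/resolv.conf.d/base
sudo resolvconf -u
sudo service resolvconf restart
systemd-resolve --status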
You should see your DNS server on the first line, like here:
DNS Servers: 8.8.8.8
DNS Domain: sa-east-1.compute.internal
DNSSEC NTA: 10.in-addr.arpa
16.172.in-addr.arpa
This solution persists between reboots.

Bootstrapping issues in Chef

I have set up a basic infrastructure using Chef. This includes a local Chef server (Ubuntu based), a workstation and an Ubuntu based server (to be used as the node). Please note that the entire infrastructure lies behind the firewall in my office network, and I have made the necessary proxy settings for the servers to access the internet.
So here is the problem: when I try to bootstrap the node using
knife bootstrap <node's ip> --sudo -x <username> -P <password> -N "<name>"
I get the following error:
<node's ip> --2014-02-19 10:47:10-- https://www.opscode.com/chef/install.sh
<node's ip> Resolving www.opscode.com (www.opscode.com)... 184.106.28.91
<node's ip> Connecting to www.opscode.com (www.opscode.com)|184.106.28.91|:443... failed: Connection refused.
<node's ip> bash: line 83: chef-client: command not found
I was not able to find a solution to this. However, I came across the knife[:bootstrap_proxy] = "http://username:password@proxyIP:port/" setting that can be added to knife.rb. I did this (entering my office proxy details) and the connection during bootstrap was then successful, and the chef-client was downloaded on the node. However, this setting only defines the proxy that should be used by the node, so it led to http_proxy = "http://username:password@proxyIP:port/" being set in client.rb. But because I have already made all the proxy settings on my server, the chef-client failed to launch. So I manually removed the http_proxy and https_proxy settings from client.rb and ran chef-client, which then succeeded.
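For reference, that workaround amounts to roughly the following (a sketch; I'm assuming the usual file locations, ~/.chef/knife.rb on the workstation and /etc/chef/client.rb on the node):
echo 'knife[:bootstrap_proxy] = "http://username:password@proxyIP:port/"' >> ~/.chef/knife.rb
knife bootstrap <node's ip> --sudo -x <username> -P <password> -N "<name>"
# then, on the node, strip the proxy lines the bootstrap wrote into client.rb and re-run the client
sudo sed -i '/^http_proxy/d; /^https_proxy/d' /etc/chef/client.rb
sudo chef-client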
I have two questions -
1) Why did knife[:bootstrap_proxy] = "http://username:password@proxyIP:port/" work, given that it only defines the proxy that should be used by the node?
2) Also, all the proxy settings for the node have already been made. I do not want any proxy settings in client.rb. How do I achieve this?
Please help!
When it comes to your client.rb I'd suggest looking into https://github.com/opscode-cookbooks/chef-client
It's a wrapper script for client.rb(s).
Not sure about your knife[:bootstrap_proxy] though. Ideally that cookbook should take care of it. If you are still stumped you can run chef-client -VV and knife -VV to see exactly what it's doing.

What is a faster alternative to Python's http.server (or SimpleHTTPServer)?

Python's http.server (or SimpleHTTPServer for Python 2) is a great way to serve the contents of the current directory from the command line:
python -m http.server
However, as far as web servers go, it's very slooooow...
It behaves as though it's single threaded, and occasionally causes timeout errors when loading JavaScript AMD modules using RequireJS. It can take five to ten seconds to load a simple page with no images.
What's a faster alternative that is just as convenient?
http-server for node.js is very convenient, and is a lot faster than Python's SimpleHTTPServer. This is primarily because it uses asynchronous IO for concurrent handling of requests, instead of serialising requests.
Installation
Install node.js if you haven't already. Then use the node package manager (npm) to install the package, using the -g option to install globally. If you're on Windows you'll need a prompt with administrator permissions, and on Linux/OSX you'll want to sudo the command:
npm install http-server -g
This will download any required dependencies and install http-server.
Use
Now, from any directory, you can type:
http-server [path] [options]
Path is optional, defaulting to ./public if it exists, otherwise ./.
Options are [defaults]:
-p The port number to listen on [8080]
-a The host address to bind to [localhost]
-i Display directory index pages [True]
-s or --silent Silent mode won't log to the console
-h or --help Displays help message and exits
So to serve the current directory on port 8000, type:
http-server -p 8000
I recommend: Twisted (http://twistedmatrix.com)
an event-driven networking engine written in Python and licensed under the open source MIT license.
It's cross-platform and was preinstalled on OS X 10.5 to 10.12. Amongst other things you can start up a simple web server in the current directory with:
twistd -no web --path=.
Details
Explanation of Options (see twistd --help for more):
-n, --nodaemon don't daemonize, don't use default umask of 0077
-o, --no_save do not save state on shutdown
"web" is a Command that runs a simple web server on top of the Twisted async engine. It also accepts command line options (after the "web" command - see twistd web --help for more):
--path= <path> is either a specific file or a directory to be
set as the root of the web server. Use this if you
have a directory full of HTML, cgi, php3, epy, or rpy
files or any other files that you want to be served up
raw.
There are also a bunch of other commands such as:
conch A Conch SSH service.
dns A domain name server.
ftp An FTP server.
inetd An inetd(8) replacement.
mail An email service
... etc
Installation
Ubuntu
sudo apt-get install python-twisted-web (or python-twisted for the full engine)
Mac OS X (comes preinstalled on 10.5 - 10.12, or is available in MacPorts and through pip)
sudo port install py-twisted
Windows
installer available for download at http://twistedmatrix.com/
HTTPS
Twisted can also utilise security certificates to encrypt the connection. Use this with your existing --path and --port (for plain HTTP) options.
twistd -no web -c cert.pem -k privkey.pem --https=4433
Go 1.0 includes an HTTP server and utilities for serving files in a few lines of code.
package main

import (
    "fmt"
    "log"
    "net/http"
)

func main() {
    fmt.Println("Serving files in the current directory on port 8080")
    http.Handle("/", http.FileServer(http.Dir(".")))
    err := http.ListenAndServe(":8080", nil)
    if err != nil {
        log.Fatal("ListenAndServe: ", err)
    }
}
Run this source with go run myserver.go, or build an executable with go build myserver.go.
Try webfs: it's tiny and doesn't depend on having a platform like node.js or Python installed.
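For example, to serve the current directory on port 8000 (flags as I recall them from its man page, so treat them as an assumption and check webfsd --help):
webfsd -p 8000 -r .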
If you use Mercurial, you can use the built in HTTP server. In the folder you wish to serve up:
hg serve
From the docs:
export the repository via HTTP
Start a local HTTP repository browser and pull server.
By default, the server logs accesses to stdout and errors to
stderr. Use the "-A" and "-E" options to log to files.
options:
-A --accesslog name of access log file to write to
-d --daemon run server in background
--daemon-pipefds used internally by daemon mode
-E --errorlog name of error log file to write to
-p --port port to listen on (default: 8000)
-a --address address to listen on (default: all interfaces)
--prefix prefix path to serve from (default: server root)
-n --name name to show in web pages (default: working dir)
--webdir-conf name of the webdir config file (serve more than one repo)
--pid-file name of file to write process ID to
--stdio for remote clients
-t --templates web templates to use
--style template style to use
-6 --ipv6 use IPv6 in addition to IPv4
--certificate SSL certificate file
use "hg -v help serve" to show global options
Here's another. It's a Chrome Extension
Once installed you can run it by creating a new tab in Chrome and clicking the apps button near the top left
It has a simple GUI. Click choose folder, then click the http://127.0.0.1:8887 link
https://www.youtube.com/watch?v=AK6swHiPtew
I found python -m http.server unreliable; some responses would take seconds.
Now I use a server called Ran https://github.com/m3ng9i/ran
Ran: a simple static web server written in Go
Also consider devd, a small web server written in Go. Binaries for many platforms are available here.
devd -ol path/to/files/to/serve
It's small, fast, and provides some interesting optional features like live-reloading when your files change.
If you have PHP installed, you could use the built-in server.
php -S 0:8080
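If you want to serve a directory other than the current one, the built-in server also takes a document root with -t, for example:
php -S 0.0.0.0:8080 -t /path/to/docroot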
Give polpetta a try...
npm install -g polpetta
then you can
polpetta ~/folder
and you are ready to go :-)
Using Servez as a server
Download Servez
Install It, Run it
Choose the folder to serve
Pick "Start"
Go to http://localhost:8080 or pick "Launch Browser"
Note: I threw this together because Web Server for Chrome is going away (since Chrome is removing support for apps), and because I support art students who have zero experience with the command line.
Yet another Node-based simple command-line server:
https://github.com/greggman/servez-cli
Written partly in response to http-server having issues, particularly on Windows.
Installation
Install node.js then
npm install -g servez
Usage
servez [options] [path]
With no path it serves the current folder.
By default it serves index.html for folder paths if it exists, otherwise a directory listing. It also serves CORS headers. You can optionally turn on basic authentication with --username=somename --password=somepass, and you can serve https.
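For example, to serve a folder with basic auth turned on (using the flags mentioned above):
servez --username=somename --password=somepass path/to/folder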
I like live-server. It is fast and has a nice live reload feature, which is very convenient during development.
Usage is very simple:
cd ~/Sites/
live-server
By default it creates a server with IP 127.0.0.1 and port 8080.
http://127.0.0.1:8080/
If port 8080 is not free, it uses another port:
http://127.0.0.1:52749/
http://127.0.0.1:52858/
If you need to see the web server from other machines in your local network, you can check what your IP is and use:
live-server --host=192.168.1.121
And here is a script that automatically grabs the IP address of the default interface. It works on macOS only.
If you put it in .bash_profile, the live-server command will automatically launch the server with the correct IP.
# **
# Get IP address of default interface
# *
function getIPofDefaultInterface()
{
    local __resultvar=$1
    # Get default route interface
    if=$(route -n get 0.0.0.0 2>/dev/null | awk '/interface: / {print $2}')
    if [ -n "$if" ]; then
        # Get IP of the default route interface
        local __IP=$( ipconfig getifaddr $if )
        eval $__resultvar="'$__IP'"
    else
        # Echo "No default route found"
        eval $__resultvar="'0.0.0.0'"
    fi
}
alias getIP='getIPofDefaultInterface IP; echo $IP'
# **
# live-server
# https://www.npmjs.com/package/live-server
# *
alias live-server='getIPofDefaultInterface IP && live-server --host=$IP'
I've been using filebrowser for the past couple of years and it is the best alternative I have found.
Features I love about it:
Cross-platform: it supports Linux, macOS and Windows (+). It also supports Docker (+).
Downloading stuff is a breeze: it can automatically convert a folder into zip, tar.gz, etc. for transferring folders.
You can give file or folder access to every user.
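Getting started is a single command once you have the binary; roughly (the -r flag sets the folder to serve and -p the port; check filebrowser --help if your version differs):
filebrowser -r /path/to/your/files -p 8080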

How can I automate deployment through multiple ssh firewalls (using PW auth)?

I'm stuck in a bit of an annoying situation.
There's a chain of machines between my desktop and the production servers. Something like this:
desktop -> firewall 1 -> firewall 2 -> prod_box 1
                                    -> prod_box 2
                                    -> ...
I'm looking for a way to automate deployment to the prod boxes via ssh.
I'm aware there are a number of solutions in general, but my restrictions are:
No changes permitted to firewall 2
No config changes permitted to prod boxes (only content)
firewall 1 has a local user account for me
firewall 2 and prod are accessed as root
port 22 is the only open port between each link
So, in general the command sequence I do to deploy is:
scp archive.tar user@firewall1:archive.tar
ssh user@firewall1
scp archive.tar root@firewall2:/tmp/archive.tar
ssh root@firewall2
scp /tmp/archive.tar root@prod1:/tmp/archive.tar
ssh root@prod1
cd /var/www/
tar xvf /tmp/archive.tar
It's a bit more complex than that in reality, but that's a basic summary of the tasks to do.
I've put my ssh key in firewall1:/home/user/.ssh/authorized_keys, so that's no problem.
However, I can't do this for firewall2 or prod boxes.
It'd be great if I could run this (the commands above) from a shell script locally, type my password 4 times and be done with it. Sadly I cannot figure out how to do that.
I need some way to chain ssh commands. I've spent all afternoon trying to use Python to do this and eventually gave up because the ssh libraries don't seem to support password-entry-style login.
What can I do here?
There must be some kind of library I can use to:
login via ssh using either a key file OR a dynamically entered password
run remote shell commands through the chain of ssh tunnels
I'm not really sure what to tag this question, so I've just left it as ssh, deployment for now.
NB. It'd be great to use ssh tunnels and a deployment tool to push these changes out, but I'd still have to manually log in to each box to set up the tunnel, and that won't work anyway, because of the port blocking.
I am working on Net::OpenSSH::Gateway, an extension for my other Perl module Net::OpenSSH that does just that.
For instance:
use Net::OpenSSH;
use Net::OpenSSH::Gateway;

my $gateway = Net::OpenSSH::Gateway->find_gateway(
    proxies => ['ssh://user@firewall1',
                'ssh://password:root@firewall2'],
    backend => 'perl');

for my $host (@prod_hosts) {
    my $ssh = Net::OpenSSH->new($host, gateway => $gateway);
    if ($ssh->error) {
        warn "unable to connect to $host\n";
        next;
    }
    $ssh->scp_put($file_path, $destination)
        or warn "scp for $host failed\n";
}
It requires Perl available in both firewalls, but no write permissions or installing any additional software there.
Unfortunately this isn't possible to do as one shell script. I did try, but ssh's password negotiation requires an interactive terminal, which you don't get with chained ssh commands. You could do it with passwordless keys, but since that's highly insecure and you can't do it anyway, never mind.
The basic idea is that each server sends a bash script to the next one, which is then activated and sends the next one (and so on) until it reaches the last one, which does the distribution.
However, since this requires an interactive terminal at each stage, you're going to need to follow the payload down the chain manually executing each script as you go, somewhat as you do now but with less typing.
Obviously, you will need to customise them a bit, but try these scripts:
script1.sh
#!/bin/bash
user=doug
firewall1=firewall_1
# Minimise password entries across the board.
tar cf payload1.tar script3.sh archive.tar
tar cf payload2.tar script2.sh payload1.tar
scp payload2.tar ${user}@${firewall1}:payload2.tar
ssh ${user}@${firewall1} "tar xf payload2.tar;chmod +x script2.sh"
echo "Now connect to ${firewall1} and run ./script2.sh"
script2.sh
#!/bin/bash
user=root
firewall2=firewall_2
# Minimise password entries
scp payload1.tar ${user}@${firewall2}:/tmp/payload1.tar
ssh ${user}@${firewall2} "cd /tmp;tar xf payload1.tar;chmod +x script3.sh"
echo "Now connect to ${firewall2} and run /tmp/script3.sh"
script3.sh
#!/bin/bash
user=root
hosts="prod1 prod2 prod3 prod4"
for host in $hosts
do
echo scp archive.tar ${user}@${host}:/tmp/archive.tar
echo ssh ${user}@${host} "cd /var/www; tar xvf /tmp/archive.tar"
done
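Putting the three scripts together, a run looks roughly like this, following the payload down the chain by hand:
# on the desktop:
./script1.sh                  # packs the payloads and copies them to firewall 1
ssh user@firewall_1
# on firewall 1:
./script2.sh                  # copies payload1.tar on to firewall 2
ssh root@firewall_2
# on firewall 2:
/tmp/script3.sh               # prints the scp/ssh commands for each prod box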
It does require 3 password entries per firewall, which is a bit annoying, but such is life.
Does this do you any good?